Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. Random assignment minimizes several types of research bias, particularly sampling bias, survivorship bias, and attrition bias. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

Research question | Independent variable | Dependent variable
Phone use and sleep | Minutes of phone use before sleep | Hours of sleep per night
Temperature and soil respiration | Air temperature just above the soil surface | CO₂ respired from soil

Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.

Research question | Extraneous variable | How to control
Phone use and sleep | Natural variation in sleep patterns among individuals | Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group
Temperature and soil respiration | Soil moisture, which also affects respiration and can decrease with increasing temperature | Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Research question | Null hypothesis (H₀) | Alternate hypothesis (Hₐ)
Phone use and sleep | Phone use before sleep does not correlate with the amount of sleep a person gets. | Increasing phone use before sleep leads to a decrease in sleep.
Temperature and soil respiration | Air temperature does not correlate with soil respiration. | Increased air temperature leads to increased soil respiration.
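As a rough sketch of how such a hypothesis pair might eventually be tested, here is a simple permutation test on invented sleep data. The numbers, group sizes, and random seed are all hypothetical, not from this guide:

```python
import random
import statistics

# Hypothetical hours-of-sleep data, invented for illustration only.
no_phone = [8.1, 7.9, 8.4, 7.6, 8.2, 7.8, 8.0, 8.3]
high_phone = [7.2, 6.9, 7.5, 7.1, 6.8, 7.4, 7.0, 7.3]

def mean_diff(a, b):
    return statistics.mean(a) - statistics.mean(b)

observed = mean_diff(no_phone, high_phone)

# Under the null hypothesis (phone use does not affect sleep), group
# labels are exchangeable: shuffle them many times and count how often
# a mean difference at least as large arises by chance alone.
rng = random.Random(42)
pooled = no_phone + high_phone
n = len(no_phone)
extreme = 0
trials = 10_000
for _ in range(trials):
    rng.shuffle(pooled)
    if mean_diff(pooled[:n], pooled[n:]) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f} h, p = {p_value:.4f}")
```

A small p-value would lead you to reject the null hypothesis in favor of the alternate; with real data you would more likely reach for an established test (e.g. a t-test) rather than hand-roll one.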

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. In the soil respiration experiment, for example, you could warm the soil:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. In the phone use experiment, for example, you could treat phone use as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.
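The link between study size and statistical power can be sketched with a small Monte Carlo simulation. The effect size, the crude z-test on the mean difference, and the 1.96 cutoff are illustrative assumptions, not a full power analysis:

```python
import random
import statistics

def estimate_power(n, effect=0.5, sd=1.0, trials=2000, seed=1):
    """Monte Carlo estimate of statistical power: the probability that a
    two-group comparison with n subjects per group detects a true effect
    of the given size, using a crude z-test on the mean difference."""
    rng = random.Random(seed)
    se = sd * (2 / n) ** 0.5       # standard error of the mean difference
    cutoff = 1.96 * se             # two-sided 5% significance cutoff
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0, sd) for _ in range(n)]
        treatment = [rng.gauss(effect, sd) for _ in range(n)]
        diff = statistics.mean(treatment) - statistics.mean(control)
        if abs(diff) > cutoff:
            hits += 1
    return hits / trials

powers = {n: estimate_power(n) for n in (10, 30, 80)}
for n, p in powers.items():
    print(f"n = {n:3d} per group -> power ~ {p:.2f}")
```

The estimated power climbs steeply as the per-group sample size grows, which is exactly why study size matters.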

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells you what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design.
  • A between-subjects design vs a within-subjects design.

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design, every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

Research question | Completely randomized design | Randomized block design
Phone use and sleep | Subjects are all randomly assigned a level of phone use using a random number generator. | Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
Temperature and soil respiration | Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. | Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
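The two assignment schemes could be sketched like this. The subjects, the age blocks, and both helper functions are hypothetical, meant only to make the contrast concrete:

```python
import random

# Six hypothetical subjects with an age characteristic to block on.
subjects = [
    {"id": 1, "age": "teen"},  {"id": 2, "age": "teen"},
    {"id": 3, "age": "teen"},  {"id": 4, "age": "adult"},
    {"id": 5, "age": "adult"}, {"id": 6, "age": "adult"},
]
treatments = ["none", "low", "high"]

def completely_randomized(subjects, treatments, seed=0):
    """Assign every subject a treatment at random, keeping group sizes equal."""
    rng = random.Random(seed)
    labels = treatments * (len(subjects) // len(treatments))
    rng.shuffle(labels)
    return {s["id"]: t for s, t in zip(subjects, labels)}

def randomized_block(subjects, treatments, block_key, seed=0):
    """Group subjects by a shared characteristic, then randomize within blocks."""
    rng = random.Random(seed)
    blocks = {}
    for s in subjects:
        blocks.setdefault(s[block_key], []).append(s)
    assignment = {}
    for members in blocks.values():
        labels = list(treatments)
        rng.shuffle(labels)
        for s, t in zip(members, labels):
            assignment[s["id"]] = t
    return assignment

print("completely randomized:", completely_randomized(subjects, treatments))
print("randomized block:     ", randomized_block(subjects, treatments, "age"))
```

Note the design difference: the block version guarantees that every age group sees every treatment level, while the completely randomized version only balances treatments overall.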

Sometimes randomization isn’t practical or ethical, so researchers create partially random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Research question | Between-subjects (independent measures) design | Within-subjects (repeated measures) design
Phone use and sleep | Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. | Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomized.
Temperature and soil respiration | Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. | Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomized.
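Counterbalancing in a within-subjects design can be sketched by cycling subjects through the possible treatment orders. The treatment names and the helper function are hypothetical:

```python
import random
from itertools import permutations

treatments = ["none", "low", "high"]

def counterbalanced_orders(treatments, n_subjects, seed=0):
    """Give each subject a treatment order, cycling through every possible
    order so no single sequence dominates the experiment."""
    rng = random.Random(seed)
    orders = list(permutations(treatments))  # 3! = 6 possible orders
    rng.shuffle(orders)
    return [orders[i % len(orders)] for i in range(n_subjects)]

for subject, order in enumerate(counterbalanced_orders(treatments, 6), start=1):
    print(f"subject {subject}: {' -> '.join(order)}")
```

With six subjects and three treatments, every possible order is used exactly once, so any order effect is spread evenly across the sample.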

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

For example, to measure hours of sleep, you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.
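As a minimal sketch of operationalization, here is how self-reported bed and wake times might be turned into the measurable quantity "hours of sleep". The diary entries and the function are hypothetical:

```python
from datetime import datetime, timedelta

def hours_of_sleep(bedtime: str, wake_time: str) -> float:
    """Operationalize 'sleep' as the interval between self-reported
    bedtime and wake time, handling the rollover past midnight."""
    fmt = "%H:%M"
    start = datetime.strptime(bedtime, fmt)
    end = datetime.strptime(wake_time, fmt)
    if end <= start:                 # woke up the next calendar day
        end += timedelta(days=1)
    return (end - start).total_seconds() / 3600

# Hypothetical diary entries (bedtime, wake time), invented for illustration.
diary = [("23:30", "07:15"), ("00:45", "08:00"), ("22:50", "06:20")]
for bed, wake in diary:
    print(f"{bed} -> {wake}: {hours_of_sleep(bed, wake):.2f} h")
```

Recording times to the minute like this yields a continuous dependent variable, which supports a wider range of statistical analyses than a coarse category such as "slept well / slept badly".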

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment.

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

19+ Experimental Design Examples (Methods + Types)

Ever wondered how scientists discover new medicines, psychologists learn about behavior, or even how marketers figure out what kind of ads you like? Well, they all have something in common: they use a special plan or recipe called an "experimental design."

Imagine you're baking cookies. You can't just throw random amounts of flour, sugar, and chocolate chips into a bowl and hope for the best. You follow a recipe, right? Scientists and researchers do something similar. They follow a "recipe" called an experimental design to make sure their experiments are set up in a way that the answers they find are meaningful and reliable.

Experimental design is the roadmap researchers use to answer questions. It's a set of rules and steps that researchers follow to collect information, or "data," in a way that is fair, accurate, and makes sense.

Long ago, people didn't have detailed game plans for experiments. They often just tried things out and saw what happened. But over time, people got smarter about this. They started creating structured plans—what we now call experimental designs—to get clearer, more trustworthy answers to their questions.

In this article, we'll take you on a journey through the world of experimental designs. We'll talk about the different types, or "flavors," of experimental designs, where they're used, and even give you a peek into how they came to be.

What Is Experimental Design?

Alright, before we dive into the different types of experimental designs, let's get crystal clear on what experimental design actually is.

Imagine you're a detective trying to solve a mystery. You need clues, right? Well, in the world of research, experimental design is like the roadmap that helps you find those clues. It's like the game plan in sports or the blueprint when you're building a house. Just like you wouldn't start building without a good blueprint, researchers won't start their studies without a strong experimental design.

So, why do we need experimental design? Think about baking a cake. If you toss ingredients into a bowl without measuring, you'll end up with a mess instead of a tasty dessert.

Similarly, in research, if you don't have a solid plan, you might get confusing or incorrect results. A good experimental design helps you ask the right questions ( think critically ), decide what to measure ( come up with an idea ), and figure out how to measure it (test it). It also helps you consider things that might mess up your results, like outside influences you hadn't thought of.

For example, let's say you want to find out if listening to music helps people focus better. Your experimental design would help you decide things like: Who are you going to test? What kind of music will you use? How will you measure focus? And, importantly, how will you make sure that it's really the music affecting focus and not something else, like the time of day or whether someone had a good breakfast?

In short, experimental design is the master plan that guides researchers through the process of collecting data, so they can answer questions in the most reliable way possible. It's like the GPS for the journey of discovery!

History of Experimental Design

Around 350 BCE, people like Aristotle were trying to figure out how the world works, but they mostly just thought really hard about things. They didn't test their ideas much. So while they were super smart, their methods weren't always the best for finding out the truth.

Fast forward to the Renaissance (14th to 17th centuries), a time of big changes and lots of curiosity. People like Galileo started to experiment by actually doing tests, like rolling balls down inclined planes to study motion. Galileo's work was cool because he combined thinking with doing. He'd have an idea, test it, look at the results, and then think some more. This approach was a lot more reliable than just sitting around and thinking.

Now, let's zoom ahead to the 18th and 19th centuries. This is when people like Francis Galton, an English polymath, started to get really systematic about experimentation. Galton was obsessed with measuring things. Seriously, he even tried to measure how good-looking people were ! His work helped create the foundations for a more organized approach to experiments.

Next stop: the early 20th century. Enter Ronald A. Fisher , a brilliant British statistician. Fisher was a game-changer. He came up with ideas that are like the bread and butter of modern experimental design.

Fisher invented the concept of the " control group "—that's a group of people or things that don't get the treatment you're testing, so you can compare them to those who do. He also stressed the importance of " randomization ," which means assigning people or things to different groups by chance, like drawing names out of a hat. This makes sure the experiment is fair and the results are trustworthy.

Around the same time, American psychologists like John B. Watson and B.F. Skinner were developing " behaviorism ." They focused on studying things that they could directly observe and measure, like actions and reactions.

Skinner even built boxes—called Skinner Boxes—to test how animals like pigeons and rats learn. Their work helped shape how psychologists design experiments today. Watson performed a very controversial experiment called the Little Albert experiment that helped describe behavior through conditioning—in other words, how people learn to behave the way they do.

In the later part of the 20th century and into our time, computers have totally shaken things up. Researchers now use super powerful software to help design their experiments and crunch the numbers.

With computers, they can simulate complex experiments before they even start, which helps them predict what might happen. This is especially helpful in fields like medicine, where getting things right can be a matter of life and death.

Also, did you know that experimental designs aren't just for scientists in labs? They're used by people in all sorts of jobs, like marketing, education, and even video game design! Yes, someone probably ran an experiment to figure out what makes a game super fun to play.

So there you have it—a quick tour through the history of experimental design, from Aristotle's deep thoughts to Fisher's groundbreaking ideas, and all the way to today's computer-powered research. These designs are the recipes that help people from all walks of life find answers to their big questions.

Key Terms in Experimental Design

Before we dig into the different types of experimental designs, let's get comfy with some key terms. Understanding these terms will make it easier for us to explore the various types of experimental designs that researchers use to answer their big questions.

Independent Variable : This is what you change or control in your experiment to see what effect it has. Think of it as the "cause" in a cause-and-effect relationship. For example, if you're studying whether different types of music help people focus, the kind of music is the independent variable.

Dependent Variable : This is what you're measuring to see the effect of your independent variable. In our music and focus experiment, how well people focus is the dependent variable—it's what "depends" on the kind of music played.

Control Group : This is a group of people who don't get the special treatment or change you're testing. They help you see what happens when the independent variable is not applied. If you're testing whether a new medicine works, the control group would take a fake pill, called a placebo , instead of the real medicine.

Experimental Group : This is the group that gets the special treatment or change you're interested in. Going back to our medicine example, this group would get the actual medicine to see if it has any effect.

Randomization : This is like shaking things up in a fair way. You randomly put people into the control or experimental group so that each group is a good mix of different kinds of people. This helps make the results more reliable.

Sample : This is the group of people you're studying. They're a "sample" of a larger group that you're interested in. For instance, if you want to know how teenagers feel about a new video game, you might study a sample of 100 teenagers.

Bias : This is anything that might tilt your experiment one way or another without you realizing it. Like if you're testing a new kind of dog food and you only test it on poodles, that could create a bias because maybe poodles just really like that food and other breeds don't.

Data : This is the information you collect during the experiment. It's like the treasure you find on your journey of discovery!

Replication : This means doing the experiment more than once to make sure your findings hold up. It's like double-checking your answers on a test.

Hypothesis : This is your educated guess about what will happen in the experiment. It's like predicting the end of a movie based on the first half.

Steps of Experimental Design

Alright, let's say you're all fired up and ready to run your own experiment. Cool! But where do you start? Well, designing an experiment is a bit like planning a road trip. There are some key steps you've got to take to make sure you reach your destination. Let's break it down:

  • Ask a Question : Before you hit the road, you've got to know where you're going. Same with experiments. You start with a question you want to answer, like "Does eating breakfast really make you do better in school?"
  • Do Some Homework : Before you pack your bags, you look up the best places to visit, right? In science, this means reading up on what other people have already discovered about your topic.
  • Form a Hypothesis : This is your educated guess about what you think will happen. It's like saying, "I bet this route will get us there faster."
  • Plan the Details : Now you decide what kind of car you're driving (your experimental design), who's coming with you (your sample), and what snacks to bring (your variables).
  • Randomization : Remember, this is like shuffling a deck of cards. You want to mix up who goes into your control and experimental groups to make sure it's a fair test.
  • Run the Experiment : Finally, the rubber hits the road! You carry out your plan, making sure to collect your data carefully.
  • Analyze the Data : Once the trip's over, you look at your photos and decide which ones are keepers. In science, this means looking at your data to see what it tells you.
  • Draw Conclusions : Based on your data, did you find an answer to your question? This is like saying, "Yep, that route was faster," or "Nope, we hit a ton of traffic."
  • Share Your Findings : After a great trip, you want to tell everyone about it, right? Scientists do the same by publishing their results so others can learn from them.
  • Do It Again? : Sometimes one road trip just isn't enough. In the same way, scientists often repeat their experiments to make sure their findings are solid.

So there you have it! Those are the basic steps you need to follow when you're designing an experiment. Each step helps make sure that you're setting up a fair and reliable way to find answers to your big questions.

Let's get into examples of experimental designs.

1) True Experimental Design

In the world of experiments, the True Experimental Design is like the superstar quarterback everyone talks about. Born out of the early 20th-century work of statisticians like Ronald A. Fisher, this design is all about control, precision, and reliability.

Researchers carefully pick an independent variable to manipulate (remember, that's the thing they're changing on purpose) and measure the dependent variable (the effect they're studying). Then comes the magic trick—randomization. By randomly putting participants into either the control or experimental group, scientists make sure their experiment is as fair as possible.

No sneaky biases here!

True Experimental Design Pros

The pros of True Experimental Design are like the perks of a VIP ticket at a concert: you get the best and most trustworthy results. Because everything is controlled and randomized, you can feel pretty confident that the results aren't just a fluke.

True Experimental Design Cons

However, there's a catch. Sometimes, it's really tough to set up these experiments in a real-world situation. Imagine trying to control every single detail of your day, from the food you eat to the air you breathe. Not so easy, right?

True Experimental Design Uses

The fields that get the most out of True Experimental Designs are those that need super reliable results, like medical research.

When scientists were developing COVID-19 vaccines, they used this design to run clinical trials. They had control groups that received a placebo (a harmless substance with no effect) and experimental groups that got the actual vaccine. Then they measured how many people in each group got sick. By comparing the two, they could say, "Yep, this vaccine works!"

So next time you read about a groundbreaking discovery in medicine or technology, chances are a True Experimental Design was the VIP behind the scenes, making sure everything was on point. It's been the go-to for rigorous scientific inquiry for nearly a century, and it's not stepping off the stage anytime soon.

2) Quasi-Experimental Design

So, let's talk about the Quasi-Experimental Design. Think of this one as the cool cousin of True Experimental Design. It wants to be just like its famous relative, but it's a bit more laid-back and flexible. You'll find quasi-experimental designs when it's tricky to set up a full-blown True Experimental Design with all the bells and whistles.

Quasi-experiments still play with an independent variable, just like their stricter cousins. The big difference? They don't use randomization. It's like wanting to divide a bag of jelly beans equally between your friends, but you can't quite do it perfectly.

In real life, it's often not possible or ethical to randomly assign people to different groups, especially when dealing with sensitive topics like education or social issues. And that's where quasi-experiments come in.

Quasi-Experimental Design Pros

Even though they lack full randomization, quasi-experimental designs are like the Swiss Army knives of research: versatile and practical. They're especially popular in fields like education, sociology, and public policy.

For instance, when researchers wanted to figure out if the Head Start program , aimed at giving young kids a "head start" in school, was effective, they used a quasi-experimental design. They couldn't randomly assign kids to go or not go to preschool, but they could compare kids who did with kids who didn't.

Quasi-Experimental Design Cons

Of course, quasi-experiments come with their own bag of pros and cons. On the plus side, they're easier to set up and often cheaper than true experiments. But the flip side is that they're not as rock-solid in their conclusions. Because the groups aren't randomly assigned, there's always that little voice saying, "Hey, are we missing something here?"

Quasi-Experimental Design Uses

Quasi-Experimental Design gained traction in the mid-20th century. Researchers were grappling with real-world problems that didn't fit neatly into a laboratory setting. Plus, as society became more aware of ethical considerations, the need for flexible designs increased. So, the quasi-experimental approach was like a breath of fresh air for scientists wanting to study complex issues without a laundry list of restrictions.

In short, if True Experimental Design is the superstar quarterback, Quasi-Experimental Design is the versatile player who can adapt and still make significant contributions to the game.

3) Pre-Experimental Design

Now, let's talk about the Pre-Experimental Design. Imagine it as the beginner's skateboard you get before you try out for all the cool tricks. It has wheels, it rolls, but it's not built for the professional skatepark.

Similarly, pre-experimental designs give researchers a starting point. They let you dip your toes in the water of scientific research without diving in head-first.

So, what's the deal with pre-experimental designs?

Pre-Experimental Designs are the basic, no-frills versions of experiments. Researchers still mess around with an independent variable and measure a dependent variable, but they skip over the whole randomization thing and often don't even have a control group.

It's like baking a cake but forgetting the frosting and sprinkles; you'll get some results, but they might not be as complete or reliable as you'd like.

Pre-Experimental Design Pros

Why use such a simple setup? Because sometimes, you just need to get the ball rolling. Pre-experimental designs are great for quick-and-dirty research when you're short on time or resources. They give you a rough idea of what's happening, which you can use to plan more detailed studies later.

A good example of this is early studies on the effects of screen time on kids. Researchers couldn't control every aspect of a child's life, but they could easily ask parents to track how much time their kids spent in front of screens and then look for trends in behavior or school performance.

Pre-Experimental Design Cons

But here's the catch: pre-experimental designs are like that first draft of an essay. It helps you get your ideas down, but you wouldn't want to turn it in for a grade. Because these designs lack the rigorous structure of true or quasi-experimental setups, they can't give you rock-solid conclusions. They're more like clues or signposts pointing you in a certain direction.

Pre-Experimental Design Uses

This type of design became popular in the early stages of various scientific fields. Researchers used them to scratch the surface of a topic, generate some initial data, and then decide if it's worth exploring further. In other words, pre-experimental designs were the stepping stones that led to more complex, thorough investigations.

So, while Pre-Experimental Design may not be the star player on the team, it's like the practice squad that helps everyone get better. It's the starting point that can lead to bigger and better things.

4) Factorial Design

Now, buckle up, because we're moving into the world of Factorial Design, the multi-tasker of the experimental universe.

Imagine juggling not just one, but multiple balls in the air—that's what researchers do in a factorial design.

In Factorial Design, researchers are not satisfied with just studying one independent variable. Nope, they want to study two or more at the same time to see how they interact.

It's like cooking with several spices to see how they blend together to create unique flavors.

Factorial Design became the talk of the town with the rise of computers. Why? Because this design produces a lot of data, and computers are the number crunchers that help make sense of it all. So, thanks to our silicon friends, researchers can study complicated questions like, "How do diet AND exercise together affect weight loss?" instead of looking at just one of those factors.

Factorial Design Pros

This design's main selling point is its ability to explore interactions between variables. For instance, maybe a new study drug works really well for young people but not so great for older adults. A factorial design could reveal that age is a crucial factor, something you might miss if you only studied the drug's effectiveness in general. It's like being a detective who looks for clues not just in one room but throughout the entire house.
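To make interactions concrete, here's a minimal Python sketch with entirely made-up numbers (the drug/age example above), showing how a 2x2 factorial layout separates a main effect from an interaction:

```python
# A 2x2 factorial sketch with invented numbers:
# factor A = drug (placebo vs. drug), factor B = age group (young vs. older).
# Each cell holds the mean improvement score for that combination.
cell_means = {
    ("placebo", "young"): 2.0,
    ("placebo", "older"): 2.0,
    ("drug", "young"): 8.0,
    ("drug", "older"): 3.0,
}

# Main effect of the drug: its average benefit across both age groups.
drug_effect = (
    (cell_means[("drug", "young")] + cell_means[("drug", "older")]) / 2
    - (cell_means[("placebo", "young")] + cell_means[("placebo", "older")]) / 2
)

# Interaction: does the drug's effect differ between age groups?
effect_young = cell_means[("drug", "young")] - cell_means[("placebo", "young")]
effect_older = cell_means[("drug", "older")] - cell_means[("placebo", "older")]
interaction = effect_young - effect_older

print(drug_effect)   # 3.5 -> the drug helps on average
print(interaction)   # 5.0 -> but mostly for the young group
```

Here the drug looks helpful on average, but the large interaction value shows the benefit is concentrated in the younger group, exactly the kind of pattern a single-factor study would miss.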

Factorial Design Cons

However, factorial designs have their own bag of challenges. First off, they can be pretty complicated to set up and run. Imagine coordinating a four-way intersection with lots of cars coming from all directions—you've got to make sure everything runs smoothly, or you'll end up with a traffic jam. Similarly, researchers need to carefully plan how they'll measure and analyze all the different variables.

Factorial Design Uses

Factorial designs are widely used in psychology to untangle the web of factors that influence human behavior. They're also popular in fields like marketing, where companies want to understand how different aspects like price, packaging, and advertising influence a product's success.

And speaking of success, the factorial design has been a hit since statisticians like Ronald A. Fisher (yep, him again!) expanded on it in the early-to-mid 20th century. It offered a more nuanced way of understanding the world, proving that sometimes, to get the full picture, you've got to juggle more than one ball at a time.

So, if True Experimental Design is the quarterback and Quasi-Experimental Design is the versatile player, Factorial Design is the strategist who sees the entire game board and makes moves accordingly.

5) Longitudinal Design


Alright, let's take a step into the world of Longitudinal Design. Picture it as the grand storyteller, the kind who doesn't just tell you about a single event but spins an epic tale that stretches over years or even decades. This design isn't about quick snapshots; it's about capturing the whole movie of someone's life or a long-running process.

You know how you might take a photo every year on your birthday to see how you've changed? Longitudinal Design is kind of like that, but for scientific research.

With Longitudinal Design, instead of measuring something just once, researchers come back again and again, sometimes over many years, to see how things are going. This helps them understand not just what's happening, but why it's happening and how it changes over time.

This design really started to shine in the latter half of the 20th century, when researchers began to realize that some questions can't be answered in a hurry. Think about studies that look at how kids grow up, or research on how a certain medicine affects you over a long period. These aren't things you can rush.

The famous Framingham Heart Study, started in 1948, is a prime example. It's been studying heart health in a small town in Massachusetts for decades, and the findings have shaped what we know about heart disease.

Longitudinal Design Pros

So, what's to love about Longitudinal Design? First off, it's the go-to for studying change over time, whether that's how people age or how a forest recovers from a fire.

Longitudinal Design Cons

But it's not all sunshine and rainbows. Longitudinal studies take a lot of patience and resources. Plus, keeping track of participants over many years can be like herding cats—difficult and full of surprises.

Longitudinal Design Uses

Despite these challenges, longitudinal studies have been key in fields like psychology, sociology, and medicine. They provide the kind of deep, long-term insights that other designs just can't match.

So, if the True Experimental Design is the superstar quarterback, and the Quasi-Experimental Design is the flexible athlete, then the Factorial Design is the strategist, and the Longitudinal Design is the wise elder who has seen it all and has stories to tell.

6) Cross-Sectional Design

Now, let's flip the script and talk about Cross-Sectional Design, the polar opposite of the Longitudinal Design. If Longitudinal is the grand storyteller, think of Cross-Sectional as the snapshot photographer. It captures a single moment in time, like a selfie that you take to remember a fun day. Researchers using this design collect all their data at one point, providing a kind of "snapshot" of whatever they're studying.

In a Cross-Sectional Design, researchers look at multiple groups all at the same time to see how they're different or similar.

This design rose to popularity in the mid-20th century, mainly because it's so quick and efficient. Imagine wanting to know how people of different ages feel about a new video game. Instead of waiting for years to see how opinions change, you could just ask people of all ages what they think right now. That's Cross-Sectional Design for you—fast and straightforward.

You'll find this type of research everywhere from marketing studies to healthcare. For instance, you might have heard about surveys asking people what they think about a new product or political issue. Those are usually cross-sectional studies, aimed at getting a quick read on public opinion.

Cross-Sectional Design Pros

So, what's the big deal with Cross-Sectional Design? Well, it's the go-to when you need answers fast and don't have the time or resources for a more complicated setup.

Cross-Sectional Design Cons

Remember, speed comes with trade-offs. While you get your results quickly, those results are stuck in time. They can't tell you how things change or why they're changing, just what's happening right now.

Cross-Sectional Design Uses

Because they're so quick and simple, cross-sectional studies often serve as the first step in research. They give scientists an idea of what's going on so they can decide if it's worth digging deeper. In that way, they're a bit like a movie trailer, giving you a taste of the action to see if you're interested in seeing the whole film.

So, in our lineup of experimental designs, if True Experimental Design is the superstar quarterback and Longitudinal Design is the wise elder, then Cross-Sectional Design is like the speedy running back—fast, agile, but not designed for long, drawn-out plays.

7) Correlational Design

Next on our roster is the Correlational Design, the keen observer of the experimental world. Imagine this design as the person at a party who loves people-watching. They don't interfere or get involved; they just observe and take mental notes about what's going on.

In a correlational study, researchers don't change or control anything; they simply observe and measure how two variables relate to each other.

The correlational design has roots in the early days of psychology and sociology. Pioneers like Sir Francis Galton used it to study how qualities like intelligence or height could be related within families.

This design is all about asking, "Hey, when this thing happens, does that other thing usually happen too?" For example, researchers might study whether students who have more study time get better grades or whether people who exercise more have lower stress levels.

One of the most famous correlational studies you might have heard of is the link between smoking and lung cancer. Back in the mid-20th century, researchers started noticing that people who smoked a lot also seemed to get lung cancer more often. They couldn't say smoking caused cancer—that would require a true experiment—but the strong correlation was a red flag that led to more research and eventually, health warnings.

Correlational Design Pros

This design is great at showing that two (or more) things tend to move together. Correlational designs can signal that more detailed research is needed on a topic. They can reveal patterns or possible causes that we otherwise might not have noticed.

Correlational Design Cons

But here's where you need to be careful: correlational designs can be tricky. Just because two things are related doesn't mean one causes the other. That's like saying, "Every time I wear my lucky socks, my team wins." Well, it's a fun thought, but those socks aren't really controlling the game.

Correlational Design Uses

Despite this limitation, correlational designs are popular in psychology, economics, and epidemiology, to name a few fields. They're often the first step in exploring a possible relationship between variables. Once a strong correlation is found, researchers may decide to conduct more rigorous experimental studies to examine cause and effect.

So, if the True Experimental Design is the superstar quarterback and the Longitudinal Design is the wise elder, the Factorial Design is the strategist, and the Cross-Sectional Design is the speedster, then the Correlational Design is the clever scout, identifying interesting patterns but leaving the heavy lifting of proving cause and effect to the other types of designs.

8) Meta-Analysis

Last but not least, let's talk about Meta-Analysis, the librarian of experimental designs.

If other designs are all about creating new research, Meta-Analysis is about gathering up everyone else's research, sorting it, and figuring out what it all means when you put it together.

Imagine a jigsaw puzzle where each piece is a different study. Meta-Analysis is the process of fitting all those pieces together to see the big picture.

The concept of Meta-Analysis started to take shape in the late 20th century, when computers became powerful enough to handle massive amounts of data. It was like someone handed researchers a super-powered magnifying glass, letting them examine multiple studies at the same time to find common trends or results.

You might have heard of the Cochrane Reviews in healthcare. These are big collections of meta-analyses that help doctors and policymakers figure out what treatments work best based on all the research that's been done.

For example, if ten different studies show that a certain medicine helps lower blood pressure, a meta-analysis would pull all that information together to give a more accurate answer.
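Here's a toy sketch of the simplest version of that pooling, a fixed-effect, inverse-variance weighted average, with invented blood-pressure numbers (real meta-analyses typically use more elaborate models, such as random-effects):

```python
# Hypothetical sketch: each study reports an effect estimate (change in
# blood pressure, mmHg) and its standard error.
studies = [
    {"effect": -5.0, "se": 2.0},
    {"effect": -7.0, "se": 3.0},
    {"effect": -4.0, "se": 1.5},
]

# Inverse-variance weighting: more precise studies count for more.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(round(pooled, 2), round(pooled_se, 2))
```

Notice that the pooled standard error is smaller than any single study's: combining studies sharpens the answer, which is the whole point of a meta-analysis.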

Meta-Analysis Pros

The beauty of Meta-Analysis is that it can provide really strong evidence. Instead of relying on one study, you're looking at the whole landscape of research on a topic.

Meta-Analysis Cons

However, it does have some downsides. For one, Meta-Analysis is only as good as the studies it includes. If those studies are flawed, the meta-analysis will be too. It's like baking a cake: if you use bad ingredients, it doesn't matter how good your recipe is—the cake won't turn out well.

Meta-Analysis Uses

Despite these challenges, meta-analyses are highly respected and widely used in many fields like medicine, psychology, and education. They help us make sense of a world that's bursting with information by showing us the big picture drawn from many smaller snapshots.

So, in our all-star lineup, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, the Factorial Design is the strategist, the Cross-Sectional Design is the speedster, and the Correlational Design is the scout, then the Meta-Analysis is like the coach, using insights from everyone else's plays to come up with the best game plan.

9) Non-Experimental Design

Now, let's talk about a player who's a bit of an outsider on this team of experimental designs—the Non-Experimental Design. Think of this design as the commentator or the journalist who covers the game but doesn't actually play.

In a Non-Experimental Design, researchers are like reporters gathering facts, but they don't interfere or change anything. They're simply there to describe and analyze.

Non-Experimental Design Pros

So, what's the deal with Non-Experimental Design? Its strength is in description and exploration. It's really good for studying things as they are in the real world, without changing any conditions.

Non-Experimental Design Cons

Because a non-experimental design doesn't manipulate variables, it can't prove cause and effect. It's like a weather reporter: they can tell you it's raining, but they can't tell you why it's raining.

The downside? Since researchers aren't controlling variables, it's hard to rule out other explanations for what they observe. It's like hearing one side of a story—you get an idea of what happened, but it might not be the complete picture.

Non-Experimental Design Uses

Non-Experimental Design has always been a part of research, especially in fields like anthropology, sociology, and some areas of psychology.

For instance, if you've ever heard of studies that describe how people behave in different cultures or what teens like to do in their free time, that's often Non-Experimental Design at work. These studies aim to capture the essence of a situation, like painting a portrait instead of taking a snapshot.

One well-known example you might have heard about is the Kinsey Reports from the 1940s and 1950s, which described sexual behavior in men and women. Researchers interviewed thousands of people but didn't manipulate any variables like you would in a true experiment. They simply collected data to create a comprehensive picture of the subject matter.

So, in our metaphorical team of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, and Meta-Analysis is the coach, then Non-Experimental Design is the sports journalist—always present, capturing the game, but not part of the action itself.

10) Repeated Measures Design


Time to meet the Repeated Measures Design, the time traveler of our research team. If this design were a player in a sports game, it would be the one who keeps revisiting past plays to figure out how to improve the next one.

Repeated Measures Design is all about studying the same people or subjects multiple times to see how they change or react under different conditions.

The idea behind Repeated Measures Design isn't new; it's been around since the early days of psychology and medicine. You could say it's a cousin to the Longitudinal Design, but instead of looking at how things naturally change over time, it focuses on how the same group reacts to different things.

Imagine a study looking at how a new energy drink affects people's running speed. Instead of comparing one group that drank the energy drink to another group that didn't, a Repeated Measures Design would have the same group of people run multiple times—once with the energy drink, and once without. This way, you're really zeroing in on the effect of that energy drink, making the results more reliable.
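That within-person comparison can be sketched in a few lines of Python with made-up running times:

```python
import statistics

# Hypothetical sketch: the same five runners timed without and with the drink.
times_plain = [60.0, 55.0, 62.0, 58.0, 64.0]   # seconds
times_drink = [58.5, 54.0, 60.5, 57.0, 62.0]

# Because the same people run both times, we compare each runner to themselves.
diffs = [plain - drink for plain, drink in zip(times_plain, times_drink)]
mean_improvement = statistics.mean(diffs)
print(round(mean_improvement, 2))  # 1.4 seconds saved per runner, on average
```

Comparing each runner to themselves strips out between-person differences (some people are just faster), which is why the same-subjects approach is so sensitive.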

Repeated Measures Design Pros

The strong point of Repeated Measures Design is that it's super focused. Because it uses the same subjects, you don't have to worry about differences between groups messing up your results.

Repeated Measures Design Cons

But the downside? Well, people can get tired or bored if they're tested too many times, which might affect how they respond.

Repeated Measures Design Uses

A famous example of this design is the "Little Albert" experiment, conducted by John B. Watson and Rosalie Rayner in 1920. In this study, a young boy was exposed to a white rat and other stimuli several times to see how his emotional responses changed. Though the ethical standards of this experiment are often criticized today, it was groundbreaking in understanding conditioned emotional responses.

In our metaphorical lineup of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, and Non-Experimental Design is the journalist, then Repeated Measures Design is the time traveler—always looping back to fine-tune the game plan.

11) Crossover Design

Next up is Crossover Design, the switch-hitter of the research world. If you're familiar with baseball, you'll know a switch-hitter is someone who can bat both right-handed and left-handed.

In a similar way, Crossover Design allows subjects to experience multiple conditions, flipping them around so that everyone gets a turn in each role.

This design is like the utility player on our team—versatile, flexible, and really good at adapting.

The Crossover Design has its roots in medical research and has been popular since the mid-20th century. It's often used in clinical trials to test the effectiveness of different treatments.

Crossover Design Pros

The neat thing about this design is that it allows each participant to serve as their own control group. Imagine you're testing two new kinds of headache medicine. Instead of giving one type to one group and another type to a different group, you'd give both kinds to the same people but at different times. That's the design's major strength: it reduces the "noise" that comes from individual differences, and since each person experiences all conditions, it's easier to see real effects.

Crossover Design Cons

There's a catch, though. This design assumes that there's no lasting effect from the first condition when you switch to the second one. That might not always be true. If the first treatment has a long-lasting effect, it could mess up the results when you switch to the second treatment, which is why researchers often schedule a "washout" period between conditions.

Crossover Design Uses

A well-known example of Crossover Design is in studies that look at the effects of different types of diets—like low-carb vs. low-fat diets. Researchers might have participants follow a low-carb diet for a few weeks, then switch them to a low-fat diet. By doing this, they can more accurately measure how each diet affects the same group of people.

In our team of experimental designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, and Repeated Measures Design is the time traveler, then Crossover Design is the versatile utility player—always ready to adapt and play multiple roles to get the most accurate results.

12) Cluster Randomized Design

Meet the Cluster Randomized Design, the team captain of group-focused research. In our imaginary lineup of experimental designs, if other designs focus on individual players, then Cluster Randomized Design is looking at how the entire team functions.

This approach is especially common in educational and community-based research, and it's been gaining traction since the late 20th century.

Here's how Cluster Randomized Design works: Instead of assigning individual people to different conditions, researchers assign entire groups, or "clusters." These could be schools, neighborhoods, or even entire towns. This helps you see how an intervention works in a real-world setting.

Imagine you want to see if a new anti-bullying program really works. Instead of selecting individual students, you'd introduce the program to a whole school or maybe even several schools, and then compare the results to schools without the program.

Cluster Randomized Design Pros

Why use Cluster Randomized Design? Well, sometimes it's just not practical to assign conditions at the individual level. For example, you can't really have half a school following a new reading program while the other half sticks with the old one; that would be way too confusing! Cluster Randomization gets around this problem by randomly assigning whole groups, rather than individuals, to each condition.
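The group-level assignment can be sketched in a few lines of Python, using invented school names as the clusters:

```python
import random

# Hypothetical sketch: randomly assign whole schools (clusters), not students.
schools = ["Oakwood", "Riverside", "Hillcrest", "Maplewood", "Lakeview", "Sunnydale"]

rng = random.Random(42)          # fixed seed so the assignment is reproducible
shuffled = schools[:]
rng.shuffle(shuffled)

program_schools = shuffled[: len(shuffled) // 2]   # get the anti-bullying program
control_schools = shuffled[len(shuffled) // 2 :]   # continue as usual

print(program_schools)
print(control_schools)
```

Every student in a "program" school gets the program; the randomness lives at the school level, not the student level.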

Cluster Randomized Design Cons

There's a downside, too. Because entire groups are assigned to each condition, there's a risk that the groups might be different in some important way that the researchers didn't account for. That's like having one sports team that's full of veterans playing against a team of rookies; the match wouldn't be fair.

Cluster Randomized Design Uses

A famous example is the research conducted to test the effectiveness of different public health interventions, like vaccination programs. Researchers might roll out a vaccination program in one community but not in another, then compare the rates of disease in both.

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, and Crossover Design is the utility player, then Cluster Randomized Design is the team captain—always looking out for the group as a whole.

13) Mixed-Methods Design

Say hello to Mixed-Methods Design, the all-rounder or the "Renaissance player" of our research team.

Mixed-Methods Design uses a blend of both qualitative and quantitative methods to get a more complete picture, just like a Renaissance person who's good at lots of different things. It's like being good at both offense and defense in a sport; you've got all your bases covered!

Mixed-Methods Design is a fairly new kid on the block, becoming more popular in the late 20th and early 21st centuries as researchers began to see the value in using multiple approaches to tackle complex questions. It's the Swiss Army knife in our research toolkit, combining the best parts of other designs to be more versatile.

Here's how it could work: Imagine you're studying the effects of a new educational app on students' math skills. You might use quantitative methods like tests and grades to measure how much the students improve—that's the 'numbers part.'

But you also want to know how the students feel about math now, or why they think they got better or worse. For that, you could conduct interviews or have students fill out journals—that's the 'story part.'

Mixed-Methods Design Pros

So, what's the scoop on Mixed-Methods Design? The strength is its versatility and depth; you're not just getting numbers or stories, you're getting both, which gives a fuller picture.

Mixed-Methods Design Cons

But, it's also more challenging. Imagine trying to play two sports at the same time! You have to be skilled in different research methods and know how to combine them effectively.

Mixed-Methods Design Uses

A high-profile example of Mixed-Methods Design is research on climate change. Scientists use numbers and data to show temperature changes (quantitative), but they also interview people to understand how these changes are affecting communities (qualitative).

In our team of experimental designs, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, and Cluster Randomized Design is the team captain, then Mixed-Methods Design is the Renaissance player—skilled in multiple areas and able to bring them all together for a winning strategy.

14) Multivariate Design

Now, let's turn our attention to Multivariate Design, the multitasker of the research world.

If our lineup of research designs were like players on a basketball court, Multivariate Design would be the player dribbling, passing, and shooting all at once. This design doesn't just look at one or two things; it looks at several variables simultaneously to see how they interact and affect each other.

Multivariate Design is like baking a cake with many ingredients. Instead of just looking at how flour affects the cake, you also consider sugar, eggs, and milk all at once. This way, you understand how everything works together to make the cake taste good or bad.

Multivariate Design has been a go-to method in psychology, economics, and social sciences since the latter half of the 20th century. With the advent of computers and advanced statistical software, analyzing multiple variables at once became a lot easier, and Multivariate Design soared in popularity.

Multivariate Design Pros

So, what's the benefit of using Multivariate Design? Its power lies in its complexity. By studying multiple variables at the same time, you can get a really rich, detailed understanding of what's going on.

Multivariate Design Cons

But that complexity can also be a drawback. With so many variables, it can be tough to tell which ones are really making a difference and which ones are just along for the ride.

Multivariate Design Uses

Imagine you're a coach trying to figure out the best strategy to win games. You wouldn't just look at how many points your star player scores; you'd also consider assists, rebounds, turnovers, and maybe even how loud the crowd is. A Multivariate Design would help you understand how all these factors work together to determine whether you win or lose.

A well-known example of Multivariate Design is in market research. Companies often use this approach to figure out how different factors—like price, packaging, and advertising—affect sales. By studying multiple variables at once, they can find the best combination to boost profits.

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, Cluster Randomized Design is the team captain, and Mixed-Methods Design is the Renaissance player, then Multivariate Design is the multitasker—juggling many variables at once to get a fuller picture of what's happening.

15) Pretest-Posttest Design

Let's introduce Pretest-Posttest Design, the "Before and After" superstar of our research team. You've probably seen those before-and-after pictures in ads for weight loss programs or home renovations, right?

Well, this design is like that, but for science! Pretest-Posttest Design checks out what things are like before the experiment starts and then compares that to what things are like after the experiment ends.

This design is one of the classics, a staple in research for decades across various fields like psychology, education, and healthcare. It's so simple and straightforward that it has stayed popular for a long time.

In Pretest-Posttest Design, you measure your subject's behavior or condition before you introduce any changes—that's your "before" or "pretest." Then you do your experiment, and after it's done, you measure the same thing again—that's your "after" or "posttest."

Pretest-Posttest Design Pros

What makes Pretest-Posttest Design special? It's pretty easy to understand and doesn't require fancy statistics.

Pretest-Posttest Design Cons

But there are some pitfalls. What if your subjects improve simply because they're older by the time of the posttest, or because they've seen the test before? Maturation and practice effects like these make it hard to tell whether the program itself is really effective.

Pretest-Posttest Design Uses

Let's say you're a teacher and you want to know if a new math program helps kids get better at multiplication. First, you'd give all the kids a multiplication test—that's your pretest. Then you'd teach them using the new math program. At the end, you'd give them the same test again—that's your posttest. If the kids do better on the second test, you might conclude that the program works.
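With invented quiz scores, that before-and-after comparison is just a few lines of Python:

```python
import statistics

# Hypothetical sketch: multiplication quiz scores before and after the program.
pretest = [55, 60, 48, 70, 62]
posttest = [68, 72, 59, 78, 70]

# Each student's gain, then the average gain across the class.
gains = [post - pre for pre, post in zip(pretest, posttest)]
mean_gain = statistics.mean(gains)
print(mean_gain)  # 10.4 points of average improvement per student
```

A positive average gain is encouraging, but on its own it can't rule out maturation or practice effects; that's the design's core limitation.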

One famous use of Pretest-Posttest Design is in evaluating the effectiveness of driver's education courses. Researchers will measure people's driving skills before and after the course to see if they've improved.

16) Solomon Four-Group Design

Next up is the Solomon Four-Group Design, the "chess master" of our research team. This design is all about strategy and careful planning. Named after Richard L. Solomon, who introduced it in the 1940s, this method tries to correct some of the weaknesses of simpler designs, like the Pretest-Posttest Design.

Here's how it rolls: The Solomon Four-Group Design uses four different groups to test a hypothesis. Two groups get a pretest, then one of them receives the treatment or intervention, and both get a posttest. The other two groups skip the pretest, and only one of them receives the treatment before they both get a posttest.

Sound complicated? It's like playing 4D chess; you're thinking several moves ahead!
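The four-group layout is easier to see written out as a small lookup table (a minimal Python sketch):

```python
# The four groups of a Solomon design, as a simple lookup table.
# Comparing across groups lets you separate the treatment's effect from
# any effect of taking the pretest itself.
solomon_groups = {
    "group_1": {"pretest": True,  "treatment": True,  "posttest": True},
    "group_2": {"pretest": True,  "treatment": False, "posttest": True},
    "group_3": {"pretest": False, "treatment": True,  "posttest": True},
    "group_4": {"pretest": False, "treatment": False, "posttest": True},
}

# e.g., a pretest effect shows up if group_2 outscores group_4 at posttest,
# since the only difference between them is having taken the pretest.
for name, plan in solomon_groups.items():
    print(name, plan)
```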

Solomon Four-Group Design Pros

What's the plus side of the Solomon Four-Group Design? It provides really robust results because it can separate the effect of the treatment from the effect of taking the pretest itself.

Solomon Four-Group Design Cons

The downside? It's a lot of work and requires a lot of participants, making it more time-consuming and costly.

Solomon Four-Group Design Uses

Let's say you want to figure out if a new way of teaching history helps students remember facts better. Two classes take a history quiz (pretest), then one class uses the new teaching method while the other sticks with the old way. Both classes take another quiz afterward (posttest).

Meanwhile, two more classes skip the initial quiz, and then one uses the new method before both take the final quiz. Comparing all four groups will give you a much clearer picture of whether the new teaching method works and whether the pretest itself affects the outcome.

The Solomon Four-Group Design is less commonly used than simpler designs but is highly respected for its ability to control for more variables. It's a favorite in educational and psychological research where you really want to dig deep and figure out what's actually causing changes.

17) Adaptive Designs

Now, let's talk about Adaptive Designs, the chameleons of the experimental world.

Imagine you're a detective, and halfway through solving a case, you find a clue that changes everything. You wouldn't just stick to your old plan; you'd adapt and change your approach, right? That's exactly what Adaptive Designs allow researchers to do.

In an Adaptive Design, researchers can make changes to the study as it's happening, based on early results. In a traditional study, once you set your plan, you stick to it from start to finish.

Adaptive Design Pros

This method is particularly useful in fast-paced or high-stakes situations, like developing a new vaccine in the middle of a pandemic. The ability to adapt can save both time and resources, and more importantly, it can save lives by getting effective treatments out faster.

Adaptive Design Cons

But Adaptive Designs aren't without their drawbacks. They can be very complex to plan and carry out, and there's always a risk that the changes made during the study could introduce bias or errors.

Adaptive Design Uses

Adaptive Designs are most often seen in clinical trials, particularly in the medical and pharmaceutical fields.

For instance, if a new drug is showing really promising results, the study might be adjusted to give more participants the new treatment instead of a placebo. Or if one dose level is showing bad side effects, it might be dropped from the study.

The best part is, these changes are pre-planned. Researchers lay out in advance what changes might be made and under what conditions, which helps keep everything scientific and above board.
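As a toy illustration of such a pre-specified rule (the thresholds, arm names, and interim numbers are all invented), allocation can shift toward the better-performing arm at an interim look:

```python
import random

random.seed(42)  # seeded only so this sketch is reproducible

# Hypothetical interim results: (successes, patients) per arm.
interim = {"treatment": (18, 25), "placebo": (10, 25)}

def success_rate(arm):
    s, n = interim[arm]
    return s / n

# Pre-specified adaptive rule (an assumption for illustration):
# if one arm is at least 20 percentage points better at the interim look,
# shift its allocation probability from 0.5 to 0.7.
alloc_treatment = 0.5
if success_rate("treatment") - success_rate("placebo") >= 0.20:
    alloc_treatment = 0.7

# Allocate the next 100 participants under the updated probability.
next_patients = ["treatment" if random.random() < alloc_treatment else "placebo"
                 for _ in range(100)]
print(alloc_treatment, next_patients.count("treatment"))
```

The key point is that the rule itself (the 20-point trigger and the new 0.7 allocation) was written down before the trial started, so the adaptation stays scientific and above board.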

In terms of applications, besides their heavy usage in medical and pharmaceutical research, Adaptive Designs are also becoming increasingly popular in software testing and market research. In these fields, being able to quickly adjust to early results can give companies a significant advantage.

Adaptive Designs are like the agile startups of the research world—quick to pivot, keen to learn from ongoing results, and focused on rapid, efficient progress. However, they require a great deal of expertise and careful planning to ensure that the adaptability doesn't compromise the integrity of the research.

18) Bayesian Designs

Next, let's dive into Bayesian Designs, the data detectives of the research universe. Named after Thomas Bayes, an 18th-century statistician and minister, this design doesn't just look at what's happening now; it also takes into account what's happened before.

Imagine if you were a detective who not only looked at the evidence in front of you but also used your past cases to make better guesses about your current one. That's the essence of Bayesian Designs.

Bayesian Designs are like detective work in science. As you gather more clues (or data), you update your best guess on what's really happening. This way, your experiment gets smarter as it goes along.

In the world of research, Bayesian Designs are most notably used in areas where you have some prior knowledge that can inform your current study. For example, if earlier research shows that a certain type of medicine usually works well for a specific illness, a Bayesian Design would include that information when studying a new group of patients with the same illness.
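The mechanics are easy to sketch with a Beta-Binomial model, the textbook conjugate example. The prior counts and the new study's data below are hypothetical:

```python
# Beta-Binomial updating: a prior built from earlier studies is combined
# with new data. All numbers are hypothetical, chosen to show the mechanics.

# Prior: earlier research suggested roughly 60 successes in 100 patients.
prior_alpha, prior_beta = 60, 40

# New study data: 14 successes out of 20 new patients.
successes, failures = 14, 6

# Conjugate update: simply add the new counts to the prior counts.
post_alpha = prior_alpha + successes
post_beta = prior_beta + failures

posterior_mean = post_alpha / (post_alpha + post_beta)
print(round(posterior_mean, 3))  # → 0.617
```

The posterior blends what was known before with what the new patients show, and it becomes the prior for the next round of data, which is exactly the "gets smarter as it goes" behavior described above.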

Bayesian Design Pros

One of the major advantages of Bayesian Designs is their efficiency. Because they use existing data to inform the current experiment, often fewer resources are needed to reach a reliable conclusion.

Bayesian Design Cons

However, they can be quite complicated to set up and require a deep understanding of both statistics and the subject matter at hand.

Bayesian Design Uses

Bayesian Designs are highly valued in medical research, finance, environmental science, and even in Internet search algorithms. Their ability to continually update and refine hypotheses based on new evidence makes them particularly useful in fields where data is constantly evolving and where quick, informed decisions are crucial.

Here's a real-world example: In the development of personalized medicine, where treatments are tailored to individual patients, Bayesian Designs are invaluable. If a treatment has been effective for patients with similar genetics or symptoms in the past, a Bayesian approach can use that data to predict how well it might work for a new patient.

This type of design is also increasingly popular in machine learning and artificial intelligence. In these fields, Bayesian Designs help algorithms "learn" from past data to make better predictions or decisions in new situations. It's like teaching a computer to be a detective that gets better and better at solving puzzles the more puzzles it sees.

19) Covariate Adaptive Randomization


Now let's turn our attention to Covariate Adaptive Randomization, which you can think of as the "matchmaker" of experimental designs.

Picture a soccer coach trying to create the most balanced teams for a friendly match. They wouldn't just randomly assign players; they'd take into account each player's skills, experience, and other traits.

Covariate Adaptive Randomization is all about creating the most evenly matched groups possible for an experiment.

In traditional randomization, participants are allocated to different groups purely by chance. This is a pretty fair way to do things, but it can sometimes lead to unbalanced groups.

Imagine if all the professional-level players ended up on one soccer team and all the beginners on another; that wouldn't be a very informative match! Covariate Adaptive Randomization fixes this by using important traits or characteristics (called "covariates") to guide the randomization process.
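Here is a stripped-down sketch of one such scheme: a minimization-style rule that sends each newcomer to whichever group currently has fewer members sharing their covariate levels. The covariates and participants are hypothetical, and real schemes (such as Pocock-Simon minimization) weight covariates and add a random element:

```python
# A stripped-down minimization rule: send each new participant to whichever
# group currently has fewer members sharing their covariate levels.
from collections import defaultdict

counts = {"A": defaultdict(int), "B": defaultdict(int)}

def assign(age_group, sex):
    # Imbalance score per group: existing members sharing each covariate level.
    score = {g: counts[g][age_group] + counts[g][sex] for g in counts}
    group = min(score, key=score.get)  # ties break toward "A" deterministically
    counts[group][age_group] += 1
    counts[group][sex] += 1
    return group

participants = [("old", "F"), ("old", "M"), ("young", "F"), ("old", "F")]
assignments = [assign(age, sex) for age, sex in participants]
print(assignments)  # → ['A', 'B', 'B', 'A']
```

Notice how the rule keeps spreading the "old" participants and the two sexes across both groups instead of letting chance pile them into one.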

Covariate Adaptive Randomization Pros

The benefits of this design are pretty clear: it aims for balance and fairness, making the final results more trustworthy.

Covariate Adaptive Randomization Cons

But it's not perfect. It can be complex to implement and requires a deep understanding of which characteristics are most important to balance.

Covariate Adaptive Randomization Uses

This design is particularly useful in medical trials. Let's say researchers are testing a new medication for high blood pressure. Participants might have different ages, weights, or pre-existing conditions that could affect the results.

Covariate Adaptive Randomization would make sure that each treatment group has a similar mix of these characteristics, making the results more reliable and easier to interpret.

In practical terms, this design is often seen in clinical trials for new drugs or therapies, but its principles are also applicable in fields like psychology, education, and social sciences.

For instance, in educational research, it might be used to ensure that classrooms being compared have similar distributions of students in terms of academic ability, socioeconomic status, and other factors.

Covariate Adaptive Randomization is like the wise elder of the group, ensuring that everyone has an equal opportunity to show their true capabilities, thereby making the collective results as reliable as possible.

20) Stepped Wedge Design

Let's now focus on the Stepped Wedge Design, a thoughtful and cautious member of the experimental design family.

Imagine you're trying out a new gardening technique, but you're not sure how well it will work. You decide to apply it to one section of your garden first, watch how it performs, and then gradually extend the technique to other sections. This way, you get to see its effects over time and across different conditions. That's basically how Stepped Wedge Design works.

In a Stepped Wedge Design, all participants or clusters start off in the control group, and then, at different times, they 'step' over to the intervention or treatment group. This creates a wedge-like pattern over time where more and more participants receive the treatment as the study progresses. It's like rolling out a new policy in phases, monitoring its impact at each stage before extending it to more people.
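The rollout schedule itself is easy to sketch as a 0/1 matrix in which each cluster crosses from control to intervention one period later than the last (the cluster and period counts here are arbitrary):

```python
# Build a stepped-wedge schedule: rows are clusters, columns are time periods,
# 0 = control, 1 = intervention. Each cluster "steps" over one period later.
clusters, periods = 4, 5

schedule = [[1 if t > c else 0 for t in range(periods)] for c in range(clusters)]
for row in schedule:
    print(row)
```

Every cluster starts in control at time 0, and by the final period everyone has received the intervention, producing the wedge shape in the printout.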

Stepped Wedge Design Pros

The Stepped Wedge Design offers several advantages. Firstly, it allows for the study of interventions that are expected to do more good than harm, which makes it ethically appealing.

Secondly, it's useful when resources are limited and it's not feasible to roll out a new treatment to everyone at once. Lastly, because everyone eventually receives the treatment, it can be easier to get buy-in from participants or organizations involved in the study.

Stepped Wedge Design Cons

However, this design can be complex to analyze because it has to account for both the time factor and the changing conditions in each 'step' of the wedge. And like any study where participants know they're receiving an intervention, there's the potential for the results to be influenced by the placebo effect or other biases.

Stepped Wedge Design Uses

This design is particularly useful in health and social care research. For instance, if a hospital wants to implement a new hygiene protocol, it might start in one department, assess its impact, and then roll it out to other departments over time. This allows the hospital to adjust and refine the new protocol based on real-world data before it's fully implemented.

In terms of applications, Stepped Wedge Designs are commonly used in public health initiatives, organizational changes in healthcare settings, and social policy trials. They are particularly useful in situations where an intervention is being rolled out gradually and it's important to understand its impacts at each stage.

21) Sequential Design

Next up is Sequential Design, the dynamic and flexible member of our experimental design family.

Imagine you're playing a video game where you can choose different paths. If you take one path and find a treasure chest, you might decide to continue in that direction. If you hit a dead end, you might backtrack and try a different route. Sequential Design operates in a similar fashion, allowing researchers to make decisions at different stages based on what they've learned so far.

In a Sequential Design, the experiment is broken down into smaller parts, or "sequences." After each sequence, researchers pause to look at the data they've collected. Based on those findings, they then decide whether to stop the experiment because they've got enough information, or to continue and perhaps even modify the next sequence.
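A toy "stop or go" rule makes this concrete. The batch results and the effectiveness/futility thresholds below are invented; real trials derive their stopping bounds statistically:

```python
# A toy sequential rule: look at the data after each sequence (batch) and
# stop early if a pre-set effectiveness or futility bound is crossed.
# Batches and thresholds are invented for illustration.

batches = [(8, 10), (7, 10), (9, 10)]  # (successes, participants) per sequence

successes = trials = looks = 0
decision = "continue to next sequence"
for s, n in batches:
    successes += s
    trials += n
    looks += 1
    rate = successes / trials
    if trials >= 20 and rate >= 0.70:
        decision = "stop early: treatment looks effective"
        break
    if trials >= 20 and rate <= 0.30:
        decision = "stop early: futility"
        break

print(decision, "after", looks, "looks")
```

Here the trial stops after the second look because the cumulative success rate has already crossed the pre-set bound, so the third batch of participants is never needed.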

Sequential Design Pros

One of the great things about Sequential Design is its efficiency. Because you're making data-driven decisions along the way, you can often reach conclusions more quickly and with fewer resources: the experiment only continues when the accumulating data suggests it's worth doing so.

Sequential Design Cons

However, it requires careful planning and expertise to ensure that these "stop or go" decisions are made correctly and without bias.

Sequential Design Uses

This design is often used in clinical trials involving new medications or treatments. For example, if early results show that a new drug has significant side effects, the trial can be stopped before more people are exposed to it.

On the flip side, if the drug is showing promising results, the trial might be expanded to include more participants or to extend the testing period.

Beyond healthcare and medicine, Sequential Design is also popular in quality control in manufacturing, environmental monitoring, and financial modeling. In these areas, being able to make quick decisions based on incoming data can be a big advantage.

Think of Sequential Design as the nimble athlete of experimental designs, capable of quick pivots and adjustments to reach the finish line in the most effective way possible. But just like an athlete needs a good coach, this design requires expert oversight to make sure it stays on the right track.

22) Field Experiments

Last but certainly not least, let's explore Field Experiments—the adventurers of the experimental design world.

Picture a scientist leaving the controlled environment of a lab to test a theory in the real world, like a biologist studying animals in their natural habitat or a social scientist observing people in a real community. These are Field Experiments, and they're all about getting out there and gathering data in real-world settings.

Field Experiments embrace the messiness of the real world, unlike laboratory experiments, where everything is controlled down to the smallest detail. This makes them both exciting and challenging.

Field Experiment Pros

On one hand, the results often give us a better understanding of how things work outside the lab, which gives Field Experiments a real-world relevance that laboratory findings can lack.

Field Experiment Cons

On the other hand, the lack of control can make it harder to tell exactly what's causing what, and intervening in people's lives without their knowledge raises ethical considerations. Yet, despite these challenges, Field Experiments remain a valuable tool for researchers who want to understand how theories play out in the real world.

Field Experiment Uses

Let's say a school wants to improve student performance. In a Field Experiment, they might change the school's daily schedule for one semester and keep track of how students perform compared to another school where the schedule remained the same.

Because the study is happening in a real school with real students, the results could be very useful for understanding how the change might work in other schools. But since it's the real world, lots of other factors—like changes in teachers or even the weather—could affect the results.

Field Experiments are widely used in economics, psychology, education, and public policy. For example, you might have heard of the famous "Broken Windows" research from the early 1980s, which looked at how small signs of disorder, like broken windows or graffiti, could encourage more serious crime in neighborhoods. That work had a big impact on how cities think about crime prevention.

From the foundational concepts of control groups and independent variables to the sophisticated layouts like Covariate Adaptive Randomization and Sequential Design, it's clear that the realm of experimental design is as varied as it is fascinating.

We've seen that each design has its own special talents, ideal for specific situations. Some designs, like the Classic Controlled Experiment, are like reliable old friends you can always count on.

Others, like Sequential Design, are flexible and adaptable, making quick changes based on what they learn. And let's not forget the adventurous Field Experiments, which take us out of the lab and into the real world to discover things we might not see otherwise.

Choosing the right experimental design is like picking the right tool for the job. The method you choose can make a big difference in how reliable your results are and how much people will trust what you've discovered. And as we've learned, there's a design to suit just about every question, every problem, and every curiosity.

So the next time you read about a new discovery in medicine, psychology, or any other field, you'll have a better understanding of the thought and planning that went into figuring things out. Experimental design is more than just a set of rules; it's a structured way to explore the unknown and answer questions that can change the world.


© PracticalPsychology. All rights reserved



Experimental Design – Types, Methods, Guide


Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.
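A minimal sketch of this two-step process, blocking on a hypothetical age variable and then randomizing within each block (participant labels are invented):

```python
import random

random.seed(1)  # seeded only so the sketch is reproducible

# Step 1: group participants into blocks by a shared characteristic (age here).
blocks = {
    "young": ["Y1", "Y2", "Y3", "Y4"],
    "older": ["O1", "O2", "O3", "O4"],
}

# Step 2: randomize to treatment/control *within* each block, half and half.
assignment = {}
for members in blocks.values():
    shuffled = random.sample(members, len(members))
    half = len(shuffled) // 2
    for p in shuffled[:half]:
        assignment[p] = "treatment"
    for p in shuffled[half:]:
        assignment[p] = "control"

print(assignment)
```

Because the split happens inside each block, every treatment group ends up with the same mix of young and older participants by construction.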

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.
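For instance, a hypothetical 2x2 design crossing two independent variables yields four treatment combinations, which are easy to enumerate:

```python
# Enumerate the cells of a hypothetical 2x2 factorial design:
# every level of one factor is crossed with every level of the other.
from itertools import product

caffeine_levels = ["caffeine", "decaf"]
sleep_levels = ["8h sleep", "4h sleep"]

cells = list(product(caffeine_levels, sleep_levels))
print(cells)
```

Participants would then be randomly assigned across these four cells, letting researchers estimate each factor's effect as well as any interaction between them.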

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, one factor is applied to large experimental units ("whole plots") while a second factor is applied to smaller units ("subplots") nested within them. It is useful when one variable is harder or more expensive to vary than the other.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.
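In its simplest form this is just a shuffle and split (the participant labels and the seed are there only to make the sketch reproducible):

```python
import random

random.seed(7)  # seeded only so the sketch is reproducible

participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

# Shuffle, then split into two equal groups.
shuffled = random.sample(participants, len(participants))
treatment, control = shuffled[:4], shuffled[4:]
print(treatment, control)
```

Every participant has the same chance of landing in either group, so pre-existing differences are spread across groups on average.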

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.
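With three conditions there are six possible presentation orders, and a simple round-robin over them is a sketch of complete counterbalancing (condition and participant names are hypothetical):

```python
# Complete counterbalancing of three conditions: enumerate all possible
# presentation orders and cycle participants through them.
from itertools import permutations

conditions = ["A", "B", "C"]
orders = list(permutations(conditions))  # 6 orders for 3 conditions

assignment = {f"P{i + 1}": orders[i % len(orders)] for i in range(6)}
print(assignment)
```

Across the six participants, each condition appears first (and last) exactly twice, so order effects wash out on average.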

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
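With Python's standard library, that summary is only a few lines (the scores are hypothetical):

```python
# Quick descriptive summary of a hypothetical set of test scores
# using only the Python standard library.
from statistics import mean, median, mode, stdev

scores = [72, 85, 85, 90, 78, 85, 88]

summary = {
    "mean": round(mean(scores), 2),
    "median": median(scores),
    "mode": mode(scores),
    "range": max(scores) - min(scores),
    "stdev": round(stdev(scores), 2),
}
print(summary)
```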

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
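The core computation of a one-way ANOVA fits in a few lines: partition the total variability into between-group and within-group pieces and form their ratio (the group data below are hypothetical):

```python
# One-way ANOVA by hand on three hypothetical groups: partition variability
# into between-group and within-group components and form the F ratio.
from statistics import mean

groups = [
    [4, 5, 6, 5],   # group 1
    [7, 8, 9, 8],   # group 2
    [4, 4, 5, 5],   # group 3
]

grand = mean(x for g in groups for x in g)   # grand mean over all scores
k = len(groups)                              # number of groups
N = sum(len(g) for g in groups)              # total sample size

ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

# F = (between-group variance) / (within-group variance)
f_stat = (ss_between / (k - 1)) / (ss_within / (N - k))
print(round(f_stat, 2))
```

A large F, as here, indicates the group means differ by more than within-group noise would suggest; in practice the F value is compared against an F distribution to get a p-value.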

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
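A simple linear regression can be fit directly from the least-squares formulas (the x and y values here are hypothetical):

```python
# Simple linear regression fit by ordinary least squares,
# computed directly from the textbook formulas on hypothetical data.
from statistics import mean

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mx, my = mean(x), mean(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

print(round(slope, 2), round(intercept, 2))
```

The positive slope says y tends to rise with x; its size and sign are exactly the "strength and direction" of the relationship described above.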

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture : Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology : Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering : Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education : Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing : Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research : A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question : Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment : Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment : Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results are consistent with the hypothesis, it is supported; if they are not, it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.
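Steps 5 through 7 above can be sketched in a few lines of Python. This is a minimal illustration, not a full analysis pipeline: the participants and scores below are simulated stand-ins for real experimental data.

```python
import random
import statistics

rng = random.Random(42)  # fixed seed so the run is reproducible

# Step 5: randomly assign 20 hypothetical participants to two groups.
participants = list(range(20))
rng.shuffle(participants)
control, treatment = participants[:10], participants[10:]

# Step 6: "measure" the dependent variable -- simulated scores here;
# real values would come from the experiment itself.
def measure(pid, boost):
    return rng.gauss(50 + boost, 5)

control_scores = [measure(p, 0) for p in control]
treatment_scores = [measure(p, 10) for p in treatment]

# Step 7: a first-pass analysis compares group means; a full analysis
# would use a significance test such as a t-test.
effect = statistics.fmean(treatment_scores) - statistics.fmean(control_scores)
print(round(effect, 1))
```

Because the simulated treatment adds a true boost of 10 points, the estimated effect lands near that value, with some noise from the small sample.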

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication : Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision : Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability : If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality : Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias : Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time : Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias : Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility : Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer

Design of Experiments in NCSS

NCSS includes design-of-experiments procedures for randomization lists, balanced incomplete block designs, fractional factorial designs, Latin square designs, response surface designs, screening designs, Taguchi designs, two-level designs, a design generator, and D-optimal designs.

Randomization Lists - Sample Output

Balanced Incomplete Block Designs - Sample Output

A balanced incomplete block design for four treatments (A–D) in four blocks of three:

Block 1: A B C
Block 2: A B D
Block 3: A C D
Block 4: B C D

The randomization proceeds as follows:

  • 1. Randomly assign the numbers to the blocks.
  • 2. Randomly assign the letters to the treatments.
  • 3. Randomly assign the treatments within the blocks.
  • 4. Randomly group blocks as replicates. A replicate is a complete set of all treatments.

Latin Square Designs - Sample Output

A 4×4 Latin square and a corresponding Graeco-Latin square:

A B C D    Aa Bb Cc Dd
B C D A    Bd Ca Db Ac
C D A B    Cb Dc Ad Ba
D A B C    Dc Ad Ba Cb

Response Surface Designs - Sample Output

Factor levels in a response surface design are coded as follows:

  • 1. The low-level value is assigned to -1.
  • 2. The high-level value is assigned to 1.
  • 3. The average of these two values is assigned to 0.
  • 4. The values of -a and a are used to find the minimum and the maximum values.

Statistics By Jim

Making statistics intuitive

Experimental Design: Definition and Types

By Jim Frost

What is Experimental Design?

An experimental design is a detailed plan for collecting and using data to identify causal relationships. Through careful planning, the design of experiments allows your data collection efforts to have a reasonable chance of detecting effects and testing hypotheses that answer your research questions.

An experiment is a data collection procedure that occurs in controlled conditions to identify and understand causal relationships between variables. Researchers can use many potential designs. The ultimate choice depends on their research question, resources, goals, and constraints. In some fields of study, researchers refer to experimental design as the design of experiments (DOE). Both terms are synonymous.


Ultimately, the design of experiments helps ensure that your procedures and data will evaluate your research question effectively. Without an experimental design, you might waste your efforts in a process that, for many potential reasons, can’t answer your research question. In short, it helps you trust your results.

Learn more about Independent and Dependent Variables .

Design of Experiments: Goals & Settings

Experiments occur in many settings, including psychology, the social sciences, medicine, physics, engineering, and the industrial and service sectors. Typically, experimental goals are to discover a previously unknown effect, confirm a known effect, or test a hypothesis.

Effects represent causal relationships between variables. For example, in a medical experiment, does the new medicine cause an improvement in health outcomes? If so, the medicine has a causal effect on the outcome.

An experimental design’s focus depends on the subject area and can include the following goals:

  • Understanding the relationships between variables.
  • Identifying the variables that have the largest impact on the outcomes.
  • Finding the input variable settings that produce an optimal result.

For example, psychologists have conducted experiments to understand how conformity affects decision-making. Sociologists have performed experiments to determine whether ethnicity affects the public reaction to staged bike thefts. These experiments map out the causal relationships between variables, and their primary goal is to understand the role of various factors.

Conversely, in a manufacturing environment, the researchers might use an experimental design to find the factors that most effectively improve their product’s strength, identify the optimal manufacturing settings, and do all that while accounting for various constraints. In short, a manufacturer’s goal is often to use experiments to improve their products cost-effectively.

In a medical experiment, the goal might be to quantify the medicine’s effect and find the optimum dosage.

Developing an Experimental Design

Developing an experimental design involves planning that maximizes the potential to collect data that is both trustworthy and able to detect causal relationships. Specifically, these studies aim to see effects when they exist in the population the researchers are studying, preferentially favor causal effects, isolate each factor’s true effect from potential confounders, and produce conclusions that you can generalize to the real world.

To accomplish these goals, experimental designs carefully manage data validity and reliability , and internal and external experimental validity. When your experiment is valid and reliable, you can expect your procedures and data to produce trustworthy results.

An excellent experimental design involves the following:

  • Lots of preplanning.
  • Developing experimental treatments.
  • Determining how to assign subjects to treatment groups.

The remainder of this article focuses on how experimental designs incorporate these essential items to accomplish their research goals.

Learn more about Data Reliability vs. Validity and Internal and External Experimental Validity .

Preplanning, Defining, and Operationalizing for Design of Experiments

A literature review is crucial for the design of experiments.

This phase of the design of experiments helps you identify critical variables, know how to measure them while ensuring reliability and validity, and understand the relationships between them. The review can also help you find ways to reduce sources of variability, which increases your ability to detect treatment effects. Notably, the literature review allows you to learn how similar studies designed their experiments and the challenges they faced.

Operationalizing a study involves taking your research question, using the background information you gathered, and formulating an actionable plan.

This process should produce a specific and testable hypothesis using data that you can reasonably collect given the resources available to the experiment.

  • Null hypothesis : The jumping exercise intervention does not affect bone density.
  • Alternative hypothesis : The jumping exercise intervention affects bone density.
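A hypothesis like this is typically evaluated by comparing group means. As an illustration, Welch's t statistic can be computed directly; the bone density readings below are hypothetical values, not data from the study.

```python
import statistics
from math import sqrt

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.fmean(sample_a), statistics.fmean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Hypothetical bone density readings (g/cm^2), not data from the study.
jumping = [1.05, 1.10, 1.08, 1.12, 1.07]
no_jumping = [1.01, 1.03, 0.99, 1.04, 1.02]
print(round(welch_t(jumping, no_jumping), 2))  # ≈ 4.45
```

A large t statistic like this would lead you to reject the null hypothesis; in practice you would also compute a p-value against the appropriate degrees of freedom.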

To learn more about this early phase, read Five Steps for Conducting Scientific Studies with Statistical Analyses .

Formulating Treatments in Experimental Designs

In an experimental design, treatments are variables that the researchers control. They are the primary independent variables of interest. Researchers administer the treatment to the subjects or items in the experiment and want to know whether it causes changes in the outcome.

As the name implies, a treatment can be medical in nature, such as a new medicine or vaccine. But it’s a general term that applies to other things such as training programs, manufacturing settings, teaching methods, and types of fertilizers. I helped run an experiment where the treatment was a jumping exercise intervention that we hoped would increase bone density. All these treatment examples are things that potentially influence a measurable outcome.

Even when you know your treatment generally, you must carefully consider the amount. How large of a dose? If you’re comparing three different temperatures in a manufacturing process, how far apart are they? For my bone mineral density study, we had to determine how frequently the exercise sessions would occur and how long each lasted.

How you define the treatments in the design of experiments can affect your findings and the generalizability of your results.

Assigning Subjects to Experimental Groups

A crucial decision for all experimental designs is determining how researchers assign subjects to the experimental conditions—the treatment and control groups. The control group is often, but not always, the lack of a treatment. It serves as a basis for comparison by showing outcomes for subjects who don’t receive a treatment. Learn more about Control Groups .

How your experimental design assigns subjects to the groups affects how confident you can be that the findings represent true causal effects rather than mere correlation caused by confounders. Indeed, the assignment method influences how you control for confounding variables. This is the difference between correlation and causation .

Imagine a study finds that vitamin consumption correlates with better health outcomes. As a researcher, you want to be able to say that vitamin consumption causes the improvements. However, with the wrong experimental design, you might only be able to say there is an association. A confounder, and not the vitamins, might actually cause the health benefits.

Let’s explore some of the ways to assign subjects in design of experiments.

Completely Randomized Designs

A completely randomized experimental design randomly assigns all subjects to the treatment and control groups. You simply take each participant and use a random process to determine their group assignment. You can flip coins, roll a die, or use a computer. Randomized experiments must be prospective studies because they need to be able to control group assignment.

Random assignment in the design of experiments helps ensure that the groups are roughly equivalent at the beginning of the study. This equivalence at the start increases your confidence that any differences you see at the end were caused by the treatments. The randomization tends to equalize confounders between the experimental groups and, thereby, cancels out their effects, leaving only the treatment effects.

For example, in a vitamin study, the researchers can randomly assign participants to either the control or vitamin group. Because the groups are approximately equal when the experiment starts, if the health outcomes are different at the end of the study, the researchers can be confident that the vitamins caused those improvements.
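A quick simulation shows how random assignment tends to balance a confounder across groups. The ages below are randomly generated, hypothetical values.

```python
import random
import statistics

rng = random.Random(0)

# Hypothetical participants carrying a potential confounder (age).
ages = [rng.randint(20, 70) for _ in range(1000)]

# Completely randomized design: shuffle, then split down the middle.
rng.shuffle(ages)
group_a, group_b = ages[:500], ages[500:]

# Random assignment leaves the confounder's mean nearly equal in both
# groups, so age differences cannot masquerade as a treatment effect.
diff = abs(statistics.fmean(group_a) - statistics.fmean(group_b))
print(round(diff, 2))
```

The difference in mean age between the two groups is a fraction of a year, even though no one deliberately matched the groups on age.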

Statisticians consider randomized experimental designs to be the best for identifying causal relationships.

If you can’t randomly assign subjects but want to draw causal conclusions about an intervention, consider using a quasi-experimental design .

Learn more about Randomized Controlled Trials and Random Assignment in Experiments .

Randomized Block Designs

Nuisance factors are variables that can affect the outcome, but they are not the researcher’s primary interest. Unfortunately, they can hide or distort the treatment results. When experimenters know about specific nuisance factors, they can use a randomized block design to minimize their impact.

This experimental design takes subjects with a shared “nuisance” characteristic and groups them into blocks. The participants in each block are then randomly assigned to the experimental groups. This process allows the experiment to control for known nuisance factors.

Blocking in the design of experiments reduces the impact of nuisance factors on experimental error. The analysis assesses the effects of the treatment within each block, which removes the variability between blocks. The result is that blocked experimental designs can reduce the impact of nuisance variables, increasing the ability to detect treatment effects accurately.

Suppose you’re testing various teaching methods. Because grade level likely affects educational outcomes, you might use grade level as a blocking factor. To use a randomized block design for this scenario, divide the participants by grade level and then randomly assign the members of each grade level to the experimental groups.
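A minimal sketch of this blocking scheme, with hypothetical students and grade levels (the function and names are illustrative):

```python
import random
from collections import defaultdict

def blocked_assignment(subjects, block_of, groups=("control", "treatment"), seed=1):
    """Randomized block design: group subjects by a nuisance factor,
    then randomly assign them to experimental groups within each block."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for subject in subjects:
        blocks[block_of(subject)].append(subject)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)
        for i, subject in enumerate(members):
            assignment[subject] = groups[i % len(groups)]  # balanced within block
    return assignment

# Hypothetical students; grade level is the blocking factor.
students = [(f"S{grade}{i}", grade) for grade in (3, 4, 5) for i in range(6)]
assignment = blocked_assignment(students, block_of=lambda s: s[1])
```

Within every grade-level block, exactly half the students land in each experimental group, so grade level cannot be confounded with treatment.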

A standard guideline for an experimental design is to “Block what you can, randomize what you cannot.” Use blocking for a few primary nuisance factors. Then use random assignment to distribute the unblocked nuisance factors equally between the experimental conditions.

You can also use covariates to control nuisance factors. Learn about Covariates: Definition and Uses .

Observational Studies

In some experimental designs, randomly assigning subjects to the experimental conditions is impossible or unethical. The researchers simply can’t assign participants to the experimental groups. However, they can observe them in their natural groupings, measure the essential variables, and look for correlations. These observational studies are also known as quasi-experimental designs. Retrospective studies must be observational in nature because they look back at past events.

Imagine you’re studying the effects of depression on an activity. Clearly, you can’t randomly assign participants to the depression and control groups. But you can observe participants with and without depression and see how their task performance differs.

Observational studies let you perform research when you can’t control the treatment. However, quasi-experimental designs increase the problem of confounding variables. For this design of experiments, correlation does not necessarily imply causation. While special procedures can help control confounders in an observational study, you’re ultimately less confident that the results represent causal findings.

Learn more about Observational Studies .

For a good comparison, learn about the differences and tradeoffs between Observational Studies and Randomized Experiments .

Between-Subjects vs. Within-Subjects Experimental Designs

When you think of the design of experiments, you probably picture a treatment and control group. Researchers assign participants to only one of these groups, so each group contains entirely different subjects than the other groups. Analysts compare the groups at the end of the experiment. Statisticians refer to this method as a between-subjects, or independent measures, experimental design.

In a between-subjects design , you can have more than one treatment group, but each subject is exposed to only one condition, the control group or one of the treatment groups.

A potential downside to this approach is that differences between groups at the beginning can affect the results at the end. As you’ve read earlier, random assignment can reduce those differences, but it is imperfect. There will always be some variability between the groups.

In a  within-subjects experimental design , also known as repeated measures, subjects experience all treatment conditions and are measured for each. Each subject acts as their own control, which reduces variability and increases the statistical power to detect effects.

In this experimental design, you minimize pre-existing differences between the experimental conditions because they all contain the same subjects. However, the order of treatments can affect the results. Beware of practice and fatigue effects. Learn more about Repeated Measures Designs .

Between-Subjects Design                                   Within-Subjects Design
Assigned to one experimental condition                    Participates in all experimental conditions
Requires more subjects                                    Requires fewer subjects
Differences between subjects can affect the results       Uses the same subjects in all conditions
No treatment order effects                                Order of treatments can affect results

Design of Experiments Examples

For example, a bone density study has three experimental groups—a control group, a stretching exercise group, and a jumping exercise group.

In a between-subjects experimental design, scientists randomly assign each participant to one of the three groups.

In a within-subjects design, all subjects experience the three conditions sequentially while the researchers measure bone density repeatedly. The procedure can switch the order of treatments for the participants to help reduce order effects.
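One common way to switch the order is cyclic (Latin square) counterbalancing, sketched here for the three bone density conditions:

```python
conditions = ["control", "stretching", "jumping"]

# Cyclic (Latin square) counterbalancing: rotate the starting condition so
# each treatment appears in each position equally often across subjects.
orders = [conditions[i:] + conditions[:i] for i in range(len(conditions))]
for order in orders:
    print(" -> ".join(order))
```

Assigning each participant one of these three orders ensures every condition appears first, second, and third equally often, which helps cancel out practice and fatigue effects.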

Matched Pairs Experimental Design

A matched pairs experimental design is a between-subjects study that uses pairs of similar subjects. Researchers use this approach to reduce pre-existing differences between experimental groups. It’s yet another design of experiments method for reducing sources of variability.

Researchers identify variables likely to affect the outcome, such as demographics. When they pick a subject with a set of characteristics, they try to locate another participant with similar attributes to create a matched pair. Scientists randomly assign one member of a pair to the treatment group and the other to the control group.
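A simple matching heuristic, assuming subjects are matched on a single variable such as age (the names and ages below are hypothetical; real matching often uses several characteristics at once):

```python
import random

def matched_pairs(subjects, key, seed=7):
    """Sort subjects on the matching variable, pair neighbors, then randomly
    send one member of each pair to treatment and the other to control."""
    rng = random.Random(seed)
    ranked = sorted(subjects, key=key)
    treatment, control = [], []
    for pair in zip(ranked[::2], ranked[1::2]):
        pair = list(pair)
        rng.shuffle(pair)
        treatment.append(pair[0])
        control.append(pair[1])
    return treatment, control

# Hypothetical subjects matched on age.
subjects = [("P1", 25), ("P2", 61), ("P3", 27), ("P4", 33), ("P5", 59), ("P6", 35)]
treatment, control = matched_pairs(subjects, key=lambda s: s[1])
```

Each treatment subject ends up paired with a control subject of nearly the same age, while the coin flip within each pair preserves random assignment.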

On the plus side, this process creates two similar groups without introducing treatment order effects. While matched pairs do not produce the perfectly matched groups of a within-subjects design (which uses the same subjects in all conditions), the approach reduces variability between groups relative to a standard between-subjects study.

On the downside, finding matched pairs is very time-consuming. Additionally, if one member of a matched pair drops out, the other subject must leave the study too.

Learn more about Matched Pairs Design: Uses & Examples .

Another consideration is whether you’ll use a cross-sectional design (one point in time) or use a longitudinal study to track changes over time .

A case study is a research method that often serves as a precursor to a more rigorous experimental design by identifying research questions, variables, and hypotheses to test. Learn more about What is a Case Study? Definition & Examples .

In conclusion, the design of experiments is extremely sensitive to subject area concerns and the time and resources available to the researchers. Developing a suitable experimental design requires balancing a multitude of considerations. A successful design is necessary to obtain trustworthy answers to your research question and to have a reasonable chance of detecting treatment effects when they exist.


Experimental design: Guide, steps, examples

Last updated

27 April 2023

Reviewed by

Miroslav Damyanov


Experimental research design is a scientific framework that allows you to manipulate one or more variables while controlling the test environment. 

When testing a theory or new product, it can be helpful to have a certain level of control and manipulate variables to discover different outcomes. You can use these experiments to determine cause and effect or study variable associations. 

This guide explores the types of experimental design, the steps in designing an experiment, and the advantages and limitations of experimental design. 


What is experimental research design?

You can determine the relationship between each of the variables by: 

Manipulating one or more independent variables (i.e., stimuli or treatments)

Measuring the resulting changes in one or more dependent variables (i.e., the outcomes)

With the ability to analyze the relationship between variables and using measurable data, you can increase the accuracy of the result. 

What is a good experimental design?

A good experimental design requires: 

Significant planning to ensure control over the testing environment

Sound experimental treatments

Properly assigning subjects to treatment groups

Without proper planning, unexpected external variables can alter an experiment's outcome. 

To meet your research goals, your experimental design should include these characteristics:

Provide unbiased estimates of inputs and associated uncertainties

Enable the researcher to detect differences caused by independent variables

Include a plan for analysis and reporting of the results

Provide easily interpretable results with specific conclusions

What's the difference between experimental and quasi-experimental design?

The major difference between experimental and quasi-experimental design is the random assignment of subjects to groups. 

A true experiment relies on certain controls. Typically, the researcher designs the treatment and randomly assigns subjects to control and treatment groups. 

However, these conditions are unethical or impossible to achieve in some situations.

When it's unethical or impractical to assign participants randomly, that’s when a quasi-experimental design comes in. 

This design allows researchers to conduct a similar experiment by assigning subjects to groups based on non-random criteria. 

Another type of quasi-experimental design might occur when the researcher doesn't have control over the treatment but studies pre-existing groups after they receive different treatments.

When can a researcher conduct experimental research?

Various settings and professions can use experimental research to gather information and observe behavior in controlled settings. 

Basically, a researcher can conduct experimental research any time they want to test a theory by manipulating independent variables and measuring dependent variables under controlled conditions. 

Experimental research is an option when the project includes an independent variable and a desire to understand the relationship between cause and effect. 

The importance of experimental research design

Experimental research enables researchers to conduct studies that provide specific, definitive answers to questions and hypotheses. 

Researchers can test independent variables in controlled settings to:

Test the effectiveness of a new medication

Design better products for consumers

Answer questions about human health and behavior

Developing a quality research plan means a researcher can accurately answer vital research questions with minimal error. As a result, definitive conclusions can influence the future of the independent variable. 

Types of experimental research designs

There are three main types of experimental research design. The research type you use will depend on the criteria of your experiment, your research budget, and environmental limitations. 

Pre-experimental research design

A pre-experimental research study is a basic observational study that monitors independent variables’ effects. 

During research, you observe one or more groups after applying a treatment to test whether the treatment causes any change. 

The three subtypes of pre-experimental research design are:

One-shot case study research design

This research method introduces a single test group to a single stimulus to study the results at the end of the application. 

After researchers presume the stimulus or treatment has caused changes, they gather results to determine how it affects the test subjects. 

One-group pretest-posttest design

This method uses a single test group but includes a pretest study as a benchmark. The researcher applies a test before and after the group’s exposure to a specific stimulus. 

Static group comparison design

This method includes two or more groups, enabling the researcher to use one group as a control. They apply a stimulus to one group and leave the other group static. 

A posttest study compares the results among groups. 

True experimental research design

A true experiment is the most common research method. It uses statistical analysis to support or reject a specific hypothesis. 

Under completely experimental conditions, researchers expose participants in two or more randomized groups to different stimuli. 

Random assignment reduces the potential for bias, providing more reliable results. 

These are the three main sub-groups of true experimental research design:

Posttest-only control group design

This structure requires the researcher to divide participants into two random groups. One group receives no stimuli and acts as a control while the other group experiences stimuli.

Researchers perform a test at the end of the experiment to observe the stimuli exposure results.

Pretest-posttest control group design

This test also requires two groups. It includes a pretest as a benchmark before introducing the stimulus. 

The pretest introduces multiple ways to test subjects. For instance, if the control group also experiences a change, it reveals that taking the test twice changes the results.

Solomon four-group design

This structure divides subjects into four groups, two of which are control groups. Researchers assign the first control group a posttest only and the second control group a pretest and a posttest.

The two variable groups mirror the control groups, but researchers expose them to stimuli. The ability to differentiate between groups in multiple ways provides researchers with more testing approaches for data-based conclusions. 
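As a rough illustration, the random split into the four Solomon cells could be sketched in Python (a minimal sketch; the cell labels and group sizes are invented for the example):

```python
import random

def solomon_four_groups(subjects, seed=0):
    """Randomly split subjects into the four Solomon cells:
    pretest/no-pretest crossed with treatment/control."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    quarter = len(pool) // 4  # drop any remainder so cells stay equal
    cells = ["pretest+treatment", "pretest+control",
             "treatment_only", "control_only"]
    return {cell: pool[i * quarter:(i + 1) * quarter]
            for i, cell in enumerate(cells)}

design = solomon_four_groups(range(40))
```

Comparing the two pretested cells against the two posttest-only cells is what lets researchers separate the treatment effect from any effect of taking the pretest itself.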

Quasi-experimental research design

Although closely related to a true experiment, quasi-experimental research design differs in approach and scope. 

Quasi-experimental research design doesn’t have randomly selected participants. Researchers typically divide the groups in this research by pre-existing differences. 

Quasi-experimental research is more common in educational studies, nursing, or other research projects where it's not ethical or practical to use randomized subject groups.

  • 5 steps for designing an experiment

Experimental research requires a clearly defined plan to outline the research parameters and expected goals. 

Here are five key steps in designing a successful experiment:

Step 1: Define variables and their relationship

Your experiment should begin with a question: What are you hoping to learn through your experiment? 

The relationship between variables in your study will determine your answer.

Define the independent variable (the intended stimulus) and the dependent variable (the expected effect of the stimulus). After identifying these variables, consider how you might control them in your experiment.

Could natural variations affect your research? If so, your experiment should include a pretest and posttest. 

Step 2: Develop a specific, testable hypothesis

With a firm understanding of the system you intend to study, you can write a specific, testable hypothesis. 

What is the expected outcome of your study? 

Develop a prediction about how the independent variable will affect the dependent variable. 

How will the stimuli in your experiment affect your test subjects? 

Your hypothesis should provide a prediction of the answer to your research question . 

Step 3: Design experimental treatments to manipulate your independent variable

Depending on your experiment, your variable may be a fixed stimulus (like a medical treatment) or a variable stimulus (like a period during which an activity occurs). 

Determine which type of stimulus meets your experiment’s needs and how widely or finely to vary your stimuli. 

Step 4: Assign subjects to groups

When you have a clear idea of how to carry out your experiment, you can determine how to assemble test groups for an accurate study. 

When choosing your study groups, consider: 

The size of your experiment

Whether you can select groups randomly

Your target audience for the outcome of the study

You should be able to create groups with an equal number of subjects and include subjects that match your target audience. Remember, you should assign one group as a control and use one or more groups to study the effects of variables. 
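A minimal Python sketch of this step (the group names and sizes are illustrative, not part of any particular study): shuffle the subjects, deal them into equal-sized groups, and designate one group the control.

```python
import random

def assign_groups(subjects, n_groups=3, seed=None):
    """Shuffle subjects and deal them into equal-sized groups.
    The first group serves as the control; the rest receive treatments."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    size = len(pool) // n_groups  # drop any remainder so groups stay equal
    chunks = [pool[i * size:(i + 1) * size] for i in range(n_groups)]
    return {"control": chunks[0],
            **{f"treatment_{i}": g for i, g in enumerate(chunks[1:], start=1)}}

groups = assign_groups(range(30), n_groups=3, seed=42)
```

Shuffling before splitting is what makes the allocation random rather than dependent on the order in which subjects were recruited.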

Step 5: Plan how to measure your dependent variable

This step establishes how you'll collect data on the study's outcome. You should seek reliable and valid measurements that minimize research bias or error.

You can measure some data with scientific tools, while you’ll need to operationalize other forms to turn them into measurable observations.

  • Advantages of experimental research

Experimental research is an integral part of our world. It allows researchers to conduct experiments that answer specific questions. 

While researchers use many methods to conduct different experiments, experimental research offers these distinct benefits:

Researchers can determine cause and effect by manipulating variables.

It gives researchers a high level of control.

Researchers can test multiple variables within a single experiment.

All industries and fields of knowledge can use it. 

Researchers can replicate results to support the validity of the study .

Researchers can replicate natural settings rapidly, enabling immediate research.

Researchers can combine it with other research methods.

It provides specific conclusions about the validity of a product, theory, or idea.

  • Disadvantages (or limitations) of experimental research

Unfortunately, no research type yields ideal conditions or perfect results. 

While experimental research might be the right choice for some studies, certain conditions could render experiments useless or even dangerous. 

Before conducting experimental research, consider these disadvantages and limitations:

Required professional qualification

Only competent professionals with an academic degree and specific training are qualified to conduct rigorous experimental research. This ensures results are unbiased and valid. 

Limited scope

Experimental research may not capture the complexity of some phenomena, such as social interactions or cultural norms. These are difficult to control in a laboratory setting.

Resource-intensive

Experimental research can be expensive, time-consuming, and require significant resources, such as specialized equipment or trained personnel.

Limited generalizability

The controlled nature means the research findings may not fully apply to real-world situations or people outside the experimental setting.

Practical or ethical concerns

Some experiments may involve manipulating variables that could harm participants or violate ethical guidelines . 

Researchers must ensure their experiments do not cause harm or discomfort to participants. 

Sometimes, recruiting a sample of people to randomly assign may be difficult. 

  • Experimental research design example

Experiments across all industries and research realms provide scientists, developers, and other researchers with definitive answers. These experiments can solve problems, drive inventions, and treat illnesses.

Product design testing is an excellent example of experimental research. 

A company in the product development phase creates multiple prototypes for testing. With a randomized selection, researchers introduce each test group to a different prototype. 

When groups experience different product designs , the company can assess which option most appeals to potential customers. 

Experimental research design provides researchers with a controlled environment to conduct experiments that evaluate cause and effect. 

Using the five steps to develop a research plan ensures you anticipate and eliminate external variables while answering life’s crucial questions.





A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans . Revised on 5 December 2022.



You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

  • Phone use and sleep — independent variable: minutes of phone use before sleep; dependent variable: hours of sleep per night.
  • Temperature and soil respiration — independent variable: air temperature just above the soil surface; dependent variable: CO2 respired from soil.

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

  • Phone use and sleep — extraneous variable: natural variation in sleep patterns among individuals. Control: measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group.
  • Temperature and soil respiration — extraneous variable: soil moisture, which also affects respiration and can decrease with increasing temperature. Control: monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

  • Phone use and sleep — null hypothesis (H₀): phone use before sleep does not correlate with the amount of sleep a person gets. Alternate hypothesis (Hₐ): increasing phone use before sleep leads to a decrease in sleep.
  • Temperature and soil respiration — null hypothesis (H₀): air temperature does not correlate with soil respiration. Alternate hypothesis (Hₐ): increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. In the soil warming experiment, for example, you could increase air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. In the phone use experiment, for example, you could treat phone use as:

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.
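The link between study size and statistical power can be illustrated with a small simulation (a hedged sketch, not a formal power analysis; the effect size, noise level, and detection threshold below are all illustrative):

```python
import random
import statistics

def simulated_power(n, effect=0.5, sd=1.0, z_crit=1.96, trials=2000, seed=1):
    """Estimate the power of a two-group experiment by simulation:
    the fraction of simulated experiments in which the treatment
    effect is detected (|z| > z_crit, a large-sample approximation)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, sd) for _ in range(n)]
        treated = [rng.gauss(effect, sd) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = ((statistics.variance(control) + statistics.variance(treated)) / n) ** 0.5
        if abs(diff / se) > z_crit:
            hits += 1
    return hits / trials

# More subjects per group means a higher chance of detecting the same effect.
low_n, high_n = simulated_power(10), simulated_power(50)
```

With these illustrative settings, the estimated power rises sharply as the per-group sample size grows, which is exactly why study size matters before any data are collected.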

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design .
  • A between-subjects design vs a within-subjects design .

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design , every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
  • Phone use and sleep — completely randomised: subjects are all randomly assigned a level of phone use using a random number generator. Randomised block: subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
  • Temperature and soil respiration — completely randomised: warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. Randomised block: soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
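As an illustration, both assignment schemes can be sketched in a few lines of Python (the treatment names and block labels are invented for the example):

```python
import random

TREATMENTS = ["none", "low", "high"]  # e.g. levels of phone use (illustrative)

def completely_randomised(subjects, rng):
    """Every subject is assigned a treatment at random, ignoring any grouping."""
    return {s: rng.choice(TREATMENTS) for s in subjects}

def randomised_block(subjects_by_block, rng):
    """Subjects are first grouped (e.g. by age band), then randomly
    assigned to treatments separately within each block."""
    assignment = {}
    for members in subjects_by_block.values():
        members = list(members)
        rng.shuffle(members)  # random order within the block
        for i, subject in enumerate(members):
            assignment[subject] = TREATMENTS[i % len(TREATMENTS)]
    return assignment

rng = random.Random(0)
blocks = {"18-30": ["a", "b", "c"], "31-50": ["d", "e", "f"]}
blocked = randomised_block(blocks, rng)
```

The block version guarantees that every treatment appears in every block, so a shared characteristic such as age cannot end up concentrated in one treatment group.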

Sometimes randomisation isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

  • Phone use and sleep — between-subjects: subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. Within-subjects: subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised.
  • Temperature and soil respiration — between-subjects: warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. Within-subjects: every plot receives each warming treatment (1, 3, 5, 8, and 10 °C above ambient temperature) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised.
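Counterbalancing treatment orders can be sketched like this (an illustrative sketch; stricter designs often use a Latin square rather than cycling through all permutations):

```python
import random
from itertools import permutations

def counterbalanced_orders(subjects, conditions, seed=0):
    """Assign each subject one of the possible condition orders,
    cycling through all orders so each order is used equally often."""
    orders = list(permutations(conditions))
    rng = random.Random(seed)
    rng.shuffle(orders)  # randomise which subject gets which order
    return {s: orders[i % len(orders)] for i, s in enumerate(subjects)}

schedule = counterbalanced_orders(range(12), ["none", "low", "high"])
```

With 12 subjects and three conditions (six possible orders), each order is used exactly twice, so practice and fatigue effects are spread evenly across conditions.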

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations.

In the sleep experiment, for example, you could operationalise hours of sleep in two ways:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Experimental design is a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.


Principles of Experimental Design

  • First Online: 16 April 2021


  • Hans-Michael Kaltenbach 4  

Part of the book series: Statistics for Biology and Health ((SBH))


We introduce the statistical design of experiments and put the topic into the larger context of scientific experimentation. We give a non-technical discussion of some key ideas of experimental design, including the role of randomization, replication, and the basic idea of blocking for increasing precision and power. We also take a more high-level view and consider the construct, internal and external validities of an experiment, and the corresponding tools that experimental design offers to achieve them.



Author information

Authors and affiliations.

Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland

Hans-Michael Kaltenbach


Kaltenbach, HM. (2021). Principles of Experimental Design. In: Statistical Design and Analysis of Biological Experiments. Statistics for Biology and Health. Springer, Cham. https://doi.org/10.1007/978-3-030-69641-2_1


Experimental Design: Types, Examples & Methods

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.

Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group, not the control group.

The researcher must decide how he/she will allocate their sample to the different experimental groups.  For example, if there are 10 participants, will all 10 participants participate in both groups (e.g., repeated measures), or will the participants be split in half and take part in only one group each?

Three types of experimental designs are commonly used:

1. Independent Measures

Independent measures design, also known as between-groups , is an experimental design where different participants are used in each condition of the independent variable.  This means that each condition of the experiment includes a different group of participants.

This should be done by random allocation, ensuring that each participant has an equal chance of being assigned to each group.

Independent measures involve using two separate groups of participants, one in each condition. For example:


  • Con : More people are needed than with the repeated measures design (i.e., more time-consuming).
  • Pro : Avoids order effects (such as practice or fatigue) as people participate in one condition only.  If a person is involved in several conditions, they may become bored, tired, and fed up by the time they come to the second condition or become wise to the requirements of the experiment!
  • Con : Differences between participants in the groups may affect results, for example, variations in age, gender, or social background.  These differences are known as participant variables (i.e., a type of extraneous variable ).
  • Control : After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).

2. Repeated Measures Design

Repeated Measures design is an experimental design where the same participants participate in each independent variable condition.  This means that each experiment condition includes the same group of participants.

Repeated Measures design is also known as within-groups or within-subjects design .

  • Pro : As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
  • Con : There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior.  Performance in the second condition may be better because the participants know what to do (i.e., practice effect).  Or their performance might be worse in the second condition because they are tired (i.e., fatigue effect). This limitation can be controlled using counterbalancing.
  • Pro : Fewer people are needed as they participate in all conditions (i.e., saves time).
  • Control : To combat order effects, the researcher counter-balances the order of the conditions for the participants.  Alternating the order in which participants perform in different conditions of an experiment.

Counterbalancing

Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”

We expect the participants to learn better in “no noise” because of order effects, such as practice. However, a researcher can control for order effects using counterbalancing.

The sample would be split into two groups. For example, group 1 does condition ‘A’ then ‘B,’ and group 2 does ‘B’ then ‘A.’ This is to eliminate order effects.

Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.
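The split described above can be sketched in a few lines of Python (illustrative only; the participant labels and the `counterbalance` helper are hypothetical, not from the original article):

```python
import random

def counterbalance(participants, seed=None):
    """Split participants into two groups with opposite condition orders.

    Half complete condition A then B; the other half B then A, so that
    practice and fatigue effects fall equally on both conditions.
    """
    rng = random.Random(seed)           # seeded for reproducibility
    shuffled = participants[:]
    rng.shuffle(shuffled)               # random split, not first-come-first-served
    half = len(shuffled) // 2
    return {"A_then_B": shuffled[:half], "B_then_A": shuffled[half:]}

orders = counterbalance(["p1", "p2", "p3", "p4", "p5", "p6"], seed=42)
print(orders["A_then_B"], orders["B_then_A"])
```

Each participant still experiences both conditions; only the order varies between the two halves of the sample.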


3. Matched Pairs Design

A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each pair is then randomly assigned to the experimental group and the other to the control group .


  • Pro : Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
  • Pro : It avoids order effects, so counterbalancing is not necessary.
  • Con : Very time-consuming trying to find closely matched pairs.
  • Con : Impossible to match people exactly unless they are identical twins!
  • Con : If one participant drops out, you lose two participants’ data (the whole pair).
  • Control : Members of each pair should be randomly assigned to conditions. However, this does not solve all of these problems.
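The matching-then-random-assignment procedure can be sketched in Python (an illustration under assumptions: the `matched_pairs` helper and the pretest scores are hypothetical):

```python
import random

def matched_pairs(scores, seed=None):
    """Pair participants with similar scores, then randomly assign one
    member of each pair to the experimental group, the other to control.

    `scores` maps participant -> matching variable (e.g., a pretest score).
    Assumes an even number of participants.
    """
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)      # order by the matching variable
    experimental, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]        # adjacent ranks form a pair
        rng.shuffle(pair)                        # random assignment within the pair
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

exp, ctl = matched_pairs({"a": 10, "b": 12, "c": 30, "d": 29}, seed=1)
print(exp, ctl)
```

Note that the random step happens *within* each pair, which is exactly the control step listed above.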

Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:

1. Independent measures / between-groups : Different participants are used in each condition of the independent variable.

2. Repeated measures /within groups : The same participants take part in each condition of the independent variable.

3. Matched pairs : Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.

Learning Check

Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.

1 . To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.

The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.

2 . To assess the difference in reading comprehension between 7- and 9-year-olds, a researcher recruited each group from a local primary school. They were given the same passage of text to read and then asked a series of questions to assess their understanding.

3 . To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.

At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.

4 . To assess the effect of organization on recall, a researcher randomly assigned student volunteers to two conditions.

Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.

Experiment Terminology

Ecological validity.

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.
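A minimal Python sketch of random allocation to conditions (illustrative; the condition names echo the noise example earlier, and the `random_allocation` helper is hypothetical):

```python
import random

def random_allocation(participants, conditions, seed=None):
    """Randomly allocate participants to conditions in (near-)equal numbers.

    Shuffling first gives every participant an equal chance of landing in
    any condition, which limits allocation bias and participant variables.
    """
    rng = random.Random(seed)
    pool = participants[:]
    rng.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)  # deal out round-robin
    return groups

groups = random_allocation(list(range(12)), ["loud noise", "no noise"], seed=7)
print({c: len(members) for c, members in groups.items()})
```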

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.



Experimental Design Research - Approaches, Perspectives, Applications

Mario Storga

This book presents a new, multidisciplinary perspective on and paradigm for integrative experimental design research. It addresses various perspectives on methods, analysis and overall research approach, and how they can be synthesized to advance understanding of design. It explores the foundations of experimental approaches and their utility in this domain, and brings together analytical approaches to promote an integrated understanding. The book also investigates where these approaches lead to and how they link design research more fully with other disciplines (e.g. psychology, cognition, sociology, computer science, management). Above all, the book emphasizes the integrative nature of design research in terms of the methods, theories, and units of study—from the individual to the organizational level. Although this approach offers many advantages, it has inherently led to a situation in current research practice where methods are diverging and integration between individual, team and organizational understanding is becoming increasingly tenuous, calling for a multidisciplinary and transdisciplinary perspective. Experimental design research thus offers a powerful tool and platform for resolving these challenges. Providing an invaluable resource for the design research community, this book paves the way for the next generation of researchers in the field by bridging methods and methodology. As such, it will especially benefit postgraduate students and researchers in design research, as well as engineering designers.

Related Papers


Dagmar Steffen

Revised and extended version of the Nordes'13 paper "Characteristics and interferences of experiments in science, the arts, and in design research" Published in Artifact Special Issue on "Experiments in Design Research", Vol 3., No. 2 (2014)

Ashley Hall

Experimentation is often considered a constituent part of the design process and designing in general, yet its exact function and identity is open to a wide variety of interpretations. For example experimentation is often confused with differentiation or iteration. The relationship between scientific and industrial design experimentation; rationale, process, objectives and methods have rarely been considered. This paper will reflect on the role of experimentation in industrial design and compare its activity to that in the scientific world. Through case studies (from the new Experimental design strand at the Royal College of Art and Imperial College’s Innovation Design Engineering dual MA/MSc) a methodology of balanced mapping and exploration will be discussed. Scientific experiments have to be repeatable in order to be valid, yet in the design world this is often impossible due to the tackling of ‘wicked’ problems that change the very nature of the problem itself, preventing repetition. In practical terms designers value a unique ‘one-off’ approach helping to guarantee the innovation and originality of their solution. At the heart of this enquiry is the difference between design experimentation: designing using experimental methods and experimental design, a fundamental creative methodology for the foundation of new industrial designs, systems and technologies.

Tobias Mettler

Chris McMahon

This paper presents a broad review of progress in design research in recent years, before making suggestions for some key research challenges that must be resolved if the grand societal challenges of the early 21st century are to be overcome. The review builds from the foundations in systematic and methodological approaches to design developed in the later decades of the 20th century through to recent research presented especially in the International Conference in Engineering Design (ICED) series of conferences. The consolidated research themes of the ICED conferences are presented together with an exploration of topics of particular focus in recent research before describing developments in research methodology and in design theory which form a foundation for current research in the subject. It is proposed that the present status is that a consolidated view of design can be formed based on accumulated recent research results. A suggested curriculum for design based on these is pre...

Journal of the learning …

Allan Collins

The term “design experiments” was introduced in 1992, in articles by Ann Brown (1992) and Allan Collins (1992). Design experiments were developed as a way to carry out formative research to test and refine educational designs based on principles derived from prior research. More recently the term design research has been applied to this kind of work. In this article, we outline the goals of design research and how it is related to other methodologies. We illustrate how design research is carried out with two very different examples. And we provide guidelines for how design research can best be carried out in the future.

Mette Agger Eriksen

swissdesignnetwork, et al. edizioni

Massimo Botta

The evolution of sciences and technologies, and their impact on society, raise new research questions which constantly tend to expand the ways to design research – in terms of topics of interests, approaches, and contaminations –; research questions that can be relevant for design knowledge, practice, and education. Starting from these considerations, the aim of the Fifth Swiss Design Network Symposium is to present an overview on design research in order to outline those theories, methods and practices that influence and reshape the design discipline. The results of the conference, published in this book, show that today design research covers a wide range of topics and acts upon the different levels of the design discipline. In fact, the contents of this book move from the theoretical to the methodological to the practical levels of the discipline and give us different kinds of contributions, from general to specific ones.

Canadian Journal of Communication

Brenda Laurel

In design practice and in design research the term ‘experiment’ is widely used and often misused. To some extent, this can be ascribed to the fact that the experimental method comes close to or partly overlaps the approaches of ‘trial and error’ and ‘reflection-in-action’, as defined by Donald Schön. Nevertheless, these methods or rather approaches differ in regard to their aims, results, and context of application. Based on an investigation in design literature and various case examples from practice-led doctoral research, this paper attempts to highlight the differences between scholarly experiment,‘trial and error’ and ‘reflection-in-action’. The initial point of this investigation is from the perspective of the so-called New Experimentalism: a branch of the philosophy of natural science, and from the work of Ian Hacking that redirected and broadened the traditional conception of experiment. Hence, the role of creative practice in design research will be scrutinized from the perspective of New Experimentalism. The goal is to justify the role of artefacts in practice-led design research and in making and doing (action, intervention) as an experimental practice that contributes to the creation of knowledge and the construction of theory.


Experimental Research Designs: Types, Examples & Methods

busayo.longe

Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research is a classical scientific experiment, similar to those performed in high school science classes.

Imagine taking 2 samples of the same plant and exposing one of them to sunlight, while the other is kept away from sunlight. Let the plant exposed to sunlight be called sample A, while the latter is called sample B.

If, after the duration of the research, we find that sample A grows while sample B dies, even though both are regularly watered and otherwise given the same treatment, we can conclude that sunlight aids growth in similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research, where one or more independent variables are manipulated and applied to one or more dependent variables to measure their effect on the latter. The effect of the independent variables on the dependent variables is usually observed and recorded over some time, to aid researchers in drawing a reasonable conclusion regarding the relationship between these 2 variable types.

The experimental research method is widely used in physical and social sciences, psychology, and education. It is based on the comparison between two or more groups with a straightforward logic, which may, however, be difficult to execute.

Mostly associated with laboratory test procedures, experimental research designs involve collecting quantitative data and performing statistical analysis on it during the research, making this an example of a quantitative research method .
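As a hint of what that statistical analysis can look like, here is a minimal Python sketch comparing two group means with Welch's t statistic (the posttest scores are invented for illustration; this is not from the article):

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the difference between two group means,
    a common analysis for a two-group experimental design."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2   # sample variances
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

# Hypothetical posttest scores for a treatment and a control group.
treatment = [78, 85, 92, 88, 81]
control = [70, 74, 69, 77, 72]
print(round(welch_t(treatment, control), 2))  # ≈ 4.33
```

A large positive t suggests the treatment group outscored the control group by more than chance variation alone would explain; in practice the statistic would be converted to a p-value.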

What are The Types of Experimental Research Design?

The types of experimental research design are determined by the way the researcher assigns subjects to different conditions and groups. There are three types: pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In pre-experimental research design, one group or several dependent groups are observed for the effect of applying an independent variable that is presumed to cause change. It is the simplest form of experimental research design and uses no control group.

Although very practical, pre-experimental research falls short of several true-experimental criteria. The pre-experimental research design is further divided into three types:

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment which was presumed to cause change, making it a posttest study.

  • One-group Pretest-posttest Research Design: 

This research design combines posttest and pretest studies by testing a single group both before and after the treatment is administered: the pretest at the beginning of treatment and the posttest at the end.

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

The word “quasi” means partial, half, or pseudo. Quasi-experimental research therefore bears a resemblance to true experimental research, but is not the same. In quasi-experiments, the participants are not randomly assigned, and as such, they are used in settings where randomization is difficult or impossible.

 This is very common in educational research, where administrators are unwilling to allow the random selection of students for experimental samples.

Some examples of quasi-experimental research designs include the time series, the nonequivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to confirm or refute a hypothesis. It is the most accurate type of experimental design and may be carried out with or without a pretest on at least 2 randomly assigned groups of subjects.

The true experimental research design must contain a control group, a variable that the researcher can manipulate, and random distribution of subjects. The classifications of true experimental design include:

  • The posttest-only Control Group Design: In this design, subjects are randomly selected and assigned to the 2 groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between these groups.
  • The pretest-posttest Control Group Design: For this control group design, subjects are randomly assigned to the 2 groups, both are pretested, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • Solomon four-group Design: This is the combination of the posttest-only and the pretest-posttest control group designs. In this case, the randomly selected subjects are placed into 4 groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.
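The four-way random assignment can be sketched in Python (illustrative; the group labels and the `solomon_four_groups` helper are hypothetical, not from the article):

```python
import random

def solomon_four_groups(participants, seed=None):
    """Assign participants at random to the four Solomon groups:
    two groups are pretested and two are not; within each pair, one
    group receives the treatment and the other serves as the control."""
    rng = random.Random(seed)
    pool = participants[:]
    rng.shuffle(pool)
    labels = [
        "pretest+treatment", "pretest+control",
        "treatment-only", "control-only",
    ]
    # Deal the shuffled pool out to the four groups in turn.
    return {label: pool[i::4] for i, label in enumerate(labels)}

groups = solomon_four_groups(list("abcdefgh"), seed=3)
print({label: len(m) for label, m in groups.items()})
```

Comparing the pretested and non-pretested groups lets the researcher check whether the pretest itself influenced the results.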

Examples of Experimental Research

Experimental research examples are different, depending on the type of experimental research design that is being considered. The most basic example of experimental research is laboratory experiments, which may differ in nature depending on the subject of research.

Administering Exams After The End of Semester

During the semester, students in a class are lectured on particular courses and an exam is administered at the end of the semester. In this case, the students are the subjects or dependent variables while the lectures are the independent variables treated on the subjects.

Only one group of carefully selected subjects is considered in this research, making it a pre-experimental research design example. Notice also that the test is carried out only at the end of the semester, not at the beginning, which makes it easy to conclude that this is a one-shot case study.

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subject is the employee, while the treatment is the training conducted. This is a pretest-posttest control group experimental research example.

Evaluation of Teaching Method

Let us consider an academic institution that wants to evaluate the teaching methods of 2 teachers to determine which is best. Imagine a case where the students assigned to each teacher are carefully selected, perhaps at the personal request of parents or on the basis of ability.

This is a nonequivalent group design example because the samples are not equal. By evaluating the effectiveness of each teacher’s teaching method this way, we may draw a conclusion after a post-test has been carried out.

However, the result may be influenced by factors such as a student’s natural ability: a very smart student will grasp the material more easily than his or her peers irrespective of the method of teaching.

What are the Characteristics of Experimental Research?  

Experimental research contains dependent, independent and extraneous variables. The dependent variables are the variables being measured and are sometimes called the subject of the research.

The independent variables are the experimental treatments being exerted on the dependent variables. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where control can be exerted on the extraneous variables, thereby eliminating them. 

Other experiments are carried out in a less controllable setting. The choice of setting used in research depends on the nature of the experiment being carried out.

  • Multivariable

Experimental research may include multiple independent variables, e.g. time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design can be majorly used in physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions on a subject matter. 

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to develop proper treatments for diseases. In most cases, rather than directly using patients as the research subjects, researchers take a sample of bacteria from the patient’s body and treat it with the developed antibacterial agent.

The changes observed during this period are recorded and evaluated to determine the treatment’s effectiveness. This process can be carried out using different experimental research methods.

  • Education: Aside from science subjects like Chemistry and Physics, which involve teaching students how to perform experimental research, it can also be used to improve the standard of an academic institution. This includes testing students’ knowledge on different topics, coming up with better teaching methods, and implementing other programs that will aid student learning.
  • Human Behavior: Social scientists most often use experimental research to test human behavior. For example, consider 2 people randomly chosen to be the subjects of social interaction research, where one person is placed in a room without human interaction for 1 year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behavior at the end of the experiment.

  • UI/UX: During the product development phase, one of the major aims of the product team is to create a great user experience with the product. Therefore, before launching the final product design, potential users are brought in to interact with the product.

For example, when finding it difficult to choose how to position a button or feature on the app interface, a random sample of product testers are allowed to test the 2 samples and how the button positioning influences the user interaction is recorded.

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error due to its dependency on variable control, which may not be properly implemented. Such errors could undermine the validity of the experiment and the research being conducted.
  • Exerting control over extraneous variables may create unrealistic situations: eliminating real-life variables can result in inaccurate conclusions. It may also tempt researchers to control the variables to suit their personal preferences.
  • It is a time-consuming process. Much time is spent testing dependent variables and waiting for the effects of manipulating the independent variables to manifest.
  • It is expensive. 
  • It is very risky and may have ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient’s death or a deteriorating health condition.
  • Experimental research results are not descriptive.
  • Response bias can also be introduced by the research subjects.
  • Human responses in experimental research can be difficult to measure. 

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, the subjects, who are placed in 2 different environments, are observed throughout the research. No matter what kind of absurd behavior a subject exhibits during this period, their conditions will not be changed.

This may be a very risky thing to do in medical cases because it may lead to death or worse medical conditions.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operational research for learning purposes and sometimes as a tool to estimate the possible outcomes of real research. Some common simulation software packages are Simulink, MATLAB, and Simul8.

Not all kinds of experimental research can be carried out using simulation as a data collection tool . It is very impractical for a lot of laboratory-based research that involves chemical processes.
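As a toy example of simulation as a data collection method, here is a short Python sketch of a single-server queue (all parameters are invented for illustration; a real study would use a dedicated package such as the ones named above):

```python
import random

def simulate_wait_times(arrival_rate, service_time, n_customers, seed=None):
    """Toy single-server queue simulation: a computer model standing in
    for an experiment that would be costly to run in real life."""
    rng = random.Random(seed)
    clock, server_free_at, waits = 0.0, 0.0, []
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)   # next customer arrives
        start = max(clock, server_free_at)       # wait if the server is busy
        waits.append(start - clock)
        server_free_at = start + service_time
    return sum(waits) / len(waits)               # mean waiting time

avg_wait = simulate_wait_times(arrival_rate=1.0, service_time=0.8,
                               n_customers=1000, seed=0)
print(round(avg_wait, 2))
```

Re-running the model under different parameter settings plays the role of running different experimental conditions, at a fraction of the cost.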

3. Surveys

A survey is a tool used to gather relevant data about the characteristics of a population and is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subject.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using surveys . It has relevant features that will aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the environment of the research, including the predictor variable which can be changed. On the other hand, non-experimental research cannot be controlled or manipulated by the researcher at will.

This is because it takes place in a real-life setting, where extraneous variables cannot be eliminated. Therefore, it is more difficult to draw conclusions from non-experimental studies, even though they are much more flexible and allow for a greater range of study fields.

2. The relationship between cause and effect cannot be established in non-experimental research, while it can be in experimental research. This may be because many extraneous variables also influence the changes in the research subject, making it difficult to point to a particular variable as the cause of a particular change.

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, but the same may not be said about experimental research.

Conclusion  

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments, which are a component of experimental research design.

In this research design, one or more subjects or dependent variables are randomly assigned to different treatments (i.e. independent variables manipulated by the researcher), and the results are observed in order to draw conclusions. A unique strength of experimental research is its ability to control the effect of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 





Analysis, Design, and Experimental Validation of a High-Isolation, Low-Cross-Polarization Antenna Array Demonstrator for Software-Defined-Radar Applications


1. Introduction

2. Array Design
2.1. Analytical Techniques
2.1.1. Sidelobe Level
2.1.2. Beamwidth
2.1.3. Bandwidth
2.2. Numerical Results
3. Element Design
3.1. Antenna Structure
3.2. Full-Wave Results
4. Sub-Array Demonstrator
4.1. Full-Wave Results
4.2. Experimental Results

  • Significant PCB curvature has been observed. It is due to the manufacturing process and, at the design frequency, can affect the antenna gain and the S_ij parameters;
  • Possible variations of the dielectric constant with respect to the value given in the material data sheets;
  • Possible interactions between the metallic support and the radiating elements: these metallic parts are very close to the radiating board, and their effect may not be negligible.

5. Conclusions

Author Contributions, Institutional Review Board Statement, Informed Consent Statement, Data Availability Statement, Acknowledgments, Conflicts of Interest


Table: geometric parameters of the substrates/patch/connectors, slots, and strips (values in mm; parameter symbols not recovered in this extraction).

Share and Cite

Ricciardella, N.; Fuscaldo, W.; Mattei, T.; Fiorello, A.M.; Infante, L.; Galli, A. Analysis, Design, and Experimental Validation of a High-Isolation, Low-Cross-Polarization Antenna Array Demonstrator for Software-Defined-Radar Applications. Appl. Sci. 2024 , 14 , 6015. https://doi.org/10.3390/app14146015


ORIGINAL RESEARCH article

Design and experimental research of air-assisted nozzle for pesticide application in orchard.

Mingxiong Ou*

  • 1 School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
  • 2 Tillage and Pesticide Application Research Center, Chinese Academy of Agriculture Mechanization Sciences Group Co., Ltd., Beijing, China
  • 3 Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing, China

This article reports the design and experimental evaluation of a novel air-assisted nozzle for pesticide application in orchards. The nozzle was designed based on the transverse jet atomization pattern. Performance and deposition experiments were conducted, and mathematical models relating the volume median diameter (D50) and the liquid flow rate to the nozzle design parameters were established. The D50 of the air-assisted nozzle ranged from 52.45 μm to 113.67 μm, and the liquid flow rate ranged from 142.6 ml/min to 1,607.8 ml/min within the designed conditions. These performances meet the requirements of low-volume and ultra-low-volume pesticide application in orchards. The droplet deposition experiment demonstrated that the droplet coverage distribution across layers and columns is relatively uniform, with predicted spray penetration ( SP ) values SP iA , SP iB , and SP iC ( i = 1, 2, 3) of approximately 70%, 60%, and 70%, respectively. Uniform droplet deposition on foliage inside and outside the canopy benefits plant protection and pesticide saving. Compared with the traditional air-assisted nozzle, which adopts a coaxial flow atomization pattern, the atomization efficiency of this nozzle is higher; moreover, its air pressure is considerably lower, and its liquid flow rate considerably greater, than those of the traditional air-assisted nozzle. These results show that this air-assisted nozzle has great potential for orchard pesticide application. The relationship between D50 and nozzle liquid pressure differs from that of traditional air-assisted nozzles because of the different atomization pattern and process. While this article offers an explanation for this relationship, further study of the atomization process and mechanism is needed to improve performance.

1 Introduction

In the last decade, with the growth of the global fruit trade, the high-density orchard model, which is suitable for mechanized management, has been widely adopted worldwide to produce high-yielding, high-quality fruit ( Phuyal et al., 2020 ; Ou et al., 2024 ). Pest control is a crucial aspect of orchard management and plays a significant role in ensuring the safety and quality of agricultural products ( Zuoping et al., 2014 ; Appah et al., 2019 ). In high-density orchards, pesticide use has become essential for producing high-quality fruit. The nozzle is the key component of a pesticide sprayer, and conventional atomizing nozzles are inadequate for the fine-atomization and high-efficiency requirements of orchard pesticide application. Air-assisted sprayers ensure that most fine pesticide droplets deposit on the target surface, helping to eradicate pests and prevent crop damage ( Li et al., 2020a ; Zhang et al., 2022 ). The liquid atomization process in an air-assisted nozzle depends on the collision and friction forces resulting from the air–liquid velocity difference ( Li et al., 2021 ; Wang et al., 2021 ). Nozzle atomization is a complex, multiphase, transient process; it consumes a significant amount of energy to break the liquid into a film or filament at the nozzle outlet. The film or filament is stretched to its breakup point by high-speed air, which creates a large air–liquid velocity difference and finally forms droplets ( Zhao et al., 2019 ). As the air–liquid mass ratio increases, the interaction between the air and liquid phases becomes stronger; this ratio also has a great influence on the droplet size of coaxial air–liquid atomization nozzles ( Broumand et al., 2020 ; Chu et al., 2020 ).
A higher air–liquid ratio can significantly improve the atomization effect of the nozzle, enabling fine atomization and low-volume application; it can also reduce droplet drift and improve deposition uniformity in orchard pesticide application ( Kang et al., 2018 ; Boiko et al., 2019 ; Chen et al., 2020 ). Experimental studies on mathematical models of air-assisted nozzle atomization performance indicated that the air in the nozzle has a significant influence on droplet atomization quality ( Czaczyk, 2012 ; Pizziol et al., 2017 ). Other studies confirmed that coaxial air–liquid air-assisted nozzles can effectively reduce pesticide consumption and environmental pollution ( Patel et al., 2016 , 2017 ). A similar study indicated that the number of liquid pores, the liquid hole diameter, and the stomatal diameter have different effects on the liquid flow rate and air rate of air-assisted atomization nozzles ( Wang et al., 2019 ). An experimental study of fan-assisted nozzles revealed that the distribution uniformity of droplets first decreased and then increased, while the droplet size first increased and then decreased, as liquid pressure increased ( Li et al., 2020b ).

In this study, a novel air-assisted nozzle comprising an air flow part and a nozzle cover is designed. The inner chamber of the air flow part consists of a cylindrical section and a conical section, with the liquid flowing radially into the high-velocity air around the nozzle outlet. The effects of nozzle structure and working parameters on droplet size and liquid flow rate were studied through laboratory experiments. A droplet deposition experiment was conducted to analyze droplet coverage and penetration within an imitated tree canopy. The results provide valuable experience for the development of high-performance air-assisted sprayers.

2 Materials and methods

2.1 Air-assisted nozzle design

The atomization process of an air-assisted nozzle is a complex air–liquid interaction. High-velocity air flows improve the atomization effect of the nozzle and disturb the canopy of fruit trees during orchard pesticide application, improving the deposition rate and reducing pesticide usage ( Salcedo et al., 2019 ; Wang et al., 2022a ). The air-assisted nozzle in this article is designed based on the transverse jet atomization pattern; the nozzle consists of an air flow part and a nozzle cover, as shown in Figure 1 . High-pressure air enters the air flow part through the nozzle inlet and forms a high-velocity transverse airflow at the nozzle outlet, where the liquid is atomized into droplets. During this process the liquid jet takes an annular form as it passes through the gap between the air flow part and the nozzle cover; this annular jet significantly increases the air–liquid contact area compared with the traditional cylindrical jet. To reduce flow losses inside the nozzle and increase the airflow velocity at the nozzle outlet, the inner chamber of the air flow part is divided into a cylindrical section and a conical section; this design minimizes flow losses as the high-pressure air passes through the nozzle and is suitable for multi-nozzle sprayer development. The liquid is delivered from the liquid inlet to the liquid chamber, located between the air flow part and the nozzle cover. It then flows radially into the high-velocity transverse airflow through the gap between the end of the air flow part and the nozzle cover, and is atomized into droplets at the nozzle outlet. The nozzle has a compact structure and a high-efficiency flow pattern.


Figure 1 Structure of the air-assisted atomization nozzle.

Based on the governing equations for frictionless, adiabatic, steady, one-dimensional isentropic compressible flow, under given pressure and temperature conditions the airflow velocity increases when the flow passes through a confined space. The pressure at the nozzle inlet is fixed in the nozzle design; once the environment pressure decreases, the mass flow rate increases. A "choking" phenomenon occurs when the airflow velocity at the nozzle outlet reaches the local speed of sound; at this point, the airflow velocity depends only on the temperature and pressure at the nozzle inlet. The airflow velocity V , nozzle inlet pressure P 1 , and environment pressure P 2 are related by Equations (1) and (2) as follows:

where P 1 is the inlet pressure of the nozzle (MPa); P 2 is the environment pressure (MPa); γ is the adiabatic exponent, taken as 1.4 in the nozzle design; ρ a is the air density (kg/m 3 ); and V is the airflow velocity at the nozzle outlet (m/s). The nozzle inlet air flow can be calculated with Equation (3) as follows.

where Q in is the nozzle inlet air flow (m 3 /h) and ∅ 1 is the nozzle outlet diameter (mm).
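Equations (1)-(3) are not reproduced in this extraction. As a rough check of the design numbers, the sketch below applies the standard isentropic compressible-flow relations that the symbol list suggests; the exact formula choice, the air density, and the example pressures are assumptions, not the paper's equations.

```python
import math

def outlet_velocity(p1_pa, p2_pa, rho_a=1.2, gamma=1.4):
    """Isentropic exit velocity (m/s) for absolute inlet pressure p1_pa
    and environment pressure p2_pa (Pa). Standard compressible-flow
    relation, assumed here in place of the unreproduced Equations (1)-(2)."""
    return math.sqrt(2 * gamma / (gamma - 1) * p1_pa / rho_a
                     * (1 - (p2_pa / p1_pa) ** ((gamma - 1) / gamma)))

def inlet_airflow_m3h(v_ms, d1_mm):
    """Volumetric air flow Q_in (m^3/h) by continuity Q = V * A for a
    circular outlet of diameter d1_mm (mm); a stand-in for Equation (3)."""
    area_m2 = math.pi * (d1_mm / 1000.0 / 2.0) ** 2
    return v_ms * area_m2 * 3600.0

# Example: 0.02 MPa gauge inlet pressure against a standard atmosphere,
# with the 6 mm outlet diameter from the nozzle design in the text.
v = outlet_velocity(p1_pa=0.1213e6, p2_pa=0.1013e6)  # ~190 m/s
q = inlet_airflow_m3h(v, d1_mm=6.0)                  # ~20 m^3/h
```

Under these assumed conditions the outlet flow is close to sonic, consistent with the "choking" discussion above.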

The shape and size of the nozzle cover are determined by the size of the air flow part. The nozzle outlet diameter ∅ 1 is designed in three sizes: 6 mm, 8 mm, and 10 mm; the nozzle inlet diameter is 44 mm, and the shrinking angle of the conical section is 60°. A ring gasket is used to adjust the gap width, which is set to 0.2 mm, 0.4 mm, or 0.8 mm.

2.2 Experiment design of droplet size measurement

2.2.1 Experiment system design

The droplet size describes the atomization performance of the air-assisted nozzle and has a significant effect on the deposition distribution, drift, and deposition rate. The volume median diameter (D50) is used to characterize the atomization performance in this article ( Nogueira Martins et al., 2021 ; Yu et al., 2021 ). The experiment system for measuring droplet size included an air-supplied spray subsystem and a droplet size measurement subsystem, as shown in Figure 2 . The air-supplied spray subsystem comprised a vortex fan, frequency converter, air pressure gauge, air-assisted nozzle, water tank, electric diaphragm pump, battery, and liquid pressure regulator valve. The vortex fan (ASBA HG-2200S) provided high-pressure air flow for the air-assisted nozzle. The frequency converter (model: Instar) adjusted the air flow by changing the rotational speed of the vortex fan, and the air pressure at the nozzle inlet was measured by the air pressure gauge. The electric diaphragm pump (SEAFLO SFDP2), powered by a battery, delivered the liquid (tap water) from the water tank to the air-assisted nozzle, and the liquid pressure regulator valve was used to adjust the liquid flow rate. The droplet size measurement subsystem included a laser particle size analyzer (OMEC DP-02) and a computer, and was used to measure droplet size under indoor test conditions ( Dai et al., 2022 ). The air-assisted nozzle used in the experiment was made of Teflon and manufactured on a CNC (computer numerical control) machine; its surface was very smooth, which helped reduce flow losses. The tap water was clean, and no filter was placed before the liquid inlet during the experiments.
The air-assisted nozzle was fixed on the test bench, and the spray distance was set to 0.8 m according to the nozzle performance pre-test and previous experience. The air-assisted nozzle and the laser beam were set in the same horizontal plane to ensure that the droplets were fully captured by the laser particle size analyzer. Each experiment was repeated three times, and the mean value was taken as the experiment result in the analysis.


Figure 2 Experiment system of the droplet size measurement.

2.2.2 Experiment parameter design

The air–liquid velocity difference is an important factor affecting the atomization effect of a spray nozzle. The nozzle outlet diameter and the air pressure at the nozzle inlet (nozzle air pressure) are significant factors affecting the airflow velocity at the nozzle outlet, while the gap width and the liquid pressure at the liquid inlet (nozzle liquid pressure) are significant factors affecting the liquid velocity during atomization ( Ishimoto et al., 2008 ). A mixed-level (hybrid horizontal) orthogonal experiment was conducted in this article. The nozzle outlet diameter d , gap width w , nozzle air pressure P g , and nozzle liquid pressure P l were the experimental variables, and a mathematical model of D50 based on these variables was established ( Miranda-Fuentes et al., 2018 ). The variable level table and experiment design are shown in Tables 1 and 2 , respectively.


Table 1 Variable level table of droplet size measurement experiment.


Table 2 Orthogonal design scheme of droplet size measurement.
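To illustrate why an orthogonal scheme is used instead of a full factorial, the sketch below contrasts the full 3^4 design with a standard Taguchi L9(3^4) orthogonal array for four three-level factors. The level values for P g and P l are illustrative assumptions; the paper's actual mixed-level scheme is defined in Tables 1 and 2, which are not reproduced here.

```python
from itertools import product

# Assumed factor levels: outlet diameter d (mm) and gap width w (mm)
# come from the text; the pressure levels are hypothetical placeholders.
levels = {
    "d":  [6, 8, 10],
    "w":  [0.2, 0.4, 0.8],
    "Pg": [0.01, 0.02, 0.03],
    "Pl": [0.05, 0.10, 0.15],
}

# Full factorial: 3^4 = 81 runs.
full = list(product(*levels.values()))

# Standard L9(3^4) orthogonal array (rows index levels 0-2 of each
# factor): every pair of columns contains each level pair exactly once.
L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]
runs = [tuple(levels[f][i] for f, i in zip(levels, row)) for row in L9]
```

The orthogonal array covers all pairwise level combinations in 9 runs instead of 81, which is the rationale behind orthogonal experiment schemes such as those in Tables 2 and 4.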

2.3 Experiment design of liquid flow rate measurement

2.3.1 Experiment system design

Liquid flow rate is one of the key parameters of the air-assisted nozzle, and a weighing method was used to measure it in this study. The liquid flow rate measurement system included the air-supplied spray subsystem described above and a liquid flow rate measurement subsystem, as shown in Figure 3 . The measurement subsystem consisted of a droplet collecting device and an electronic balance (ACS-LQ300001); the collecting device comprised a droplet collection bucket, which collected the droplets from the air-assisted nozzle, and a beaker, which received the liquid from the bucket for measurement. After 1 min of steady operation of the experiment system, all droplets were collected in the beaker; the mass of the beaker with liquid was then measured on the electronic balance, and the liquid flow rate was calculated from the result. Each group of experiments was repeated three times, and the average value was taken as the experiment result.
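The weighing-method computation is simple enough to sketch. The masses below are hypothetical readings, not the paper's data; only the procedure (collected mass over a 1-min interval, averaged over three repetitions) follows the text.

```python
def liquid_flow_rate_ml_min(m_full_g, m_empty_g, t_min=1.0, rho_g_ml=1.0):
    """Liquid flow rate (ml/min) from the weighing method: collected
    mass over collection time, converted to volume via density
    (tap water ~ 1.0 g/ml)."""
    return (m_full_g - m_empty_g) / rho_g_ml / t_min

# Three repetitions averaged, as in the protocol; readings are
# hypothetical (beaker + liquid, empty beaker), in grams.
readings = [(612.4, 180.0), (608.9, 180.0), (610.5, 180.0)]
rates = [liquid_flow_rate_ml_min(full, empty) for full, empty in readings]
mean_rate = sum(rates) / len(rates)
```

With these example readings the mean rate falls inside the 142.6-1,607.8 ml/min range reported in the abstract.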


Figure 3 Experiment system of the liquid flow rate measurement.

2.3.2 Experiment parameter design

A mixed-level (hybrid horizontal) orthogonal experiment was designed for the air-assisted nozzle performance research. The nozzle outlet diameter d , gap width w , and nozzle liquid pressure P l were the experimental variables; the nozzle air pressure P g was set to 0.02 MPa, and a mathematical model of liquid flow rate based on these variables was established ( Miranda-Fuentes et al., 2015 ). The variable level table and experiment design are shown in Tables 3 and 4 , respectively.


Table 3 Variable level table of liquid flow measurement experiment.


Table 4 Orthogonal experiment scheme of liquid flow rate measurement.

2.4 Experiment design of droplet deposition

2.4.1 Experiment system design

To study droplet deposition in the tree canopy, an imitated tree canopy was used in the droplet deposition experiment. The height and radius of the tree were approximately 1.0 m and 0.5 m, respectively, and the average LAI (leaf area index) of the imitated canopy was approximately 5.9. Water-sensitive paper (26 × 76 mm) was used to measure droplet coverage and deposition density, which assess the deposition effect of the air-assisted nozzle ( Nishida et al., 2012 ; Ventura et al., 2018 ). Nine water-sensitive paper layout points were set at different locations in the imitated tree canopy, which was divided into three layers (1, 2, 3) in the vertical direction and three columns (A, B, C) in the horizontal direction ( Wang et al., 2022b ). The layout points were numbered according to layer and column, as shown in Figure 4A .


Figure 4 Droplet deposition experimental system. (A) is the schematic diagram of the droplet deposition experiment system; (B) is the experiment system site picture.

The deposition experiment system included the air-supplied spray subsystem and the imitated tree canopy, as shown in Figure 4B . The nozzle was fixed on the spray test bench, which provided a spraying speed of 0 m/s to 1 m/s; the spraying speed was set to 1 m/s, the spraying distance (D 1 ) was 0.8 m, and the air-assisted nozzle was located at the height of layer 2 ( Dai et al., 2023 ). According to the results of the droplet size and flow rate measurements, the nozzle outlet diameter and gap width were 6 mm and 0.4 mm, respectively, and the air pressure and liquid pressure were 0.02 MPa and 0.05 MPa, respectively. The environment temperature was 20°C ± 2°C, and the ambient humidity was 38% ± 5%.

2.4.2 Experiment data processing method

Each group of experiments was repeated three times, and the average value was taken as the droplet deposition result. After the spraying operation, the water-sensitive papers were scanned (M7628DNA, LENOVO) to obtain 8-bit greyscale images (600 dpi), which were processed with the DepositScan software to calculate droplet coverage ( Zhu et al., 2011 ). The droplet coverage results indicate the droplet deposition inside the imitated tree canopy, which is important for controlling pests and diseases in orchard management; they also provide a useful performance assessment for orchard pesticide sprayer development.

According to the water-sensitive paper layout, C ij represents the droplet coverage of the water-sensitive paper located in layer i and column j , as shown in Figure 4A . To explore the droplet deposition inside the tree canopy, the droplet coverage rate C i is defined as the sum of C ij over j = A, B, C and is calculated according to Equation (4) .

To assess the nozzle performance and the penetration of the droplet-laden air into the tree canopy, spray penetration ( SP ) is defined as the ratio of C ij to C i and is calculated according to Equation (5) ; this result is used to evaluate the penetration of droplet deposition inside the imitated tree canopy ( Li et al., 2022 ).
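Equations (4) and (5) are not reproduced in this extraction, but the definitions in the text are explicit enough to sketch. The coverage values below are hypothetical, chosen only to show the computation.

```python
# C[i][j]: droplet coverage (%) on the water-sensitive paper in layer i,
# column j. These values are hypothetical, for illustration only.
coverage = {
    1: {"A": 12.0, "B": 10.0, "C": 11.5},
    2: {"A": 9.0, "B": 6.0, "C": 8.5},
    3: {"A": 7.0, "B": 5.5, "C": 7.5},
}

# Equation (4), per the text: C_i is the sum of C_ij over j = A, B, C.
layer_total = {i: sum(cols.values()) for i, cols in coverage.items()}

# Equation (5), per the text: SP_ij is the ratio of C_ij to C_i,
# expressed here as a percentage.
SP = {i: {j: 100.0 * c / layer_total[i] for j, c in cols.items()}
      for i, cols in coverage.items()}
```

Note that with this stated definition the SP values within one layer sum to 100% by construction; the abstract reports SP values per column that appear to use a related but differently normalized quantity, so the sketch follows the definition given here in the text.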

3 Results and discussion

3.1 Experiment result of droplet size measurement

The droplet size measurement results and variance analysis are shown in Tables 5 and 6 , respectively. The results demonstrated that D50 ranged between 52.45 μm and 113.67 μm in the experiment.


Table 5 Experiment result of droplet size measurement.


Table 6 Variance analysis of droplet size measurement experiment.

The variation trends of D 50 with the nozzle outlet diameter, gap width, nozzle air pressure, and nozzle liquid pressure are shown in Figures 5A–D , respectively. D 50 increased with the nozzle outlet diameter and decreased with the nozzle air pressure. The gap width showed no significant correlation with D 50 . With increasing liquid pressure, D 50 first decreased and then increased; the minimum value of D 50 observed during the experiments is useful for the air-assisted nozzle design.


Figure 5 The variation trends of D 50 . (A) The relationship between the nozzle outlet diameter and D 50 ; (B) the relationship between the gap width and D 50 ; (C) the relationship between the nozzle air pressure and D 50 ; (D) the relationship between the nozzle liquid pressure and D 50 .

The nozzle outlet diameter, gap width, nozzle air pressure, and nozzle liquid pressure are the most important factors in nozzle development. As the nozzle outlet diameter increased, the airflow velocity at the nozzle outlet decreased, the air–liquid velocity difference decreased, and the interaction intensity between the air and liquid phases decreased accordingly, which caused the droplet size to increase ( Musiu et al., 2019 ). The gap width was adjusted with ring gaskets of different thicknesses during the experiment; as the gap width increased, the liquid flow rate increased accordingly, the atomization effect was reduced, and the droplet size became larger ( Jadhav and Deivanathan, 2020 ). With the increase of nozzle air pressure, the airflow velocity at the nozzle outlet increased, the air–liquid velocity difference increased, the interaction intensity between the air and liquid phases increased accordingly, and the droplet size became smaller ( Balsari et al., 2019 ). As the nozzle liquid pressure increased, D 50 first decreased and then increased. The increase of liquid velocity changes the behavior of the liquid jet in the atomization space: the liquid flows into the nozzle outlet from the annular gap and forms an annular jet along the radial direction. This novel design differs from the conventional air-assisted nozzle used in pesticide application, and so does its atomization performance: the nozzle air pressure of this novel air-assisted nozzle is lower, and its liquid flow rate is greater, than those of the conventional air-assisted nozzle ( Amighi and Ashgriz, 2019 ; Han et al., 2020 ).

Linear regression analysis was conducted based on the droplet size measurement results, and the mathematical model of D 50 was established as shown in Equation (6) ( Liao et al., 2019 ). The linear regression analysis results of D 50 are shown in Table 7 .


Table 7 Linear regression analysis results of D 50 measurement experiment.

where D 50 is the volume median diameter (μm); d is the nozzle outlet diameter (mm); w is the gap width (mm); P g is the nozzle air pressure (MPa); and P l is the nozzle liquid pressure (MPa).

The data in Table 7 show that the R 2 value of the regression equation is 0.835 and the adjusted R 2 value is 0.784. The linear relationship between D 50 and d , w , P g , and P l is significant, and there is a positive correlation between the nozzle outlet diameter and D 50 . Both the nozzle air pressure and the nozzle liquid pressure have significant negative correlations with D 50 , and these two parameters are the most important factors in the nozzle design. The regression coefficient of the gap width is 6.845 (t = 0.614, p = 0.550 > 0.05), indicating that the gap width has a smaller effect on D 50 . These results are largely consistent with the variance analysis, and the mathematical model based on the nozzle outlet diameter, gap width, nozzle air pressure, and nozzle liquid pressure is useful for the development of this novel air-assisted nozzle.
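
The fitting step behind Equation (6) can be sketched with ordinary least squares. The coefficients of the published model are not reproduced in this excerpt, so the "true" coefficients and the data below are synthetic assumptions, chosen only to demonstrate fitting the model form D 50 = b0 + b1·d + b2·w + b3·Pg + b4·Pl; the same procedure applies to the liquid flow rate model of Equation (7).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
d = rng.uniform(2, 8, n)         # nozzle outlet diameter, mm
w = rng.uniform(0.2, 0.6, n)     # gap width, mm
pg = rng.uniform(0.01, 0.05, n)  # nozzle air pressure, MPa
pl = rng.uniform(0.02, 0.10, n)  # nozzle liquid pressure, MPa

# Assumed coefficients for the demo (NOT the paper's values):
true_b = np.array([60.0, 8.0, 6.8, -900.0, -250.0])
X = np.column_stack([np.ones(n), d, w, pg, pl])
d50 = X @ true_b + rng.normal(0, 2.0, n)  # synthetic D50 with measurement noise

# Ordinary least-squares fit of the linear model.
b_hat, *_ = np.linalg.lstsq(X, d50, rcond=None)
r2 = 1 - np.sum((d50 - X @ b_hat) ** 2) / np.sum((d50 - d50.mean()) ** 2)
print(b_hat.round(1), round(r2, 3))  # fitted coefficients and R^2
```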

3.2 Experimental result of the liquid flow rate

The liquid flow rate measurement results and variance analysis are shown in Tables 8 and 9 , respectively. The results show that the liquid flow rate ranged from 142.6 ml/min to 1607.8 ml/min in the experiment.


Table 8 Experimental results of liquid flow rate measurement.


Table 9 Variance analysis table of liquid flow rate measurement experiment.

The variation trends of the liquid flow rate with the nozzle outlet diameter, gap width, and nozzle liquid pressure are shown in Figures 6A–C , respectively. The liquid flow rate increased with the nozzle outlet diameter; it first increased and then remained unchanged as the gap width increased; and it increased gradually with the nozzle liquid pressure.


Figure 6 The variation trends of the liquid flow rate. (A) The relationship between the nozzle outlet diameter and the liquid flow rate; (B) the relationship between the gap width and the liquid flow rate; (C) the relationship between the nozzle liquid pressure and the liquid flow rate.

The relationships between the liquid flow rate and the nozzle outlet diameter, gap width, and nozzle liquid pressure are governed by fluid resistance and friction loss theory. The nozzle outlet diameter and gap width determine the flow cross section of this air-assisted nozzle, and the nozzle liquid pressure provides the original fluid dynamic energy of the liquid ( Wang et al., 2020 ; Xue et al., 2021 ).

Linear regression analysis was conducted based on the liquid flow rate measurement results, and the mathematical model of Q was established as shown in Equation (7) . The linear regression analysis results of Q are shown in Table 10 .


Table 10 Linear regression analysis results of liquid flow measurement experiment.

where Q is the liquid flow rate (ml/min); d is the nozzle outlet diameter (mm); w is the gap width (mm); and P l is the nozzle liquid pressure (MPa).

The data in Table 10 show that the R 2 value of the regression equation is 0.802 and the adjusted R 2 value is 0.760. The linear relationship between Q and d , w , and P l is significant, and the nozzle outlet diameter, gap width, and nozzle liquid pressure all have significant positive correlations with the liquid flow rate. These results are consistent with fluid resistance and friction loss theory, and the mathematical model is useful for the development of this novel air-assisted nozzle.
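
As a consistency check, R² and adjusted R² are linked by R²_adj = 1 − (1 − R²)(n − 1)/(n − p − 1), where n is the number of runs and p the number of predictors. The value n = 18 below is an assumption (the run count is not stated in this excerpt), but it reproduces both reported pairs of values to within rounding.

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 for a linear model with n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# n = 18 is an assumed run count; p = 3 predictors for the Q model (Table 10),
# p = 4 predictors for the D50 model (Table 7).
print(round(adjusted_r2(0.802, n=18, p=3), 3))  # close to the reported 0.760
print(round(adjusted_r2(0.835, n=18, p=4), 3))  # close to the reported 0.784
```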

3.3 Experimental result of the droplet deposition

According to the above experimental results and the requirements of pesticide application in orchards, the parameters of the air-assisted nozzle used in the droplet deposition experiment are as follows: the nozzle outlet diameter is 6 mm, the gap width is 0.4 mm, the nozzle air pressure is 0.02 MPa, and the nozzle liquid pressure is 0.05 MPa.

The droplet deposition experiment was conducted to evaluate droplet deposition under the imitated tree canopy condition. The results of the droplet deposition experiment are shown in Figure 7 . As the distance between the air-assisted nozzle and the water-sensitive paper increased, the droplet coverage decreased significantly in all three layers, and C iA > C iB > C iC ( i = 1, 2, and 3) is clearly observed in the results. The results show that SP 1A , SP 2A , and SP 3A are approximately 69%, 60%, and 67%, respectively; SP 1B , SP 2B , and SP 3B are approximately 28%, 33%, and 29%, respectively; and SP 1C , SP 2C , and SP 3C are approximately 3%, 7%, and 4%, respectively. The average droplet coverage of the water-sensitive paper in column A accounts for approximately 65% of the total droplet coverage; columns B and C account for approximately 30% and 5% of the average droplet coverage, respectively.


Figure 7 The results of droplet coverage and spray penetration.

The droplet coverage decreased because of the increasing spray distance and the canopy obstruction effect. These data indicate that most of the droplet–air jet from the air-assisted nozzle reached columns A and B; the average droplet coverage values in columns A, B, and C are approximately 25%, 12%, and 2%, respectively. A pesticide sprayer normally sprays each row twice, operating from both sides of the tree; according to this operation method in orchards, the predicted values of SP iA , SP iB , and SP iC ( i = 1, 2, and 3) are approximately 70%, 60%, and 70%, and the predicted droplet coverage values in columns A, B, and C are approximately 27%, 24%, and 27%, respectively. These results indicate that the SP results across the columns are good and meet the pesticide penetration requirement in orchards; the droplet coverage distribution across columns is uniform and can be adjusted in actual pesticide application according to the pesticide requirement ( Duga et al., 2015 ; Ferguson et al., 2016 ).
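
The two-sided prediction above follows a mirror-and-add rule: spraying the row from the far side turns column C into that pass's column A, so each column receives the sum of the near-side and mirrored far-side contributions. A sketch using the approximate single-pass column averages from the text:

```python
# Approximate single-pass average coverage per column, in percent (from the text).
single_pass = {"A": 25.0, "B": 12.0, "C": 2.0}

def two_sided(cov: dict) -> dict:
    """Mirror-and-add prediction for spraying the row from both sides:
    A' = A + C, B' = B + B, C' = C + A."""
    return {
        "A": cov["A"] + cov["C"],
        "B": cov["B"] + cov["B"],
        "C": cov["C"] + cov["A"],
    }

print(two_sided(single_pass))  # {'A': 27.0, 'B': 24.0, 'C': 27.0}
```

The result matches the predicted 27%, 24%, and 27% coverage reported for columns A, B, and C.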

Within the same column, the droplet coverage values of the different layers are close to each other. The maximum droplet coverage differences in C iA , C iB , and C iC ( i = 1, 2, and 3) are approximately 4%, 3%, and 1%, respectively. The droplet coverage in layer 2 is greater than in layers 1 and 3 because the nozzle centerline and layer 2 are at the same height, and both the droplet and air velocities near the nozzle centerline are greater than elsewhere. The droplet coverage in layer 3 is slightly greater than in layer 1 owing to gravity. These results indicate that droplet deposition is significantly affected by inertia and the air jet, and that the droplet and air jet distributions of the air-assisted nozzle are sufficiently uniform for orchard pesticide application.

4 Conclusions

This article presented the design and experimental study of a novel air-assisted nozzle for orchard pesticide application. The study established mathematical models of the volume median diameter and the liquid flow rate in terms of the design parameters, providing useful prediction models for nozzle development. The droplet deposition experiment was conducted with an imitated tree canopy to evaluate droplet deposition under realistic orchard conditions, and the results show that this air-assisted nozzle achieves good droplet coverage for pesticide application. According to the pesticide application method in orchards, the predicted droplet coverage distributions across layers and columns are relatively uniform, and the droplet deposition inside the canopy is essentially equal to that outside the canopy. This is beneficial for plant protection and precision pesticide spraying.

The experimental results demonstrate that this air-assisted nozzle has advantages in nozzle air pressure and droplet atomization performance: under the design conditions, its volume median diameter and flow rate are 52.45 μm and 734.3 ml/min, respectively. These characteristics are suitable for orchard sprayer development and can meet low-volume and ultra-low-volume pesticide application requirements. The air pressure of this air-assisted nozzle is only approximately one-fourth of the inlet air pressure of the MaxCharge nozzle developed by the Electrostatic Spray System Company ( Pascuzzi and Cerruto, 2015 ), the D 50 values of the two nozzles are about the same, and the flow rate is approximately three times that of the MaxCharge nozzle; these parameters are important for orchard sprayer development and for increasing operating efficiency. These results reveal that the structural design of this air-assisted nozzle enhances its atomization ability. Specifically, this nozzle uses a transverse-jet atomization pattern instead of the coaxial-flow atomization pattern widely used in traditional air-assisted nozzles, giving it higher atomization efficiency than the traditional design. In addition, a traditional air-assisted nozzle normally injects a tubular liquid jet into a high-velocity airflow for atomization, whereas in this nozzle the liquid flows into the atomization space (the nozzle outlet) along the radial direction as an annular jet; the cross section of the annular jet is larger than that of the tubular jet, which gives this nozzle a greater liquid flow rate. A sprayer using this air-assisted nozzle therefore benefits from these advantages in sprayer design and pesticide application.

The droplet size experiment results demonstrate that there is a minimum volume median diameter as the nozzle liquid pressure increases, whereas the volume median diameter has a monotonically negative relationship with liquid pressure in nozzles using the coaxial-flow atomization pattern. It can be inferred that the annular jet moves toward the center of the nozzle outlet as the liquid pressure increases; once the annular jet reaches the center of the nozzle outlet, the air–liquid velocity difference reaches its maximum and the volume median diameter reaches its minimum. As the nozzle liquid pressure continues to increase, the liquid flows outward after reaching the center of the nozzle outlet, and the air–liquid velocity difference begins to decrease from its maximum. Although some basic theory and experimental results were completed in this study, the atomization mechanism of this air-assisted nozzle with typical pesticides remains unclear, and the droplet deposition experiment with the imitated tree only demonstrated preliminary deposition performance and applicability. Considering that the pesticide formulation influences the atomization mechanism, and that droplet deposition is usually influenced by leaf size, LAI, spraying distance, spraying speed, working parameters, and environmental wind, the nozzle atomization mechanism and deposition performance are complex issues in a real-world operating environment; further studies of atomization and droplet deposition under real-world conditions are needed and will contribute to practical application.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors.

Author contributions

MO: Conceptualization, Data curation, Formal analysis, Supervision, Writing – review & editing. JZ: Data curation, Formal analysis, Investigation, Validation, Writing – original draft, Writing – review & editing. WD: Formal analysis, Investigation, Writing – original draft. MW: Formal analysis, Investigation, Writing – original draft. TG: Data curation, Formal analysis, Investigation, Writing – original draft. WJ: Conceptualization, Methodology, Writing – review & editing. XD: Conceptualization, Data curation, Supervision, Writing – review & editing. TZ: Conceptualization, Formal analysis, Writing – review & editing. SD: Conceptualization, Data curation, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was funded by the Project of Jiangsu Province and Education Ministry Co-Sponsored Synergistic Innovation Center of Modern Agricultural Equipment (XTCX1003) and the Faculty of Agricultural Equipment of Jiangsu University (No. NZXB20210101).

Acknowledgments

The authors thank the Faculty of Agricultural Equipment of Jiangsu University and the High-tech Key Laboratory of Agricultural Equipment and Intelligence of Jiangsu Province for the facilities and support.

Conflict of interest

Author TZ is employed by Chinese Academy of Agriculture Mechanization Sciences Group Co., Ltd., Beijing, China.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Amighi, A., Ashgriz, N. (2019). Global droplet size in liquid jet in a high-temperature and high-pressure crossflow. AIAA Journal 57, 1260–1274. doi: 10.2514/1.J056496


Appah, S., Zhou, H. T., Wang, P., Ou, M. X., Jia, W. D. (2019). Charged monosized droplet behaviour and wetting ability on hydrophobic leaf surfaces depending on surfactant-pesticide concentrate formulation. J. Electrostat. 100. doi: 10.1016/j.elstat.2019.103356

Balsari, P., Grella, M., Marucco, P., Matta, F., Miranda-Fuentes, A. (2019). Assessing the influence of air speed and liquid flow rate on the droplet size and homogeneity in pneumatic spraying. Pest Manage. Sci. 75, 366–379. doi: 10.1002/ps.5120

Boiko, V. M., Nesterov, A. Y., Poplavski, S. V. (2019). Liquid atomization in a high-speed coaxial gas jet. Thermophys. Aeromech. 26, 385–398. doi: 10.1134/S0869864319030077

Broumand, M., Khan, M. S., Yun, S., Hong, Z., Thomson, M. J. (2020). The role of atomization in the spray combustion of a fast pyrolysis bio-oil. Fuel 276. doi: 10.1016/j.fuel.2020.118035

Chen, B., Gao, D., Li, Y., Chen, C., Yuan, X., Wang, Z., et al. (2020). Investigation of the droplet characteristics and size distribution during the collaborative atomization process of a twin-fluid nozzle. Int. J. Adv. Manufact. Technol. 107, 1625–1639. doi: 10.1007/s00170-020-05131-1

Chu, W., Li, X. Q., Tong, Y. H., Ren, Y. J. (2020). Numerical investigation of the effects of gas-liquid ratio on the spray characteristics of liquid-centered swirl coaxial injectors. Acta Astronaut. 175, 204–215. doi: 10.1016/j.actaastro.2020.05.050

Czaczyk, Z. (2012). Influence of air flow dynamics on droplet size in conditions of air-assisted sprayers. Atom. Sprays 22, 275–282. doi: 10.1615/AtomizSpr.v22.i4

Dai, S. Q., Ou, M. X., Du, W. T., Yang, X. J., Dong, X., Jiang, L., et al. (2023). Effects of sprayer speed, spray distance, and nozzle arrangement angle on low-flow air-assisted spray deposition. Front. Plant Sci. 14. doi: 10.3389/fpls.2023.1184244

Dai, S. Q., Zhang, J. Y., Jia, W. D., Ou, M. X., Zhou, H. T., Dong, X., et al. (2022). Experimental study on the droplet size and charge-to-mass ratio of an air-assisted electrostatic nozzle. Agriculture-Basel 12. doi: 10.3390/agriculture12060889


Duga, A. T., Ruysen, K., Dekeyser, D., Nuyttens, D., Bylemans, D., Nicolai, B. M., et al. (2015). Spray deposition profiles in pome fruit trees: Effects of sprayer design, training system and tree canopy characteristics. Crop Prot. 67, 200–213. doi: 10.1016/j.cropro.2014.10.016

Ferguson, J. C., Chechetto, R. G., Hewitt, A. J., Chauhan, B. S., Adkins, S. W., Kruger, G. R., et al. (2016). Assessing the deposition and canopy penetration of nozzles with different spray qualities in an oat ( Avena sativa L.) canopy. Crop Prot. 81, 14–19. doi: 10.1016/j.cropro.2015.11.013

Han, H., Wang, P. F., Li, Y. J., Liu, R. H., Tian, C. (2020). Effect of water supply pressure on atomization characteristics and dust-reduction efficiency of internal mixing air atomizing nozzle. Adv. Powder Technol. 31, 252–268. doi: 10.1016/j.apt.2019.10.017

Ishimoto, J., Ohira, K., Okabayashi, K., Chitose, K. (2008). Integrated numerical prediction of atomization process of liquid hydrogen jet. Cryogenics 48, 238–247. doi: 10.1016/j.cryogenics.2008.03.006

Jadhav, P. A., Deivanathan, R. (2020). Numerical analysis of the effect of air pressure and oil flow rate on droplet size and tool temperature in MQL machining. Mater. Today Proc. 38, 2499–2505. doi: 10.1016/j.matpr.2020.07.518

Kang, Z. T., Li, Q. L., Zhang, J. Q., Cheng, P. (2018). Effects of gas liquid ratio on the atomization characteristics of gas-liquid swirl coaxial injectors. Acta Astronaut. 146, 24–32. doi: 10.1016/j.actaastro.2018.02.026

Li, S. G., Chen, C. C., Wang, Y. X., Kang, F., Li, W. B. (2021). Study on the atomization characteristics of flat fan nozzles for pesticide application at low pressures. Agriculture-Basel 11. doi: 10.3390/agriculture11040309

Li, J., Cui, H. J., Ma, Y. K., Xun, L., Li, Z. Q., Yang, Z., et al. (2020a). Orchard spray study: A prediction model of droplet deposition states on leaf surfaces. Agronomy-Basel 10. doi: 10.3390/agronomy10050747

Li, T., Qi, P., Wang, Z. C., Xu, S. Q., Huang, Z., Han, L., et al. (2022). Evaluation of the effects of airflow distribution patterns on deposit coverage and spray penetration in multi-unit air-assisted sprayer. Agronomy-Basel 12. doi: 10.3390/agronomy12040944

Li, Y. C., Qi, Q. X., Zhang, L., Wang, H. Y. (2020b). Atomization characteristics of a fan air-assisted nozzle used for coal mine dust removal: an experimental study. Energy Sources Part a-Recovery Util. Environ. Effects , 1–17. doi: 10.1080/15567036.2020.1769776

Liao, J., Hewitt, A. J., Wang, P., Luo, X. W., Zang, Y., Zhou, Z. Y., et al. (2019). Development of droplet characteristics prediction models for air induction nozzles based on wind tunnel tests. Int. J. Agric. Biol. Eng. 12, 1–6. doi: 10.25165/j.ijabe.20191206.5014

Miranda-Fuentes, A., Marucco, P., González-Sánchez, E. J., Gil, E., Grella, M., Balsari, P. (2018). Developing strategies to reduce spray drift in pneumatic spraying in vineyards: Assessment of the parameters affecting droplet size in pneumatic spraying. Sci. Total Environ. 616, 805–815. doi: 10.1016/j.scitotenv.2017.10.242

Miranda-Fuentes, A., Rodríguez-Lizana, A., Gil, E., Agüera-Vega, J., Gil-Ribes, J. A. (2015). Influence of liquid-volume and airflow rates on spray application quality and homogeneity in super-intensive olive tree canopies. Sci. Total Environ. 537, 250–259. doi: 10.1016/j.scitotenv.2015.08.012

Musiu, E. M., Qi, L. J., Wu, Y. L. (2019). Evaluation of droplets size distribution and velocity pattern using Computational Fluid Dynamics modelling. Comput. Electron. Agric. 164. doi: 10.1016/j.compag.2019.104886

Nishida, K., Ishii, M., Tsushima, S., Hirai, S. (2012). Detection of water vapor in cathode gas diffusion layer of polymer electrolyte fuel cell using water sensitive paper. J. Power Sources 199, 155–160. doi: 10.1016/j.jpowsour.2011.10.026

Nogueira Martins, R., Moraes, H. M. F. e., Freitas, M. A. M. d., Lima, A. d. C., Furtado Junior, M. R. (2021). Effect of nozzle type and pressure on spray droplet characteristics. Idesia (Arica) 39, 101–107.


Ou, M. X., Wang, M., Zhang, J. Y., Gu, Y. Y., Jia, W. D., Dai, S. Q. (2024). Analysis and experiment research on droplet coverage and deposition measurement with capacitive sensor. Comput. Electron. Agric. 218. doi: 10.1016/j.compag.2024.108743

Pascuzzi, S., Cerruto, E. (2015). Spray deposition in “tendone” vineyards when using a pneumatic electrostatic sprayer. Crop Prot. 68, 1–11. doi: 10.1016/j.cropro.2014.11.006

Patel, M. K., Praveen, B., Sahoo, H. K., Patel, B., Kumar, A., Singh, M., et al. (2017). An advance air-induced air-assisted electrostatic nozzle with enhanced performance. Comput. Electron. Agric. 135, 280–288. doi: 10.1016/j.compag.2017.02.010

Patel, M. K., Sahoo, H. K., Nayak, M. K., Ghanshyam, C. (2016). Plausibility of variable coverage high range spraying: Experimental studies of an externally air-assisted electrostatic nozzle. Comput. Electron. Agric. 127, 641–651. doi: 10.1016/j.compag.2016.07.021

Phuyal, D., Nogueira, T. A. R., Jani, A. D., Kadyampakeni, D. M., Morgan, K. T., Ferrarezi, R. S. (2020). ‘Ray ruby’ Grapefruit affected by huanglongbing II. Planting density, soil, and foliar nutrient management. HortScience 55, 1420–1432. doi: 10.21273/HORTSCI15255-20

Pizziol, B., Costa, M., Pañao, M. O., Silva, A. (2017). Multiple impinging jet air-assisted atomization. Exp. Therm. Fluid Sci. 96, 303–310. doi: 10.1016/j.expthermflusci.2018.03.019

Salcedo, R., Pons, P., Llop, J., Zaragoza, T., Campos, J., Ortega, P., et al. (2019). Dynamic evaluation of airflow stream generated by a reverse system of an axial fan sprayer using 3D-ultrasonic anemometers. Effect of canopy structure. Comput. Electron. Agric. 163. doi: 10.1016/j.compag.2019.06.006

Ventura, F., Guerra, E., Altobelli, F. (2018). Orchards lai estimation through the radiation extinction coefficient. Agrometeorol. Rural Dev. Policies 28–32. doi: 10.6092/unibo/amsacta/5886

Wang, K. X., Fan, X. J., Liu, F. Q., Liu, C. X., Lu, H. T., Xu, G. (2021). Experimental studies on fuel spray characteristics of pressure-swirl atomizer and air-blast atomizer. J. Thermal Sci. 30, 729–741. doi: 10.1007/s11630-021-1320-z

Wang, P. F., Han, H., Liu, R. H., Gao, R. Z., Wu, G. G. (2020). Effect of outlet diameter on atomization characteristics and dust reduction performance of X-swirl pressure nozzle. Process Saf. Environ. Prot. 137, 340–351. doi: 10.1016/j.psep.2020.02.036

Wang, C. L., Liu, Y., Zhang, Z. H., Han, L., Li, Y. F., Zhang, H., et al. (2022a). Spray performance evaluation of a six-rotor unmanned aerial vehicle sprayer for pesticide application using an orchard operation mode in apple orchards. Pest Manage. Sci. 78, 2449–2466. doi: 10.1002/ps.6875

Wang, P. F., Shi, Y. J., Zhang, L. Y., Li, Y. J. (2019). Effect of structural parameters on atomization characteristics and dust reduction performance of internal-mixing air-assisted atomizer nozzle. Process Saf. Environ. Prot. 128, 316–328. doi: 10.1016/j.psep.2019.06.014

Wang, S. L., Wang, W., Lei, X. H., Wang, S. S., Li, X., Norton, T. (2022b). Canopy segmentation method for determining the spray deposition rate in orchards. Agronomy-Basel 12. doi: 10.3390/agronomy12051195

Xue, R., Ruan, Y. X., Liu, X. F., Zhong, X., Chen, L., Hou, Y. (2021). Internal and external flow characteristics of multi-nozzle spray with liquid nitrogen. Cryogenics 114. doi: 10.1016/j.cryogenics.2021.103255

Yu, S. H., Yin, B. F., Bi, Q. S., Jia, H. K., Chen, C. (2021). The influence of elliptical and circular orifices on the transverse jet characteristics at supersonic crossflow. Acta Astronaut. 185, 124–131. doi: 10.1016/j.actaastro.2021.04.038

Zhang, C., Zhou, H. P., Xu, L. Y., Ru, Y., Ju, H., Chen, Q. (2022). Wind tunnel study of the changes in drag and morphology of three fruit tree species during air-assisted spraying. Biosyst. Eng. 218, 153–162. doi: 10.1016/j.biosystemseng.2022.04.003

Zhao, F., Ren, Z., Xu, B., Zhang, H., Fu, C. (2019). Brief overview of effervescent atomizer application. J. Phys. Conf. Ser . 1300, 012043. doi: 10.1088/1742-6596/1300/1/012043

Zhu, H. P., Salyani, M., Fox, R. D. (2011). A portable scanning system for evaluation of spray deposit distribution. Comput. Electron. Agric. 76, 38–43. doi: 10.1016/j.compag.2011.01.003

Zuoping, Z., Sha, Y., Fen, L. S., Puhui, J., Xiaoying, W., Yan-an, T. (2014). Effects of chemical fertilizer combined with organic manure on Fuji apple quality, yield and soil fertility in apple orchard on the Loess Plateau of China. Int. J. Agric. Biol. Eng. 7, 45–55.

Keywords: air-assisted nozzle, sprayer, droplet size, liquid flow rate, droplet coverage

Citation: Ou M, Zhang J, Du W, Wu M, Gao T, Jia W, Dong X, Zhang T and Ding S (2024) Design and experimental research of air-assisted nozzle for pesticide application in orchard. Front. Plant Sci. 15:1405530. doi: 10.3389/fpls.2024.1405530

Received: 23 March 2024; Accepted: 17 June 2024; Published: 09 July 2024.


Copyright © 2024 Ou, Zhang, Du, Wu, Gao, Jia, Dong, Zhang and Ding. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Mingxiong Ou, [email protected] ; Xiang Dong, [email protected]



Computer Science > Networking and Internet Architecture

Title: Network Sovereignty: A Novel Metric and Its Application on Network Design

Abstract: Most network planning problems in literature consider metrics such as cost, availability, and other technology-aware attributes. However, network operators now face new challenges in designing their networks to minimize their dependencies on manufacturers. A low dependency is associated with higher network robustness in case one or more manufacturers fail due to erroneous component design, geopolitical banning of manufacturers, or other reasons discussed in this work. Our work discusses network sovereignty, i.e., the ability to operate a network without dependencies on a particular manufacturer while minimizing the impact of simultaneous manufacturer failure(s). Network sovereignty is considered by solving the manufacturer assignment problem in the network such that robustness is maximized. The three main contributions of this work are (i) the discussion of network sovereignty as a special attribute of dependability, (ii) the introduction of a novel metric -- the Path Set Diversity (PSD) score to measure network sovereignty, and (iii) the introduction of Naga, an ILP formulation to maximize network sovereignty using the PSD score. We compare Naga's performance with centrality metrics-based heuristics and an availability-based optimization. Our work aims to be the foundation to guide network operators in increasing their network sovereignty.
Comments: Submitted to IEEE Transactions on Reliability
Subjects: Networking and Internet Architecture (cs.NI)



Open access | Published: 09 July 2024

Application of machine learning models for property prediction to targeted protein degraders

  • Giulia Peteani 1 ,
  • Minh Tam Davide Huynh   ORCID: orcid.org/0009-0008-8320-7686 1 ,
  • Grégori Gerebtzoff   ORCID: orcid.org/0000-0002-7580-4463 1 &
  • Raquel Rodríguez-Pérez   ORCID: orcid.org/0000-0002-2992-3402 1  

Nature Communications, volume 15, Article number: 5764 (2024)


  • Computational biology and bioinformatics
  • Drug discovery

Machine learning (ML) systems can model quantitative structure-property relationships (QSPR) using existing experimental data and make property predictions for new molecules. With the advent of modalities such as targeted protein degraders (TPD), the applicability of QSPR models is questioned and ML usage in TPD-centric projects remains limited. Herein, ML models are developed and evaluated for TPDs’ property predictions, including passive permeability, metabolic clearance, cytochrome P450 inhibition, plasma protein binding, and lipophilicity. Interestingly, performance on TPDs is comparable to that of other modalities. Predictions for glues and heterobifunctionals often yield lower and higher errors, respectively. For permeability, CYP3A4 inhibition, and human and rat microsomal clearance, misclassification errors into high and low risk categories are lower than 4% for glues and 15% for heterobifunctionals. For all modalities, misclassification errors range from 0.8% to 8.1%. Investigated transfer learning strategies improve predictions for heterobifunctionals. This is the first comprehensive evaluation of ML for the prediction of absorption, distribution, metabolism, and excretion (ADME) and physicochemical properties of TPD molecules, including heterobifunctional and molecular glue sub-modalities. Taken together, our investigations show that ML-based QSPR models are applicable to TPDs and support ML usage for TPDs’ design, to potentially accelerate drug discovery.


Introduction

Machine learning (ML) models are invaluable tools for predicting the absorption, distribution, metabolism, and excretion (ADME) properties of small molecules 1 , 2 , 3 , 4 . For ADME predictions, ML models relate compound structural information to molecular properties, which is also referred to as quantitative structure-property relationship (QSPR) models 5 . QSPR models and ADME predictions play a pivotal role in drug discovery, assisting in the early identification of lead compounds with favorable pharmacokinetics and reduced potential for toxicity 5 , 6 . By accurately predicting ADME profiles, models can accelerate compound characterization and potentially reduce the costs associated with synthesis and experimental testing 7 , 8 .

ML-based QSPR models can be created using either all available data for a certain property (global models) or smaller data sets relating to a particular discovery project or chemical series (local models) 9 , 10 , 11 , 12 . Local models focus on predicting ADME properties within specific chemical series or compound classes, utilizing specialized knowledge and chemical features relevant to those compounds. In contrast, global ML models capture the complex relationships between molecular structures and ADME properties across the chemical space 3 , 13 . As shown in Di Lascio et al., this broader applicability makes global models more advisable, despite the common intuition that local models might capture series- or project-specific QSPRs more accurately 2 . Therefore, global ADME models should generally be the ones influencing prioritization and selection of lead compounds early in pharmaceutical research. However, global ML models have been predominantly utilized for the prediction of ADME properties of traditional small molecules, and it is still an open question whether they are applicable to more recent drug modalities 14 . With the emergence of targeted protein degradation (TPD) as a promising therapeutic strategy, the development and evaluation of ML models for predicting molecular properties of TPDs has gained attention to assist in degraders’ design 15 , 16 . Recent works, including a pharmaceutical industry perspective paper by Volak et al. 17 , have highlighted the knowledge gap in how ML models perform for ADME property predictions in the TPD space.

TPD agents represent an innovative therapeutic strategy to induce the selective degradation of disease-causing proteins through the intracellular ubiquitin-proteasome system 18 . These agents simultaneously bind the target protein and an E3 ligase, facilitating the recruitment of the protein to the cellular degradation machinery. By modulating previously ‘undruggable’ targets, TPD agents offer new opportunities for therapeutic intervention. Molecular glues and heterobifunctionals are two TPD submodalities. Glues are a class of molecules that directly bind to both the target protein and an E3 ubiquitin ligase, promoting the formation of a ternary complex that facilitates ubiquitination and target degradation. Heterobifunctionals consist of a ligand to the target, a ligand to the E3 ubiquitin ligase, and a linker 19 . The resulting complex brings the target protein into proximity with the E3 ligase, leading to its ubiquitination and degradation 18 . The structural features, mechanisms of action, and target engagement modes that distinguish TPDs from traditional small molecules challenge ML models, whose performance and generalizability for TPD agents remain uncertain 16 . It has been recently reported that computational approaches might not be suited to TPD molecules and that data limitations prevent ML-based QSPR modeling assessments 16 . Therefore, it is not yet known whether reliable predictions are possible or whether, in contrast, TPDs might be outside the applicability domain of ADME models 14 , 16 . Given the promising clinical results of recent TPDs, investigating whether ML-based QSPR models can leverage existing data to effectively predict ADME properties of TPDs is of utmost interest for pharmaceutical research 16 .

Herein, ML models are generated and evaluated for the prediction of ADME properties of TPD compounds, with special focus on glues and heterobifunctionals. By leveraging global models’ generalization capability, the potential of ML models to capture the QSPR across diverse compound classes and physicochemical and ADME properties is investigated. Predictive performance for glues and heterobifunctionals is compared and put in the context of all compound modalities. Moreover, transfer learning techniques are adopted with the aim of refining ML models and improving predictions on TPD compounds.

Assay data and global models

A data set with twenty-five ADME endpoints was utilized for ML modeling. ML-based property predictions were carried out with global QSPR models, which learn from all available data for a given ADME property or assay 2 . Here, four multi-task (MT) global models were generated to predict related properties or assays 2 , 20 , 21 . This algorithm was selected because MT learning enables the modeling of multiple properties, assays or, more generally, prediction tasks simultaneously 22 , 23 , 24 . The assays or tasks included in the four global MT models were:

Permeability model (5-task model): Apparent permeability (P app ) from low-efflux MDCK (LE-MDCK) permeability assay (versions 1 and 2), PAMPA and Caco-2 permeability assay, and efflux ratio from MDCK-MDR1 permeability assay.

Clearance model (6-task model): Intrinsic clearance (CL int ) from CYP metabolic stability in liver microsomes assays for rat, human, mouse, dog, cynomolgus monkey, and minipig 20 .

Binding/Lipophilicity model (10-task model): Plasma protein binding (PPB) for rat, human, mouse, dog and cynomolgus monkey, human serum albumin (HSA) binding, microsomal binding, brain binding, and octanol-water partition and distribution coefficients (LogP and LogD).

Cytochrome P450 (CYP) inhibition model (4-task model): time-dependent inhibition of CYP3A4 and reversible inhibition of CYP3A4, CYP2C9, and CYP2D6.

These models are ensembles of a message-passing neural network (MPNN) coupled with a feed-forward deep neural network (DNN) 25 , 26 . More details can be found in the Methods section.
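The multi-task setup can be illustrated with a minimal NumPy sketch (a toy stand-in, not the authors' implementation): a shared trunk maps a molecular representation to a hidden state, one linear head per task produces the property predictions, and missing labels are masked out of the loss so each task trains on whatever data it has. In the actual models the trunk is a learned MPNN rather than the frozen random projection used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 200 "molecules" as 32-dim descriptors, 3 property tasks
# with sparse labels (NaN = not measured), mimicking a multi-task ADME panel.
X = rng.normal(size=(200, 32))
Y = X @ rng.normal(size=(32, 3)) + 0.05 * rng.normal(size=(200, 3))
observed = rng.random((200, 3)) < 0.7
Y = np.where(observed, Y, np.nan)

# Shared trunk (frozen random projection here, standing in for the learned
# MPNN encoder) feeding one linear head per task.
W_trunk = rng.normal(scale=0.1, size=(32, 16))
W_heads = np.zeros((16, 3))
H = X @ W_trunk

def masked_mae(pred, y):
    m = ~np.isnan(y)
    return float(np.mean(np.abs(pred[m] - y[m])))

loss_start = masked_mae(H @ W_heads, Y)

# Gradient descent on the masked squared error: missing labels contribute
# zero gradient, so every task trains only on its observed measurements.
n_obs = int(observed.sum())
for _ in range(500):
    err = np.where(np.isnan(Y), 0.0, H @ W_heads - Y)
    W_heads -= 0.1 * (2.0 / n_obs) * (H.T @ err)

loss_end = masked_mae(H @ W_heads, Y)
```

The masking is the essential ingredient: real ADME panels are sparse, and the shared representation lets data-rich tasks regularize data-poor ones.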

Due to data availability, prospective evaluation was done for a subset of endpoints. Table  1 lists fifteen physicochemical and ADME assays and properties that were considered for models’ evaluation. Experiments for molecules registered until the end of 2021 were used for model generation, whereas performance was evaluated with the most recent ADME experiments, following a temporal validation.

TPDs belonging to the submodalities of glues and heterobifunctionals were identified in the data set. Global models’ performance was assessed for glue and heterobifunctional TPDs separately. Figure  1 reports the number of training and test compounds across all modalities, for heterobifunctionals, and for glues. For all endpoints, TPD compounds constitute less than 6% of the data set. Supplementary Fig.  1 shows the distribution of assay values for each modality.

figure 1

The number of compounds per assay is reported both for the ( A ) training and ( B ) test sets. Shown are the number of compounds across all modalities (green), heterobifunctionals (orange), and glues (blue). Assays are described in Table  1 . Source data are provided as a Source Data file.

Figure  2 characterizes the data set distribution and chemical space for each compound modality. Figure  2A shows the distribution of calculated descriptors used in Lipinski’s rule of five (Ro5) for glues, heterobifunctionals, and the rest of the compounds in the test set. Those calculated descriptors include molecular weight (MW), hydrogen bond acceptors (HBA), hydrogen bond donors (HBD), topological polar surface area (TPSA), calculated LogP (cLogP), and number of rotatable bonds. Heterobifunctional TPDs have a larger molecular weight than the glues and are always beyond the Ro5 (bRo5). The rest of the compounds tested on these ADME assays, which can belong to different drug modalities, have a molecular weight distribution more similar to that of glues. The percentage of compounds bRo5 is 19% for glues, and 34% for the rest of modalities. Since ML models have traditionally been applied to compounds with molecular weight lower than 900 or 1000 Da, and mostly to compounds following the Ro5, one could anticipate that heterobifunctional TPDs might be outside the applicability of those standard ML-based QSPR models. Calculated properties’ distributions (MW, HBA, HBD, TPSA, cLogP, and rotatable bonds) are also reported for the training and test sets in Supplementary Fig.  2 .
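The bRo5 flag used above is a simple violation count over Lipinski's thresholds (MW ≤ 500, cLogP ≤ 5, HBD ≤ 5, HBA ≤ 10). A sketch follows; in practice the descriptors would be computed from structures with a toolkit such as RDKit, and the example treats a single violation as bRo5, which is an illustrative convention (Lipinski's original analysis flagged two or more violations), not necessarily the authors' exact definition.

```python
# Each compound is a dict of precomputed descriptors; in practice these would
# be calculated from structures with a cheminformatics toolkit such as RDKit.
RO5_LIMITS = {"MW": 500.0, "cLogP": 5.0, "HBD": 5, "HBA": 10}

def ro5_violations(desc):
    """Count Lipinski rule-of-five violations for one compound."""
    return sum(desc[k] > limit for k, limit in RO5_LIMITS.items())

def is_bro5(desc, min_violations=1):
    # Convention here: >=1 violation counts as beyond-Ro5 (bRo5).
    # Lipinski's original analysis flagged compounds with >=2 violations.
    return ro5_violations(desc) >= min_violations

# Hypothetical descriptor values for illustration only.
glue = {"MW": 420.5, "cLogP": 3.2, "HBD": 2, "HBA": 7}
heterobifunctional = {"MW": 950.1, "cLogP": 5.8, "HBD": 3, "HBA": 14}

n_violations = ro5_violations(heterobifunctional)  # 3: MW, cLogP, and HBA exceeded
```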

figure 2

A The distributions of molecular weight (MW), number of hydrogen bond acceptors (HBA) and donors (HBD), topological polar surface area (TPSA), calculated LogP (cLogP), and number of rotatable bonds are reported for glues (n glues  = 1851, blue), heterobifunctionals (n heterobifunctionals  = 1064, orange), and all the rest of modalities (n other modalities  = 28886, green). Boxplots show the median (center line), and 1st and 3rd quartiles (Q1 and Q3, respectively). The error bars correspond to the Q1-(1.5*IQR) and Q3 + (1.5*IQR) range (IQR = Inter-Quartile Range). Datapoints below Q1 – (1.5*IQR) or above Q3 + (1.5*IQR) are considered outliers and not shown in the boxplots. B A Uniform Manifold Approximation and Projection (UMAP) based on Tanimoto distance and MACCS keys is shown per modality (glues, heterobifunctionals, and others) and for the test set (n glues  = 1851, n heterobifunctionals  = 1064, n other modalities  = 28886). Source data are provided as a Source Data file.

Figure  2B reports a chemical space representation based on Uniform Manifold Approximation and Projection (UMAP), which shows the distribution of TPDs and compounds from other modalities utilizing Tanimoto as the distance metric and MACCS (Molecular ACCess System) keys 27 as molecular representation. The UMAP illustrates that chemical spaces of TPDs and the rest of the compounds in the test data set only partly overlap, and TPD compounds tend to cluster together. Clusters of heterobifunctional TPDs overlap with glue compounds.
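The UMAP embedding itself is typically computed with the umap-learn package; the Tanimoto (Jaccard) distance it is fed, evaluated on MACCS-style binary keys, reduces to set arithmetic over the on-bits:

```python
def tanimoto_distance(keys_a, keys_b):
    """Tanimoto (Jaccard) distance between two fingerprints given as sets
    of on-bit indices (e.g., the indices of set MACCS keys)."""
    if not keys_a and not keys_b:
        return 0.0                       # two empty fingerprints: identical
    intersection = len(keys_a & keys_b)
    union = len(keys_a | keys_b)
    return 1.0 - intersection / union

# Toy MACCS-style fingerprints: sets of on-bit positions (real MACCS keys
# have 166 defined bits and are computed from structure with a toolkit
# such as RDKit).
fp1 = {3, 17, 42, 90, 120}
fp2 = {3, 17, 42, 95, 130}

d = tanimoto_distance(fp1, fp2)  # 1 - 3/7: three shared bits, seven total
```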

Prediction errors on TPDs and all modalities

First, model performance was assessed for properties with at least five compounds in the test sets. Figure  3A reports the mean absolute error (MAE) for fifteen ADME endpoints, in order of increasing model error (across all modalities). This figure shows the prediction error estimations for glue and heterobifunctional TPDs separately, as well as the average errors for the rest of the compounds. As a control, models’ errors were compared to a baseline predictor. For all test compounds, the baseline model gave a constant prediction value corresponding to the mean property value in the training set. Such baseline prediction was consistently less accurate than ML predictions for any of the modalities. MAE values for the baseline model ranged from 0.28 (for CYP2C9 IC 50 ) to 0.96 (for LogD). In contrast, the largest MAE values for ML models were 0.33 for the test set with all compound modalities (LogD), 0.39 for the heterobifunctionals’ test set (LE-MDCK v2 P app ), and 0.31 for the glues’ test set (rPPB). Therefore, prediction errors were consistently below ~2-fold for glues and all modalities’ compounds. For heterobifunctionals, average prediction errors were smaller than 2.5-fold across all studied ADME properties. The largest differences between average ML errors and baseline predictor errors were observed for lipophilicity (∆MAE = 0.63 for LogD and ∆MAE = 0.57 for LogP). For cynomolgus monkey clearance predictions (CynLM CL int ), there was also a large error difference between ML and baseline predictor (∆MAE = 0.33). For CYP3A4 TDI, ML-based predictions were closer to the control baseline (∆MAE = 0.05), which highlights limited predictive ability for k obs values.

figure 3

Global machine learning (ML) model results are shown for fifteen absorption, distribution, metabolism, and excretion (ADME) assays. Reported are the mean absolute error (MAE) distributions for glues (blue), heterobifunctionals (orange) and all the other compounds (green). Results are reported for the complete test set ( A ) and bootstrap samples ( n  = 1000) (B). In ( A ), global models are compared to a baseline prediction (gray), i.e. mean of the training set. Boxplots in ( B ) show the median (center line), and 1 st and 3 rd quartiles (Q1 and Q3, respectively). The error bars correspond to the Q1-(1.5*IQR) and Q3 + (1.5*IQR) range (IQR = Inter-Quartile Range). Assays are described in Table  1 . Source data are provided as a Source Data file.
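The constant-mean baseline used as a control amounts to predicting the training-set average for every test compound and comparing MAEs; a sketch with illustrative values (not the paper's data):

```python
import numpy as np

def mae(pred, truth):
    """Mean absolute error between predictions and measurements."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(truth))))

# Illustrative log-scale property labels and a stand-in ML model.
rng = np.random.default_rng(1)
y_train = rng.normal(loc=1.0, scale=0.6, size=500)
y_test = rng.normal(loc=1.0, scale=0.6, size=100)
model_pred = y_test + rng.normal(scale=0.2, size=100)   # hypothetical model

# Baseline: the same constant (training-set mean) for every test compound.
baseline_pred = np.full_like(y_test, y_train.mean())

baseline_mae = mae(baseline_pred, y_test)
model_mae = mae(model_pred, y_test)
```

A model only adds value when its MAE beats this baseline, which is why the paper reports both side by side.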

To further assess predictive performance, Fig.  3B shows a distribution of average errors for each endpoint on bootstrap samples ( n  = 1000). This analysis also helps to incorporate the uncertainty of the errors’ estimation due to the small sample size in some of the data sets. Similar trends were observed in Fig.  3A and B. Overall, results show consistency between the ML models’ performance on all modalities and TPDs. Even though some property predictions were less accurate for heterobifunctional TPDs, model errors were not consistently larger than those for other modalities. Perhaps surprisingly, for the majority of the considered ADME assays, glue molecules had predictions with the lowest errors. For the bootstrap results, at least 75% of the glues had predictions with lower errors than the other molecules in the test set (either heterobifunctionals or all modalities) for nine out of fifteen properties (cynoPPB, CYP3A4 IC 50 , LE-MDCK v2 P app , CYP3A4 k obs , RLM CL int , LogP, MLM CL int , HLM CL int , LogD). This is illustrated by the third quartiles of glues’ MAE distributions in Fig.  3B . Depending on the property to predict, models had larger or smaller errors on TPDs or other modalities, but overall results suggest that TPDs are inside the domain of applicability of ML-based QSPR models.
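The bootstrap error distributions can be reproduced in outline: resample the test-set errors with replacement n = 1000 times and summarize the MAE of each resample, as sketched below on illustrative per-compound errors (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative per-compound absolute errors for one assay/modality subset.
abs_errors = rng.exponential(scale=0.25, size=120)

n_boot = 1000
boot_maes = np.empty(n_boot)
for i in range(n_boot):
    # Resample the test compounds with replacement and record that
    # resample's mean absolute error.
    sample = rng.choice(abs_errors, size=abs_errors.size, replace=True)
    boot_maes[i] = sample.mean()

# Quartiles of the bootstrap MAEs summarize the uncertainty of the error
# estimate, as in the boxplots of Fig. 3B.
q1, median, q3 = np.percentile(boot_maes, [25, 50, 75])
```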

Performance evaluation for larger data sets

For the five ADME endpoints with the most data points available, a detailed evaluation was carried out. Apart from the regression predictions, performance was estimated for categorical predictions. A compound with an experimental readout in a medium range could likely be assigned to another category if the experiment was repeated. Therefore, three classes are often utilized to categorize experimental results into high and low risk, while incorporating experimental variability. Similarly, by focusing predictions on the extremes of the distribution, higher agreement with the experimental readout is ensured (higher precision). Thus, medium-range predictions (between the low and high thresholds) were set to ‘inconclusive (medium)’ to avoid making decisions based on those predictions. This approach helps flag low-confidence predictions. Property thresholds are defined in Table  1 .
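The three-class scheme can be sketched as follows: predictions between the low and high thresholds are reported as 'inconclusive (medium)', and misclassification is counted only when a conclusive (low or high) prediction disagrees with the experimental class. The thresholds and values below are placeholders, not those of Table 1:

```python
def categorize(value, low_thr, high_thr):
    """Map a numeric readout or prediction to a three-class label."""
    if value < low_thr:
        return "low"
    if value > high_thr:
        return "high"
    return "medium"

def misclassification_rate(preds, truths, low_thr, high_thr):
    """Fraction of conclusive (low/high) predictions whose class disagrees
    with the experimental class; 'medium' predictions are inconclusive and
    excluded from the error count."""
    errors = conclusive = 0
    for p, t in zip(preds, truths):
        p_cls = categorize(p, low_thr, high_thr)
        if p_cls == "medium":
            continue                      # flagged low-confidence, no call
        conclusive += 1
        if p_cls != categorize(t, low_thr, high_thr):
            errors += 1
    return errors / conclusive if conclusive else 0.0

# Placeholder thresholds and log-scale values, for illustration only.
preds = [0.2, 1.8, 1.0, 0.3, 2.1]
truths = [0.1, 1.9, 1.7, 1.6, 2.3]
rate = misclassification_rate(preds, truths, low_thr=0.5, high_thr=1.5)
```

Here four of the five predictions are conclusive and one of those four is wrong, so the rate is 0.25; the medium prediction is simply not counted.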

Table  2 reports the number of test compounds, MAE values, misclassification errors for the low and high classes, and percentage of inconclusive (medium) predictions. Importantly, LE-MDCK permeability, TDI and reversible inhibition of CYP3A4, and CL int in human and rat liver microsomes were predicted with errors lower than 2-fold (MAE < 0.3). Figure  4 shows the classification predictions for these five selected assays in all modalities, heterobifunctional and glue TPDs. Results show better model performance on glues than on heterobifunctional TPDs. Moreover, misclassification errors were at most 14.5% for heterobifunctionals and always lower than 4% for glues. These results indicate that when models provide a high or low classification prediction, there is a high confidence that it is correct. The percentage of 'inconclusive (medium)' predictions was generally less than 35%, and it was also lower for glues than for heterobifunctional TPDs. Specifically, the percentage of inconclusive (medium) predictions ranged from 2% to 20% for glues, and from 19% to 51% for heterobifunctional TPDs. Supplementary Table  1 reports the class distributions according to the assay values. Experimental measurements that fall into the medium category ranged from 7% (CYP2C9 IC 50 ) to 45% (hPPB) of the compounds. As highlighted above, due to experimental variability, such medium-range experiments could also switch category (to low or high property values) if the assay was repeated 20 .

figure 4

Reported are the percentage of compounds (y-axes) that had a given prediction (x-axes) by the global machine learning (ML) models and five properties. Prediction outputs are high risk, inconclusive (medium) or low risk categories. Results are shown for all modalities (left panel), glues (middle panel), and heterobifunctionals (right panel). Colors indicate the experimental three-class readout. Classification predictions are shown for passive permeability ( A ; LE-MDCK P app ), metabolic clearance in rat liver microsomes ( B ; RLM CL int ) and human liver microsomes ( C ; HLM CL int ), CYP3A4 TDI ( D ; CYP3A4 k obs ) and reversible inhibition ( E ; CYP3A4 IC 50 ). The number of tested compounds were 17960 ( A ), 18322 ( B ), 18420 ( C ), 3270 ( D ) and 4377 ( E ) across all modalities; 1395 (A), 1311 ( B ), 1348 ( C ), 123 ( D ), 128 ( E ) glues; and 863 ( A ), 602 ( B ), 598 ( C ), 388 ( D ), 293 ( E ) heterobifunctionals. Assays are described in Table  1 . Source data are provided as a Source Data file.

Models’ refinement for TPD compounds

Since ADME properties for heterobifunctional compounds were more challenging to predict, model refinements were carried out with the aim of improving ADME predictions. Because data for both model refinement and testing were required, the investigation focused on the five properties previously evaluated: passive permeability (LE-MDCK P app ), metabolic clearance in rat liver microsomes (RLM CL int ) and human liver microsomes (HLM CL int ), CYP3A4 TDI (CYP3A4 k obs ), and reversible inhibition of CYP3A4 (CYP3A4 IC 50 ).

Fine-tuning strategies were investigated. Deep learning algorithms trained in one or more domains can be adapted to a different but related target domain 28 , 29 . This concept of transfer learning for domain adaptation was applied herein to adapt global models to a specific area of the chemical space. Existing global ML models were adapted to the TPD modality using fine-tuning with weights initialization 28 , 30 . Two approaches were investigated: (i) fine-tuning ML models with all new compounds registered in 2022, and (ii) fine-tuning ML models on specific chemistry (heterobifunctional TPDs registered before 2023). These two strategies are schematized in Fig.  5 .

figure 5

Reported are the data splitting settings for global model building and fine-tuning. Global models’ training set is constituted by all compounds registered in the database and measured in assays until 2021 (blue). In strategy 1, all compounds (cpds) registered and measured during 2022 were utilized for model fine-tuning (pink). In strategy 2, heterobifunctional targeted protein degraders (TPDs) that were registered and measured before 2023 were utilized for model fine-tuning (pink). In all three cases, the test set was identical and was composed of heterobifunctional TPDs registered and measured in absorption, distribution, metabolism, and excretion (ADME) assays from 1 st January 2023 until 13 th July 2023.
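Fine-tuning with weight initialization means continuing training from the global model's weights on the refinement subset instead of re-initializing. A minimal sketch with a linear model and gradient descent on toy data (the actual models are MT-GNNs; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def gd_fit(X, y, w_init, lr=0.05, steps=300):
    """Plain gradient descent on squared error, starting from w_init."""
    w = w_init.copy()
    for _ in range(steps):
        w -= lr * (2.0 / len(y)) * X.T @ (X @ w - y)
    return w

# Global chemistry vs. a "TPD-like" subset with a related but shifted QSPR.
w_global_true = rng.normal(size=10)
w_tpd_true = w_global_true + 0.3 * rng.normal(size=10)

X_global = rng.normal(size=(400, 10))
y_global = X_global @ w_global_true
X_tpd = rng.normal(size=(40, 10))          # small refinement set
y_tpd = X_tpd @ w_tpd_true
X_test = rng.normal(size=(60, 10))         # held-out TPD-like test set
y_test = X_test @ w_tpd_true

# 1) Pre-train the "global model"; 2) fine-tune: initialize from its weights
# and continue training on the small TPD subset.
w_pre = gd_fit(X_global, y_global, np.zeros(10))
w_fine = gd_fit(X_tpd, y_tpd, w_pre, lr=0.02, steps=200)

mae_pre = float(np.mean(np.abs(X_test @ w_pre - y_test)))
mae_fine = float(np.mean(np.abs(X_test @ w_fine - y_test)))
```

The pre-trained weights give the fine-tuning run a strong starting point, so even 40 refinement compounds shift the model toward the subset's QSPR.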

For the original MT-GNN models and the two fine-tuning approaches, Supplementary Fig.  3 reports classification predictions for permeability (LogP app from LE-MDCK), rat and human CLint (LogCL int ), TDI of CYP3A4 (Logk obs ), and reversible inhibition of CYP3A4 (pIC 50 ). Moreover, Fig.  6 shows the average regression errors (MAE values) for the same models and assays. For all ADME endpoints except permeability, model fine-tuning with heterobifunctional compounds yielded equivalent or lower prediction errors than using all new data. In contrast, permeability predictions were more accurate when the MT-GNN model was fine-tuned with all new data (across all modalities) instead of heterobifunctional TPDs only. Interestingly, for all evaluated properties, fine-tuned models consistently led to lower prediction errors compared to the original MT-GNN models.

figure 6

Reported are mean absolute error (MAE) values for two fine-tuning strategies: (i) on new data (yellow) and (ii) only heterobifunctional data (purple), as well as the original (red) global machine learning (ML) models. Shown are bootstrapping results ( n  = 1000) for heterobifunctional TPD compounds, and five assays. Boxplots show the median (center line), and 1 st and 3 rd quartiles (Q1 and Q3, respectively). The error bars correspond to the Q1-(1.5*IQR) and Q3 + (1.5*IQR) range (IQR = Inter-Quartile Range). Assays are described in Table  1 . Source data are provided as a Source Data file.

Newer experiments can also be utilized for model retraining, where the model is generated again from scratch. Global MT-GNN models were retrained with compounds registered before 2023 and tested on the same heterobifunctional molecules. Table  3 reports regression and classification prediction performance for the fine-tuned model for heterobifunctional TPDs (fine-tuning strategy 2), and the original and retrained MT-GNN global models. Results show that using the most recent data for modeling consistently decreases prediction errors, both in numerical property predictions and misclassifications. However, fine-tuning with heterobifunctional TPD data yielded the lowest misclassification errors across all assays and risk categories, except for low LE-MDCK permeability values. Moreover, these results indicate that when a prediction is reported by the fine-tuned ML model, it is of high confidence. Errors were lower than 4% for LE-MDCK permeability, TDI of CYP3A4, and rat CL int , lower than 13% for human CL int , and no errors were observed for the reversible inhibition of CYP3A4.

Hence, despite the larger effort of model retraining (more computationally intensive and time-consuming), retraining did not yield performance improvements over fine-tuning. Even though both types of model refinement were successful in improving predictions, using a pre-trained global model and refining predictions with a relevant data set (i.e. the TPD modality) can be a more promising strategy.

Public surrogate data and ML model

Due to the recent emergence of this new therapeutic modality, there is a lack of TPD data in the public domain. This limits the possibility of generating and evaluating data-driven ML models for property prediction, especially applicable to TPDs. To accelerate ML-based QSPR for TPDs, a surrogate data set was generated and used for model building 31 , 32 . Public compound structures were extracted from ChEMBL 33 , ZINC 34 , and PROTAC-DB 35 , as detailed in the Methods section, and annotated with our in-house MT-GNN models’ predictions. This surrogate data set contains ~274,000 compounds with predicted data for twenty-five properties, which were included as tasks in the original MT-GNN models. The same MT-GNN approaches (equivalent architecture and hyperparameters) were trained with the surrogate data to generate new models. The code to generate the models and get predictions is provided as  Supplementary Software .
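The surrogate construction resembles model distillation: public structures are annotated with the in-house models' predictions, and a new model is trained on those predicted labels. In outline, with a toy featurization and a closed-form linear student standing in for the MT-GNN (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def teacher(X):
    """Stand-in for the in-house model predictions used as surrogate labels."""
    return X @ np.array([0.8, -0.5, 0.3, 0.1]) + 0.2

# "Public" compound features without experimental labels.
X_public = rng.normal(size=(1000, 4))

# 1) Annotate the public set with teacher predictions -> surrogate data set.
y_surrogate = teacher(X_public)

# 2) Train a surrogate model on the predicted labels (ridge, closed form).
lam = 1e-3
A = np.hstack([X_public, np.ones((1000, 1))])            # bias column
w = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ y_surrogate)

# The surrogate should closely reproduce the teacher on new compounds.
X_new = rng.normal(size=(200, 4))
pred_teacher = teacher(X_new)
pred_surrogate = np.hstack([X_new, np.ones((200, 1))]) @ w
corr = float(np.corrcoef(pred_teacher, pred_surrogate)[0, 1])
```

The student can only be as good as the teacher's labels, which is why the paper compares surrogate and original model predictions on the same internal test set.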

The quality of the public surrogate ML models was evaluated with the same internal test set, and performance was compared to the original MT-GNN models. Figure  7 shows the MAE values for the fifteen assays under evaluation for the public surrogate models. Prediction errors were estimated for glues, heterobifunctionals, and all modalities independently. As observed with the internal MT-GNN models, property predictions for heterobifunctionals were often associated with larger errors. On the other hand, average performance for glue TPDs was generally similar to that observed across all modalities, and even better for some assays. A control baseline was also included, where the average in the training set (in this case, predictions from the original models) was predicted for all test compounds. Such a baseline often yielded higher prediction errors, but in a few cases ML-based predictions were of equivalent quality. Hence, surrogate models were not always applicable, especially to predict some properties for heterobifunctional TPDs. For instance, predictions of time-dependent inhibition of CYP3A4 (Logk obs values) had MAE values larger than the baseline for heterobifunctional compounds.

figure 7

Public surrogate global machine learning (ML) model results are shown for fifteen assays. Reported are the mean absolute error (MAE) values for glues (blue), heterobifunctionals (orange) and all the other compounds. Assays are described in Table  1 . Source data are provided as a Source Data file.

The applicability of the original global models and surrogate models might not be equivalent due to differences in chemical space coverage and labels’ quality (experiment vs. prediction). However, results suggest that surrogate models’ predictions can be successful for many properties. Across all modalities, original and surrogate models had an average MAE difference of 0.04 log units, ranging from 0.01 (CYP2C9 IC 50 ) to 0.09 (CynLM CL int ) log units. For the prediction of glues’ and heterobifunctionals’ properties, MAE differences were 0.05 and 0.07 on average, respectively. Supplementary Fig.  4 reports the comparison of original and surrogate models’ predictions. Interestingly, despite the presence of some outliers, the correlation between predictions of the original and surrogate MT-GNN models was consistently high across the different assays, ranging from 0.95 to 0.98 (Pearson’s coefficient) and from 0.90 to 0.98 (Spearman’s coefficient).
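Pearson measures the linear agreement of the raw predictions, while Spearman is the Pearson correlation of their ranks; both can be computed directly (illustrative values, not the paper's predictions):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def spearman(a, b):
    """Spearman correlation: Pearson correlation of the ranks."""
    def ranks(x):
        x = np.asarray(x, float)
        order = np.argsort(x)
        r = np.empty(len(x))
        r[order] = np.arange(1, len(x) + 1)
        for v in np.unique(x):            # average ranks over ties
            m = x == v
            r[m] = r[m].mean()
        return r
    return pearson(ranks(a), ranks(b))

# Illustrative original vs. surrogate model predictions for one assay.
original = [0.1, 0.5, 0.9, 1.4, 2.0, 2.2]
surrogate = [0.2, 0.4, 1.0, 1.3, 2.1, 2.4]

r_p = pearson(original, surrogate)
r_s = spearman(original, surrogate)
```

Because the surrogate values here preserve the ordering of the originals exactly, Spearman is 1.0 while Pearson stays slightly below it.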

These results suggest the potential of the surrogate data sets for ML model building and applications for TPDs, and highlight which endpoints are more accurately predicted and can be useful in practice. While the trained surrogate models establish a proof of principle, additional hyperparameter optimizations and algorithms could be tested to improve QSPR models for specific properties and compound sets. Overall, this large surrogate data set with annotated properties, including TPDs, provides new opportunities for ML-based QSPR model developments in the public domain.

Herein, a comprehensive evaluation of ML for the prediction of ADME and physicochemical properties of TPD molecules is presented for the first time, including the heterobifunctional and molecular glue submodalities. Deep learning models were generated, and prediction results showed that ADME properties such as permeability, metabolic intrinsic clearance, CYP inhibition, and lipophilicity can be successfully predicted for TPDs. Interestingly, the lowest prediction errors were obtained for glues, ranging from MAE values of 0.11 (for reversible inhibition of CYP3A4) to 0.28 (for mouse metabolic clearance). Moreover, misclassification errors for high and low risk predictions were between 0% and 3.1%. For permeability, CYP3A4 inhibition, and human and rat microsomal clearance, classification errors ranged from 0% to 14.5% for heterobifunctionals. Our results suggest that predicting ADME properties for heterobifunctionals is more challenging than for glues. Transfer learning strategies were implemented to adapt the domain of ML models and improve TPD predictions. More specifically, fine-tuning of MT-GNN models with heterobifunctionals’ ADME data improved predictive performance across different ADME endpoints. The generation of a surrogate data set based on >270,000 publicly available chemical structures, including TPDs, has also shown the potential of ML-based QSPR model building and applications for TPDs, especially glues. Predictions were highly correlated with the original in-house model predictions and were accurate for relevant endpoints such as LogD or rat metabolic clearance (RLM CL int ).

Taken together, this work indicates that ML-based QSPR models are applicable to the new modality of TPDs, even though they represent a small fraction of the training set, and can be further refined when additional data become available. With increasing TPD data availability, additional modeling strategies could be explored to further refine ADME predictions, and potentially move towards the prediction of other relevant properties from molecular structure. ML-based QSPR models are already influencing the design-make-test-analyze (DMTA) cycle in drug discovery, where only the most promising ideas are synthesized, and informative experiments are carried out. However, the use of ML models for TPDs has remained marginal compared to other traditional modalities. Our findings have implications for pharmaceutical research and should increase the use of ML models for property predictions in TPD programs, potentially accelerating the design of degraders with favorable ADME properties.

Assays’ description

For passive permeability determination, 96-well plate permeable inserts were plated with Madin-Darby Canine Kidney (MDCK) cells and cultured for three days. The test article in dimethyl sulfoxide (DMSO) stock solution (10 mM) was added to Hanks’ balanced salt solution (HBSS) to result in a final concentration of 10 μM. The HBSS buffer contained 0.02% bovine serum albumin (BSA) and 10 mM HEPES. The acceptor compartment was HBSS with 5% BSA and 10 mM HEPES. The assay was run for 120 min, determining the donor concentration at time zero and both the donor and acceptor concentrations after 120 min. The difference between version 1 (v1) and version 2 (v2) of the assay consisted of the addition of BSA 36 .
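The text stops short of the calculation, but apparent permeability is conventionally derived from such measurements as P_app = (dQ/dt) / (A · C0): the rate of compound appearance in the acceptor divided by the insert area and the initial donor concentration. A sketch under that standard convention, with illustrative numbers (the insert area is an assumed typical value, not taken from the text):

```python
def apparent_permeability(acceptor_amount_nmol, time_s, area_cm2,
                          c0_nmol_per_ml):
    """Conventional P_app = (dQ/dt) / (A * C0), returned in cm/s.

    acceptor_amount_nmol: amount appearing in the acceptor over the incubation.
    c0_nmol_per_ml: initial donor concentration (1 mL = 1 cm^3, so the units
    cancel to cm/s).
    """
    dq_dt = acceptor_amount_nmol / time_s            # nmol/s
    return dq_dt / (area_cm2 * c0_nmol_per_ml)       # cm/s

# Illustrative numbers: 10 uM donor (= 10 nmol/mL), 120 min incubation, and
# a 0.11 cm^2 insert (an assumed typical 96-well value).
papp = apparent_permeability(
    acceptor_amount_nmol=0.05, time_s=120 * 60, area_cm2=0.11,
    c0_nmol_per_ml=10.0,
)
papp_1e6 = papp * 1e6                                # in 10^-6 cm/s
```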

CYP metabolic stability in liver microsomes

Microsomal incubations were performed in 384-well PCR plates at 37 °C. Test articles at a concentration of 10 mM in pure DMSO were dispensed by an acoustic dispenser to 25 µL 100 mM phosphate buffer (pH 7.4) containing 2 mM NADPH. This solution (12.5 μL, equilibrated for 10 min at 37 °C) was added to 12.5 μL liver microsomes (1 mg/mL) suspended in 100 mM phosphate buffer. At 0.5, 5, 15, and 30 min, the reactions were terminated by the addition of 10 µL acetonitrile/formic acid (93:7) containing the analytical internal standards (1 μM alprenolol and 1 µM warfarin) and transferred to a new 384-well plate containing 15 µL acetonitrile/formic acid (93:7). The stopped incubations were centrifuged at 5000 x g for 15 min at 4 °C and the supernatants were analyzed by high-performance liquid chromatography–tandem mass spectrometry (LC-MS) to measure the percentage of test article remaining relative to the time-zero incubation and determine the in vitro elimination-rate constant (k mic ). Intrinsic clearance (CL int ) was calculated by dividing k mic by the concentration of microsomal protein.
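The calculation above can be sketched numerically: k mic is the negative slope of a log-linear fit of percent remaining versus time, and CL int is k mic divided by the microsomal protein concentration (0.5 mg/mL final, since 1 mg/mL microsomes are diluted 1:1). The unit conversion to µL/min/mg is an assumption for illustration.

```python
import math

# Sketch of the intrinsic clearance calculation described above: fit the
# in vitro elimination rate constant (k_mic) by log-linear regression of
# percent remaining vs time, then divide by the microsomal protein
# concentration (0.5 mg/mL final here; the uL/min/mg unit conversion is
# an assumption, the paper does not state units at this point).

def fit_kmic(times_min, pct_remaining):
    """Slope of ln(% remaining) vs time; k_mic = -slope, in 1/min."""
    ys = [math.log(p) for p in pct_remaining]
    t_mean = sum(times_min) / len(times_min)
    y_mean = sum(ys) / len(ys)
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_min, ys))
    den = sum((t - t_mean) ** 2 for t in times_min)
    return -num / den

def clint_uL_per_min_per_mg(kmic_per_min, protein_mg_per_mL=0.5):
    # k/protein gives mL/min/mg; multiply by 1000 for uL/min/mg
    return kmic_per_min / protein_mg_per_mL * 1000.0

times = [0.5, 5, 15, 30]                                 # sampling times (min)
pct = [100.0 * math.exp(-0.046 * t) for t in times]      # synthetic data, k = 0.046/min
k = fit_kmic(times, pct)
cl = clint_uL_per_min_per_mg(k)                          # ~92 uL/min/mg
```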

Plasma protein binding (PPB)

Plasma protein binding (PPB) values were mainly determined through equilibrium dialysis. Binding to proteins was measured using rapid equilibrium dialysis (RED device from ThermoFisher). Test articles were dissolved in matrix (plasma, human liver microsomes, or brain homogenate). 300 µL of the matrix solutions were dispensed to the red chamber of a RED device and 500 µL of 100 mM phosphate buffer to the white chamber. The RED device was sealed with a gas-permeable membrane and incubated for 4 h on an orbital shaker (750 rpm) at 37 °C under 5% CO 2 . 50 µL aliquots from both compartments were transferred to 600 µL acetonitrile containing the analytical internal standard (0.2 µM glyburide) and 50 µL buffer or matrix for a matrix match. The samples were centrifuged at 5000 x g for 15 min at 4 °C and the supernatant was analyzed by LC-MS to measure test article and internal standard. The free fraction (fu) was calculated by dividing the area ratio of the receiver compartment by the area ratio of the donor compartment. For large molecular weight compounds, PPB was measured using ultracentrifugation (UC). Test articles (5 µM) were added to 1000 µL plasma and incubated for 10 min at 37 °C in a glass vial. For the determination of the total concentration, three 50 µL aliquots were added to a 96 deep-well plate pre-filled with 600 µL acetonitrile/water (9/1) containing the analytical internal standard (0.2 µM glyburide) and 50 µL phosphate buffer. For the free fraction, an aliquot of 700 µL was centrifuged (Beckman UC Optima Max-XP) at 436,000 x g for 5 h at 37 °C. At the end of the centrifugation, three 50 µL aliquots of the supernatant were carefully removed and added to the 96 deep-well plate pre-filled with acetonitrile containing the internal standard and 50 µL blank plasma for a matrix match. The 96 deep-well plate was shaken for 10 min at 300 rpm and stored overnight in a freezer at −20 °C to help protein precipitation.
The next day, the 96 deep-well plate was centrifuged at 4500 rpm for 1 h at 4 °C. Supernatant (50 µL) was transferred into a 384-well plate with 30 µL water. The samples were analyzed by LC-MS for the measurement of test article and internal standard. High-throughput dialysis (HTD) was used as a second alternative method for strong binders (>99% bound). Test articles (5 µM) were added to 700 µL plasma and incubated for 10 min at 37 °C. For the determination of the total concentration, three 50 µL aliquots were added to a 96 deep-well plate pre-filled with 600 µL acetonitrile containing the analytical internal standard (0.2 µM glyburide) and 50 µL phosphate buffer. For the free fraction, an aliquot of 100 µL was dialyzed against 100 mM phosphate buffer at pH 7.4 for 6 h in the HTD96b device (HTDialysis LLC). At the end of the incubation, three 50 µL aliquots of the plasma (buffer) compartment were removed and added to the 96 deep-well plate pre-filled with acetonitrile containing the internal standard and 50 µL blank buffer (plasma) for a matrix match. The 96 deep-well plate was shaken for 10 min at 300 rpm and stored overnight in a freezer at −20 °C to help protein precipitation. The next day, the 96 deep-well plate was centrifuged at 4500 rpm for 1 h at 4 °C. Supernatant (50 µL) was transferred into a 384-well plate with 30 µL water. The samples were analyzed by LC-MS for the measurement of test article and internal standard. A calibration curve was used to define the LLOQ.

Most of the utilized data comes from RED devices but, when available, HTD or UC data was utilized instead (i.e., some >99% qualifiers were replaced). Specifically, 3–6% and 1-3% of the data was generated with HTD and UC, respectively.
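The free-fraction calculation stated above (receiver area ratio divided by donor area ratio) reduces to a few lines; the peak-area numbers below are illustrative, not measured values.

```python
# Sketch of the free-fraction calculation from the equilibrium dialysis
# read-out described above: fu is the analyte/internal-standard peak
# area ratio in the buffer (receiver) compartment divided by that in the
# plasma (donor) compartment. All peak areas below are made up for
# illustration.

def fraction_unbound(receiver_area, receiver_is_area,
                     donor_area, donor_is_area):
    receiver_ratio = receiver_area / receiver_is_area
    donor_ratio = donor_area / donor_is_area
    return receiver_ratio / donor_ratio

fu = fraction_unbound(1200, 50000, 96000, 48000)  # 0.024 / 2.0 = 0.012
pct_bound = 100.0 * (1.0 - fu)                    # 98.8% bound
```

Compounds with >99% bound from this read-out were remeasured with HTD or UC, as described above.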

Lipophilicity

The 1-octanol/water partitioning coefficient (LogP) was determined using a miniaturized shake-flask equilibrium method. Prior to starting the experiment, the two phases were pre-saturated, so “water-saturated 1-octanol” and “1-octanol-saturated water” were used. Samples were initially dissolved in DMSO as a 10 mM stock. The samples and an internal standard were dispensed into a 1 mL deep-well plate, and the DMSO was evaporated before dissolution in 1-octanol at a target concentration of 150 µM by shaking at 1000 rpm for 8 h. The pH 7.4 buffer was added with a phase ratio K of 1 (where K = V water /V octanol ), and the samples were then shaken for four hours at 1000 rpm. The deep-well plate was centrifuged at 3000 rpm prior to phase separation. A x10 dilution of the aqueous phase and a x1000 dilution of the octanol phase were prepared and quantified by LC-HRMS against an internal standard (dexamethasone) with a known LogD of 1.9, using the following equation:

$$\log D_{\text{sample}}=\log D_{\text{IS}}+\log_{10}\!\left(\frac{A_{\text{oct,sample}}/A_{\text{oct,IS}}}{A_{\text{aq,sample}}/A_{\text{aq,IS}}}\right)$$

where \(A\) denotes the LC-HRMS peak area in the octanol or aqueous phase for the sample or the internal standard (IS); the x10 and x1000 dilution factors cancel because sample and internal standard share the same phase dilutions.
This protocol was adapted from Low et al. 37 .

CYP inhibition

CYP3A4 time-dependent inhibition (TDI)

The TDI assay was utilized to determine the first order inactivation rate (k obs ) values. Test articles were dispensed to 96-well plates, and human liver microsomes supplemented with NADPH were added to initiate the pre-incubation. Residual CYP3A4 activity was determined after 0, 7, 16 and 32 min by the addition of midazolam (including d4-1-hydroxy-midazolam as internal standard) and incubated for six additional minutes before adding acetonitrile. Supernatants were analyzed for the CYP3A4 selective metabolite 1-hydroxymidazolam and d4-1-hydroxymidazolam using LC-MS. TDI CYP3A enzyme activity was calculated using normalized area ratios of 1-hydroxymidazolam to internal standard and plotted over the pre-incubation time. A one parameter fit using a range of 80% and a background of 20% was utilized to determine k obs . The percentage of reversible inhibition was calculated by the area ratio at 0 min (pre-incubation) in relation to the area ratio of the control with DMSO only. In cases of strong reversible inhibition ( > 50%), k obs values were not calculated.
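The one-parameter fit described above (80% range, 20% background) corresponds to the model activity(t) = 20 + 80·exp(−k obs ·t). The sketch below fits k obs by linearizing this model; the specific optimizer is not stated in the paper, so the closed-form least-squares step is an assumption.

```python
import math

# Sketch of the one-parameter k_obs fit described above: residual CYP3A4
# activity is modeled as activity(t) = 20 + 80*exp(-k_obs*t), i.e., a
# fixed 20% background and 80% range, leaving k_obs as the only free
# parameter. The linearization through the origin used here is an
# assumption; the paper does not name its fitting routine.

def fit_kobs(preincubation_min, activity_pct):
    ts, ys = [], []
    for t, a in zip(preincubation_min, activity_pct):
        if t > 0 and a > 20.0:             # usable, background-corrected points
            ts.append(t)
            ys.append(math.log((a - 20.0) / 80.0))
    # least squares for y = -k * t (line through the origin)
    return -sum(t * y for t, y in zip(ts, ys)) / sum(t * t for t in ts)

times = [0, 7, 16, 32]                                          # pre-incubation (min)
activity = [20.0 + 80.0 * math.exp(-0.05 * t) for t in times]   # synthetic, k_obs = 0.05/min
k_obs = fit_kobs(times, activity)
```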

Reversible inhibition of CYP3A4, CYP2C9, and CYP2D6

The formation rate of enzyme-specific metabolites from midazolam (CYP3A4), diclofenac (CYP2C9), and bufuralol (CYP2D6) in human liver microsomes was utilized. Substrates, internal standards, and test compounds were dispensed by acoustic dispensing to a 384-well microplate. Human liver microsomes supplemented with NADPH were added to start the incubation. Plates were immediately transferred to an incubator. Incubations were stopped by the addition of acetonitrile/formic acid (93:7) and the supernatant was analyzed by LC-MS for the enzyme-specific metabolites and internal standards. The area ratios of test compounds were normalized to the average area ratio of DMSO (100% activity) and an inhibitor cocktail (0% activity) to determine the IC 50 (test compound concentration causing an inhibition of 50%) using a dose-response model with a two-parameter fit in which 100% and 0% activity were constrained.
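With top and bottom constrained to 100% and 0%, the dose-response model has two free parameters: the IC 50 and the Hill slope. The sketch below stands in for the actual fitting routine (which the paper does not specify) with a coarse log-spaced grid search; concentrations and data are synthetic.

```python
# Sketch of the constrained two-parameter IC50 fit described above:
# percent activity is modeled as 100 / (1 + (C/IC50)^h) with top = 100%
# and bottom = 0% fixed, leaving IC50 and the Hill slope h free. The
# grid-search minimizer is an illustrative stand-in for the real
# (unspecified) fitting routine.

def activity_model(conc, ic50, hill):
    return 100.0 / (1.0 + (conc / ic50) ** hill)

def fit_ic50(concs, activities):
    best = None
    for i in range(-300, 301):              # IC50 grid: 10^-3 .. 10^3 uM
        ic50 = 10.0 ** (i / 100.0)
        for h10 in range(5, 31):            # Hill slope grid: 0.5 .. 3.0
            hill = h10 / 10.0
            sse = sum((a - activity_model(c, ic50, hill)) ** 2
                      for c, a in zip(concs, activities))
            if best is None or sse < best[0]:
                best = (sse, ic50, hill)
    return best[1], best[2]

concs = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]                 # uM, illustrative
obs = [activity_model(c, 2.0, 1.0) for c in concs]       # synthetic, IC50 = 2 uM
ic50, hill = fit_ic50(concs, obs)
```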

Data quality

To deliver the best possible data quality, different processes and acceptance criteria were considered. For instance, in the LE-MDCK and PPB assays, data were rejected if the recovery rate was too low ( < 60%). Moreover, to minimize unspecific binding issues, enzymatic incubations are performed with one well per data point. Therefore, the (bound) compound is expected to be extracted completely once the incubation is stopped (high content of organic solvent), in contrast to serial sampling approaches. This does not account for the free fraction in the incubation but increases the probability of obtaining consistent data. The LE-MDCK assay protocol was adapted for outside rule-of-five molecules 36 . In CL int and CYP inhibition assays, non-specific binding is less problematic since protein is present in the incubation medium. Moreover, F u,mic is measured to correct CL int for microsomal binding. For CYP inhibition, the presence of protein decreases the non-specific binding to labware. If non-specific binding interferes too much with the assay, the data do not fit the model and no IC 50 is reported.

Data sets for modeling

ADME data from twenty-five assays were extracted from the Novartis database and pre-processed, including apparent permeability (P app ) from two versions of the low-efflux Madin-Darby canine kidney cell line (MDCK) permeability assay, parallel artificial membrane permeability assay (PAMPA), Caco-2 permeability assay, efflux ratio from the MDCK-multidrug resistance protein 1 (MDCK-MDR1) permeability assay, intrinsic clearance (CL int ) from CYP metabolic stability in liver microsomes assays for rat, human, mouse, dog, cynomolgus monkey, and minipig, plasma protein binding (PPB) for rat, human, mouse, dog, and cynomolgus monkey, human serum albumin (HSA) binding, microsomal binding, brain binding, octanol-water partition (LogP) and distribution (LogD) coefficients, time-dependent inhibition (TDI) of CYP3A4 (inactivation rate, k obs ), and reversible inhibition of CYP3A4, CYP2C9, and CYP2D6 (half-inhibitory constant, IC 50 ). Experimental data were aggregated (geometric mean) when multiple measurements were available for the same compound and assay endpoint. Moreover, values outside the dynamic range of the assays were excluded, and qualified values (‘<’/‘>’) were discarded for permeability, PPB, LogP, and LogD. PPB values were transformed to fraction unbound (F u ), and IC 50 values from CYP reversible inhibition assays were converted to pIC 50 with a negative logarithmic transformation. Logarithmic transformations were applied to the rest of the assay endpoints, except LogP and LogD. All above-mentioned assays were utilized for model training, but only a fraction of them were used for model evaluation due to data availability. For instance, some assays are requested less often (e.g., monkey CL int compared to rat CL int ) or were deprecated in favor of a newer version (e.g., the LE-MDCK version 2 assay) or other technologies (e.g., Caco-2 was deprecated). Data set statistics are discussed and reported below.
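The transformations above can be sketched with small helpers; function names are illustrative (not from the paper), and the assumption that IC 50 values are in µM before the pIC 50 conversion is ours.

```python
import math

# Sketch of the aggregation and transformations described above, with
# illustrative helper names: replicates are aggregated by geometric
# mean, percent-bound PPB values become fraction unbound, and
# reversible-inhibition IC50 values (assumed to be in uM here) become
# pIC50 via a negative log10 after conversion to molar.

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def ppb_to_fu(percent_bound):
    return 1.0 - percent_bound / 100.0

def ic50_uM_to_pIC50(ic50_uM):
    return -math.log10(ic50_uM * 1e-6)     # convert uM to M first

papp = geometric_mean([1.0, 100.0])        # two replicates -> 10.0
fu = ppb_to_fu(95.0)                       # 95% bound -> Fu = 0.05
pic50 = ic50_uM_to_pIC50(1.0)              # 1 uM -> pIC50 = 6.0
```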

Global QSPR models’ description

Four multi-task graph neural network (MT-GNN) global models were generated and evaluated herein: Permeability (5-task model), Clearance (6-task model), Binding/Lipophilicity (10-task model), and CYP inhibition (4-task model). The models were ensembles of a message-passing neural network (MPNN) followed by a feed-forward deep neural network (DNN) 25 , 26 . These DNNs facilitate MT learning through the consideration of multiple output neurons (one per task). To enable MT model training with sparse labels, a masked loss function was utilized, and missing values were not considered for backpropagation 22 . Previous investigations indicated that, especially for sparse experimental data, MT learning can provide an advantage compared to single-task models 38 . For all models, the rectified linear unit (ReLU) was the activation function, a batch size of 50 was used, and the models were trained with early stopping (using a 10% validation set obtained with a scaffold split). The learning rate was increased linearly from an initial value of 0.0001 to a maximum of 0.001 and then decayed exponentially back to 0.0001. Mean aggregation was applied to convert atomic vectors into molecular vectors. Supplementary Table  2 reports details about each model’s architecture. The models evaluated herein are available for ADME assay predictions internally at Novartis. Prior to selecting MT-GNN as the modeling approach, a variety of molecular features, ML methods, and hyperparameters were benchmarked 2 , 20 .
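The masked loss idea above can be illustrated in a few lines: per-compound label vectors are sparse, and missing entries are simply excluded from the squared-error sum, so they contribute no gradient. This is a pure-Python stand-in for the actual MT-GNN training code, using `None` to mark a missing task label.

```python
# Sketch of the masked multi-task loss mentioned above: each compound
# has one label slot per task, most of which are missing (None) because
# not every assay was run on every compound. Missing entries are
# skipped, so only observed labels drive the loss (and hence the
# backpropagated gradient).

def masked_mse(predictions, labels):
    """Mean squared error over observed (non-None) labels only."""
    se, n = 0.0, 0
    for pred_row, label_row in zip(predictions, labels):
        for p, y in zip(pred_row, label_row):
            if y is not None:              # mask: skip missing task labels
                se += (p - y) ** 2
                n += 1
    return se / n

preds = [[1.0, 2.0, 0.5], [0.0, 1.0, 2.0]]       # 2 compounds x 3 tasks
labels = [[1.0, None, 1.5], [None, 2.0, None]]   # sparse experimental labels
loss = masked_mse(preds, labels)                 # (0 + 1 + 1) / 3
```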

Prediction tasks

Table  1 reports the fifteen prediction tasks that were evaluated, including the assay, measured property, and the MT-GNN model in which they were included. Numerical assay thresholds were used to categorize experimental read-outs into ‘high risk’ and ‘low risk’ classes 1 , with a ‘medium’ range in between. Compound optimization accounts for multiple properties, which are measured with assays of varying resolution (different experimental errors) 7 . Therefore, this assay risk categorization defined by assay experts accounts for experimental variability 20 and facilitates decisions during multiparameter optimization. Following this three-class concept, property predictions were also converted to a three-output classification (‘low risk’, ‘medium’, ‘high risk’) using the same thresholds utilized for experimental read-outs, which are reported in Table  1 . The percentage of compounds belonging to each category is reported in Supplementary Table  1 . When applying such assay thresholds to ML outputs, predictions in the ‘medium’ category can be disregarded. By only considering ‘high risk’ and ‘low risk’ predictions for decision-making, the ML models’ precision improves. As in other works 20 , 21 , 39 , predictions in the medium range can also be termed ‘inconclusive (medium)’ predictions, which ideally are not a large fraction. This regression-based classification approach was previously proposed 20 , 40 .
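The regression-to-class conversion can be sketched as a thresholding function. The thresholds below are placeholders, not the assay cut-offs from Table 1, and the direction convention (higher value = better, as for permeability) is one of the two possible orientations.

```python
# Sketch of the regression-based classification described above: a
# predicted property value is mapped to 'low risk', 'inconclusive
# (medium)', or 'high risk' using the same numerical thresholds applied
# to experimental read-outs. Thresholds here are placeholders, and the
# "higher is better" orientation (e.g., permeability) is assumed.

def risk_class(predicted_value, low_risk_above, high_risk_below):
    if predicted_value >= low_risk_above:
        return "low risk"
    if predicted_value <= high_risk_below:
        return "high risk"
    return "inconclusive (medium)"

calls = [risk_class(v, low_risk_above=10.0, high_risk_below=2.0)
         for v in (15.0, 5.0, 1.0)]
# -> ['low risk', 'inconclusive (medium)', 'high risk']
```

Only the 'low risk' and 'high risk' calls would be used for decision-making; the middle call is treated as inconclusive.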

Data splitting

ML models were evaluated prospectively (with newly registered and measured molecules) 2 . Such an evaluation scenario is commonly referred to as temporal validation or time-split 41 . All MT-GNN global models were trained with compounds registered until the end of 2021 and evaluated with compounds registered from 1 st January 2022 until 13 th July 2023. Global models were evaluated on glue and heterobifunctional TPDs, and predictive performance on these two TPD modalities was compared to performance across all modalities. For the prediction tasks under evaluation, Fig.  1 reports the number of training and test set compounds for each property and modality. The Permeability model was trained on 206,347 compounds, which included 2,732 heterobifunctionals and 2,673 glues. Of those compounds, 20,041 had LE-MDCK v2 P app measurements, including 1,608 heterobifunctionals and 1,404 glues. The Clearance , Binding/Lipophilicity , and CYP inhibition models were generated with 223,025, 92,464, and 65,701 compounds, respectively. Supplementary Table  3 reports the number of training compounds for all the tasks included in the global models.
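The time-split itself is a simple partition by registration date, as sketched below with illustrative (id, date) records; the cutoff matches the dates stated above.

```python
from datetime import date

# Sketch of the time-split evaluation described above: compounds
# registered up to the end of 2021 form the training set, and compounds
# registered from 1 January 2022 onward form the prospective test set.
# The (id, registration date) records are illustrative.

def time_split(records, cutoff=date(2021, 12, 31)):
    train = [r for r in records if r[1] <= cutoff]
    test = [r for r in records if r[1] > cutoff]
    return train, test

records = [
    ("CPD-1", date(2020, 5, 1)),
    ("CPD-2", date(2021, 12, 31)),
    ("CPD-3", date(2022, 1, 1)),
    ("CPD-4", date(2023, 7, 13)),
]
train, test = time_split(records)   # 2 training, 2 prospective test compounds
```

Unlike a random split, this guarantees that every test compound postdates the whole training set, mimicking real prospective use.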

Performance metrics

For regression models, performance was estimated with the mean absolute error (MAE):

$$\text{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|y_{i}-\hat{y}_{i}\right|$$

where \(y\) : experimental value, \(\hat{y}\) : prediction, and \(n\) : number of compounds.

Classification models were evaluated with the average precision on the low and high classes. Precision is the percentage of compounds that were predicted ‘high’ (‘low’) and indeed had a ‘high’ (‘low’) property value:

$$\text{Precision}=\frac{TP}{TP+FP}$$

where TP: true positives, and FP: false positives.

Misclassification or error rates were also calculated as the percentage of compounds that were predicted to have a ‘high’ (‘low’) property value but had a ‘low’ (‘high’) measured value. Finally, the percentage of inconclusive (medium) predictions is computed as the fraction of molecules with a predicted property value in the ‘medium’ range. For those compounds, no ‘high’ or ‘low’ property prediction is given.
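The three metrics can be computed from paired (predicted, measured) class labels, as sketched below. Note one simplification: the paper averages precision over the 'high' and 'low' classes, whereas this sketch pools all conclusive predictions for brevity.

```python
# Sketch of the classification metrics described above, from paired
# (predicted class, measured class) labels where predictions may also
# be 'medium' (inconclusive). Precision here is pooled over conclusive
# predictions; the paper averages per-class precision instead, so this
# is a simplification.

def classification_metrics(predicted, measured):
    conclusive = [(p, m) for p, m in zip(predicted, measured) if p != "medium"]
    correct = sum(1 for p, m in conclusive if p == m)
    wrong = len(conclusive) - correct
    return {
        "precision_pct": 100.0 * correct / len(conclusive),
        "misclassification_pct": 100.0 * wrong / len(conclusive),
        "inconclusive_pct": 100.0 * (len(predicted) - len(conclusive)) / len(predicted),
    }

pred = ["high", "low", "medium", "high", "low"]
meas = ["high", "low", "low", "low", "low"]
m = classification_metrics(pred, meas)
# precision 75%, misclassification 25%, inconclusive 20%
```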

Models’ refinement

Some prediction tasks with margin for improvement were identified and, when data availability allowed, model refinement was carried out. Specifically, three MT-GNN models were optimized with new data, aiming to improve LE-MDCK P app (permeability model); RLM CL int and HLM CL int (clearance model); and CYP3A4 k obs and CYP3A4 IC 50 (CYP model) predictions.

Transfer learning was adopted with the purpose of optimizing model parameters for heterobifunctional TPD compounds. Instead of focusing on transferring knowledge to previously unseen tasks, which is more common in the field 42 , 43 , 44 , transfer learning was applied under the paradigm of domain adaptation 29 , 45 . The transfer learning approach utilized was model fine-tuning with weights initialization, where the weights of the original model (pre-trained model) were adjusted with new data 28 , 46 . Two fine-tuning strategies were investigated and are illustrated in Fig.  5 .

Strategy 1 (New data). Global ML models were fine-tuned with all new data generated during the previous year. Here, the model was fine-tuned with compounds registered and measured in 2022.

Strategy 2 (TPD-specific data). The original global ML model was fine-tuned with heterobifunctional TPDs’ data (registered and measured before 2023).

Both strategies were evaluated on 328 heterobifunctional TPDs synthesized and measured in 2023. Supplementary Table  4 reports the number of compounds in the training, fine-tuning, and test sets.
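The weight-initialization idea behind both strategies can be illustrated with a toy model: a model trained on a large "global" set initializes training on a small modality-specific set, instead of training from scratch. The one-parameter linear model and gradient-descent settings below are purely illustrative stand-ins for the MT-GNN.

```python
# Toy illustration of fine-tuning with weight initialization as
# described above: pretrain on a large "all modalities" set, then
# continue training from those weights on a small TPD-specific set.
# A one-parameter linear model (y = w*x) stands in for the MT-GNN.

def train(w0, data, lr=0.01, steps=200):
    """Full-batch gradient descent on MSE for the model y = w * x."""
    w = w0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

global_data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]  # "all modalities": slope 2
tpd_data = [(1.0, 3.0), (2.0, 6.0)]                    # "heterobifunctionals": slope 3

w_pretrained = train(0.0, global_data)       # converges to ~2.0
w_finetuned = train(w_pretrained, tpd_data)  # adapts toward ~3.0
```

Strategy 1 corresponds to fine-tuning on all recent data; Strategy 2 restricts the fine-tuning set to the heterobifunctional subset, as in the second call above.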

Public structures and surrogate data set

A data set of public structures was gathered from ChEMBL 33 , ZINC 34 , and PROTAC-DB 35 . The ChEMBL and ZINC data sets were prepared according to Rodriguez-Perez and Bajorath 47 . Structures from the three data sources were standardized, including removal of explicit hydrogens, metal disconnection, molecule normalization, reionization, and stereochemistry assignment, with the RDKit module rdMolStandardize , and canonicalized 48 . After such pre-processing, the data set was composed of 273,706 molecules from ChEMBL (70,465), ZINC (199,972), and PROTAC-DB (3,269). To obtain physicochemical and ADME property values for this large set of molecules, the internal Novartis ML models were utilized. Twenty-five compound properties were predicted for each of the molecules to generate a surrogate data set. This surrogate data set is provided as Supplementary Data.

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

Source data are provided with this paper. The data used to generate some models in this study ( novartis_tpd_NatureCommunications2024 ) are proprietary to Novartis. These data are not publicly available due to intellectual property restrictions. The surrogate data set with publicly available structures (including TPDs) and predicted properties is given as  Supplementary Data .

Code availability

Supplementary Software is provided with this publication to build the four MT-GNN global models and obtain property predictions.

References

Aleksić, S., Seeliger, D. & Brown, J. B. ADMET predictability at Boehringer Ingelheim: state-of-the-art, and do bigger datasets or algorithms make a difference? Mol. Inform. 41, 2100113 (2022).


Di Lascio, E., Gerebtzoff, G. & Rodríguez-Pérez, R. Systematic evaluation of local and global machine learning models for the prediction of ADME properties. Mol. Pharmaceutics 20 , 1758–1767 (2023).

Grebner, C. et al. Application of deep neural network models in drug discovery programs. ChemMedChem 16 , 3772–3786 (2021).


Bhhatarai, B., Walters, W. P., Hop, C. E. C. A., Lanza, G. & Ekins, S. Opportunities and challenges using artificial intelligence in ADME/Tox. Nat. Mater. 18 , 418–422 (2019).


Ferreira, L. L. G. & Andricopulo, A. D. ADMET modeling approaches in drug discovery. Drug Discov. Today 24 , 1157–1165 (2019).

Göller, A. H. et al. Bayer’s in silico ADMET platform: a journey of machine learning over the past two decades. Drug Discov. Today 25 , 1702–1709 (2020).


Volkamer, A. et al. Machine learning for small molecule drug discovery in academia and industry. Artif. Intell. Life Sci. 3 , 100056 (2023).


Lombardo, F. et al. In silico absorption, distribution, metabolism, excretion, and pharmacokinetics (ADME-PK): utility and best practices. An industry perspective from the International Consortium for Innovation through Quality in Pharmaceutical Development. J. Med. Chem. 60, 9097–9113 (2017).


Bergström, C. A. S., Wassvik, C. M., Norinder, U., Luthman, K. & Artursson, P. Global and local computational models for aqueous solubility prediction of drug-like molecules. J. Chem. Inf. Computer Sci. 44 , 1477–1488 (2004).

Öberg, T. & Liu, T. Global and local PLS regression models to predict vapor pressure. QSAR Combinatorial Sci. 27 , 273–279 (2008).

Feher, M. & Ewing, T. Global or local QSAR: Is there a way out? QSAR Combinatorial Sci. 28 , 850–855 (2009).

Sheridan, R. P. Global quantitative structure-activity relationship models vs selected local models as predictors of off-target activities for project compounds. J. Chem. Inf. Modeling 54 , 1083–1092 (2014).

Sheridan, R. P., Culberson, J. C., Joshi, E., Tudor, M. & Karnachi, P. Prediction accuracy of production ADMET models as a function of version: activity cliffs rule https://doi.org/10.1021/acs.jcim.2c00699 (2022).

Ekins, S., Lane, T. R., Urbina, F. & Puhl, A. C. In silico ADME/tox comes of age: twenty years later. Xenobiotica 1-7 https://doi.org/10.1080/00498254.2023.2245049 (2023).

Ciulli, A. & Farnaby, W. Protein degradation for drug discovery. Drug Discov. Today Technol. 31, 1–3 (2019).

Mostofian, B. et al. Targeted protein degradation: advances, challenges, and prospects for computational methods. J. Chem. Info. Modeling 63 , 5408–5432 (2023).

Volak, L. P. et al. Industry perspective on the pharmacokinetic and absorption, distribution, metabolism, and excretion characterization of heterobifunctional protein degraders. Drug Metab. Disposition 51, 792–803 (2023).

Pettersson, M. & Crews, C. M. PROteolysis TArgeting chimeras (PROTACs) — Past, present and future. Drug Discov. Today Technol. 31, 15–27 (2019).

An, S. & Fu, L. Small-molecule PROTACs: An emerging and promising approach for the development of targeted therapy drugs. EBioMedicine 36 , 553–562 (2018).


Rodríguez-Pérez, R., Trunzer, M., Schneider, N., Faller, B. & Gerebtzoff, G. Multispecies machine learning predictions of in vitro intrinsic clearance with uncertainty quantification analyses. Mol. Pharmaceutics 20 , 383–394 (2023).

Schuffenhauer, A. et al. Evolution of Novartis’ small molecule screening deck design. J. Med. Chem. 63, 14425–14447 (2020).

Rodríguez-Pérez, R. & Bajorath, J. Multitask machine learning for classifying highly and weakly potent kinase inhibitors. ACS Omega 4 , 4367–4375 (2019).

Xu, Y., Ma, J., Liaw, A., Sheridan, R. P. & Svetnik, V. Demystifying multitask deep neural networks for quantitative structure-activity relationships. J. Chem. Inf. Modeling 57 , 2490–2504 (2017).

Wenzel, J., Matter, H. & Schmidt, F. Predictive multitask deep neural network models for adme-tox properties: learning from large data sets. J. Chem. Inf. Modeling 59 , 1253–1268 (2019).

Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O. & Dahl, G. E. Neural message passing for quantum chemistry. 34th Int. Conf. Mach. Learn., ICML 2017 3 , 2053–2070 (2017).


Yang, K. et al. Analyzing learned molecular representations for property prediction. J. Chem. Inf. Modeling 59 , 3370–3388 (2019).

Accelrys. MACCS keys. MDL information systems, Inc. (2011).

Kohút, J. & Hradiš, M. Fine-tuning is a surprisingly effective domain adaptation baseline in handwriting recognition. in ICDAR 2023: Document Analysis and Recognition - ICDAR 2023 269–286 (2023).

Hu, W., He, J. & Shu, Y. Transfer learning and deep domain adaptation. in Advances and Applications in Deep Learning (ed. Aceves-Fernandez, M. A.) 45–48 (IntechOpen, London, 2020).

Lee, Y. et al. Surgical fine-tuning improves adaptation to distribution shifts. in ICLR 2023 (2023).

Tetko, I. V., Abagyan, R. & Oprea, T. I. Surrogate data - A secure way to share corporate data. J. Computer-Aided Mol. Des. 19 , 749–764 (2005).

Fluetsch, A., Gerebtzoff, G. & Rodríguez-Pérez, R. Deep learning models compared to experimental variability for the prediction of CYP3A4 time-dependent inhibition. Submitted (2023).

Gaulton, A. et al. ChEMBL: A large-scale bioactivity database for drug discovery. Nucleic Acids Res. 40 , 1100–1107 (2012).

Sterling, T. & Irwin, J. J. ZINC 15 - ligand discovery for everyone. J. Chem. Inf. Modeling 55 , 2324–2337 (2015).

Weng, G. et al. PROTAC-DB 2.0: an updated database of PROTACs. Nucleic Acids Res. 51 , D1367–D1372 (2023).

Huth, F. et al. Predicting oral absorption for compounds outside the rule of five property space. J. Pharm. Sci. 110 , 2562–2569 (2021).

Low, Y. W. (Ivan), Blasco, F. & Vachaspati, P. Optimised method to estimate octanol water distribution coefficient (logD) in a high throughput format. Eur. J. Pharm. Sci. 92, 110–116 (2016).

Rodríguez-Pérez, R. & Bajorath, J. Prediction of compound profiling matrices, part ii: relative performance of multitask deep learning and random forest classification on the basis of varying amounts of training data. ACS Omega 3 , 12033–12040 (2018).

Rodríguez-Pérez, R. & Gerebtzoff, G. Identification of bile salt export pump inhibitors using machine learning: Predictive safety from an industry perspective. Artif. Intell. Life Sci. 1 , 100027 (2021).

Hamzic, S. et al. Predicting in vivo compound brain penetration using multi-task graph neural networks. J. Chem. Inf. Modeling https://doi.org/10.1021/acs.jcim.2c00412 (2022).

Sheridan, R. P. Time-split cross-validation as a method for estimating the goodness of prospective prediction. J. Chem. Inf. Modeling 53 , 783–790 (2013).

Stanley, M. et al. FS-Mol: A few-shot learning dataset of molecules. NeurIPS (2021).

Li, X. & Fourches, D. Inductive transfer learning for molecular activity prediction: Next-Gen QSAR Models with MolPMoFiT. J. Cheminformatics 12 , 27 (2020).

Wiercioch, M. & Kirchmair, J. Dealing with a data-limited regime: Combining transfer learning and transformer attention mechanism to increase aqueous solubility prediction performance. Artif. Intell. Life Sci. 1 , 100021 (2021).

Wang, B., Huang, J., Yan, R., Su, Y. & Mu, X. Domain-adaptive pre-training BERT model for test and identification domain NER task. J. Phys.: Conf. Ser. 2363 (2022).

Vásquez-Correa, J. C. et al. When whisper meets TTS: domain adaptation using only synthetic speech data. in TSD 2023: Text, Speech, and Dialogue 226–238 (2023).

Rodríguez-Pérez, R. & Bajorath, J. Interpretation of compound activity predictions from complex machine learning models using local approximations and shapley values. J. Medicinal Chem. 63 , 8761–8777 (2020).

RDKit: Open-source cheminformatics; http://www.rdkit.org .


Acknowledgements

The authors thank Francis J. Prael III and Thomas Zoller for their help in dataset preparation, Paolo Tosco for discussions about molecular similarity, and Markus Trunzer, Bernard Faller, Gaëlle Chenal, and Stephane Rodde for assistance with the assays’ descriptions. Thanks to all former and current Novartis colleagues who helped generating the data used for modeling. M.T.D.H. thanks the Translational Medicine Data Science Academy program at Novartis Biomedical Research.

Author information

Authors and Affiliations

Novartis Biomedical Research, Novartis Campus, 4002, Basel, Switzerland

Giulia Peteani, Minh Tam Davide Huynh, Grégori Gerebtzoff & Raquel Rodríguez-Pérez


Contributions

G.G. and R.R.P. conceived and supervised the study; R.R.P. and G.P. built the internal models; G.P. evaluated the internal models; M.T.D.H. built and evaluated the public models; G.P., M.T.D.H., and R.R.P. prepared the figures and analyzed the results; R.R.P. and G.P. wrote the manuscript; all authors discussed the results and revised the manuscript.

Corresponding author

Correspondence to Raquel Rodríguez-Pérez .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature Communications thanks Miquel Duran-Frigola, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

The following supplementary files are available with the online version of this article:

  • Supplementary Information
  • Peer Review File
  • Description of Additional Supplementary Files
  • Supplementary Data 1
  • Supplementary Software 1
  • Reporting Summary
  • Source Data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Peteani, G., Huynh, M.T.D., Gerebtzoff, G. et al. Application of machine learning models for property prediction to targeted protein degraders. Nat Commun 15 , 5764 (2024). https://doi.org/10.1038/s41467-024-49979-3


Received : 20 September 2023

Accepted : 21 June 2024

Published : 09 July 2024




July 8, 2024


Engineers develop advanced optical computing method for multiplexed data processing and encryption

by UCLA Engineering Institute for Technology Advancement


Engineers at the University of California, Los Angeles (UCLA) have unveiled a major advancement in optical computing technology that promises to enhance data processing and encryption. The work is published in the journal Laser & Photonics Reviews .

This innovative work, led by Professor Aydogan Ozcan and his team, showcases a reconfigurable diffractive optical network capable of executing high-dimensional permutation operations, offering a significant leap forward in telecommunications and data security applications.

Permutation operations, essential for various applications, including telecommunications and encryption, have traditionally relied on electronic hardware. However, the UCLA team's advancement uses all-optical diffractive computing to perform these operations in a multiplexed manner, significantly improving efficiency and scalability.

By leveraging the intrinsic properties of light, the research introduces a novel method to execute high-dimensional permutation operations through a multiplexed diffractive optical network.

Innovative diffractive design

The team's design features a reconfigurable multiplexed material, structured using deep learning algorithms. Each diffractive layer in the network can be rotated to one of four orientations: 0°, 90°, 180°, and 270°. This allows a K-layer rotatable diffractive material to perform up to 4^K independent permutation operations, making it highly versatile.

The original input data can be decrypted by applying a specific inverse permutation matrix, ensuring data security.
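The decryption step can be illustrated with a plain linear-algebra sketch in NumPy (an illustrative analogy, not the team's optical implementation): a permutation matrix P scrambles the input, and its inverse, which for any permutation matrix is simply its transpose, recovers the original data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # vector length (illustrative; the optical network acts on 2D fields)

# Build a random n x n permutation matrix P by permuting the rows of the identity.
perm = rng.permutation(n)
P = np.eye(n)[perm]

x = rng.random(n)   # "input data"
y = P @ x           # permuted (encrypted) data

# For a permutation matrix, the inverse equals the transpose.
x_recovered = P.T @ y

assert np.allclose(x_recovered, x)
```

That the inverse is just the transpose is what makes this decryption step cheap: no matrix inversion is ever required, only applying the permutation in reverse.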

Experimental validation and applications

To demonstrate the practicality of this technology, the researchers approximated 256 randomly selected permutation matrices using four rotatable diffractive layers. They also showcased the design's versatility by integrating polarization degrees of freedom, further enhancing its multiplexing capabilities.
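As a quick arithmetic check (an aside, not a claim from the paper), the figure of 256 matrices is exactly the 4^K count for K = 4 layers:

```python
K = 4             # rotatable diffractive layers used in the experiment
ORIENTATIONS = 4  # 0°, 90°, 180°, 270° per layer

# Each layer contributes an independent choice of orientation,
# so the number of network configurations is ORIENTATIONS ** K.
num_permutations = ORIENTATIONS ** K
print(num_permutations)  # 256
```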

The experimental validation, conducted using terahertz radiation and 3D-printed diffractive layers, closely matched the numerical results, underscoring the design's reliability and potential for real-world applications.

Future prospects

The reconfigurable diffractive network offers mechanical reconfigurability, allowing multifunctional representation through a single fabrication process. This innovation is particularly promising for applications in optical switching and encryption, where high-speed, power-efficient information transfer and multiplexed processing are crucial.

The UCLA team's transformative work not only paves the way for advanced data processing and encryption methods but also highlights the immense potential of optical computing technologies in addressing contemporary technological challenges.
