Definition of Assignment Bias
Assignment bias occurs in research or experimental studies when the assignment of participants to different groups or conditions is not randomized or is influenced by external factors.
Understanding Assignment Bias
Randomized assignment or allocation of participants to different groups is a fundamental principle in research studies that aims to eliminate assignment bias.
Causes of Assignment Bias
Assignment bias can arise due to several reasons:
- Non-randomized allocation: When participants are not randomly assigned to different groups, their characteristics may influence the assignment, introducing bias into the study. This can occur when researchers purposefully assign participants based on certain characteristics or when participants self-select into a specific group.
- External factors: Factors external to the research design, such as the preferences of researchers or unequal distribution of participants based on certain characteristics, may unintentionally affect the assignment process.
- Selection bias: If participants are not selected randomly from the population under study, the assignment process can be biased, impacting the validity and generalizability of the results.
Effects of Assignment Bias
Assignment bias can have various consequences:
- Inaccurate estimation: The inclusion of biased assignment methods can lead to inaccurate estimations of treatment effects, making it difficult to draw reliable conclusions from the study.
- Reduced internal validity: Assignment bias threatens the internal validity of a study because it hampers the ability to establish a causal relationship between the independent variable and the observed outcomes.
- Compromised generalizability: The presence of assignment bias may limit the generalizability of research findings to a larger population, as the biased assignment may not appropriately represent the target population.
Strategies to Minimize Assignment Bias
To minimize assignment bias, researchers can undertake the following strategies:
- Randomization: Random allocation of participants to different groups reduces the likelihood of assignment bias by ensuring that each participant has an equal chance of being assigned to any group.
- Blinding: Adopting blind procedures, such as single-blind or double-blind designs, helps prevent the influence of researcher or participant bias on the assignment process.
- Stratification: Stratifying participants based on certain important variables prior to assignment can ensure a balance of these variables across different groups and minimize the impact of confounding factors.
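As an illustration, the randomization and stratification strategies above can be sketched in a few lines of code. This is a minimal sketch under assumed data (the participant IDs and the age-band stratification variable are hypothetical), not a substitute for a proper trial-design tool:

```python
import random

def stratified_random_assignment(participants, strata_key,
                                 groups=("treatment", "control"), seed=None):
    """Randomly assign participants to groups within each stratum,
    so the stratification variable stays balanced across groups."""
    rng = random.Random(seed)
    assignment = {}
    # Bucket participants by the stratification variable (e.g., age band).
    strata = {}
    for p in participants:
        strata.setdefault(p[strata_key], []).append(p["id"])
    # Within each stratum, shuffle and deal participants out round-robin.
    for members in strata.values():
        rng.shuffle(members)
        for i, pid in enumerate(members):
            assignment[pid] = groups[i % len(groups)]
    return assignment

# Hypothetical participants stratified by age band.
people = [
    {"id": 1, "age_band": "18-30"}, {"id": 2, "age_band": "18-30"},
    {"id": 3, "age_band": "31-50"}, {"id": 4, "age_band": "31-50"},
]
print(stratified_random_assignment(people, "age_band", seed=42))
```

Shuffling within each stratum before dealing participants out round-robin gives every participant an equal chance of landing in either group while keeping the strata evenly split across groups.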
Cognitive Bias: How We Are Wired to Misjudge
Charlotte Ruhl
Research Assistant & Psychology Graduate
BA (Hons) Psychology, Harvard University
Charlotte Ruhl, a psychology graduate from Harvard College, boasts over six years of research experience in clinical and social psychology. During her tenure at Harvard, she contributed to the Decision Science Lab, administering numerous studies in behavioral economics and social psychology.
Saul McLeod, PhD
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
Olivia Guy-Evans, MSc
Associate Editor for Simply Psychology
BSc (Hons) Psychology, MSc Psychology of Education
Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.
Have you ever been so busy talking on the phone that you don’t notice the light has turned green and it is your turn to cross the street?
Have you ever shouted, “I knew that was going to happen!” after your favorite baseball team gave up a huge lead in the ninth inning and lost?
Or have you ever found yourself only reading news stories that further support your opinion?
These are just a few of the many instances of cognitive bias that we experience every day of our lives. But before we dive into these different biases, let’s backtrack first and define what bias is.
What is Cognitive Bias?
Cognitive bias is a systematic error in thinking, affecting how we process information, perceive others, and make decisions. It can lead to irrational thoughts or judgments and is often based on our perceptions, memories, or individual and societal beliefs.
Biases are unconscious and automatic processes designed to make decision-making quicker and more efficient. Cognitive biases can be caused by many things, such as heuristics (mental shortcuts), social pressures, and emotions.
Broadly speaking, bias is a tendency to lean in favor of or against a person, group, idea, or thing, usually in an unfair way. Biases are natural — they are a product of human nature — and they don’t simply exist in a vacuum or in our minds — they affect the way we make decisions and act.
In psychology, there are two main branches of biases: conscious and unconscious. Conscious or explicit bias is intentional — you are aware of your attitudes and the behaviors resulting from them (Lang, 2019).
Explicit bias can be good because it helps provide you with a sense of identity and can lead you to make good decisions (for example, being biased towards healthy foods).
However, these biases can often be dangerous when they take the form of conscious stereotyping.
On the other hand, unconscious bias, or cognitive bias, represents a set of unintentional biases — you are unaware of your attitudes and the behaviors resulting from them (Lang, 2019).
Cognitive bias is often a result of your brain’s attempt to simplify information processing — we receive roughly 11 million bits of information per second but can consciously process only about 40 bits per second (Orzan et al., 2012).
Therefore, we often rely on mental shortcuts (called heuristics) to help make sense of the world with relative speed. As such, these errors tend to arise from problems related to thinking: memory, attention, and other mental mistakes.
Cognitive biases can be beneficial because they do not require much mental effort and can allow you to make decisions relatively quickly, but like conscious biases, unconscious biases can also take the form of harmful prejudice that serves to hurt an individual or a group.
Although it may feel like there has been a recent rise of unconscious bias, especially in the context of police brutality and the Black Lives Matter movement, this is not a new phenomenon.
Thanks to Tversky and Kahneman (and the several other psychologists who have paved the way), we now have an extensive catalog of our cognitive biases.
Again, these biases occur as an attempt to simplify the complex world and make information processing faster and easier. This section will dive into some of the most common forms of cognitive bias.
Confirmation Bias
Confirmation bias is the tendency to interpret new information as confirmation of your preexisting beliefs and opinions while giving disproportionately less consideration to alternative possibilities.
Real-World Examples
Since Wason’s 1960 experiment, real-world examples of confirmation bias have gained attention.
This bias often seeps into the research world when psychologists selectively interpret data or ignore unfavorable data to produce results that support their initial hypothesis.
Confirmation bias is also incredibly pervasive on the internet, particularly with social media. We tend to read online news articles that support our beliefs and fail to seek out sources that challenge them.
Various social media platforms, such as Facebook, help reinforce our confirmation bias by feeding us stories that we are likely to agree with – further pushing us down these echo chambers of political polarization.
Some examples of confirmation bias are especially harmful, specifically in the context of the law. For example, a detective may identify a suspect early in an investigation, seek out confirming evidence, and downplay falsifying evidence.
Experiments
Research on confirmation bias dates back to 1960, when Peter Wason challenged participants to identify a rule applying to triples of numbers.
People were first told that the sequence 2, 4, 6 fit the rule; they then generated triples of their own and were told whether each one fit. The rule was simple: any ascending sequence.
But not only did participants have an unusually difficult time discovering this rule, devising overly complicated hypotheses instead, they also generated only triples that confirmed their preexisting hypothesis (Wason, 1960).
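Wason's task is simple to simulate. In this sketch (the tested triples are illustrative), a triple fits the hidden rule if it is strictly ascending, and only a potentially disconfirming test can expose a too-narrow hypothesis such as "add 2 each time":

```python
def fits_rule(triple):
    """Wason's hidden rule: any strictly ascending sequence."""
    a, b, c = triple
    return a < b < c

# A participant who hypothesizes "add 2 each time" and tests only
# confirming triples never learns that the real rule is broader.
confirming_tests = [(2, 4, 6), (10, 12, 14), (100, 102, 104)]
print(all(fits_rule(t) for t in confirming_tests))  # every test "confirms"

# Disconfirming tests are what actually reveal the rule:
print(fits_rule((1, 5, 20)))  # fits the real rule, violates "add 2"
print(fits_rule((6, 4, 2)))   # descending: does not fit
```

The point of the simulation is that confirming tests are uninformative: every "add 2" triple also satisfies the true rule, so only triples chosen to potentially falsify the hypothesis can distinguish the two.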
Explanations
But why does confirmation bias occur? It’s partially due to the effect of desire on our beliefs. In other words, certain desired conclusions (ones that support our beliefs) are more likely to be processed by the brain and labeled as true (Nickerson, 1998).
This motivational explanation is often coupled with a more cognitive theory.
The cognitive explanation argues that because our minds can only focus on one thing at a time, it is hard to parallel process (see information processing for more information) alternate hypotheses, so, as a result, we only process the information that aligns with our beliefs (Nickerson, 1998).
Another theory explains confirmation bias as a way of enhancing and protecting our self-esteem.
As with the self-serving bias (see more below), our minds choose to reinforce our preexisting ideas because being right helps preserve our sense of self-esteem, which is important for feeling secure in the world and maintaining positive relationships (Casad, 2019).
Although confirmation bias has obvious consequences, you can still work towards overcoming it by being open-minded and willing to look at situations from a different perspective than you might be used to (Luippold et al., 2015).
Even though this bias is unconscious, training your mind to become more flexible in its thought patterns will help mitigate the effects of this bias.
Hindsight Bias
Hindsight bias refers to the tendency to perceive past events as more predictable than they actually were (Roese & Vohs, 2012). There are cognitive and motivational explanations for why we ascribe so much certainty to knowing the outcome of an event only once the event is completed.
When sports fans know the outcome of a game, they often question certain decisions coaches make that they otherwise would not have questioned or second-guessed.
And fans are also quick to remark that they knew their team was going to win or lose, but, of course, they only make this statement after their team actually did win or lose.
Although research has demonstrated that hindsight bias isn’t necessarily mitigated by mere recognition of the bias (Pohl & Hell, 1996), you can still make a conscious effort to remind yourself that you can’t predict the future and to consider alternative explanations.
It’s important to do all we can to reduce this bias because when we are overly confident about our ability to predict outcomes, we might make future risky decisions that could have potentially dangerous outcomes.
Building on Tversky and Kahneman’s growing list of heuristics, researchers Baruch Fischhoff and Ruth Beyth (1975) were the first to directly investigate the hindsight bias in an empirical setting.
The team asked participants to judge the likelihood of several different outcomes of former U.S. president Richard Nixon’s visit to Beijing and Moscow.
After Nixon returned to the United States, participants were asked to recall the likelihood they had initially assigned to each outcome.
Fischhoff and Beyth found that for events that actually occurred, participants greatly overestimated the initial likelihood they assigned to those events.
That same year, Fischhoff (1975) introduced a new method for testing the hindsight bias – one that researchers still use today.
Participants are given a short story with four possible outcomes, and they are told that one is true. When they are then asked to assign the likelihood of each specific outcome, they regularly assign a higher likelihood to whichever outcome they have been told is true, regardless of how likely it actually is.
But hindsight bias does not only exist in artificial settings. In 1993, Dorothee Dietrich and Matthew Olson asked college students to predict how the U.S. Senate would vote on the confirmation of Supreme Court nominee Clarence Thomas.
Before the vote, 58% of participants predicted that he would be confirmed, but after his actual confirmation, 78% of students said that they thought he would be approved – a prime example of the hindsight bias. And this form of bias extends beyond the research world.
From the cognitive perspective, hindsight bias may result from distortions of memories of what we knew or believed to know before an event occurred (Inman, 2016).
It is easier to recall information that is consistent with our current knowledge, so our memories become warped in a way that agrees with what actually did happen.
Motivational explanations of the hindsight bias point to the fact that we are motivated to live in a predictable world (Inman, 2016).
When surprising outcomes arise, our expectations are violated, and we may experience negative reactions as a result. Thus, we rely on the hindsight bias to avoid these adverse responses to certain unanticipated events and reassure ourselves that we actually did know what was going to happen.
Self-Serving Bias
Self-serving bias is the tendency to take personal responsibility for positive outcomes and blame external factors for negative outcomes.
You would be right to ask how this is similar to the fundamental attribution error (Ross, 1977), which identifies our tendency to overemphasize internal factors for other people’s behavior while attributing external factors to our own.
The distinction is that the self-serving bias is concerned with valence. That is, how good or bad an event or situation is. And it is also only concerned with events for which you are the actor.
In other words, if a driver cuts in front of you as the light turns green, the fundamental attribution error might cause you to think that they are a bad person and not consider the possibility that they were late for work.
On the other hand, the self-serving bias is exercised when you are the actor. In this example, you would be the driver cutting in front of the other car, which you would tell yourself is because you are late (an external attribution to a negative event) as opposed to it being because you are a bad person.
From sports to the workplace, self-serving bias is incredibly common. For example, athletes are quick to take responsibility for personal wins, attributing their successes to their hard work and mental toughness, but point to external factors, such as unfair calls or bad weather, when they lose (Allen et al., 2020).
In the workplace, people make internal attributions when they are hired for a job but external attributions when they are fired (Furnham, 1982). And in the office itself, workplace conflicts are given external attributions, while successes, whether a persuasive presentation or a promotion, are given internal explanations (Walther & Bazarova, 2007).
Additionally, self-serving bias is more prevalent in individualistic cultures, which place emphasis on self-esteem levels and individual goals, and it is less prevalent among individuals with depression (Mezulis et al., 2004), who are more likely to take responsibility for negative outcomes.
Overcoming this bias can be difficult because it is at the expense of our self-esteem. Nevertheless, practicing self-compassion – treating yourself with kindness even when you fall short or fail – can help reduce the self-serving bias (Neff, 2003).
The leading explanation for the self-serving bias is that it is a way of protecting our self-esteem (similar to one of the explanations for the confirmation bias).
We are quick to take credit for positive outcomes and divert the blame for negative ones to boost and preserve our individual ego, which is necessary for confidence and healthy relationships with others (Heider, 1982).
Another theory argues that self-serving bias occurs when surprising events arise. When certain outcomes run counter to our expectations, we ascribe external factors, but when outcomes are in line with our expectations, we attribute internal factors (Miller & Ross, 1975).
An extension of this theory asserts that we are naturally optimistic, so negative outcomes come as a surprise and receive external attributions as a result.
Anchoring Bias
Anchoring bias is closely related to the decision-making process. It occurs when we rely too heavily on either pre-existing information or the first piece of information (the anchor) when making a decision.
For example, if you first see a T-shirt that costs $1,000 and then see a second one that costs $100, you’re more likely to perceive the second shirt as cheap than you would if the first shirt you saw cost $120. Here, the price of the first shirt influences how you view the second.
Sarah is looking to buy a used car. The first dealership she visits has a used sedan listed for $19,000. Sarah takes this initial listing price as an anchor and uses it to evaluate prices at other dealerships.
When she sees another similar used sedan priced at $18,000, that price seems like a good bargain compared to the $19,000 anchor price she saw first, even though the actual market value is closer to $16,000.
When Sarah finds a comparable used sedan priced at $15,500, she continues perceiving that price as cheap compared to her anchored reference price.
Ultimately, Sarah purchases the $18,000 sedan, overlooking that all of the prices seemed like bargains only in relation to the initial high anchor price.
The key elements that demonstrate anchoring bias here are:
- Sarah establishes an initial reference price based on the first listing she sees ($19k)
- She uses that initial price as her comparison/anchor for evaluating subsequent prices
- This biases her perception of the market value of the cars she looks at after the initial anchor is set
- She makes a purchase decision aligned with her anchored expectations rather than a more objective market value
Multiple theories seek to explain the existence of this bias.
One theory, known as anchoring and adjustment, argues that once an anchor is established, people insufficiently adjust away from it to arrive at their final answer, and so their final guess or decision is closer to the anchor than it otherwise would have been (Tversky & Kahneman, 1974).
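The insufficient-adjustment idea can be expressed as a toy model; the adjustment fraction below is a hypothetical parameter chosen for illustration, not an empirical estimate:

```python
def adjusted_estimate(anchor, true_value, adjustment=0.4):
    """Insufficient adjustment: move only a fraction of the way
    (adjustment < 1) from the anchor toward the true value."""
    return anchor + adjustment * (true_value - anchor)

# Sarah's car search: a $19,000 anchor pulls her estimate of a
# ~$16,000 market value upward.
print(adjusted_estimate(19_000, 16_000))  # 17800.0, closer to the anchor
```

With full adjustment (a fraction of 1.0) the estimate would reach the true value; any fraction below 1.0 leaves the final judgment biased toward the anchor, which is the signature of the effect.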
And when people experience a greater cognitive load (the amount of information the working memory can hold at any given time; for example, a difficult decision as opposed to an easy one), they are more susceptible to the effects of anchoring.
Another theory, selective accessibility, holds that although we assume that the anchor is not a suitable answer (or a suitable price going back to the initial example) when we evaluate the second stimulus (or second shirt), we look for ways in which it is similar or different to the anchor (the price being way different), resulting in the anchoring effect (Mussweiler & Strack, 1999).
A final theory posits that providing an anchor changes someone’s attitudes to be more favorable to the anchor, which then biases future answers to have similar characteristics as the initial anchor.
Although there are many different theories for why we experience anchoring bias, they all agree that it affects our decisions in real ways (Wegner et al., 2001).
The first study to bring this bias to light was one of Tversky and Kahneman’s (1974) initial experiments. They asked participants to compute the product of the numbers 1 through 8 in five seconds, either as 1x2x3… or as 8x7x6…
Participants did not have enough time to calculate the answer, so they had to estimate based on their first few calculations.
They found that those who computed the small multiplications first (i.e., 1x2x3…) gave a median estimate of 512, but those who computed the larger multiplications first gave a median estimate of 2,250 (although the actual answer is 40,320).
This demonstrates how the initial few calculations influenced participants’ final estimates.
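The arithmetic behind the experiment is easy to verify: both orderings multiply to the same product, 8! = 40,320, while the early partial products, the likely anchors under time pressure, differ sharply:

```python
import math

ascending = [1, 2, 3, 4, 5, 6, 7, 8]
descending = ascending[::-1]

# Both orderings have the same product: 8! = 40,320.
print(math.prod(ascending), math.prod(descending))

# The partial products after three steps, the anchors a rushed
# participant likely extrapolates from, are far apart:
print(math.prod(ascending[:3]))   # 1*2*3 = 6
print(math.prod(descending[:3]))  # 8*7*6 = 336
```

A low early anchor (6) versus a high one (336) is consistent with the low and high median estimates the two groups produced, even though the true product is identical.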
Availability Bias
Availability bias (also commonly referred to as the availability heuristic) refers to the tendency to think that examples of things that readily come to mind are more common than is actually the case.
In other words, information that comes to mind faster influences the decisions we make about the future. And just like with the hindsight bias, this bias is related to an error of memory.
But instead of being a memory fabrication, it is an overemphasis on a certain memory.
In the workplace, if someone is being considered for a promotion but their boss recalls one bad thing that happened years ago but left a lasting impression, that one event might have an outsized influence on the final decision.
Another common example is buying lottery tickets because the lifestyle and benefits of winning are more readily available in mind (and the potential emotions associated with winning or seeing other people win) than the complex probability calculation of actually winning the lottery (Cherry, 2019).
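The "complex probability calculation" that availability lets us skip is, in fact, short. Assuming a hypothetical pick-6-of-49 lottery (the format is an assumption for illustration):

```python
import math

# Number of ways to choose 6 numbers from 49; exactly one wins.
combinations = math.comb(49, 6)
print(f"1 in {combinations:,}")  # 1 in 13,983,816
```

Odds of roughly one in fourteen million are far less "available" than the vivid image of a jackpot winner, which is exactly the asymmetry the heuristic exploits.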
A final common example used to demonstrate the availability heuristic describes how seeing several television shows or news reports about shark attacks (or anything sensationalized by the news, such as serial killers or plane crashes) might make you think such incidents are relatively common when they are, in fact, quite rare.
Regardless, this thinking might make you less inclined to go in the water the next time you go to the beach (Cherry, 2019).
As with most cognitive biases, the best way to overcome them is by recognizing the bias and being more cognizant of your thoughts and decisions.
And because we fall victim to this bias when our brain relies on quick mental shortcuts in order to save time, slowing down our thinking and decision-making process is a crucial step to mitigating the effects of the availability heuristic.
Researchers think this bias occurs because the brain is constantly trying to minimize the effort necessary to make decisions, and so we rely on certain memories – ones that we can recall more easily – instead of having to endure the complicated task of calculating statistical probabilities.
Two main types of memories are easier to recall: 1) those that more closely align with the way we see the world and 2) those that evoke more emotion and leave a more lasting impression.
This first type of memory was identified in 1973, when Tversky and Kahneman, our cognitive bias pioneers, conducted a study in which they asked participants if more words begin with the letter K or if more words have K as their third letter.
Although many more words have K as their third letter, 70% of participants said that more words begin with K, because words beginning with K are not only easier to recall but also align with the way we organize words in memory (we index words by their first letter far more readily than by their third).
In terms of the second type of memory, the same duo ran an experiment ten years later, in 1983, in which half the participants were asked to guess the likelihood that a massive flood would occur somewhere in North America, while the other half guessed the likelihood of a flood caused by an earthquake in California.
Although the latter is much less likely, participants still said that this would be much more common because they could recall specific, emotionally charged events of earthquakes hitting California, largely due to the news coverage they receive.
Together, these studies highlight how memories that are easier to recall greatly influence our judgments and perceptions about future events.
Inattentional Blindness
A final popular form of cognitive bias is inattentional blindness. This occurs when a person fails to notice a stimulus that is in plain sight because their attention is directed elsewhere.
For example, while driving a car, you might be so focused on the road ahead of you that you completely fail to notice a car swerve into your lane of traffic.
Because your attention is directed elsewhere, you aren’t able to react in time, potentially leading to a car accident. Experiencing inattentional blindness has its obvious consequences (as illustrated by this example), but, like all biases, it is not impossible to overcome.
Many theories seek to explain why we experience this form of cognitive bias. In reality, it is probably some combination of these explanations.
Conspicuity holds that certain sensory stimuli (such as bright colors) and cognitive stimuli (such as something familiar) are more likely to be processed, and so stimuli that don’t fit into one of these two categories might be missed.
The mental workload theory describes how when we focus a lot of our brain’s mental energy on one stimulus, we are using up our cognitive resources and won’t be able to process another stimulus simultaneously.
Similarly, some psychologists explain how we attend to different stimuli with varying levels of attentional capacity, which might affect our ability to process multiple stimuli simultaneously.
In other words, an experienced driver might notice the car swerving into the lane because they are using fewer mental resources to drive, whereas a beginner might be using more resources to focus on the road ahead and be unable to process the swerving car.
A final explanation argues that because our attentional and processing resources are limited, our brain dedicates them to what fits into our schemas or our cognitive representations of the world (Cherry, 2020).
Thus, when an unexpected stimulus comes into our line of sight, we might not be able to process it on the conscious level. The following example illustrates how this might happen.
The most famous study to demonstrate the inattentional blindness phenomenon is the invisible gorilla study (Most et al., 2001). This experiment asked participants to watch a video of two groups passing a basketball and count how many times the white team passed the ball.
Participants were generally able to report the number of passes accurately, but they failed to notice a gorilla walking directly through the middle of the scene.
Because this would not be expected, and because our brain is using up its resources to count the number of passes, we completely fail to process something right before our eyes.
A real-world example of inattentional blindness occurred in 1995 when Boston police officer Kenny Conley was chasing a suspect and ran by a group of officers who were mistakenly holding down an undercover cop.
Conley was convicted of perjury and obstruction of justice because he supposedly saw the fight between the undercover cop and the other officers and lied about it to protect the officers, but he stood by his word that he really hadn’t seen it (due to inattentional blindness) and was ultimately exonerated (Pickel, 2015).
The key to overcoming inattentional blindness is to maximize your attention by avoiding distractions such as checking your phone. And it is also important to pay attention to what other people might not notice (if you are that driver, don’t always assume that others can see you).
By working on expanding your attention and minimizing unnecessary distractions that will use up your mental resources, you can work towards overcoming this bias.
Preventing Cognitive Bias
As we know, recognizing these biases is the first step to overcoming them. But there are other small strategies we can follow in order to train our unconscious mind to think in different ways.
From strengthening our memory and minimizing distractions to slowing down our decision-making and improving our reasoning skills, we can work towards overcoming these cognitive biases.
An individual can evaluate his or her own thought process, also known as metacognition (“thinking about thinking”), which provides an opportunity to combat bias (Flavell, 1979).
This multifactorial process involves (Croskerry, 2003):
(a) acknowledging the limitations of memory, (b) seeking perspective while making decisions, (c) being able to self-critique, and (d) choosing strategies to prevent cognitive error.
Many strategies used to avoid bias that we describe are also known as cognitive forcing strategies, which are mental tools used to force unbiased decision-making.
The History of Cognitive Bias
The term cognitive bias was first coined in the 1970s by Israeli psychologists Amos Tversky and Daniel Kahneman, who used this phrase to describe people’s flawed thinking patterns in response to judgment and decision problems (Tversky & Kahneman, 1974).
Tversky and Kahneman’s research program, the heuristics and biases program, investigated how people make decisions given limited resources (for example, limited time to decide which food to eat or limited information to decide which house to buy).
As a result of these limited resources, people are forced to rely on heuristics or quick mental shortcuts to help make their decisions.
Tversky and Kahneman wanted to understand the biases associated with this judgment and decision-making process.
To do so, the two researchers relied on a research paradigm that presented participants with some type of reasoning problem with a computed normative answer (they used probability theory and statistics to compute the expected answer).
Participants’ responses were then compared with the predetermined solution to reveal the systematic deviations in the mind.
After running several experiments with countless reasoning problems, the researchers were able to identify numerous norm violations that result when our minds rely on these cognitive biases to make decisions and judgments (Wilke & Mata, 2012).
Key Takeaways
- Cognitive biases are unconscious errors in thinking that arise from problems related to memory, attention, and other mental mistakes.
- These biases result from our brain’s efforts to simplify the incredibly complex world in which we live.
- Confirmation bias, hindsight bias, the mere exposure effect, self-serving bias, the base rate fallacy, anchoring bias, availability bias, the framing effect, inattentional blindness, the ecological fallacy, and the false consensus effect are some of the most common examples of cognitive bias.
- Cognitive biases directly affect our safety, interactions with others, and how we make judgments and decisions in our daily lives.
- Although these biases are unconscious, there are small steps we can take to train our minds to adopt a new pattern of thinking and mitigate the effects of these biases.
Allen, M. S., Robson, D. A., Martin, L. J., & Laborde, S. (2020). Systematic review and meta-analysis of self-serving attribution biases in the competitive context of organized sport. Personality and Social Psychology Bulletin, 46 (7), 1027-1043.
Casad, B. (2019). Confirmation bias. Retrieved from https://www.britannica.com/science/confirmation-bias
Cherry, K. (2019). How the availability heuristic affects your decision-making . Retrieved from https://www.verywellmind.com/availability-heuristic-2794824
Cherry, K. (2020). Inattentional blindness can cause you to miss things in front of you . Retrieved from https://www.verywellmind.com/what-is-inattentional-blindness-2795020
Dietrich, D., & Olson, M. (1993). A demonstration of hindsight bias using the Thomas confirmation vote. Psychological Reports, 72 (2), 377-378.
Fischhoff, B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1 (3), 288.
Fischhoff, B., & Beyth, R. (1975). I knew it would happen: Remembered probabilities of once—future things. Organizational Behavior and Human Performance, 13 (1), 1-16.
Furnham, A. (1982). Explanations for unemployment in Britain. European Journal of Social Psychology, 12 (4), 335-352.
Heider, F. (1982). The psychology of interpersonal relations . Psychology Press.
Inman, M. (2016). Hindsight bias . Retrieved from https://www.britannica.com/topic/hindsight-bias
Lang, R. (2019). What is the difference between conscious and unconscious bias? : Faqs. Retrieved from https://engageinlearning.com/faq/compliance/unconscious-bias/what-is-the-difference-between-conscious-and-unconscious-bias/
Luippold, B., Perreault, S., & Wainberg, J. (2015). Auditor’s pitfall: Five ways to overcome confirmation bias . Retrieved from https://www.babson.edu/academics/executive-education/babson-insight/finance-and-accounting/auditors-pitfall-five-ways-to-overcome-confirmation-bias/
Mezulis, A. H., Abramson, L. Y., Hyde, J. S., & Hankin, B. L. (2004). Is there a universal positivity bias in attributions? A meta-analytic review of individual, developmental, and cultural differences in the self-serving attributional bias. Psychological Bulletin, 130 (5), 711.
Miller, D. T., & Ross, M. (1975). Self-serving biases in the attribution of causality: Fact or fiction?. Psychological Bulletin, 82 (2), 213.
Most, S. B., Simons, D. J., Scholl, B. J., Jimenez, R., Clifford, E., & Chabris, C. F. (2001). How not to be seen: The contribution of similarity and selective ignoring to sustained inattentional blindness. Psychological Science, 12 (1), 9-17.
Mussweiler, T., & Strack, F. (1999). Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective accessibility model. Journal of Experimental Social Psychology, 35 (2), 136-164.
Neff, K. (2003). Self-compassion: An alternative conceptualization of a healthy attitude toward oneself. Self and Identity, 2 (2), 85-101.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2 (2), 175-220.
Orzan, G., Zara, I. A., & Purcarea, V. L. (2012). Neuromarketing techniques in pharmaceutical drugs advertising. A discussion and agenda for future research. Journal of Medicine and Life, 5 (4), 428.
Pickel, K. L. (2015). Eyewitness memory. The handbook of attention , 485-502.
Pohl, R. F., & Hell, W. (1996). No reduction in hindsight bias after complete information and repeated testing. Organizational Behavior and Human Decision Processes, 67 (1), 49-58.
Roese, N. J., & Vohs, K. D. (2012). Hindsight bias. Perspectives on Psychological Science, 7 (5), 411-426.
Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In Advances in experimental social psychology (Vol. 10, pp. 173-220). Academic Press.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5 (2), 207-232.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185 (4157), 1124-1131.
Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90 (4), 293.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5 (4), 297-323.
Walther, J. B., & Bazarova, N. N. (2007). Misattribution in virtual groups: The effects of member distribution on self-serving bias and partner blame. Human Communication Research, 33 (1), 1-26.
Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12 (3), 129-140.
Wegener, D. T., Petty, R. E., Detweiler-Bedell, B. T., & Jarvis, W. B. G. (2001). Implications of attitude change theories for numerical anchoring: Anchor plausibility and the limits of anchor effectiveness. Journal of Experimental Social Psychology, 37 (1), 62-69.
Wilke, A., & Mata, R. (2012). Cognitive bias. In Encyclopedia of human behavior (pp. 531-535). Academic Press.
Further Information
Test yourself for bias.
- Project Implicit (IAT Test) From Harvard University
- Implicit Association Test From the Social Psychology Network
- Test Yourself for Hidden Bias From Teaching Tolerance
- How The Concept Of Implicit Bias Came Into Being With Dr. Mahzarin Banaji, Harvard University, author of Blindspot: Hidden Biases of Good People (5:28 minutes; includes transcript)
- Understanding Your Racial Biases With John Dovidio, PhD, Yale University, from the American Psychological Association (11:09 minutes; includes transcript)
- Talking Implicit Bias in Policing With Jack Glaser, Goldman School of Public Policy, University of California Berkeley (21:59 minutes)
- Implicit Bias: A Factor in Health Communication With Dr. Winston Wong, Kaiser Permanente (19:58 minutes)
- Bias, Black Lives and Academic Medicine Dr. David Ansell on Your Health Radio (August 1, 2015; 21:42 minutes)
- Uncovering Hidden Biases Google talk with Dr. Mahzarin Banaji, Harvard University
- Impact of Implicit Bias on the Justice System (9:14 minutes)
- Students Speak Up: What Bias Means to Them (2:17 minutes)
- Weight Bias in Health Care From Yale University (16:56 minutes)
- Gender and Racial Bias In Facial Recognition Technology (4:43 minutes)
Journal Articles
- An implicit bias primer Mitchell, G. (2018). An implicit bias primer. Virginia Journal of Social Policy & the Law , 25, 27–59.
- Implicit Association Test at age 7: A methodological and conceptual review Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A methodological and conceptual review. Automatic processes in social thinking and behavior, 4 , 265-292.
- Implicit Racial/Ethnic Bias Among Health Care Professionals and Its Influence on Health Care Outcomes: A Systematic Review Hall, W. J., Chapman, M. V., Lee, K. M., Merino, Y. M., Thomas, T. W., Payne, B. K., … & Coyne-Beasley, T. (2015). Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: A systematic review. American Journal of Public Health, 105 (12), e60-e76.
- Reducing Racial Bias Among Health Care Providers: Lessons from Social-Cognitive Psychology Burgess, D., Van Ryn, M., Dovidio, J., & Saha, S. (2007). Reducing racial bias among health care providers: Lessons from social-cognitive psychology. Journal of General Internal Medicine, 22 (6), 882-887.
- Integrating implicit bias into counselor education Boysen, G. A. (2010). Integrating Implicit Bias Into Counselor Education. Counselor Education & Supervision, 49 (4), 210–227.
- Cognitive Biases and Errors as Cause—and Journalistic Best Practices as Effect Christian, S. (2013). Cognitive Biases and Errors as Cause—and Journalistic Best Practices as Effect. Journal of Mass Media Ethics, 28 (3), 160–174.
- Empathy intervention to reduce implicit bias in pre-service teachers Whitford, D. K., & Emerson, A. M. (2019). Empathy Intervention to Reduce Implicit Bias in Pre-Service Teachers. Psychological Reports, 122 (2), 670–688.
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.
Aleksandar Popovic; Martin R. Huecker.
Last Update: June 20, 2023.
- Definition/Introduction
Bias is colloquially defined as any tendency that limits impartial consideration of a question or issue. In academic research, bias refers to a type of systematic error that can distort measurements and/or affect investigations and their results. [1] It is important to distinguish systematic error, such as bias, from random error. Random error occurs due to natural fluctuation in the accuracy of any measurement device, innate differences between humans (both investigators and subjects), and pure chance. Random errors can occur at any point and are more difficult to control. [2] Systematic errors, referred to as bias from here on, occur at one or multiple points during the research process, including the study design, data collection, statistical analysis, interpretation of results, and publication process. [3]
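The distinction between random and systematic error can be illustrated with a short simulation (a sketch, not from the original text; the blood-pressure value and the 8 mmHg miscalibration are hypothetical numbers chosen for illustration): random error averages out as the sample grows, while a systematic offset, i.e. bias, persists no matter how many measurements are taken.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 120.0  # hypothetical true mean systolic blood pressure (mmHg)

def measure(true_value, random_sd=5.0, systematic_offset=0.0):
    """One measurement: true value + random noise + any systematic offset."""
    return true_value + random.gauss(0, random_sd) + systematic_offset

def sample_mean(n, systematic_offset=0.0):
    """Mean of n repeated measurements."""
    return statistics.mean(
        measure(TRUE_VALUE, systematic_offset=systematic_offset) for _ in range(n)
    )

# Random error alone: the sample mean converges toward the true value as n grows.
print(round(sample_mean(10), 1), round(sample_mean(10_000), 1))

# Add a systematic error (e.g., a cuff miscalibrated to read 8 mmHg high):
# no sample size removes the offset; the estimate converges near 128, not 120.
print(round(sample_mean(10_000, systematic_offset=8.0), 1))
```

The point of the sketch is only that increasing n shrinks the random component but leaves the systematic component untouched, which is why bias must be addressed by design rather than by collecting more data.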
However, interpreting the presence of bias requires understanding that it is not a dichotomous variable that is simply "present" or "not present." Rather, bias is always present to some degree due to inherent limitations in research design, implementation, and ethical considerations. [4] It is therefore crucial to evaluate how much bias is present in a study and how the researchers attempted to minimize its sources. [5] When evaluating for bias, it is important to note that there are many types, with several proposed classification schemes. However, it is easiest to view bias in terms of the stages of a research study: the planning and design stage (before), data collection and analysis (during), and interpretation of results and journal submission (after).
- Issues of Concern
The planning stage of any study can introduce bias in both study design and recruitment of subjects. Ideally, the design of a study should include a well-defined outcome, population of interest, and collection methods before implementation and data collection begin. The outcome, for example, the response rate to a new medication, should be precisely agreed upon. Investigators may focus on changes in laboratory parameters (such as a new statin reducing LDL and total cholesterol levels) or on long-term morbidity and mortality (does the new statin reduce cardiovascular-related deaths?). Similarly, the investigators' own pre-existing notions or personal beliefs can influence the question being asked and the study's methodology. [6]
For example, an investigator who works for a pharmaceutical company may address a question or collect data most likely to produce a significant finding supporting the use of the investigational medication. Thus, if possible, the question(s) being asked and the collection methods employed should be agreed upon by multiple team members in an interprofessional setting to reduce potential bias. Ethics committees also play a valuable role here.
Relatedly, the team members designing a study must define their population of interest, also referred to as the study population. Bias occurs if the study population does not closely represent a target population due to errors in study design or implementation, termed selection bias. Sampling bias is one form of selection bias and typically occurs if subjects were selected in a non-random way. It can also occur if the study requires subjects to be placed into cohorts and if those cohorts are significantly different in some way. This can lead to erroneous conclusions and significant findings. Randomization of subject selection and cohort assignment is a technique used in study design intended to reduce sampling bias. [7] [8]
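The randomization of cohort assignment described above can be sketched as follows (the subject IDs and group names are hypothetical, and real trials use more sophisticated schemes such as block or stratified randomization): subjects are shuffled before being dealt into cohorts, so no baseline characteristic can systematically influence which group a subject lands in.

```python
import random

def randomize_assignment(subject_ids, groups=("treatment", "control"), seed=None):
    """Randomly allocate subjects to groups so that baseline characteristics
    (age, severity of illness, etc.) cannot drive cohort membership."""
    rng = random.Random(seed)
    shuffled = subject_ids[:]
    rng.shuffle(shuffled)
    # Deal shuffled subjects round-robin into the groups for balanced sizes.
    allocation = {g: [] for g in groups}
    for i, subject in enumerate(shuffled):
        allocation[groups[i % len(groups)]].append(subject)
    return allocation

subjects = [f"S{i:03d}" for i in range(1, 21)]
alloc = randomize_assignment(subjects, seed=7)
print({g: len(members) for g, members in alloc.items()})  # equal group sizes
```

Because the shuffle is the only thing deciding membership, any remaining between-group differences are due to chance rather than to a selection mechanism, which is exactly what the randomization technique is intended to achieve.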
However, bias can occur if subjects are recruited through limited means, such as phone landlines, thereby excluding anyone who does not own a landline. Similarly, this can occur if subjects are recruited only through email or a website. This can result in confounding, the introduction of a third variable that influences both the independent and dependent variables. [9]
For example, if a study recruited subjects from two primary care clinics to compare diabetes screening and treatment rates but did not account for potentially different socioeconomic characteristics of the two clinics, there may be significant differences between groups not due to clinical practice but rather cohort composition.
A subtype of selection bias, admission bias (also referred to as Berkson bias), occurs when the selected study population is derived from patients within hospitals or certain specialty clinics. This group is then compared to a non-hospitalized group. This predisposes to bias as hospitalized patient populations are more likely to be ill and not represent the general population. Furthermore, there are typically other confounding variables or covariates that may skew relationships between the intended dependent and independent variables. [10]
For example, in one study that evaluated the effect of cigarette smoking and its association with bladder cancer, researchers decided to use a hospital-based case-control study design. Normally, there is a strong and well-established relationship between years of cigarette use and the likelihood of developing bladder cancer. In fact, part of screening guidelines for bladder cancer considers the total years that an individual has smoked during patient risk stratification and subsequent evaluation and follow-up. However, in one study, researchers noted no significant relationship between smoking and bladder cancer. Upon re-evaluating, they noted their cases and controls both had significant smoking histories, thereby blurring any relationships. [11]
Admission bias can be reduced by selecting appropriate controls and being cognizant of the potential introduction of this bias in any hospital-based study. If this is not possible to do, researchers must be transparent about this in their work and may try to use different methods of statistical analysis to account for any confounding variables. In an almost opposite fashion, another source of potential error is a phenomenon termed the healthy worker effect. The healthy worker effect refers to the overall improved health and decreased mortality and morbidity rates of those employed relative to the unemployed. This occurs for various reasons, including access to better health care, improved socioeconomic status, the beneficial effects of work itself, and those who are critically ill or disabled are less likely to find employment. [12] [13]
Two other important forms of selection bias are lead-time bias and length-time bias. Lead-time bias occurs in the context of disease diagnosis. In general, it occurs when new diagnostic testing allows detection of a disease at an earlier stage, causing a false appearance of longer lifespan or improved outcomes. [14] An example of this is noted in individuals with schizophrenia with varying durations of untreated psychosis. Those with shorter durations of psychosis typically had better psychosocial functioning after admission to and treatment within a hospital than those with longer durations. However, upon further analysis, it was found that it was not the duration of psychosis that affected psychosocial functioning. Rather, the duration of psychosis was indicative of the stage of the person's disease, and those individuals with shorter durations of psychosis were in an earlier stage of their disease. [15]
Length-time bias is similar to lead-time bias; it refers to the overestimation of survival time that occurs because a large number of slowly progressing, asymptomatic cases are detected alongside a smaller number of rapidly progressing, symptomatic cases. An example can be noted in patients with hepatocellular carcinoma (HCC). Those whose HCC was found via asymptomatic screening typically had a tumor doubling time of 100 days. In contrast, those whose HCC was uncovered due to symptomatic presentation had an average tumor doubling time of 42 days. However, overall outcomes were the same between these two groups. [16]
The effects of both lead-time and length-time bias must be taken into account by investigators. For lead-time bias, investigators can instead look at changes in the overall mortality rate due to the disease. One method involves creating a modified survival curve that accounts for possible lead-time bias under the new diagnostic or screening protocols. [17] This involves estimating the lead time and subtracting it from the observed survival time. Unfortunately, the consequences of length-time bias are difficult to mitigate, but investigators can minimize their effects by keeping individuals in their original groups based on screening protocols (intention-to-screen), regardless of whether an individual required earlier diagnostic workup due to symptoms.
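The lead-time correction described above, subtracting an estimated lead time from observed survival, can be expressed as a tiny sketch (the survival figures below are hypothetical, chosen only to show how an apparent screening benefit can disappear after adjustment):

```python
def lead_time_adjusted_survival(observed_survival_years, estimated_lead_time_years):
    """Crude lead-time correction: subtract the estimated lead time (how much
    earlier screening detected the disease) from the observed survival time."""
    adjusted = observed_survival_years - estimated_lead_time_years
    return max(adjusted, 0.0)  # survival cannot be negative

# Hypothetical numbers: screening-detected patients appear to survive 7 years,
# but screening is estimated to advance the diagnosis by 2 years on average.
print(lead_time_adjusted_survival(7.0, 2.0))  # 5.0
# If symptomatically detected patients also survive ~5 years from diagnosis,
# the apparent 2-year "benefit" of screening was entirely lead time.
```

In practice the lead time itself must be estimated (e.g., from tumor growth models or screening-interval data), so this adjustment is only as good as that estimate.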
Channeling and procedure bias are other forms of selection bias that can be encountered and addressed during the planning stage of a study. Channeling bias is a type of selection bias noted in observational studies. It occurs most frequently when patient characteristics, such as age or severity of illness, affect cohort assignment. This can occur, for example, in surgical studies where different interventions carry different levels of risk. Surgical procedures may be more likely to be carried out on patients with lower levels of periprocedural risk who would likely tolerate the event, whereas non-surgical interventions may be reserved for patients with higher levels of risk who would not be suitable for a lengthy procedure under general anesthesia. [18] As a result, channeling bias results in an imbalance of covariates between cohorts. This is particularly important when the surgical and non-surgical interventions have significant differences in outcome, making it difficult to ascertain if the difference is due to different interventions or covariate imbalance. Channeling bias can be accounted for through the use of propensity score analysis. [19]
Propensity scores are the probability of receiving one intervention over another based on an individual's observed covariates. These scores are obtained through a variety of different methods and then accounted for in the analysis stage via statistical methods, such as logistic regression. In addition to channeling bias, procedure bias (administration bias) is a similar form of selection bias, where two cohorts receive different levels of treatment or are administered similar treatments or interviews in different formats. An example of the former would be two cohorts of patients with ACL injuries. One cohort received strictly supervised physical therapy 3 times per week, and the other cohort was taught the exercises but instructed to do them at home on their own. An example of the latter would be administering a questionnaire regarding eating disorder symptoms. One group was asked in-person in an interview format, and the other group was allowed to take the questionnaire at home in an anonymous format. [20]
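The propensity-score idea can be sketched as follows. This is a minimal hand-rolled logistic regression on entirely hypothetical data (a single standardized perioperative risk covariate, with low-risk patients channeled toward surgery), not the method of any particular study or statistical package:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression P(treatment=1 | covariates) by gradient descent.
    Returns weights, intercept first."""
    w = [0.0] * (len(X[0]) + 1)
    n = len(X)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def propensity(w, xi):
    """Estimated probability of receiving the treatment given covariates xi."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical cohort: lower perioperative risk -> more likely to get surgery.
random.seed(1)
X, y = [], []
for _ in range(400):
    risk = random.gauss(0, 1)
    p_surgery = 1.0 / (1.0 + math.exp(2.0 * risk))  # channeling mechanism
    X.append([risk])
    y.append(1 if random.random() < p_surgery else 0)

w = fit_logistic(X, y)
# Low-risk patients receive high propensity scores and high-risk patients low
# ones; these scores can then be used for matching, stratification, or
# regression adjustment to balance covariates between cohorts.
print(round(propensity(w, [-2.0]), 2), round(propensity(w, [2.0]), 2))
```

The recovered scores make the channeling visible: once each patient's probability of receiving surgery is known, surgical and non-surgical patients with similar propensities can be compared, reducing the covariate imbalance that channeling bias introduces.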
Either form of procedure bias can lead to significant differences between groups that would not exist if the groups were treated identically. Therefore, both procedure and channeling bias must be considered before data collection, particularly in observational or retrospective studies, to reduce or eliminate erroneous conclusions derived from the study design itself rather than from treatment protocols.
Bias in Data Collection & Analysis
There are also a variety of forms of bias present during data collection and analysis. One type is observer bias, which refers to any systematic difference between true and recorded values due to variation in the individual observer. This form of bias is particularly notable in studies that require investigators to record measurements or exposures, particularly if there is an element of subjectiveness present, such as evaluating the extent or color of a rash. [21] However, this has even been noted in the measurement of subjects’ blood pressures when using sphygmomanometers, where investigators may round up or down depending on their preconceived notions about the subject. Observer bias is more likely when the observer is aware of the subject’s treatment status or assignment cohort. This is related to confirmation bias, which refers to a tendency to search for or interpret information to support a pre-existing belief. [22]
In one prominent example, physicians were asked to estimate blood loss and amniotic fluid volume in pregnant patients currently in labor. When given additional information in the form of blood pressure readings (hypotensive or normotensive), physicians were more likely to overestimate blood loss and underestimate amniotic fluid volume if told the patient was hypotensive. [23] Similar findings are noted in fields such as medicine, health sciences, and social sciences, illustrating the strong and misdirecting influence of confirmation bias on the results found in certain studies. [22] [24]
Investigators and data collectors need to be trained to collect data in a uniform, empirical fashion and be conscious of their own beliefs to minimize measurement variability. There should be standardization of data collection to reduce inter-observer variance. This may include training all investigators or analysts to follow a standardized protocol, use standardized devices or measurement tools, or use validated questionnaires. [21] [25]
Furthermore, the decision of whether to blind the investigators and analysts should be made. If implemented, blinding of the investigators can reduce observer bias, particularly when subjective criteria are being assessed. Confirmation bias within investigators and data collectors can be minimized if they are informed of its potential interfering role. Overconfidence, whether in the overall study's results or in the collection of accurate data from subjects, can also be a strong source of confirmation bias; challenging overconfidence and encouraging multiple viewpoints are further mechanisms by which to counter it. Lastly, funding sources or other conflicts of interest can influence confirmation and observer bias and must be considered when evaluating these potential sources of systematic error. [26] [27]

Subjects themselves may also change their behavior, consciously or unconsciously, in response to their awareness of being observed or being assigned to a treatment group, a phenomenon termed the Hawthorne effect. [28] The Hawthorne effect can be minimized, although not eliminated, by reducing or hiding the observation of the subject if possible.

A similar phenomenon is noted with self-selection bias, which occurs when individuals sort themselves into groups or choose to enroll in studies based on pre-existing factors. For example, a study evaluating the effectiveness of a popular weight loss program that allows participants to self-enroll may have significant differences between groups. In such circumstances, individuals who experienced greater success (measured in terms of weight lost) are more likely to enroll, while those who did not lose weight or gained weight are less likely to do so. Similar issues plague other studies that rely on subject self-enrollment. [20] [29]
Self-selection bias is often found in tandem with response bias, which refers to subjects inaccurately answering questions due to various influences. [30] This can be due to question wording, the social desirability of a certain answer, the sensitivity of a question, the order of questions, and even the survey format, such as in-person, via telephone, or online. [22] [31] [32] [33] [34] There are methods of reducing the impact of all these factors, such as anonymity in surveys, specialized questioning techniques that reduce the impact of wording, and even nominative techniques, in which individuals are asked about the behavior of close friends for certain types of questions. [35]

Non-response bias refers to significant differences between individuals who respond and those who do not respond to a survey or questionnaire; it should not be mistaken for the opposite of response bias. It is particularly problematic because a lack of response from the non-responders can produce errors in estimating population characteristics. It is often noted in health surveys regarding alcohol, tobacco, or drug use, though it has been seen in many other survey topics. [36] [37] Furthermore, particularly in surveys designed to evaluate satisfaction after an intervention or treatment, individuals are much more likely to respond if they felt highly satisfied relative to the average individual. While highly dissatisfied individuals are also more likely to respond than average, they are less likely to respond than highly satisfied individuals, potentially skewing results toward respondents with positive viewpoints. This can be noted in product reviews or restaurant evaluations.
Several preventative steps can be taken during study design or data collection to mitigate the effects of non-response bias. Ideally, surveys should be as short and accessible as possible, and potential participants should be involved in question design. Incentives can also be provided for participation. Lastly, if necessary, surveys can be made mandatory rather than voluntary. For example, a survey initially mailed to school-age children's homes for voluntary completion could instead be required to be completed and handed in anonymously at school. [38] [39]
Similar to the Hawthorne effect and self-selection bias, recall bias is another potential source of systematic error stemming from the subjects of a particular study. Recall bias is any error due to differences in an individual’s recollections and what truly transpired. Recall bias is particularly prevalent in retrospective studies that use questionnaires, surveys, and/or interviews. [40]
For example, in a retrospective study evaluating the prevalence of cigarette smoking in individuals diagnosed with lung cancer versus those without, those with lung cancer may be more likely to overestimate their tobacco use, while those without may underestimate theirs. Fortunately, the impact of recall bias can be minimized by decreasing the time interval between an outcome (lung cancer) and exposure (tobacco use), as individuals recall more accurately over shorter periods. Other methods include corroborating the individual's subjective assessments with medical records or other objective measures whenever possible. [41]
Lastly, in addition to the data collectors and the subjects, bias and subsequent systematic error can be introduced through data analysis, especially if it is conducted in a manner that gives preference to certain conclusions. There can be blatant data fabrication, where non-existent data are reported. More commonly, however, researchers perform multiple tests with pair-wise comparisons, termed "p-hacking." [42] This typically involves analyzing subgroups or multiple endpoints to obtain statistically significant findings, even if these findings are unrelated to the original hypothesis. P-hacking also occurs when investigators perform data analysis partway through data collection to decide whether to continue. [43] It also occurs when covariates are excluded, when outliers are included or dropped without mention, or when treatment groups are split, combined, or otherwise modified in ways that depart from the original research design. [44] [45]
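The inflation of false-positive findings from testing many endpoints, the core of p-hacking, can be demonstrated with a small simulation (the 20 endpoints and sample sizes are hypothetical; a simple two-sample z-test is used, with the treatment truly having no effect on anything):

```python
import math
import random

random.seed(0)

def z_test_p(group_a, group_b):
    """Two-sided p-value of a two-sample z-test, assuming known unit variance."""
    n_a, n_b = len(group_a), len(group_b)
    diff = sum(group_a) / n_a - sum(group_b) / n_b
    se = math.sqrt(1.0 / n_a + 1.0 / n_b)
    z = diff / se
    # p = 2 * (1 - Phi(|z|)), via the standard normal CDF in terms of erf.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def run_study(n_endpoints=20, n_per_group=30):
    """One simulated 'study': the treatment does nothing, yet 20 separate
    endpoints are each tested at alpha = 0.05."""
    p_values = []
    for _ in range(n_endpoints):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        p_values.append(z_test_p(a, b))
    return p_values

# With 20 independent tests at alpha = 0.05, the chance of at least one
# spurious "significant" endpoint is about 1 - 0.95**20, roughly 64%.
studies = 500
false_hits = sum(any(p < 0.05 for p in run_study()) for _ in range(studies))
print(f"{false_hits / studies:.0%} of null studies had a 'significant' finding")
```

This is why pre-registering a single primary endpoint, or correcting for multiple comparisons, matters: without it, a determined analyst can almost always find something "significant" in pure noise.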
Ideally, researchers should list all variables explored and all associated findings. If any observations (outliers) are eliminated, this should be reported, with an explanation of why they were eliminated and how their elimination affected the data.
Bias in Data Interpretation and Publication
The final stages of any study, interpretation of data and publication of results, are also susceptible to various types of bias. During data interpretation and subsequent discussion, researchers must ensure that the proper statistical tests were used and used correctly. Furthermore, the results discussed should be statistically significant, and discussion of results that merely "approach significance" should be avoided. [46] Bias can also be introduced at this stage if researchers discuss differences that are statistically but not clinically significant, if conclusions about causality are drawn from a purely observational study, or if data are extrapolated beyond the range found within the study. [3]
A major form of bias found during the publication stage is appropriately named publication bias. This refers to the tendency for studies with statistically or clinically significant results to be submitted and published while other findings are excluded. [47] Journals and publishers themselves have been found to favor studies with significant values, and researchers may, in turn, use the methods of data analysis or interpretation mentioned above to uncover significant results. Outcome reporting bias is similar; it refers to reporting only the statistically significant outcomes of a study while excluding non-significant ones. These two biases have been found to affect the results of systematic analyses and even the clinical management of patients. [48] However, publication and outcome reporting bias can be prevented in certain cases. Prospective trials are typically required to be registered before study commencement, meaning that all results, whether significant or not, will be visible. Furthermore, electronic registration and archiving of findings can also help reduce publication bias. [49]
- Clinical Significance
Understanding basic aspects of study bias and related concepts will aid clinicians in practicing and improving evidence-based medicine. Study bias can be a major factor that detracts from the external validity of a study or the generalizability of findings to other populations or settings. [50] Clinicians who possess a strong understanding of the various biases that can plague studies will be better able to determine the external validity and, therefore, clinical applicability of a study's findings. [51] [52]
The replicability of a study with similar findings is a strong factor in determining its external validity and generalizability to the clinical setting. Whenever possible, clinicians should arm themselves with knowledge from multiple studies or systematic reviews on a topic rather than a single study. [53] Systematic reviews apply strategies that limit bias through the systematic assembly, appraisal, and synthesis of the relevant studies on a topic. [54]
With a critical, investigational point of view, a willingness to evaluate contrary sources, and the use of systematic reviews, clinicians can better identify sources of bias. In doing so, they can better reduce its impact in their decision-making process and thereby implement a strong form of evidence-based medicine.
- Nursing, Allied Health, and Interprofessional Team Interventions
There are numerous sources of bias within the research process, spanning the design and planning stage, data collection and analysis, interpretation of results, and the publication process. Bias at one or multiple points in this process can skew results and even lead to incorrect conclusions. This, in turn, can cause harmful medical decisions, affecting patients, their families, and the overall healthcare team. Outside of medicine, significant bias can produce erroneous conclusions in academic research, leading to future fruitless studies in the same field. [55]
When combined with the knowledge that most studies are never replicated or verified, this can lead to a deleterious cycle of biased, unverified research leading to more research. This can harm the investigators and institutions partaking in such research and discredit entire fields, even if other investigators had significant work and took extreme care to limit and explain sources of bias.
All research needs to be carried out and reported transparently and honestly. In recent years, important steps have been taken, such as raising awareness of the biases present in the research process and of the manipulation of statistics to generate significant results, and implementing clinical trial registry systems. However, all stakeholders in the research process, from investigators and data collectors to the institutions they are part of and the journals that review and publish findings, must take great care to identify and limit sources of bias and report them transparently.
All interprofessional healthcare team members, including physicians, physician assistants, nurses, pharmacists, and therapists, need to understand the variety of biases present throughout the research process. Such knowledge will separate stronger studies from weaker ones, determine the clinical and real-world applicability of results, and optimize patient care through the appropriate use of data-driven research results considering potential biases. Failure to understand various biases and how they can skew research results can lead to suboptimal and potentially deleterious decision-making and negatively impact both patient and system outcomes.
Disclosure: Aleksandar Popovic declares no relevant financial relationships with ineligible companies.
Disclosure: Martin Huecker declares no relevant financial relationships with ineligible companies.
This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.
Cite this page: Popovic A, Huecker MR. Study Bias. [Updated 2023 Jun 20]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.
How Does Implicit Bias Influence Behavior?
Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."
Akeem Marsh, MD, is a board-certified child, adolescent, and adult psychiatrist who has dedicated his career to working with medically underserved communities.
An implicit bias is an unconscious association, belief, or attitude toward any social group. Implicit biases are one reason why people often attribute certain qualities or characteristics to all members of a particular group, a phenomenon known as stereotyping.
It is important to remember that implicit biases operate almost entirely on an unconscious level. While explicit biases and prejudices are intentional and controllable, implicit biases are less so.
A person may even express explicit disapproval of a certain attitude or belief while still harboring similar biases on a more unconscious level. Such biases do not necessarily align with our own sense of self and personal identity. People can also hold positive or negative associations about their own race, gender, religion, sexuality, or other personal characteristics.
Causes of Implicit Bias
While people might like to believe that they are not susceptible to these implicit biases and stereotypes, the reality is that everyone engages in them whether they like it or not. This reality, however, does not mean that you are necessarily prejudiced or inclined to discriminate against other people. It simply means that your brain is working in a way that makes associations and generalizations.
We are shaped by our environment and by the stereotypes that already exist in the society into which we were born, and it is generally impossible to separate ourselves from that influence.
You can, however, become more aware of your unconscious thinking and the ways in which society influences you.
It is the natural tendency of the brain to sift, sort, and categorize information about the world that leads to the formation of these implicit biases. We're susceptible to bias because of these tendencies:
- We tend to seek out patterns. Implicit bias occurs because of the brain's natural tendency to look for patterns and associations in the world. Social cognition, or our ability to store, process, and apply information about people in social situations, is dependent on this ability to form associations about the world.
- We like to take shortcuts. Like other cognitive biases, implicit bias is a result of the brain's tendency to try to simplify the world. Because the brain is constantly inundated with more information than it could conceivably process, mental shortcuts make it faster and easier for the brain to sort through all of this data.
- Our experiences and social conditioning play a role. Implicit biases are influenced by experiences, although these attitudes may not be the result of direct personal experience. Cultural conditioning, media portrayals, and upbringing can all contribute to the implicit associations that people form about the members of other social groups.
How Implicit Bias Is Measured
The term implicit bias was coined by social psychologists Mahzarin Banaji and Tony Greenwald in 1995. In an influential paper introducing their theory of implicit social cognition, they proposed that social behavior is largely influenced by unconscious associations and judgments.
In 1998, Banaji and Greenwald published their now-famous Implicit Association Test (IAT) to support their hypothesis. The test uses a computer program to show respondents a series of images and words and measures how long it takes them to choose between two things.
Subjects might be shown images of faces of different racial backgrounds, for example, in conjunction with either a positive word or a negative word. Subjects would then be asked to click on a positive word when they saw an image of someone from one race and to click on a negative word when they saw someone of another race.
Interpreting the Results
The researchers suggest that when someone clicks quickly, it means that they possess a stronger unconscious association. If a person quickly clicks on a negative word every time they see a person of a particular race, the researchers suggest that this would indicate that they hold an implicit negative bias toward individuals of that race.
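The scoring idea behind this latency comparison can be sketched in a few lines of Python. This is loosely modeled on the published IAT D-score (a mean-latency difference scaled by a pooled standard deviation); the latencies below are invented, and the real algorithm involves additional trial trimming and block-level pooling:

```python
import statistics

def d_score(compatible_ms, incompatible_ms):
    """IAT-style D score: difference in mean response latencies between
    the two pairing conditions, scaled by the pooled standard deviation
    of all latencies. Larger positive values mean the respondent was
    systematically slower in the 'incompatible' pairing."""
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return (statistics.mean(incompatible_ms)
            - statistics.mean(compatible_ms)) / pooled_sd

# Invented latencies, in milliseconds, for one hypothetical respondent.
compatible = [620, 580, 640, 600, 610]      # e.g. stereotype-consistent pairing
incompatible = [780, 820, 760, 800, 790]    # e.g. stereotype-inconsistent pairing
print(round(d_score(compatible, incompatible), 2))
```

A respondent who is equally fast under both pairings would score near zero; consistently slower responses under one pairing push the score away from zero in that direction.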
In addition to a test of implicit racial attitudes, the IAT has also been utilized to measure unconscious biases related to gender, weight, sexuality, disability, and other areas. The IAT has grown in popularity and use over the last decade, yet has recently come under fire.
Among the main criticisms are findings that the test results may lack reliability. Respondents may score high on racial bias on one test and low the next time they are tested.
Also of concern is that scores on the test may not necessarily correlate with individual behavior. People may score high for a type of bias on the IAT, but those results may not accurately predict how they would relate to members of a specific social group.
Link Between Implicit Bias and Discrimination
It is important to understand that implicit bias is not the same thing as racism, although the two concepts are related. Overt racism involves conscious prejudice against members of a particular racial group and can be influenced by both explicit and implicit biases.
Other forms of discrimination that can be influenced by unconscious biases include ageism , sexism, homophobia, and ableism.
One of the benefits of being aware of the potential impact of implicit social biases is that you can take a more active role in overcoming social stereotypes, discrimination, and prejudice.
Effects of Implicit Bias
Implicit biases can influence how people behave toward the members of different social groups. Researchers have found that such bias can have effects in a number of settings, including in school, work, and legal proceedings.
Implicit Bias in School
Implicit bias can lead to a phenomenon known as stereotype threat in which people internalize negative stereotypes about themselves based upon group associations. Research has shown, for example, that young girls often internalize implicit attitudes related to gender and math performance.
By the age of 9, girls have been shown to exhibit the unconscious belief that females prefer language over math. The stronger these implicit beliefs are, the less likely girls and women are to pursue math in school. Such unconscious beliefs are also believed to play a role in discouraging women from pursuing careers in science, technology, engineering, and mathematics (STEM) fields.
Studies have also demonstrated that implicit attitudes can also influence how teachers respond to student behavior, suggesting that implicit bias can have a powerful impact on educational access and academic achievement.
One study, for example, found that Black children—and Black boys in particular—were more likely to be expelled from school for behavioral issues. When teachers were told to watch for challenging behaviors, they were more likely to focus on Black children than on White children.
Implicit Bias in the Workplace
While the Implicit Association Test itself may have pitfalls, these problems do not negate the existence of implicit bias, or the existence and effects of bias, prejudice, and discrimination in the real world. Such prejudices can have very real and potentially devastating consequences.
One study, for example, found that when Black and White job seekers sent out similar resumes to employers, Black applicants were half as likely to be called in for interviews as White job seekers with equal qualifications.
Such discrimination is likely the result of both explicit and implicit biases toward racial groups.
Even when employers strive to eliminate potential bias in hiring, subtle implicit biases may still have an impact on how people are selected for jobs or promoted to advanced positions. Avoiding such biases entirely can be difficult, but being aware of their existence and striving to minimize them can help.
Implicit Bias in Healthcare Settings
Age, race, and health condition should not play a role in how patients are treated. However, implicit bias can influence the quality of healthcare and have long-term impacts, including suboptimal care, adverse outcomes, and even death.
For example, one study published in the American Journal of Public Health found that physicians with high scores in implicit bias tended to dominate conversations with Black patients and, as a result, the Black patients had less confidence and trust in the provider and rated the quality of their care lower.
Researchers continue to investigate implicit bias in relation to other ethnic groups as well as specific health conditions, including type 2 diabetes, obesity, mental health, and substance use disorders.
Implicit Bias in Legal Settings
Implicit biases can also have troubling implications in legal proceedings, influencing everything from initial police contact all the way through sentencing. Research has found that there is an overwhelming racial disparity in how Black defendants are treated in criminal sentencing.
Not only are Black defendants less likely to be offered plea bargains than White defendants charged with similar crimes, but they are also more likely to receive longer and harsher sentences than White defendants.
Strategies to Reduce the Impact of Implicit Bias
Implicit biases impact behavior, but there are things you can do to reduce your own bias. Some ways to reduce the influence of implicit bias:
- Focus on seeing people as individuals. Rather than focusing on stereotypes to define people, spend time considering them on a more personal, individual level.
- Work on consciously changing your stereotypes. If you do recognize that your response to a person might be rooted in biases or stereotypes, make an effort to consciously adjust your response.
- Take time to pause and reflect. In order to reduce reflexive reactions, take time to reflect on potential biases and replace them with positive examples of the stereotyped group.
- Adjust your perspective. Try seeing things from another person's point of view. How would you respond if you were in the same position? What factors might contribute to how a person acts in a particular setting or situation?
- Increase your exposure. Spend more time with people of different racial backgrounds. Learn about their culture by attending community events or exhibits.
- Practice mindfulness. Try meditation, yoga, or focused breathing to increase mindfulness and become more aware of your thoughts and actions.
While implicit bias is difficult to eliminate altogether, there are strategies that you can utilize to reduce its impact. Taking steps such as actively working to overcome your biases, taking other people's perspectives, seeking greater diversity in your life, and building your awareness about your own thoughts are a few ways to reduce the impact of implicit bias.
A Word From Verywell
Implicit biases can be troubling, but they are also a pervasive part of life. Perhaps more troubling, your unconscious attitudes may not necessarily align with your declared beliefs. While people are more likely to hold implicit biases that favor their own in-group, it is not uncommon for people to hold biases against their own social group as well.
The good news is that these implicit biases are not set in stone. Even if you do hold unconscious biases against other groups of people, it is possible to adopt new attitudes, even on the unconscious level. This process is not necessarily quick or easy, but being aware of the existence of these biases is a good place to start making a change.
Jost JT. The existence of implicit bias is beyond reasonable doubt: A refutation of ideological and methodological objections and executive summary of ten studies that no manager should ignore . Research in Organizational Behavior . 2009;29:39-69. doi:10.1016/j.riob.2009.10.001
Greenwald AG, Mcghee DE, Schwartz JL. Measuring individual differences in implicit cognition: The implicit association test . J Pers Soc Psychol. 1998;74(6):1464-1480. doi:10.1037/0022-3514.74.6.1464
Sabin J, Nosek BA, Greenwald A, Rivara FP. Physicians' implicit and explicit attitudes about race by MD race, ethnicity, and gender . J Health Care Poor Underserved. 2009;20(3):896-913. doi:10.1353/hpu.0.0185
Capers Q, Clinchot D, McDougle L, Greenwald AG. Implicit racial bias in medical school admissions . Acad Med . 2017;92(3):365-369. doi:10.1097/ACM.0000000000001388
Kiefer AK, Sekaquaptewa D. Implicit stereotypes and women's math performance: How implicit gender-math stereotypes influence women's susceptibility to stereotype threat . Journal of Experimental Social Psychology. 2007;43(5):825-832. doi:10.1016/j.jesp.2006.08.004
Steffens MC, Jelenec P, Noack P. On the leaky math pipeline: Comparing implicit math-gender stereotypes and math withdrawal in female and male children and adolescents . Journal of Educational Psychology. 2010;102(4):947-963. doi:10.1037/a0019920
Edward Zigler Center in Child Development & Social Policy, Yale School of Medicine. Implicit Bias in Preschool: A Research Study Brief .
Pager D, Western B, Bonikowski B. Discrimination in a low-wage labor market: A field experiment . Am Sociol Rev. 2009;74(5):777-799. doi:10.1177/000312240907400505
Malinen S, Johnston L. Workplace ageism: Discovering hidden bias . Exp Aging Res. 2013;39(4):445-465. doi:10.1080/0361073X.2013.808111
Cooper LA, Roter DL, Carson KA, et al. The associations of clinicians' implicit attitudes about race with medical visit communication and patient ratings of interpersonal care . Am J Public Health . 2012;102(5):979-87. doi:10.2105/AJPH.2011.300558
Leiber MJ, Fox KC. Race and the impact of detention on juvenile justice decision making . Crime & Delinquency. 2005;51(4):470-497. doi:10.1177/0011128705275976
Van Ryn M, Hardeman R, Phelan SM, et al. Medical school experiences associated with change in implicit racial bias among 3547 students: A medical student CHANGES study report . J Gen Intern Med. 2015;30(12):1748-1756. doi:10.1007/s11606-015-3447-7
By Kendra Cherry, MSEd
Assignment Bias
Assignment bias is a term used in the analysis of research data for factors that can skew the results of a study. For instance, suppose a research study compares test results from students at two different schools. Even if the researchers control for the age, gender, and grade level of the students being studied, they might not be able to control factors such as the students' ethnic backgrounds, school quality, and family backgrounds. This lack of control can adversely affect the reliability of the experiment.
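The two-school example can be made concrete with a small simulation. Everything here is invented for illustration: a latent "background" factor drives test scores, neither school adds anything, and yet self-selection produces a large apparent gap that randomized assignment does not:

```python
import random

random.seed(0)

# Hypothetical students: a latent "background" factor (family, prior
# schooling, etc.) that affects test results but is outside the
# researcher's control.
students = [random.gauss(0, 1) for _ in range(10_000)]

def observed_gap(assign):
    """Mean test-score gap between schools A and B. The schools
    themselves contribute nothing here, so any gap reflects
    assignment bias alone."""
    groups = {"A": [], "B": []}
    for s in students:
        groups[assign(s)].append(70 + 5 * s)  # score driven only by background
    a, b = groups["A"], groups["B"]
    return sum(a) / len(a) - sum(b) / len(b)

# Self-selection: students with stronger backgrounds tend to end up at A.
biased = observed_gap(lambda s: "A" if s + random.gauss(0, 1) > 0 else "B")
# Randomized assignment ignores background entirely.
randomized = observed_gap(lambda s: random.choice("AB"))

print(f"self-selected gap: {biased:.1f} points, randomized gap: {randomized:.1f}")
```

Under self-selection, school A appears several points better purely because of who enrolled there; under randomization the gap shrinks to sampling noise.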
Principles of Clinical Trials: Bias and Precision Control
Randomization, Stratification, and Minimization
- Reference work entry
- First Online: 20 July 2022
- Fan-fan Yu
The fundamental difference distinguishing observational studies from clinical trials is randomization. This chapter provides a practical guide to concepts of randomization that are widely used in clinical trials. It starts by describing bias and potential confounding arising from allocating people to treatment groups in a predictable way. It then presents the concept of randomization, starting from a simple coin flip, and sequentially introduces methods with additional restrictions to account for better balance of the groups with respect to known (measured) and unknown (unmeasured) variables. These include descriptions and examples of complete randomization and permuted block designs. The text briefly describes biased coin designs that extend this family of designs. Stratification is introduced as a way to provide treatment balance on specific covariates and covariate combinations, and an adaptive counterpart of biased coin designs, minimization, is described. The chapter concludes with some practical considerations when creating and implementing randomization schedules.
By the chapter's end, statisticians or clinicians designing a trial should be able to distinguish which assignment methods may fit the needs of their trial and whether stratifying by prognostic variables may be appropriate. The statistical properties of the methods are left to the individual references at the end.
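As a rough illustration of one family of methods the chapter covers, a permuted block design can be sketched in a few lines of Python. This is a generic textbook construction, not code from the chapter, and the stratum labels are invented:

```python
import random

def permuted_block_schedule(n, arms=("A", "B"), block_size=4, seed=2022):
    """Permuted block randomization: within each block of `block_size`,
    every arm appears equally often in random order, so group sizes can
    never drift apart by more than half a block."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the arm count"
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)      # randomize order within the block
        schedule.extend(block)
    return schedule[:n]

# Stratification simply runs an independent schedule per stratum,
# e.g. one sequence for each site-by-severity combination.
strata = ["site1/mild", "site1/severe", "site2/mild", "site2/severe"]
schedules = {s: permuted_block_schedule(20, seed=i) for i, s in enumerate(strata)}

schedule = permuted_block_schedule(12)
print(schedule)
```

Because each full block is exactly balanced, the design trades some unpredictability (the last assignment in a block can be deterministic) for guaranteed balance, which is why block sizes are often varied and concealed in practice.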
Buyse M (2000) Centralized treatment allocation in comparative clinical trials. Applied Clinical Trials 9:32–37
Byar D, Simon R, Friendewald W, Schlesselman J, DeMets D, Ellenberg J, Gail M, Ware J (1976) Randomized clinical trials – perspectives on some recent ideas. N Engl J Med 295:74–80
Hennekens C, Buring J, Manson J, Stampfer M, Rosner B, Cook NR, Belanger C, LaMotte F, Gaziano J, Ridker P, Willett W, Peto R (1996) Lack of effect of long-term supplementation with beta carotene on the incidence of malignant neoplasms and cardiovascular disease. N Engl J Med 334:1145–1149
Ivanova A (2003) A play-the-winner type urn model with reduced variability. Metrika 58:1–13
Kahan B, Morris T (2012) Improper analysis of trials randomized using stratified blocks or minimisation. Stat Med 31:328–340
Lachin J (1988a) Statistical properties of randomization in clinical trials. Control Clin Trials 9:289–311
Lachin J (1988b) Properties of simple randomization in clinical trials. Control Clin Trials 9:312–326
Lachin JM, Matts JP, Wei LJ (1988) Randomization in clinical trials: Conclusions and recommendations. Control Clin Trials 9(4):365–374
Leyland-Jones B, Bondarenko I, Nemsadze G, Smirnov V, Litvin I, Kokhreidze I, Abshilava L, Janjalia M, Li R, Lakshmaiah KC, Samkharadze B, Tarasova O, Mohapatra RK, Sparyk Y, Polenkov S, Vladimirov V, Xiu L, Zhu E, Kimelblatt B, Deprince K, Safonov I, Bowers P, Vercammen E (2016) A randomized, open-label, multicenter, phase III study of epoetin alfa versus best standard of care in anemic patients with metastatic breast cancer receiving standard chemotherapy. J Clin Oncol 34:1197–1207
Matthews J (2000) An introduction to randomized controlled clinical trials. Oxford University Press, Inc., New York
Matts J, Lachin J (1988) Properties of permuted-block randomization in clinical trials. Control Clin Trials 9:345–364
Pocock S, Simon R (1975) Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial. Biometrics 31:103–115
Proschan M, Brittain E, Kammerman L (2011) Minimize the use of minimization with unequal allocation. Biometrics 67(3):1135–1141. https://doi.org/10.1111/j.1541-0420.2010.01545.x
Rosenberger W, Uschner D, Wang Y (2018) Randomization: the forgotten component of the randomized clinical trial. Stat Med 38(1):1–12
Russell S, Bennett J, Wellman J, Chung D, Yu Z, Tillman A, Wittes J, Pappas J, Elci O, McCague S, Cross D, Marshall K, Walshire J, Kehoe T, Reichert H, Davis M, Raffini L, Lindsey G, Hudson F, Dingfield L, Zhu X, Haller J, Sohn E, Mahajin V, Pfeifer W, Weckmann M, Johnson C, Gewaily D, Drack A, Stone E, Wachtel K, Simonelli F, Leroy B, Wright J, High K, Maguire A (2017) Efficacy and safety of voretigene neparvovec (AAV2-hRPE65v2) in patients with REP65-mediated inherited retinal dystrophy: a randomised, controlled, open-label, phase 3 trial. Lancet 390:849–860
Scott N, McPherson G, Ramsay C (2002) The method of minimization for allocation to clinical trials: a review. Control Clin Trials 23:662–674
Taves DR (1974) Minimization: a new method of assigning patients to treatment and control groups. Clin Pharmacol Ther 15:443–453
Wei L, Durham S (1978) The randomized play-the-winner rule in medical trials. J Am Stat Assoc 73(364):840–843
Yu, Ff. (2022). Principles of Clinical Trials: Bias and Precision Control. In: Piantadosi, S., Meinert, C.L. (eds) Principles and Practice of Clinical Trials. Springer, Cham. https://doi.org/10.1007/978-3-319-52636-2_211
What is assignment bias? Assignment bias happens when experimental groups have significantly different characteristics due to a faulty assignment process. For example, in a study using intelligence tests, one group might end up with significantly more high scorers than the other.
Broadly speaking, bias is a tendency to lean in favor of or against a person, group, idea, or thing, usually in an unfair way. Biases are natural, a product of human nature, and they do not simply exist in a vacuum or in our minds: they affect the way we make decisions and act.
Academic assessment bias refers to assessments that unfairly penalize or impact students based on their personal characteristics, such as race, gender, socioeconomic status, religion and place of origin.
Response bias refers to several factors that can lead someone to respond falsely or inaccurately to a question. Self-report questions, such as those asked on surveys or in structured interviews, are particularly prone to this type of bias. Example: Response bias. A job applicant is asked to take a personality test during the recruitment process.
Assignment bias occurs when experimental groups have significantly different characteristics due to a faulty assignment process. Outcomes may then be skewed by inherent differences between the groups rather than by the treatment.
When the assignment results in prognostic factors that are unequally distributed across the treatment groups, then the effect of the treatment on the final outcome may be confounded with the effect of the factor. This is an example of assignment bias. Mitigating bias results in more accurate estimates of treatment differences.
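A small simulation can illustrate this confounding, and how stratifying on the prognostic factor mitigates it. All numbers are invented for illustration; the true treatment effect is fixed at +2, but a faulty allocation places severe patients mostly in the control arm:

```python
import random

random.seed(1)

def outcome(treated, severe):
    # True treatment effect is +2; severe disease lowers the outcome by 4.
    return 10 + 2 * treated - 4 * severe + random.gauss(0, 0.5)

# Faulty allocation: severe patients land mostly in the control arm.
cohort = [(t, s, outcome(t, s))
          for t in (0, 1) for s in (0, 1)
          for _ in range(100 if t == s else 400)]

def mean_diff(rows):
    """Mean outcome difference, treated minus control."""
    treated = [y for t, _, y in rows if t == 1]
    control = [y for t, _, y in rows if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

naive = mean_diff(cohort)  # confounded: mixes treatment and severity effects
adjusted = sum(mean_diff([r for r in cohort if r[1] == s]) for s in (0, 1)) / 2

print(f"naive estimate: {naive:.2f}, stratum-adjusted: {adjusted:.2f} (true: 2.0)")
```

The naive comparison substantially overstates the treatment effect because the treated arm is also the less-severe arm; averaging the within-stratum differences recovers an estimate close to the true +2.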