Confirmation Bias In Psychology: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She will begin a Master's degree in Counseling for Mental Health and Wellness in September 2023. Julia's research has been published in peer-reviewed journals.


Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Confirmation Bias is the tendency to look for information that supports, rather than rejects, one’s preconceptions, typically by interpreting evidence to confirm existing beliefs while rejecting or ignoring any conflicting data (American Psychological Association).

One of the earliest demonstrations of confirmation bias appeared in an experiment by Peter Wason (1960), in which subjects were asked to discover the experimenter's rule for sequencing numbers.

The results showed that subjects chose tests that supported their hypotheses while rejecting contradictory evidence, and even though their hypotheses were incorrect, they quickly became confident in them (Gray, 2010, p. 356).

Though such evidence of confirmation bias has appeared in psychological literature throughout history, the term ‘confirmation bias’ was first used in a 1977 paper detailing an experimental study on the topic (Mynatt, Doherty, & Tweney, 1977).


Biased Search for Information

This type of confirmation bias explains people’s search for evidence in a one-sided way to support their hypotheses or theories.

Experiments have shown that people provide tests/questions designed to yield “yes” if their favored hypothesis is true and ignore alternative hypotheses that are likely to give the same result.

This is also known as the congruence heuristic (Baron, 2000, pp. 162–164). Though the preference for affirmative questions itself may not be biased, experiments have shown that congruence bias does exist.

For Example:

If you were to search "Are cats better than dogs?" on Google, the top results would largely be sites listing reasons why cats are better.

If you were to search "Are dogs better than cats?" instead, Google would mostly surface sites arguing that dogs are better than cats.

This shows that phrasing questions in a one-sided (i.e., affirmative) manner helps you obtain evidence consistent with your hypothesis.

Biased Interpretation

This type of bias explains that people interpret evidence concerning their existing beliefs by evaluating confirming evidence differently than evidence that challenges their preconceptions.

Various experiments have shown that people tend not to change their beliefs on complex issues even after being provided with research because of the way they interpret the evidence.

Additionally, people accept “confirming” evidence more easily and critically evaluate the “disconfirming” evidence (this is known as disconfirmation bias) (Taber & Lodge, 2006).

When provided with the same evidence, people’s interpretations could still be biased.

For example:

Biased interpretation was demonstrated in an experiment conducted at Stanford University on the topic of capital punishment. Participants included some who supported capital punishment and some who opposed it.

All subjects were provided with the same two studies.

After reading the detailed descriptions of the studies, participants still held their initial beliefs and supported their reasoning by providing “confirming” evidence from the studies and rejecting any contradictory evidence or considering it inferior to the “confirming” evidence (Lord, Ross, & Lepper, 1979).

Biased Memory

To confirm their current beliefs, people may remember/recall information selectively. Psychological theories vary in defining memory bias.

Some theories state that information confirming prior beliefs is stored in the memory while contradictory evidence is not (i.e., Schema theory). Some others claim that striking information is remembered best (i.e., humor effect).

Memory confirmation bias also serves a role in stereotype maintenance. Experiments have shown that the mental association between expectancy-confirming information and the group label strongly affects recall and recognition memory.

Though a certain stereotype about a social group might not be true for an individual, people tend to remember the stereotype-consistent information better than any disconfirming evidence (Fyock & Stangor, 1994).

In one experimental study, participants were asked to read a woman’s profile (detailing her extroverted and introverted skills) and assess her for either a job of a librarian or real-estate salesperson.

Those assessing her as a salesperson better recalled extroverted traits, while the other group recalled more examples of introversion (Snyder & Cantor, 1979).

These experiments, along with others, have offered an insight into selective memory and provided evidence for biased memory, proving that one searches for and better remembers confirming evidence.


Social Media

The information we are presented with on social media reflects not only what users want to see but also the designers' beliefs and values. Today, people are exposed to an overwhelming number of news sources, each varying in credibility.

To form conclusions, people tend to read news that aligns with their perspectives. For instance, news channels present information (even the same story) differently from one another on complex issues (e.g., racism, political parties), with some using sensational headlines, pictures, and one-sided information.

Due to such biased coverage, people rely on only certain channels or sites for their information and thus reach biased conclusions.

Religious Faith

People also tend to search for and interpret evidence with respect to their religious beliefs (if any).

For instance, on topics such as abortion and transgender rights, people whose religions oppose these things will interpret the information differently than others and will look for evidence that validates what they believe.

Similarly, those who religiously reject the theory of evolution will either gather information disproving evolution or hold no official stance on the topic.

Also, irreligious people might perceive events that are considered “miracles” and “test of faiths” by religious people to be a reinforcement of their lack of faith in a religion.

When Does Confirmation Bias Occur?

There are several explanations for why humans possess confirmation bias, including that this tendency is an efficient way to process information, protects self-esteem, and minimizes cognitive dissonance.

Information Processing

Confirmation bias serves as an efficient way to process the near-limitless amount of information humans are exposed to.

To form an unbiased decision, one would have to critically evaluate every piece of available information, which is unfeasible. Therefore, people tend to look only for the information needed to reach their conclusions (Casad, 2019).

Protect Self-esteem

People are susceptible to confirmation bias to protect their self-esteem (to know that their beliefs are accurate).

To make themselves feel confident, they tend to look for information that supports their existing beliefs (Casad, 2019).

Minimize Cognitive Dissonance

Cognitive dissonance also explains why confirmation bias is adaptive.

Cognitive dissonance is the mental conflict that occurs when a person holds two contradictory beliefs; it causes psychological stress and unease.

To minimize this dissonance, people adapt to confirmation bias by avoiding information that is contradictory to their views and seeking evidence confirming their beliefs.

Challenge avoidance and reinforcement seeking affect people's thoughts and reactions differently, since exposure to disconfirming information produces negative emotions that are absent when seeking reinforcing evidence (“The Confirmation Bias: Why People See What They Want to See”).

Implications

Confirmation bias consistently shapes the way we look for and interpret information, influencing decisions everywhere from the home to global platforms. This bias prevents people from gathering information objectively.

During election campaigns, people tend to look for information confirming their perspectives on the candidates while ignoring any information that contradicts their views.

This subjective manner of obtaining information can lead to overconfidence in a candidate and to misinterpreting or overlooking important information, thus influencing voting decisions and, eventually, the country's leadership (Cherry, 2020).

Recruitment and Selection

Confirmation bias also affects employment diversity because preconceived ideas about different social groups can introduce discrimination (though it might be unconscious) and impact the recruitment process (Agarwal, 2018).

The existing belief that certain groups are more competent than others is why particular races and genders are overrepresented in companies today. This bias can hamper a company's attempts to diversify its workforce.

Mitigating Confirmation Bias

Change in intrapersonal thought:

To avoid being susceptible to confirmation bias, start questioning your research methods and the sources you use to obtain information.

Expanding the types of sources you consult can expose different aspects of a topic and let you compare their credibility.

  • Read entire articles rather than forming conclusions based on the headlines and pictures.
  • Search for credible evidence presented in the article.
  • Analyze whether the statements being asserted are backed by trustworthy evidence (tracking the source of evidence can establish its credibility).
  • Encourage yourself and others to gather information in a conscious manner.

Alternative hypothesis:

Confirmation bias occurs when people look for information that confirms their beliefs or hypotheses, but this bias can be reduced by taking alternative hypotheses and their consequences into account.

Considering the possibility of beliefs/hypotheses other than one’s own could help you gather information in a more dynamic manner (rather than a one-sided way).

Related Cognitive Biases

Many cognitive biases are characterized as subtypes of confirmation bias. Two of these subtypes follow:

Backfire Effect

The backfire effect occurs when people’s preexisting beliefs strengthen when challenged by contradictory evidence (Silverman, 2011).

  • Therefore, disproving a misconception can actually strengthen a person’s belief in that misconception.

One piece of disconfirming evidence does not change people’s views, but a constant flow of credible refutations could correct misinformation/misconceptions.

This effect is considered a subtype of confirmation bias because it explains people’s reactions to new information based on their preexisting hypotheses.

A study by Brendan Nyhan and Jason Reifler (two researchers on political misinformation) explored the effects of different types of statements on people’s beliefs.

Comparing the two statements “I am not a Muslim, Obama says” and “I am a Christian, Obama says,” they concluded that the latter was more persuasive and changed more people's beliefs; thus, affirmative statements are more effective at correcting incorrect views (Silverman, 2011).

Halo Effect

The halo effect occurs when people use impressions from a single trait to form conclusions about other unrelated attributes. It is heavily influenced by the first impression.

Research on this effect was pioneered by American psychologist Edward Thorndike who, in 1920, described ways officers rated their soldiers on different traits based on first impressions (Neugaard, 2019).

Experiments have shown that when positive attributes are presented first, a person is judged more favorably than when negative traits are shown first. This is a subtype of confirmation bias because it allows us to structure our thinking about other information using only initial evidence.

Learning Check

When does confirmation bias occur?

  A. When an individual only researches information that is consistent with personal beliefs.
  B. When an individual only makes a decision after all perspectives have been evaluated.
  C. When an individual becomes more confident in one's judgments after researching alternative perspectives.
  D. When an individual believes that the odds of an event occurring increase if the event hasn't occurred recently.

The correct answer is A. Confirmation bias occurs when an individual only researches information consistent with personal beliefs. This bias leads people to favor information that confirms their preconceptions or hypotheses, regardless of whether the information is true.

Take-home Messages

  • Confirmation bias is the tendency of people to favor information that confirms their existing beliefs or hypotheses.
  • Confirmation bias happens when a person gives more weight to evidence that confirms their beliefs and undervalues evidence that could disprove it.
  • People display this bias when they gather or recall information selectively or when they interpret it in a biased way.
  • The effect is stronger for emotionally charged issues and for deeply entrenched beliefs.

Agarwal, P. (2018, October 19). Here Is How Bias Can Affect Recruitment In Your Organisation. Forbes. https://www.forbes.com/sites/pragyaagarwaleurope/2018/10/19/how-can-bias-during-interviewsaffect-recruitment-in-your-organisation

American Psychological Association. (n.d.). APA Dictionary of Psychology. https://dictionary.apa.org/confirmation-bias

Baron, J. (2000). Thinking and Deciding (Third ed.). Cambridge University Press.

Casad, B. (2019, October 9). Confirmation bias. https://www.britannica.com/science/confirmation-bias

Cherry, K. (2020, February 19). Why Do We Favor Information That Confirms Our Existing Beliefs? https://www.verywellmind.com/what-is-a-confirmation-bias-2795024

Fyock, J., & Stangor, C. (1994). The role of memory biases in stereotype maintenance. The British journal of social psychology, 33 (3), 331–343.

Gray, P. O. (2010). Psychology. New York: Worth Publishers.

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37 (11), 2098–2109.

Mynatt, C. R., Doherty, M. E., & Tweney, R. D. (1977). Confirmation bias in a simulated research environment: An experimental study of scientific inference. Quarterly Journal of Experimental Psychology, 29 (1), 85-95.

Neugaard, B. (2019, October 09). Halo effect. https://www.britannica.com/science/halo-effect

Silverman, C. (2011, June 17). The Backfire Effect. https://archives.cjr.org/behind_the_news/the_backfire_effect.php

Snyder, M., & Cantor, N. (1979). Testing hypotheses about other people: The use of historical knowledge. Journal of Experimental Social Psychology, 15 (4), 330–342.

Further Information

  • What Is Confirmation Bias and When Do People Actually Have It?
  • Confirmation Bias: A Ubiquitous Phenomenon in Many Guises
  • The importance of making assumptions: why confirmation is not necessarily a bias
  • Decision Making Is Caused By Information Processing And Emotion: A Synthesis Of Two Approaches To Explain The Phenomenon Of Confirmation Bias

Confirmation bias occurs when individuals selectively collect, interpret, or remember information that confirms their existing beliefs or ideas, while ignoring or discounting evidence that contradicts these beliefs.

This bias can happen unconsciously and can influence decision-making and reasoning in various contexts, such as research, politics, or everyday decision-making.

What is confirmation bias in psychology?

Confirmation bias in psychology is the tendency to favor information that confirms existing beliefs or values. People exhibiting this bias are likely to seek out, interpret, remember, and give more weight to evidence that supports their views, while ignoring, dismissing, or undervaluing the relevance of evidence that contradicts them.

This can lead to faulty decision-making because one-sided information doesn’t provide a full picture.

Examples of confirmation bias:

  • Only reading news sources that align with your political views: a classic example of confirmation bias in media consumption.
  • Remembering the times your horoscope was right but forgetting when it was wrong: an illustration of selective memory bias.
  • Interpreting ambiguous symptoms as confirmation of a self-diagnosed illness: how confirmation bias can affect health-related decisions.
  • Believing in a conspiracy theory and only seeking information that supports it: how confirmation bias can reinforce unfounded beliefs.
  • Judging a job candidate based on first impressions, ignoring contradictory information: related to the halo effect discussed above.
  • Assuming a product is high quality because it's from a brand you like: how brand loyalty can lead to confirmation bias in consumer behavior.
  • Believing your favorite sports team is the best despite their losing record: how emotional attachment can contribute to confirmation bias.


What Is Confirmation Bias? | Definition & Examples

Published on September 19, 2022 by Kassiani Nikolopoulou . Revised on March 10, 2023.

Confirmation bias is the tendency to seek out and prefer information that supports our preexisting beliefs. As a result, we tend to ignore any information that contradicts those beliefs.

Confirmation bias is often unintentional but can still lead to poor decision-making in (psychology) research and in legal or real-life contexts.

Table of contents

  • What is confirmation bias?
  • Types of confirmation bias
  • Confirmation bias examples
  • How to avoid confirmation bias
  • Other types of research bias
  • Frequently asked questions about confirmation bias

Confirmation bias is a type of cognitive bias , or an error in thinking. Processing all the facts available to us costs us time and energy, so our brains tend to pick the information that agrees most with our preexisting opinions and knowledge. This leads to faster decision-making. Mental “shortcuts” like this are called heuristics.

Confirmation bias

When confronted with new information that confirms what we already believe, we are more likely to:

  • Accept it as true and accurate
  • Overlook any flaws or inconsistencies
  • Incorporate it into our belief system
  • Recall it later, using it to support our belief during a discussion

On the other hand, if the new information contradicts what we already believe, we respond differently. We are more likely to:

  • Become defensive about it
  • Focus on criticizing any flaw, while that same flaw would be ignored if the information confirmed our beliefs
  • Forget this information quickly, not recalling reading or hearing about it later on

There are three main ways that people display confirmation bias:

  • Selective search
  • Selective interpretation
  • Selective recall

Biased search for information

This type of bias occurs when only positive evidence is sought, or evidence that supports your expectations or hypotheses. Evidence that could prove them wrong is systematically disregarded.

For example, if you type “are dogs better than cats?” into a search engine, the first results will support dogs. If you reverse the question and type “are cats better than dogs?”, you will get results in support of cats.

This will happen with any two variables: the search engine “assumes” that you think variable A is better than variable B, and shows you the results that agree with your opinion first.

Biased interpretation of information

Confirmation bias is not limited to the type of information we search for. Even if two people are presented with the same information, it is possible that they will interpret it differently.

Suppose two readers encounter the same news article about climate change. The reader who doubts climate change may interpret the article as evidence that climate change is natural and has happened at other points in history. Any arguments raised in the article about the negative impact of fossil fuels will be dismissed.

On the other hand, the reader who is concerned about climate change will view the information as evidence that climate change is a threat and that something must be done about it. Appeals to cut down fossil fuel emissions will be viewed favorably.

Biased recall of information

Confirmation bias also affects what type of information we are able to recall.

A week after encountering the story, the reader who is concerned about climate change is more likely to recall these arguments in a discussion with friends. On the contrary, a climate change doubter likely won’t be able to recall the points made in the article.

Confirmation bias has serious implications for our ability to seek objective facts. It can lead individuals to “cherry-pick” bits of information that reinforce any prejudices or stereotypes.

Suppose a man arrives at the emergency room complaining of pain. An overworked physician, believing this is just drug-seeking behavior, examines him hastily in the hall. The physician confirms that all of the man's vital signs are fine: consistent with what was expected.

The man is discharged. Because the physician was only looking for what was already expected, she missed the signs that the man was actually having a problem with his kidneys.

Confirmation bias can lead to poor decision-making in various contexts, including interpersonal relationships, medical diagnoses, or applications of the law.

Suppose you are researching whether memory games delay memory loss, and you expect that they do. You unconsciously seek information to support your hypothesis during the data collection phase, rather than remaining open to results that could disprove it. At the end of your research, you conclude that memory games do indeed delay memory loss.

Although confirmation bias cannot be entirely eliminated, there are steps you can take to avoid it:

  • First and foremost, accept that you have biases that impact your decision-making. Even though we like to think that we are objective, it is our nature to use mental shortcuts. This allows us to make judgments quickly and efficiently, but it also makes us disregard information that contradicts our views.
  • Do your research thoroughly when searching for information. Actively consider all the evidence available, rather than just the evidence confirming your opinion or belief. Only use credible sources that can pass the CRAAP test.
  • Make sure you read entire articles, not just the headline, prior to drawing any conclusions. Analyze the article to see if there is reliable evidence to support the argument being made. When in doubt, do further research to check if the information presented is trustworthy.

Cognitive bias

  • Confirmation bias
  • Baader–Meinhof phenomenon

Selection bias

  • Sampling bias
  • Ascertainment bias
  • Attrition bias
  • Self-selection bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias
  • Hawthorne effect
  • Observer bias
  • Omitted variable bias
  • Publication bias
  • Pygmalion effect
  • Recall bias
  • Social desirability bias
  • Placebo effect

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

Research bias affects the validity and reliability of your research findings , leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

It can sometimes be hard to distinguish accurate from inaccurate sources , especially online. Published articles are not always credible and can reflect a biased viewpoint without providing evidence to support their conclusions.

Information literacy is important because it helps you to be aware of such unreliable content and to evaluate sources effectively, both in an academic context and more generally.

Confirmation bias is the tendency to search, interpret, and recall information in a way that aligns with our pre-existing values, opinions, or beliefs. It refers to the ability to recollect information best when it amplifies what we already believe. Relatedly, we tend to forget information that contradicts our opinions.

Although selective recall is a component of confirmation bias, it should not be confused with recall bias.

On the other hand, recall bias refers to the differences in the ability between study participants to recall past events when self-reporting is used. This difference in accuracy or completeness of recollection is not related to beliefs or opinions. Rather, recall bias relates to other factors, such as the length of the recall period, age, and the characteristics of the disease under investigation.


Nikolopoulou, K. (2023, March 10). What Is Confirmation Bias? | Definition & Examples. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/research-bias/confirmation-bias/


Confirmation Bias: Seeing What We Want to Believe

Confirmation Bias

Confirmation bias is a widely recognized phenomenon and refers to our tendency to seek out evidence in line with our current beliefs and stick to ideas even when the data contradicts them (Lidén, 2023).

Evolutionary and cognitive psychologists agree that we naturally tend to be selective and look for information we already know (Buss, 2016).

This article explores this tendency, how it happens, why it matters, and what we can do to get better at recognizing it and reducing its impact.


This Article Contains:

  • Understanding confirmation bias
  • Fascinating confirmation bias examples
  • 10 reasons we fall for it
  • 10 steps to recognizing and reducing confirmation bias
  • How confirmation bias impacts research
  • Can confirmation bias be good?
  • Resources from PositivePsychology.com
  • A take-home message

We can understand the confirmation bias definition as the human tendency “to seek out, to interpret, to favor, and to selectively recall information that confirms beliefs they already hold, while avoiding or ignoring information that disconfirms these beliefs” (Gabriel & O’Connor, 2024, p. 1).

While it has been known and accepted since at least the 17th century that humans are inclined to form and hold on to ideas and beliefs — often tenaciously — even when faced with contradictory evidence, the term “confirmation bias” only became popular in the 1960s with the work of cognitive psychologist Peter Cathcart Wason (Lidén, 2023).

Wason’s (1960) famous 2–4–6 experiment was devised to investigate the nature of hypothesis testing.

Participants were given the numbers 2, 4, and 6 and told the numbers adhered to a rule.

They were then asked to arrive at a hypothesis explaining the sequence and try a new three-number series to test their rule (Wason, 1960; Lidén, 2023).

For example, if a participant thought the second number was twice that of the first and the third number was three times greater, they might suggest the numbers 10, 20, and 30.

However, if another participant thought it was a simple series increasing by two each time, they might suggest 13, 15, and 17 (Wason, 1960; Lidén, 2023).

The actual rule is more straightforward; the numbers are in ascending order. That’s all.

As we typically offer tests that confirm our initial beliefs, both example hypotheses appear to work, even if they are not the answer (Wason, 1960; Lidén, 2023).
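The logic of the 2–4–6 task can be sketched in a few lines of code. This is a minimal, hypothetical simulation (the function names and test triples are illustrative, not from Wason's materials): a confirming test satisfies both the participant's narrower hypothesis and the true rule, so it can never distinguish them; only a test the hypothesis predicts should fail can expose the difference.

```python
# Minimal sketch of Wason's 2-4-6 task (illustrative, not the original materials).

def true_rule(seq):
    """Experimenter's actual rule: the three numbers are in ascending order."""
    return seq[0] < seq[1] < seq[2]

def participant_hypothesis(seq):
    """A participant's narrower guess: each number increases by two."""
    return seq[1] - seq[0] == 2 and seq[2] - seq[1] == 2

# A confirming test: chosen precisely because it fits the hypothesis.
confirming = (13, 15, 17)
# A disconfirming test: violates the hypothesis, yet still fits the true rule.
disconfirming = (1, 2, 10)

print(true_rule(confirming), participant_hypothesis(confirming))        # prints: True True
print(true_rule(disconfirming), participant_hypothesis(disconfirming))  # prints: True False
```

Because the confirming triple gets a “yes” under both rules, repeatedly testing such triples leaves the participant ever more confident in a hypothesis that is still wrong; only the disconfirming triple reveals that the true rule is broader.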

The experiment demonstrates our confirmation bias; we seek information confirming our existing beliefs or hypotheses rather than challenging or disproving them (Lidén, 2023).

In the decades since, and with developments in cognitive science, we have come to understand that people don’t typically have everything they need, “and even if they did, they would not be able to use all the information due to constraints in the environment, attention, or memory” (Lidén, 2023, p. 8).

Instead, we rely on heuristics. Such “rules of thumb” are easy to apply and fairly accurate, yet they can potentially result in systematic and serious biases and errors in judgment (Lidén, 2023; Eysenck & Keane, 2015).

Confirmation bias in context

Confirmation bias is one of several cognitive biases (Lidén, 2023).

They are important because researchers have recognized that “vulnerability to clinical anxiety and depression depends in part on various cognitive biases” and that mental health treatments such as CBT  should support the goals of reducing them (Eysenck & Keane, 2015, p. 668).

Cognitive biases include (Eysenck & Keane, 2015):

  • Attentional bias: attending to threat-related stimuli more than neutral stimuli
  • Interpretive bias: interpreting ambiguous stimuli, situations, and events as threatening
  • Explicit memory bias: the likelihood of retrieving mostly unpleasant thoughts rather than positive ones
  • Implicit memory bias: the tendency to perform better for negative or threatening information on memory tests

Individuals possessing all four biases focus too much on environmental threats, interpret most incidents as concerning, and identify themselves as having experienced mostly unpleasant past events (Eysenck & Keane, 2015).

Similarly, confirmation bias means that individuals give too much weight to evidence that confirms their preconceptions or hypotheses, even incorrect and unhelpful ones. It can lead to poor decision-making because it limits their ability to consider alternative viewpoints or evidence that contradicts their beliefs (Lidén, 2023).

Unsurprisingly, such a negative outlook or bias will lead to unhealthy outcomes, including anxiety and depression (Eysenck & Keane, 2015).

Check out Tali Sharot’s video for a deeper dive.

Confirmation bias is commonplace and typically has a low impact, yet there are times when it is significant and newsworthy (Eysenck & Keane, 2015; Lidén, 2023).

Limits of information

In 2005, terrorists detonated four bombs in London (three on the London Underground and one on a bus), killing 52 civilians and injuring around 700. In the chaotic weeks that followed, a further attacker failed to detonate a suicide bomb and escaped (Lidén, 2023).

Unsurprisingly, a mass hunt was launched to capture the escaped bomber, and many suspects came under surveillance. Yet, the security services made several significant mistakes.

On July 22, 2005, a man living in the same house as two suspects and bearing a resemblance to one of them was shot dead on an Underground train by officers.

“The context with the previous bombings, the available intelligence, and the pre-operation briefings, created expectations that the surveillance team would spot a suicide bomber leaving the doorway” (Lidén, 2023, p. 37).

The wrong man died because the officers involved failed to see the limits of the information available to them at the time.

Witness identification

In 1976, factory worker John Demjanjuk from Cleveland, Ohio, was identified as a Nazi war criminal known as Ivan the Terrible, perpetrator of many killings within prison camps in the Second World War (Lidén, 2023).

Due to the individual’s denial and limited evidence, the case rested on proof of identity via a photo line-up. However, it became known that “Ivan the Terrible” had a round face and was bald.

As the defendant was the only individual who matched the description, he was chosen by all the witnesses (Lidén, 2023).

Whether or not the witnesses were genuinely able to identify the factory worker as the criminal became irrelevant. The case centered around the unfairness of the line-up and the confirmation bias that resulted from the information they had been given (Lidén, 2023).

Years later, in 2012, following continuing challenges to his identity, John Demjanjuk died while his conviction was under appeal in a German court. His true identity was never conclusively established, and the confirmation bias remained unresolved (“Ivan the Terrible,” 2024).



Confirmation bias can significantly impact our own and others’ lives (Lidén, 2023; Kappes et al., 2020).

For that reason, it is helpful to understand why it happens and the psychological factors involved. Research confirms that people (Lidén, 2023; Kappes et al., 2020; Eysenck & Keane, 2015):

  • Don’t like to let go of their initial hypothesis
  • Prefer to use as much information as is initially available, often resulting in an overly specific hypothesis
  • Show more confirmation bias toward their own hypotheses than toward those of others
  • Are more likely to adopt a confirmation bias when under high cognitive load
  • With a lower degree of intelligence are more likely to engage in confirmation bias (most likely because they are less able to manage higher cognitive loads and see the overall picture)
  • With cognitive impairments are more impacted by confirmation bias
  • Are often unable to actively consider and understand all relevant information to challenge the existing hypothesis or make a new one
  • Are influenced by their emotions and motivations and potentially “blinded” to the facts
  • Are biased by existing thoughts and beliefs (sometimes cultural), even if incorrect
  • Are influenced by the beliefs and arguments of those around them

Recognize confirmation bias

  • Recognize that confirmation bias exists and understand its impact on decision-making and how you interpret information. ​
  • Actively seek out and consider different viewpoints, opinions, and sources of information that challenge your existing beliefs and hypotheses. ​
  • Develop critical thinking skills so you can evaluate evidence and arguments objectively, without favoring preconceived notions or desired outcomes.
  • Be aware of your biases and open to questioning your beliefs and assumptions.
  • Explore alternative explanations or hypotheses that may contradict your initial beliefs or interpretations.
  • Welcome feedback and criticism from others, even if they challenge your ideas; recognize it as an opportunity to learn and grow.
  • Apply systematic and rigorous methods to gather and analyze data, ensuring your conclusions are evidence-based rather than a result of personal biases.
  • Engage in collaborative discussions and debates with individuals with different perspectives to help see other viewpoints and challenge your biases.
  • Continuously seek new information and update your knowledge base to avoid becoming entrenched and support more-informed decision-making.
  • Practice analytical thinking, questioning assumptions, evaluating evidence objectively, and considering alternate explanations.


As far back as 1968, Karl Popper recognized that falsifiability (being able to prove that something can be incorrect or false) is crucial to all scientific inquiry, impacting researchers’ behavior and experimental outcomes.

As scientists, Popper argued, we should focus on looking for examples of why a theory does not work instead of seeking confirmation of its correctness. More recently, researchers have also considered that when findings suggest a theory is false, it may be due to issues with the experimental design or data accuracy (Eysenck & Keane, 2015).

Yet, confirmation bias has been an issue for a long time in scientific discovery and remains a challenge.

When researchers looked back at the work of Alexander Graham Bell in developing the telephone, they found that, due to confirmation bias, he ignored promising new approaches in favor of his tried-and-tested ones. It ultimately led to Thomas Edison being the first to develop the forerunner of today’s telephone (Eysenck & Keane, 2015).

More recently, a study showed that 88% of professional scientists working on issues in molecular biology responded to unexpected and inconsistent findings by blaming their experimental methods; they ignored the suggestion that they may need to modify, or even replace, their theories (Eysenck & Keane, 2015).

However, when those same scientists changed their approach yet obtained similarly inconsistent results, 61% revisited their theoretical assumptions (Eysenck & Keane, 2015).

Failure to report null research findings is also a problem. It is known as the “file drawer problem” because data remains unseen in the bottom drawer as the researcher does not attempt to get findings published or because journals show no interest in them (Lidén, 2023).
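The mechanics of the file drawer problem are easy to demonstrate. The following is a minimal simulation sketch, with all sample sizes, thresholds, and study counts invented for illustration: many small studies of a true null effect are run, but only those crossing a naive significance cutoff are “published,” so the published literature shows an inflated effect.

```python
# Toy simulation of the "file drawer problem" (all numbers are hypothetical):
# many small studies of a TRUE NULL effect are run, but only studies whose
# observed effect clears a crude significance cutoff get "published".
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0   # there is genuinely nothing to find
N_PER_STUDY = 30    # participants per study
N_STUDIES = 2000    # studies conducted across the field

def run_study() -> float:
    """Return the mean observed effect in one small study."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    return statistics.mean(sample)

all_effects = [run_study() for _ in range(N_STUDIES)]

# |mean| > 2 standard errors acts as a rough "statistically significant" filter.
cutoff = 2 / (N_PER_STUDY ** 0.5)
published = [e for e in all_effects if abs(e) > cutoff]

print(f"mean |effect|, all studies:      {statistics.mean([abs(e) for e in all_effects]):.3f}")
print(f"mean |effect|, 'published' only: {statistics.mean([abs(e) for e in published]):.3f}")
print(f"fraction published: {len(published) / N_STUDIES:.2%}")
```

In this toy setup, the average effect among the “published” studies comes out substantially larger than the average across all studies, even though the true effect is zero; the null results sit unseen in the drawer.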

Positive confirmation bias

Researchers have recognized several potential benefits that arise from our natural inclination to seek out confirmation that we are right, including (Peters, 2022; Gabriel & O’Connor, 2024; Bergerot et al., 2023):

  • Assisting in the personal development of individuals by reinforcing their positive self-conceptions and traits
  • Helping individuals shape social structures by persuading others to adopt their viewpoints
  • Supporting increased confidence by reinforcing individuals’ beliefs and ignoring contradictory evidence
  • Contributing to social conformity and stability by reinforcing shared beliefs and values within a group, potentially boosting cooperation and coordination
  • Encouraging decision-making by removing uncertainty and doubt
  • Increasing the knowledge-producing capacity of a group by supporting a deeper exploration of individual members’ perspectives

It’s vital to note that the possible benefits also have their limitations. They may favor the individual at the cost of others’ needs while distorting and hindering the formation of well-founded beliefs (Peters, 2022).


We have many resources for coaches and therapists to help individuals and groups understand and manage their biases.

Why not download our free 3 Positive CBT Exercises Pack and try out the powerful tools contained within? Some examples include the following:

  • Re-Framing Critical Self-Talk – Self-criticism typically involves judgment and self-blame regarding our shortcomings (real or imagined), such as our inability to accomplish personal goals and meet others’ expectations. In this exercise, we use self-talk to help us reduce self-criticism and cultivate a kinder, more compassionate relationship with ourselves.
  • Solution-Focused Guided Imagery – Solution-focused therapy assumes we have the resources required to resolve our issues. Here, we learn how to connect with our strengths and overcome the challenges we face.

Other free resources include:

  • The What-If Bias – We often get caught up in our negative biases, thinking about potentially dire outcomes rather than adopting rational beliefs. This exercise helps us regain a more realistic and balanced perspective.
  • Becoming Aware of Assumptions – We all bring biases into our daily lives, particularly conversations. In this helpful exercise, we picture how things might be in five years to put them into context.

More extensive versions of the following tools are available with a subscription to the Positive Psychology Toolkit©, but they are described briefly below.

  • Increasing Awareness of Cognitive Distortions

Cognitive distortions refer to our biased thinking about ourselves and our environment. This tool helps reduce the effect of the distortions by dismantling them.

  • Step one – Begin by exploring cognitive distortions, such as all-or-nothing thinking, jumping to conclusions, and catastrophizing.
  • Step two – Next, identify the cognitive distortions relevant to your situation.
  • Step three – Reflect on your thinking patterns, how they could harm you, and how you interact with others.
  • Finding Silver Linings

We tend to dwell on the things that go wrong in our lives. We may even begin to think our days are filled with mishaps and disappointments.

Rather than solely focusing on things that have gone wrong, it can help to look on the bright side. Try the following:

  • Step one – Create a list of things that make you feel life is worthwhile, enjoyable, and meaningful.
  • Step two – Think of a time when things didn’t go how you wanted them to.
  • Step three – Reflect on what this difficulty cost you.
  • Step four – Finally, consider what you may have gained from the experience. Write down three positives.

If you’re looking for more science-based ways to help others through CBT, check out this collection of 17 validated positive CBT tools for practitioners. Use them to help others overcome unhelpful thoughts and feelings and develop more positive behaviors.

We can’t always trust what we hear or see because our beliefs and expectations influence so much of how we interact with the world.

Confirmation bias refers to our natural inclination to seek out and focus on what confirms our beliefs, often ignoring anything that contradicts them.

While we have known of its effect for over 200 years, it still receives considerable research focus because of its impact on us individually and as a society, often causing us to make poor decisions and leading to damaging outcomes.

Confirmation bias has several sources and triggers, including our unwillingness to relinquish our initial beliefs (even when incorrect), preference for personal hypotheses, cognitive load, and cognitive impairments.

However, most of us can reduce confirmation bias with practice and training. We can become more aware of such inclinations and seek out challenges or alternate explanations for our beliefs.

It matters because confirmation bias can influence how we work, the research we base decisions on, and how our clients manage their relationships with others and their environments.

We hope you enjoyed reading this article. For more information, don’t forget to download our three Positive CBT Exercises for free.

  • Bergerot, C., Barfuss, W., & Romanczuk, P. (2023). Moderate confirmation bias enhances collective decision-making. bioRxiv. https://www.biorxiv.org/content/10.1101/2023.11.21.568073v1.full
  • Buss, D. M. (2016). Evolutionary psychology: The new science of the mind. Routledge.
  • Eysenck, M. W., & Keane, M. T. (2015). Cognitive psychology: A student’s handbook. Psychology Press.
  • Gabriel, N., & O’Connor, C. (2024). Can confirmation bias improve group learning? PhilSci Archive. https://philsci-archive.pitt.edu/20528/
  • Ivan the Terrible (Treblinka guard). (2024). In Wikipedia. https://en.wikipedia.org/wiki/Ivan_the_Terrible_(Treblinka_guard)
  • Kappes, A., Harvey, A. H., Lohrenz, T., Montague, P. R., & Sharot, T. (2020). Confirmation bias in the utilization of others’ opinion strength. Nature Neuroscience, 23(1), 130–137.
  • Lidén, M. (2023). Confirmation bias in criminal cases. Oxford University Press.
  • Peters, U. (2022). What is the function of confirmation bias? Erkenntnis, 87, 1351–1376.
  • Popper, K. R. (1968). The logic of scientific discovery. Hutchinson.
  • Rist, T. (2023). Confirmation bias studies: Towards a scientific theory in the humanities. SN Social Sciences, 3(8).
  • Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129–140.




Read other articles by their category

  • Body & Brain (52)
  • Coaching & Application (39)
  • Compassion (23)
  • Counseling (40)
  • Emotional Intelligence (21)
  • Gratitude (18)
  • Grief & Bereavement (18)
  • Happiness & SWB (40)
  • Meaning & Values (26)
  • Meditation (16)
  • Mindfulness (40)
  • Motivation & Goals (41)
  • Optimism & Mindset (29)
  • Positive CBT (28)
  • Positive Communication (23)
  • Positive Education (36)
  • Positive Emotions (32)
  • Positive Leadership (16)
  • Positive Parenting (14)
  • Positive Psychology (21)
  • Positive Workplace (35)
  • Productivity (16)
  • Relationships (46)
  • Resilience & Coping (38)
  • Self Awareness (20)
  • Self Esteem (37)
  • Strengths & Virtues (29)
  • Stress & Burnout Prevention (33)
  • Theory & Books (42)
  • Therapy Exercises (37)
  • Types of Therapy (54)

3 Positive CBT Exercises (PDF)

Understanding confirmation bias in research

Last updated: 30 August 2023

One of the biggest challenges of conducting a meaningful study is removing bias. Some forms of bias are easier than others to identify and remove.

One of the forms that's hardest for us to recognize in ourselves is confirmation bias.

In this article, you'll learn what confirmation bias is, the forms it takes, and how to begin removing it from your research. 

  • History of confirmation bias

Awareness of bias goes back as far as Aristotle and Plato. Aristotle observed that people are more likely to believe arguments that support their existing views, while Plato noted the challenge of overcoming bias when seeking the truth. While neither called this “confirmation bias,” they were certainly aware of its effects.

The first psychological evidence of confirmation bias came from an experiment conducted by psychologist Peter Wason. Subjects were asked to guess a rule regarding a sequence of numbers. Participants could test any numbers they wanted before guessing what the rule was. However, most only tested the numbers that confirmed their initial guess.

  • Types of confirmation bias

Confirmation bias comes in many forms. Although the result is a failure to get the complete picture of a given research area, understanding the ways this bias presents itself can help you avoid it in your methodologies.

The biases that may impact research can be grounded in beliefs found outside the lab, so you'll need to evaluate how all your preconceived notions may play a role in skewing your research. 

Information selection bias

Information selection bias occurs when you seek out information that supports your existing beliefs. This is often done subconsciously. Information that allows someone to feel correct is more enjoyable to consume than information that challenges strongly held beliefs. This can also cause you to ignore or dismiss viewpoints that don't align with the way you think. 

For the purposes of this type of confirmation bias, information doesn't just mean news sources or scientific studies. The people you spend time with are a major source of information about the world. Selecting friend groups that don't challenge your beliefs can be a significant source of confirmation bias.

Example: Social media echo chambers

Many people carefully cultivate their social media feeds. Social media can be a challenging environment, with dissenting opinions treated as unfathomable evil, rather than mere disagreement. This can create particularly strong echo chambers that enforce an equally strong resistance to understanding the perspective of those who disagree with you.

Social scientists need to be aware of how these biases may impact their conclusions.

Interpretation bias

Data can often be interpreted in more ways than one. With motivated reasoning, even clear data can be distorted to better align with your views. When data is misrepresented to fit a particular line of reasoning, it's known as interpretation bias.

A common form of interpretation bias is when the researcher places emphasis on data that supports a preconceived notion and downplays data that doesn't.

Example: Biased interpretations of scientific studies

Whether it's a study you've conducted or one that's guiding your research, it's easy to focus on the parts that reinforce what you already believe and ignore the parts that don't.

However, doing so can prevent you from finding evidence that would disprove your theory and make it difficult to solve the problem at hand. 
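This emphasis-and-downplay pattern can be made concrete with a toy calculation (the measurements and weights below are invented): the same dataset is summarized once with every point counted equally, and once with points that fit the hoped-for positive effect quietly over-weighted.

```python
# Toy illustration of interpretation bias (all numbers are invented).
# The same data, summarized fairly vs. with confirming points over-weighted.
measurements = [0.8, -1.1, 0.3, -0.9, 1.2, -0.4, 0.1, -1.3]

def weighted_mean(data, weight_fn):
    """Average `data`, weighting each point by weight_fn(point)."""
    weights = [weight_fn(x) for x in data]
    return sum(w * x for w, x in zip(weights, data)) / sum(weights)

# Fair summary: every observation counts equally.
fair = weighted_mean(measurements, lambda x: 1.0)

# Motivated summary: the researcher expects a positive effect, so supportive
# points get double weight and contradicting points get half weight.
biased = weighted_mean(measurements, lambda x: 2.0 if x > 0 else 0.5)

print(f"fair summary:   {fair:+.3f}")    # slightly negative
print(f"biased summary: {biased:+.3f}")  # flips to positive
```

No data point is fabricated or deleted here; merely shifting emphasis flips the sign of the conclusion.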

Memory bias

The propensity to downplay disconfirming data can hurt research in the moment, but it can also have knock-on effects later. Data that confirms your biases will stick in your mind, while data that doesn't can fade away.

When confirmation bias appears in this form, it's called memory bias. This type of bias can be harder to recognize on a particular project because you can't be aware of something you don't remember.

Example: Cherry-picking scientific studies

A big part of conducting research is relying on work that others have done before you. A review of the literature can guide your research and help you to form conclusions. However, if you cite only the studies that support your hypothesis and overlook those that contradict it, your review will misrepresent the state of the field and bias your conclusions from the outset.

Confirmation-seeking bias

Wason's experiment, described earlier, is an example of confirmation-seeking bias. The subjects only tested the rule they believed to be the case and didn't properly explore the options. As a result, they came to the wrong conclusion.

This can come in the form of poorly designed experiments or searching only for data and research that confirms your views. In its most extreme form, balanced or disconfirming sources are purposefully ignored or dismissed to confirm a bias instead of answering a research question.

Example: Looking for news sources that align with political views

Here's another example from outside the lab, and one to which political scientists may be particularly susceptible. Increasingly, news sources serve a particular ideological bent. Many people rely only on sources that paint a one-sided picture of the socio-political landscape.

While we're good at recognizing this behavior in others, we aren't so good at recognizing it in ourselves.

  • Impact of confirmation bias

The impacts of confirmation bias over which you have the most control are those that affect you directly. These will weaken the results of your research if you aren't careful to recognize and avoid your biases.

Some common impacts of confirmation bias are:

Biased hypotheses: Confirmation bias can lead you to form a hypothesis based more on existing beliefs than meaningful data, biasing the project from the start.

Data collection and interpretation: During the data collection phase, you may unconsciously focus on data that supports your hypotheses, leading to a distorted representation of the findings.

Selective reporting: In more extreme cases of confirmation bias, you may choose to only report on the findings that confirm your beliefs.

Misinterpretation of results: You may read ambiguous or inconclusive findings as supporting your hypothesis when you would otherwise have treated them with more caution.

Poor study design: You may unintentionally design experiments in ways where results are more likely to confirm a hypothesis instead of looking for a more balanced design.

Some impacts of confirmation bias affect the scientific community more broadly. When a given field is dominated by a particular ideology or belief system, several negative consequences can arise from the resulting confirmation bias.

Publication bias: Studies that align more closely with prevailing points of view or wisdom may be more likely to get published than those that push against them, regardless of the strength of the research.

Peer review and feedback: Both sides of peer review can suffer from confirmation bias. Reviewers may be more dismissive of studies they disagree with, or too lenient toward those they agree with. Authors may be less likely to accept valid criticism that challenges their beliefs.

Replication issues: The best way to prove the validity of a given piece of research is for someone else to replicate it. If confirmation bias played a role in the results, those without the bias might have difficulty replicating it, resulting in the type of replication crisis we've seen some fields experience.

  • Signs of confirmation bias

Understanding the signs of confirmation bias can help people recognize it in themselves and try to work past it. Confirmation bias can be a complex phenomenon, as evidenced by the numerous forms it can take.

Ignoring contradictory evidence

Unfortunately, it isn't uncommon for people to ignore evidence that contradicts their preconceived notions. Because everyone is guilty of this to some extent, it's important to know which signs to look out for, so you can catch yourself when it happens to you.

Some common signs of confirmation bias include:

  • Selectively focusing on data that supports your position while neglecting conflicting data
  • Deliberately avoiding situations that might expose you to opposing viewpoints
  • Suppressing or dismissing evidence that causes discomfort due to conflicting beliefs

Selective exposure to information

It's easiest to ignore disconfirming evidence if you never see it in the first place. Selective exposure to information is a major problem for those who want to get both sides of the picture and ensure their conclusions are based on fact and not bias.

Here are some signs you're guilty of selective exposure to information:

  • Actively seeking out sources that confirm your existing beliefs
  • Unconsciously avoiding information that challenges your worldview
  • Preferring news outlets and websites that align with your personal opinions

Over-relying on anecdotal evidence

There's a joke that the plural of “anecdote” isn't “data.” Yet, many people treat anecdotal evidence as more concrete than hard data when the anecdotes fit their preferred narrative. Some ways you may catch yourself falling into this trap are:

  • Giving more weight to personal stories or experiences than concrete data
  • Being swayed by emotionally charged stories that resonate with your current beliefs
  • Drawing conclusions from individual experiences to make broader claims

Misinterpreting ambiguous information

The human brain has a habit of filling in gaps. When presented with ambiguous information, there are plenty of gaps to fill. More often than not, the mind fills these gaps with information that supports an existing belief system.

The signs you're guilty of this include:

  • Assigning meaning to ambiguous information that confirms your preexisting beliefs
  • Interpreting ambiguous external stimuli in a way that aligns with your existing notions
  • Incorrectly attributing motives or intentions to ambiguous actions to fit your assumptions

Group polarization and echo chambers

When you spend most of your time around people who agree with you, you limit the number of alternative perspectives you are exposed to. When everyone you spend time with agrees with you, it creates a potentially false perception that a larger subset of the broader population is of the same opinion.

The following signs may indicate a lack of diversity in your relationships:

  • In a group setting, the people you spend time with reinforce each other's beliefs more often than not
  • You spend time in online and offline communities that all share the same views on a subject
  • The people you spend time with tend to vilify those with different opinions

  • How to avoid confirmation bias in research

The purpose of research should be to find the truth or to solve a problem. Neither can be accomplished if you're merely reinforcing your own, possibly false, beliefs.

We’ve already looked at some ways to identify and potentially avoid confirmation bias. Here are some more proactive measures you can take to be more sure your results are sound:

Acknowledging personal biases: The first step is to understand which way you may want the research to go. Then you'll be better equipped to design experiments that test your idea rather than simply confirm it.

Actively seeking diverse perspectives: Intellectual diversity is a powerful way to fight confirmation bias. Although the bias itself may lead you to push away those with differing beliefs, taking them into account is the best way to shape your own.

Engaging with contradictory information: Similarly, you must seek out information that disconfirms your hypothesis. What arguments and data are against it? By taking those into account in your research, you can better test which theories are true.

Using critical thinking and skepticism: A great way to combat confirmation bias is to treat findings that confirm your suspicions with the same scrutiny you would those that disconfirm them.

Employing rigorous research methods : Putting strict protocols in place and using a robust statistical analysis of the data, if applicable, can help counteract the bias you bring to the research.

Peer review: Just as you sought diverse perspectives when designing and conducting the research, have a trusted neutral party review your work for any signs of bias.

Continuous learning and self-improvement: Confirmation bias is part of how our brains work. Working to remove it takes continuous effort to better identify and mitigate it.


Gary Klein Ph.D.

The Curious Case of Confirmation Bias

The concept of confirmation bias has passed its sell-by date.

Posted May 5, 2019 | Reviewed by Devon Frye

Confirmation bias is the tendency to search for data that can confirm our beliefs, as opposed to looking for data that might challenge those beliefs. The bias degrades our judgments when our initial beliefs are wrong because we might fail to discover what is really happening until it is too late.

To demonstrate confirmation bias, Pines (2006) provides a hypothetical example (which I have slightly modified) of an overworked Emergency Department physician who sees a patient at 2:45 a.m.—a 51-year-old man who has come in several times in recent weeks complaining of an aching back. The staff suspects that the man is seeking prescriptions for pain medication. The physician, believing this is just one more such visit, does a cursory examination and confirms that all of the man's vital signs are fine—consistent with what was expected. The physician gives the man a new prescription for a pain reliever and sends him home—but because he was looking only for what he expected, he misses the subtle problem that required immediate surgery.

The concept of confirmation bias appears to rest on three claims:

  • First, firm evidence, going back 60 years, has demonstrated that people are prone to confirmation bias.
  • Second, confirmation bias is clearly a dysfunctional tendency.
  • Third, methods of debiasing are needed to help us to overcome confirmation bias.

The purpose of this essay is to look closely at these claims and explain why each one of them is wrong.

Claim #1: Firm evidence has demonstrated that people are prone to confirmation bias.

Confirmation bias was first described by Peter Wason (1960), who asked participants in an experiment to guess at a rule about number triples. The participants were told that the sequence 2-4-6 fit that rule. They could generate their own triples and they would get feedback on whether or not their triple fit the rule. When they had collected enough evidence, they were to announce their guess about what the rule was.

Wason found that the participants tested only positive examples—triples that fit their theory of what the rule was. The actual rule was any three ascending numbers, such as 2, 3, 47. However, given the 2-4-6 starting point, many participants generated triples that were even numbers, ascending and also increasing by two. Participants didn’t try sequences that might falsify their theory (e.g., 6-4-5). They were simply trying to confirm their beliefs.
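The logic of the triples task can be sketched in a few lines of code (a hypothetical illustration, not part of Wason's study): probes that fit both the participant's hypothesis and the true rule always get "yes" feedback, so they cannot distinguish the two, whereas a probe that fits only the true rule is the one whose feedback falsifies the hypothesis.

```python
# Hypothetical sketch: why positive tests alone cannot distinguish a narrow
# hypothesis from Wason's broader true rule in the 2-4-6 task.

def true_rule(t):
    """Wason's actual rule: any three ascending numbers."""
    a, b, c = t
    return a < b < c

def hypothesis(t):
    """A typical participant hypothesis: even numbers increasing by two."""
    a, b, c = t
    return a % 2 == 0 and b - a == 2 and c - b == 2

# Positive tests: triples chosen because they fit the hypothesis.
positive_probes = [(2, 4, 6), (8, 10, 12), (20, 22, 24)]

# A diagnostic probe fits the true rule but violates the hypothesis, so the
# "yes, it fits" feedback falsifies the hypothesis.
diagnostic_probe = (1, 2, 3)

for probe in positive_probes:
    # Feedback is always "yes": consistent with BOTH rules, hence uninformative.
    assert true_rule(probe) and hypothesis(probe)

print(true_rule(diagnostic_probe), hypothesis(diagnostic_probe))  # True False
```

Running only the positive probes, a participant would never see feedback that contradicts the narrow hypothesis.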

At least, that’s the popular story. Reviewing the original Wason data reveals a different story. Wason’s data on the number triples (e.g., 2-4-6) showed that six of the 29 participants correctly guessed the rule on the very first trial, and several of these six did use probes that falsified a belief.

Most of the other participants in that study seemed to take the task lightly because it seemed so simple—but after getting feedback that their first guess was wrong, they realized that there was only one right answer and they'd have to do more analysis. Almost half of the remaining 23 participants immediately shaped up—10 guessed correctly on the second trial, with many of these also making use of negative probes (falsifications).

Therefore, the impression found in the literature is highly misleading. The impression is that in this Wason study—the paradigm case of confirmation bias—the participants showed a confirmation effect. But when you look at all the data, most of the participants were not trapped by confirmation bias. Only 13 of the 29 participants failed to solve the problem in the first two trials. (By the fifth trial, 23 of the 29 had solved the problem.)

The takeaway should have been that most people do test their beliefs. However, Wason chose to headline the bad news. The abstract to his paper states that “The results show that those [13] subjects, who reached two or more incorrect solutions, were unable, or unwilling, to test their hypotheses.” (p. 129).

Since then, several studies have obtained results that challenge the common beliefs about confirmation bias. These studies showed that most people actually are thoughtful enough to prefer genuinely diagnostic tests when given that option (Kunda, 1999; Trope & Bassok, 1982; Devine et al., 1990).

In the cognitive interviews I have conducted, I have seen some people trying to falsify their beliefs. One fireground commander, responding to a fire in a four-story apartment building, saw that the fire was in a laundry chute and seemed to be just beginning. He believed that he and his crew had arrived before the fire had a chance to spread up the chute—so he ordered an immediate attempt to suppress it from above, sending his crew to the 2nd and 3rd floors.


But he also worried that he might be wrong, so he circled the building. When he noticed smoke coming out of the eaves above the top floor, he realized that he was wrong. The fire must have already reached the 4th floor and the smoke was spreading down the hall and out the eaves. He immediately told his crew to stop trying to extinguish the fire and instead to shift to search and rescue for the inhabitants. All of them were successfully rescued, even though the building was severely damaged.


Another difficulty with Claim #1 is that confirmation bias tends to disappear when we add context. In a second study, Wason (1968) used a four-card problem to demonstrate confirmation bias. For example: Four cards are shown, each of which has a number on one side and a color on the other. The visible faces show 3, 8, red and brown. Participants are asked, "Which two cards should you turn over to test the claim that if a card has an even number on one face, then its opposite face is red?” (This is a slight variant of Wason’s original task; see the top part of the figure next to this paragraph.)

Most people turn over cards two and three. Card two, showing an “8,” is a useful test because if the opposite face is not red, the claim is disproved. But turning over card three, “red,” is a useless test because the claim is not that only cards with even numbers on one side have a red opposite face. Selecting card three illustrates confirmation bias.
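The selection logic can be made concrete with a small sketch (hypothetical code, using the variant of the task described above): a card is worth turning over only if one of its possible hidden faces could disprove the claim.

```python
# Hypothetical sketch: which cards can falsify "if a card shows an even
# number, its other face is red"? A card is worth turning over only if some
# hidden face would disprove the rule.

cards = {          # visible face -> possible hidden faces
    "3":     {"red", "brown"},
    "8":     {"red", "brown"},
    "red":   {"3", "8"},
    "brown": {"3", "8"},
}

def violates(number_face, colour_face):
    """The claim fails only for an even number paired with a non-red face."""
    return int(number_face) % 2 == 0 and colour_face != "red"

def worth_turning(visible):
    hidden = cards[visible]
    if visible in ("3", "8"):                         # number showing
        return any(violates(visible, h) for h in hidden)
    return any(violates(h, visible) for h in hidden)  # colour showing

print([c for c in cards if worth_turning(c)])  # ['8', 'brown']
```

The output matches the normative answer: "8" and "brown" are the only informative cards, and "red" — the popular, confirmatory choice — can never disprove the claim.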

However, Griggs and Cox (1982) applied some context to the four-card problem—they situated the task in a tavern with a barkeeper intent on following the law about underage drinking. Now the question took the form, “Which two of these cards should you turn over to test the claim that in this bar, 'If you are drinking alcohol then you must be over 19'?" Griggs and Cox found that 73 percent of the participants now chose the “16” card and the beer card—meaning the confirmation bias effect seen in Wason's version had mostly vanished. (See the bottom part of the figure above.)

Therefore, the first claim about the evidence for confirmation bias does not seem warranted.

Claim #2: Confirmation bias is clearly a dysfunctional tendency.

Advocates for confirmation bias would argue that the bias can still get in the way of good decision making. They would assert that even if the data don’t really support the claim that people fall prey to confirmation bias, we should still, as a safeguard, warn decision-makers against the tendency to support their pre-existing beliefs.

But that ploy, to discourage decision-makers from seeking to confirm their pre-existing beliefs, won’t work because confirmation attempts often do make good sense. Klayman and Ha (1987) explained that under high levels of uncertainty, positive tests are more informative than negative tests (i.e., falsifications). Klayman and Ha refer to a “positive test strategy” as having clear benefits.
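A toy simulation can illustrate one regime Klayman and Ha analysed (the rules and numbers here are invented for illustration): when the true rule is narrower — rarer — than the hypothesis, positive tests are the only probes that can expose the error, and negative tests reveal nothing.

```python
# Hypothetical sketch of Klayman & Ha's positive test strategy: with a true
# rule that is narrower (rarer) than the hypothesis, only positive tests can
# reveal the discrepancy between belief and reality.

DOMAIN = range(100)
hypothesis = lambda x: x < 12   # what the reasoner believes the rule is
true_rule  = lambda x: x < 10   # the actual, rarer rule

def discrepancy_rate(probes):
    """Fraction of probes whose feedback would contradict the hypothesis."""
    probes = list(probes)
    return sum(hypothesis(x) != true_rule(x) for x in probes) / len(probes)

positive_probes = [x for x in DOMAIN if hypothesis(x)]      # "should fit"
negative_probes = [x for x in DOMAIN if not hypothesis(x)]  # "should not fit"

print(round(discrepancy_rate(positive_probes), 3))  # 0.167 -> informative
print(round(discrepancy_rate(negative_probes), 3))  # 0.0   -> uninformative
```

In this regime, falsification-by-negative-testing is literally useless: every negative probe returns exactly the feedback the hypothesis predicts.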

As a result of this work, many researchers in the judgment and decision-making community have reconsidered their view that the confirmation tendency is a bias that needs to be overcome. Confirmation bias seems to be losing its force within the scientific community, even as it echoes in various applied communities.

Think about it: Of course we use our initial beliefs and frames to guide our explorations. How else would we search for information? Sometimes we can be tricked, in a cleverly designed study. Sometimes we trick ourselves when our initial belief is wrong. The use of our initial beliefs, gained through experience, isn’t perfect. However, it is not clear that there are better ways of proceeding in ambiguous and uncertain settings.

We seem to have a category error here—people referring to the original Wason data on the triples and the four cards (even though these data are problematic) and then stretching the concept of confirmation bias to cover all kinds of semi-related or even unrelated problems, usually with hindsight: If someone makes a mistake, then the researchers hunt for some aspect of confirmation bias. As David Woods observed, "The focus on confirmation bias commits hindsight bias."

For all these reasons, the second claim that the confirmation tendency is dysfunctional doesn’t seem warranted. We are able to make powerful use of our experience to identify a likely initial hypothesis and then use that hypothesis to guide the way we search for more data.

How would we search for data without using our experience? We wouldn’t engage in random search because that strategy seems highly inefficient. And I don’t think we would always try to search for data that could disprove our initial hypothesis, because that strategy won’t help us make sense of confusing situations. Even scientists do not often try to falsify their hypotheses, so there’s no reason to set this strategy up as an ideal for practitioners.

The confirmation bias advocates seem to be ignoring the important and difficult process of hypothesis generation, particularly under ambiguous and changing conditions. These are the kinds of conditions favoring the positive test strategy that Klayman and Ha studied.

Claim #3: Methods of debiasing are needed to help us to overcome confirmation bias.

For example, Lilienfeld et al. (2009) asserted that “research on combating extreme confirmation bias should be among psychological science’s most pressing priorities.” (p. 390). Many if not most decision researchers would still encourage us to try to debias decision-makers.

Unfortunately, that’s been tried and has gotten nowhere. Attempts to re-program people have failed. Lilienfeld et al. admitted that “psychologists have made far more progress in cataloguing cognitive biases… than in finding ways to correct or prevent them.” (p. 391). Arkes (1981) concluded that psychoeducational methods by themselves are “absolutely worthless.” (p. 326). The few successes have been small and it is likely that many failures go unreported. One researcher whose work has been very influential in the heuristics and biases community has admitted to me that debiasing efforts don’t work.

And let’s imagine that, despite the evidence, a debiasing tactic was developed that was effective. How would we use that tactic? Would it prevent us from formulating an initial hypothesis without gathering all relevant information? Would it prevent us from speculating when faced with ambiguous situations? Would it require us to seek falsifying evidence before searching for any supporting evidence? Even the advocates acknowledge that confirmation tendencies are generally adaptive. So how would a debiasing method enable us to know when to employ a confirmation strategy and when to stifle it?

Making this a little more dramatic, if we could surgically excise the confirmation tendency, how many decision researchers would sign up for that procedure? After all, I am not aware of any evidence that debiasing the confirmation tendency improves decision quality or makes people more successful and effective. I am not aware of data showing that a falsification strategy has any value. The Confirmation Surgery procedure would eliminate confirmation bias but would leave the patients forever searching for evidence to disconfirm any beliefs that might come to their minds to explain situations. The result seems more like a nightmare than a cure.

One might still argue that there are situations in which we would want to identify several hypotheses, as a way of avoiding confirmation bias. For example, physicians are well-advised to do differential diagnosis, identifying the possible causes for a medical condition. However, that’s just good practice. There’s no need to invoke a cognitive bias. There’s no need to try to debias people.

For these reasons, I suggest that the third claim about the need for debiasing methods is not warranted.

What about the problem of implicit racial biases? That topic is not really the same as confirmation bias, but I suspect some readers will be making this connection, especially given all of the effort to set up programs to overcome implicit racial biases. My first reaction is that the word “bias” is ambiguous. “Bias” can mean a prejudice, but this essay uses “bias” to mean a dysfunctional cognitive heuristic, with no consideration of prejudice, racial or otherwise. My second reaction is to point the reader to the weakened consensus on implicit bias and the concession made by Greenwald and Banaji (the researchers who originated the concept of implicit bias) that the Implicit Association Test doesn’t predict biased behavior and shouldn’t be used to classify individuals as likely to engage in discriminatory behavior.

Conclusions

Where does that leave us?

Fischhoff and Beyth-Marom (1983) complained about this expansion: “Confirmation bias, in particular, has proven to be a catch-all phrase incorporating biases in both information search and interpretation. Because of its excess and conflicting meanings, the term might best be retired.” (p. 257).

I have mixed feelings. I agree with Fischhoff and Beyth-Marom that over the years, the concept of confirmation bias has been stretched—or expanded—beyond Wason’s initial formulation so that today it can refer to the following tendencies:

  • Search: to search only for confirming evidence (Wason’s original definition)
  • Preference: to prefer evidence that supports our beliefs
  • Recall: to best remember information in keeping with our beliefs
  • Interpretation: to interpret evidence in a way that supports our beliefs
  • Framing: to use mistaken beliefs to misunderstand what is happening in a situation
  • Testing: to ignore opportunities to test our beliefs
  • Discarding: to explain away data that don’t fit with our beliefs

I see this expansion as a useful evolution, particularly the last three issues of framing, testing, and discarding. These are problems I have seen repeatedly. With this expansion, researchers will perhaps be more successful in finding ways to counter confirmation bias and improve judgments.

Nevertheless, I am skeptical. I don’t think the expansion will be effective because researchers will still be going down blind alleys. Decision researchers may try to prevent people from speculating at the outset even though rapid speculation is valuable for guiding exploration. Decision researchers may try to discourage people from seeking confirming evidence, even though the positive test strategy is so useful. The whole orientation of correcting a bias seems misguided. Instead of appreciating the strength of our sensemaking orientation and trying to reduce the occasional errors that might arise, the confirmation bias approach typically tries to eliminate errors by inhibiting our tendencies to speculate and explore.

Fortunately, there seems to be a better way to address the problems of being captured by our initial beliefs, failing to test those beliefs, and explaining away inconvenient data—the concept of fixation . This concept is consistent with what we know of naturalistic decision making, whereas confirmation bias is not. Fixation doesn’t carry the baggage of confirmation bias in terms of the three unwarranted claims discussed in this essay. Fixation directly gets at a crucial problem of failing to revise a mistaken belief.

And best of all, the concept of fixation provides a novel strategy for overcoming the problems of being captured by initial beliefs, failing to test those beliefs, and explaining away data that are inconsistent with those beliefs.

My next essay will discuss fixation and describe that strategy.

Arkes, H. (1981). Impediments to accurate clinical judgment and possible ways to minimize their impact. Journal of Consulting and Clinical Psychology, 49, 323-330.

Devine, P.G., Hirt, E.R., & Gehrke, E.M. (1990). Diagnostic and confirmation strategies in trait hypothesis testing. Journal of Personality and Social Psychology, 58, 952-963.

Fischhoff, B., & Beyth-Marom, R. (1983). Hypothesis evaluation from a Bayesian perspective. Psychological Review, 90, 239-260.

Griggs, R.A., & Cox, J.R. (1982). The elusive thematic-materials effect in Wason’s selection task. British Journal of Psychology, 73, 407-420.

Klayman, J., & Ha, Y.-W. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94, 211-228.

Klein, G. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.

Kunda, Z. (1999). Social cognition: Making sense of people. Cambridge, MA: MIT Press.

Lilienfeld, S.O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4, 390-398.

Oswald, M.E., & Grossjean, S. (2004). Confirmation bias. In R.F. Pohl (Ed.), Cognitive illusions: A handbook on fallacies and biases in thinking, judgement and memory. Hove, UK: Psychology Press.

Pines, J.M. (2006). Confirmation bias in emergency medicine. Academic Emergency Medicine, 13, 90-94.

Trope, Y., & Bassok, M. (1982). Confirmatory and diagnosing strategies in social information gathering. Journal of Personality and Social Psychology, 43, 22-34.

Wason, P.C. (1960). On the failure to eliminate hypotheses in a conceptual task. The Quarterly Journal of Experimental Psychology, 12, 129-140.

Wason, P.C. (1968). Reasoning about a rule. The Quarterly Journal of Experimental Psychology, 20, 273-281.

Gary Klein Ph.D.

Gary Klein, Ph.D., is a senior scientist at MacroCognition LLC. His most recent book is Seeing What Others Don't: The Remarkable Ways We Gain Insights.


Open access | Published: 26 May 2020

Confidence drives a neural confirmation bias

  • Max Rollwage   ORCID: orcid.org/0000-0003-4181-3983 1 , 2 ,
  • Alisa Loosen   ORCID: orcid.org/0000-0002-4295-3817 1 , 2 ,
  • Tobias U. Hauser   ORCID: orcid.org/0000-0002-7997-8137 1 , 2 ,
  • Rani Moran 1 , 2 ,
  • Raymond J. Dolan 1 , 2 &
  • Stephen M. Fleming   ORCID: orcid.org/0000-0003-0233-4891 1 , 2 , 3  

Nature Communications, volume 11, Article number: 2634 (2020)


A prominent source of polarised and entrenched beliefs is confirmation bias, where evidence against one’s position is selectively disregarded. This effect is most starkly evident when opposing parties are highly confident in their decisions. Here we combine human magnetoencephalography (MEG) with behavioural and neural modelling to identify alterations in post-decisional processing that contribute to the phenomenon of confirmation bias. We show that holding high confidence in a decision leads to a striking modulation of post-decision neural processing, such that integration of confirmatory evidence is amplified while disconfirmatory evidence processing is abolished. We conclude that confidence shapes a selective neural gating for choice-consistent information, reducing the likelihood of changes of mind on the basis of new information. A central role for confidence in shaping the fidelity of evidence accumulation indicates that metacognitive interventions may help ameliorate this pervasive cognitive bias.


Introduction

The philosopher Bertrand Russell opined, “The most savage controversies are about matters as to which there is no good evidence either way”. While this view applies in some situations, even more troubling are instances where polarization and entrenchment of opinion persist in the face of contrary evidence, exemplified by debates on climate change and vaccinations. This polarization is most evident when opposing parties are highly confident in their positions [1,2]. A psychological-level explanation for such entrenchment is the idea that people selectively incorporate evidence in line with their beliefs, known as confirmation bias [3]. Although an extensive literature has documented this bias in behaviour [3,4], the underlying cognitive, computational and neuronal mechanisms are not understood.

So far, investigation of confirmation bias has been restricted largely to scenarios involving complex real-world beliefs such as political attitudes [4-6]. However, the complexity of such higher-order beliefs makes it difficult to disentangle the various contributors to biased information processing. For instance, people may have a strong personal investment in their political opinions, leading to a significant motivation to discount new information that goes against their beliefs. Intriguingly, confirmation biases have recently been demonstrated in low-level perceptual tasks [7-9] that are unlikely to evoke such motivated reasoning. These studies indicate that a source of confirmation bias may be a generic shift in the way the brain incorporates new information. Here we adopt such a task to study the computational and neural basis of post-decisional shifts in sensitivity to choice-consistent information.

Perceptual decision-making is well described using sequential sampling models, which assume the brain accumulates noisy evidence for each choice option to a decision bound [10]. This accumulation process is thought to be supported by neuronal populations in parietal and prefrontal cortex [11,12]. Importantly, while perceptual tasks allow tight control over the processes involved, they also permit generalisation to more complex decisions [13-15], and similar principles appear to underlie choice and confidence formation in both simple and more complex tasks [16,17]. However, while the processes underlying perceptual decision-making have been studied in detail, little is known about the mechanisms governing accumulation of evidence after a choice has been made, or how such processing is shaped by pre-existing beliefs and confidence [7,17-23].

Here we combine theoretical models and neural metrics to identify alterations in post-decisional processing that may contribute to the phenomenon of confirmation bias. Across all experiments, participants were presented with a sample of moving dots (pre-decision evidence) before indicating their initial decision (motion to the left or right) and confidence in their choice (see Fig. 1a). They were then presented with a second sample of moving dots (post-decision evidence) before making a final choice and providing a confidence estimate. Importantly, pre- and post-decision evidence always indicated the same direction of motion, such that post-decision evidence was helpful. Accordingly, an ideal Bayesian observer should use post-decision evidence to change its mind after initial mistakes (see Supplementary Note 1 for analysis of the adaptive usage of post-decision evidence), whereas a confirmation bias would blunt this belief flexibility [13,18].
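A minimal sketch of this ideal-observer logic (with invented sample values and a simplified Gaussian evidence model, not the paper's task parameters): the observer weighs pre- and post-decision evidence equally, so helpful post-decision evidence should reverse an initial mistake whenever it outweighs the misleading first sample.

```python
# Hypothetical sketch: an ideal Bayesian observer combining pre- and
# post-decision evidence. Samples ~ N(+1, 1) if motion is rightward and
# N(-1, 1) if leftward; equal prior probability for each direction.
import math

def posterior_right(samples, sigma=1.0):
    """P(motion is rightward | noisy samples)."""
    loglik_right = sum(-(x - 1.0) ** 2 / (2 * sigma**2) for x in samples)
    loglik_left  = sum(-(x + 1.0) ** 2 / (2 * sigma**2) for x in samples)
    return 1.0 / (1.0 + math.exp(loglik_left - loglik_right))

pre  = [-0.4]   # misleading pre-decision sample -> initial "left" choice
post = [1.2]    # helpful post-decision sample (true direction: right)

initial = posterior_right(pre)          # < 0.5: observer mistakenly picks left
final   = posterior_right(pre + post)   # > 0.5: ideal observer changes its mind
print(initial < 0.5, final > 0.5)  # True True
```

A confirmation bias, by contrast, would down-weight the post-decision sample because it disconfirms the initial choice, keeping the final posterior on the wrong side of 0.5.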

Figure 1

a Trial timeline. Note that participants first had to indicate a binary left versus right decision (i.e. a two-alternative forced choice), and then indicate their confidence in this decision by moving a cursor along the selected scale. b, c A psychophysical manipulation of positive evidence selectively increased confidence in the first decision (c) while keeping accuracy constant (b). This increase in confidence was replicated across all three studies. Data are presented as mean values ± SEM; grey dots represent individual participant data. Paired t-test (two-tailed): **p = .005. LPE = low positive evidence condition; HPE = high positive evidence condition. d Between-subject relationship between the degree to which positive evidence increased confidence (x-axis: confidence in the high positive evidence condition minus confidence in the low positive evidence condition) and its effect on changes of mind (y-axis: changes of mind in the high positive evidence condition minus changes of mind in the low positive evidence condition). This correlation was replicated in all three studies. Orange data points represent subjects showing the opposite of the intended effect of the manipulation on confidence (higher confidence in the low positive evidence condition). Pearson correlation (two-tailed): ***p < .0001.

Effects of confidence on changes of mind

In a first experiment we hypothesised that a confirmation bias would occur more often when people are highly confident in their original choice [24-26]. In order to dissociate subjective confidence from objective performance we used a psychophysical manipulation (“positive evidence” [27], see Methods) to selectively boost participants’ confidence (mean difference = 0.024, CI = [0.008, 0.04], Cohen’s d = 0.21, t(27) = 3.0, p = 0.005, Fig. 1c) while leaving performance (mean difference = 0.006, CI = [−0.022, 0.034], Cohen’s d = 0.02; Bayesian t-test indicating equality: BF01 = 4.61; Fig. 1b) and reaction times (mean difference = −0.005, CI = [−0.029, 0.018], Cohen’s d = −0.04; Bayesian t-test indicating equality: BF01 = 4.51, Supplementary Fig. 5a) unaffected.

We next set out to test whether this boost in confidence influenced changes of mind. There were notable individual differences in the degree to which our manipulation boosted participants’ confidence (see Fig. 1c, d). Importantly, subjects who experienced a stronger confidence boost through the positive evidence manipulation also showed a stronger reduction in changes of mind (r = −0.69, p < 0.0001, see Fig. 1d), an effect not explained by an impact of positive evidence on accuracy or reaction time (p = 0.005 when controlling for these effects). This supports the notion that confidence drives reductions in changes of mind (see Supplementary Notes 5 and 6 for additional behavioural and magnetoencephalography (MEG) analyses that further confirm confidence as a critical driver of changes of mind).

Confidence induces a selective gain for confirmatory evidence

We next reasoned that confidence may reduce changes of mind by promoting a bias towards processing of confirmatory post-decision evidence. We sought to test this hypothesis by revealing the process through which confidence affects accumulation of post-decision evidence, applying a combination of drift-diffusion modelling and recordings of post-decisional fluctuations in a neural decision variable (DV) using MEG. We considered two potential mechanisms through which confidence might reduce changes of mind. First, confidence might reflect a shift in the starting point of post-decision accumulation to be closer to the bound associated with an initial decision, consistent with a continuation of pre-decisional evidence accumulation (influence on starting point; Fig. 2a, upper panel). Second, confidence may induce selective accumulation of evidence in line with an initial decision (influence on drift rate; Fig. 2a, lower panel)—a clear instance of confirmation bias.

Figure 2

a Illustration of how confidence may reduce changes of mind through either a shift in starting point towards the decision bound of the initial decision (upper panel) and/or a selective increase of drift rate for evidence supporting the initial decision (lower panel). b, c Model simulations (of the best-fitting model) reproduce behavioural patterns of accuracy and reaction times of the second decision when plotted as a function of the initial decision and initial confidence. Due to the task structure, participants received confirming post-decision evidence when they were initially correct and disconfirming post-decision evidence after initial mistakes. Model simulations are shown as dotted lines, behavioural data as solid lines. Data are presented as mean values ± 95% confidence intervals. The right-hand panel of (c) plots the full distribution of response times and model predictions for the different trial types (high confidence and no change of mind, low confidence and no change of mind, high confidence and change of mind, low confidence and change of mind). d Posterior distributions of model parameters of the best-fitting model. The dependencies of the drift rate (purple lines) and starting point (green lines) on initial confidence (left panel), initial decision (middle panel) and the interaction between confidence × initial decision (right panel) are presented. The dotted vertical lines represent an effect of zero/no effect. Note that these dependencies are simultaneously fitted, controlling for mutual influences. Markov-chain Monte Carlo sampling of posterior parameter distribution: ***P(parameter > 0) > 0.999. Sec = seconds.

Critically, these two mechanisms make different predictions about the distributions of response times for the final decision [8,9]. We compared 10 drift-diffusion models (DDMs) that embodied these different predictions (see Supplementary Note 2 for a full model comparison). We employed accuracy coding such that the bounds correspond to a correct versus an incorrect decision, and a positive drift rate represents stronger integration of the presented (correct) motion direction. Note that, by design, confirmatory post-decision evidence was received when an initial decision was correct, and disconfirmatory evidence when an initial decision was incorrect (Fig. 2b–d). In addition, in light of suggestions that confidence might also affect the separation of decision bounds, and thus the trade-off between speed and accuracy of subsequent decisions [28,29], we allowed for a dependency of boundary separation on initial confidence in all models.

The models differed as to whether the starting point and/or drift rate were affected by confidence (models 2–4), accuracy of the initial decision (models 5–7; i.e. correct = 1 and incorrect = −1, capturing a general confirmation bias) and their interaction (models 8–10; i.e. capturing a confirmation bias that depends on confidence). The winning model (Model 10, as indicated by the Deviance Information Criterion score; see Supplementary Fig. 2A) incorporated dependencies of starting point and drift rate on all factors (confidence, initial decision and their interaction) and provided a good fit to the data (Fig. 2b, c).

After accounting for main effects, we observed a dependency of the starting point on the interaction between confidence and initial decision (95% equal-tailed interval = 0.08–0.18; Fig. 2d, right-hand panel), indicating participants started the accumulation process closer to the bound of the initial decision when highly confident in their choice. Even more striking was the discovery of a similar interaction effect on drift rate (95% equal-tailed interval = 0.11–0.26; Fig. 2d, right-hand panel), indicating participants selectively accumulated evidence supporting their initial choice, and were more likely to do so when they were more confident. While a confidence-related shift in starting point might reflect normative usage of pre-decision evidence (because high confidence in an initial decision might reflect greater pre-decision evidence accumulation, and thus be closer to a post-decisional bound), an influence of confidence on the drift rate is a clear instance of confirmation bias. Indeed, effects of the initial decision and confidence on the drift rate were more pronounced than those on the starting point (see Fig. 2 and Supplementary Note 3). Such a confirmation bias led to a boost in accumulation of the veridical motion direction following high-confidence correct decisions (as such information served to confirm the original choice), whereas it led to a reduction in evidence accumulation (manifest as a lowered drift rate) following high-confidence errors (as new information served to disconfirm an originally wrong decision).
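The interaction identified by the winning model can be illustrated with a toy simulation (all parameter values below are invented for illustration and are not the paper's fitted estimates): letting confidence shift both the starting point and the drift rate toward the initial choice makes recovery from high-confidence errors rarer.

```python
# Illustrative sketch (invented parameters): a simplified accuracy-coded
# random-walk accumulator in which initial confidence shifts both the
# post-decision starting point and the drift rate toward the initial choice.
import random

def simulate_post_decision(confidence, initial_correct, n_steps=2000, dt=0.01):
    """Biased random walk between bounds at +1 (correct) and -1 (incorrect)."""
    choice_sign = 1.0 if initial_correct else -1.0
    # Confidence pulls the starting point toward the initial decision's bound...
    x = 0.3 * confidence * choice_sign
    # ...and selectively amplifies choice-consistent evidence (the
    # confirmation-bias effect on the accuracy-coded drift rate):
    drift = 0.5 + 0.8 * confidence * choice_sign
    for _ in range(n_steps):
        x += drift * dt + random.gauss(0.0, 0.1)
        if x >= 1.0:
            return True    # final decision is correct
        if x <= -1.0:
            return False   # final decision is incorrect
    return x > 0           # no bound reached: go with current evidence

random.seed(1)
# After an initial error, reaching the +1 bound is a change of mind that
# recovers the correct answer; high confidence makes this rarer.
recover = lambda c: sum(simulate_post_decision(c, False) for _ in range(2000)) / 2000
recover_low, recover_high = recover(0.2), recover(0.9)
print(recover_low > recover_high)  # True
```

In this sketch, after a high-confidence error the drift toward the correct bound is blunted or even reversed, so disconfirmatory post-decision evidence rarely overturns the initial choice, mirroring the pattern described above.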

Neural markers of post-decisional processing

While our DDM fits support a distinct influence of initial choice and confidence on post-decisional processing, they allow only indirect inference on how confidence affects evidence accumulation. To quantify this process more directly we used MEG to obtain a time-resolved neural metric of post-decision accumulation. Specifically, we trained a support vector machine (SVM) classifier on brain activity (normalized amplitude of all MEG channels) at each time point (10 ms timebins) in the pre-decision time window (lasting 850 ms from stimulus onset to the presentation of choice options; note that the trial timeline for the MEG study differed slightly from the timeline presented in Fig.  1a , see “Methods” for details) to predict which choice (left or right) was made on each trial. We then applied the trained classifier to brain activity at the corresponding time point in the post-decision time window, enabling us to derive a probabilistic prediction of neural evidence favouring a leftward versus rightward decision (see Fig.  3a left panel). Positive values indicate prediction of a rightward decision and negative values indicate prediction of a leftward decision (see Fig.  3b ). We next fitted a linear regression to the time series of classifier predictions within each trial (see Fig.  3a right panel) to obtain a trial-by-trial neural measure of the starting point (intercept) and drift rate (slope). These measures of neural evidence accumulation (slope) should be highly responsive to the presented motion direction during the post-decision period, and we show this was indeed the case (hierarchical regression: β  = 0.07, t (8550) = 6.89, p  < 10 −11 , Fig.  3b ).
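
The extraction of trial-wise neural starting points and drift rates can be sketched as follows, using synthetic classifier predictions in place of the real MEG decision variables:

```python
import numpy as np

def neural_accumulation_metrics(classifier_dv, dt=0.01):
    """Fit a line to each trial's time series of classifier predictions
    (positive = evidence for a rightward choice) to obtain a per-trial neural
    starting point (intercept) and accumulation rate (slope).
    classifier_dv: (n_trials, n_timepoints) array of decision variables;
    dt = 0.01 s corresponds to the 10 ms timebins."""
    n_trials, n_t = classifier_dv.shape
    t = np.arange(n_t) * dt
    slopes = np.empty(n_trials)
    intercepts = np.empty(n_trials)
    for i in range(n_trials):
        # np.polyfit returns coefficients from highest power down
        slopes[i], intercepts[i] = np.polyfit(t, classifier_dv[i], 1)
    return intercepts, slopes
```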

figure 3

a We trained a machine-learning classification algorithm on the pre-decision phase using MEG activity to predict left vs. right choices, and reapplied this classifier to the corresponding time point during the post-decision phase. The distance of each trial to the separating hyperplane provides a graded measure of neural evidence for a left or right decision, with changes in the classifier prediction within each trial providing a neural metric of evidence accumulation (see right hand panel). The inset shows the temporal generalization of decoding accuracy from the pre- to post-decision phases, indicating that the pre-decision classifier generalises to the post-decision phase along the major diagonal (i.e. corresponding time-points). AUC = area under the curve, DV = decision variable. b Grand average of the left/right classifier prediction in response to post-decision evidence. The light grey line shows the change in neural representation when rightward motion is presented and the black line shows the change in neural representation when leftward motion is presented. Regression lines show fits to the group-averaged data for visualisation purposes. Note that positive classifier values indicate evidence for a rightward decision and negative values evidence for a leftward decision. c Contributions of sensors to decoding left versus right decisions. The group average of contributions for each sensor is presented. In line with previous research on the neural correlates of evidence accumulation, sensors in centro-parietal regions made the highest contributions to decodability of (abstract) left versus right decisions. d – f Validation of neural metrics of post-decision evidence accumulation. Neural measures of the slope and starting point (intercept) of evidence accumulation extracted from the post-decision phase were entered as simultaneous predictors of ( d ) reaction times ( e ) accuracy and ( f ) confidence of the final decision. 
Fixed effects from a hierarchical regression model are presented ± SEM. Hierarchical regression (two-tailed): d ** p  = 0.005; e * p  = 0.045, ** p  = 0.002; f ** p  = 0.002, *** p  = 0.0004.

The slopes extracted from this analysis are signed, such that positive values indicate evidence for a rightward choice and negative values evidence for a leftward choice. In order to obtain an unsigned metric of evidence accumulation strength, we flipped the sign of slopes extracted from trials in which leftward motion was presented (we conducted the same flip for the intercept to obtain an unsigned metric of the starting point). This unsigned metric quantifies a propensity to correctly integrate the presented information, analogous to a drift rate in the accuracy coded DDM employed in Fig.  2 .
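
The sign flip described above is a one-line operation; after it, positive values indicate accumulation of the presented (veridical) motion direction. The inputs in this sketch are hypothetical:

```python
import numpy as np

def unsigned_accumulation(slopes, intercepts, motion_direction):
    """Convert signed neural metrics (+ = rightward evidence) into unsigned
    ones by flipping trials on which leftward motion (-1) was presented, so
    that positive values always indicate accumulation of the presented
    direction. motion_direction: +1 rightward, -1 leftward."""
    d = np.asarray(motion_direction)
    return np.asarray(slopes) * d, np.asarray(intercepts) * d
```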

A neural analogue of the drift-rate (or change in internal DV) should be related to characteristic features of the observer’s decision. Specifically, stronger internal evidence accumulation should be related to a higher likelihood of having made a correct decision 12 , faster response times 10 and higher confidence 11 . In order to check whether our classifier predictions satisfied these criteria for metrics of internal evidence accumulation, we entered both the trial-by-trial slope and intercept of the post-decision accumulation process as simultaneous predictors in a hierarchical regression model to predict (a) reaction times, (b) choice accuracy and (c) confidence of the final decision (see Supplementary Note  4 for a similar analysis of the pre-decision period). Steeper slopes predicted faster reaction times ( β  = −0.007, t (8549) = −2.83, p  = 0.005, see Fig.  3d ), a higher likelihood of a correct decision ( β  = 0.16, t (8549) = 3.05, p  = 0.002, see Fig.  3e ) and higher confidence ( β  = 0.14, t (8549)=3.53, p  = 0.0004, see Fig.  3f ). We also observed significant effects of the intercept on accuracy ( β  = 0.1, t (8549)=2.0, p  = 0.045, see Fig.  3e ) and confidence ( β  = 0.12, t (8549) = 3.07, p  = 0.002, see Fig.  3f ) which is to be expected if participants maintain a representation of the evidence obtained in the pre-decision phase, and if the strength of this pre-decisional accumulation predicts the likelihood of being both correct and confident.
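
The validation logic can be sketched with a plain least-squares version of the regression; the reported analyses additionally modelled subject-level random effects:

```python
import numpy as np

def simultaneous_ols(slope, intercept, outcome):
    """Regress a trial-level outcome (RT, accuracy or confidence) on the
    neural slope and intercept entered simultaneously. Non-hierarchical OLS
    sketch only; the paper's models included subject random effects."""
    X = np.column_stack([np.ones_like(slope), slope, intercept])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta          # [constant, beta_slope, beta_intercept]
```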

We next asked whether specific sensor clusters drive the classifier performance. Previous studies using EEG have identified a centro-parietal event-related potential (the centroparietal positivity or CPP) as a neural marker of internal evidence accumulation 12 , 30 , 31 . Accordingly, when identifying the features that contributed most strongly to classifier decoding accuracy (Fig.  3c ) we also found that centro-parietal sensors make a disproportionate contribution to an ability to differentiate between left and right decisions.

Having identified a neural metric of evidence accumulation, we next turned to our central question of whether confidence induces a selective accumulation for choice-consistent information as measured using MEG. As hypothesized, we found that after high confidence (vs. low confidence) decisions, accumulation of neural evidence was facilitated if it was confirmatory, but largely abolished if it was disconfirmatory (Fig.  4a, b ). In other words, our MEG analysis reveals that high confidence leads to post-decision accumulation becoming “blind” to disconfirmatory evidence. To formally quantify this effect, we entered the slope and starting point of neural evidence accumulation on each trial into hierarchical regression models with initial decision, high vs. low initial confidence and their interaction as predictors. We obtained a significant effect of initial decision ( β  = 0.042, t (8547) = 2.96, p  = 0.003) and its interaction with confidence ( β  = 0.038, t (8547) = 2.64, p  = 0.008, see Fig.  4c ) on slope in the absence of effects on starting point ( p  > 0.05). Consistent with our DDM fits, these results indicate that a confidence-induced confirmation bias is predominantly driven by a selective accumulation of choice-consistent information.

figure 4

a , b Neural metrics of post-decision accumulation separated into confirming (consistent with initial decision) and disconfirming (inconsistent with initial decision) post-decision evidence and as a function of high ( a ) and low ( b ) initial confidence. More positive values on the y-axis indicate stronger (more veridical) representation of the presented motion. Weighted group averages (grand average) are presented and regression lines are fits to this averaged data. c Effects of initial decision and confidence on the slope of neural evidence accumulation in response to post-decision evidence (slope). The righthand panel shows weighted mean values ± SEM for the strength of neural evidence integration (slope) within each condition. Grey dots represent individual participants’ data. The lefthand bar shows the fixed effect ± SEM for the initial decision × confidence interaction effect from a hierarchical regression (two-tailed): ** p  = 0.008. d Effect of confidence on temporal generalization of decoding accuracy from the pre- to the post-decision phase. Higher confidence is associated with higher decodability of the initial decision (i.e. stronger representation of the initial decision, yellow colours). A stronger representation of the initial decision was seen at the beginning of the post-decision period when confidence was high, consistent with confidence shifting a starting point towards the bound of the initial decision. The contoured area represents a cluster of timepoints with a significant main effect of confidence (permutation test, p  < 0.05 corrected for multiple comparisons). The time window starts with stimulus presentation (0 ms) and ends when the response options are presented (850 ms). Dotted lines indicate the offset of the stimulus (pre- or post-decision stimulus respectively).

We further reasoned that this approach may remain blind to changes in the starting point of post-decision evidence accumulation because of an asymmetry in evidence availability at the start of the pre- and post-decision phases. In other words, simply reapplying the (non-predictive) classifier weights obtained at the beginning of the pre-decision phase to the same time point in the post-decision phase could render the analysis pipeline blind to starting point offsets. To address this concern, we evaluated the extent to which the entire timecourse of classifier predictions obtained in the pre-decision phase generalised to the post-decision phase, without making assumptions about their relative timing 32 . This analysis provides insight into how putative processing stages identified in the pre-decision phase are reinstated in the post-decision phase, and crucially how this timecourse is affected by confidence. We found a cluster of time points in which a representation of the initial decision was activated earlier in the post- compared with the pre-decision phase when confidence was high ( p  = 0.01, corrected for multiple comparisons; Fig.  4d ). Such early reinstatement of a later processing stage is consistent with confidence enhancing a representation of the initial decision (i.e. shifting a starting point towards the bound of the initial decision) or inducing an expectation for evidence supporting an initial decision at the beginning of the post-decision period. Together these results indicate that confidence changes both the neural representation of evidence for an initial decision at the beginning of the post-decision phase (analogous to a change in starting point) as well as enhancing the processing of evidence supporting an initial decision (analogous to a change in drift rate).
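
Temporal generalization reduces to training a classifier at every pre-decision time point and testing it at every post-decision time point. A minimal sketch on synthetic data, using scikit-learn rather than the original pipeline:

```python
import numpy as np
from sklearn.svm import SVC

def temporal_generalization(pre, post, labels):
    """Train a linear classifier at each pre-decision time point and test it
    at each post-decision time point, yielding a time x time matrix of
    decoding accuracy (cf. Fig. 4d). pre/post: (n_trials, n_times,
    n_features) arrays; inputs here are synthetic, not the real MEG data."""
    n_times = pre.shape[1]
    acc = np.zeros((n_times, n_times))
    for tr in range(n_times):
        clf = SVC(kernel="linear", C=1).fit(pre[:, tr, :], labels)
        for te in range(n_times):
            acc[tr, te] = clf.score(post[:, te, :], labels)
    return acc
```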

By combining behavioural and neural modelling we provide experimental evidence that holding high confidence in a decision leads to a striking modulation of post-decision processing and the emergence of a behavioural confirmation bias. These findings are consistent with a neural representation of confidence acting as a top-down controller 25 (see Supplementary Note  7 for further analysis) that selectively amplifies processing of choice-consistent information.

A confirmation bias in the current experiment was observed in low-level perceptual decisions with limited emotional or cognitive content, suggesting that choice-induced biases in evidence accumulation represent a core principle of neural information processing 8 , 33 . In most real-world decisions, additional motivational 34 and social 35 influences (e.g. not revising a decision in order to appear self-consistent) are presumably also in play. These additional influences may amplify, or add to, effects of confidence on post-decisional processing in complex ways. An advantage of starting with an investigation of confirmation biases within lower-level tasks is that the potential for such interactions can be minimized, allowing a focused investigation of the processes that drive post-decisional shifts in evidence accumulation.

Computational modelling of the evidence accumulation process enabled further arbitration between apparently optimal information usage and a confirmation bias, by separating the influence of confidence on post-decisional starting point and drift rate. A shift in starting point is potentially normative as it may reflect the contribution of stronger pre-decision evidence to higher confidence, indicating that participants incorporate both pre- and post-decision evidence when reaching a final decision. In contrast, the influence of confidence on drift rate represents a distortion in the integration of new evidence and thus a classic instance of confirmation bias.

In turn, our usage of MEG recordings in combination with machine learning classification revealed a neural marker of these shifts in post-decision evidence accumulation. This measure complemented our behavioural modelling results and yielded direct support for a hypothesis that confidence alters the way in which the brain accumulates new information, consistent with a selective gating of choice-consistent information.

In the current task, where new evidence is always helpful, this bias against incorporating conflicting post-decision evidence is normatively maladaptive. In other scenarios, however, where new evidence may be distracting and/or actively misleading, a confirmation bias might prove helpful. For instance, previous attempts to explain the value of selective evidence accumulation focused on its role in directing attention towards aspects of the environment with the highest potential for information gain 36 , 37 , or in increasing the robustness of decisions against the influence of noise 26 , 38 . However, the fact that confidence increases choice-consistent information processing goes against the idea that confirmation bias is itself driven by a need for certainty 3 , 39 . Instead, we observed the strongest confirmation bias when people were already confident in their decisions.

The study of cognitive biases has remained largely distinct from parallel efforts to understand the processes governing evidence accumulation in simple decisions. We suggest that extending models of evidence accumulation to post-decisional processing enables a unique window onto biases in higher-order cognition 7 . Intriguingly, recent evidence suggests that alterations in post-decision processing are predictive of higher-level attitudes such as beliefs about political issues 13 , suggesting that insights gained from the study of confirmation bias in simple decisions can be applied to understand the drivers of polarization and entrenchment across a range of societal issues. For instance, a central role for confidence in shaping the fidelity of evidence accumulation indicates that metacognitive interventions may be one route towards ameliorating this pervasive cognitive bias.

Participants

Each study contained a different group of participants. We analysed data from 28 participants in study 1 ( M age  = 23.8; SD age  = 6.3; 16 female) and 23 participants in study 2 ( M age  = 25.7; SD age  = 7; 12 female). Participants were excluded based on the following set of pre-defined criteria: using the same initial confidence rating more than 90% of the time ( N  = 3 in study 1; N  = 2 in study 2), or performance below 55% or above 87.5% correct decisions in one of the pre-decision evidence conditions (see explanation of the experimental conditions below), indicating non-convergence of the staircase procedure ( N  = 3 in study 1; N  = 2 in study 2).

For MEG study 3, participants completed an initial behavioural training session before being screened according to the same criteria reported above. MEG data from a final sample of 25 subjects were analysed ( M age  = 24.6; SD age  = 4.1; 16 female). Data from four subjects could not be analysed due to technical problems with recording triggers. As we applied machine learning classification algorithms to the neural data in order to decode decisions (left versus right) and confidence (high versus low), it was important that participants showed relatively balanced responses for these two categories. Two subjects were excluded because they chose one response more than 80% of the time for either the decision or the confidence rating.

In addition to a basic payment (£10 for behaviour and £20 for MEG) participants received a performance-based bonus (up to £5 for behaviour and £8 for MEG). All studies were approved by the Research Ethics Committee of University College London (#1260-003) and all subjects gave written informed consent.

Stimuli and experimental design

The psychophysical task was an adaptation of the task used by Fleming and colleagues 18 , and programmed in MATLAB 2012a (Mathworks Inc., USA) using Psychtoolbox-3.0.14. Stimuli were random dot motion kinetograms (RDKs), viewed at a distance of approximately 45 cm. The RDKs were clouds of white dots (0.12° diameter) within a white circular aperture with a radius of 7° on a grey background that lasted for 350 ms. The direction of motion was rightward or leftward along the horizontal meridian. The speed of movement was 5° per second and the density of dots in the whole experiment was set to 60 dots per degree. Dots were plotted in interleaved sets, with each set replotted three frames later: a subset of dots, determined by the percent coherence, was offset from their previous location towards the target movement direction, another subset was offset in the opposite direction, and the rest were replotted randomly.

Unlike in a classical RDK stimulus, dots moved coherently in both the target direction and the opposite direction. The remaining dots moved randomly (percentages described below). We used a psychophysical manipulation of positive evidence to dissociate subjective confidence from objective task performance 27 . In the high positive evidence (HPE) condition the proportion of dots moving in the incorrect direction was set to 15% and the proportion moving in the correct direction was higher, staircased to ensure the targeted performance level (see below). In the low positive evidence (LPE) condition the motion coherence of dots moving in the incorrect direction was set to 5%, whereas the coherence of dots moving in the correct direction was again staircased to ensure the same performance as in the HPE condition. The rationale for this manipulation was that accuracy and confidence are usually highly correlated, hindering specific claims about the unique role of confidence. The positive evidence manipulation enabled us to selectively increase confidence while keeping performance constant, thus making it possible to determine the direct effects of changes in confidence on post-decision processing.

All experiments adopted a full 2 (pre-decision positive evidence level) by 2 (post-decision evidence strength) factorial design yielding a total of 4 experimental conditions each corresponding to 90 trials. HPE and LPE stimuli were each followed by one of two post-decision evidence conditions (weak or strong). For the post-decision evidence a constant level of evidence in the incorrect direction was employed (i.e. we did not manipulate the overall amount of positive evidence in the post-decision phase). The post-decision coherence level in the incorrect direction was derived from the averaged staircased pre-decision values as [incorrect coherence LPE + incorrect coherence HPE]/2. Weak post-decision evidence stimuli were created by specifying correct-direction coherence as [staircased correct coherence LPE + staircased correct coherence HPE]/2. Strong post-decision evidence stimuli were then derived by multiplying this coherence level by a factor of 1.3.
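
The derivation of the post-decision coherence levels from the staircased pre-decision values can be written out directly; the example staircase outputs used here are hypothetical:

```python
def post_decision_coherences(lpe_correct, hpe_correct,
                             lpe_incorrect=0.05, hpe_incorrect=0.15,
                             strong_factor=1.3):
    """Derive post-decision coherence levels from staircased pre-decision
    values, as described in the text. The correct-direction inputs are
    hypothetical staircase outputs."""
    incorrect = (lpe_incorrect + hpe_incorrect) / 2   # fixed incorrect level
    weak = (lpe_correct + hpe_correct) / 2            # weak evidence
    strong = weak * strong_factor                     # strong evidence
    return incorrect, weak, strong
```

For instance, hypothetical staircased correct-direction coherences of 0.20 (LPE) and 0.40 (HPE) yield an incorrect-direction level of 0.10, weak evidence of 0.30 and strong evidence of 0.39.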

Task procedure

In every study, participants first performed 180 trials of a calibration phase before performing the main task which consisted of 360 trials (behavioural studies) or 352 trials (MEG study).

In the calibration phase subjects judged whether the dots were moving to the left or to the right side of the screen, without rating their confidence or seeing additional post-decision evidence. The response had to be given within 1.5 s after stimulus offset. LPE and HPE stimuli were randomly interleaved. As described above, the coherence of the target direction was adapted with a staircase procedure to obtain a performance of 60% correct in study 1 and 71% correct in studies 2 and 3 40 .
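
An adaptive staircase of this kind can be sketched with a 2-down/1-up rule, which converges near 70.7% correct, close to the 71% target of studies 2 and 3. Whether this exact rule was used (ref. 40) is an assumption; this is an illustration only:

```python
def two_down_one_up(coherence, outcomes, step=0.01, floor=0.0):
    """2-down/1-up staircase sketch (assumed rule, not necessarily the one
    in ref. 40): two consecutive correct responses make the task harder,
    any error makes it easier. Returns the coherence track."""
    streak = 0
    track = [coherence]
    for correct in outcomes:
        if correct:
            streak += 1
            if streak == 2:                        # two in a row: harder
                coherence = max(floor, coherence - step)
                streak = 0
        else:                                      # any error: easier
            coherence = coherence + step
            streak = 0
        track.append(coherence)
    return track
```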

The main task had the same core structure for all studies with slight variations, explained below, to optimize each study for the specific research question and planned analysis. Participants were first presented with a moving dot stimulus before they indicated their initial decision (left or right) together with a confidence rating. In behavioural studies 1 and 2 the decision was indicated by pressing the left or right arrow key on the keyboard and was directly combined with a graded confidence rating (7-point sliding scale between 50% and 100%), where pressing the (same) arrow key again moved a slider along the confidence scale. In the MEG study, subjects first made a left versus right decision, before giving a binary high/low confidence rating. After this initial decision, participants received a second sample of moving dots (i.e. post-decision evidence) which was always in the same (correct) direction as the pre-decision evidence presentation, but of variable strength. Subjects were instructed that this evidence was bonus information that could be used to inform their final decision and confidence. After the post-decision evidence, participants were again asked to judge the motion direction and indicate their confidence.

Design alterations in behavioural study 2

In study 2 we optimized the experimental design to allow drift-diffusion modelling of the second/final decision. While in study 1 subjects had to withhold their final response for 300 ms after the offset of the post-decision evidence (i.e. responding was only possible after this delay), in study 2 participants were able to make their final response freely as soon as they had decided. This allowed us to use response times as a proxy for crossing a decision threshold, which would not have been possible if the response was delayed.

Design alterations in MEG study 3

In the MEG study, participants indicated their responses by pressing an up or down button on a keypad with their right thumb. We disentangled the participant’s decision (left/right and high/low confidence) from the motor response they had to perform (pressing the up or down key on the keypad), by randomising the mapping between decision options and key presses. Specifically, on any given trial leftward motion could be indicated by pressing the up key and on another trial by pressing the down key. Similarly, high confidence could be indicated in one trial by pressing the up key and in a different trial by pressing the down key. The mapping between decisions and motor responses was revealed once responding was possible, by presenting the letters L or R (and H or L for confidence ratings) above/below the horizontal plane. This approach ensured that decoding of motion direction was not trivially confounded by motor preparation signals. Additionally, we introduced delays of 500 ms after the presentation of each stimulus but before participants were informed about the response mappings, to allow decoding analysis to be applied in a time window when subjects could form an abstract decision about motion direction but were not yet able to prepare a response.

Scoring and bonus payment

Participants were instructed to rate their confidence as a subjective probability of being correct and were rewarded according to the correspondence between their confidence and task accuracy. An incentive-compatible Quadratic Scoring Rule 41 was applied equally to both the initial and final decisions:

where correct_i is equal to 1 on trial i if the choice was correct and 0 otherwise, and conf_i is the subject’s confidence rating on trial i. The Quadratic Scoring Rule is a proper scoring rule in that maximum earnings are obtained by jointly maximizing the accuracy of both choices and confidence ratings. This scoring rule also ensures that confidence is orthogonal to the reward the subject expects to receive for each trial: maximal reward is obtained both when one is maximally confident and right, and minimally confident and wrong. The points gained on each trial were summed and participants were given a £1 bonus payment for every 15,000 points earned. After each block participants were informed of their current total number of points. This was the only performance feedback that was given and subjects did not receive specific information regarding the correctness of their motion direction decisions.
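
The incentive structure of the scoring rule (whose exact point scaling is not reproduced in this text) can be illustrated with a normalised Brier-style quadratic score:

```python
def quadratic_score(correct, confidence):
    """Normalised quadratic (Brier-type) score, illustrating the incentive
    structure only; the study's actual point scaling is not shown here.
    correct: 1 or 0; confidence: subjective probability in [0.5, 1]."""
    return 1 - (confidence - correct) ** 2
```

Being maximally confident and correct scores best; when wrong, having been minimally confident (0.5) loses the least, so honest confidence reporting maximises expected earnings.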

Multilevel mediation analysis

A mediation analysis was carried out to examine whether the effect of positive evidence on changes of mind was mediated by a shift in confidence (see Supplementary Notes  5 and 6 ). We implemented a multilevel mediation model with subjects as random effects, using the Multilevel Mediation and Moderation (M3) Toolbox 42 . Mediation analysis assesses whether covariance between two variables (predictor and dependent variable) is explained by a third mediator variable. Significant mediation is obtained when inclusion of the mediator in the model significantly alters the slope of the predictor-dependent variable relationship (evaluated as the product of the predictor-mediator and mediator-dependent variable path coefficients). In a logistic regression model the two positive evidence conditions (i.e. coded as HPE = 2, LPE = 1) were entered as the predictor variable, changes of mind as the dependent variable (coded as change of mind = 1, no change of mind = 0) with confidence ratings as the mediator variable. We controlled for covariates that potentially could have had a confounding influence on these linkages such as accuracy, reaction time, post-decision evidence strength and the interaction between accuracy × post-decision evidence strength. The following effects of interest were simultaneously tested: the impact of positive evidence on confidence ratings (path a); the impact of confidence ratings on changes of mind, controlling for positive evidence (path b); and the formal mediation of positive evidence on changes of mind by confidence (path a × b). The direct effect of positive evidence on changes of mind before and after controlling for confidence was also estimated (paths c and c’, respectively). Parameter estimates for each path (a, b, c, a × b, c’) were obtained by bootstrapping 200,000 times with replacement, producing two-tailed p -values and 95% confidence intervals. 
In a control model in which the predictor and mediator variables were swapped, no mediation effect was found.
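
The a × b logic of the mediation analysis can be sketched in a stripped-down, single-level linear form; the reported analysis was a multilevel logistic model with covariates, fitted with the M3 toolbox:

```python
import numpy as np

rng = np.random.default_rng(1)

def indirect_effect(x, m, y, n_boot=2000):
    """Single-level linear mediation sketch: path a (x -> m), path b
    (m -> y controlling for x), and a bootstrapped a*b indirect effect with
    a 95% CI. Illustrates the logic only; it is not the multilevel logistic
    model used in the paper."""
    def ab(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                   # path a: x -> m
        X = np.column_stack([np.ones_like(xs), ms, xs])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][1]   # path b: m -> y | x
        return a * b
    n = len(x)
    boot = np.array([ab(rng.integers(0, n, n)) for _ in range(n_boot)])
    return ab(np.arange(n)), np.percentile(boot, [2.5, 97.5])
```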

Drift-diffusion modelling

Drift-diffusion modelling was conducted in Python 2 using Jupyter Notebook (5.50). The model was fit using accuracy coding such that decision boundaries and reaction time distributions corresponded to those for correct and incorrect responses. However, by design, initially correct decisions led to confirming post-decision evidence (because the motion direction was always the same in the pre and post-decision periods) and initially incorrect decisions always led to disconfirming post-decision evidence.

Within the DDM there are two natural ways to account for biases in a decision process: by shifting the starting point towards one of the decision boundaries, or by altering the drift rate to induce a bias in the processing of information. We also considered the possibility that other factors (e.g. decision bound) could be altered, but in initial simulations such changes were unable to explain the observed behavioural patterns. Since it has been reported that confidence might affect boundary separation 29 , we included a dependency of the boundary separation on confidence in each of the models (note however that a symmetrical influence on boundary separation cannot explain any choice-dependent effects on changes of mind).

A hierarchical Bayesian variant of the DDM (hDDM) enabled us to investigate the dependencies of the model parameters on the initial decision and confidence on a trial-by-trial basis 43 . The hDDM simultaneously estimates individual parameters drawn from a group distribution using Markov-Chain Monte-Carlo methods. This procedure not only estimates the most likely value of the model parameters but also uncertainty in the estimate. The hDDM toolbox 43 was used to compare 10 hDDMs. The best-fitting model was identified by comparing Deviance Information Criterion scores and ensuring that the winning model adequately fitted the qualitative data patterns (see Supplementary Note  2 ). A regression analysis was used to investigate the dependency of the starting point and drift-rate parameters on the initial decision (1 = correct decision leading to confirmatory post-decision evidence, −1 = incorrect decision leading to disconfirmatory post-decision evidence), initial confidence (parametrically ranging from −1 to 1) or their interaction.

In all models the drift rate, starting point, non-decision time and boundary separation were fitted hierarchically with individual parameter estimates for each participant, whereas dependencies of starting point and drift-rate on experimental factors were estimated as fixed group-level effects. In all model fits we incorporated an influence of post-decision evidence strength on the drift-rate. First a baseline model was estimated where none of the parameters depended on confidence or an initial decision. Subsequently, we created three model families that had dependencies of starting point and/or drift-rate on (i) initial confidence, (ii) initial decision or (iii) the interaction of initial confidence × initial decision (i.e. confidence was allowed to amplify or attenuate the influence of the initial decision on the starting point and/or drift-rate). Within each model family we created three different models with dependencies of these variables on starting point, drift-rate or both.

Baseline model (Model 1):

Confidence dependency (Model 4):

Initial decision dependency (Model 7):

Full model (Model 10):
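
The regression structure shared by these models can be sketched as a trial-wise design matrix: the full model (Model 10) includes all three regressors for both the drift rate and the starting point, and the sub-models drop terms. The coefficient values below are placeholders, not fitted estimates:

```python
import numpy as np

def model10_parameters(decision, confidence, beta_v, beta_z,
                       v0=1.0, z0=0.5):
    """Trial-wise regression of the full model (Model 10):
    v = v0 + bv1*decision + bv2*confidence + bv3*decision*confidence,
    and analogously for the starting point z. decision is +1/-1 and
    confidence ranges parametrically from -1 to 1; all coefficients here
    are placeholders, not the fitted group-level effects."""
    decision = np.asarray(decision, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    X = np.column_stack([decision, confidence, decision * confidence])
    v = v0 + X @ np.asarray(beta_v)
    z = z0 + X @ np.asarray(beta_z)
    return v, z
```

In the hDDM toolbox such dependencies are typically specified as Patsy-style regression strings (e.g. 'v ~ decision * confidence') rather than built by hand.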

RTs faster than 200 ms were discarded from the model fits and the outlier probability was set to 0.05, as recommended in previous literature 43 , 44 . The models were estimated with a Markov chain of 100,000 samples with 50,000 burn-in samples (i.e. discarding the first 50,000 iterations), and a thinning factor of 25, resulting in 2000 posterior samples. To ensure convergence, the posterior traces and their autocorrelation were inspected and the Gelman–Rubin statistic was calculated for each parameter (see Supplementary Table  1 ). The posterior distributions of the best-fitting model were interrogated to retrieve parameter estimates.

The winning model was characterized by a regression equation that incorporates effects of confidence, the initial decision and their interaction (i.e. the full model) on the starting point and drift-rate. The Deviance Information Criterion scores of all models are shown in Supplementary Fig.  3A . The model parameters of the best-fitting model are shown in Fig.  2d .

MEG pre-processing

MEG was recorded continuously at 600 samples/second using a whole-head 273-channel axial gradiometer system (CTF Omega, VSM MedTech), while participants sat upright inside the scanner. Data was segmented into 8200 ms segments from −200 ms to +8000 ms relative to trial onset, where each segment encompassed one trial. Each epoch was aligned to the onset of the trial or, for analysis of the post-decisional phase, was realigned to the onset of post-decision evidence (to minimize any presentation delays that may have occurred during the trial). The data were resampled from 600 to 100 Hz to conserve processing time and improve signal to noise ratio, resulting in data samples spaced every 10 ms. All data were then high-pass filtered at 0.5 Hz to remove slow drift. All analyses were performed directly on the filtered, cleaned MEG signal, consisting of a 273 channel × 821 sample matrix for each trial, in units of femtotesla.
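
The resampling and high-pass filtering steps can be sketched as follows; the filter type and order used here (second-order zero-phase Butterworth via SciPy) are assumptions, since the text does not specify the implementation, and the exact output sample count depends on the resampler:

```python
import numpy as np
from scipy.signal import resample_poly, butter, filtfilt

def preprocess_epoch(epoch, fs_in=600, fs_out=100, hp_hz=0.5):
    """Sketch of the stated preprocessing: resample 600 -> 100 Hz, then
    high-pass at 0.5 Hz to remove slow drift. Filter design (2nd-order
    Butterworth, zero-phase) is an assumption. epoch: (n_channels,
    n_samples) array."""
    epoch = resample_poly(epoch, fs_out, fs_in, axis=-1)
    b, a = butter(2, hp_hz / (fs_out / 2), btype="highpass")
    return filtfilt(b, a, epoch, axis=-1)
```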

Generalising a pre-decision classifier to the post-decision phase

We built a machine-learning classification algorithm to predict participants’ decisions on each trial (leftward vs. rightward motion) at each timepoint during the decision phase. Having trained such an algorithm, we could then apply it to a distinct set of trials and use the probabilistic prediction of the classifier as a neural DV for leftward versus rightward motion 45,46. Specifically, we used a support-vector machine (SVM) classifier trained on sensor-level whole-brain activity (normalized amplitude of all MEG channels). The classifier labels were the trial-by-trial choices made by participants (left or right), while the features comprised the activity at each MEG sensor at a given time point (z-scored for each time point; average activity over a 100 ms window, shifted in steps of 10 ms). The classifier was trained on MEG activity in the pre-decision phase (e.g. 250 ms after the onset of pre-decision evidence) and then reapplied to the corresponding time point in the post-decision phase (e.g. 250 ms after the onset of post-decision evidence). We computed the predictions of the classifier across an 850 ms time window, starting with post-decision stimulus onset and ending with the presentation of response options (i.e. when the mapping between choices and motor responses was revealed).
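The train-on-pre, test-on-post scheme can be sketched as follows, using scikit-learn's libsvm-backed SVC as a stand-in for the svmtrain/svmpredict routines and fully synthetic data (trial counts, signal strength and the shared choice-related pattern are assumptions for illustration):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_sensors = 200, 273
choices = rng.integers(0, 2, n_trials)          # 0 = left, 1 = right

def window_features(data, t, win=10):
    """Average sensor activity over a 100 ms window (10 samples at 100 Hz)
    starting at sample t, z-scored across trials for each feature."""
    X = data[:, :, t:t + win].mean(axis=2)
    return (X - X.mean(0)) / X.std(0)

# Synthetic pre- and post-decision data: a choice-related sensor pattern
# shared across both phases, buried in independent noise
pattern = rng.normal(size=n_sensors)
pre = rng.normal(size=(n_trials, n_sensors, 100))
post = rng.normal(size=(n_trials, n_sensors, 100))
for d in (pre, post):
    d += (choices[:, None] * 2 - 1)[:, :, None] * pattern[None, :, None] * 0.5

t = 25  # e.g. 250 ms after evidence onset at 100 Hz
clf = SVC(kernel="linear", C=1.0)               # scikit-learn wraps libsvm
clf.fit(window_features(pre, t), choices)       # train on the pre-decision phase
dv = clf.decision_function(window_features(post, t))  # graded neural DV
acc = float(((dv > 0).astype(int) == choices).mean())
print(acc)
```

Because the choice-related pattern is shared across phases, the pre-decision classifier transfers to the post-decision window, and its continuous decision value serves as the graded neural DV.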

We used linear kernels and a default regularization parameter of C = 1 within the svmtrain/svmpredict routines of libsvm 47. A leave-one-out procedure was used, training the classifier on all trials except one (using pre-decision data only) and testing it on the left-out trial (using post-decision data). Training the SVM yields a hyperplane that best separates the two classes of trials (see Fig. 3a) in a high-dimensional space. If a trial is far away from this hyperplane it is unlikely to be a misclassification, while trials that are close to the hyperplane might easily be misclassified. Thus, the distance to the hyperplane represents the decodable evidence for a decision and can be used as a graded measure of the neural DV 45,46.
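The relation between the classifier's decision value and the geometric distance to the hyperplane can be made explicit with a small sketch (scikit-learn's linear SVC as an illustrative stand-in; data are synthetic):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=100) > 0).astype(int)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# For a linear SVM, decision_function(x) = w . x + b; dividing by ||w||
# turns it into a signed geometric distance to the separating hyperplane
w, b = clf.coef_[0], clf.intercept_[0]
dv = clf.decision_function(X)
distance = dv / np.linalg.norm(w)

# Trials far from the hyperplane carry strong decodable evidence for one
# class; trials near it carry little and are easily misclassified
print(float(np.abs(distance).max()))
```

The sign of the distance gives the predicted class and its magnitude the strength of the evidence, which is what makes it usable as a graded neural DV.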

After reapplying the classifier to every trial and time point during the post-decision phase, we obtained a timeseries of neural evidence accumulation within each trial (see Fig. 3a, right panel). We focussed on the window from the onset of the post-decision stimulus to the timepoint of peak decodability, at which the pre-decision classifier best generalized to the post-decision phase. The accumulation process can be summarized by fitting a linear regression to the time series (see Fig. 3a, right panel) on each trial, where the slope is analogous to the drift rate in a DDM and the intercept analogous to the starting point. A positive slope corresponds to a change of the neural DV towards predicting rightward motion decisions, while a negative slope corresponds to a change towards leftward motion (see Fig. 3b). By reversing the sign of the slope on trials in which leftward motion was presented, we could derive a general index of the sensitivity of the neural DV to the motion direction presented on the screen (see Fig. 4a, b).
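The per-trial summary can be sketched in NumPy; the trial count, window length and drift magnitude below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_t = 100, 60                     # e.g. 600 ms of DV samples at 10 ms steps
times = np.arange(n_t) * 0.01               # seconds from post-decision onset
direction = rng.choice([-1, 1], n_trials)   # -1 = leftward, +1 = rightward presented

# Synthetic neural DV: drifts towards the presented direction plus noise
dv = direction[:, None] * times[None, :] * 2.0 + rng.normal(0, 0.3, (n_trials, n_t))

# Per-trial linear fit: slope ~ drift rate, intercept ~ starting point
slopes, intercepts = np.polyfit(times, dv.T, deg=1)

# Sign-flip slopes on leftward trials so that positive values always mean
# accumulation towards the presented motion direction
aligned = slopes * direction
print(float(aligned.mean()))
```

After sign-alignment, the mean slope acts as a single sensitivity index for how strongly the neural DV tracks the presented motion direction.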

Based on our behavioural findings we expected that both the slope and the intercept would be influenced by the interaction of initial decision (confirmatory post-decision evidence = 1; disconfirmatory post-decision evidence = −1) × confidence (low confidence = −1; high confidence = 1). Thus, we entered the initial decision, confidence and their interaction as simultaneous predictors in a hierarchical regression model.
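A non-hierarchical sketch of this regression (ordinary least squares standing in for the paper's hierarchical model; effect sizes and trial counts are illustrative) shows how the ±1 coding yields main effects and an interaction:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
decision = rng.choice([-1, 1], n)    # confirmatory (+1) vs disconfirmatory (-1)
confidence = rng.choice([-1, 1], n)  # high (+1) vs low (-1) confidence

# Simulate slopes whose decision effect grows with confidence
slope = (1.0 + 0.5 * decision + 0.2 * confidence
         + 0.4 * decision * confidence + rng.normal(0, 0.5, n))

# Design matrix: intercept, main effects and their interaction as
# simultaneous predictors
X = np.column_stack([np.ones(n), decision, confidence, decision * confidence])
beta, *_ = np.linalg.lstsq(X, slope, rcond=None)
print(beta)
```

With this coding, the interaction coefficient directly captures how much the effect of the initial decision on the neural DV is amplified under high relative to low confidence.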

MEG topography contributing to classification accuracy

To explore which brain areas carried the information about evidence for a left versus a right decision (or high versus low confidence, as reported in Supplementary Note 6), we trained an SVM classifier for each participant at the time point of highest decodability (see Supplementary Fig. 6 for the whole timeline) using subsets of 30 randomly selected sensors and repeated this procedure 2500 times. The contribution of each sensor s was taken to be the mean of all prediction accuracies achieved using an ensemble of 30 sensors that included s 48,49.
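The subset-averaging logic can be sketched as follows; the per-subset decoding accuracies here are hypothetical stand-ins for the SVM accuracies (only the averaging scheme is taken from the text):

```python
import numpy as np

rng = np.random.default_rng(6)
n_sensors, n_subsets, subset_size = 273, 2500, 30

# Hypothetical ground truth: only the first 20 sensors are informative,
# so ensembles containing them decode slightly better
informative = np.zeros(n_sensors)
informative[:20] = 0.1

accuracies = np.empty(n_subsets)
membership = np.zeros((n_subsets, n_sensors), dtype=bool)
for i in range(n_subsets):
    sensors = rng.choice(n_sensors, size=subset_size, replace=False)
    membership[i, sensors] = True
    # stand-in for the decoding accuracy of a classifier trained on this subset
    accuracies[i] = 0.5 + informative[sensors].mean() + rng.normal(0, 0.01)

# Contribution of sensor s: mean accuracy of all ensembles that included s
contribution = np.array([accuracies[membership[:, s]].mean()
                         for s in range(n_sensors)])
print(float(contribution[:20].mean() - contribution[20:].mean()))
```

Sensors that genuinely carry information end up, on average, in better-performing ensembles, so their mean ensemble accuracy exceeds that of uninformative sensors.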

MEG temporal generalization

The extent to which a classifier trained on neural data obtained from one time point generalizes to other time points can provide insight into how mental representations change over time 32. We utilized this temporal generalization method to formally test whether the same processing steps (leading up to a decision) occur at similar times in the pre- and post-decision phases (see Supplementary Fig. 9 for temporal generalization restricted to the pre-decision phase). Most critically, we also investigated whether this processing cascade was altered by participants’ confidence in their choice.

For the temporal generalization analysis we trained our classifier on every timepoint in the pre-decision phase and tested it on every timepoint in the post-decision phase, yielding a 2D matrix of decoding accuracy (see Fig. 3a, top-left panel). A fourfold stratified cross-validation was implemented for each subject and repeated 100 times to account for potential random biases in assigning trials to folds. Through this stratification we obtained a balanced number of trials within each condition in each fold (left/right decision, high/low confidence, change/no change of mind, and all combinations of these factors). Classifiers were trained on three out of four folds and tested on the left-out fold. Decoding accuracy was quantified as the area under the receiver operating characteristic curve (AUC) for predicting the decision from the continuous DV output by the classifier. Decoding accuracy was calculated separately for the four different conditions (low confidence and change of mind; high confidence and change of mind; low confidence and no change of mind; high confidence and no change of mind). Importantly, classification accuracy was based on how well the initial decision (rather than the final decision) could be predicted from the neural data. Since we are dealing with a two-class decoding problem, one can directly infer the decoding accuracy of the alternative decision from the classification accuracy of the initial decision.
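A compact sketch of the train-time × test-time generalization matrix, with scikit-learn's SVC and AUC scoring on synthetic data (far fewer trials, sensors and timepoints than the real analysis, and a time-stable choice pattern assumed for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(7)
n_trials, n_sensors, n_times = 120, 30, 6
y = rng.integers(0, 2, n_trials)                 # initial decision (left/right)

# Synthetic data: the choice-related pattern is stable across time and
# phases, so a classifier trained at any pre timepoint should generalize
pattern = rng.normal(size=n_sensors)
def make_phase():
    d = rng.normal(size=(n_trials, n_times, n_sensors))
    d += (y * 2 - 1)[:, None, None] * pattern[None, None, :] * 0.8
    return d
pre, post = make_phase(), make_phase()

gen = np.zeros((n_times, n_times))               # train time x test time
cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(pre[:, 0, :], y):
    for t_train in range(n_times):
        clf = SVC(kernel="linear", C=1.0).fit(pre[train_idx, t_train, :],
                                              y[train_idx])
        for t_test in range(n_times):
            dv = clf.decision_function(post[test_idx, t_test, :])
            gen[t_train, t_test] += roc_auc_score(y[test_idx], dv) / cv.get_n_splits()
print(float(gen.mean()))
```

With a time-stable representation the whole matrix is above chance; time-specific representations would instead produce a diagonal pattern, which is the diagnostic contrast the method exploits.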

We estimated the main effect of confidence on decoding accuracy to isolate confidence-induced changes in temporal generalisation from the pre- to post-decision phase. We used a cluster-based permutation test 50,51 to determine statistical significance (p < 0.05, corrected for multiple comparisons). We calculated the contrast of high > low confidence, averaging over change/no change of mind trials [[high confidence and no change of mind − low confidence and no change of mind] + [high confidence and change of mind − low confidence and change of mind]]. We identified clusters of adjacent timepoints that each individually exceeded the t-value corresponding to p < 0.05 uncorrected, and stored the sum of t-values for each cluster. We then applied a sign-flip permutation test (randomly switching the contrast direction for a subset of subjects, i.e. low − high instead of high − low) and repeated this procedure 1000 times. The distribution of summed t-values over all permutations formed the null distribution for our statistical test. If the observed sum of t-values within a cluster exceeded the 5% tail of this distribution (calculated separately for negative and positive values), we labelled this cluster as showing a significant main effect of confidence in this portion of the temporal generalisation matrix.
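The cluster-mass sign-flip procedure can be sketched in one dimension (a generic per-subject contrast with an injected effect stands in for the confidence contrast; positive clusters only, for brevity):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_subjects, n_times = 20, 50
# Per-subject contrast (e.g. high - low confidence) at each timepoint,
# with a genuine effect injected at timepoints 20-29
contrast = rng.normal(0, 1, (n_subjects, n_times))
contrast[:, 20:30] += 1.0

def cluster_masses(data, t_thresh):
    """Sum of t-values within each run of adjacent supra-threshold timepoints."""
    t = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(len(data)))
    masses, mass = [], 0.0
    for ti in np.where(t > t_thresh, t, 0.0):
        if ti > 0:
            mass += ti
        elif mass > 0:
            masses.append(mass); mass = 0.0
    if mass > 0:
        masses.append(mass)
    return masses

t_thresh = stats.t.ppf(0.975, n_subjects - 1)    # two-sided p < .05, uncorrected
observed = max(cluster_masses(contrast, t_thresh))

# Sign-flip permutations: randomly invert the contrast per subject
null = np.empty(1000)
for p in range(1000):
    flips = rng.choice([-1, 1], n_subjects)[:, None]
    masses = cluster_masses(contrast * flips, t_thresh)
    null[p] = max(masses) if masses else 0.0

p_value = float((null >= observed).mean())
print(p_value)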

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

Anonymised data and code are available at a dedicated GitHub repository [ https://github.com/MaxRollwage/NatureCommunications ]. The source data underlying Figs. 1b–d, 2b–d, 3b, d–f and 4a–d are provided as part of this repository. A reporting summary for this Article is available as a Supplementary Information file.

Code availability

Code supporting this study is available at a dedicated GitHub repository [ https://github.com/MaxRollwage/NatureCommunications ].

1. Pomerantz, E. M., Chaiken, S. & Tordesillas, R. S. Attitude strength and resistance processes. J. Pers. Soc. Psychol. 69, 408 (1995).

2. Park, J., Konana, P., Gu, B., Kumar, A. & Raghunathan, R. Confirmation bias, overconfidence, and investment performance: evidence from stock message boards. McCombs Res. Pap. Ser. No. IROM-07-10 (2010).

3. Nickerson, R. S. Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220 (1998).

4. Lord, C. G., Ross, L. & Lepper, M. R. Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence. J. Pers. Soc. Psychol. 37, 2098 (1979).

5. Kaplan, J. T., Gimbel, S. I. & Harris, S. Neural correlates of maintaining one’s political beliefs in the face of counterevidence. Sci. Rep. 6, 1–11 (2016).

6. Nyhan, B. & Reifler, J. When corrections fail: the persistence of political misperceptions. Polit. Behav. 32, 303–330 (2010).

7. Talluri, B. C., Urai, A. E., Tsetsos, K., Usher, M. & Donner, T. H. Confirmation bias through selective overweighting of choice-consistent evidence. Curr. Biol. 28, 3128–3135 (2018).

8. Urai, A. E., De Gee, J. W., Tsetsos, K. & Donner, T. H. Choice history biases subsequent evidence accumulation. Elife 8, e46331 (2019).

9. Braun, A., Urai, A. E. & Donner, T. H. Adaptive history biases result from confidence-weighted accumulation of past choices. J. Neurosci. 38, 2189–17 (2018).

10. Gold, J. I. & Shadlen, M. N. The neural basis of decision making. Annu. Rev. Neurosci. 30, 535–574 (2007).

11. Kiani, R. & Shadlen, M. N. Representation of confidence associated with a decision by neurons in the parietal cortex. Science 324, 759–764 (2009).

12. O’Connell, R. G., Dockree, P. M. & Kelly, S. P. A supramodal accumulation-to-bound signal that determines perceptual decisions in humans. Nat. Neurosci. 15, 1729–1735 (2012).

13. Rollwage, M., Dolan, R. J. & Fleming, S. M. Metacognitive failure as a feature of those holding radical beliefs. Curr. Biol. 28, 4014–4021 (2018).

14. Rouault, M., Seow, T., Gillan, C. M. & Fleming, S. M. Psychiatric symptom dimensions are associated with dissociable shifts in metacognition but not task performance. Biol. Psychiatry 84, 443–451 (2018).

15. Hauser, T. U., Allen, M., Rees, G. & Dolan, R. J. Metacognitive impairments extend perceptual decision making weaknesses in compulsivity. Sci. Rep. 7, 6614 (2017).

16. Sanders, J. I., Hangya, B. & Kepecs, A. Signatures of a statistical computation in the human sense of confidence. Neuron 90, 499–506 (2016).

17. Pleskac, T. J. & Busemeyer, J. R. Two-stage dynamic signal detection: a theory of choice, decision time, and confidence. Psychol. Rev. 117, 864–901 (2010).

18. Fleming, S. M., van der Putten, E. J. & Daw, N. D. Neural mediators of changes of mind about perceptual decisions. Nat. Neurosci. https://doi.org/10.1038/s41593-018-0104-6 (2018).

19. Van Den Berg, R. et al. A common mechanism underlies changes of mind about decisions and confidence. Elife 5, e12192 (2016).

20. Resulaj, A., Kiani, R., Wolpert, D. M. & Shadlen, M. N. Changes of mind in decision-making. Nature 461, 263 (2009).

21. Bronfman, Z. Z. et al. Decisions reduce sensitivity to subsequent information. Proc. R. Soc. B Biol. Sci. 282, 20150228 (2015).

22. Desender, K., Boldt, A. & Yeung, N. Subjective confidence predicts information seeking in decision making. Psychol. Sci. https://doi.org/10.1177/0956797617744771 (2018).

23. Moran, R., Teodorescu, A. R. & Usher, M. Post choice information integration as a causal determinant of confidence: novel data and a computational account. Cogn. Psychol. 78, 99–147 (2015).

24. Desender, K., Murphy, P., Boldt, A., Verguts, T. & Yeung, N. A post-decisional neural marker of confidence predicts information-seeking in decision-making. J. Neurosci. 39, 3309–3319 (2019).

25. Atiya, N. A. A., Rañó, I., Prasad, G. & Wong-Lin, K. A neural circuit model of decision uncertainty and change-of-mind. Nat. Commun. 10, 2287 (2019).

26. Qiu, C., Luu, L. & Stocker, A. A. Benefits of commitment in hierarchical inference. Psychol. Rev. https://doi.org/10.1037/rev0000193 (2020).

27. Zylberberg, A., Barttfeld, P. & Sigman, M. The construction of confidence in a perceptual decision. Front. Integr. Neurosci. 6, 1–10 (2012).

28. Desender, K., Boldt, A., Verguts, T. & Donner, T. H. Confidence predicts speed-accuracy tradeoff for subsequent decisions. Elife 8, e43499 (2019).

29. van den Berg, R., Zylberberg, A., Kiani, R., Shadlen, M. N. & Wolpert, D. M. Confidence is the bridge between multi-stage decisions. Curr. Biol. 26, 3157–3168 (2016).

30. Kelly, S. P. & O’Connell, R. G. The neural processes underlying perceptual decision making in humans: recent progress and future directions. J. Physiol. Paris 109, 27–37 (2015).

31. Tagliabue, C. F. et al. The EEG signature of sensory evidence accumulation during decision formation closely tracks subjective perceptual experience. Sci. Rep. 9, 4949 (2019).

32. King, J. R. & Dehaene, S. Characterizing the dynamics of mental representations: the temporal generalization method. Trends Cogn. Sci. 18, 203–210 (2014).

33. Luu, L. & Stocker, A. A. Post-decision biases reveal a self-consistency principle in perceptual inference. Elife 7, e33334 (2018).

34. Taber, C. S., Cann, D. & Kucsova, S. The motivated processing of political arguments. Polit. Behav. 31, 137–155 (2009).

35. Kappes, A., Harvey, A. H., Lohrenz, T., Montague, P. R. & Sharot, T. Confirmation bias in the utilization of others’ opinion strength. Nat. Neurosci. 23, 130–137 (2020).

36. Cheadle, S. et al. Adaptive gain control during human perceptual choice. Neuron 81, 1429–1441 (2014).

37. Parr, T., Benrimoh, D. A., Vincent, P. & Friston, K. J. Precision and false perceptual inference. Front. Integr. Neurosci. 12, 39 (2018).

38. Tsetsos, K. et al. Economic irrationality is optimal during noisy decision making. Proc. Natl Acad. Sci. USA 113, 3102–3107 (2016).

39. Skov, R. B. & Sherman, S. J. Information-gathering processes: diagnosticity, hypothesis-confirmatory strategies, and perceived hypothesis confirmation. J. Exp. Soc. Psychol. 22, 93–121 (1986).

40. García-Pérez, M. A. Forced-choice staircases with fixed step sizes: asymptotic and small-sample properties. Vis. Res. 38, 1861–1881 (1998).

41. Brier, G. W. Verification of forecasts expressed in terms of probability. Mon. Weather Rev. 78, 1–3 (1950).

42. Wager, T. D., Davidson, M. L., Hughes, B. L., Lindquist, M. A. & Ochsner, K. N. Prefrontal-subcortical pathways mediating successful emotion regulation. Neuron 59, 1037–1050 (2008).

43. Wiecki, T. V., Sofer, I. & Frank, M. J. HDDM: hierarchical Bayesian estimation of the drift-diffusion model in Python. Front. Neuroinform. 7, 1–10 (2013).

44. Ratcliff, R. & Tuerlinckx, F. Estimating parameters of the diffusion model: approaches to dealing with contaminant reaction times and parameter variability. Psychon. Bull. Rev. https://doi.org/10.3758/BF03196302 (2002).

45. Peters, M. A. K. et al. Perceptual confidence neglects decision-incongruent evidence in the brain. Nat. Hum. Behav. 1, 1–34 (2017).

46. Cortese, A., Amano, K., Koizumi, A., Kawato, M. & Lau, H. Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance. Nat. Commun. 7, 1–18 (2016).

47. Chang, C.-C. & Lin, C.-J. LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2, 1–27 (2011).

48. Liu, Y., Dolan, R. J., Kurth-Nelson, Z. & Behrens, T. E. J. Human replay spontaneously reorganizes experience. Cell 178, 640–652 (2019).

49. Kurth-Nelson, Z., Barnes, G., Sejdinovic, D., Dolan, R. & Dayan, P. Temporal structure in associative retrieval. Elife 2015, 1–18 (2015).

50. Nichols, T. E. & Holmes, A. P. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum. Brain Mapp. 15, 1–25 (2002).

51. Maris, E. & Oostenveld, R. Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164, 177–190 (2007).

Acknowledgements

We thank Dr Sam Ereira for help with implementing the machine learning MEG analysis. M.R. is a predoctoral Fellow of the International Max Planck Research School on Computational Methods in Psychiatry and Ageing Research. The participating institutions are the Max Planck Institute for Human Development and University College London (UCL). The Wellcome Centre for Human Neuroimaging is supported by core funding from the Wellcome Trust (203147/Z/16/Z). S.M.F. is supported by a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society (206648/Z/17/Z). T.U.H. is supported by a Wellcome/Royal Society Sir Henry Dale Fellowship (211155/Z/18/Z), a grant from the Jacobs Foundation (2017-1261-04), the Medical Research Foundation, and a 2018 NARSAD Young Investigator grant (27023) from the Brain & Behaviour Research Foundation.

Author information

Authors and Affiliations

Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3BG, UK

Max Rollwage, Alisa Loosen, Tobias U. Hauser, Rani Moran, Raymond J. Dolan & Stephen M. Fleming

Max Planck University College London Centre for Computational Psychiatry and Ageing Research, London WC1B 5EH, UK

Department of Experimental Psychology, University College London, London WC1H 0AP, UK

Stephen M. Fleming

Contributions

S.M.F. and M.R. conceptualized the study. M.R. developed the methodology under the supervision of S.M.F. and with input from T.U.H. A.L. and M.R. conducted the experiments. M.R. analysed the data under the supervision of S.M.F. and with input from R.M. and T.U.H. M.R. and S.M.F. wrote the first draft of the manuscript, which was revised and edited by A.L., T.U.H., R.M. and R.J.D.

Corresponding author

Correspondence to Max Rollwage .

Ethics declarations

Competing interests

All authors declare no competing interests.

Additional information

Peer review information Nature Communications thanks Redmond O’Connell, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

  • Supplementary Information
  • Peer Review File
  • Reporting Summary

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Rollwage, M., Loosen, A., Hauser, T.U. et al. Confidence drives a neural confirmation bias. Nat. Commun. 11, 2634 (2020). https://doi.org/10.1038/s41467-020-16278-6

Received : 20 September 2019

Accepted : 23 April 2020

Published : 26 May 2020

DOI : https://doi.org/10.1038/s41467-020-16278-6
