Stanley Milgram Shock Experiment

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Stanley Milgram, a psychologist at Yale University, carried out one of the most famous studies of obedience in psychology.

He conducted an experiment focusing on the conflict between obedience to authority and personal conscience.

Milgram (1963) examined the justifications for acts of genocide offered by those accused at the Nuremberg war crimes trials after World War II. Their defense was often based on obedience: that they were just following orders from their superiors.

The experiments began in July 1961, a year after the trial of Adolf Eichmann in Jerusalem. Milgram devised the experiment to answer the question:

“Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?” (Milgram, 1974).

Milgram (1963) wanted to investigate whether Germans were particularly obedient to authority figures, as this was a common explanation for the Nazi killings in World War II.

Milgram recruited participants for his experiment through newspaper advertisements calling for male volunteers to take part in a study of learning at Yale University.

The procedure was that the participant was paired with another person and they drew lots to find out who would be the ‘learner’ and who would be the ‘teacher.’  The draw was fixed so that the participant was always the teacher, and the learner was one of Milgram’s confederates (pretending to be a real participant).


The learner (a confederate called Mr. Wallace) was taken into a room and had electrodes attached to his arms. The teacher and researcher went into a room next door that contained an electric shock generator and a row of switches marked from 15 volts (Slight Shock) through 375 volts (Danger: Severe Shock) up to 450 volts (XXX).

The shocks in Stanley Milgram’s obedience experiments were not real. The “learners” were actors who were part of the experiment and did not actually receive any shocks.

However, the “teachers” (the real participants of the study) believed the shocks were real, which was crucial for the experiment to measure obedience to authority figures even when it involved causing harm to others.

Milgram’s Experiment (1963)

Milgram (1963) was interested in researching how far people would go in obeying an instruction if it involved harming another person.

Stanley Milgram was interested in how easily ordinary people could be influenced into committing atrocities, for example, Germans in WWII.

Volunteers were recruited for what they were told was a controlled experiment investigating “learning” (a deception that is relevant to the ethical issues discussed below).

Participants were 40 males, aged between 20 and 50, whose jobs ranged from unskilled to professional, from the New Haven area. They were paid $4.50 for just turning up.


At the beginning of the experiment, they were introduced to another participant, a confederate of the experimenter (Milgram).

They drew straws to determine their roles – learner or teacher – although this was fixed, and the confederate was always the learner. There was also an “experimenter” dressed in a gray lab coat, played by an actor (not Milgram).

Two rooms in the Yale Interaction Laboratory were used – one for the learner (with an electric chair) and another for the teacher and experimenter with an electric shock generator.


The “learner” (Mr. Wallace) was strapped to a chair with electrodes.

After the learner has studied a list of word pairs, the “teacher” tests him by naming a word and asking the learner to recall its partner from a list of four possible choices.

The teacher is told to administer an electric shock every time the learner makes a mistake, increasing the level of shock each time. There were 30 switches on the shock generator marked from 15 volts (slight shock) to 450 (danger – severe shock).


The learner gave mainly wrong answers (on purpose), and for each of these, the teacher gave him an electric shock. When the teacher refused to administer a shock, the experimenter was to give a series of orders/prods to ensure they continued.

There were four prods, and if one was not obeyed, then the experimenter (Mr. Williams) read out the next prod, and so on.

Prod 1: Please continue. Prod 2: The experiment requires that you continue. Prod 3: It is absolutely essential that you continue. Prod 4: You have no other choice, you must go on.

These prods were to be used in order, and the sequence began afresh for each new attempt at defiance (Milgram, 1974, p. 21). The experimenter also had two special prods available, which could be used as the situation required:

  • ‘Although the shocks may be painful, there is no permanent tissue damage, so please go on’ (ibid.).
  • ‘Whether the learner likes it or not, you must go on until he has learned all the word pairs correctly. So please go on’ (ibid., p. 22).

65% (two-thirds) of participants (i.e., teachers) continued to the highest level of 450 volts. All the participants continued to 300 volts.

Milgram did more than one experiment: he carried out 18 variations of his study. In each variation he altered one aspect of the situation (the independent variable, IV) to see how this affected obedience (the dependent variable, DV).

Conclusion 

An individual (dispositional) explanation for the participants’ behavior would be that something about them as people caused them to obey, but a more realistic explanation is that the situation they were in influenced them and caused them to behave in the way that they did.

Some aspects of the situation that may have influenced their behavior include the formality of the location, the behavior of the experimenter, and the fact that it was an experiment for which they had volunteered and been paid.

Ordinary people are likely to follow orders given by an authority figure, even to the extent of killing an innocent human being.  Obedience to authority is ingrained in us all from the way we are brought up.

People tend to obey orders from other people if they recognize their authority as morally right and/or legally based. This response to legitimate authority is learned in a variety of situations, for example in the family, school, and workplace.

Milgram summed up his findings in the article “The Perils of Obedience” (Milgram, 1974), writing:

“The legal and philosophic aspects of obedience are of enormous import, but they say very little about how most people behave in concrete situations. I set up a simple experiment at Yale University to test how much pain an ordinary citizen would inflict on another person simply because he was ordered to by an experimental scientist. Stark authority was pitted against the subjects’ [participants’] strongest moral imperatives against hurting others, and, with the subjects’ [participants’] ears ringing with the screams of the victims, authority won more often than not. The extreme willingness of adults to go to almost any lengths on the command of an authority constitutes the chief finding of the study and the fact most urgently demanding explanation.”

Milgram’s Agency Theory

Milgram (1974) explained the behavior of his participants by suggesting that people have two states of behavior when they are in a social situation:

  • The autonomous state – people direct their own actions, and they take responsibility for the results of those actions.
  • The agentic state – people allow others to direct their actions and then pass off the responsibility for the consequences to the person giving the orders. In other words, they act as agents for another person’s will.

Milgram suggested that two things must be in place for a person to enter the agentic state:

  • The person giving the orders is perceived as being qualified to direct other people’s behavior. That is, they are seen as legitimate.
  • The person being ordered about is able to believe that the authority will accept responsibility for what happens.
According to Milgram, when in this agentic state, the participant in the obedience studies “defines himself in a social situation in a manner that renders him open to regulation by a person of higher status. In this condition the individual no longer views himself as responsible for his own actions but defines himself as an instrument for carrying out the wishes of others” (Milgram, 1974, p. 134).

Agency theory says that people will obey an authority when they believe that the authority will take responsibility for the consequences of their actions. This is supported by some aspects of Milgram’s evidence.

For example, when participants were reminded that they had responsibility for their own actions, almost none of them were prepared to obey.

In contrast, many participants who had been refusing to go on continued when the experimenter said that he would take responsibility.

According to Milgram (1974, p. 188):

“The behavior revealed in the experiments reported here is normal human behavior but revealed under conditions that show with particular clarity the danger to human survival inherent in our make-up.

And what is it we have seen? Not aggression, for there is no anger, vindictiveness, or hatred in those who shocked the victim….

Something far more dangerous is revealed: the capacity for man to abandon his humanity, indeed, the inevitability that he does so, as he merges his unique personality into larger institutional structures.”

Milgram Experiment Variations

Milgram (1965) carried out many variations of the basic procedure, each time changing one aspect of the situation (the IV). By doing this, he could identify which factors affected obedience (the DV).

Obedience was measured by how many participants shocked to the maximum 450 volts (65% in the original study). Stanley Milgram conducted a total of 23 variations (also called conditions or experiments) of his original obedience study:

In total, 636 participants were tested in 18 variation studies conducted between 1961 and 1962 at Yale University.

In the original baseline study, the experimenter wore a gray lab coat to symbolize his authority (a kind of uniform).

The lab coat worn by the experimenter in the original study served as a crucial symbol of scientific authority that increased obedience. The lab coat conveyed expertise and legitimacy, making participants see the experimenter as more credible and trustworthy.

Milgram carried out a variation in which the experimenter was called away because of a phone call right at the start of the procedure.

The role of the experimenter was then taken over by an ‘ordinary member of the public’ (a confederate) in everyday clothes rather than a lab coat. The obedience level dropped to 20%.

Change of Location:  The Mountain View Facility Study (1963, unpublished)

Milgram conducted this variation in a set of offices in a rundown building, claiming it was associated with “Research Associates of Bridgeport” rather than Yale.

The lab’s ordinary appearance was designed to test whether Yale’s prestige encouraged obedience. Participants were led to believe that the experiment was being run by a private research firm.

In this non-university setting, obedience rates dropped to 47.5% compared to 65% in the original Yale experiments. This suggests that the status of location affects obedience.

Private research firms are viewed as less prestigious than certain universities, which affects behavior. It is easier under these conditions to abandon the belief in the experimenter’s essential decency.

The impressive university setting reinforced the experimenter’s authority and conveyed an implicit approval of the research.

Milgram filmed this variation for his documentary Obedience, but did not publish the results in his academic papers. The study only came to wider light when archival materials, including his notes, films, and data, were studied by later researchers such as Perry (2013) in the decades after Milgram’s death.

Two Teacher Condition

When participants could instruct an assistant (confederate) to press the switches, 92.5% shocked to the maximum of 450 volts.

Allowing the participant to instruct an assistant to press the shock switches diffused personal responsibility and likely reduced perceptions of causing direct harm.

By attributing the actions to the assistant rather than themselves, participants could more easily justify shocking to the maximum 450 volts, reflected in the 92.5% obedience rate.

When there is less personal responsibility, obedience increases. This relates to Milgram’s Agency Theory.

Touch Proximity Condition

The teacher had to force the learner’s hand down onto a shock plate when the learner refused to participate after 150 volts. Obedience fell to 30%.

Forcing the learner’s hand onto the shock plate after 150 volts physically connected the teacher to the consequences of their actions. This direct tactile feedback increased the teacher’s personal responsibility.

No longer shielded from the learner’s reactions, the proximity enabled participants to more clearly perceive the harm they were causing, reducing obedience to 30%. Physical distance and indirect actions in the original setup made it easier to rationalize obeying the experimenter.

The participant is no longer buffered/protected from seeing the consequences of their actions.

Social Support Condition

Two other participants (confederates) were also teachers but refused to obey. Confederate 1 stopped at 150 volts, and Confederate 2 stopped at 210 volts.

When the two confederates set an example of defiance by refusing to continue the shocks, especially early on at 150 volts, it permitted the real participant also to resist authority.

Their disobedience provided social proof that it was acceptable to disobey. This modeling of defiance lowered obedience to only 10% compared to 65% without such social support. It demonstrated that social modeling can validate challenging authority.

The presence of others who are seen to disobey the authority figure reduces the level of obedience to 10%.

Absent Experimenter Condition 

It is easier to resist the orders from an authority figure if they are not close by. When the experimenter instructed and prompted the teacher by telephone from another room, obedience fell to 20.5%.

Many participants cheated by skipping shocks or giving a lower voltage than the experimenter had ordered. The proximity of authority figures affects obedience.

The physical absence of the authority figure enabled participants to act more freely on their own moral inclinations rather than the experimenter’s commands. This highlighted the role of an authority’s direct presence in influencing behavior.


Critical Evaluation

Inaccurate description of the prod methodology:

A key reason the obedience studies fascinate people is Milgram (1974) presented them as a scientific experiment, contrasting himself as an “empirically grounded scientist” compared to philosophers. He claimed he systematically varied factors to alter obedience rates.

However, recent scholarship using archival records shows Milgram’s account of standardizing the procedure was misleading. For example, he published a list of standardized prods the experimenter used when participants questioned continuing. Milgram said these were delivered uniformly in a firm but polite tone (Gibson, 2013; Perry, 2013; Russell, 2010).

Perry’s (2013) archival research revealed another discrepancy between Milgram’s published account and the actual events. Milgram claimed standardized prods were used when participants resisted, but Perry’s audiotape analysis showed the experimenter often improvised more coercive prods beyond the supposed script.

This off-script prodding varied between experiments and participants, and was especially prevalent with female participants where no gender obedience difference was found – suggesting the improvisation influenced results. Gibson (2013) and Russell (2009) corroborated the experimenter’s departures from the supposed fixed prods. 

Prods were often combined or modified rather than used verbatim as published.

Russell speculated that the improvisation aimed to achieve the outcomes the experimenter believed Milgram wanted. Milgram seemed to tacitly approve of the deviations by not correcting them when he observed sessions.

This raises significant issues around experimenter bias influencing results, a lack of standardization compromising validity, and the ethics of Milgram misrepresenting his procedures. The point is not that Milgram did poor science, but that the archival materials (participant feedback, Milgram’s notes, and recordings of the sessions) reveal a fuller, messier picture than the tidy, “standardized” procedure described in textbook accounts.

Milgram’s experiment lacked external validity:

The Milgram studies were conducted in laboratory-type conditions, and we must ask if this tells us much about real-life situations.

We obey in a variety of real-life situations that are far more subtle than instructions to give people electric shocks, and it would be interesting to see what factors operate in everyday obedience. The sort of situation Milgram investigated would be more suited to a military context.

Orne and Holland (1968) accused Milgram’s study of lacking ‘experimental realism,’ i.e., participants might not have believed the experimental set-up they found themselves in and knew the learner wasn’t receiving electric shocks.

“It’s more truthful to say that only half of the people who undertook the experiment fully believed it was real, and of those, two-thirds disobeyed the experimenter,” observes Perry (2013, p. 139).

Milgram’s sample was biased:

  • The participants in Milgram’s study were all male. Do the findings transfer to females?
  • Milgram’s study cannot be seen as representative of the American population as his sample was self-selected: participants became involved only by electing to respond to a newspaper advertisement.
  • They may also have a typical “volunteer personality” – not all the newspaper readers responded so perhaps it takes this personality type to do so.

Yet a total of 636 participants were tested in 18 separate experiments across the New Haven area, which was seen as being reasonably representative of a typical American town.

Milgram’s findings have been replicated in a variety of cultures; most replications lead to the same conclusions as the original study, and some report even higher obedience rates.

However, Smith and Bond (1998) point out that with the exception of Jordan (Shanab & Yahya, 1978), the majority of these studies have been conducted in industrialized Western cultures, and we should be cautious before we conclude that a universal trait of social behavior has been identified.

Selective reporting of experimental findings:

Perry (2013) found Milgram omitted findings from some obedience experiments he conducted, reporting only results supporting his conclusions. A key omission was the Relationship condition (conducted in 1962 but unpublished), where participant pairs were relatives or close acquaintances.

When the learner protested being shocked, most teachers disobeyed, contradicting Milgram’s emphasis on obedience to authority.

Perry argued Milgram likely did not publish this 85% disobedience rate because it undermined his narrative and would be difficult to defend ethically since the teacher and learner knew each other closely.

Milgram’s selective reporting biased interpretations of his findings. His failure to publish all his experiments raises issues around researchers’ ethical obligation to completely and responsibly report their results, not just those fitting their expectations.

Unreported analysis of participants’ skepticism and its impact on their behavior:

Perry (2013) found archival evidence that many participants expressed doubt about the experiment’s setup, impacting their behavior. This supports Orne and Holland’s (1968) criticism that Milgram overlooked participants’ perceptions.

Incongruities like apparent danger, but an unconcerned experimenter likely cued participants that no real harm would occur. Trust in Yale’s ethics reinforced this. Yet Milgram did not publish his assistant’s analysis showing participant skepticism correlated with disobedience rates and varied by condition.

Obedient participants were more skeptical that the learner was harmed. This selective reporting biased interpretations. Additional unreported findings further challenge Milgram’s conclusions.

This highlights issues around thoroughly and responsibly reporting all results, not just those fitting expectations. It shows how archival evidence makes Milgram’s study a contentious classic with questionable methods and conclusions.

Ethical Issues

What are the potential ethical concerns associated with Milgram’s research on obedience?

While not a “contribution to psychology” in the traditional sense, Milgram’s obedience experiments sparked significant debate about the ethics of psychological research.

Baumrind (1964) criticized the ethics of Milgram’s research as participants were prevented from giving their informed consent to take part in the study. 

Participants assumed the experiment was benign and expected to be treated with dignity.

As a result of studies like Milgram’s, the APA and BPS now require researchers to give participants more information before they agree to take part in a study.

The participants actually believed they were shocking a real person and were unaware the learner was a confederate of Milgram’s.

However, Milgram argued that “illusion is used when necessary in order to set the stage for the revelation of certain difficult-to-get-at truths.”

Milgram also interviewed participants afterward to find out the effect of the deception. Apparently, 83.7% said that they were “glad to be in the experiment,” and 1.3% said that they wished they had not been involved.

Protection of participants 

Participants were exposed to extremely stressful situations that may have the potential to cause psychological harm. Many of the participants were visibly distressed (Baumrind, 1964).

Signs of tension included trembling, sweating, stuttering, laughing nervously, biting lips and digging fingernails into palms of hands. Three participants had uncontrollable seizures, and many pleaded to be allowed to stop the experiment.

Milgram described a businessman reduced to a “twitching, stuttering wreck” (1963, p. 377).

In his defense, Milgram argued that these effects were only short-term. Once the participants were debriefed (and could see the confederate was OK), their stress levels decreased.

“At no point,” Milgram (1964) stated, “were subjects exposed to danger and at no point did they run the risk of injurious effects resulting from participation” (p. 849).

To defend himself against criticisms about the ethics of his obedience research, Milgram cited follow-up survey data showing that 84% of participants said they were glad they had taken part in the study.

Milgram used this to claim that the study caused no serious or lasting harm, since most participants retrospectively did not regret their involvement.

Yet archival accounts show many participants endured lasting distress, even trauma, refuting Milgram’s insistence the study caused only fleeting “excitement.” By not debriefing all, Milgram misled participants about the true risks involved (Perry, 2013).

However, Milgram reported that he debriefed all participants straight after the experiment, disclosed the true nature of the study, and followed up after a period of time to ensure that they had come to no harm.

Participants were assured that their behavior was common, and Milgram also followed the sample up a year later and found no signs of any long-term psychological harm.

The majority of the participants (83.7%) said that they were pleased that they had participated, and 74% had learned something of personal importance.

Perry’s (2013) archival research found Milgram misrepresented debriefing – around 600 participants were not properly debriefed soon after the study, contrary to his claims. Many only learned no real shocks occurred when reading a mailed study report months later, which some may have not received.

Milgram likely misreported debriefing details to protect his credibility and enable future obedience research. This raises issues around properly informing and debriefing participants that connect to APA ethics codes developed partly in response to Milgram’s study.

Right to Withdraw

The BPS states that researchers should make it plain to participants that they are free to withdraw at any time (regardless of payment).

When participants expressed doubts, the experimenter assured them that all was well. Trusting Yale scientists, many took the experimenter at his word that “no permanent tissue damage” would occur and continued administering shocks despite their reservations.

Did Milgram give participants an opportunity to withdraw? The experimenter gave four verbal prods which mostly discouraged withdrawal from the experiment:

  • Please continue.
  • The experiment requires that you continue.
  • It is absolutely essential that you continue.
  • You have no other choice, you must go on.

Milgram argued that they were justified as the study was about obedience, so orders were necessary.

Milgram pointed out that although withdrawing was made difficult, it was possible: 35% of participants chose to withdraw.

Replications

Direct replications have not been possible due to current ethical standards. However, several researchers have conducted partial replications and variations that aim to reproduce some aspects of Milgram’s methods ethically.

One important replication was conducted by Jerry Burger in 2009. Burger’s partial replication included several safeguards to protect participant welfare, such as screening out high-risk individuals, repeatedly reminding participants they could withdraw, and stopping at the 150-volt shock level. This was the point where Milgram’s participants first heard the learner’s protests.

As 79% of Milgram’s participants who went past 150 volts continued to the maximum 450 volts, Burger (2009) argued that 150 volts provided a reasonable estimate for obedience levels. He found 70% of participants continued to 150 volts, compared to 82.5% in Milgram’s comparable condition.
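As a rough illustration of Burger’s reasoning (an extrapolation added here, not a figure Burger himself reports): if about 70% of participants reach 150 volts, and roughly 79% of those who pass that point historically continue to 450 volts, then an estimated 0.70 × 0.79 ≈ 0.55, or about 55%, would be expected to go all the way, in the same broad range as Milgram’s 65%.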

Another replication by Thomas Blass (1999) examined whether obedience rates had declined over time due to greater public awareness of the experiments. Blass correlated obedience rates from replication studies between 1963 and 1985 and found no relationship between year and obedience level. He concluded that obedience rates have not systematically changed, providing evidence against the idea of “enlightenment effects”.

Some variations have explored the role of gender. Milgram found equal rates of obedience for male and female participants. Reviews have found most replications also show no gender difference, with a couple of exceptions (Blass, 1999). For example, Kilham and Mann (1974) found lower obedience in female participants.

Partial replications have also examined situational factors. Having another person model defiance reduced obedience compared to a solo participant in one study, but did not eliminate it (Burger, 2009). The authority figure’s perceived expertise seems to be an influential factor (Blass, 1999). Replications have supported Milgram’s observation that stepwise increases in demands promote obedience.

Personality factors have been studied as well. Traits like high empathy and desire for control correlate with some minor early hesitation, but do not greatly impact eventual obedience levels (Burger, 2009). Authoritarian tendencies may contribute to obedience (Elms, 2009).

In sum, the partial replications confirm the high levels of obedience Milgram observed. Though ethical constraints prevent full reproductions, the key elements of his procedure seem to consistently elicit high levels of compliance across studies, samples, and eras. The replications continue to highlight the power of situational pressures to yield obedience.


Why was the Milgram experiment so controversial?

The Milgram experiment was controversial because it revealed people’s willingness to obey authority figures even when causing harm to others, raising ethical concerns about the psychological distress inflicted upon participants and the deception involved in the study.

Would Milgram’s experiment be allowed today?

Milgram’s experiment would likely not be allowed today in its original form, as it violates modern ethical guidelines for research involving human participants, particularly regarding informed consent, deception, and protection from psychological harm.

Did anyone refuse the Milgram experiment?

Yes, in the Milgram experiment, some participants refused to continue administering shocks, demonstrating individual variation in obedience to authority figures. In the original Milgram experiment, approximately 35% of participants refused to administer the highest shock level of 450 volts, while 65% obeyed and delivered the 450-volt shock.

How can Milgram’s study be applied to real life?

Milgram’s study can be applied to real life by demonstrating the potential for ordinary individuals to obey authority figures even when it involves causing harm, emphasizing the importance of questioning authority, ethical decision-making, and fostering critical thinking in societal contexts.

Were all participants in Milgram’s experiments male?

Yes, in the original Milgram experiment conducted in 1961, all participants were male, limiting the generalizability of the findings to women and diverse populations.

Why was the Milgram experiment unethical?

The Milgram experiment was considered unethical because participants were deceived about the true nature of the study and subjected to severe emotional distress. They believed they were causing harm to another person under the instruction of authority.

Additionally, participants were not given the right to withdraw freely and were subjected to intense pressure to continue. The psychological harm and lack of informed consent violates modern ethical guidelines for research.

Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral Study of Obedience.” American Psychologist, 19(6), 421.

Blass, T. (1999). The Milgram paradigm after 35 years: Some things we now know about obedience to authority. Journal of Applied Social Psychology, 29(5), 955-978.

Brannigan, A., Nicholson, I., & Cherry, F. (2015). Introduction to the special issue: Unplugging the Milgram machine.  Theory & Psychology ,  25 (5), 551-563.

Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64 , 1–11.

Elms, A. C. (2009). Obedience lite. American Psychologist, 64 (1), 32–36.

Gibson, S. (2013). Milgram’s obedience experiments: A rhetorical analysis. British Journal of Social Psychology, 52, 290–309.

Gibson, S. (2017). Developing psychology’s archival sensibilities: Revisiting Milgram’s ‘obedience’ experiments. Qualitative Psychology, 4(1), 73.

Griggs, R. A., Blyler, J., & Jackson, S. L. (2020). Using research ethics as a springboard for teaching Milgram’s obedience study as a contentious classic.  Scholarship of Teaching and Learning in Psychology ,  6 (4), 350.

Haslam, S. A., & Reicher, S. D. (2018). A truth that does not always speak its name: How Hollander and Turowetz’s findings confirm and extend the engaged followership analysis of harm-doing in the Milgram paradigm. British Journal of Social Psychology, 57, 292–300.

Haslam, S. A., Reicher, S. D., & Birney, M. E. (2016). Questioning authority: New perspectives on Milgram’s ‘obedience’ research and its implications for intergroup relations. Current Opinion in Psychology, 11 , 6–9.

Haslam, S. A., Reicher, S. D., Birney, M. E., Millard, K., & McDonald, R. (2015). ‘Happy to have been of service’: The Yale archive as a window into the engaged followership of participants in Milgram’s ‘obedience’ experiment. British Journal of Social Psychology, 54 , 55–83.

Kaplan, D. E. (1996). The Stanley Milgram papers: A case study on appraisal of and access to confidential data files. American Archivist, 59 , 288–297.

Kaposi, D. (2022). The second wave of critical engagement with Stanley Milgram’s ‘obedience to authority’ experiments: What did we learn? Social and Personality Psychology Compass, 16(6), e12667.

Kilham, W., & Mann, L. (1974). Level of destructive obedience as a function of transmitter and executant roles in the Milgram obedience paradigm. Journal of Personality and Social Psychology, 29 (5), 696–702.

Milgram, S. (1963). Behavioral study of obedience . Journal of Abnormal and Social Psychology , 67, 371-378.

Milgram, S. (1964). Issues in the study of obedience: A reply to Baumrind. American Psychologist, 19 , 848–852.

Milgram, S. (1965). Some conditions of obedience and disobedience to authority . Human Relations, 18(1) , 57-76.

Milgram, S. (1974). Obedience to authority: An experimental view . Harpercollins.

Miller, A. G. (2009). Reflections on “Replicating Milgram” (Burger, 2009). American Psychologist, 64(1), 20-27.

Nicholson, I. (2011). “Torture at Yale”: Experimental subjects, laboratory torment and the “rehabilitation” of Milgram’s “obedience to authority”. Theory & Psychology, 21 , 737–761.

Nicholson, I. (2015). The normalization of torment: Producing and managing anguish in Milgram’s “obedience” laboratory. Theory & Psychology, 25 , 639–656.

Orne, M. T., & Holland, C. H. (1968). On the ecological validity of laboratory deceptions. International Journal of Psychiatry, 6 (4), 282-293.


Perry, G. (2013). Behind the shock machine: The untold story of the notorious Milgram psychology experiments . New York, NY: The New Press.

Reicher, S., Haslam, A., & Miller, A. (Eds.). (2014). Milgram at 50: Exploring the enduring relevance of psychology’s most famous studies [Special issue]. Journal of Social Issues, 70 (3), 393–602

Russell, N. (2014). Stanley Milgram’s obedience to authority “relationship condition”: Some methodological and theoretical implications. Social Sciences, 3, 194–214

Shanab, M. E., & Yahya, K. A. (1978). A cross-cultural study of obedience. Bulletin of the Psychonomic Society .

Smith, P. B., & Bond, M. H. (1998). Social psychology across cultures (2nd Edition) . Prentice Hall.

Further Reading

  • The power of the situation: The impact of Milgram’s obedience studies on personality and social psychology
  • Seeing is believing: The role of the film Obedience in shaping perceptions of Milgram’s Obedience to Authority Experiments
  • Replicating Milgram: Would people still obey today?

Learning Check

Which is true regarding the Milgram obedience study?
  • The aim was to see how obedient people would be in a situation where following orders would mean causing harm to another person.
  • Participants were under the impression they were part of a learning and memory experiment.
  • The “learners” in the study were actual participants who volunteered to be shocked as part of the experiment.
  • The “learner” was an actor who was in on the experiment and never actually received any real shocks.
  • Although the participant could not see the “learner”, he was able to hear him clearly through the wall
  • The study was directly influenced by Milgram’s observations of obedience patterns in post-war Europe.
  • The experiment was designed to understand the psychological mechanisms behind war crimes committed during World War II.
  • The Milgram study was universally accepted in the psychological community, and no ethical concerns were raised about its methodology.
  • When Milgram’s experiment was repeated in a rundown office building in Bridgeport, the percentage of the participants who fully complied with the commands of the experimenter remained unchanged.
  • The experimenter (authority figure) delivered verbal prods to encourage the teacher to continue, such as ‘Please continue’ or ‘Please go on’.
  • Over 80% of participants went on to deliver the maximum level of shock.
  • Milgram sent participants questionnaires after the study to assess the effects and found that most felt no remorse or guilt, so it was ethical.
  • The aftermath of the study led to stricter ethical guidelines in psychological research.
  • The study emphasized the role of situational factors over personality traits in determining obedience.

Answers : Items 3, 8, 9, and 11 are the false statements.

Short Answer Questions
  • Briefly explain the results of the original Milgram experiments. What did these results prove?
  • List one scenario on how an authority figure can abuse obedience principles.
  • List one scenario on how an individual could use these principles to defend their fellow peers.
  • In a hospital, you are very likely to obey a nurse. However, if you meet her outside the hospital, for example in a shop, you are much less likely to obey. Using your knowledge of how people resist pressure to obey, explain why you are less likely to obey the nurse outside the hospital.
  • Describe the shock instructions the participant (teacher) was told to follow when the victim (learner) gave an incorrect answer.
  • State the lowest voltage shock that was labeled on the shock generator.
  • What would likely happen if Milgram’s experiment included a condition in which the participant (teacher) had to give a high-level electric shock for the first wrong answer?
Group Activity

Gather in groups of three or four to discuss answers to the short answer questions above.

For question 2, review the different scenarios you each came up with. Then brainstorm on how these situations could be flipped.

For question 2, discuss how an authority figure could instead empower those below them in the examples your groupmates provide.

For question 3, discuss how a peer could do harm by using the obedience principles in the scenarios your groupmates provide.

Essay Topic
  • What’s the most important lesson of Milgram’s Obedience Experiments? Fully explain and defend your answer.
  • Milgram selectively edited his film of the obedience experiments to emphasize obedient behavior and minimize footage of disobedience. What are the ethical implications of a researcher selectively presenting findings in a way that fits their expected conclusions?


Understanding the Milgram Experiment in Psychology

A closer look at Milgram's controversial studies of obedience


How far do you think people would go to obey an authority figure? Would they refuse to obey if the order went against their values or social expectations? Those questions were at the heart of an infamous and controversial study known as the Milgram obedience experiments.

Yale University psychologist Stanley Milgram conducted these experiments during the 1960s. They explored the effects of authority on obedience. In the experiments, an authority figure ordered participants to deliver what they believed were dangerous electrical shocks to another person. These results suggested that people are highly influenced by authority and highly obedient. More recent investigations cast doubt on some of the implications of Milgram's findings and even the results and procedures themselves. Despite its problems, the study has, without question, made a significant impact on psychology.

At a Glance

Milgram's experiments posed the question: Would people obey orders, even if they believed doing so would harm another person? Milgram's findings suggested the answer was yes, they would. The experiments have long been controversial, both because of the startling findings and the ethical problems with the research. More recently, experts have re-examined the studies, suggesting that participants were often coerced into obeying and that at least some participants recognized that the other person was just pretending to be shocked. Such findings call into question the study's validity and authenticity, but some replications suggest that people are surprisingly prone to obeying authority.

History of the Milgram Experiments

Milgram started his experiments in 1961, shortly after the trial of the World War II war criminal Adolf Eichmann had begun. Eichmann’s defense, that he was merely following instructions when he ordered the deaths of millions of Jews, roused Milgram’s interest.

In his 1974 book "Obedience to Authority," Milgram posed the question, "Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?"

Procedure in the Milgram Experiment

The participants in the most famous variation of the Milgram experiment were 40 men recruited using newspaper ads. In exchange for their participation, each person was paid $4.50.

Milgram developed an intimidating shock generator, with shock levels starting at 15 volts and increasing in 15-volt increments all the way up to 450 volts. The many switches were labeled with terms including "slight shock," "moderate shock," and "danger: severe shock." The final three switches were labeled simply with an ominous "XXX."

Each participant took the role of a "teacher" who would then deliver a shock to the "student" in a neighboring room whenever an incorrect answer was given. While participants believed that they were delivering real shocks to the student, the “student” was a confederate in the experiment who was only pretending to be shocked.

As the experiment progressed, the participant would hear the learner plead to be released or even complain about a heart condition. Once they reached the 300-volt level, the learner would bang on the wall and demand to be released.

Beyond this point, the learner became completely silent and refused to answer any more questions. The experimenter then instructed the participant to treat this silence as an incorrect response and deliver a further shock.

Most participants asked the experimenter whether they should continue. The experimenter then responded with a series of commands to prod the participant along:

  • "Please continue."
  • "The experiment requires that you continue."
  • "It is absolutely essential that you continue."
  • "You have no other choice; you must go on."

Results of the Milgram Experiment

In the Milgram experiment, obedience was measured by the level of shock that the participant was willing to deliver. While many of the subjects became extremely agitated, distraught, and angry at the experimenter, they nevertheless continued to follow orders all the way to the end.

Milgram's results showed that 65% of the participants in the study delivered the maximum shocks. Of the 40 participants in the study, 26 delivered the maximum shocks, while 14 stopped before reaching the highest levels.

Why did so many of the participants in this experiment perform a seemingly brutal act when instructed by an authority figure? According to Milgram, there are some situational factors that can explain such high levels of obedience:

  • The physical presence of an authority figure dramatically increased compliance.
  • The fact that Yale (a trusted and authoritative academic institution) sponsored the study led many participants to believe that the experiment must be safe.
  • The selection of teacher and learner status seemed random.
  • Participants assumed that the experimenter was a competent expert.
  • The shocks were said to be painful, not dangerous.

Later experiments conducted by Milgram indicated that the presence of rebellious peers dramatically reduced obedience levels. When other people refused to go along with the experimenter's orders, 36 out of 40 participants refused to deliver the maximum shocks.

More recent work by researchers suggests that while people do tend to obey authority figures, the process is not necessarily as cut-and-dried as Milgram depicted it.

In a 2012 essay published in PLoS Biology, researchers suggested that the degree to which people are willing to obey the questionable orders of an authority figure depends largely on two key factors:

  • How much the individual agrees with the orders
  • How much they identify with the person giving the orders

While it is clear that people are often far more susceptible to influence, persuasion, and obedience than they would like to believe, they are far from mindless machines just taking orders.

Another study that analyzed Milgram's results concluded that eight factors influenced the likelihood that people would progress up to the 450-volt shock:

  • The experimenter's directiveness
  • Legitimacy and consistency
  • Group pressure to disobey
  • Indirectness of proximity
  • Intimacy of the relation between the teacher and learner
  • Distance between the teacher and learner

Ethical Concerns in the Milgram Experiment

Milgram's experiments have long been the source of considerable criticism and controversy. From the get-go, the ethics of his experiments were highly dubious. Participants were subjected to significant psychological and emotional distress.

Some of the major ethical issues in the experiment were related to:

  • The use of deception
  • The lack of protection for the participants who were involved
  • Pressure from the experimenter to continue even after asking to stop, interfering with participants' right to withdraw

Due to concerns about the amount of anxiety experienced by many of the participants, everyone was supposedly debriefed at the end of the experiment. The researchers reported that they explained the procedures and the use of deception.

Critics of the study have argued that many of the participants were still confused about the exact nature of the experiment, and recent findings suggest that many participants were not debriefed at all.

Replications of the Milgram Experiment

While Milgram’s research raised serious ethical questions about the use of human subjects in psychology experiments, his results have also been consistently replicated in further experiments. One review of further research on obedience found that Milgram’s findings hold true in other experiments. In one such study, researchers designed a partial replication of Milgram's classic obedience experiment with several alterations:

  • The maximum shock level was 150 volts as opposed to the original 450 volts.
  • Participants were also carefully screened to eliminate those who might experience adverse reactions to the experiment.

The results of the new experiment revealed that participants obeyed at roughly the same rate that they did when Milgram conducted his original study more than 40 years ago.

Some psychologists suggested that in spite of the changes made in the replication, the study still had merit and could be used to further explore some of the situational factors that also influenced the results of Milgram's study. But other psychologists suggested that the replication was too dissimilar to Milgram's original study to draw any meaningful comparisons.

One study examined people's beliefs about how they would do compared to the participants in Milgram's experiments. They found that most people believed they would stop sooner than the average participants. These findings applied to both those who had never heard of Milgram's experiments and those who were familiar with them. In fact, those who knew about Milgram's experiments actually believed that they would stop even sooner than other people.

Another novel replication involved recruiting participants in pairs and having them take turns acting as either an 'agent' or 'victim.' Agents then received orders to shock the victim. The results suggest that only around 3.3% disobeyed the experimenter's orders.

Recent Criticisms and New Findings

Psychologist Gina Perry suggests that much of what we think we know about Milgram's famous experiments is only part of the story. While researching an article on the topic, she stumbled across hundreds of audiotapes found in Yale archives that documented numerous variations of Milgram's shock experiments.

Participants Were Often Coerced

While Milgram's published accounts describe methodical and uniform procedures, the audiotapes reveal something different. During the experimental sessions, the experimenters often went off-script and coerced the subjects into continuing the shocks.

"The slavish obedience to authority we have come to associate with Milgram’s experiments comes to sound much more like bullying and coercion when you listen to these recordings," Perry suggested in an article for Discover Magazine .

Few Participants Were Really Debriefed

Milgram suggested that the subjects were "de-hoaxed" after the experiments. He claimed he later surveyed the participants and found that 84% were glad to have participated, while only 1% regretted their involvement.

However, Perry's findings revealed that of the 700 or so people who took part in different variations of his studies between 1961 and 1962, very few were truly debriefed.

A true debriefing would have involved explaining that the shocks weren't real and that the other person was not injured. Instead, Milgram's sessions were mainly focused on calming the subjects down before sending them on their way.

Many participants left the experiment in a state of considerable distress. While the truth was revealed to some months or even years later, many were simply never told a thing.

Variations Led to Differing Results

Another problem is that the version of the study presented by Milgram and the one that's most often retold does not tell the whole story. The statistic that 65% of people obeyed orders applied only to one variation of the experiment, in which 26 out of 40 subjects obeyed.

In other variations, far fewer people were willing to follow the experimenters' orders, and in some versions of the study, not a single participant obeyed.

Participants Guessed the Learner Was Faking

Perry even tracked down some of the people who took part in the experiments, as well as Milgram's research assistants. What she discovered is that many of his subjects had deduced what Milgram's intent was and knew that the "learner" was merely pretending.

Such findings cast Milgram's results in a new light. It suggests that not only did Milgram intentionally engage in some hefty misdirection to obtain the results he wanted but that many of his participants were simply playing along.

An analysis of an unpublished study by Milgram's assistant, Taketo Murata, found that participants who believed they were really delivering a shock were less likely to obey, while those who did not believe they were actually inflicting pain were more willing to obey. In other words, the perception of pain increased defiance, while skepticism of pain increased obedience.

A review of Milgram's research materials suggests that the experiments exerted more pressure to obey than the original results suggested. Other variations of the experiment revealed much lower rates of obedience, and many of the participants actually altered their behavior when they guessed the true nature of the experiment.

Impact of the Milgram Experiment

Since there is no way to truly replicate the experiment due to its serious ethical and moral problems, it is impossible to determine whether Milgram's experiment really tells us anything about the power of obedience.

So why does Milgram's experiment maintain such a powerful hold on our imaginations, even decades after the fact? Perry believes that despite all its ethical issues and the problem of never truly being able to replicate Milgram's procedures, the study has taken on the role of what she calls a "powerful parable."

Milgram's work might not hold the answers to what makes people obey or even the degree to which they truly obey. It has, however, inspired other researchers to explore what makes people follow orders and, perhaps more importantly, what leads them to question authority.

Recent findings undermine the scientific validity of the study. Milgram's work is also not truly replicable due to its ethical problems. However, the study has led to additional research on how situational factors can affect obedience to authority.

Milgram’s experiment has become a classic in psychology , demonstrating the dangers of obedience. The research suggests that situational variables have a stronger sway than personality factors in determining whether people will obey an authority figure. However, other psychologists argue that both external and internal factors heavily influence obedience, such as personal beliefs and overall temperament.

Milgram S. Obedience to Authority: An Experimental View. Harper & Row; 1974.

Russell N, Gregory R. The Milgram-Holocaust linkage: challenging the present consensus. State Crim J. 2015;4(2):128-153.

Russell NJC. Milgram's obedience to authority experiments: origins and early evolution. Br J Soc Psychol. 2011;50:140-162. doi:10.1348/014466610X492205

Haslam SA, Reicher SD. Contesting the "nature" of conformity: What Milgram and Zimbardo's studies really show. PLoS Biol. 2012;10(11):e1001426. doi:10.1371/journal.pbio.1001426

Milgram S. Liberating effects of group pressure. J Pers Soc Psychol. 1965;1(2):127-134. doi:10.1037/h0021650

Haslam N, Loughnan S, Perry G. Meta-Milgram: an empirical synthesis of the obedience experiments. PLoS One. 2014;9(4):e93927. doi:10.1371/journal.pone.0093927

Perry G. Deception and illusion in Milgram's accounts of the obedience experiments. Theory Appl Ethics. 2013;2(2):79-92.

Blass T. The Milgram paradigm after 35 years: some things we now know about obedience to authority. J Appl Soc Psychol. 1999;29(5):955-978. doi:10.1111/j.1559-1816.1999.tb00134.x

Burger J. Replicating Milgram: Would people still obey today? Am Psychol. 2009;64(1):1-11. doi:10.1037/a0010932

Elms AC. Obedience lite. Am Psychol. 2009;64(1):32-36. doi:10.1037/a0014473

Miller AG. Reflections on "replicating Milgram" (Burger, 2009). Am Psychol. 2009;64(1):20-27. doi:10.1037/a0014407

Grzyb T, Dolinski D. Beliefs about obedience levels in studies conducted within the Milgram paradigm: Better than average effect and comparisons of typical behaviors by residents of various nations. Front Psychol. 2017;8:1632. doi:10.3389/fpsyg.2017.01632

Caspar EA. A novel experimental approach to study disobedience to authority. Sci Rep. 2021;11(1):22927. doi:10.1038/s41598-021-02334-8

Haslam SA, Reicher SD, Millard K, McDonald R. 'Happy to have been of service': The Yale archive as a window into the engaged followership of participants in Milgram's 'obedience' experiments. Br J Soc Psychol. 2015;54:55-83. doi:10.1111/bjso.12074

Perry G, Brannigan A, Wanner RA, Stam H. Credibility and incredulity in Milgram's obedience experiments: A reanalysis of an unpublished test. Soc Psychol Q. 2020;83(1):88-106. doi:10.1177/0190272519861952

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

The Milgram Experiment: How Far Will You Go to Obey an Order?

Understand the infamous study and its conclusions about human nature


In brief: in the 1960s, psychologist Stanley Milgram conducted a series of studies on obedience and authority. His experiments involved instructing study participants to deliver increasingly high-voltage shocks to an actor in another room, who would scream and eventually go silent as the shocks became stronger. The shocks weren't real, but study participants were made to believe that they were.

Today, the Milgram experiment is widely criticized on both ethical and scientific grounds. However, Milgram's conclusions about humanity's willingness to obey authority figures remain influential and well-known.

Key Takeaways: The Milgram Experiment

  • The goal of the Milgram experiment was to test the extent of humans' willingness to obey orders from an authority figure.
  • Participants were told by an experimenter to administer increasingly powerful electric shocks to another individual. Unbeknownst to the participants, the shocks were fake and the individual being shocked was an actor.
  • The majority of participants obeyed, even when the individual being shocked screamed in pain.
  • The experiment has been widely criticized on ethical and scientific grounds.

A Detailed Summary of Milgram's Experiment

In the most well-known version of the Milgram experiment, the 40 male participants were told that the experiment focused on the relationship between punishment, learning, and memory. The experimenter then introduced each participant to a second individual, explaining that this second individual was participating in the study as well. Participants were told that they would be randomly assigned to roles of "teacher" and "learner." However, the "second individual" was an actor hired by the research team, and the study was set up so that the true participant would always be assigned to the "teacher" role.

During the Milgram experiment, the learner was located in a separate room from the teacher (the real participant), but the teacher could hear the learner through the wall. The experimenter told the teacher that the learner would memorize word pairs and instructed the teacher to ask the learner questions. If the learner responded incorrectly to a question, the teacher would be asked to administer an electric shock. The shocks started at a relatively mild level (15 volts) but increased in 15-volt increments up to 450 volts. (In actuality, the shocks were fake, but the participant was led to believe they were real.)

Participants were instructed to give a higher shock to the learner with each wrong answer. When the 150-volt shock was administered, the learner would cry out in pain and ask to leave the study. He would then continue crying out with each shock until the 330-volt level, at which point he would stop responding.

During this process, whenever participants expressed hesitation about continuing with the study, the experimenter would urge them to go on with increasingly firm instructions, culminating in the statement, "You have no other choice, you must go on." The study ended when participants refused to obey the experimenter’s demand, or when they gave the learner the highest level of shock on the machine (450 volts).

Milgram found that participants obeyed the experimenter at an unexpectedly high rate: 65% of the participants gave the learner the 450-volt shock.

Critiques of the Milgram Experiment

The Milgram experiment has been widely criticized on ethical grounds. Milgram's participants were led to believe that they had acted in a way that harmed someone else, an experience that could have had long-term consequences. Moreover, an investigation by writer Gina Perry found that some participants appear not to have been fully debriefed after the study: they were told months later, or not at all, that the shocks were fake and the learner wasn't harmed. Milgram's studies could not be perfectly recreated today, because researchers are now required to pay much more attention to the safety and well-being of human research subjects.

Researchers have also questioned the scientific validity of Milgram’s results. In her examination of the study, Perry found that Milgram’s experimenter may have gone off script and told participants to obey many more times than the script specified. Additionally, some research suggests that participants may have figured out that the learner was not harmed: in interviews conducted after the Milgram experiment, some participants reported that they didn’t think the learner was in any real danger. This mindset is likely to have affected their behavior in the study.

Variations on the Milgram Experiment

Milgram and other researchers conducted numerous versions of the experiment over time. The participants' levels of compliance with the experimenter’s demands varied greatly from one study to the next. For example, when participants were in closer proximity to the learner (e.g. in the same room), they were less likely to give the learner the highest level of shock.

Another version of the Milgram experiment brought three "teachers" into the experiment room at once. One was a real participant, and the other two were actors hired by the research team. During the experiment, the two non-participant teachers would quit as the level of shocks began to increase. Milgram found that these conditions made the real participant far more likely to "disobey" the experimenter, too: only 10% of participants gave the 450-volt shock to the learner.

In yet another version of the Milgram experiment, two experimenters were present, and during the experiment, they would begin arguing with one another about whether it was right to continue the study. In this version, none of the participants gave the learner the 450-volt shock.

Replicating the Milgram Experiment

Researchers have sought to replicate Milgram's original study with additional safeguards in place to protect participants. In 2009, Jerry Burger replicated Milgram’s famous experiment at Santa Clara University with new safeguards in place: the highest shock level was 150 volts, and participants were told that the shocks were fake immediately after the experiment ended. Additionally, participants were screened by a clinical psychologist before the experiment began, and those found to be at risk of a negative reaction to the study were deemed ineligible to participate.

Burger found that participants obeyed at similar levels as Milgram’s participants: 82.5% of Milgram’s participants gave the learner the 150-volt shock, and 70% of Burger’s participants did the same.

The Legacy of the Milgram Experiment

Milgram’s interpretation of his research was that everyday people are capable of carrying out unthinkable actions in certain circumstances. His research has been used to explain atrocities such as the Holocaust and the Rwandan genocide, though these applications are by no means widely accepted or agreed upon.

Importantly, not all participants obeyed the experimenter's demands, and Milgram's studies shed light on the factors that enable people to stand up to authority. In fact, as sociologist Matthew Hollander writes, we may be able to learn from the participants who disobeyed, as their strategies may enable us to respond more effectively to an unethical situation. The Milgram experiment suggested that human beings are susceptible to obeying authority, but it also demonstrated that obedience is not inevitable.

  • Baker, Peter C. "Electric Schlock: Did Stanley Milgram's Famous Obedience Experiments Prove Anything?" Pacific Standard (2013, Sep. 10). https://psmag.com/social-justice/electric-schlock-65377
  • Burger, Jerry M. "Replicating Milgram: Would People Still Obey Today?" American Psychologist 64.1 (2009): 1-11. http://psycnet.apa.org/buy/2008-19206-001
  • Gilovich, Thomas, Dacher Keltner, and Richard E. Nisbett. Social Psychology. 1st edition, W.W. Norton & Company, 2006.
  • Hollander, Matthew. "How to Be a Hero: Insight From the Milgram Experiment." HuffPost Contributor Network (2015, Apr. 29). https://www.huffingtonpost.com/entry/how-to-be-a-hero-insight-_b_6566882
  • Jarrett, Christian. "New Analysis Suggests Most Milgram Participants Realised the 'Obedience Experiments' Were Not Really Dangerous." The British Psychological Society: Research Digest (2017, Dec. 12). https://digest.bps.org.uk/2017/12/12/interviews-with-milgram-participants-provide-little-support-for-the-contemporary-theory-of-engaged-followership/
  • Perry, Gina. "The Shocking Truth of the Notorious Milgram Obedience Experiments." Discover Magazine Blogs (2013, Oct. 2). http://blogs.discovermagazine.com/crux/2013/10/02/the-shocking-truth-of-the-notorious-milgram-obedience-experiments/
  • Romm, Cari. "Rethinking One of Psychology's Most Infamous Experiments." The Atlantic (2015, Jan. 28). https://www.theatlantic.com/health/archive/2015/01/rethinking-one-of-psychologys-most-infamous-experiments/384913/

Author Interviews

Taking a Closer Look at Milgram's Shocking Obedience Study


In the early 1960s, Stanley Milgram, a social psychologist at Yale, conducted a series of experiments that became famous. Unsuspecting Americans were recruited for what purportedly was an experiment in learning. A man who pretended to be a recruit himself was wired up to a phony machine that supposedly administered shocks. He was the "learner." In some versions of the experiment he was in an adjoining room.

The unsuspecting subject of the experiment, the "teacher," read lists of words that tested the learner's memory. Each time the learner got one wrong, which he intentionally did, the teacher was instructed by a man in a white lab coat to deliver a shock. With each wrong answer the voltage went up. From the other room came recorded and convincing protests from the learner — even though no shock was actually being administered.

The results of Milgram's experiment made news and contributed a dismaying piece of wisdom to the public at large: It was reported that almost two-thirds of the subjects were capable of delivering painful, possibly lethal shocks, if told to do so. We are as obedient as Nazi functionaries.

Or are we? Gina Perry, a psychologist from Australia, has written Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments . She has been retracing Milgram's steps, interviewing his subjects decades later.

"The thought of quitting never ... occurred to me," study participant Bill Menold told Perry in an Australian radio documentary . "Just to say: 'You know what? I'm walking out of here' — which I could have done. It was like being in a situation that you never thought you would be in, not really being able to think clearly."

In his experiments, Milgram was "looking to investigate what it was that had contributed to the brainwashing of American prisoners of war by the Chinese [in the Korean war]," Perry tells NPR's Robert Siegel.

Interview Highlights

On turning from an admirer of Milgram to a critic

"That was an unexpected outcome for me, really. I regarded Stanley Milgram as a misunderstood genius who'd been penalized in some ways for revealing something troubling and profound about human nature. By the end of my research I actually had quite a very different view of the man and the research."


On the many variations of the experiment

"Over 700 people took part in the experiments. When the news of the experiment was first reported, and the shocking statistic that 65 percent of people went to maximum voltage on the shock machine was reported, very few people, I think, realized then and even realize today that that statistic applied to 26 of 40 people. Of those other 700-odd people, obedience rates varied enormously. In fact, there were variations of the experiment where no one obeyed."

On how Milgram's study coincided with the trial of Nazi officer Adolf Eichmann — and how the experiment reinforced what Hannah Arendt described as "the banality of evil"

"The Eichmann trial was a televised trial and it did reintroduce the whole idea of the Holocaust to a new American public. And Milgram very much, I think, believed that Hannah Arendt's view of Eichmann as a cog in a bureaucratic machine was something that was just as applicable to Americans in New Haven as it was to people in Germany."

On the ethics of working with human subjects

"Certainly for people in academia and scholars the ethical issues involved in Milgram's experiment have always been a hot issue. They were from the very beginning. And Milgram's experiment really ignited a debate particularly in social sciences about what was acceptable to put human subjects through."


Gina Perry is an Australian psychologist. She has previously written for The Age and The Australian. (Photo: Chris Beck/Courtesy of The New Press)

On conversations with the subjects, decades after the experiment

"[Bill Menold] doesn't sound resentful. I'd say he sounds thoughtful and he has reflected a lot on the experiment and the impact that it's had on him and what it meant at the time. I did interview someone else who had been disobedient in the experiment but still very much resented 50 years later that he'd never been de-hoaxed at the time and he found that really unacceptable."

On the problem that one of social psychology's most famous findings cannot be replicated

"I think it leaves social psychology in a difficult situation. ... it is such an iconic experiment. And I think it really leads to the question of why it is that we continue to refer to and believe in Milgram's results. I think the reason that Milgram's experiment is still so famous today is because in a way it's like a powerful parable. It's so widely known and so often quoted that it's taken on a life of its own. ... This experiment and this story about ourselves plays some role for us 50 years later."


Encyclopedia Britannica

Milgram experiment


Milgram experiment, controversial series of experiments examining obedience to authority conducted by social psychologist Stanley Milgram. In the experiment, an authority figure, the conductor of the experiment, would instruct a volunteer participant, labeled the “teacher,” to administer painful, even dangerous, electric shocks to the “learner,” who was actually an actor. Although the shocks were faked, the experiments are widely considered unethical today due to the lack of proper disclosure, informed consent, and subsequent debriefing related to the deception and trauma experienced by the teachers. Some of Milgram’s conclusions have been called into question. Nevertheless, the experiments and their results have been widely cited for their insight into how average people respond to authority.

Milgram conducted his experiments as an assistant professor at Yale University in the early 1960s. In 1961 he began to recruit men from New Haven, Connecticut, for participation in a study he claimed would be focused on memory and learning. The recruits were paid $4.50 at the beginning of the study and were generally between the ages of 20 and 50 and from a variety of employment backgrounds. When they volunteered, they were told that the experiment would test the effect of punishment on learning ability. In truth, the volunteers were the subjects of an experiment on obedience to authority. In all, about 780 people, only about 40 of them women, participated in the experiments, and Milgram published his results in 1963.


Volunteers were told that they would be randomly assigned either a “teacher” or “learner” role, with each teacher administering electric shocks to a learner in another room if the learner failed to answer questions correctly. In actuality, the random draw was fixed so that all the volunteer participants were assigned to the teacher role and the actors were assigned to the learner role. The teachers were then instructed in the electroshock “punishment” they would be administering, with 30 shock levels ranging from 15 to 450 volts. The different shock levels were labeled with descriptions of their effects, such as “Slight Shock,” “Intense Shock,” and “Danger: Severe Shock,” with the final label a grim “XXX.” Each teacher was given a 45-volt shock themselves so that they would better understand the punishment they believed the learner would be receiving. Teachers were then given a series of questions for the learner to answer, with each incorrect answer generally earning the learner a progressively stronger shock. The actor portraying the learner, who was seated out of sight of the teacher, had pre-recorded responses to these shocks that ranged from grunts of pain to screaming and pleading, claims of suffering a heart condition, and eventually dead silence. The experimenter, acting as an authority figure, would encourage the teachers to continue administering shocks, telling them with scripted responses that the experiment must continue despite the reactions of the learner. The infamous result of these experiments was that a disturbingly high number of the teachers were willing to proceed to the maximum voltage level, despite the pleas of the learner and the supposed danger of proceeding.
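
As a concrete restatement of the scale, the short sketch below (Python) enumerates the 30 switch positions in 15-volt steps. The voltage ranges attached to each verbal label are assumptions for illustration only, since the text gives just a few of the labels and their approximate positions.

```python
# Illustrative sketch of the shock generator scale described above:
# 30 switches from 15 V to 450 V in 15-volt increments.
levels = [15 * i for i in range(1, 31)]   # 15, 30, ..., 450


def label(volts: int) -> str:
    """Rough verbal label per switch; the range boundaries are assumed, not sourced."""
    if volts <= 120:
        return "Slight / Moderate Shock"
    if volts <= 240:
        return "Strong / Intense Shock"
    if volts <= 420:
        return "Danger: Severe Shock"
    return "XXX"


for v in levels:
    print(f"{v:>3} V  {label(v)}")
```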

Milgram’s interest in the subject of authority, and his dark view of the results of his experiments, were deeply informed by his Jewish identity and the context of the Holocaust, which had ended less than two decades earlier. He had expected that Americans, known for their individualism, would differ from Germans in their willingness to obey authority when it might lead to harming others. Milgram and his students had predicted that only 1–3% of participants would administer the maximum shock level. However, in his first official study, 26 of 40 male participants (65%) were convinced to do so, and nearly 80% of the teachers who continued to administer shocks after 150 volts (the point at which the learner was heard to scream) continued to the maximum of 450 volts. Teachers displayed a range of negative emotional responses to the experiment even as they continued to obey, sometimes pleading with the experimenters to stop the experiment while still participating in it. One teacher believed that he had killed the learner and was moved to tears when he eventually found out that he had not.


Milgram included several variants on the original design of the experiment. In one, the teachers were allowed to select their own voltage levels. In this case, only about 2.5% of participants used the maximum shock level, indicating that they were not inclined to do so without the prompting of an authority figure. In another, there were three teachers, two of whom were not test subjects, but instead had been instructed to protest against the shocks. The existence of peers protesting the experiment made the volunteer teachers less likely to obey. Teachers were also less likely to obey in a variant where they could see the learner and were forced to interact with him.

The Milgram experiment has been highly controversial, both for the ethics of its design and for the reliability of its results and conclusions. It is commonly accepted that the ethics of the experiment would be rejected by mainstream science today, due not only to the handling of the deception involved but also to the extreme stress placed on the teachers, who often reacted emotionally to the experiment and were not debriefed. Some teachers were actually left believing they had genuinely and repeatedly shocked a learner before having the truth revealed to them later. Later researchers examining Milgram’s data also found that the experimenters conducting the tests had sometimes gone off-script in their attempts to coerce the teachers into continuing, and noted that some teachers guessed that they were the subjects of the experiment. However, attempts to validate Milgram’s findings in more ethical ways have often produced similar results.



Milgram's Experiments on Obedience to Authority

  • Stephen Gibson, Heriot-Watt University, School of Social Sciences
  • https://doi.org/10.1093/acrefore/9780190236557.013.511
  • Published online: 30 June 2020

Stanley Milgram’s experiments on obedience to authority are among the most influential and controversial social scientific studies ever conducted. They remain staples of introductory psychology courses and textbooks, yet their influence reaches far beyond psychology, with myriad other disciplines finding lessons in them. Indeed, the experiments have long since broken free of the confines of academia, occupying a place in popular culture that is unrivaled among psychological experiments. The present article begins with an overview of Milgram’s account of his experimental procedure and findings, before focussing on recent scholarship that has used materials from Milgram’s archive to challenge many of the long-held assumptions about the experiments. Three areas in which our understanding of the obedience experiments has undergone a radical shift in recent years are the subject of particular focus. First, work that has identified new ethical problems with Milgram’s studies is summarized. Second, hitherto unknown methodological variations in Milgram’s experimental procedures are considered. Third, the interactions that took place in the experimental sessions themselves are explored. This work has contributed to a shift in how we see the obedience experiments. Rather than viewing the experiments as demonstrations of people’s propensity to follow orders, it is now clear that people did not follow orders in Milgram’s experiments. The experimenter did a lot more than simply issue orders, and when he did, participants found it relatively straightforward to defy them. These arguments are discussed in relation to the definition of obedience that has typically been adopted in psychology, the need for further historical work on Milgram’s experiments, and the possibilities afforded by the development of a broader project of secondary qualitative analysis of laboratory interaction in psychology experiments.

  • experimentation
  • interaction
  • standardization



Study Notes

Explanations for Obedience: Variations of Milgram (1963)

Last updated 22 Mar 2021


Following Milgram’s original research, numerous variations were carried out to examine how different variables affect obedience.

Agentic State

An agentic state is when an individual carries out the orders of an authority figure and acts as their agent, with little personal responsibility. In Milgram’s original experiment, the participants were told that the experimenter had full responsibility and therefore they could act as an agent, carrying out the experimenter’s orders. If the participants were told that they were responsible, it is possible that Milgram would have obtained very different results.

Milgram argued that people operate in one of two ways when faced with social situations. Individuals can act autonomously and choose their behaviour, or they can enter an agentic state, where they carry out the orders of an authority figure and do not feel responsible for their actions. When a person moves from an autonomous state to an agentic state, they have undergone an agentic shift.

In Milgram’s original experiment, 65% of participants administered the full 450 volts and were arguably in an agentic state. However, in one variation of Milgram’s experiment, an additional confederate administered the electric shocks on behalf of the teacher. In this variation the percentage of participants who administered the full 450 volts rose dramatically, from 65% to 92.5%. This variation highlights the power of shifting responsibility (agentic shift), as these participants were able to shift responsibility onto the person administering the electric shocks and continue obeying orders because they felt less responsible. Therefore, the ability to enter an agentic state increases the level of obedience, as the level of personal responsibility decreases.

In Milgram’s original research the teacher and the learner were in separate rooms. In order to test the power of proximity, Milgram conducted a variation where the teacher and learner were seated in the same room. In this variation the percentage of participants who administered the full 450 volts dropped from 65% to 40%. Here obedience levels fell, as the teacher was able to experience the learner’s pain more directly. In another variation, the teacher had to force the learner’s hand directly onto the shock plate. In this more extreme variation, the percentage dropped even further, to 30%. In these two variations, the closer the proximity of the teacher and learner, the lower the level of obedience.

The proximity of the authority figure also affects the level of obedience. In one variation, after the experimenter had given the initial instructions, he left the room and all subsequent instructions were provided over the phone. In this variation participants were more likely to defy the experimenter, and only 21% of the participants administered the full 450 volts.

Milgram conducted his original research in a laboratory at Yale University. In order to test the power of the location, Milgram conducted a variation in a run-down building in Bridgeport, Connecticut. The experiment was no longer associated with Yale University and was carried out under the name 'Research Associates of Bridgeport'. In this variation the percentage of participants who administered the full 450 volts dropped from 65% to 47.5%. This highlights the impact of location on obedience, with less credible locations resulting in a reduction in the level of obedience.

In most of Milgram’s variations the experimenter wore a lab coat, indicating his status as a university professor. Milgram examined the power of uniform in a variation where the experimenter was called away and replaced by another ‘participant’ in ordinary clothes, who was in fact another confederate. In this variation, the man in ordinary clothes came up with the idea of increasing the voltage every time the learner made a mistake. The percentage of participants who administered the full 450 volts when instructed by an ordinary man dropped from 65% to 20%, demonstrating the dramatic power of uniform.

Bickman (1974) also investigated the power of uniform in a field experiment conducted in New York. Bickman used three male actors: one dressed as a milkman; one dressed as a security guard; and one dressed in ordinary clothes. The actors asked members of the public to follow one of three instructions: pick up a bag; give someone money for a parking meter; and stand on the other side of a bus stop sign which said ‘no standing’.

On average the guard was obeyed on 76% of occasions, the milkman on 47% and the pedestrian on 30%. These results all suggest that people are more likely to obey when instructed by someone wearing a uniform. This is because a uniform conveys a sense of legitimate authority and power.

Legitimate Authority

Milgram’s variations investigating location and uniform highlight an important factor in obedience research – legitimate authority. For a person to obey an instruction, they need to believe that the authority is legitimate, and this can be affected by multiple variables.

In Milgram’s original research, which took place at Yale University, the percentage of participants administering the full 450 volts was high (65%). However, when the experiment took place in a run-down building in Bridgeport, Connecticut, obedience levels dropped significantly (47.5%). This change in location reduced the legitimacy of the authority, as participants were less likely to trust the experiment. In addition, when the experimenter in Milgram’s research was replaced by another participant in ordinary clothes, obedience levels dropped even further (20%). The lack of a uniform and questionable position of authority reduced the credibility of the authority, which meant the participants were far less likely to obey.
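
For quick revision, the figures quoted in these notes can be collected in one place. The short Python sketch below simply restates the obedience rates given above and expresses each as a change from the 65% baseline; it introduces no new figures.

```python
# Obedience rates (% of participants giving the full 450 V) from the
# Milgram variations described above, compared with the 65% baseline.
baseline = 65.0
variations = {
    "Original study (Yale, experimenter present)": 65.0,
    "Confederate administers the shocks (agentic shift)": 92.5,
    "Teacher and learner in the same room": 40.0,
    "Teacher forces learner's hand onto shock plate": 30.0,
    "Experimenter gives orders by telephone": 21.0,
    "Run-down building in Bridgeport": 47.5,
    "Orders given by an 'ordinary man' in plain clothes": 20.0,
}

for condition, rate in variations.items():
    change = rate - baseline
    print(f"{condition:<52} {rate:5.1f}%  ({change:+.1f} points vs baseline)")
```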




Open access. Published: 25 November 2021.

A novel experimental approach to study disobedience to authority

Emilie A. Caspar

Scientific Reports, volume 11, Article number: 22927 (2021)


Fifty years after the experiments of Stanley Milgram, the main objective of the present paper is to offer a paradigm that complies with up-to-date ethical standards and that can be adapted to various scientific disciplines, ranging from sociology and (social) psychology to neuroscience. Inspired by subsequent versions of Milgram-like paradigms and by combining the strengths of each, this paper presents a novel experimental approach to the study of (dis)obedience to authority. Volunteers are recruited in pairs and take turns to be ‘agents’ or ‘victims’, making the procedure fully reciprocal. For each trial, the agents receive an order from the experimenter to send a real, mildly painful electric shock to the ‘victim’, thus placing participants in an ecological set-up and avoiding the use of cover stories. Depending on the experimental condition, ‘agents’ receive, or do not receive, a monetary gain and are given, or are not given, an aim to obey the experimenter’s orders. Disobedience here refers to the number of times ‘agents’ refused to deliver the real shock to the ‘victim’. As the paradigm is designed to fit with brain imaging methods, I hope to bring new insights and perspectives in this area of research.


Introduction

The experiment of Stanley Milgram is one of the most (in)famous in psychology 1 , within and beyond academia. Several variables account for this notoriety, such as the method used, the ethical issues associated, the enthralling results or the societal impact of the research topic. Milgram’s classical studies famously suggested a widespread willingness to obey authority, to the point of inflicting irreversible harm to another person just met a few minutes before. Beyond the studies of Milgram, the history of nations is also plagued by horrendous acts of obedience that have caused wars and the loss of countless lives 2 . History has fortunately shown that some individuals do resist the social constraint of receiving orders when their own morality is of greater importance than the social costs associated with defying orders (e.g., 3 , 4 ). To understand the factors that prevent an individual from complying with immoral orders, research on disobedience should focus on two main axes: (1) what social and situational factors support disobedience and (2) what individual differences support disobedience.

The first axis has already been largely investigated in past studies. From Milgram’s studies, important situational factors supporting disobedience have already been established 5 . For instance, disobedience increases if the experimenter is not physically present in the room or if two experimenters provide opposing views regarding the morality of the experiment. Subsequent versions and interpretations of Milgram’s studies 6 , 7 , 8 as well as historical research 4 , 9 also suggested the importance of several social (e.g. presence of a supporting group) and situational factors (e.g. family history, proximity with the ‘victim’, intensity of the pain, money) supporting resistance to immoral orders. However, the second axis, regarding individual differences, has been less systematically approached. A few studies 10 , 11 previously explored personality traits that may influence disobedience (e.g. empathic concern, risk-taking), but most of these studies have used relatively weak and potentially biased methods, such as self-reported questionnaires and methods based on cover stories. These studies are not sufficient to explain why, in a given situation, some people will refuse immoral orders and rescue threatened human beings while others will comply with such orders. The current literature on disobedience tells us little about which neuro-cognitive processes drive inter-individual differences in the degree of disobedience. This aim could be achieved by offering a novel experimental approach that makes it possible to use techniques that give more direct access to the functioning of the brain and cognition, such as functional near-infrared spectroscopy (fNIRS), electroencephalography or Magnetic Resonance Imaging (MRI). Regrettably, the original paradigm and those bearing close similarity are not suited to answering those questions reliably, as they were not designed to fit with neuroimaging measurements. By combining the strengths of previous work on disobedience into a single experimental paradigm and adapting it to fit with cognitive and brain imaging measurements, this novel experimental approach could help to better understand, together with individual, social, and cultural factors, which mechanisms make it possible for an individual to refuse to comply with immoral orders.

There were several challenges to consider in order to develop such a paradigm, both ethical and methodological. Studying obedience and resistance to immoral orders involves putting volunteers in a situation where they have to decide whether or not to commit ‘immoral acts’ under orders. A balance has to be found between what is acceptable from an ethical perspective and what is necessary for the research question. Milgram’s studies on obedience raised undeniable ethical issues 12 , 13 , 14 , mostly associated with high stress and the use of a cover story, which involves deception. Some variants of Milgram’s studies were conducted with immersive virtual reality to avoid the ethical issues associated with Milgram’s paradigm 15 , but the transparency of the fake scenario presented to participants does not capture decision-making in an ecological set-up. Other Milgram-based variants, such as the 150-V method, appear to replicate Milgram’s results 16 while complying with current ethical standards, but methodological concerns are still present 17 , as cover stories are still used, which leads to interpretation issues. Beyond ethical considerations, the use of deception raises doubt about whether volunteers truly believed the cover story. As a consequence, reasonable doubt remains about how to interpret the results, and this is one of the main criticisms of Milgram’s studies and subsequent versions. Recent work on the reports of Milgram’s volunteers suggested that there is no strong and reliable evidence that participants believed in the cover story 8 , 14 , 18 . Others suggested that because participants’ stress was visible on video recordings during the experiment (e.g. hand shaking, nervousness), they actually believed that they were torturing another human being 19 . However, this interpretation has been challenged by another study showing that participants can have physiological reactions to stress even in an obviously fake experimental set-up 15 . These contrasting interpretations of Milgram’s studies reinforce the idea that results can hardly be interpreted when cover stories are used 20 . To answer those criticisms, a real scenario thus had to be created, in which participants make decisions that have real consequences for another human being.

An additional challenge is that methods relying on the original paradigm of Milgram, such as the virtual reality version 15 or the 150-V method 16 , are not adapted to neuroimaging measurements. More specifically, with such Milgram-like experimental approaches, only a single trial would be recorded for the entire experimental session, that is, the moment when the volunteer stops the experiment (if this happens). For cognitive and neuroimaging data collection, a single trial per participant does not yield a reliable result, since these methods require averaging over several trials to obtain a good signal-to-noise ratio.

Another challenge at the methodological and conceptual levels is that several experimenters 1 , 5 , 21 , 22 , including myself 23 , 24 , 25 , 26 , 27 , have noted that volunteers are extremely obedient when coming to an experiment. Personally, I have tested about 800 volunteers to investigate the mechanisms by which coercive instructions influence individual cognition and moral behaviors. For instance, by using behavioral, electrophysiological and neuroimaging methods, we have observed that when people obey orders to send real shocks to someone else, their sense of agency 23 , their feeling of responsibility 28 , empathy for the pain of the victim and interpersonal guilt 26 are attenuated compared to a situation where they are free to decide which action to execute. Out of 800 volunteers tested, only 27 disobeyed my orders (i.e. 3.3%): 21 for prosocial reasons (i.e. they refused to administer an electric shock to another individual), 3 by contradiction (i.e. by systematically pressing the other button, no matter the content of the order), and 3 for antisocial reasons (i.e. by administering shocks despite my order not to do so). Although convenient for studying how obedience affects cognition, this rate is indubitably an issue when studying disobedience. If participants almost never disobey, we cannot study the mechanisms through which resistance to immoral orders may develop in a given situation. Several reasons for not disobeying the experimenter’s orders have been suggested. Some consider that being obedient is part of human nature, as massive and destructive obedience has been observed throughout countless historical events 2 . Another current view on the experiments of Milgram is that volunteers were actually happy to participate and to contribute to the acquisition of scientific data 17 , thus explaining the high obedience rate observed. This effect has been referred to as ‘engaged followership’ 29 . If that interpretation is correct, the volunteers’ willingness to come and help the experimenter acquire scientific data makes it even more difficult to obtain disobedience in an experimental setup. However, this interpretation is challenged by several studies reported by Milgram, which displayed a higher disobedience rate than his original study. For instance, disobedience increases when the recipient of the shocks sits in the same room as the participant or when the authoritative experimenter is not physically present in the room 5 . If participants were only guided by their willingness to help acquire scientific data, this should be the case in any experimental set-up. As some studies involve a higher disobedience rate compared to the initial version of Milgram’s study 1 , they could thus, at first glance, be used for studying disobedience. However, even if some versions of Milgram’s initial study yield a higher disobedience rate, thus making it possible to study the mechanisms through which resistance to immoral orders may develop in a given situation, these experimental set-ups are still not adapted for cognitive and neuroimaging measurements and still rely on the use of a cover story.

Taking all the presented challenges into account (i.e. not using cover stories to avoid interpretation issues; obtaining a fair rate of disobedience; using an experimental approach that also fits with cognitive and neuroimaging measurements; respecting ethical standards), the present paper presents a set of experiments that combine the strengths of past experimental work on (dis)obedience. Volunteers were openly involved and active (= real social situation) rather than having to act in fictitious scenarios (= imagined social situation, e.g. Slater et al., 2006). They were confronted with moral decisions about whether or not to follow an experimenter’s orders to inflict a real, painful shock on a ‘victim’ in exchange (or not) for a small monetary gain, thus avoiding the use of cover stories. Since the aim here is to develop a paradigm that could be used both in behavioral and neuroimaging studies, some basic characteristics had to be considered. For instance, to fit with a Magnetic Resonance Imaging (MRI) scanning environment, neither the ‘victim’ nor the experimenter was in the same room as the agent. A real-time video feed was thus used to display the victim’s hand receiving shocks on the agent’s screen, and headphones were used so the participant could hear the experimenter’s orders.

Another method to study disobedience would be to select participants who are more likely to disobey than others. Each volunteer was thus also asked to complete a series of personality questionnaires to evaluate if a specific profile is associated with a greater prosocial disobedience rate. Systematic post-experimental interviews were conducted at the end of each experiment in order to understand the decisions of volunteers to follow or not the orders of the experimenter and to ask them how they felt during the experiment.

Participants

One hundred and eighty naive volunteers (94 females) were recruited in same-gender dyads (= 90 dyads). During the recruitment procedure, I ensured that the participants in each dyad were neither close friends (by mixing people studying different academic courses) nor relatives. To estimate the sample size a priori, I calculated the total sample size based on an effect size f of 0.3. To achieve a power of 0.85 for this effect size, the estimated sample size was 168 for 6 groups 30 . I increased the sample size slightly to 180 in order to prevent loss of data in case of withdrawals. Volunteers were randomly assigned to one of the 6 variants of the task (N = 30/variant). One volunteer was not taken into account because they only played the role of the ‘victim’ to replace a participant who did not show up. No volunteers withdrew from the experiment. For the remaining 179 volunteers, the mean age was 22.63 years (SD = 2.77, range: 18–35). A univariate ANOVA with Age as the dependent variable and Variant as the fixed factor confirmed that the age of the volunteers did not differ between the different variants of the task (p > 0.1, BF10 = 0.167). Volunteers received between €10 and €19.60 for their participation. All volunteers provided written informed consent prior to the experiment. The study was approved by the Ethics Committee of the Erasme Hospital (reference number: P2019/484). All methods were performed in accordance with the relevant guidelines and regulations.
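
The a priori power calculation reported above (effect size f = 0.3, power = 0.85, six groups) can be reproduced approximately with standard tools. In the sketch below the significance level of .05 is an assumption, since it is not stated in the text, so the result should land near, rather than exactly at, the 168 participants reported.

```python
# Approximate reproduction of the a priori sample-size calculation above:
# one-way ANOVA, 6 groups, effect size f = 0.3, power = 0.85.
# alpha = .05 is an assumption (not stated in the text).
from statsmodels.stats.power import FTestAnovaPower

total_n = FTestAnovaPower().solve_power(
    effect_size=0.3, alpha=0.05, power=0.85, k_groups=6
)
print(f"Estimated total sample size: ~{total_n:.0f}")  # should fall close to the reported 168
```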

Method and Material

Six experimental set-ups were created in a between-subject design. In all six set-ups, volunteers were invited in pairs. One person was assigned to start as the agent and the other to start as the ‘victim’. Their roles were switched mid-way, ensuring reciprocity. In contrast to Milgram’s experimental design, both volunteers were real participants, not confederates. The reciprocity also avoided volunteers being stuck in the role of the person inflicting pain on the other, thus attenuating the potential psychological distress of being in a perpetrator role only. Volunteers were given the opportunity to choose the role they wanted to start with. If neither of them had a preference, role assignment was decided by a coin flip, but volunteers were reminded that they could still decide themselves. This procedure helps ensure that participants do not think the role assignment is a trick.

Volunteers were first given the task instructions. Then, they signed the consent forms in front of each other, so both were aware of the other’s consent. The experimenter was never present in the same room, but rather gave the instructions through headphones. This was for two reasons. First, Milgram’s studies show that disobedience increases if the experimenter is not physically present in the room. Second, in the case of MRI scanning, the experimenter would not be able to give direct verbal instructions to the volunteers in the MRI room due to the high noise of the scanner. Here, agents were isolated in a room and were provided with headphones to hear the experimenter’s instructions (see Fig. 1). They were told that this was done to avoid attentional interference from the experimenter’s physical presence in the room. In this series of studies, instructions were pre-recorded, but a real setup with a microphone connected to the headphones could also work. Pre-recordings allow perfect timing of the events, which is important for neuroimaging or electroencephalography recordings. The instructions were “give a shock” or “don’t give a shock”. To increase the authenticity of the procedure, each sentence was recorded 6 times with small variations in the voice and played back in random order. In addition, the audio recordings included a background sound similar to intercom communications.

Figure 1. Experimental setup. Schematic representation of the experimental setup. Volunteers were in different rooms, and the experimenter was located in a third, separate room. On each trial the agent heard the experimenter's order through headphones and had to decide whether to press the 'SHOCK' or 'NO SHOCK' button. A real-time camera feed displayed the victim's hand on the agent's screen, allowing the agent to keep track of the consequences of their actions.

Shocks were delivered using a constant current stimulator (Digitimer DS7A) connected to two electrodes placed on the back of the victim’s left hand, visible to the agent through the camera display. Individual pain thresholds were determined for the two volunteers before starting the experiment. This threshold was determined by increasing stimulation in steps of 1 mA (Caspar et al., 2016). I approximated an appropriate threshold by asking a series of questions about their pain perception during the calibration (1. “Is it uncomfortable?”; 2. “Is it painful?”; 3. “Could you cope with a maximum of 100 of these shocks?”; 4. “Could I increase the threshold?”). When roles were reversed, I briefly re-calibrated the pain threshold of the new victim by increasing the stimulation again from 0 in steps of 3 mA up to the previously determined threshold, to confirm that the initial estimate was still appropriate, and to allow re-familiarisation. The mean stimulation level selected by this procedure was 36.3 mA (SD = 17.5, V = 300, pulse duration: 200 µs). I chose this type of pain instead of others (e.g. financial loss) because it produces a clear muscle twitch on the victim’s hand each time a shock is sent. This gives volunteers clear and visible feedback of the consequences of their actions and makes them fully aware that the shocks are real.
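
A minimal sketch of the calibration logic follows, assuming a hypothetical `deliver_shock` stand-in for the Digitimer DS7A and treating the four questions above as a simple stopping rule; this is one plausible reading of the procedure, not the authors' actual code.

```python
def deliver_shock(mA: float) -> None:
    # Hypothetical stand-in for triggering the stimulator at a given intensity.
    print(f"[stimulator] {mA:.1f} mA pulse")


def ask(question: str) -> bool:
    return input(f"{question} (y/n): ").strip().lower() == "y"


def calibrate(step_mA: float = 1.0) -> float:
    """Raise intensity in 1 mA steps and stop at a level the participant reports
    as painful but tolerable over ~100 shocks (based on the four questions above)."""
    intensity = 0.0
    while True:
        intensity += step_mA
        deliver_shock(intensity)
        uncomfortable = ask("Is it uncomfortable?")
        painful = ask("Is it painful?")
        tolerable = ask("Could you cope with a maximum of 100 of these shocks?")
        go_higher = ask("Could I increase the threshold?")
        if uncomfortable and painful and tolerable and not go_higher:
            return intensity

# Example: threshold = calibrate()  # the mean level selected in the study was 36.3 mA
```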

There was a total of 96 trials per experimental condition. In the coerced condition, the experimenter asked the agent to give a shock in 64 trials and not to give a shock in 32 trials. This ratio was chosen on the assumption that the volunteer’s willingness to refuse immoral orders would increase with the number of times they were instructed to inflict pain on the “victim”.

On each trial, a picture of two rectangles, a red one labelled ‘SHOCK’ and a green one labelled ‘NO SHOCK’, was displayed in the bottom left and right of the screen. The key-outcome mapping varied randomly on a trial-wise basis, but the outcome was always fully congruent with the mapping seen by the participant. Agents could then press one of the two buttons. Pressing the SHOCK key delivered a shock to the victim, while pressing the NO SHOCK key did not deliver any shock. This procedure of randomized button mapping allows better control over motor preparation, an aspect that can be important for neuroimaging data.
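
A minimal sketch of how the 96-trial coerced block and the trial-wise randomized button mapping described above could be generated; the field names are illustrative, not taken from the authors' code.

```python
import random


def build_coerced_block(n_shock_orders=64, n_no_shock_orders=32, seed=None):
    """Build a 96-trial coerced block: each trial carries the experimenter's order
    and a randomly assigned left/right position for the SHOCK and NO SHOCK buttons."""
    rng = random.Random(seed)
    orders = (["give a shock"] * n_shock_orders
              + ["don't give a shock"] * n_no_shock_orders)
    rng.shuffle(orders)
    trials = []
    for order in orders:
        shock_side = rng.choice(["left", "right"])   # trial-wise randomized mapping
        trials.append({
            "order": order,
            "shock_button": shock_side,
            "no_shock_button": "right" if shock_side == "left" else "left",
        })
    return trials


block = build_coerced_block(seed=1)
print(len(block), block[0])
```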

In half of the variants of the task (i.e., 3/6), the “Aim” variants, participants were given a reason for obeying the orders of the experimenter, while this was not the case in the other half, the “No Aim” variants. In the “No Aim” variants, I did not give participants any reason for obeying and simply explained the task. If participants asked about the aim, I simply told them that they would know at the end of the experiment, without providing further justification. In the “Aim” variants, volunteers were told that researchers had observed specific brain activity in the motor cortex in another study when participants were given instructions. We explained that the present study was a control study to measure different aspects linked to motor activity when they press buttons, in order to see if the button pressing was related to brain activity measured over the motor cortex. To increase the veracity of this aim, electrodes were also placed on their fingers and connected to a real electromyography (EMG) apparatus to supposedly record their muscle activity. Volunteers were instructed to press the two buttons only with their right and left index fingers, as naturally as possible, and to avoid overly large movements, so as to produce clean EMG data. If volunteers asked whether they really had to follow orders, I told them that for ethical reasons I could not force them to do anything, but that it would be better for the sake of the experiment. Telling them explicitly that they could disobey the orders would not be helpful for studying ‘real’ disobedience.

In 4 out of 6 variants of the task, the "Free-choice" variants, a second experimental condition was used: the free-choice condition. In this condition, volunteers were told that on each trial they could freely decide whether or not to shock the 'victim'; they received no instructions. In 4 out of 6 variants of the task, the "Monetary reward" variants, agents received a monetary reward of +€0.05 for each shock delivered. In the other 2 variants, volunteers were not rewarded for shocks (the "No monetary reward" variants). To summarize, the six variants of the same task were the following: (1) No aim + Monetary reward + Free-choice condition; (2) No aim + No monetary reward + Free-choice condition; (3) Aim + Monetary reward + Free-choice condition; (4) Aim + No monetary reward + Free-choice condition; (5) No aim + Monetary reward + No free-choice condition; (6) Aim + Monetary reward + No free-choice condition (see Table 1).
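The same design can be written down as a small lookup table; this simply restates the enumeration above, with illustrative field names.

```python
# The six task variants, restated from the enumeration above.
VARIANTS = {
    1: {"aim_given": False, "reward_per_shock_eur": 0.05, "free_choice_condition": True},
    2: {"aim_given": False, "reward_per_shock_eur": 0.00, "free_choice_condition": True},
    3: {"aim_given": True,  "reward_per_shock_eur": 0.05, "free_choice_condition": True},
    4: {"aim_given": True,  "reward_per_shock_eur": 0.00, "free_choice_condition": True},
    5: {"aim_given": False, "reward_per_shock_eur": 0.05, "free_choice_condition": False},
    6: {"aim_given": True,  "reward_per_shock_eur": 0.05, "free_choice_condition": False},
}
```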

Before the experimental session, volunteers filled in six questionnaires, including (1) the Money Attitude Scale (e.g., "I put money aside on a regular basis for the future")31, (2) the Moral Foundations Questionnaire (e.g., "Whether or not someone showed a lack of respect for authority")32, (3) the Aggression-Submission-Conventionalism scale (e.g., "We should believe what our leaders tell us")33, (4) the Short Dark Triad scale (e.g., "Most people can be manipulated")34, and (5) the Interpersonal Reactivity Index (e.g., "When I see someone get hurt, I tend to remain calm")35. At the end of the experimental session, they were asked to fill in two more questionnaires: (1) a debriefing assessing what they felt during the experiment and the reasons for choosing to obey or disobey the orders of the experimenter (Supplementary Information S1), and (2) a questionnaire on social identification with the experimenter (e.g., "I feel strong ties with this experimenter")36. At the end of the experiment, a debriefing was conducted with each volunteer separately. Volunteers were then paid, again separately.

General data analyses

Each result was analyzed with both frequentist and Bayesian statistics37. Bayesian statistics assess the likelihood of the data under both the null and the alternative hypothesis. BF10 corresponds to p(data|H1)/p(data|H0). Generally, a BF10 between 1/3 and 3 indicates that the data are similarly likely under H1 and H0, and that the data do not adjudicate which is more likely. A BF10 below 1/3 or above 3 is interpreted as supporting H0 or H1, respectively. For instance, BF10 = 20 would mean that the data are 20 times more likely under H1 than H0, providing very strong support for H1, while BF10 = 0.05 would mean that the data are 20 times more likely under H0 than H1, providing very strong support for H038. BF and p values were calculated using JASP39 with the default priors implemented in JASP. All analyses were two-tailed.
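As a small illustration of these cut-offs (the actual Bayes factors were computed in JASP, not with this helper):

```python
def interpret_bf10(bf10: float) -> str:
    """Read a Bayes factor BF10 = p(data | H1) / p(data | H0) using the
    conventional 1/3 and 3 cut-offs described above."""
    if bf10 > 3:
        return "supports H1"
    if bf10 < 1 / 3:
        return "supports H0"
    return "inconclusive: data similarly likely under H0 and H1"

print(interpret_bf10(20))    # data 20 times more likely under H1 -> supports H1
print(interpret_bf10(0.05))  # data 20 times more likely under H0 -> supports H0
```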

Number of shocks given in the free-choice condition

In the free-choice condition, volunteers were told that they were entirely free to decide, on each of the 96 free-choice trials, whether or not to deliver a shock to the 'victim'. On average, agents administered shocks on 31.86% of the trials (SD = 34.98, minimum: 0%, maximum: 100%) in the free-choice condition, corresponding to 30.59/96 shocks. A paired-sample t-test indicated that agents delivered shocks less frequently in the free-choice condition than in the coerced condition (68.03%, SD = 41.11, t(119) = −9.919, p < 0.001, Cohen's d = −0.906, BF10 = 1.987e+14). This result supports the claim that individuals inflict more harm on others when they obey orders than when they act freely.
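A minimal sketch of this comparison using SciPy, with made-up per-participant percentages (the real data are available on OSF):

```python
import numpy as np
from scipy import stats

# Hypothetical shock percentages for five participants (illustration only).
free_choice = np.array([10.4, 0.0, 55.2, 31.3, 27.1])   # % shocks chosen freely
coerced     = np.array([66.7, 50.0, 91.7, 70.8, 58.3])   # % shock orders followed

t, p = stats.ttest_rel(free_choice, coerced)
diff = free_choice - coerced
cohens_d = diff.mean() / diff.std(ddof=1)                # d for paired samples
print(f"t = {t:.3f}, p = {p:.4f}, d = {cohens_d:.3f}")
```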

Prosocial disobedience across variants

In the present study, I was interested in prosocial disobedience, that is, when agents refuse the experimenter's order to send a painful shock to the 'victim'. Table 2 displays the number of volunteers who reported that they voluntarily disobeyed in each variant of the task.

In this experiment, the main variable of interest was not only how many participants disobeyed in each variant, but also how frequently they disobeyed. A percentage of prosocial disobedience was calculated for each volunteer, corresponding to the number of trials in which the participant chose to disobey (i.e., sending no shock despite being ordered by the experimenter to do so) divided by the total number of trials with an order to send a shock, multiplied by 100. I compared the prosocial disobedience rate across variants of the task, gender of participants and order of the roles. I conducted a univariate ANOVA with prosocial disobedience as the dependent variable and Aim (aim given, no aim given), Monetary reward (+€0.05 or not), Free-choice (presence or absence of a free-choice condition), Gender and Order of the roles (agent first, victim first) as fixed factors (see Fig. 2). Both frequentist and Bayesian statistics strongly supported a main effect of Aim (F(1,155) = 14.248, p < 0.001, η²partial = 0.084, BFincl = 158.806). Prosocial disobedience was lower when an aim for obedience was given to volunteers (20.4%, CI95 = 12.8–28.1) than when no aim was given (43.3%, CI95 = 35.6–51). Both frequentist and Bayesian statistics also supported a main effect of Monetary reward (F(1,155) = 12.335, p = 0.001, η²partial = 0.074, BFincl = 28.930). Prosocial disobedience was lower when a monetary reward was given for each shock (25.1%, CI95 = 18.5–31.7) than when no monetary reward was given (45.4%, CI95 = 35.9–54.8). The frequentist approach showed a main effect of Gender (F(1,155) = 5.128, p = 0.025, η²partial = 0.032), with a lower prosocial disobedience rate for female volunteers (25.7%, CI95 = 18.2–33.2) than for male volunteers (38%, CI95 = 30–46). However, the Bayesian version of the same analysis revealed a lack of sensitivity (BFincl = 0.871). All other main effects and interactions supported H0 or showed a lack of sensitivity (all ps > 0.1 & BFsincl ≥ 4.291e−7 & ≤ 1.178).
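The per-volunteer disobedience score and the ANOVA can be sketched as follows; the data frame here is simulated and the column names are illustrative, not those of the actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 160  # illustrative sample size

# Simulated per-volunteer data mirroring the fixed factors described above.
df = pd.DataFrame({
    "aim":         rng.choice(["aim", "no_aim"], n),
    "reward":      rng.choice(["reward", "no_reward"], n),
    "free_choice": rng.choice(["present", "absent"], n),
    "gender":      rng.choice(["F", "M"], n),
    "role_order":  rng.choice(["agent_first", "victim_first"], n),
    "n_refused":   rng.integers(0, 65, n),   # refusals out of the 64 shock orders
})
# Prosocial disobedience = refused shock orders / total shock orders * 100.
df["prosocial_disobedience"] = 100 * df["n_refused"] / 64

model = smf.ols(
    "prosocial_disobedience ~ C(aim) + C(reward) + C(free_choice) + C(gender) + C(role_order)",
    data=df,
).fit()
print(anova_lm(model, typ=2))
```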

Figure 2. Graphical representation of the percentages of prosocial disobedience in each variant of the task.

The following results report two-tailed Pearson correlations between prosocial disobedience and several other variables, including (1) the reasons given for disobeying, (2) how responsible, how bad and how sorry volunteers felt during the experiment, (3) identification with the experimenter, (4) the perceived level of pain of the 'victim', (5) identification with the 'victim', and (6) individual differences measured through self-report questionnaires. I applied a False Discovery Rate (FDR) correction with the Benjamini–Hochberg method40 to the p-value of each of these correlations, but for the sake of clarity the variables are reported in separate sub-sections.
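The Benjamini–Hochberg correction can be applied with statsmodels; the p-values below are placeholders, not the study's values.

```python
from statsmodels.stats.multitest import multipletests

raw_p = [0.0004, 0.012, 0.048, 0.076, 0.21, 0.55]        # placeholder p-values
reject, p_fdr, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for p, q, sig in zip(raw_p, p_fdr, reject):
    print(f"p = {p:.4f} -> p_FDR = {q:.4f}{' *' if sig else ''}")
```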

Reasons for prosocial disobedience

All participants who reported that they voluntarily disobeyed the orders of the experimenter (N = 108) were presented with a list of 10 reasons, each to be rated from "Not at all" to "Extremely" (see Supplementary Information S1). The reason 'I wanted to make more money' was only considered for volunteers who had a variant with a monetary reward for each shock (N = 68). Both frequentist and Bayesian statistics showed that the percentage of prosocial disobedience correlated positively with moral reasons (r = 0.550, pFDR < 0.001, BF10 = 1.700e+7), correlated positively with disobedience by contradiction (r = 0.329, pFDR < 0.001, BF10 = 47.53) and correlated negatively with the willingness to make more money (r = −0.485, pFDR < 0.001, BF10 = 822.16). Other correlations were in favor of H0 or were inconclusive (all psFDR > 0.076, all BFs10 ≥ 0.120 & ≤ 1.446).

Feeling responsible, bad and sorry

Both frequentist and Bayesian statistics showed strong positive correlations between prosocial disobedience and how responsible (r = 0.299, pFDR < 0.001, BF10 = 343.98) and how bad (r = 0.301, pFDR < 0.001, BF10 = 384.65) volunteers felt during the task (see Fig. 3A,B). The more responsible and the worse they felt during the task, the more they refused the order to send a shock to the 'victim'. The correlation with how sorry they felt was inconclusive (pFDR > 0.08, BF10 = 0.929).

Figure 3. Pearson correlations between prosocial disobedience and (A) the feeling of responsibility, (B) how bad agents felt during the task when they administered shocks to the 'victim', and (C) how painful they estimated the shock delivered to the 'victim' to be. All tests were two-tailed.

Identification with the experimenter

Both frequentist and Bayesian statistics strongly supported H0 regarding the relationship between prosocial disobedience and personal identification (pFDR > 0.5, BF10 = 0.121) and bonding with the experimenter (pFDR > 0.5, BF10 = 0.117). The relationship between the charisma of the experimenter and prosocial disobedience was also slightly in favor of H0 (pFDR > 0.1, BF10 = 0.530).

Estimated pain of the ‘victim’

The frequentist approach showed a positive correlation between the perceived pain of the 'victim' and prosocial disobedience (r = 0.189, pFDR = 0.048). The more pain volunteers thought the 'victim' was in, the more frequently they refused to deliver the shock. The Bayesian version of the same analysis only slightly supported this relationship (BF10 = 2.236), see Fig. 3C.

Identification with the ‘victim’

In the post-session questionnaire, volunteers rated to what extent they considered that the other participant could be part of their own group and to what extent they identified with the other participant. Both frequentist and Bayesian statistics strongly supported H0 regarding the relationship between prosocial disobedience and the perception that the other participant could be part of one's own group (pFDR > 0.8, BF10 = 0.096). The relationship between prosocial disobedience and identification with the other participant also slightly supported H0 (pFDR > 0.1, BF10 = 0.511).

Correlations between the behavior of pairs of participants

As a role-reversal procedure was used, the behavior of those who were agents first could influence the behavior of those who became agents afterwards. A Pearson correlation was therefore computed between the prosocial disobedience of agents who acted first and that of victims who became agents afterwards. The correlation was positive (r = 0.514, p < 0.001, BF10 = 60,068.704), suggesting that participants who were agents second tended to act similarly to those who were agents first.
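This analysis reduces to a Pearson correlation over pairs; a sketch with invented pair-level rates:

```python
import numpy as np
from scipy.stats import pearsonr

# Invented prosocial disobedience rates (%) for each pair of participants:
# the member who was agent first vs. the member who became agent after role reversal.
agent_first  = np.array([0, 10, 35, 80, 100, 15, 60])
agent_second = np.array([5,  5, 40, 60,  90, 20, 70])

r, p = pearsonr(agent_first, agent_second)
print(f"r = {r:.3f}, p = {p:.4f}")
```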

Individual differences associated with prosocial disobedience

Another approach to ensure a reliable prosocial disobedience rate when recruiting volunteers would be to target individuals with a profile that is most frequently associated with disobedient behaviors. Both frequentist and Bayesian statistics for these exploratory correlations were two-tailed. Cronbach's α for each subscale is presented in Supplementary Information S2. Both frequentist and Bayesian statistics showed negative correlations between prosocial disobedience and scores on the Authority subscale (r = −0.259, pFDR < 0.001, BF10 = 41.372) and the Purity subscale (r = −0.303, pFDR < 0.001, BF10 = 424.97) of the MFQ. The lower volunteers scored on authority and purity, the higher their prosocial disobedience rate. Other correlations were in favor of H0 or were inconclusive (all psFDR ≥ 0.048, all BFs10 ≥ 0.100 & ≤ 2.314).

Reasons for obedience

If participants reported that they did not voluntarily disobey the orders of the experimenter, they were asked in an open question to explain their decision to comply with those orders. After reading all the answers, three categories of reasons were extracted: (1) 'For science'; participants reported that they obeyed to allow reliable data acquisition (e.g., Participant 91: "Pour ne pas fausser l'étude"—English translation: "To avoid biasing the study"); (2) 'For respect of authority'; participants reported that they had to follow the orders of the authority figure (e.g., Participant 13: "Pour moi c'est normal de suivre un ordre"—English translation: "In my opinion, it's normal to follow an order"); and (3) 'For lack of side-effects'; participants reported that since the shocks were calibrated on the victim's own pain threshold, obeying orders to shock was not problematic (e.g., Participant 115: "Douleur supportable pour l'autre, je n'ai accepté de faire subir que ce que j'aurais été prêt à subir moi-même"—English translation: "The pain was tolerable for the other participant; I only agreed to inflict what I would have been ready to undergo myself"). An independent, naive judge classified each response into one or more of these three categories. Analysis of the frequencies revealed that 'For science' was mentioned 31/70 times, 'For lack of side-effects' 17/70 times, and 'For respect of authority' 31/70 times.

The aim of the present paper was to present a novel experimental approach to study (dis)obedience to immoral orders, by combining the strengths of past experimental work and by adapting it to cognitive and neuroimaging measurements. Although other versions have been proposed since Milgram's studies, such as a study in an immersive virtual environment15 or the 150-V method16, some methodological concerns remained, as those methods still involved cover stories or fake experimental set-ups. Here, the experimental approach was significantly different, as it was based on an entirely transparent method involving the administration of real electric shocks to another individual. This approach has the advantage of solving some of the main ethical and methodological concerns associated with the use of cover stories. It can also be used to study both how social and situational factors influence disobedience and how individual factors do. Regarding social and situational factors, the proposed paradigm can be adapted to evaluate, for instance, the influence of a supporting group, the use of high or low monetary rewards, or how priming disobedience with a documentary influences disobedience. Regarding individual factors, the paradigm makes it possible to investigate how personality traits influence disobedience and to study the neuro-cognitive processes underlying disobedience.

Novel theories combining a multi-method approach based on social psychology, neuroeconomics and neuroscience could thus emerge to better understand the mechanisms supporting disobedience. For instance, one could evaluate how empathy for the pain of the victim predicts disobedience and how the presence of a supporting group influences our capacity to feel empathy41 and/or compassion for the 'victim'42. It could also be argued that the presence of a supporting group diffuses responsibility between individuals and increases obedience, by influencing how our brain processes agency and responsibility over our actions28,43,44,45. As the results obtained in the present study also indicated that feeling bad about the shocks delivered was statistically associated with prosocial disobedience, one could evaluate how the neural correlates of guilt46 predict prosocial disobedience and what historical, cultural and individual factors influence the feeling of guilt.

Six variants of the same task were tested in the present study, some inducing a higher prosocial disobedience rate than others. Statistical results showed that providing a reason, or aim, to justify obedience strongly decreased disobedience. Providing a monetary reward, even one as small as €0.05, also strongly decreased disobedience. Variant 2, in which volunteers were given neither an aim nor a monetary reward, showed the highest disobedience rate. However, to study disobedience in an ecologically valid way, the paradigm should capture disobedience even when participants know that they are losing something (i.e., monetary rewards or the 'trust' of the experimenter asking for their help with the study). Defying the orders of an authority generally involves social and/or monetary costs in real-life situations. I would thus not recommend using an experimental paradigm in which volunteers incur no cost for defying the orders of the experimenter, as this would reduce the ecological validity of the act of disobedience. Variants 3 and 6 involve two types of costs for resisting the orders of the experimenter: a monetary loss and disappointing the experimenter. In Variant 3, descriptive statistics showed that prosocial disobedience was lower than in Variant 6. The main difference between these two variants was the presence of a free-choice condition. In my former studies23,27, volunteers frequently justified obedience in the coerced condition by the freedom they were given in the free-choice condition (e.g., Participant 89, English translation: "(…) In addition, I knew I could freely choose in the other condition not to send shocks, which is what I did"). In the present debriefings, some volunteers also reported that the presence of a free-choice condition gave them enough freedom to accept to follow the orders in the coerced condition. In the supplementary analyses, results showed that when the monetary reward and the aim for obeying are identical, being given a free-choice condition reduces disobedience in the coerced condition. Therefore, Variant 6 appears to provide a good balance between reaching a reliable disobedience rate and finding volunteers who refuse to inflict physical harm on another human being despite the monetary or social costs associated with defying orders.

Another approach would be to pre-select people who are predicted to be more disobedient. Personality questionnaires indicated that scoring low on the Authority and Purity subscales of the MFQ was strongly associated with a higher prosocial disobedience rate. The link between one's own relationship to authority and prosocial disobedience observed here replicates another study conducted on the first generation of Rwandese born after the 1994 genocide47. One's own relationship to authority thus appears to be a reliable variable for pre-selecting a sample that is more likely to disobey immoral orders.

In the present paper, administering a real, mildly painful shock, whether or not in exchange for a small monetary gain, was described as an 'immoral' act. Notions of what is moral or not can differ widely between individuals48, for both academics and volunteers participating in an experiment. Humans are indeed sensitive to different competing issues of morality, a key reason for rescuing persecuted people49. In accordance with this observation, the present results indicated that moral reasons were a critical factor associated with the prosocial disobedience rate: the more immoral volunteers considered shocking their partner to be, the more they disobeyed. However, considering an action as against one's own moral values does not necessarily translate into refusal, especially when the order is in line with the law. An extreme example is soldiers who have perpetrated acts that transgressed their moral beliefs because those acts were ordered by their superiors in combat50. A core question for future research remains: Why are some people capable of putting their own moral standards above the social costs associated with defying orders?

Results indicate that the more volunteers felt responsible during the task, and the worse they felt about sending shocks to the 'victim', the higher their prosocial disobedience. In another study, we observed that obeying orders reduced the feeling of responsibility, as well as how bad and how sorry volunteers felt, compared with being free to decide26. One hypothesis is that individuals who preserve a feeling of responsibility and of feeling bad, even under command, can more easily defy immoral orders. However, future studies are necessary to better understand the neuro-cognitive processes that prevent an individual from complying with immoral orders. As this paradigm is adapted to neuroimaging measurements, a whole range of such studies can now be conducted.

It has previously been suggested that a strong identification with the experimenter giving orders is associated with higher obedience36. However, in the present paper, correlations between prosocial disobedience and identification with the experimenter were in favor of H0 with both the frequentist and the Bayesian approaches. In a former study, we also observed that identification with the experimenter was not a critical factor in explaining (dis)obedience: the generation of Rwandese born after the genocide and tested in Rwanda reported a higher identification with the experimenter than the same generation of Rwandese tested in Belgium47, yet the latter group had a higher prosocial disobedience rate than the former. Future studies must thus be conducted to understand how identification with the person giving orders influences obedience, and its weight compared with other social, cultural and individual variables.

Although some volunteers reported that they felt a bit stressed and anxious during the task when they were in the role of the agent, the overwhelming majority did not report any negative psychological feelings. None of the participants withdrew from the experiment and none reported long-term negative psychological effects.

Nowadays, it has become difficult to find volunteers who do not know Milgram's studies, given the extensive media coverage, including movies, radio plays, books, podcasts and documentaries. One could expect that knowing Milgram's work would prevent people from obeying. However, for the large majority of volunteers, this appears not to be the case. In previous studies that I conducted with a relatively similar paradigm, the disobedience rate was drastically low (i.e., 3.3%) even though participants were university students who knew Milgram's studies. In the present study, almost all the volunteers knew Milgram and explicitly mentioned him during the oral debriefings or before starting the experiment. Yet among those who disobeyed, almost none reported that the reason for their disobedience was that they thought this was the aim of the experiment. Further, there was no statistical relationship between prosocial disobedience and believing that this was the aim of the study. This does not mean that knowing Milgram's work has no influence on disobedience; it rather suggests that it is not the main factor in one's decision whether or not to obey an experimenter. It is also possible that, since the shocks in this experiment were real rather than fake as in Milgram's studies, participants considered that this was indeed not a study aiming to replicate Milgram.

As far as I have observed, the main problem associated with knowing Milgram's studies is that volunteers assume that I, too, have hidden aims and procedures when they enter the experimental room. Several volunteers reported that they only realized that my explanations of the task were true when they were explicitly offered the choice of which role to play first and/or when they started receiving the shocks. This is a general concern in psychological research: the heavy use of cover stories can also affect other studies, as volunteers develop a mistrust of what researchers tell them.

Results indicated that those who were agents second tended to act similarly to those who were agents first, sending a relatively similar number of shocks. Of note, this is an effect that we also observed in past studies on the effect of obeying orders on cognition23,26,43. Nonetheless, in none of those studies did we observe that the order of the roles had a statistical influence on the neuro-cognitive processes targeted. However, the influence of role reversal on disobedience and related neuro-cognitive processes still has to be investigated in future studies.

The present paradigm is ecological in the sense that volunteers face decisions that have a real, physical impact on another human being. However, at the moment I have only little evidence that this paradigm has ecological validity for reflecting obedience in real-life situations, especially regarding "destructive disobedience"17. Caution is indicated when making inferences from laboratory studies to complex social behaviours, such as those observed during genocides16. My main evidence at the moment is that the very low rate of prosocial disobedience observed in the first generation of post-genocide Rwandans tested in Rwanda with this paradigm47 is consistent with the fact that deference to authority had already been emphasized by academics as an important factor in the 1994 genocide4,51. Individual scores on deference to authority in Caspar et al.47 were the best predictor of prosocial disobedience in that former paradigm, thus suggesting some ecological validity. A promising approach would be to recruit "Righteous Among the Nations", individuals who actually saved lives during genocides. Testing this population would put the ecological validity of the present paradigm to the test.

People's ability to question and resist immoral orders is a fundamental aspect of individual autonomy and of successful societies. As Howard Zinn famously wrote: "Historically, the most terrible things—war, genocide, and slavery—have resulted not from disobedience, but from obedience". Understanding how individuals differ in the extent to which they comply with orders has several societal implications, ranging from understanding how evolving in highly hierarchical environments, such as the military or prisons, influences moral behaviours, to developing interventions that would help prevent blind obedience and help people resist calls to violence in vulnerable societies. However, since Milgram's studies, the topic of disobedience has mostly been studied by social psychologists using adapted versions of Milgram's initial paradigm. I hope that with this novel approach, (dis)obedience research will be given a new boost and will be taken up by other scientific disciplines seeking to better understand human behaviour.

Data availability

Data are made available on OSF (DOI: https://doi.org/10.17605/OSF.IO/2BKJC ).

Milgram, S. Behavioral study of obedience. J. Abnorm. Soc. Psychol. 67(4), 371–378. https://doi.org/10.1037/h0040525 (1963).

Zinn, H. The Zinn Reader: Writings on Disobedience and Democracy (Seven Stories Press, 1997).

Roisin, J. Dans la nuit la plus noire se cache l'humanité: Récits des justes du Rwanda. Les Impressions nouvelles (2017).

Fox, N. & Nyseth Brehm, H. I decided to save them: Factors that shaped participation in rescue efforts during genocide in Rwanda. Soc. Forces 96 (4), 1625–1648. https://doi.org/10.1093/sf/soy018 (2018).

Milgram, S. Obedience to Authority: An Experimental View (Harper & Row, 1974).

Blass, T. Understanding behavior in the Milgram obedience experiment: The role of personality, situations, and their interactions. J. Pers. Soc. Psychol. 60 (3), 398–413. https://doi.org/10.1037/0022-3514.60.3.398 (1991).

Dolinski, D. & Grzyb, T. The (doubtful) role of financial reward in obedience to authority. J. Soc. Psychol. 159 (4), 490–496. https://doi.org/10.1080/00224545.2018.1505708 (2019).

Haslam, N., Loughnan, S. & Perry, G. Meta-milgram: An empirical synthesis of the obedience experiments. PLoS ONE 9 (4), e93927. https://doi.org/10.1371/journal.pone.0093927 (2014).

Fagin-Jones, S. & Midlarsky, E. Courageous altruism: Personal and situational correlates of rescue during the Holocaust. J. Posit. Psychol. 2 (2), 136–147. https://doi.org/10.1080/17439760701228979 (2007).

Bègue, L. et al. Personality predicts obedience in a milgram paradigm. J. Pers. 83 , 299–306. https://doi.org/10.1111/jopy.12104 (2015).

Oliner, S. P. & Oliner, P. M. The Altruistic Personality: Rescuers of Jews in Nazi Europe (Free Press, 1988).

Baumrind, D. Some thoughts on ethics of research: After reading Milgram’s “Behavioral Study of Obedience”. Am. Psychol. 19 (6), 421–423. https://doi.org/10.1037/h0040128 (1964).

Miller, A. G. The Obedience Experiments: A Case Study of Controversy in Social Science (Praeger Publishers, 1986).

Perry, G. Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments (New Press, 2013).

Slater, M. et al. A virtual reprise of the stanley milgram obedience experiments. PLoS ONE 1 (1), e39. https://doi.org/10.1371/journal.pone.0000039 (2006).

Burger, J. M. Replicating Milgram: Would people still obey today?. Am. Psychol. 64 (1), 1–11. https://doi.org/10.1037/a0010932 (2009).

Miller, A. G. Reflections on “replicating milgram” (Burger, 2009). Am. Psychol. 64 (1), 20–27. https://doi.org/10.1037/a0014407 (2009).

Griggs, R. A. & Whitehead, G. I. Coverage of recent criticisms of Milgram’s obedience experiments in introductory social psychology textbooks. Theory Psychol. 25 (5), 564–580. https://doi.org/10.1177/0959354315601231 (2015).

Blass, T. Obedience to Authority: Current Perspectives on the Milgram Paradigm (Psychology Press, 1999).

Kelman, H. C. Human use of human subjects: The problem of deception in social psychological experiments. Psychol. Bull. 67 (1), 1–11. https://doi.org/10.1037/h0024072 (1967).

Beauvois, J.-L., Courbet, D. & Oberlé, D. The prescriptive power of the television host. A transposition of Milgram’s obedience paradigm to the context of TV game show. Eur. Rev. Appl. Psychol. 62 (3), 111–119. https://doi.org/10.1016/j.erap.2012.02.001 (2012).

Frank, J. D. Experimental studies of personal pressure and resistance: I. experimental production of resistance. J. Gen. Psychol. 30 (1), 23–41. https://doi.org/10.1080/00221309.1943.10544454 (1944).

Caspar, E. A., Christensen, J. F., Cleeremans, A. & Haggard, P. Coercion changes the sense of agency in the human brain. Curr. Biol. 26 (5), 585–592. https://doi.org/10.1016/j.cub.2015.12.067 (2016).

Caspar, E. A., Vuillaume, L., Magalhães de Saldanha da Gama, P. A. & Cleeremans, A. The Influence of (Dis)belief in free will on immoral behavior. Front. Psychol. https://doi.org/10.3389/fpsyg.2017.00020 (2017).

Caspar, E. A., Cleeremans, A. & Haggard, P. Only giving orders? An experimental study of the sense of agency when giving or receiving commands. PLoS ONE 13 (9), e0204027. https://doi.org/10.1371/journal.pone.0204027 (2018).

Caspar, E. A., Ioumpa, K., Keysers, C. & Gazzola, V. Obeying orders reduces vicarious brain activation towards victims’ pain. Neuroimage 222 , 117251. https://doi.org/10.1016/j.neuroimage.2020.117251 (2020).

Caspar, E. A., LoBue, S., Magalhães de Saldanha da Gama, P. A., Haggard, P. & Cleeremans, A. The effect of military training on the sense of agency and outcome processing. Nat. Commun. https://doi.org/10.1038/s41467-020-18152-x (2020).

Beyer, F., Sidarus, N., Bonicalzi, S., & Haggard, P. Beyond self-serving bias: diffusion of responsibility reduces sense of agency and outcome monitoring. Soc Cogn Affect Neurosci. 12 (1), 138–145. https://doi.org/10.1093/scan/nsw160 (2017).

Haslam, S. A., Reicher, S. D., Millard, K. & McDonald, R. “Happy to have been of service”: The Yale archive as a window into the engaged followership of participants in Milgram’s “obedience” experiments. Br. J. Soc. Psychol. 54 (1), 55–83. https://doi.org/10.1111/bjso.12074 (2015).

Faul, F., Erdfelder, E., Lang, A.-G. & Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavior, and biomedical sciences. Behav. Res. Methods 39 , 175–191. https://doi.org/10.3758/BF03193146 (2007).

Yamauchi, K. T. & Templer, D. J. The development of a money attitude scale. J. Pers. Assess. 46 (5), 522–528. https://doi.org/10.1207/s15327752jpa4605_14 (1982).

Graham, J. et al. Mapping the moral domain. J. Pers. Soc. Psychol. 101 (2), 366–385. https://doi.org/10.1037/a0021847 (2011).

Dunwoody, P. T. & Funke, F. The aggression-submission-conventionalism scale: Testing a new three factor measure of authoritarianism. J. Soc. Polit. Psychol. 4 (2), 571–600. https://doi.org/10.5964/jspp.v4i2.168 (2016).

Jones, D. N. & Paulhus, D. L. Introducing the short dark triad (SD3): A brief measure of dark personality traits. Assessment 21 (1), 28–41. https://doi.org/10.1177/1073191113514105 (2014).

Davis, M. A multidimensional approach to individual differences in empathy. JSAS Catalog Sel. Doc. Psychol. 10 (1980).

Steffens, N. K., Haslam, S. A. & Reicher, S. D. Up close and personal: Evidence that shared social identity is a basis for the “special” relationship that binds followers to leaders. Leadersh. Q. 25 (2), 296–313. https://doi.org/10.1016/j.leaqua.2013.08.008 (2014).

Dienes, Z. Bayesian versus orthodox statistics: Which Side Are You On?. Perspect. Psychol. Sci. 6 (3), 274–290. https://doi.org/10.1177/1745691611406920 (2011).

Marsman, M. & Wagenmakers, E.-J. Bayesian benefits with JASP. Europ. J. Develop. Psychol. 14 (5), 545–555. https://doi.org/10.1080/17405629.2016.1259614 (2017).

JASP Team, ‘JASP (Version 0.14.10)’. 2019.

Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. Roy. Stat. Soc.: Ser. B (Methodol.) 57 (1), 289–300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x (1995).

Singer, T. et al. Empathy for Pain involves the affective but not sensory components of pain. Science 303 (5661), 1157–1162. https://doi.org/10.1126/science.1093535 (2004).

Valk, S. L. et al. Structural plasticity of the social brain: Differential change after socio-affective and cognitive mental training. Sci. Adv. 3 (10), e1700489. https://doi.org/10.1126/sciadv.1700489 (2017).

Caspar, E. A., Beyer, F., Cleeremans, A. & Haggard, P. The obedient mind and the volitional brain: A neural basis for preserved sense of agency and sense of responsibility under coercion. PLoS ONE 16 (10), e0258884. https://doi.org/10.1371/journal.pone.0258884 (2021).

Beyer, F., Sidarus, N., Bonicalzi, S. & Haggard, P. Beyond self-serving bias: diffusion of responsibility reduces sense of agency and outcome monitoring. Social Cognit. Affect. Neurosci. 12 (1), 138–145. https://doi.org/10.1093/scan/nsw160 (2017).

Haggard, P. Sense of agency in the human brain. Nat. Rev. Neurosci. https://doi.org/10.1038/nrn.2017.14 (2017).

Yu, H. et al. A generalizable multivariate brain pattern for interpersonal guilt. Cereb. Cortex 30 (6), 3558–3572. https://doi.org/10.1093/cercor/bhz326 (2020).

Caspar, E. A., Gishoma, D. & Magalhães de Saldanha da Gama, P. A. Obedience to authority in the aftermath of a genocide. A social neuroscience study in Rwanda. PsyArXiv https://doi.org/10.31234/osf.io/a8r7y (2021).

Gert, B. & Gert, J. The definition of morality. In The Stanford Encyclopedia of Philosophy, Fall 2020 edn (ed. Zalta, E. N.) (Metaphysics Research Lab, Stanford University, 2020). https://plato.stanford.edu/archives/fall2020/entries/morality-definition/

Gross, M. L. Jewish rescue in holland and france during the second world war: Moral cognition and collective action*. Soc. Forces 73 (2), 463–496. https://doi.org/10.1093/sf/73.2.463 (1994).

Shay, J. Moral injury. Psychoanal. Psychol. 31 (2), 182–191. https://doi.org/10.1037/a0036090 (2014).

Prunier, G. The Rwanda Crisis: History of a Genocide (C. Hurst & Co., 1998).

Acknowledgements

Emilie A. Caspar was funded by the F.R.S-FNRS.

Author information

Authors and Affiliations

Moral and Social Brain Lab, Department of Experimental Psychology, Ghent University, Henri Dunantlaan, 2, 9000, Ghent, Belgium

  • Emilie A. Caspar

Center for Research in Cognition and Neuroscience, Université Libre de Bruxelles, Brussels, Belgium

Contributions

E.A.C. developed the study concept and the method. Testing, data collection and data analysis were performed by E.A.C. E.A.C. wrote the manuscript.

Corresponding author

Correspondence to Emilie A. Caspar .

Ethics declarations

Competing interests

The author declares no competing interests.


Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Caspar, E.A. A novel experimental approach to study disobedience to authority. Sci Rep 11 , 22927 (2021). https://doi.org/10.1038/s41598-021-02334-8

Received : 21 April 2021

Accepted : 15 November 2021

Published : 25 November 2021

DOI : https://doi.org/10.1038/s41598-021-02334-8


Meta-Milgram: An Empirical Synthesis of the Obedience Experiments

Nick Haslam

School of Psychological Sciences, University of Melbourne, Parkville, Victoria, Australia

Steve Loughnan

Conceived and designed the experiments: NH SL GP. Performed the experiments: NH SL. Analyzed the data: NH SL. Contributed reagents/materials/analysis tools: GP. Wrote the paper: NH SL GP.

Milgram's famous experiment contained 23 small-sample conditions that elicited striking variations in obedient responding. A synthesis of these diverse conditions could clarify the factors that influence obedience in the Milgram paradigm. We assembled data from the 21 conditions (N = 740) in which obedience involved progression to maximum voltage (overall rate 43.6%) and coded these conditions on 14 properties pertaining to the learner, the teacher, the experimenter, the learner-teacher relation, the experimenter-teacher relation, and the experimental setting. Logistic regression analysis indicated that eight factors influenced the likelihood that teachers continued to the 450 volt shock: the experimenter's directiveness, legitimacy, and consistency; group pressure on the teacher to disobey; the indirectness, proximity, and intimacy of the relation between teacher and learner; and the distance between the teacher and the experimenter. Implications are discussed.

Introduction

The Milgram study is arguably the most iconic experiment in the history of psychology. In the fifty years since it was conducted, debate about its implications has spread far beyond the academic literature of social psychology and into the culture at large. Scholars continue to discuss whether Milgram demonstrated the capacity for evil in everyday people, the roots of the Holocaust, or the ethical limitations of psychological research. Arguments continue on the nature of authority and the meaning of obedience within Milgram's paradigm [1] and how the study's findings should be theorized [2] . Attempts have been made to replicate it with mixed results [3] , [4] and the original data have been re-examined [5] . Meanwhile, archival scholarship continues to examine the origins of Milgram's work [6] and to unearth troubling discrepancies between its public representation and how its methodology was executed in practice [7] .

The most famous of Milgram's findings is associated with the best-known version of his experiment. A substantial majority of study participants, recruited from the general public as “teachers” in a study of paired associates learning, continued to shock an unresponsive and possibly dying “learner” up to the maximum 450 volts at the behest of the “experimenter.” (Although it remains unclear and somewhat controversial how this behavior should be conceptualized, and even whether it is best described as ‘obedience’ [7] , we use that term as shorthand to describe the progression of experimental subjects to 450 volts.) This rate (62.5%) exceeded by a factor of 500 the figure estimated by psychiatrists who read the study protocol [8] . It is the shock value of this finding – the fact that a majority of ordinary people were apparently capable of destructive obedience – that has triggered the enduring interest in Milgram's work, and the desire to make sense of it.

Less well-known is the fact that this finding represents just one of 23 diverse experimental conditions that Milgram conducted, which varied enormously in levels of obedient responding. Only 18 of these were reported in the monograph that reported the study [8] . The full set of 23 conditions, numbered in the order they were carried out from August 1961 to May 1962 and in accordance with Milgram's notes from the Yale University archive, are sketched in Table 1 . Although several conditions are familiar to many psychologists, others are obscure and rarely discussed. For example, a survey of ten social psychology textbooks [9] , [10] , [11] , [12] , [13] , [14] , [15] , [16] , [17] , [18] shows that although the average text refers to 7.6 conditions, nine conditions go completely unmentioned (see Figure 1 , which lists conditions according to Milgram's numbering: see Table 1 ).

[Figure 1: which of Milgram's numbered conditions are covered in ten social psychology textbooks]

Table 1. Milgram's experimental conditions (E = experimenter, T = teacher, L = learner).

No. | Name | Brief description
1 | No feedback | Like baseline condition (2) but L does not cry out
2 | Voice feedback | Baseline condition with 1 T in separate room from L, with 1 E present
3 | Proximity | Like baseline condition but with T in same room as L, seated behind him
4 | Touch | Like baseline condition but with T holding L's hand to the shock plate
5 | Coronary trouble | Like baseline but L mentions heart trouble at beginning of the experiment and protests about it later
6 | Different actors | Identical to condition 5 but with different actors playing Learner and Experimenter
7 | Group pressure to disobey | Like baseline condition but with 3 Ts: two (confederates) defy the E, who urges the participant T to continue shocks
8 | Learner's proviso | Like baseline condition but at study outset L insists that he will only agree to take part if he can leave when he wants
9 | Group pressure to obey | Like condition 7 but the 2 confederate Ts pressure the participant T to obey the E's directions
10 | Conflicting instructions | Like baseline condition but E urges T to stop the shocks and L urges him to continue
11 | Group choice | Like condition 7 but Ts can determine shock level (lowest of their 3 bids): confederate Ts go first and always increase
12 | Role reversal | Like baseline condition but E and L swap roles
13 | Non-trigger position | Like condition 7 but participant T reads word pairs while one of the confederate Ts administers shocks
14 | Carte blanche | Like baseline condition but T decides the level of shocks on his own, without E's directions
15 | Good/bad experimenter | Like baseline condition but there are 2 Es who give conflicting directions: one to stop, one to continue
16 | Experimenter becomes learner | Like baseline condition but with 2 Es, one of whom volunteers to serve as L when original L is said to be unavailable
17 | Teacher in charge | Like baseline condition but with 2 Ts, one of whom (a confederate) is given authority to choose shock levels when E is called away
18 | No experimenter | Like baseline condition but E is called away and tells T to continue the experiment on his own, leaving E's phone number
19 | Authority from afar | Like condition 18 but E leaves pre-recorded instructions for T to follow
20 | Women | Like baseline condition but all Ts are female
21 | Expert judgment | Psychiatrists and laypeople read the baseline study protocol and estimate level of obedience
22 | Peer authority | Like condition 17 but confederate T suggests shock levels without being given authority to choose them and E leaves them to T's discretion
23 | Bridgeport | Like condition 5 but study conducted in dingy Bridgeport office rather than at Yale
24 | Intimate relationships | Like baseline condition but the L is a friend or relative of the T

An analysis of the data from the 23 study conditions could establish which of the situational properties that vary across conditions covary with participants' rates of progression to maximum voltage. However, this task is made difficult by the ad hoc nature of the conditions [6] , which compose a patchwork of methodological elements rather than a systematic investigation of well-articulated experimental factors. Milgram often designed new conditions to explore specific situational factors that might influence obedience, such as the well-known Bridgeport replication, which repeated the original Yale study in an industrial setting. These specific variations are commonly reported as pairwise comparisons of study conditions, each of which had a small sample size (usually 40, but sometimes only 20). Thus the 47.5% obedience rate in Bridgeport is usually contrasted with the 62.5% rate for the comparable condition at Yale, and interpreted as evidence that the status, legitimacy, or prestige of the setting influences obedience. As a result, it is difficult to offer any definitive conclusions about Milgram's findings based on anything more than piecemeal analysis of small sample variations within the larger experimental program.

A better way to examine the experimental factors that influence obedience in Milgram's research would be to synthesize its findings by amalgamating his conditions in a manner akin to meta-analysis and assessing moderators of obedience in the combined sample. The combined sample of the 23 conditions is a substantial 780 participants. No analysis that synthesizes conditions from Milgram's study to examine determinants of obedience has previously been conducted. Packer [5] carried out a meta-analysis of eight conditions but focused on the critical voltage levels at which disobedient participants refused to continue rather than on differences in levels of obedience across conditions. Reicher, Haslam, and Smith [19] correlated levels of obedience in 15 of the 23 conditions with ratings by social psychologists and students of the teacher's probable level of identification with experimenter and learner, but did not examine characteristics internal to the Milgram study as predictors of obedience levels.

Deciding how to systematically characterize the variations among Milgram's conditions in a way that might illuminate differences in obedience rates is no easy task. Milgram himself did not provide a systematic classification of his conditions beyond simply clustering them into those exploring the “immediacy of the victim”, “presence of an authority figure”, and “group experiments”. Other writers have identified numerous differentiating characteristics, often labeled in multiple ways. Sometimes these characteristics have been integrated into two broad components: those that connect the teacher to the experimenter and those that link the teacher to the learner. Gilovich et al. [12] refer to these sets of features as “tuning out [or in] the experimenter” and “tuning in [or out] the learner”. Other writers offer alternative distinctions. For example, Aronson et al. [9] distinguish informational and normative influences. Myers [15] proposes that the primary factors are the victim's distance, the authority's closeness and legitimacy, institutional authority, and the liberating effect of disobedient peers. Sutton and Douglas [17] sort the relevant factors into proximity of experimenter to teacher, proximity of learner to teacher, authority of the situation, authority or status of the experimenter, and group pressure.

Rather than begin with a particular classification of factors that might influence obedience levels across the study conditions, we began with an abstract schema of Milgram's experiment and attempted to fit his experimental variations into this schema. By this means we attempted to determine inductively which of a large set of experimental features are independently associated with variations in obedience. Our schema (see Figure 2 ) started from the recognition that the Milgram experiment involves three hierarchically organized roles (Experimenter, Teacher, Learner) and two relationships between them (Experimenter-Teacher and Teacher-Learner), there being no unmediated relationship between Experimenter and Learner. By “relationship” we mean any intrinsically relational aspect of their connection, such as distance or intimacy. With one exception the factors that Milgram varied across his conditions can be located within one of the three roles or the two relationships. The exception is the setting in which the experiment was conducted (i.e., Yale versus Bridgeport). The schema therefore identifies six classes of factors that Milgram manipulated across his study conditions.

[Figure 2: schema of the three roles (Experimenter, Teacher, Learner), the two relationships between them, and the experimental setting]

Having developed a reasonably comprehensive set of study properties to capture the variations among Milgram's conditions, we conducted a statistical analysis to determine which of these factors were independently associated with obedience levels. Treating Milgram's conditions as a single study with a large sample, rather than as a variegated collection of studies with small samples, allows a powerful test of the situational influences on obedience within his paradigm. The aim of our study was to determine which of the many potential influences were statistically reliable, rather than to test a particular theory of obedience or interpretation of the Milgram study. Nevertheless, any such theory or interpretation must be consistent with the determinants that are found to be efficacious.

Materials and Methods

Ethics statement

This report presents a re-analysis of publicly available, previously published data originally collected by Milgram and his colleagues in 1961 and 1962, prior to the advent of institutional review boards. No informed consent was required at that time by Yale University. Participants provided uninformed verbal consent and signed a waiver absolving Yale University of legal responsibility.

Selection of conditions

Milgram's study included 23 conditions in which participants completed a variation of the obedience protocol. Another variation, sometimes referred to as condition 21, assessed levels of obedience predicted by laypeople and psychiatrists rather than actual behavior, and is therefore not an experiment. Two conditions – numbers 10 ("conflicting instructions") and 12 ("role reversal") – differ from the others in that proceeding to the 450 V shock involves disobeying the experimenter, and because of this fundamental difference in the meaning of the dependent measure these conditions were excluded from the analysis. The analysis therefore included 21 of the 23 conditions, and 740 of the 780 (94.9%) total participants.

Four conditions with complex, two-part designs allow two alternative ways of counting the number of obedient participants. Obedience levels from part B of condition 15 (“good experimenter, bad experimenter”) were selected because part A ended at 150 V and therefore did not allow all participants the opportunity to defy the experimenter. Parts A of conditions 17 (“teacher in charge”), 18 (“no experimenter”), and 22 (“peer authority”) were selected because they all allowed participants to proceed all the way to 450 V before part B was initiated.

To determine which variations among study conditions were independently associated with differences in obedience rates, we developed a set of codes to distinguish the conditions. Development of the codes was guided by two considerations: codes should identify distinctions recognized by Milgram or other scholars, and they should be reasonably exhaustive, ideally yielding a unique configuration of codes for each condition. The latter goal was successfully met with two exceptions. Conditions 5 and 6 ("coronary trouble" and "different actors") were coded identically because they differed only in the actors playing the learner and experimenter roles. Conditions 18 and 19 ("no experimenter" and "authority from afar") were coded identically because in both conditions the experimenter departs after explaining the study and leaves a phone number on which he can be contacted, with no other significant procedural differences.

A total of 14 codes were developed and organized into our six-part schema (see Figure 2). Some codes pertained to variations in properties of the three roles in the study: the learner, the teacher, and the experimenter. Others pertained to the relations between pairs of protagonists or roles: the teacher-learner relation and the experimenter-teacher relation. Finally, one code related to the overall setting or context of the study. With one exception, all codes were dichotomous with "0" representing the more common default position and "1" representing the deviant condition, which guided the naming of the coded properties. The codes are described according to the six-part schema below, and are summarized in Tables 2 and 3, along with their associated obedience rates.

Table 2. Learner, Teacher, and Teacher-Learner relation property codes for each condition, with sample size (n) and number of obedient participants (“obey”). Learner properties: Vulnerability, Rights expression. Teacher properties: Female gender, Group pressure to obey, Group pressure to disobey. Teacher-Learner relation properties: Intimacy, Proximity, Indirectness.

| No. | Condition label | n | “obey” | Vulnerability | Rights expression | Female gender | Group pressure to obey | Group pressure to disobey | Intimacy | Proximity | Indirectness |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | No feedback | 40 | 26 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | Voice feedback | 40 | 25 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 3 | Proximity | 40 | 16 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 |
| 4 | Touch | 40 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0 |
| 5 | Coronary trouble | 40 | 26 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 6 | Different actors | 40 | 20 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 7 | Group pressure to disobey | 40 | 4 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 |
| 8 | The learner's proviso | 40 | 16 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 |
| 9 | Group pressure to obey | 40 | 29 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| 10 | Conflicting instructions | 20 | 20 | Not included in analysis | | | | | | | |
| 11 | Group choice | 40 | 7 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| 12 | Role reversal | 20 | 20 | Not included in analysis | | | | | | | |
| 13 | Non-trigger position | 40 | 37 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| 14 | Carte blanche | 40 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 15 | Good/bad experimenter | 20 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 16 | Experimenter → learner | 20 | 13 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 17 | Teacher in charge | 20 | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 18 | No experimenter | 40 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 19 | Authority from afar | 40 | 15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 20 | Women | 40 | 26 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
| 22 | Peer authority | 20 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 23 | Bridgeport | 40 | 19 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 24 | Intimate relationships | 20 | 3 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
Table 3. Experimenter, Experimenter-Teacher relation, and Setting property codes for each condition, with sample size (n) and number of obedient participants (“obey”). Experimenter properties: Number, Illegitimacy, Non-directiveness, Inconsistency. Experimenter-Teacher relation property: Distance. Setting property: Low status.

| No. | Condition label | n | “obey” | Number | Illegitimacy | Non-directiveness | Inconsistency | Distance | Low status |
|---|---|---|---|---|---|---|---|---|---|
| 1 | No feedback | 40 | 26 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | Voice feedback | 40 | 25 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | Proximity | 40 | 16 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | Touch | 40 | 12 | 0 | 0 | 0 | 0 | 0 | 0 |
| 5 | Coronary trouble | 40 | 26 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | Different actors | 40 | 20 | 0 | 0 | 0 | 0 | 0 | 0 |
| 7 | Group pressure to disobey | 40 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
| 8 | The learner's proviso | 40 | 16 | 0 | 0 | 0 | 0 | 0 | 0 |
| 9 | Group pressure to obey | 40 | 29 | 0 | 0 | 0 | 0 | 0 | 0 |
| 10 | Conflicting instructions | 20 | 20 | Not included in analysis | | | | | |
| 11 | Group choice | 40 | 7 | 0 | 0 | 1 | 0 | 0 | 0 |
| 12 | Role reversal | 20 | 20 | Not included in analysis | | | | | |
| 13 | Non-trigger position | 40 | 37 | 0 | 0 | 0 | 0 | 0 | 0 |
| 14 | Carte blanche | 40 | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
| 15 | Good/bad experimenter | 20 | 4 | 1 | 0 | 0 | 1 | 0 | 0 |
| 16 | Experimenter → learner | 20 | 13 | 1 | 0 | 0 | 0 | 0 | 0 |
| 17 | Teacher in charge | 20 | 11 | 0 | 1 | 0 | 0 | 1 | 0 |
| 18 | No experimenter | 40 | 9 | 0 | 0 | 0 | 0 | 1 | 0 |
| 19 | Authority from afar | 40 | 15 | 0 | 0 | 0 | 0 | 1 | 0 |
| 20 | Women | 40 | 26 | 0 | 0 | 0 | 0 | 0 | 0 |
| 22 | Peer authority | 20 | 4 | 0 | 1 | 1 | 0 | 1 | 0 |
| 23 | Bridgeport | 40 | 19 | 0 | 0 | 0 | 0 | 0 | 1 |
| 24 | Intimate relationships | 20 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |

Learner properties

Two codes referred to properties of the learner. “Vulnerability” refers to three conditions (5 [“coronary trouble”], 6 [“different actors”] & 23 [“Bridgeport”]) in which the learner mentions heart trouble at the beginning of the experiment, augmenting the heart-related concerns that are part of the standard script in the other conditions. Thus conditions 5, 6, and 23 were coded “1” and all other conditions coded “0”. “Rights expression” refers specifically to condition 8 (“learner's proviso”), where at the outset the learner says he will only participate if he is able to leave when he wants. Condition 8 was therefore coded “1” and all others “0”.

Teacher properties

Three codes referred to properties of the teacher role. “Female gender” pertains to the single condition (20 [“women”]) that employed female participants, so this condition was coded “1” and all others “0”. “Group pressure to obey” refers to the distinction between two conditions (9 [“group pressure to obey”] & 11 [“group choice”]) in which multiple teachers (actually confederates) exert pressure on the participant teacher to escalate the shocks (coded “1”) and all other conditions (coded “0”), where no such pressure was exerted. “Group pressure to disobey” contrasted one condition (7 [“group pressure to disobey”]) involving pressure within the teacher group against obeying (coded “1”) and all other conditions (coded “0”). These group pressure variants are discussed in terms of “normative influence”, “social consensus”, or “social support” by some writers on the Milgram study.

Experimenter properties

Four experimenter properties were coded. “Number” distinguishes two conditions (15 [“good experimenter, bad experimenter”] & 16 [“experimenter becomes learner”]) employing two experimenters, both coded “1”, from all others, coded “0”. (Condition 18, entitled “no experimenter,” actually has an experimenter who meets the participant before being called away.) “Illegitimacy” – referred to as low experimenter “status” or “authority” by some writers – distinguishes two conditions (17 [“teacher in charge”] & 22 [“peer authority”], both coded “1”) in which an apparent participant (actually a confederate) takes over the experimenter role, from all other conditions, coded “0”, where the experimenter is identified as a scientist or researcher. “Non-directiveness” distinguishes three conditions (11 [“group choice”], 14 [“carte blanche”] & 22 [“peer authority”], all coded “1”) in which no explicit direction is given to increase the shocks (shock level is instead left to the discretion of the participants) from all other conditions, where such a direction is always given (coded “0”). Finally, “Inconsistency” separates one condition (15 [“good experimenter, bad experimenter”]) in which the experimenter role is internally conflicted (coded “1”) from all other conditions (coded “0”), where the role is consistent, most often because there is a single, unwavering experimenter.

Teacher-learner relation properties

Three properties of the relationship between teacher and learner were coded. “Intimacy” distinguishes the little-known condition 24 (“intimate relationships”), in which the learner was a friend or relative of the teacher (coded “1”), from all other conditions (coded “0”), where the two were strangers. “Proximity” – sometimes referred to as “immediacy” – captures degrees of distance between teacher and learner. Least proximal is condition 1 (“no feedback”, coded “0”), where the learner is in an adjoining room and does not cry out, followed by the baseline condition 2 (“voice feedback”, coded “1”) in which the learner is in an adjoining room but screams. Condition 3 (“proximity”, coded “2”) has the learner seated close behind the teacher in the same room, and condition 4 (“touch”, coded “3”) has the teacher holding the learner's hand to the shock-plate. All other conditions, which followed the baseline condition in this regard, were coded “1”. Finally, the “Indirectness” code distinguished condition 13 (“non-trigger position”, coded “1”), where the participant is a teacher who reads the word pairs while another administers the shocks, from all other conditions (coded “0”), where the teacher's role in shocking the learner was unmediated.

Experimenter-teacher relation properties

One code, “Distance”, captured variation among conditions in the relation between experimenter and teacher. Four conditions in which the experimenter absents himself during the study (17 [“teacher in charge”], 18 [“no experimenter”], 19 [“authority from afar”] and 22 [“peer authority”]), coded “1”, are distinguished from all other conditions, coded “0”, where the experimenter is physically present in the experimental situation throughout.

Setting property

A final code pertained to the setting or context of the experiment, distinguishing condition 23 (“Bridgeport”), conducted in an industrial neighborhood (coded “1”), from all other conditions (coded “0”), which were carried out on Yale University's ivied campus. The code was called “Low status”, but other writers have referred to it as low “prestige”, “legitimacy”, “institutional authority”, or “authority of the situation”.

All coding was based on published descriptions of the conditions and on Milgram's original notes, accessed by the third author at the Yale University archives. The original, hand-written data summary sheets were also used to confirm obedience rates for each condition.
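To make the coding scheme concrete, the sketch below shows one way the condition-level information in Tables 2 and 3 could be represented for analysis. It is purely illustrative: the original authors did not publish analysis code, and the class, field, and variable names here (Condition, BASELINE_CODES, CONDITIONS) are our own. Only two conditions are entered; the remaining analyzed conditions would follow the same pattern.

```python
# Illustrative only: a compact representation of two of Milgram's conditions
# and their coded properties, transcribed from Tables 2 and 3 above. The
# original re-analysis did not publish code; all names here are our own.
from dataclasses import dataclass
from typing import Dict

@dataclass
class Condition:
    number: int            # Milgram's condition number
    label: str             # condition label
    n: int                 # participants run in the condition
    obedient: int          # number who delivered the 450 V shock
    codes: Dict[str, int]  # the 14 coded properties (0/1; proximity 0-3)

# The default (baseline) configuration: all deviant codes 0, proximity 1.
BASELINE_CODES: Dict[str, int] = {
    "vulnerability": 0, "rights_expression": 0, "female_gender": 0,
    "group_pressure_obey": 0, "group_pressure_disobey": 0,
    "intimacy": 0, "proximity": 1, "indirectness": 0,
    "number": 0, "illegitimacy": 0, "non_directiveness": 0,
    "inconsistency": 0, "distance": 0, "low_status": 0,
}

CONDITIONS = [
    # Condition 2, "voice feedback": the baseline condition (25/40 obedient).
    Condition(2, "Voice feedback", n=40, obedient=25, codes=dict(BASELINE_CODES)),
    # Condition 13, "non-trigger position": indirect teacher-learner relation.
    Condition(13, "Non-trigger position", n=40, obedient=37,
              codes={**BASELINE_CODES, "indirectness": 1}),
    # ...the other 19 analyzed conditions would be entered the same way.
]
```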

Data file construction

A data file (N = 740) was reconstructed using the known sample sizes for each condition (n = 40 for 16 conditions, n = 20 for 5 conditions) and the number of participants in each condition who proceeded to deliver the 450 V shock. Obedience was coded dichotomously as delivering this highest shock, consistent with standard practice and in recognition of the marked irregularity of the distribution of highest voltages delivered, which renders continuously scored voltage level statistically problematic as a dependent measure.
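A minimal sketch of this reconstruction step is given below, building on the illustrative CONDITIONS list above. The expansion logic is straightforward; which individual rows receive the obedient code is arbitrary, since only condition-level counts are known.

```python
# Sketch of the data-file reconstruction: expand each condition's sample size
# and obedient count into one row per participant (1 = delivered the 450 V
# shock, 0 = defied the experimenter earlier). Uses CONDITIONS from the
# previous sketch.
import pandas as pd

def reconstruct_participants(conditions) -> pd.DataFrame:
    rows = []
    for cond in conditions:
        for i in range(cond.n):
            rows.append({
                "condition": cond.number,
                "obedient": int(i < cond.obedient),
                **cond.codes,  # attach the condition-level codes to every row
            })
    return pd.DataFrame(rows)

data = reconstruct_participants(CONDITIONS)
# With all 21 analyzed conditions entered, len(data) == 740 and
# data["obedient"].sum() == 323 (43.6% obedient overall).
```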

Across the 21 conditions the proportion of obedient participants was 323/740 (43.6%). Table 4 presents rates of obedience as a function of each dichotomous code. Eight codes were associated with differential rates of obedience. Obedience rates were higher for more vulnerable learners (p = .011), for female teachers (p = .005), and for more indirect teacher-learner relations (p < .001). Rates were lower when there was group pressure from fellow teachers to disobey (p < .001), when the teacher-learner relation was more intimate (p = .009), when the experimenter was non-directive (p < .001) and inconsistent (p = .031), and when the experimenter-teacher relation was more distant (p = .007). A comparable test of the bivariate relationship between obedience and the one non-dichotomous code, “Proximity”, showed that greater proximity between teacher and learner was associated with lesser obedience (Spearman r = −.37, p < .001).

Table 4. Obedience rates as a function of each dichotomous code, with chi-square tests.

| Code | Obedience rate (coded 1) | Obedience rate (coded 0) | χ² | p |
|---|---|---|---|---|
| Number | 0.43 | 0.44 | 0.02 | .879 |
| Illegitimacy | 0.38 | 0.44 | 0.65 | .420 |
| Non-directiveness | 0.12 | 0.49 | 47.09 | <.001 |
| Inconsistency | 0.20 | 0.44 | 4.67 | .031 |
| Female gender | 0.65 | 0.42 | 7.84 | .005 |
| Group pressure to obey | 0.45 | 0.43 | 0.07 | .796 |
| Group pressure to disobey | 0.10 | 0.46 | 19.47 | <.001 |
| Vulnerability | 0.54 | 0.42 | 6.44 | .011 |
| Rights expression | 0.41 | 0.44 | 0.23 | .632 |
| Distance | 0.33 | 0.46 | 7.24 | .007 |
| Intimacy | 0.15 | 0.44 | 6.86 | .009 |
| Indirectness | 0.93 | 0.41 | 41.03 | <.001 |
| Low status | 0.48 | 0.43 | 0.26 | .614 |
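In outline, the bivariate comparisons in Table 4 can be computed with standard chi-square and rank-correlation routines. The sketch below assumes the reconstructed data frame from the previous sketch contains all 21 analyzed conditions; the paper does not say whether its χ² values used a continuity correction, so exact agreement with Table 4 is not guaranteed.

```python
# Sketch of the bivariate analyses behind Table 4: obedience rate by code
# level, a chi-square test for each dichotomous code, and a Spearman rank
# correlation for the ordinal proximity code. Assumes the full `data` frame.
import pandas as pd
from scipy.stats import chi2_contingency, spearmanr

DICHOTOMOUS_CODES = [
    "vulnerability", "rights_expression", "female_gender",
    "group_pressure_obey", "group_pressure_disobey", "intimacy",
    "indirectness", "number", "illegitimacy", "non_directiveness",
    "inconsistency", "distance", "low_status",
]

def bivariate_table(data: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for code in DICHOTOMOUS_CODES:
        crosstab = pd.crosstab(data[code], data["obedient"])
        chi2, p, _, _ = chi2_contingency(crosstab, correction=False)
        rows.append({
            "code": code,
            "rate_coded_1": data.loc[data[code] == 1, "obedient"].mean(),
            "rate_coded_0": data.loc[data[code] == 0, "obedient"].mean(),
            "chi2": chi2,
            "p": p,
        })
    return pd.DataFrame(rows)

# Proximity (coded 0-3) is handled separately with a rank correlation.
rho, p_proximity = spearmanr(data["proximity"], data["obedient"])
```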

In view of the redundancy among the predictor codes, a logistic regression analysis was conducted to determine which condition properties were independently associated with obedience levels. “Proximity” was coded in increasing order of closeness from 0 to 3. Although linear, quadratic, and cubic effects for this variable were estimated within the model, only the linear effect was of interest. The model accounted for substantial variation in obedience (Nagelkerke R² = 0.30, p < .01) and eight of the 14 coded variables independently predicted this outcome. Findings of the analysis are summarized in Table 5, where positive values of B signify that conditions higher in the property named by the code tend to have higher rates of obedience, and negative values signify the reverse.

Table 5. Logistic regression predicting obedience from the coded condition properties.

| Code | B (SE) | Wald | d.f. | p |
|---|---|---|---|---|
| Number | 0.32 (0.55) | 0.34 | 1 | .560 |
| Illegitimacy | 1.37 (0.47) | 8.50 | 1 | .004 |
| Non-directiveness | −2.79 (0.39) | 50.45 | 1 | <.001 |
| Inconsistency | −2.01 (0.73) | 7.56 | 1 | .006 |
| Female gender | 0.32 (0.44) | 0.53 | 1 | .467 |
| Group pressure to obey | 0.78 (0.40) | 3.77 | 1 | .052 |
| Group pressure to disobey | −2.49 (0.60) | 17.04 | 1 | <.001 |
| Vulnerability | 0.06 (0.37) | 0.00 | 1 | .987 |
| Rights expression | −0.70 (0.44) | 2.57 | 1 | .109 |
| Distance | −1.14 (0.38) | 8.92 | 1 | .003 |
| Intimacy | −2.03 (0.69) | 8.61 | 1 | .003 |
| Indirectness | 2.22 (0.67) | 10.98 | 1 | .001 |
| Proximity (overall) | | 12.00 | 3 | .007 |
| Proximity (linear) | −1.14 (0.34) | 11.55 | 1 | .001 |
| Proximity (quadratic) | −0.59 (0.32) | 0.03 | 1 | .855 |
| Proximity (cubic) | 0.14 (0.31) | 0.21 | 1 | .648 |
| Low status | −0.40 (0.39) | 1.07 | 1 | .301 |
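A rough sketch of the simultaneous logistic regression is shown below, again assuming the full reconstructed data frame and the helper names from the earlier sketches. The paper does not specify how the proximity contrasts were coded, so the raw linear, quadratic, and cubic powers used here are an assumption (orthogonal polynomial contrasts would be an alternative), and statsmodels reports Wald z statistics rather than the Wald χ² values in Table 5 (the χ² value is the square of z).

```python
# Sketch of the simultaneous logistic regression behind Table 5, with
# Nagelkerke's pseudo-R^2. Assumes `data` and DICHOTOMOUS_CODES from the
# sketches above; the proximity contrast coding is our own assumption.
import numpy as np
import statsmodels.api as sm

def nagelkerke_r2(result, n: int) -> float:
    """Nagelkerke pseudo-R^2 computed from a fitted statsmodels Logit result."""
    cox_snell = 1.0 - np.exp((2.0 / n) * (result.llnull - result.llf))
    max_cox_snell = 1.0 - np.exp((2.0 / n) * result.llnull)
    return cox_snell / max_cox_snell

X = data[DICHOTOMOUS_CODES].astype(float).copy()
# Linear, quadratic, and cubic terms for the ordinal proximity code (0-3);
# raw powers are used here for simplicity, which leaves them correlated but
# still jointly estimable.
X["proximity_linear"] = data["proximity"].astype(float)
X["proximity_quadratic"] = data["proximity"].astype(float) ** 2
X["proximity_cubic"] = data["proximity"].astype(float) ** 3
X = sm.add_constant(X)

fit = sm.Logit(data["obedient"], X).fit(disp=0)
print(fit.summary())  # coefficients (B), standard errors, and Wald z tests
print("Nagelkerke R^2:", round(nagelkerke_r2(fit, len(data)), 3))
```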

Table 5 indicates that three of the four Experimenter variables were associated with obedience. Higher obedience resulted when experimenters gave authoritative directions rather than leaving shock levels to teachers (p < .001), and lower obedience occurred when their directions were inconsistent (i.e., differing between experimenters: p = .006). Surprisingly, obedience rates were somewhat higher when the authority was illegitimate (i.e., a peer rather than a researcher: p = .004), an effect that might reflect collinearity among predictors given the lack of bivariate association between illegitimacy and obedience shown in Table 4. The presence of multiple experimenters did not influence obedience levels (p = .56).

Similarly mixed findings were obtained for the three Teacher variables, only one of which had a significant effect. Pressure to disobey from a group of teachers substantially decreased obedience (p < .001). However, pressure to obey from a group of teachers only marginally increased it (p = .052) and teacher gender had no effect (p = .467), the higher rate of obedience obtained for female teachers in the bivariate analysis disappearing when other variables were statistically controlled. Neither of the two Learner variables – vulnerability (p = .987) or rights expression (p = .109) – had significant effects on obedience, the bivariate vulnerability association also disappearing when other variables were held constant.

Turning to the relationship and setting variables, distance between the Experimenter and Teacher had an effect (p = .003), such that greater distance between them was associated with lesser obedience. All three Teacher-Learner relation variables had significant effects: conditions in which the teacher and learner were more proximal (p = .001), more intimate (p = .003), and more directly related (p = .001) had lower rates of obedient responding. Finally, the Setting variable, “low status”, was unrelated to obedience (p = .301).

Although the six code groupings – learner, teacher, experimenter, teacher-learner relation, experimenter-teacher relation, and setting properties – contain different numbers of codes, the relative magnitude of their effects offers some insight into the importance of these property types within the set of conditions that Milgram employed. Table 6 presents Nagelkerke R² values for each set of codes, which suggest that three property types – Experimenter, Teacher-Learner relation, and Teacher – are pre-eminent determinants of obedience rates across Milgram's 21 study conditions.

Table 6. Predictive power (Nagelkerke R²) of each set of codes.

| Code set | Number of variables | Nagelkerke R² |
|---|---|---|
| Experimenter (E) | 4 | 0.116 |
| Experimenter-Teacher relation (E-T) | 1 | 0.013 |
| Teacher (T) | 3 | 0.052 |
| Teacher-Learner relation (T-L) | 3 | 0.110 |
| Learner (L) | 2 | 0.012 |
| Setting | 1 | <0.001 |
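The paper does not state exactly how the set-level values in Table 6 were obtained. One plausible reading, sketched below under that assumption, is that each set of codes was entered into its own logistic model and its Nagelkerke R² recorded; block-wise incremental R² values would be another reasonable reading. The sketch reuses data and nagelkerke_r2 from the regression example above.

```python
# Hypothetical reconstruction of Table 6: fit one logistic model per set of
# codes and record its Nagelkerke R^2. Reuses `data` and nagelkerke_r2 from
# the regression sketch above; proximity is entered as a single ordinal term
# here for simplicity.
import statsmodels.api as sm

CODE_SETS = {
    "Experimenter (E)": ["number", "illegitimacy", "non_directiveness", "inconsistency"],
    "Experimenter-Teacher relation (E-T)": ["distance"],
    "Teacher (T)": ["female_gender", "group_pressure_obey", "group_pressure_disobey"],
    "Teacher-Learner relation (T-L)": ["intimacy", "proximity", "indirectness"],
    "Learner (L)": ["vulnerability", "rights_expression"],
    "Setting": ["low_status"],
}

for set_name, columns in CODE_SETS.items():
    X_set = sm.add_constant(data[columns].astype(float))
    fit_set = sm.Logit(data["obedient"], X_set).fit(disp=0)
    print(f"{set_name}: Nagelkerke R^2 = {nagelkerke_r2(fit_set, len(data)):.3f}")
```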

Our analysis indicates that eight properties of Milgram's study conditions were independently associated with rates of obedient responding. These properties are diverse, pertaining to aspects of two of the three roles in the study – Teacher and Experimenter – as well as to both of the relationships between roles: Teacher-Experimenter and Teacher-Learner. Although our study brackets off the issue of how obedience within the Milgram study should be understood and takes no theoretical position on that issue, the number and diversity of these properties present a challenge for any encompassing account of obedience in the Milgram paradigm.

The significant predictors of obedience in our analysis are clearly disparate. The most powerful effects, in decreasing order, are the Experimenter's non-directiveness, the Teachers' group pressure to disobey, the Teacher-Learner relation's proximity and indirectness, the Teacher-Experimenter relation's distance, the Teacher-Learner relation's intimacy, and the Experimenter's illegitimacy and inconsistency. Several of these effects are well-established within the literature on the Milgram study, such as proximity, group pressure to disobey, and distance between Experimenter and Teacher. Others have been largely overlooked.

For example, few of the textbooks whose coverage was sampled in Figure 1 recognized the importance of the Experimenter's directiveness vs. non-directiveness, failing to note the very low levels of obedience in the “Carte blanche” and “Group choice” conditions. Proceeding to the 450 V shock rarely occurs if the authority figure does not give explicit commands to escalate the shocks, even if pressure to escalate is coming from fellow teachers (i.e., in the “Group choice” condition). Few textbooks noted the role of inconsistency among Experimenters in reducing obedience, neglecting to cite the “Good experimenter/bad experimenter” condition, where a benign experimenter almost completely overrode the power of the standard “bad” experimenter to induce compliance. No textbooks in our sample recognized the role of the indirectness of the relation between Teacher and Learner, failing to mention the “Non-trigger position” condition and its very high rates of obedience. Similarly, no textbooks acknowledged how the intimacy of the relationship between Teacher and Learner reduces obedience. Participants shocked learners with whom they had an existing social bond at less than one quarter of the rate at which they shocked strangers. These four factors deserve greater attention in commentaries on Milgram's work.

Just as some factors that significantly predict obedience have been overlooked, other well-publicized factors were not significant predictors in our analysis or had unexpected effects. In particular, the analysis of textbook coverage shows that Milgram's replication of his study in Bridgeport, and his examination of the role of experimenter legitimacy through the “Peer authority” condition, attract substantial attention. However, the status of the setting was not associated with obedience in our systematic analysis of the 21 conditions, with levels similar regardless of the prestige of the experimental situation. Moreover, the illegitimacy of the authority was associated with higher obedience levels. Although this finding may be unreliable, it clearly contradicts the expectation that more legitimate authorities generate greater obedience in the Milgram paradigm. Although obedience was low (20%) in the “Peer authority” condition, our analysis suggests that this was probably due to the non-directive instruction in that condition rather than to the illegitimacy of the person proposing the shock levels (i.e., a peer rather than an identified researcher). In “Teacher in charge”, another condition where a peer was drafted into the authority role, obedience rates were a relatively high 55%, challenging the standard interpretation that peers, as illegitimate authorities, are not obeyed. In short, the importance of the prestige of the situation and the legitimacy of the authority may have been over-estimated in past interpretations of Milgram's work.

Such interpretations have often distinguished two components of the experimental situation. On the one hand, the Experimenter exerts a more or less authoritative influence on the Teacher, and on the other, the Learner generates more or less compassion or moral concern in that Teacher. The relative strength of these two influences is taken to determine rates of obedience, whether it is understood in terms of the Teacher's relative identification with Experimenter and Learner [19] or “tuning them in (or out)” [15]. Milgram's conditions cannot definitively answer which of these two components is the more important determinant of obedience in any general sense, as they may not comprehensively manipulate the range of properties that might capture the components, or manipulate them in equally powerful ways.

Nevertheless, our analysis indicates that within the confines of 21 of Milgram's conditions, the two components are fairly similar in strength. As Table 6 shows, properties on the Experimenter side of the Teacher (i.e., Experimenter and Experimenter-Teacher relation) have similar overall predictive power to those on the Learner side (i.e., Learner and Teacher-Learner relation), with a small advantage to the Experimenter side. This general finding implies that any interpretation of the Milgram study that neglects one component or the other – that sees the study exclusively through the lens of the Experimenter's influence on the Teacher or the Teacher's disengagement from the Learner, for example – must be incomplete.

One limitation of our analysis is that by focusing on objective properties of the experimental situation it neglects the participant's interpretation of that situation and their understanding of the significance of their behavior. The ambiguity of the situation and the apparent skepticism about the experimental set-up among many participants [7] both raise questions about how ‘obedience’ – and variations in it across conditions – should be understood within the Milgram paradigm. For example, Milgram's own notes suggest that some conditions were difficult for participants to take seriously. Their degree of belief or disbelief, unmeasured in our analysis, may well have altered the meaning and extent of their ‘obedient’ responding. A second, unavoidable limitation of our analysis is that it could not capture some objective properties of the experimental situation. As Gibson [20] and Perry [7] have shown, the experimenter frequently did not adhere to the published details of the study protocol. Tape recordings show, for example, that he often went beyond the standard ‘four prods’ in ways that are likely to have influenced the delivery of shocks by participants.

Although it is over five decades old, the Milgram study is of more than historical significance. Its meanings remain elusive and continue to generate disagreement, stimulated by new theoretical perspectives and by revelations of methodological weaknesses, so attempts to clarify what the study teaches us remain important. Whether or not it illuminates the influences on obedience in any general sense, we believe that our analysis helps to extract and systematize some of the patterns within Milgram's complex set of findings. These patterns may help to guide and constrain future interpretations of his study.

Funding Statement

The authors have no support or funding to report.

How The Milgram Experiment Showed That Anyone Could Be A Monster

The Milgram experiment tested its subjects' willingness to harm other people for the sake of obeying authority, and it ended with truly shocking results.

Yale University Manuscripts and Archives: Participants in one of Stanley Milgram’s experiments that examined obedience to authority.

In April 1961, former Nazi official and SS Colonel Adolf Eichmann went on trial for crimes against humanity in an Israeli courtroom.

Throughout his trial, which ended with a conviction and death sentence, Eichmann had tried to defend himself on the grounds that he was “only following orders.” He asserted that he was not a “responsible actor,” but merely a servant of those who were, and so he should be held morally blameless for just doing his duties, even if they included organizing the logistics of shipping people to the Nazi camps during the war.

This defense didn’t work in court and he was convicted on all counts. However, the idea of an unwilling-but-obedient participant in mass murder captured the interest of Yale psychologist Stanley Milgram, who wanted to know how easily morally normal people could be convinced to commit heinous crimes after an authority figure ordered them to do so.

To examine the matter, Milgram polled dozens of people for their opinions. Without exception, every group he asked for predictions thought it would be difficult to get people to commit serious crimes just by ordering them to.

Only three percent of the Yale students Milgram polled said that they thought an average person would willingly kill a stranger just because an authority figure pressured them into it. A poll of colleagues on the staff of a medical school showed similar results, with only around four percent of faculty psychologists guessing test subjects would knowingly kill a person if they were coerced into it by someone who looked like they were in charge.

In July 1961, Milgram set out to discover the truth for himself by devising an experiment, the results of which are still controversial to this day.

What Was The Milgram Experiment?

Wikimedia Commons: An advertisement to participate in the Milgram experiment in 1961.

The experiment Milgram set up required three people. One person, the test subject, would be told he was participating in a memorization experiment, and that his role would be to administer a series of electric shocks to a stranger whenever that stranger failed to correctly answer a question.

In front of the subject was a long board with 30 switches labeled with increasing voltage levels, up to 450 volts. The last three switches had high-voltage warnings pasted on them and appeared to be very dangerous.

The second participant was an actor and confederate, who would chat with the test subject before moving to an adjacent room and connecting a tape recorder to the electrical switches so that it could play recorded shouts and screams that sounded like his reactions to getting “shocked.”

The third participant was a man in a white lab coat, who sat behind the test subject and pretended to administer the test to the actor in the next room.


What Happened When The “Test-Takers” Failed Their Tests?

Wikimedia Commons: Illustration of the setup of the Milgram experiment. The experimenter (E) convinces the subject (“Teacher,” T) to give what he believes are painful electric shocks to another subject, who is actually an actor (“Learner,” L).

At the beginning of the experiment, the test subject would be given a quick shock from the apparatus on its lowest power level. Milgram included this to ensure that the subject knew how painful the shocks were and to make the pain of the shocks “real” to the subject before proceeding.

As the experiment got underway, the administrator would give the unseen confederate a series of memorization problems requiring an answer. When the actor gave the wrong answer, the administrator would instruct the subject to flip the next switch in the sequence so that they were seemingly delivering progressively higher-voltage shocks to the confederate.

When the switch was thrown, the tape recorder would play a yelp or a scream, and at higher levels, the confederate would start pounding on the wall and demanding to be set free. The actor was also given scripted lines about having a heart condition to make the situation seem very urgent.

After the seventh shock, he would go completely silent to give the impression that he had either passed out or died. When this happened, the administrator would continue on with his questions.

Getting no response from the “unconscious” confederate, the administrator told the subject to apply higher and higher shocks, up to the last, 450-volt switch, which was colored red and labeled as potentially lethal.

The Results Of The Milgram Experiment

Yale University Manuscripts and Archives: Participants in the Milgram experiment.

The groups that Milgram polled before the experiments began had predicted that just three or four percent of test subjects could be convinced to deliver a potentially fatal electric shock to an unwilling participant.

But results showed that 26 of the 40 subjects — 65 percent — went all the way up to 450 volts during the experiment. Furthermore, all of them had been willing to deliver 300 volts to a screaming and protesting subject.

All of the subjects had raised some kind of objection during the test. However, Milgram was astounded to find out that, apparently, almost two-thirds of normal people would be willing to kill a person with electricity if a man in a lab coat told them, “It is imperative that you continue.”

Accordingly, after the initial experiment was over, Milgram organized more tests with some variables controlled to see what importance different factors had in affecting people’s resistance to authority.

He found that people are vastly more likely to carry out atrocious acts if they feel like they have permission from some recognized authority (such as a scientist in a lab coat or a senior officer in the SS) and that participants’ willingness to shock increases as they are made to feel that the authority has taken moral responsibility for the actions they commit.

Yale University Manuscripts and Archives: Most participants in the Milgram experiment apparently believed they were delivering electric shocks to strangers.

Here are some other findings from the Milgram experiment:

  • When instructions to shock are given by phone, rather than having the authority figure physically present in the room, compliance dropped to 20.5 percent, and many “compliant” subjects were actually cheating; they would skip shocks and pretend to have thrown the switch when they hadn’t.
  • When the subjects were made to press the victim’s hand down onto a shock plate, thus eliminating the distance of throwing an impersonal switch, compliance dropped to 30 percent.
  • When the subjects were put in the position of ordering other people — confederates who were part of the experiment staff — to throw the switches, compliance increased to 95 percent. Putting one person between the subject and the victim made it so that 9.5 out of 10 people went all the way up to the presumed-fatal shock.
  • When subjects were given “role models” to set an example of resistance, in this case, confederates who raised objections and refused to participate, compliance plunged to only 10 percent. It’s as if the subjects really wanted to stop, but needed leadership to grant moral permission to disobey an authority figure.
  • When the administrator participated without the lab coat, that is, without a uniform indicating authority, compliance fell to 20 percent.
  • Experiments held at locations separate from the prestigious Yale campus yielded less compliance (only 47.5 percent), as if the perceived status of the surroundings had some conforming influence on the subjects.

The Legacy Of The Milgram Experiment

Yale University Manuscripts and Archives: Some believed that Stanley Milgram’s experiment was unethical, and others thought that his results said more about the types of people who participated in psychology experiments at Yale than people in general.

They say nothing in the social sciences is ever proven, and the disturbing results of Milgram’s experiment are no exception. Milgram’s work with his subjects faced criticism from other experts in the psychology community almost as soon as his results were published.

One of the more serious charges leveled against Milgram’s paper was the original sin of social science research: sample bias.

It was convincingly argued that even though the 40 local men Milgram had recruited for his research varied in backgrounds and professions, they represented a special case, that such a small group of white males may not be the most representative sample of humanity, and that Milgram’s work therefore had limited value in understanding human psychology.

In fact, critics argued, Milgram may have discovered something alarming about the kind of person who participates in psychology experiments at Yale, but such people would be expected to be more conformist and eager to please authority figures than a truly representative sample of the populace.

This critique was lent some weight when later researchers had trouble reproducing Milgram’s findings. Other investigators, using less-biased samples drawn from other groups in the population, found significantly less compliance with the administrators’ requests. Many reported meeting stiff resistance from non-college-educated and working-class people.

Yale University Manuscripts and Archives: The Milgram experiment is still considered one of history’s most controversial psychology experiments.

The results seemed to plot a curve of compliant behavior, from the very top of society (wealthy, white, upper-class overachievers) to the lowest (unemployed, racially diverse school dropouts).

Those who had risen the highest seemed more eager to shock strangers to death when a man in a lab coat asked them to. It was theorized that others who may have had negative experiences with authorities were generally willing to argue and quit the experiment before things went too far.

Though some continue to muse about the results of the Milgram experiment and others have performed different versions of it in recent years, it’s unlikely anyone will ever completely replicate it again in its original form.

The intense psychic stress that test subjects have to be put through, as they are led to believe they’re committing what amounts to murder, violates many of the ethical restrictions now in place for human research. Another problem is the notoriety of the experiment — too many people know about the experiment now to ensure honest performance from the test group.

Whatever the Milgram experiment’s faults, and however hard it might be in the future to make sense of its findings, the fact that so many seemingly normal men felt compelled to violate their own conscience to obey authority is enough to send chills up the spines of many people even today.



