In this post I step into the world of Stanley Milgram’s provocative obedience experiments at Yale University, where ordinary individuals were thrust into a moral quandary. Participants in these experiments were tasked with administering escalating electric shocks to a fellow participant under the guise of a learning experiment, facing a stark test of obedience versus personal conscience. This post explores the complexities of human behavior under authority, revealing profound insights and ethical dilemmas that continue to resonate today.
In Milgram’s shock study, participants were recruited at Yale University for research that was outwardly about learning. Each participant was introduced to another “participant” who was in fact a confederate assisting in the plot. The experimenter pretended to pick one of the two at random to be the learner, though the draw was rigged so that the confederate always got that role. Participants were then told that the study tested whether someone motivated to avoid electric shocks would learn better: the learner had to memorise a list of words, and whenever he made a mistake the participant was to administer a shock. In front of the participants stood a large machine with switches corresponding to increasingly high-voltage electric shocks. The confederate was seated out of sight, so the participants could only hear him through a microphone. At first the confederate did well at memorising the words, but he began making mistakes as the task progressed. The experimenter instructed the participants to shock the confederate, and all of them did. The switches were labelled as delivering a slight shock at first, but as the confederate kept making mistakes the voltage increased step by step; the switches were divided into seven levels: “slight shock”, “moderate shock”, “strong shock”, “very strong shock”, “intense shock”, “extreme intensity shock”, and “danger: severe shock”. Only a few participants refused to proceed at the last switch. The confederate expressed aches and pains throughout the procedure, and at the higher levels of shock he began begging the participants to stop. In one version he even complained of a heart problem; nevertheless, more than half of the participants carried on with the experiment. Milgram’s result showed that two-thirds of the American participants were willing to shock someone almost to death when told to do so by a figure of authority (Mercier, 2020, p. 231).
Hugo Mercier provides more accurate data about that study. Initially, Stanley Milgram wanted to find out how far people would follow orders that involved harming another person. The participants were invited to take part in a scientific experiment ostensibly about whether people learn better when they are punished, while the experiment’s real goal was something else entirely. Secondly, according to Mercier, the two-thirds figure was obtained in only one variant of the experiment; it does not accurately summarise the study’s results, and its prominence points to confirmation bias (Mercier, 2020). Thirdly, when the same experiment was repeated with minor changes, such as a new experimenter, compliance rates dropped. It should also be noted that almost half of the participants doubted the reality of the whole setup; only about a quarter both believed they were really shocking someone and went to the highest voltage. And when the experimenter gave a strict order, for example “You have no other choice, you must go on”, it tended to have the opposite effect, prompting participants to rebel and refuse to take further part in the experiment (Mercier, 2020).
The effect of university prestige is visible in this most dramatic demonstration of deference toward science: the Milgram obedience experiments. It is worth pointing out that the experiment was conducted at Yale University, one of the most prestigious universities, and that participants were met by well-educated people who gave a detailed scientific rationale for the experiment. These elements significantly affected the participants’ behaviour (Mercier, 2020). When other versions of the experiment were run in other locations, compliance dropped by about 20 per cent, which illustrates that participants followed the experimenter’s requests only insofar as they believed in the study’s scientific goal (Mercier, 2020).
The following sections discuss the mechanisms people have developed to avoid being deceived or manipulated by others. At first glance, Milgram’s results might suggest that the participants were capable of believing whatever they were told; in fact, the elements discussed above go a long way towards explaining why some of the participants followed the orders.
For a long time, people have believed that humans are gullible, which is not accurate. Some associate a lack of education or sophistication with a lack of intelligence and with gullibility; that kind of thinking is wrong. From an early age, we develop cognitive mechanisms that keep us from too easily believing things that do not correspond to reality. In his book Not Born Yesterday, Hugo Mercier highlights how our critical thinking has developed from extreme conservatism, in which only a few signals influenced us, to a modern situation in which we are both more vigilant and more open to different forms of communication and content (Mercier, 2020).
This idea is somewhat similar to Comte’s concept of the three stages of the human mind. Human beings have evolved mechanisms geared towards accuracy: we want reliable information about the world. If we were gullible, we would be far more vulnerable to exploitation and deception. In fact, we are far from gullible; we all have the ability to sort through what we hear, and our communication skills are more refined than ever before (Mercier, 2020).
Mercier argues that we possess specific cognitive skills: when we receive information, it passes through stages of filtration. We evaluate all communicated information carefully, trying to understand it from different points of view. We therefore do not adopt social information uncritically (Mercier, 2020).
However, there is an exception, which can occur in data-overwhelmed situations. Decision-making becomes risky when we are overwhelmed by information, because the amount of input to our cognitive system exceeds our processing capacity. When information overload occurs, the quality of our decisions can decline (Mercier, 2020).
Humans have developed vigilance mechanisms against the danger of communicative manipulation and misdirection. We use these mechanisms of epistemic vigilance to avoid being deceived or misled by other people. Their function is to triage communicated information so that we are more likely to be influenced by reliable sources and plausible claims. On the whole, we stand to gain more from communication than we lose from the risk of being too easily manipulated (Mercier, 2020).
The first mechanism is plausibility checking: assessing whether a claim seems reasonable or probable. Alongside it, we perform a negative screening that excludes certain information and allows us to weigh valid information objectively. We also pay attention to sources, evaluating who is telling us what. Another significant process concerns the coherence of the information as a whole: we match new statements against what we already know. People frequently apply conditional, categorical or disjunctive syllogisms to evaluate arguments and reasoning, drawing on our inference skills (Bergman, 2022). Reasoning, in turn, makes us more alert and open to logical alternatives: we evaluate arguments objectively and demand good reasons (Mercier, 2020).
Even so, there are exceptions: we are sometimes systematically wrong. Epistemic vigilance can be misplaced, and in such situations it leads to invalid explanations. One cause is the availability heuristic, the tendency to assess the probability of an event by the ease with which instances of it come to mind. Conspiracy theories are a case in point, as we experienced during the Covid-19 pandemic. Scientists could not immediately answer every question, such as why the virus affected people to such different degrees, and some information released by experts later turned out to be inaccurate, because the condition was new and had not been studied before. In that gap, people began to believe irrational things, for example that the coronavirus had been deliberately created to kill half the population of the globe because there are too many of us. Some people adopt such theories to fill the gap and make sense of events in the world around them, especially when they feel anxious, out of control of their lives, or unable to protect themselves when threatened (Mercier, 2020).
Because we depend on communication with others, we are open to the risk of being misled or misinformed. Epistemic trust is therefore essential. Epistemic trust is defined as trust in the authenticity and personal relevance of interpersonally transmitted information (Sperber et al., 2010). The following paragraph introduces some tactics that structure the capacity for social learning and guide the individual in deciding whom to trust.
First, listen for and identify the person who can do more than the rest of his or her peers. Asking pertinent questions about a topic helps to overcome reflexive scepticism when a plausibility check reveals a mismatch between the claims and what one already believes. Second, track competence: look for signals of who knows or performs best across different domains. Finally, be cautious of know-it-all people who are good at pitching nonsense ideas; genuine competence is not a matter of a person’s education or of complex scientific language.
In conclusion, setting aside the ethical issues with Milgram’s shock experiment, it showed us “the dangers of an overreliance on coarse cues to evaluate scientific value” (Mercier, 2020, p. 233): the signals given to the participants were neither clear nor precise. Moreover, Milgram’s study is a good example of inductive reasoning, since it generalised from a single setup to a broad claim, and it illustrates confirmation bias.
The standard narrative of Milgram’s results is that ordinary people will do anything if they are asked to by authorities, which is not true. The last part of this paper has explained the mechanisms of vigilance and how they help us keep misleading communication, lies and even manipulation at arm’s length. Our cognitive system and its mechanisms of epistemic vigilance help us filter, evaluate, and absorb information, and a variety of cues tell us how much we should believe what we are told. On the other hand, there are exceptions: some factors, such as distorted or overwhelming information, can impair our cognitive system. A question comes to mind: what steps can be taken to mitigate the influence of authority in decision-making processes, as highlighted by Milgram’s findings?
Sources: