PSYC 110: Introduction to Psychology

Lecture 15 - A Person in the World of People: Morality

Overview

Professor Bloom provides an introduction to psychological theories of morality. Students will learn how research in psychology has helped answer some of the most central questions about human morality. For instance, which emotions are “moral” and why did these moral feelings evolve? What factors guide our moral judgments? And what factors predict when good people will do bad things?

 


Chapter 1. Introduction [00:00:00]

Professor Paul Bloom: Let me begin by just reminding us where we are in this course, reminding us of what we’ve done and what we have yet to do. We started by talking about the brain, the physical basis of thought. And then we moved to some general introductions to some foundational ideas in the study of psychology, Freud and Skinner. We spent a bit of time on more cognitive stuff: development, language, vision, memory. Then we took a little break and the dean told us about love. Then we dealt with the emotions, rationality, evolution, and a lot of that. What we learned, particularly regarding the evolution of the mind, provides supporting material for what follows. We learned about cognitive neuroscience, using the study of face recognition as an important case study; human differences, behavioral genetics, nature and nurture; sex and food. My lecture was on sex. Dr. Brownell came and spoke to us about food. Today, morality. Next week, social thought and social behavior. Then mysteries; basically, a series of topics that don’t fit anywhere in the course and really make psychologists scratch their heads: sleep, laughter, and religion. Then mental illness, two lectures on madness, what can go wrong in your mind, and a last lecture on happiness. And then you’re just done. You know a lot of psychology and a lot of stuff, and you’re well prepared for your eventual major in psychology and, ultimately, graduate training at a good school.

How many people here are either psych majors or expect to become psych majors, or cognitive science majors? Could you raise your hands? Okay. Good. It’s nowhere near enough [laughter] and so I’ll ask the question again. Once you deal with happiness and then mysteries, you’re really not going to want to — What is there? Chemistry? Anthropology? [laughter] Pre-med? Give me a break. [laughter] Okay.

Chapter 2. Kin Selection, Cooperation and Moral Feelings [00:02:43]

We’re going to deal with three facets of morality. I’m going to talk about moral feelings, moral judgments, and then moral action, with particular focus on why good people do bad things, which will lead us to review and discuss the Milgram study, which was presented in the movie on Monday. Now, moral feeling is what we’ll start off with, and we’ve already discussed this in a different context. The question is, “How could moral feelings evolve?” So, moral feelings we could view as feelings like condemnation, shame, pride, righteous anger, but also simple affection, caring for other people, wanting to do well by them, being upset if an injustice is done to them. And you might think that the existence of these feelings is a mystery from an evolutionary point of view. If evolution is survival of the fittest, nature red in tooth and claw, how could animals evolve moral feelings? But in fact, we know the answer to this. And there are two answers to this.

One answer is kin selection. So, evolution works at the level of the genes and because of that it can give rise to animals that are themselves altruistic. And they’re altruistic because they act to preserve other animals that share the same genes. And so, I’m not going to spend any time on this because we’ve discussed it in detail, but we know from previous lectures that people will be generous to others. And there’s an evolutionary explanation for your generosity towards kin. It can be mathematically worked out: your caring, your moral feelings towards other creatures, scale with the proportion of genes that you share with them. The most altruistic behavior of all, giving your life to help another, can be explained in cold-blooded evolutionary terms. For animals that are altruistic even to the point of dying to help another, those genes will, under some circumstances, be preserved over the genes of animals that are less caring. And that is one force towards kindness.
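The lecture alludes to the math without writing it down; the standard way it gets worked out is Hamilton's rule, given here purely as a reference point rather than as something from the slides. An altruistic gene can spread when

rB > C

where r is the coefficient of relatedness between the altruist and the recipient (roughly 0.5 for a full sibling, 0.125 for a first cousin), B is the reproductive benefit to the recipient, and C is the reproductive cost to the altruist. Altruism up to and including dying for kin is favored whenever the inequality holds; giving your life (C = 1) to save three siblings (B = 3, r = 0.5) satisfies 0.5 × 3 > 1.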

A second force towards kindness is cooperation. Even if animals are unrelated, they are nice to one another. Animals will give warning cries, they will groom one another, they will exchange food, and the reason for this is that animals’ minds, our minds, have evolved to enter into cooperative arrangements with others and to surmount prisoner’s dilemmas, to surmount deception and cheating. This gives rise to certain emotions, including emotions that could be viewed as moral emotions, like guilt and anger, and again, grounds altruistic behavior in an evolutionary perspective.
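To make the prisoner's dilemma point concrete, here is a minimal sketch that is not from the lecture; the payoff numbers are the conventional illustrative ones from the game-theory literature, and the strategy names are assumptions for illustration only. It shows that a reciprocating strategy like tit-for-tat earns more against itself over repeated rounds than mutual defection does, which is one way cooperation and cheater-punishment can pay off over time.

# Minimal iterated prisoner's dilemma sketch (illustrative payoffs).
# 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3),   # mutual cooperation
          ('C', 'D'): (0, 5),   # sucker's payoff vs. temptation to cheat
          ('D', 'C'): (5, 0),
          ('D', 'D'): (1, 1)}   # mutual defection

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the partner's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Cheat on every round."""
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    """Return the total payoffs for two strategies over repeated rounds."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    print(play(tit_for_tat, tit_for_tat))      # (300, 300): reciprocity pays
    print(play(always_defect, always_defect))  # (100, 100): mutual cheating does worse
    print(play(tit_for_tat, always_defect))    # (99, 104): a cheater gains little against a reciprocator

The point of the toy run is only that reciprocators prosper when they meet each other, while a population of cheaters does badly, which is the evolutionary footing the lecture gives to emotions like gratitude toward sharers and anger at cheaters.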

This is all by way of review, but the question you can now ask is, “Fine. That’s why moral feelings might evolve, but what do we know as psychologists about the emergence of moral feelings in individuals? What’s the psychology of moral feeling?” And this is an issue I’m going to talk about now, but I’m going to return to it next week when we deal with issues such as liking and disliking, racial prejudice and other things. But I want to deal now with a couple of interesting case studies about moral feelings from a psychological point of view.

The first one I want to deal with is empathy. And empathy has different definitions, but we can simply view it as the feeling that your pain matters to me. If you are hurt, that is, in some sense, painful for me. If you are sad, that affects my own mood. I am not a selfish creature. I am built, I am hard-wired, to be attuned to your pain. This is an old observation. Adam Smith, who is often falsely viewed as a proponent of selfishness and hardheadedness, was quite explicit about the pull this has. He notes:

When we see a stroke aimed and just ready to fall upon the leg or arm of another person, we naturally shrink and draw back our own leg or arm and when it does fall we feel it in some measure and are hurt by it as well as the sufferer.

If you see somebody being kicked in the groin in a movie, you might yourself tense up. If you see somebody bang their thumb with a hammer, you might cringe.

Here is a good illustration of somebody in anticipatory pain. [laughter] Now — It’s a very British face actually. [laughter] Now, we know certain things about this empathy, some of which might be surprising. The pain of others is aversive even for babies. We know this because if babies hear other babies crying they will get upset. The crying of babies is aversive to babies.

Now, some of you may be sufficiently cynical to say, “That could be explained in other ways.” For one thing, one theory is that when babies hear other babies cry, because babies are so stupid, they think they themselves are crying; and if they’re crying they must be in some sort of pain, so they cry some more. But clever psychologists have ruled this out. What they did was a study where they exposed babies to tape-recorded sounds of other babies crying and tape-recorded sounds of themselves crying. Babies cry more to the crying of other babies than they do to recordings of their own crying, suggesting that their response is to some extent a response to the “otherness” of the crier.

We know the pain of others is aversive for chimpanzees and we know this in certain ways. But we know this, in particular, from a series of studies that would be unethical if they were to be done today. In these studies, they put a chimpanzee in a room and there’s a lever. And when the chimpanzee slaps the lever, it gets some food. Trivial, smart animal, piece of cake. But the room has a window leading to another room. And in the other room another chimpanzee is placed. This second chimpanzee is not a relative of the first chimpanzee and they’ve never seen each other before. Now, when the first chimpanzee hits the lever the second chimpanzee gets a painful electric shock, putting the first chimpanzee in a horrible dilemma. In order to feed himself, he has to torture another animal. Chimpanzees do not starve themselves to death. It’s very unlikely any of you would either, but they go a long time without food, suggesting they do not want to cause this other chimpanzee pain. It only works within species, though. So, in another experiment they put a rabbit in the other room and the chimpanzee would slap the lever repeatedly to make the rabbit scream in pain [laughter] and jump.

Now, we’ve known for a long time that empathetic feeling is not logically linked to morality. This is a point made by Aristotle. I could see you writhing in pain. That could cause me pain, but it doesn’t mean I’m going to be nice to you. I could run away from you. I could turn my head, or I could blame you for causing me this misery. But it does happen that this sort of empathy leads to moral concern and action. If we do an experiment and we induce you to feel empathetic towards somebody, we get you to feel what they’re feeling, you’re more likely to be nice to them. And people differ in the extent to which they feel empathy. People differ in the extent to which it will hurt them to watch me slam my thumb with a hammer. If you are high empathy, you’re more likely to be a nice person than if you’re low empathy, suggesting there is some connection between empathetic feeling and liking.

Now, empathetic feeling, like any other human capacity, differs across people. Some of us have a lot of it. Some of us don’t have much of it. There is some reason to believe that in the population known as “psychopaths,” a population we’ll return to later on when we discuss mental illness, this sort of instinctive empathy is broken and the pain of others just doesn’t bother them very much. I have some illustrative quotes here. In Damon’s book, a wonderful book on psychopathy, he talks about a thirteen-year-old mugger who specialized in mugging blind people. And when asked about the pain he caused his victims he responded, “What do I care? I’m not her,” which is logically correct but, in a sense, inhuman. The fact that it’s another person should make you care.

The serial killer Gary Gilmore basically said the pain of others gratified him and caused him no unhappiness at all. “I was always capable of murder. I can become totally devoid of feelings of others, unemotional. I know I’m doing something grossly — ” and here is a very bad word ” — wrong. I can still go ahead and do it.” And Ted Bundy, when interviewed at one point, said he was astonished that people made such a fuss about all of his murders because, he said, “I mean, there are so many people.” And if any of you here are nodding in agreement at these sentiments, [laughter] that’s not such a good sign. These are particularly callous and cold-blooded statements, suggesting that this instinctive empathy, this aspect of moral thought, is present in most of us but not in all of us.

The second case study of moral feeling is “in-group” and “out-group.” In our affections, in our caring, who we like, who we feel close to, whose pain bothers us, we are not indiscriminate. I care a lot more about my children than I do about my friends, and I care more about my friends than I care about strangers. We’re all like that. We also favor our group over others in every possible way. You are a member of many groups. You are men. You are women. You’re Yale students. You’re young. You’re white, you’re black, you’re Asian. You’re a member of these groups and, as we will discuss repeatedly when we talk about social cognition and social behavior, this membership matters a lot to you. What’s particularly interesting is that even groups you were not born into, groups that are formed on the fly, exert a huge amount of control over your moral feelings and moral attitudes. And the best example of this is discussed in detail in the textbook. And this is the Robbers Cave study. And the Robbers Cave study serves as a nice illustration of morality in everyday life.

The study took eleven- and twelve-year-old boys at a camping program. These were well-adjusted, pretty rich kids, racially homogeneous, and they were put into separate cabins. And the cabins were given leaders and they gave themselves names. Being unimaginative boys, they called themselves “The Eagles” and “The Rattlers.” But what happened was, being separated, they developed distinctive cultures. And when these groups were set in competition against each other, the Eagles versus the Rattlers, the within-group intensity grew. The Eagles began to care a lot more about other Eagles than about anybody else.

So, there’s within-group solidarity. And then there were negative stereotypes. So, these groups developed different cultures. It was a random division, kind of like Yale College is actually, where you get a random assortment of people. But despite the fact that the assortment is random, the division is random, cultures begin to emerge. The Eagles prided themselves on being clean living, not using cuss words and treating each other with respect. They viewed the Rattlers as dirty and tough and kind of slovenly slobs. The Rattlers viewed the Eagles as goody-goody kids. It’s cruel.

Finally, [laughter] it all evolved into hostilities, raids and violence. The Eagles burnt a Rattlers banner, cuss words were occasionally used, and so Sherif, the psychologist designing all of this, went, “Excellent.” [laughter] And then came the problem. He says, “Now we’ve created two different warring cultures. That was fun. [laughter] What do we do to make them friends again? Once we figure that out, this’ll solve all sorts of problems.” So they started off. They set up peace talks where a representative of the Eagles and a representative of the Rattlers were set to meet and plan ways so that they could disarm and stop using cuss words and everything like that. This failed. The kids who engaged in the peace talks were ostracized by their own groups as traitors. They decided to set up individual competitions like the Olympics, where people wouldn’t compete as Eagles or Rattlers but rather would compete as individuals. That failed too. Like the Olympics, they took their individual accomplishments as reflecting on the group, and it evolved into the Eagles versus the Rattlers.

They shared meals, which turned into food fights and more cuss words. They shared movies, more fights, more cuss words. They shared fun with firecrackers, [laughter] which was a disastrous thing that nearly brought the experiment to an end. [laughter] They brought in a religious figure to give them sermons on brotherly love. [laughter] The sermons were entirely unsuccessful. What’s interesting is that they took them to heart. These were good kids. They were respectful of religious authority, but the lesson they took from the sermons was “I should learn to love my neighbor.” If I’m a Rattler, I should learn to love my fellow Rattler and appreciate him as a fellow, as a person. “I love him. It’s love, not like those scummy Eagles.” [laughter] These attempts all failed.

Here’s what worked. Sherif told the kids — all of the kids — that the water line to the camp was cut and they all had to work together to deal with it. What this did was establish a superordinate goal, that is, a goal that everybody shared, and perhaps more important, a common enemy. This, by the way, is where the solution to bringing together all the warring countries and religions of this planet comes from, and you could write this down: an alien attack. [laughter] By the logic of Sherif’s study it will bring us all together as a group.

A different question is this: in that experiment, the “groupiness” was established in a very powerful way. They lived separately, they interacted with each other, they had their own names. The psychologist Tajfel, after World War II, was interested in the question of what could make a group. In other words, what do I have to do to you to put you in a different group from him? What do I have to do to this side of the class to put you in a different group from that side? And what would I have to do for those groups to matter such that, for instance, if I put you in one group and him in another group and I give you a hundred dollars to distribute, will you give more of the money to your own group or to the other group? And what he found was you don’t need much.

In one experiment he showed people pictures of modern art and, based on their responses, he described them as Klee lovers or Kandinsky lovers. Now, this is all made up. They were just random assignments, but the Klee lovers viewed themselves as more similar to other Klee lovers. They thought the Klee lovers tended to be smarter than the Kandinsky lovers, and the Klee lovers would devote more resources to other Klee lovers than to Kandinsky lovers. This is why it’s called “minimal groups.” You don’t need much to make you into a group.

And in fact, later experiments just flipped a coin. So the experiment goes like this. I ask everybody in this class to take out a coin. You all flip it. Everyone who has heads, you’re one group. Everyone who has tails, you’re the other group. Then I ask people in the heads group, “Putting yourself aside, which group on average do you think is smarter?” You’d say, “Well, you know, it kind of works out that the heads group is kind of really — heads, smart.” Then, “Here is some money. You have to distribute it.” It’s a subtle effect when you make the groups so minimal, but you’re more likely to give it to your own group than to the other, and this suggests that moral feelings are exquisitely attuned not necessarily only to individuals but also to the psychology of groups.

Any questions at this point about moral feelings? Yes.

Student: How you formed the groups — How is that morality?

Professor Paul Bloom: It bears on morality because — So, the question is, “How does group membership relate to the topic of morality?” And the answer is that the moral feelings we’re talking about are feelings like empathy and caring. For me to have a moral feeling towards you means you matter to me. If you were to be harmed, I would view it as wrong. And the group experiments suggest that the extent to which these moral feelings operate is partially determined by the groups to which we belong. If I’m American and you’re from another country, this is a kind of obvious finding, my obligations to you will be seen as less than if you were another American. If I’m a Klee lover and you’re a Kandinsky lover, I don’t think you quite deserve as much as me.

Chapter 3. Moral Reasoning and Moral Judgments [00:23:07]

Moral judgment is an area that is tremendously exciting and there’s a lot of recent research on it. By moral judgment I mean not empathetic feelings, not feelings of caring and love or approval and disapproval, but notions like something is good or bad, something is fair or unfair. So, there are certain hallmarks of moral judgments. Suppose I say I don’t like strawberry ice cream. That’s an evaluation. That’s a judgment, but it’s not a moral judgment. Why not? Because it doesn’t carry a sense of obligation. I don’t think anybody’s obliged to eat or not to eat strawberry ice cream. And it doesn’t carry a notion of sanctions, meaning I don’t think anybody should be punished for eating strawberry ice cream. On the other hand, if I say I don’t like baby killers, that actually is a moral judgment in my case. So [inaudible] I say, “Well, I don’t like baby killers. You like to kill babies. I actually think we are obliged not to kill babies. If you disagree with me, you’re wrong and you should stop killing those babies. [laughter] Should you fail to stop killing those babies, I think you should be punished for killing babies.” And that’s what makes my judgment about “no killing babies” a moral judgment.

Now, some people are tempted to look at this the wrong way and say, “Look. What a weird topic, morality. I don’t believe in morality. I believe in Nietzsche. I don’t believe in ethics,” but I don’t believe you if you were to say that, because morality as we talk about it in this context isn’t just your position on big political issues or big moral questions like abortion or capital punishment. Rather, some sort of moral judgment happens all the time, often unconsciously. So, as you live your life you have to answer questions like, what should you eat? Any moral vegetarians here? I’m just raising my hand to encourage people. [laughter] Okay. Anybody give to charity? Anybody not give to charity? Good. [laughter] Different from the moral vegetarians, I noticed. Who do you socialize with? There are homeless people around Yale and New Haven. What do you give to them? Do you avoid their eyes? What do you want to do with your life? Who do you have sex with? Under what conditions? These are moral questions.

My favorite moral dilemma is, as I’m walking down the street and I see somebody I sort of know, do I avoid eye contact so we don’t have a conversation, [laughter] or do I say, “Hey. How are you doing?” or do I kind of do the nod, hoping that there won’t be more than this nod? [laughter] And then after I leave I say, “Oh, I should have made eye contact with that person. I’m such a jerk. [laughter] There is a homeless person [simulating making great eye-contact with them].” [laughter] These are day-to-day moral questions we struggle with all the time, and so there’s a centrality to the study of how we do moral reasoning.

So, what do we know about moral reasoning? Well, we know that there are some universals. There are some aspects of moral reasoning that show up everywhere on earth. And there is some evidence, though it’s not particularly strong at this point, that these same intuitions show up in young children and in nonhuman primates like chimpanzees, capuchins, macaques and so on. And these are things like anger at cheaters, gratitude toward sharers, the sort of things you’d expect to come out in a prisoner’s dilemma, feelings that some things are right and some things are wrong. These are foundational.

But at the same time, the study of moral reasoning is a fascinating issue for those of us interested in cross-cultural psychology because there are plain differences across cultures. So, the anthropologist Richard Shweder gives a list here of human differences:

People have found it quite natural to be spontaneously appalled, outraged, indignant, proud, disgusted, guilty and ashamed by all sorts of things. Then there’s a long list: “masturbation, homosexuality, sexual abstinence, polygamy, abortion, circumcision, corporal punishment, capital punishment, Islam, Christianity, Judaism, capitalism, democracy, flag burning, miniskirts, long hair, no hair,” blah blah, “parents and children sleeping in the same bed, parents and children not sleeping in the same bed, women being allowed to work, women not being allowed to work.”

If I put that down in a list and got you to tick off what you thought, there would be some differences. Some of you think meat eating is okay. Some of you do not. You might have different views about divorce. Most of you believe women should be allowed to work. Most of you will be in favor of, or at least not morally scolding of, homosexuality. You’ll be lukewarm about polygamy. Nobody would like abstinence, and so on. [laughter] But if we gave that same list to people in a different culture, they’d tick off entirely different things. These are ways in which people vary. I don’t think people vary in their feelings about baby killing. I don’t think people vary in their feelings about cases where I do something for you and then you don’t do something for me. I think that’s gut-level, hard-wired, evolved to solve prisoner’s dilemmas. But these other issues are important, they vary a lot from culture to culture, and a good theory of psychology has to explain how these differences arise.

And Shweder has a theory which is quite interesting. Shweder argues that there are three styles of thought, three different frameworks of moral thought, three different ethics. There’s an ethics of autonomy. This is what moral philosophers within our culture view as morality: notions of rights, of equality, of freedom. But many cultures focus on an ethics of community, bringing together duty, status, hierarchy, and interdependence. Other cultures focus more on an ethics of divinity, where notions such as purity, sanctity, pollution and sin are relevant.

So for example, when we’re talking about the rights of men and women and what they should be allowed to do, many people in our society, following an ethics of autonomy, will argue that they should have equal rights in all domains of behavior. Since they are sentient, free creatures, they should have a right to do whatever they want unless there is a compelling argument against it, and a compelling argument would have to involve some infringement of the freedom of other people. On the other hand, if you’re in an ethics of community you might argue that men and women have different rights and different responsibilities. They may be born to perform certain roles and as such they’re duty-bound to follow them. If you’re from an ethics of divinity, you may appeal to religious injunctions against certain actions and behaviors, and these may differentially restrict the behavior of men and women. You might believe, for instance, that women should not prepare food when menstruating because it would contaminate the food. You may believe that there are severe restrictions on who could have sex with one another that don’t have to do with human rights and human freedom; they have to do with the way things should be because of issues of pollution and sin.

Now, Western cultures, as I’ve said, are highly invested in an ethics of autonomy, and so debates we have in our culture tend to be framed in terms of an ethics of autonomy. If we have a debate about abortion in this class, some people might say, “Look. The fetus is a sentient being and as such it has a right to survive and shouldn’t be killed by its mother.” Other people would argue, “No. A woman has full freedom over her own body, and as long as the fetus is within her body she has a right to control it.” If we’re arguing about hate speech, we could talk about the balance between the right to freedom of speech and the right to a certain quality of education free of harassment and humiliation. Those are the ways we frame things, but one of the more interesting discoveries in this field is that although people think that they’re governed by the ethics of autonomy, even people within our culture, even highly educated people within our culture, even people like you, show moral judgments that are not quite so simple.

So, this is the work of Jonathan Haidt at the University of Virginia. And Haidt finds that if you ask people in our culture, they believe they hold to an ethics of autonomy: if it doesn’t harm anyone, it’s okay. So, if I were to ask you your attitudes about sex, most of you — not all of you, you come from different cultures, you have different attitudes — but most of you would say sex between consenting adults is okay as long as nobody gets hurt and people’s rights are respected. So, gay marriage, for instance, or gay sex would be okay with you because nobody is harmed and these are consenting adults. Haidt points out that there are certain problems with this argument, and he illustrates these problems with stories like this:

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide it would be interesting and fun if they tried making love. At the very least, it would be a new experience for each of them. Julie was already taking birth control pills but Mark uses a condom too just to be safe. They both enjoy making love but they decide not to do it again. They keep that night a special secret which makes them feel even closer to each other. What do you think about that? Was it okay for them to make love?

Who says yes? Good. I know that some people would say yes, shoot up their hands, and they look around in astonishment that no one else is with them. [laughter] Who says no? Okay. Who is not sure? You’re not sure. That’s the weirdest of all. [laughter] Haidt finds that the distribution here is unusual. Look. If you go home and you ask your parents, they say, “Ew. What are you learning at Yale?” [laughter] This is a very unusual culture, where some people will say it’s okay.

What Haidt finds is that most people say it’s not okay, and then he simply asks them, being a good psychologist, “Okay. What’s wrong with it?” And this is the brother/sister case. And the responses are interesting. Because people view themselves as committed to an ethics of autonomy, they can’t just say it’s disgusting. So, they exhibit what Haidt describes as “moral dumbfounding,” meaning that they struggle to find an explanation. They say it’s terrible because they’ll have a kid and the kid’ll grow up freaky, [laughter] and then the experimenter — it’s an interview situation — says, “Well, no. Remember they’re both using a lot of birth control.” “Maybe she’s underage.” “No, not underage.” And finally, “Well, it’s just wrong.”

Similarly, another one of the scenarios — [laughter] This isn’t as bad as you might expect. [laughter] The family dog is playing outside and gets hit by a car. [laughter] They bring it in and they say, “Oh, Fido’s dead, Fido’s dead, but what’s for dinner?” So, they cook it and eat it. Who says it’s okay? Good. [laughter] Who says it’s not okay? Okay. Then they notice that their toilet is kind of dirty. “But whoa, there is an American flag.” [laughter] They then use the flag to clean the toilet. Who says that’s okay? [laughter] Anybody think it’s not okay? And just keep in mind we’re getting sort of even responses here. On all of these, the majority of people who are not college students at elite universities say, “Oh, that’s so wrong.”

Finally, there is this one. And this one really is as bad as one might expect. [laughter] A guy is lonely, so he purchases a frozen chicken from the supermarket, brings it home and has relations with it. [laughter] Then he cooks it and eats it. [laughter] Look. This is a scientific paper in the Psych Review. [laughter] Okay. Who says that’s okay? [laughter] Good. And I notice there is consistency among people. The people who think it’s okay, if they really, sincerely believe it’s okay, have every right to say that they are committed to an ethics of autonomy. Those of you who think any of these are not okay should ask yourselves why and should then scrutinize your reasons. People are very smart and they can easily present reasons why. They could say, “Oh, disease,” but these reasons tend not to be sincere. If you take away those considerations, the reaction stays. And these are interesting case studies of how our moral judgment is governed by factors that we might not be conscious of. Our moral intuitions can surprise us.

Chapter 4. Milgram’s Work in the Context of Morality [00:38:48]

The motivation for Milgram’s work, and this is the final thing we’ll talk about in the context of morality — the motivation for Milgram’s work was the Holocaust, and he was interested in exploring why such a thing could happen. I should note, by the way — you know from the movie that Milgram was a Yale professor. He left Yale when he didn’t get tenure, moved to Harvard, didn’t get tenure there either. He had a reputation by then as a mad doctor. He ended up at City University of New York, became a full professor at age thirty-three, died in his early fifties, did not lead a good life but made extraordinary discoveries. Another discovery, which we’ll talk about next week — has anybody heard the phrase “six degrees of separation”? That’s Milgram, and we’ll talk about that later. Milgram had a powerful imagination.

Okay. So we know — this is all review. There is the guy. How many of you laughed when you saw the movie [a movie on Milgram’s obedience studies called “Obedience”]? Interesting question why, and we’ll talk about that in a little while. Shocks, “slight shock” to “XXX.” This is just repeating what you’ve seen. The learner protests as he’s being shocked more and more, but the experimenter continues to request obedience. For those of you who haven’t seen the movie, again, the setup is that someone is a subject. They think that they’re teaching somebody in a memory game, but actually the person who is being shocked is a confederate who is trained to react in certain ways as he’s being increasingly shocked. And the finding is that the majority of people will deliver what they believe are fatal shocks to this person, whom they have never met, based on the instructions of another person.

Now, there are some immediate bad explanations for this. One explanation is that these are really strange people: “These are an abnormal group of psychopaths.” But we know that’s not true. It’s been replicated with many subjects. There’s no reason to believe that the subjects in Milgram’s original study were in any way unusual. It’s also a misreading to say that people are, in general, sadistic. You remember from the movie that nobody got pleasure from giving the shocks. They felt acutely uncomfortable, embarrassed, conflicted, under a huge amount of stress. They did not like doing this.

There were follow-up studies. This is the original study. If you take it away from Yale, some of the authority goes away, and similarly, the extent to which there are fatal shocks goes down. If the teacher is with the learner next to him, it goes down. If you have to put the guy’s hand on the shock plate, you’re less likely to kill him. If the experimenter gives you instructions by phone, you’re less likely to do it. If an ordinary man, not the guy in a white lab coat but an ordinary guy, says, “Hey, keep shocking him, that’s okay,” you’re less likely to do it. And if there is a rebellion, if somebody else rebels and says, “I won’t do it,” you are much more likely not to do it yourself. Oh, sorry. Yeah, and if you get to choose your own shock level, then very, very few people go all the way. So, this is an important list of the factors that can make somebody less likely to bring it up to the killing level. And as a result we can look at those factors and think about what is the perfect situation for making somebody do something like this, and what is the perfect situation for making them refuse.

Some more serious critiques of Milgram: Milgram’s experiment is why we have human-subjects committees. This is a terribly stressful experiment to do to people and, as I say now about a lot of studies that I describe in this class, it would not be done today. People did say they were happy to have participated and only 2% said that they were sorry, but still, serious damage could have been done and perhaps was done. These people left the lab having learned about themselves that they’ll kill another person if someone tells them to, and as psychologists I don’t think we have any right to do that to people. I think people can learn these things about themselves, but we have no right to put you in a circumstance where you believe you killed somebody and then tell you it was just pretend, that we just made you believe you killed somebody. And that’s a serious ethical criticism.

Historians and sociologists have brought things back to the questions that Milgram was interested in and have argued about — and this is controversial — the extent to which obedience really is a good model for acts of genocide. So, just to take one example among many, Goldhagen argued that the participants in Nazi Germany and in the Holocaust were actually not people who were obediently following orders but rather enthusiastic volunteers. Still, Milgram’s work is interesting for many reasons, in large part because he provides an illustration of the perfect situation for getting somebody to do a terrible thing, and the perfect situation has certain ingredients. It includes authority, in this case the authority of Yale and the authority of science: “This is an experiment that must go on.” The notion of a self-assured experimenter — the results would be very different if the experimenter himself seemed nervous, unwilling to proceed, confused, but he was confident and he kept saying that he would take responsibility. There was distance between the teacher and the learner. Recall you get less of an effect if you have to touch the guy, but distance makes it easier for you to kill him. And finally, it is a new situation with no model of how to behave. One of the reasons why the Milgram experiment is so nice to know about is that if this ever happens to you, not as an experiment but in real life, it will no longer be new to you. You’ll know what sort of thing this is and you’ll be able to examine it in that light.

Chapter 5. Two Forces for Evil, Two Forces for Good [00:46:20]

I want to end this lecture by summing up, drawing a lot upon Milgram and some other work, and talking first about two forces for evil and then ending by talking about two forces for good. The first force for evil is deindividuation of the self. And what this means is that one reason why people are so bad in groups is that you can diffuse your responsibility. If I’m running through the street alone with a baseball bat smashing windows, it’s me and I know it’s me. If I’m with twenty other people, it’s not me anymore. It’s part of the group and I don’t feel as bad. Responsibility becomes diffuse. One of the powers of a group, then, is that it diminishes responsibility.

You can diminish responsibility in other ways. Another way of diminishing responsibility is to accept orders. It’s not me. I’m just an instrument of somebody else telling me what to do. And yet another way of diminishing responsibility is anonymity. Here’s a question. In so many violent acts, and when so many people go to war, what they do is paint their faces or put on masks. Why? Well, there’s anonymity from others. If I’m wearing a mask as I do my terrible stuff, nobody will know it’s me. But there’s also a psychologically liberating effect. If I’m anonymous, it’s not me and I can do terrible things without feeling the same moral responsibility.

This analysis helps explain why people don’t always help others in need. If there’s a group, responsibility to help decreases, and this is captured in different ways, but the main idea is we all think someone else will help, so we don’t. There’s a diffusion. This [slide] just summarizes some famous studies supporting this. And the classic example, which is discussed in detail in the textbook, is the Kitty Genovese case, where somebody was murdered in the common lot that apartment buildings surrounded while dozens of people, dozens of good, normal people, watched and did nothing. There’s some advice I’ve heard on this, which is pretty good advice and is based on the research: if you’re ever in a predicament on a city street, you have a heart attack, you broke your leg, you’re being mugged and everything, screaming “Help” is often not very successful, because if I’m with ten people and there’s somebody screaming “Help,” I look at the other nine people. They’re not doing anything. They’re looking at me. I’m not doing anything. We keep walking. What is useful is to point to somebody and say, “You in the green sweater, call the police,” and the psychological evidence is that if I am wearing the green sweater and somebody asks me to call the police, I will call the police. I’m a good guy. I wouldn’t stand aside while somebody’s being harmed. On the other hand, if somebody says, “Somebody call the police,” well, I’ve got things to do. And so diffusion of responsibility explains both when we’re willing to do terrible things and also when we’re willing to help people who are in trouble.
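Here is a toy way to see the arithmetic behind that advice. The sketch below is not from the lecture or from any particular study; it simply assumes, for illustration, that each bystander's chance of stepping in shrinks in proportion to the number of other people present, and under that assumption the chance that anyone helps can actually fall as the crowd grows, while singling one person out restores the lone-bystander case.

# Toy model of diffusion of responsibility (illustrative assumption, not data).
# Assume a lone bystander helps with probability P_ALONE, and that in a crowd
# of n people each bystander's individual probability is divided by n.
P_ALONE = 0.8

def chance_anyone_helps(n_bystanders):
    """Probability that at least one of n bystanders helps, under the toy assumption."""
    p_each = P_ALONE / n_bystanders          # responsibility split across the crowd
    p_nobody = (1 - p_each) ** n_bystanders  # everyone independently stays put
    return 1 - p_nobody

if __name__ == '__main__':
    for n in (1, 2, 5, 10, 20):
        print(n, round(chance_anyone_helps(n), 2))
    # Roughly: 1 -> 0.8, 10 -> 0.57, 20 -> 0.56 under this assumption.
    # Pointing at one person ("you in the green sweater") collapses n back to 1.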

Denigration of others. There are a lot of ways to make other people matter less. So, this is the flip side. One way to do terrible things is to lose yourself, so you’re not an individual anymore, but another way to do terrible things is to make sure the person you’re doing it to isn’t an individual. How do you do that? Well, you create psychological distance or physical distance. I’m more likely to kill you if you’re very far away than if you’re close. I can describe you and start to think about you not as a person, and language can be used for this. Instead of “people” you can use terms like “cargo”; instead of “murder,” “extermination.” Humor is very powerful in denigrating and demoting people. When you start laughing at somebody you think of them as less of a person, and we’ll get to that a little bit more when we talk about laughter.

You can take away their names. One of the more interesting things in the United Nations Declaration of Human Rights is a particular right. It says, “Every person has a right to a name.” And you might think, what a strange right, but there’s a cleverness to it. When you take away somebody’s name, they matter less. People have distinct, individual names that mark them as people, and once you know somebody’s name you are less likely to do bad things to them. And another option, which I’m interested in from the standpoint of my own research, is that you can see them as disgusting.

Disgust is what Paul Rozin has called “the body and soul emotion.” And we know certain things about disgust. It is a human universal. It is a basic emotion with a characteristic facial expression. Remember Paul Ekman’s work on the basic emotions, the universals of emotional expression? Disgust is one of them and it is universally elicited by certain things like this list. Wherever you go, feces, urine, blood, vomit, rotten flesh and most meat will be disgusting. Now, if that was all we had to say about disgust, it wouldn’t affect morality very much but we know that people can be seen as disgusting. And Charles Darwin actually, who was an astute observer of human behavior, tells a nice story to illustrate this: how “a native touched with his finger some cold preserved meat and plainly showed disgust at its softness whilst I felt utter disgust at my food being touched by a naked savage though his hands did not appear dirty.”

People can be disgusting, and if people are seen as disgusting they matter less. The philosopher and legal scholar Martha Nussbaum nicely summarizes this: “Thus, throughout history certain disgust properties have repeatedly and monotonously been associated with Jews, women, homosexuals, untouchables, lower-class people. All of those are imagined as tainted by the dirt of the body.” I won’t read this, but this is a typical bit of Nazi propaganda. Any genocidal movement that has left behind a written record has been shown to use the mechanism of disgust to dehumanize people and make them easier to kill. I’ll skip that.

I want to end, though, on a positive note. And the positive note is forces for good. So, the forces for bad are to lose yourself as an individual, lose yourself in a crowd, lose yourself because there is some authority using you as an instrument, lose yourself because you’re anonymous, plus treat others not as people but as numbers, as objects, as disgusting things. But there are some forces for good. These include “contact” and “interdependence.” This can be viewed as an extended version of selfish gene theory: to the extent you’re interconnected with other people, you care about them more for purely selfish reasons. Robert Wright presented this in a very blunt way, but I think his quote is quite moving: “One of the many reasons I don’t want to bomb the Japanese is that they built my minivan.” And the idea is he has economic interdependence with these people. They’re a different group. He might want to kill them under normal circumstances, but the interdependence gives rise to a moral connection.

Thomas Friedman proposed the “Golden Arches Theory of Conflict Prevention,” which said that no two countries which each have a McDonald’s will ever go to war, because McDonald’s forces global interdependence. This was falsified in the NATO bombing of, I think, Sarajevo, but still, his heart’s in the right place: the idea is that interconnection makes you more likely to get along with other people.

More generally, there’s what’s been called “The Contact Hypothesis.” So, interdependence is one thing, but what’s maybe more interesting is that simple contact with other people, particularly if you’re of equal status, you have a common goal, and you have social support, makes you like people more. There are now dozens, probably hundreds, of studies showing that people who would otherwise show animosity towards one another, like blacks and whites in the United States, like each other more if they’re brought together. And there’s a lot of social psychology research on the conditions under which you have to bring them together. The Robbers Cave study talked about before is a nice example. It was not easy to bring them together, but when they had a common goal, that caused the interconnection, and then the contact led to moral feeling.

The military is a superb example. The military in the United States was a situation which brought together people who wouldn’t otherwise have any contact, and they liked each other. There has been study after study showing that people in the military who were otherwise, for instance, racist liked people of different races more after working with them, because you had all of the right ingredients. They worked together for a common goal, the military supported bringing these people together, and they were brought together on an equal and fair footing.

There is, of course, a lot of debate about the extent to which universities like Yale promote positive contact between groups. And you could think about this yourself as an exercise. If these are the conditions for contact, to what extent are they met in the university setting between, say, blacks and whites, people from the American South versus people from the American North, people from other countries versus people from the United States? And I know there’s debate on campus about the extent to which there is segregation within the Yale community. And you could ask yourself about the extent of that segregation and what role that should play with regard to the Contact Hypothesis.

Finally, and this is the last thing I’ll say: if you take another person’s perspective, you’ll care more about them. This is the final force for good from a moral perspective. JFK, when making the plea for equal rights, didn’t produce an abstract philosophical argument but rather tried to invite his listeners, who were white, to engage in perspective-taking.

If an American, because his skin is dark, cannot eat lunch in a restaurant open to the public — [and so on and so on and so on], then who among us would be content to have the color of his skin changed and stand in his place? Who among us would be content with the counsels of patience and delay?

Again, Nussbaum goes on and talks about how Greek dramas invited people to take the perspectives of those whom they would never imaginably be or even be in contact with, and argues that this led to an empathetic expansion. I think one of the greatest forces for moral good is storytelling, where you’re invited to take the perspective of another and see the world as they do.

Finally, there are direct ways. You can ask people — and this is the way we talk to our children when we try to get them to expand their moral concern and compassion. We say, “Try to see it from their point of view. How would you feel if — ” Then there are indirect ways. You can, for instance, use the power of metaphor. There are familiar things that you are close to, and you can bring in new things as falling under the rubric of those familiar things. So, if I wanted to cause you to feel moral concern for a fetus, I would do well to describe it as a pre-born child. If I wanted you to care about an animal, I would do well to describe it as if it were human. If I wanted to establish more of a connection with all of you, I would not describe you as unrelated strangers. Rather, you are my brothers and my sisters. And of course, any political movement that tries to bring people together uses a family metaphor.

Finally, when Steven Spielberg tried to get us to entertain the notion that computers and robots are sentient moral beings he did not show us one that looked like this [a faceless mechanical-looking robot]. He showed us one that looked like that [a cute child actor].

Okay. The reading response for next week is a simple one. I know I’ve been giving difficult reading responses. This is simple. You could write it up very short and that will be a passing grade, or you could write it up a bit longer. Suppose the Milgram experiment had never been done and it was being done for the first time here. What would you do? What do you think everyone else would do? Okay. I’ll see you next week.

[end of transcript]
