PHIL 181: Philosophy and the Science of Human Nature
PHIL 181 - Lecture 15 - Empirically-informed Responses
Chapter 1. Recap of the Trolley Problem and Three Responses [00:00:00]
Professor Tamar Gendler: So we left ourselves at the end of the last lecture in a somewhat perplexing situation. We had thought through the particular scenarios that Judy Thomson presents us with in her trolley paper. And we had discovered the following apparently perplexing feature about the class’s responses.
In what’s called the Classic Bystander case–the case where there’s a bystander standing next to a trolley that’s hurtling down a track about to hit five people, and the bystander could if he chose turn the trolley onto a track where the trolley will only hit one person–your responses were as follows. Roughly 15% of you thought he was morally required to turn the trolley from the five to the one. 70% of you thought he was morally permitted to do so. And only 15% of you thought that it’s a morally prohibited act for him to turn the trolley from the five to the one.
By contrast, we ended class with Thomson’s famous Fat Man case. This is a case where our bystander is standing next to the trolley as before, the trolley is hurtling down the track about to kill the five, and the bystander has available to him a means for stopping the trolley. In this case, rather than turning it onto a different track, the means he has available to him is to push a fat man off a bridge, thereby stopping the trolley in its tracks. And your responses in this case exhibited a markedly different distribution than they did in the first case. Whereas in the first case, 15% of you thought it was prohibited to stop the trolley from hitting the five by killing the one–or by causing the trolley to kill the one–in the Fat Man case, 78% of you–roughly 4/5 of the class–thought that the act of stopping the trolley by putting another person in its way was morally prohibited.
Now the puzzle that this raises, as you know from the end of last class, is this. In the Bystander case, it seems clear to most people that killing one person is bad, but that letting five die is worse. Whereas in the Fat Man case, it seems to be just the inverse. So what Thomson asks us at the end of that paper, having run through a number of cases, including some that I didn’t go over in this summary, is: what could possibly explain the difference in our reactions to the Bystander case and the Fat Man case?
And what she suggests is that whereas utility prohibits letting the five die–that is, it would be better for the number of lives saved if we saved five than if we saved one–the notion of a right is what prohibits killing the one in the Fat Man case. What has to happen in the Fat Man case, says Thomson, is that you infringe his right not to have his person used as a means to the end of saving others. Whereas in the case of Bystander, there’s no right that is infringed upon. And, suggests Thomson, “rights trump utilities.” So in the Fat Man case, what the right prohibits overrides what utility would mandate.
So that’s where we were at the end of class last time. And the solution that Thomson proposed there is what we might call a classic solution to trolley-type dilemmas. It’s a solution that assumes that Fat Man case and the Bystander case carry different moral mandates, and that the reason they carry those different mandates is because of a deep moral difference that those cases encode. So the difference between our response to Fat Man and our response to Bystander, says Thomson in that 1985 article, is one that we should respect. And the reason we should respect that difference, she contends, is that that difference is tracking a profound moral difference between them, namely that in the case of Fat Man but not in the case of Bystander, the rights of an individual are violated.
What I want to do in class today is to go through with you three non-classic responses to the trolley case. And I’ll be giving you the chance to use your clickers in the first and third of these. So if you get your clickers out, we’ll be prepared for what’s going to happen in a few minutes. So what are the three non-classic responses? Remember, in a classic response, the claim is that Fat Man and Bystander carry different moral mandates, and that that difference can be traced to a deeper, morally relevant difference between them.
So two of the responses that we’ll consider today are ones that suggest that Fat Man and Bystander in fact don’t carry different moral mandates. The first example that I’m going to run through with you is Judy Thomson’s rethinking of trolley cases in a 2008 paper, in which she ends up assimilating the Bystander case to the Fat Man case, suggesting that in neither of the cases is it permissible to kill the one to save the five.
The second view that we’ll consider is Josh Greene’s view that the right thing to do in the Fat Man case is the same thing as the right thing to do in the Bystander case, namely that in both cases, the right thing to do is to stop the trolley from hitting the five and cause it instead to kill the one.
And finally–I’m shoehorning this a bit, because in truth, Sunstein is a little closer to Greene than he is to Thomson. But we might use his thinking to maintain the position that though our responses to the cases differ, the cases are in some more fundamental sense the same. And what Sunstein is going to suggest we need to do is to push the fat man.
So what we have are three views here. Thomson’s saying the cases come together, and they come together in telling us never to kill the one to save the five. Greene’s saying the cases come together, and they come together in telling us always to kill the one to save the five. And then, perhaps, Sunstein’s view telling us that though our responses to the cases come apart, we should nonetheless push the fat man. But these three non-classic responses are interesting not just for the difference in their content; I think they’re interesting for the purposes of this class because each of them makes use of a slightly different kind of argumentative methodology.
And there’s no reason that the methodologies and the answers needed to line up in the way that they did. So one of the things that I want you to think about as we go through today’s lecture is what use might be made of each of these methodologies to make one of the alternative arguments. Thomson’s contention that in the Bystander case we shouldn’t turn the trolley is one that she makes on the basis of inviting you–as I will do in a moment–to consider additional hypothetical cases, and then asking you to be consistent about your responses to cases that fail to differ in morally relevant ways. So Thomson’s methodology is the same as it was in her 1985 paper; there’s just a new case that she’s thought about. Sunstein’s methodology is to canvass a large array of literature in the heuristics and biases tradition, and to suggest that moral reasoning is no different from any other sort of reasoning. And Josh Greene’s method is of course to make use of neuroimaging results, and on that basis to argue in favor of his view that what is morally mandated of us is a certain kind of utilitarian stance.
Chapter 2. Thomson’s New Response to the Trolley Problem [00:09:19]
So let’s start–and here you’ll need your clickers–with the additional hypothetical cases that convinced Judy Thomson, and may convince you, that it’s not OK to turn the trolley in Bystander. So the case that Thomson presents us with is a case that we’ll call Bystander’s Three Options. So here’s poor Jim, deeply regretting that he ever enrolled in this class, standing by the trolley in a usual Bystander dilemma where the trolley is about to hit the five and Jim has the possibility of deflecting it to hit the one. But because Jim lives his life in Judy Thomson’s thought experiment, she has, in rather dastardly fashion, introduced a third track at the end of which, rather unfortunately for Jim, Jim is standing.
Now here’s Jim’s three-way dilemma. Option one: allow the trolley to continue on its original path, killing the five. Option two: deflect the trolley so that it hits the other guy. Option three: deflect the trolley from the five to the one–oh, except the one is Jim.
Question. In three-way Bystander, if Jim decides to turn the trolley–so we’re ignoring the case where he lets it hit the five; he’s made the decision to turn the trolley–the question is the following. Is it morally required for him to turn the trolley onto the track where it hits the other guy instead of himself? Is it morally permitted, but not morally required, for him to do so? Or is it morally prohibited for him to turn the trolley onto the track where it hits the other guy instead of himself?
So we’re assuming that Jim has made the decision to turn the trolley from the five. After all, it’s a straight Bystander case. If he doesn’t turn the trolley, it’s going to hit the five. 85% of you have previously told me that what one ought to do, or at least what one is permitted to do in this case, is to turn the trolley.
How come there’s no responses coming, guys?
Students: [interposing voices]
Professor Tamar Gendler: It’s not working? Oh, my goodness. All right. So why is it not open for you? Let’s try. Is it open now?
Professor Tamar Gendler: Tragic. This is really, really, very, very horrible. That did not work. OK, the whole lecture today depends upon these working. So let’s try this again. And tell me now whether this works. Is it working? OK. Is it working now? No? Still no? No? All right. Hm.
We’re going to have to run–I think there’s nothing I can do. I’m going to try resetting once more and see if that works. And I’m going to try removing and then returning this receiver. And then–if not–we’re going to do the old-fashioned show of hands and all my beautifully constructed slides will turn out not to be useful, but that’s all right. Worse things have happened in the world.
All right. Try it again. Yay! Awesome. I have no idea what I changed. OK. So, answering this question. Wow. There’s 64 of you. There’s 71 of you. We’ll do the countdown. 10, 9, 8–so let’s see how the numbers come out in–4, 3, 2, 1 seconds. Oh, and it’s so exciting. Especially because we had to suffer first. The contrast. OK.
So in this case, 6% of you think it’s morally required for Jim to turn the trolley onto the other man. But you were the 6% who continue to be outliers, or perhaps you’re different people. But let’s look at what’s going on. 61% of you think it’s morally permitted for him to turn the trolley onto the other man. And 32% of you think it’s morally prohibited for him to turn the trolley onto the other man. Now interestingly, Judy Thomson expects that more of you would fall into this last category. So it’s an interesting question for us to think about as a class why she would find the response that you gave rather surprising. But in any case, let’s move to a second contrast case and see how this goes. OK.
Suppose now that we have only a two-way case. In the two-way case, bystander Jim has only two options. Either the trolley is going to hit the five or he can deflect the trolley in such a way that it hits him.
I want to go back for a second and just get the numbers that I got on the last slide, because I forgot to record those for myself, thrown off as I was by our situation. So let me just record these. 6%, 61%, 32%. OK.
So moving on to the new case. It’s a two-way trolley, and the question is this. In Bystander’s Two Options, is it morally required for him to let the trolley hit the five instead of himself, is it morally permitted for him to let the trolley hit the five instead of himself, or is it morally prohibited for him to let the trolley hit the five instead of himself? OK? So let’s think through that case. So remember, it’s a two-way case. The trolley’s heading down towards the five. And the question is: is it required, permitted, or prohibited for him to turn the trolley from the five to hit himself? OK.
And let’s see how the numbers come out here. We’ve got roughly 10 seconds to find out whether your distribution is going to be similar or different here. OK, so here’s how the numbers come out: 8%, 70%, 22%. Now, the case with which we want to contrast this is the Bystander–classic Bystander case. In the classic Bystander case, more of you thought he was morally required to turn the trolley than you think in this particular case. In the classic Bystander case, interestingly, you had roughly the same view about whether it was morally permitted. And more of you think it’s morally prohibited for him to let the trolley hit the five instead of himself.
So the interesting difference is this one here. You took a different attitude with respect to whether it’s morally required for him to turn the trolley when the person it’s going to hit is himself than when the person it’s going to hit is another person. So let’s go back and do just a classic Bystander case and see whether, as a result of having thought through this case, there’s any change in your intuitions. So this is just the standard Bystander case that you’ve seen before. In the classic two-way Bystander case, do you think it’s morally mandatory, morally permitted, or morally prohibited for Jim to turn the trolley?
So we’re 3, 2, 1. And let’s see how the numbers come out. 20%, 65%, 15%. So as a result of having thought about the first-person analogue, some–though many fewer than I would have thought–some of you changed your view. Whereas originally, 15% of you thought it was morally mandatory to turn the trolley–oh, you’ve changed your view in exactly the opposite direction from the one I would have predicted. So here’s a mystery. Here’s a little bit of experimental philosophy done in our classroom.
What Judy Thomson was predicting–and we can talk in sections about why this didn’t happen–what Judy Thomson was predicting is that you would react as follows. If it’s not morally mandatory for me to turn the trolley onto myself, then it’s not morally mandatory, indeed not morally permitted, for me to turn the trolley onto another person. If I’m not willing to take a hit myself in that case, I shouldn’t be deciding on behalf of another person that he take that hit. So I want you to think about what it is in Thomson’s thinking about this case that made it feel to her so obvious that as the result of considering the first-person case, people would be inclined to rethink the third-person case. And I have to say, I myself in reading Thomson’s 2008 paper am very easily brought into the mindset she describes there. So I find it surprising and extremely interesting to see that that isn’t what happened in this context.
Let’s assume, however, that at least for some of you, the intuition that you came to have as the result of considering this case was something like Thomson’s intuition. Whereas in the standard Bystander case–the standard switch case–most of you think that the right thing to do is to kill the one rather than to let the five die, what Thomson says is that in thinking through the first-person case, you ought to realize that Bystander is a lot more like Fat Man than you initially thought. To the extent that you reject that intuition of Thomson’s, you’re in a position to disagree with her.
Chapter 3. Greene on the Trolley Problem [00:20:28]
So let’s move to the view with which I take it most of you are going to end up agreeing, since this is exactly the opposite of Thomson’s, namely Greene’s argument that the assimilation ought to go the other way. So just to remind you where we are in the picture, the puzzle with which we began is that people were giving a different response in Bystander than in Fat Man, and Thomson tried to get rid of the problem by causing you to assimilate Bystander to Fat Man. I was unable, through Thomson’s cases, to get you to shift your intuitions. So we’re stuck with a residual difference between your responses. Most of you think it’s OK to turn the trolley in Bystander, even though you wouldn’t turn it onto yourself, but that it’s not OK to push the man on the bridge in Fat Man.
So Greene’s going to give us a second way of thinking about how it is that we might bring those responses together. And his argument runs as follows. In general, we’re not in a very good position to determine what really underlies our reasoning. There’s an entire tradition in social psychology that I talked about in one of the early lectures that aims to show that a lot of what people engage in when they make decisions is post-facto rationalization of intuitive responses that they had which weren’t in fact tracking what they would say are the relevant features of the situation.
So famously, people are more likely to choose an object that lies on the left-hand side of a visual array than an object that lies on the right-hand side of that visual array. But in making the choice, they don’t provide as their reason the location of the object; they provide as their reason some other feature of the object. And when we looked at, in the second lecture, the confabulation results, whereby subjects who had undergone commissurotomy–that is, whose corpus callosum had been severed–so that the right and left hemispheres of their brains weren’t in communication, we discovered that when they performed an act that was based on stimulation of the right brain, the left brain, which is the linguistic part, came up with an explanation for what they were doing that was obviously not the real source of their behavior.
So there are many cases, Greene points out, where our motivations are opaque to us–where we think we’re responding to one thing, but in fact we’re responding to something else. One of those cases, says Greene, is the difference in our response to the Fat Man case and to the Bystander case. So what happens in the Bystander case–where we’re trying to decide whether to shift the trolley from the five to the one–is that our rational processing system gets activated.
Whereas what happens, hypothesizes Greene–and we’ll give some evidence in a minute–what happens in the Fat Man case is that our emotional processing system gets activated. And says Greene, given the choice between our rational system and our emotional system, the rational system is the one whose outputs we ought to trust. So says Greene, the morally right thing to do in this case is to push the fat man. Notice that this is a multi-step argument, some of whose premises are a good deal more controversial than others.
So the premise that our motivations are often opaque to us is essentially undisputed. There’s no question that often we aren’t aware of what’s causing us to respond in a particular way. I may be particularly irritable because my feet are wet, and unaware that the reason I’m responding to you in a short-tempered way is not that you are particularly irritating, but that my feet are uncomfortable. This phenomenon is undeniable. Whether it actually explains our different responses in these two cases is an interesting empirical question.
And there has been collected over the last decade or so some pretty interesting neuroimaging data suggesting that there are systematic activation differences in what goes on when people give utilitarian responses to cases and what goes on when people give responses to cases that seem to involve the sorts of notions to which deontologists appeal. Notions like rights. And there is a certain amount of additional evidence coming from other research that the areas that are differentially activated in those two cases correspond on the one hand with what is often thought of as a rational processing system–a calculative processing system–whereas in the other, they correspond with areas of the brain that have been in independent cases implicated in emotional processing.
So the first premise is uncontroversial. The second premise is reasonably well-supported. There’s controversy about the data, but there is scientific evidence for which there’s a good argument to be made that what it shows is roughly what’s written here. The controversial question is whether even if the first two premises are true, the third normative premise is true. Is it the case that if our responses to Fat Man are triggered by emotion, whereas our responses to Bystander are triggered by the rational system, is it the case that we ought to go with the rational system? That is a normative claim, not an empirical one. And even if the arguments that we’re going to consider in a minute successfully establish the truth of the second premise, we don’t yet have the truth of the third premise thereby established.
So let’s talk about the evidence that Greene has found in favor of the second premise–in favor of the premise that what goes on in cases like Fat Man is an emotional response, whereas what goes on in cases like classic Bystander is a rational response. So Greene has for the last decade or so put people into fMRI machines–into scanners which track where blood is flowing in the brain–and presented them in the scanners with three kinds of dilemmas. The first kind are what he calls moral/personal dilemmas. These are dilemmas like Fat Man, where you’re being asked whether you want to push the fat man off the bridge. Dilemmas like the doctor case, which I presented, where we’re considering whether to cut up a healthy patient to save the lives of others. Dilemmas like a lifeboat case where there’s not enough food and water to go around on the lifeboat and you’re considering whether to throw off one of the people on the lifeboat so as to leave enough food and water for the remaining subjects. So that’s the first class of cases that he has subjects consider in the scanner.
The second class of cases that he has people consider in the scanner are what he calls moral and impersonal cases. So these are cases like Bystander at the switch where you’re facing a moral dilemma, but not one where you are imagining, in an up-close and personal way, causing particular harm to a particular individual who’s in your proximity. Cases like ones where you’ve found a lost wallet and you need to decide whether to return it. Cases where you’re voting on a policy that will have certain kinds of effects on people, but where those effects are relatively remote from you.
And finally, he presents people with what he calls non-moral dilemmas. Questions like, if I’m trying to get from Cleveland to Chicago, should I take the bus or the train or a plane? Or if I’m trying to decide which coupon to use on the Internet to save on shipping, should I do this or that? Cases that involve the same kinds of objects, right? Fat Man involves trains. Bus versus train involves trains. We might have a coupon-use case where you’re using the coupon to buy a boat. Lifeboat involves a boat.
So he has the subjects in the scanner and they’re presented with these sorts of cases. And you’ll notice that I’ve put a little color-coded box here of black, grey, and white. What Greene discovered in the 2001 paper–and let me say some of these data have since been re-analyzed, so some of the details haven’t held up, but many of them have–what he discovered is that if one believes, as many do, that the brain areas listed here–brain areas like medial frontal gyrus, and angular gyrus, and posterior cingulate gyrus–if one believes that those are areas associated with emotion, then we have good evidence that in the moral/personal cases, the areas of the brain associated with emotion are activated. Whereas in the moral/impersonal and non-moral cases that doesn’t occur. By contrast, it looks like a bunch of areas that are traditionally associated with working memory–parietal lobe, middle frontal gyrus–are more active in the impersonal case and the non-moral case than they are in the personal case. And here’s the famous image from Greene’s 2001 paper reproduced in many papers since that shows the brain areas that exhibit differential response in the moral/personal cases as contrasted with the other cases.
So it looks like there is some, perhaps decisive, evidence in favor of Greene’s second premise–in favor of the premise that what goes on in moral/personal cases is an activation of the part of the brain associated with emotion, whereas what goes on in cases like Bystander is an activation of the part of the brain associated with reasoning and other sorts of more-controlled processes. Moreover, says Greene, there’s lots and lots of behavioral evidence supporting the hypothesis that one of the things that goes on when we respond to hypothetical moral dilemmas is that we track features of the case that are not morally relevant.
So for example, there’s a study from the early 2000s by behavioral economists Small and Loewenstein that points out that in a very profound sense, identifiable victims produce in us more powerful emotional responses than non-identifiable victims do. And this isn’t just the difference between a picture of the child to whom your Oxfam donations will go versus a description of the child to whom your Oxfam donations will go. There is in fact a strikingly large difference between people’s willingness to give some of their rewards in a game in a laboratory to person number four–right, so they draw a name from a hat and it says person number four–and their willingness in cases where they’re told: decide how much money you want to give to the person whose number you’re about to draw from the hat.
In neither of these instances do they know who person number four is going to be. But in the first case, they draw a ticket from the hat that says person number four and then think, “oh, I’ll give this amount of my proceeds to the person,” whereas in the second case, they decide what amount of proceeds they want to give to the person whose number they are about to draw. The fact that that produces consistently different responses in subjects suggests to Greene, and perhaps to others of you, that using our intuitions about these sorts of cases to build our moral theories may not be the best way to proceed, since presumably there are few of you who think that there is a relevant moral difference between knowing the number of the person to whom you’re going to be giving the gift and being about to find out the number of the person to whom you’re going to be giving the gift.
Here’s something else that appears to affect our moral responses to cases. This is work done by Jon Haidt, author of The Happiness Hypothesis, with various collaborators. If you’re deciding how much punishment to give somebody–if you’re deciding how wrong an act was–if you have been induced to feel disgust, either by sitting at a dirty table or by having been trained to associate certain terms with disgust through a hypnotic suggestion, you will be harsher in punishing people for their misdeeds. Now, I take it that most of you don’t think people deserve harsher punishment when you are feeling disgust because the table in front of you is dirty. I take it you think that how bad an act is that somebody else has done is independent of your feelings of disgust. But it looks like one of the things that condemnation tracks is that feeling. And in a minute, I’ll talk about how that connects to Sunstein’s more general discussion of heuristics.
Finally, some work by David Pizarro, a Yale PhD, suggests that in specific trolley cases, we can get people’s intuitions to move around in cases like Fat Man just by varying what most people would say are morally irrelevant features of the situation. In particular, Pizarro presents subjects with two different versions of the Fat Man case. In the first, you’re asked whether it is morally permitted, required, or prohibited to push a man named Tyrone Peyton off the bridge in order to save 100 members of the New York Philharmonic. And in the second, you’re asked whether it’s morally acceptable to push a man named Chip Ellsworth III off the bridge to save 100 members of the Harlem Jazz Orchestra. So the question is whether pushing a white man off the bridge to save 100 people of African descent or pushing a black man off the bridge to save 100 people of European descent should produce different responses. And interestingly–perhaps as the result of a certain kind of self-correction–liberals say it is less morally acceptable to push Tyrone Peyton off the bridge than to push Chip Ellsworth III.
Regardless of which direction the numbers come out, what’s interesting is the numbers come out differently; tracking a feature which most of us would think isn’t a morally relevant feature. So it looks like strengthening Greene’s second premise–and this is an argument that he makes in more detail in a paper, from which we’ll read excerpts after the break, called “The Secret Joke of Kant’s Soul”–it looks like there’s pretty good reason to think at least some of our responses to these cases are tracking features which we wouldn’t reflectively endorse. And Greene thinks in particular in Fat Man, our reluctance to push the fat man off the bridge is tracking one of those morally irrelevant features. Deontological judgments, says Greene–those where we’re unwilling to make the utilitarian move–deontological judgments, says Greene, are driven by emotional responses. Consequentialist judgments are driven by cognitive ones. And the deontological responses, he says, lack moral significance. In fact, deontology itself is a kind of moral confabulation.
I’m going to give Kant the last word in this lecture. So those of you who are crying out for the sage of Königsberg, realize that he will get the very last word in today’s lecture, complete with a beautiful image of his face. But before I do that, I want to spend the final 10 substantive minutes of the lecture talking you through the third article which we are considering for today, namely Cass Sunstein’s article.
Chapter 4. Sunstein on the Trolley Problem [00:39:41]
So Sunstein, in somewhat similar vein to Greene, though drawing on a slightly different literature, argues that a good portion of our moral reasoning operates in exactly the same way that our regular reasoning does, namely by making use of heuristics, which we know about from the lecture on January 20th on dual processing. Heuristics are fast and frugal tools for dealing with the complexity of the world when we’re faced with time-sensitive, decision-making tasks.
And the way that heuristics work is really smart. They work by means of something called attribute substitution. We’re interested in a target attribute–something that’s relatively hard to find out about the world. And we focus our attention instead on a heuristic attribute–something that’s relatively easy to find out about the world.
So some of you may make use of this when you’re trying to distinguish your telephone from other people’s telephones. The target attribute–the thing you’re really interested in–is: is this my phone? That’s something you’re only going to be able to determine by turning on the phone and looking to see whether the numbers in it are the numbers that you’ve placed into it, let’s say. But you might make your life easy by putting a cover on your phone, or a sticker on your phone, or some surface feature on your phone that will let you find your phone quickly and well. Right? So you’re going to make use of an easy-to-find attribute rather than a difficult-to-determine attribute.
In general, this is an extraordinarily good way to navigate the world. Target and heuristic attributes generally coincide. That’s how the heuristic attributes came to be the ones which you’re using as the markers of the target. And it takes much less effort to process surface features of the world than to spend your time working through the details of each of the things that you want to make sense of.
So I observed myself this morning making use of a heuristic attribute on my way into school. I was stopped at a stoplight, and I noticed out of the corner of my eye that the car next to me had started to move. Now obviously, the attribute I was interested in was whether the light had turned green. But because I couldn’t quite see the green light from where I was sitting, I was able to use instead the motion of the car next to me as an indicator of the thing I was concerned with. Now of course, the heuristic could have misfired in this case. It could have been that that car was moving even though the light was still red. It could have been that that car was moving in the left lane and had a special light that I didn’t. But for the most part, we make use of heuristics all the time, and they help us.
Now Sunstein’s argument is that in non-moral cases, people often use heuristics. That though these are useful, they may also lead to errors. And that in moral cases, people often use heuristics as well. But just as they may lead to errors in the non-moral cases, so too may they lead to errors in the moral cases. And in particular, he thinks they do in a number of cases that he goes on to discuss.
So, for example–and I realize we’re going to close soon. I said Kant would get the last word, but Kant’s going to get the last word on Thursday. We’re going to go through Sunstein and one of the examples, and then we’ll get to Kant.
So Sunstein points out, for example, that there’s a heuristic called the availability heuristic. That’s a heuristic that says: when I’m trying to figure out how likely something is to happen, here’s a good way to determine it–I think about how easy it is for me to think of cases where that did happen. So whenever I’m worried that my children are going to be kidnapped, I ask myself: how many friends do I know whose children were kidnapped? How many people do I know whose children were kidnapped? And when I discover that the answer is none, I relax.
This kind of heuristic is often correct, but it can lead us astray. Suppose, for example, you’re asked whether there are more words in the English language that end in I-N-G or more words in the English language whose second-to-last letter is N. It’s much easier to think of words that end with I-N-G, and so people tend to say that there are more words ending in I-N-G than words whose second-to-last letter is N. But of course, every word that ends with I-N-G is a word whose second-to-last letter is N. You’ve been bamboozled by the availability heuristic.
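The point here is a simple subset relation, and a few lines of code make it vivid. The word list below is a tiny made-up sample, not real corpus data; the subset claim holds for any list whatsoever.

```python
# Why the I-N-G judgment must be wrong: "ends in -ing" is a special case of
# "second-to-last letter is n", so the first set can never be the larger one.
# The word list is an illustrative sample, not corpus data.
words = ["running", "singing", "thing", "want", "burnt", "grand", "cat", "dog"]

ends_in_ing = {w for w in words if w.endswith("ing")}
penultimate_n = {w for w in words if len(w) >= 2 and w[-2] == "n"}

# Every -ing word is automatically a penultimate-n word...
assert ends_in_ing <= penultimate_n
# ...so the penultimate-n count is at least as large, whatever the word list.
assert len(penultimate_n) >= len(ends_in_ing)

# Penultimate-n words that do NOT end in -ing:
print(sorted(penultimate_n - ends_in_ing))  # → ['burnt', 'grand', 'want']
```

The availability heuristic misleads because words like “want” or “grand” don’t come to mind when you search memory by word endings, even though they belong to the larger set.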
Or suppose you make use of what’s sometimes called the representativeness heuristic: the assumption that the probability of something occurring tracks its degree of typicality. This too is often correct. Part of what it is to be a typical instance is to be one of the instances that occurs more frequently. But as you know from the Linda the bank teller case, or the farmer-with-a-tractor case, if I ask you of a random resident of Iowa whether it’s more likely that that resident is a farmer or a farmer with a tractor, the representativeness heuristic is going to draw you towards saying that it’s more likely that the person is a farmer with a tractor. But obviously, every farmer with a tractor is also a farmer.
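The conjunction rule behind the farmer case can be checked by brute counting over a toy population. The residents below are entirely made up for illustration; the inequality holds for any population, because the conjunction always picks out a subset.

```python
# Toy population of residents, each tagged (occupation, equipment).
# All data is hypothetical, chosen only to illustrate the conjunction rule.
population = [
    ("farmer", "tractor"),
    ("farmer", "no tractor"),
    ("farmer", "tractor"),
    ("teacher", "no tractor"),
    ("teacher", "tractor"),
]

farmers = [p for p in population if p[0] == "farmer"]
farmers_with_tractors = [p for p in farmers if p[1] == "tractor"]

# Conjunction rule: P(farmer and tractor) <= P(farmer), since the
# farmers-with-tractors are a subset of the farmers.
assert len(farmers_with_tractors) <= len(farmers)

print(len(farmers) / len(population))                # P(farmer) = 0.6
print(len(farmers_with_tractors) / len(population))  # P(farmer & tractor) = 0.4
```

The representativeness heuristic pulls the other way because “farmer with a tractor” is a more typical, more vivid picture of an Iowa farmer, even though it can never be the more probable description.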
Now remember that Sunstein’s arguments for the first two claims–that in non-moral cases people often use heuristics, and that these may lead to errors–are easy to make, because we have an independent way of determining whether somebody has made an error in those cases. We can see what went wrong with the availability heuristic and the representativeness heuristic, because we can see that it is in fact more likely that the second-to-last letter of a word is N than that the last three letters of the word are I-N-G. We can see that it’s more likely that somebody’s a farmer than that somebody’s a farmer with a tractor. Because in both of those cases, one of them is a special instance of the other.
Sunstein’s argument for moral heuristics is going to take more steps. It’s not enough for him to show–what we’ll establish in a moment–that in moral cases people often use heuristics; he also needs to show that in so doing, they’re making mistakes. And the question of how we get an independent handle on what it is to make a mistake is a rather complicated one. But let’s first think about his argument in favor of the claim that in moral cases people often use heuristics. I’m going to close today’s lecture with two examples that he gives, and then we’ll begin on Thursday by running through some particular cases where I’ll ask you to respond.
So one of the examples that he provides is again some work by Jonathan Haidt on a phenomenon known as moral dumbfounding. As you know from reading Sunstein’s paper, people often respond to the question “is it morally acceptable for a brother and sister to engage in consensual, harm-free incest?” by saying that it is morally unacceptable. But when asked to provide reasons for that, subjects find it difficult to do so. Likewise, many people are inclined to think there’s something morally problematic about wiping the floor of your bathroom with a flag, or about eating your dog if he’s been hit by a car, but they find it difficult to articulate their reasons for those responses. Sunstein suggests the reason is an overextension of heuristics.
Likewise, he points out that in moral framing cases–and we’ll start with this next lecture–cases like the Asian disease case that I presented you with in our lecture on January 20, people are likely to have different responses depending on whether you present a moral dilemma as involving lives saved or, by contrast, lives lost, even when those are just complementary descriptions of the same outcome. And Sunstein concludes on that basis that people make use of heuristics in moral reasoning just as they do in non-moral reasoning. And we’ll begin on Thursday with Sunstein’s discussion of those cases, and then we’ll let Kant and Mill get the last words in the trolley debate. I’ll see you then.
[end of transcript]