PROFESSOR: So what I want to do in today's lecture is to shift gears somewhat from what we've been talking about in the first unit of the course. As you know, the first unit of the course was focused on a set of texts that were concerned with what is involved in human flourishing. And though our opening text, Glaucon's challenge from Plato's Republic, concerned itself with morality and the way in which morality contributes to human flourishing, we haven't, up until this point, given much attention to what philosophers have had to say about the nature of morality. And so my goal in this unit is, in an incredibly accelerated fashion, to introduce you today and next Tuesday to two of the most prominent moral theories in the Western tradition, and then in the remaining sessions before March break to talk to you about some of the empirical research about these questions.

And I know we have a wide range of backgrounds in this class. Some of you are now taking your first philosophy course. Some of you have taken an entire course on ethics. And so I've tried to pitch the lecture in such a way that it brings everybody up to speed, but that it does so in a way that I hope won't bore those of you who have encountered this before. In particular, to make up for the fact that there's very little empirical psychology in this lecture, I have six polling slides. So those will come in the middle of the lecture, right when all of you are zoning out because you got two hours of sleep last night. So even if you don't pay attention for the first part, you'll get to vote in the middle.

All right, so what is it that moral philosophy sets out to do? What is it to provide a philosophical account of morality? Moral philosophy is the systematic endeavor to understand moral concepts and to justify moral principles and theories. That is: moral philosophy, even if it ends up giving a non-systematic answer to how it is that morality works and what it is that morality does, does so within the endeavor of thinking systematically about the nature of morality. What do I mean by morality?
I mean that moral theories aim to provide accounts of terms like "right" and "wrong," "permissible" and "impermissible," "ought" and "ought not," "forbidden," "good," "bad," and the like--and to provide an account of the behaviors to which those terms apply. It is fundamentally, to remind you of a terminological distinction that we've made before, a normative as opposed to a descriptive enterprise. Philosophical moral theory doesn't aim to tell us how people act. It aims to tell us how people ought to act if they wish to conform to the constraints that morality places on them.

In particular, moral philosophy is concerned with providing a principled answer to three kinds of questions. The first kind of question we encountered already in the context of Glaucon's Challenge. It's the question of moral motivation: "Why should we want to act in keeping with what morality demands of us?" And in a minute I'll give you a sense of the range of answers that have been provided to that question. So the first question that moral philosophy asks is why we would even want to be moral.

It then asks the particular question, "What should we do insofar as we seek to act morally?" And about that we've had very little to say so far. We know that according to Aristotle, to be brave, one acts as the brave one does. But Aristotle just put forth bravery as a virtue without any explanation of what it was that made bravery fall into the category of virtues and cowardice fall into the category of vices, other than the very general analysis of the mean. And we haven't looked at any specific claims about particular actions being morally acceptable or not. So the second sort of thing that a moral theory tries to do--and, again, I'll give some examples in a minute--is give us specific answers to the question "Is this act morally OK?"

In addition, what a moral theory aims to do is to tell us why we gave the answers that we did in question two: "In virtue of what common feature are the acts that fall into the category of moral to be distinguished from the acts that fall into the category of immoral?"

So what do answers to these three questions look like? Let's start, since we've encountered it already, with the question of moral motivation.
So one category of answers that one might give to why it is that we would be moral, that is, act in keeping with the constraints of morality, is a self-interest account. So one might give an account which says that when you behave morally, things run smoothly. As Socrates argues in response to Glaucon, when you behave in keeping with the constraints of morality, there is harmony in your soul. And that provides you with the possibility of a certain kind of flourishing. Or you might have what's implicit in the very first argument that Glaucon gives, a view that morality provides a certain kind of stability in society. Each of us behaving in pro-social ways increases the likelihood of others around us behaving in pro-social ways. And so we reach a kind of equilibrium state whereby things run smoothly if everybody behaves pro-socially. And we'll talk about that again at the beginning of the political philosophy section. So one kind of self-interest theory is a theory that appeals to a certain kind of coordination, either a coordination among the parts of the soul, or coordination across individuals in a society.

A second kind of self-interest theory is what we might call a "get good stuff" theory. This lies at the heart of some religious traditions. Here's what you get if you act in keeping with the constraints of morality: you get eternal life in a really nice place. Here's what you get if you don't act in keeping with the constraints of morality: you get eternal continuation in a really unpleasant place. So the notion that there is some reward beyond earth for behaving in moral ways is an example of a self-interested justification of morality.

Or one might give the sort of justification that Adeimantus gives in response to Glaucon's challenge. Adeimantus points out that one of the things morality provides you with is enhanced reputation. So as a result of behaving in keeping with the standards of morality, you come to be perceived as having behaved in that way, and that reputation brings to you some value.

Or it might be, as Aristotle discusses at the end of Book 10, that society is structured in some way that motivates people to act in keeping with the constraints of morality because doing so is a way of avoiding punishment. Many of us obey speeding laws for precisely that reason.
We obey them most especially when there are flashing lights in our vicinity. But we can have an internalized version of the reduction of punishment as well. Part of the Freudian picture that we heard about in the Divided Soul lecture discussed the development of conscience as an internalization of external rules, whereby the superego gets upset when the id behaves in ways that aren't in keeping with the constraints of morality. And one can have a non-Freudian version of that as well that appeals to the notion of conscience. So the idea that what morality brings you is either the possibility of salvation or enhanced reputation or the possibility of not being punished by external laws or the possibility of not being punished by one's conscience is another version of a self-interest theory. So that's one kind of justification one might provide for behaving in moral ways.

A second, very different kind of justification says the reason we act morally is that normative features are fundamental features of the world. There's a brute "ought" out there. It's a fact about reality that what we are morally obliged to do is to act in whatever ways it is that morality demands; and it is not out of self-interest, but simply because we are responsive to that feature of the world, that we are motivated to act morally.

A third kind of justification, a third kind of explanation of moral motivation, is what we might call a factive theory, one that says roughly: this is just the way people are. So evolutionary accounts say that pro-social behaviors have been selected for, perhaps because they enable the resolution of coordination problems. But whatever the explanation, pro-social behavior, says this theory, has been selected for. So it's a brute fact about the world that we behave in pro-social ways--not a brute normative fact about the world, just a brute descriptive fact about the world that we behave in that way. Or you might have, not an evolution-based version of this, but a version that says: look, this is just the way the human soul expresses itself when it conforms to its natural state.

So you might have a theory of morality that says the reason to behave morally is out of self-interest. You might have a theory of morality that says the reason to behave morally is because of altruism.
You might have a theory of morality that says the reason to behave morally is just that that's the way we do behave. Or you might have some sort of combination theory. And we've talked already about the first of these, the self-interest theory. And as this section of the course goes on, we'll talk more about some of the other sorts of explanation. So those are some examples of the kinds of answers that are given to the first question, the question of moral motivation.

What kinds of issues arise when we think about the question of moral behavior? Well, you saw a number of examples of this in the reading that we did for today. One kind of question that moral theories set out to provide answers to is the question of whether it's either morally required or morally permitted to harm one person in order to help many others. So Bernard Williams' story of Jim and the Indians, where Jim is presented with a case where if he's willing to shoot one of 20 prisoners, the other 19 will be set free, whereas if he's unwilling to shoot that one, all 20 of them will be shot. Or the Omelas story, where we're told the story of a society whose flourishing depends upon the suffering of a single child. Or the trolley cases that I presented you with in the very first lecture, where a trolley is headed down a track towards five people, and we're in a position to deflect the trolley in some way so that one ends up being killed instead.

Those are examples of schematic representations of the kinds of questions that moral theories confront all the time. Whenever we think about deferrals of threat--is it right to quarantine a population suffering from a particular illness in a way that will cause harm to them but benefit the rest of society?--we are thinking about these sorts of questions. So one sort of question that moral philosophy aims to answer is the question of whether this sort of trade-off is morally required or morally permitted. A particularly profound version of that question comes out when we think about what our moral duties are to those who are less fortunate.
So the philosopher Peter Singer has famously argued that the entire structure of the first world and the third world is a morally illegitimate one, because it involves an unwillingness on the part of those in the first world to do what is morally demanded of them, namely to take a large proportion of their resources and redistribute those to people who are suffering from extraordinarily easily curable illnesses. People who don't have mosquito nets, people who don't have vaccinations, people who don't have clean water, people who don't have access to basic medical care in the first five years of life that would, for example, prevent lifelong blindness. So another question that moral theory asks--in some ways a version of the earlier question--is, in general, what our duties are to those who are less fortunate.

It also asks questions like this: Are these sorts of behaviors morally mandatory? Is it morally mandatory for us to behave in ways that help the environment, say by recycling? Is it morally mandatory for us to act in certain ways towards non-human animals, perhaps by being vegetarian? Is it morally required of us to worship a deity in some way? Is religious worship something that's morally mandatory? Is something like respect for elders, a fundamental part of traditional moral frameworks, morally mandatory?

And moral theories also ask questions like: Are these kinds of things morally permissible? Is abortion morally permissible? Is euthanasia morally permissible? Is capital punishment morally permissible? How about sex before marriage? How about lying for one or another motivation? How about, as Kant's going to argue in our next reading, failing to cultivate one's talents, which Kant thinks is a violation of a moral mandate?

So these are the kinds of questions that moral theories aim to provide answers to. And it might seem like a heterogeneous bunch. But it gives you a sense of the generality of explanation that moral theories seek to provide. So let's turn to four major moral theories in the Western tradition and think about how it is that they could, simply and categorically, provide answers to this wide range of questions. So the kind of moral theory that we're going to discuss in today's lecture primarily is a moral theory known as utilitarianism.
It tells us an act is moral insofar as it produces the greatest good for the greatest number. It takes as its fundamental notion the notion of good. And it gives us answers to the questions that we've previously asked ourselves, as long as we know how goods are distributed in response to them. So if we know what it is that produces happiness in sentient beings, then utilitarianism will give us an answer to the question of whether being vegetarian is morally mandated. It'll tell us to take the amount of happiness that's distributed across sentient beings, and look at which distribution is going to maximize the amount of happiness. So utilitarianism gives us one sort of systematic answer to this question.

A second sort of answer to this question, which we'll discuss in lecture on Tuesday, is the answer given by Kant and the deontological tradition. What Kant says is that an act is moral insofar as it's performed as the result of acting with the correct sort of motivation. It takes as its primary notion not the notion of goodness, but rather the notion of rightness. And on that basis, Kant is going to give a bunch of answers to our specific questions. In particular, he's going to argue that it's not OK to sacrifice the good of the one for the good of the many. And he's going to argue that lying is morally unacceptable. And we'll talk next class about how, from a very abstract principle like this one, one can derive these sorts of particular answers.

We've already looked at the ancient traditional answer to this in Aristotle, that an act is moral insofar as it's performed as the result of having a virtuous character. And so what Aristotle says to us is: look and see how the well-raised one would behave. And once you see what it is that the virtuous one does, you can learn through his or her example what it is that morality demands of us.

And a final tradition about which we won't have much to say in this lecture is, of course, a basis for morality which has stood at the center of Western culture for at least 2,000 years, which is the idea that an act is moral insofar as it conforms to what the divinity demands of us.

So one can provide an explanation, as the utilitarian does, that makes appeal to the notion of goodness. One can provide a justification that makes appeal, as deontology does, to the notion of rightness.
One can provide a justification that makes appeal, as virtue ethics does, to the notion of virtuousness. Or one can provide an account that makes appeal, as religious ethics does, to the notion of divine mandate.

So let's think a little more about the relation among these three particular theories, the ones on which we're going to focus in the context of this class, as a way of coming to understand the particular theory that we're thinking about today, namely utilitarianism. So virtue ethics focuses its attention on the actor, not the person who stands up on the stage and recites lines from Hamlet, but rather the actor who performs an act that will be moral or not. Deontology focuses its attention on the act. It looks not at who's doing it, but rather at what act is done and under what description. Consequentialism, by contrast, looks not at who does the act and looks not at the description under which the act is done, but looks rather at the consequences that the act brings about. And we've encountered virtue theory in the voice--see if you recognize this gentleman--in the voice of Aristotle. We will encounter deontology in the voice of Immanuel Kant. And what we're going to discuss today is consequentialism and, in particular, utilitarianism in the voice of John Stuart Mill.

So let's look now at what it is that Mill has to say about the fundamental nature of morality. So what Mill contends--and let me say we're coming up on the clicker slide, so if you're zoning out, it's time to pull out your clicker. And in about four or five minutes, we'll be doing some polls. So Mill contends that the right kind of framework for thinking about moral theories is a consequentialist framework, so not one that looks at the actor as virtue theory does, not one that looks at the act as deontology does, but rather one that looks at the consequences in the way that consequentialism does. The degree of moral rightness of an act is determined by its consequences. And Mill provides a particular version of this. He says the degree of moral rightness of an act is determined by a particular kind of consequence, namely the utility that the act produces.
So you might have a consequentialist theory that says the degree of moral rightness of an act is determined by its consequences, namely, for example, the amount of bananas that it produces. It would be an odd moral theory, but it would be a consequentialist theory that says the degree of moral rightness of an act is determined by its consequences, in particular by its degree of banana production. So that would be a very general kind of consequentialist theory. Utilitarian theories are a particular kind of consequentialist theory that says the degree of moral rightness of an act is determined by its consequences, in particular by the amount of utility--usefulness, happiness in Mill's account of what kind of utility we're concerned with--by the amount of utility that it produces. That means, to remind you of the handouts that you got in section this week, that to be utilitarian is a sufficient condition to be consequentialist, but not a necessary one. And to be a consequentialist is a necessary condition on being a utilitarian, but not a sufficient one. And if what I just said isn't completely obvious to you, take a look at the second side of the handout that you got in section this week.

So Mill not only makes a utilitarian commitment, he actually, in the course of making that commitment, makes two very particular claims that I now want to ask you to think about in light of some particular cases. The first is the famous formulation of the greatest happiness principle, which in your text appears right at the beginning, on page 77 in the reprint. Mill says famously, "Actions are right in proportion as they tend to promote happiness, wrong as they tend to promote the reverse of happiness." And he continues a few pages later to clarify that what he means is not the agent's own happiness, but that of all concerned. In a minute we'll think through what that implies.

The second commitment of Mill's that I want you to think about is one that runs straight in opposition to what we talked about in Aristotle last week. Mill says the motive has nothing to do with the morality of the action: "He who saves a fellow creature from drowning does what is morally right, whether his motive be duty or the hope of being paid for his trouble."
The motivation with which an act is performed, says Mill, tells us nothing about the morality of the act. He doesn't deny that it tells us something about the actor. He's perfectly happy to say that somebody who does the act out of the hope of being paid is in some way different from the person who does the act out of a sense of duty or moral obligation. But as far as the moral value of the act itself is concerned, Mill thinks there is no difference.

So that's the first question that I wanted to ask you. Take the case that Mill's described. You see somebody drowning in a lake. And the question is this: is your act of saving that person morally right, morally virtuous, moral only if it's done out of duty--"I want to save that person because it's the right thing to do," or some other sort of pro-social motive? If you think that, push one. Or is the act morally right regardless of its motive, even if you do it because there's a big sign up on the trees that says "Save a drowning person: $10,000 reward," and so you think, "$10,000, that's good money," and in you jump into the water?

All right, I'll push the ten second timer. We have roughly 50 of you, 70 of you. Good, numbers are jumping up. Let's just see whether instinctively this room is filled with Kantians or filled with consequentialists. So, interestingly, there's a pretty close to even split. Most of you seem to side with Mill on the question: an act is morally right regardless of the motive. But a sizable portion of you are going to be pleased when we read Kant, who gives the answer that you offer. And one of the things that we want to do in section next week is to have those of you who fall on one or the other side of this question talk through with others around you why it is that you either fell into this group or you fell into this one.

So, so far Mill's doing pretty well. He has a slight majority of you on his side. I now want to present you with a series of cases to ask what you think about the greatest happiness principle. Remember, Mill says that an act is moral insofar as it produces the greatest happiness for the greatest number, where we're not concerned with how that happiness is distributed across individuals. So let's start with the following case. There's an act which you can perform which will give you 100 units of happiness.
Each of those colorful smiley faces--aren't you all feeling pro-social in their light?--each of those smiley faces represents 10 units of happiness. So suppose you have a fan. It's a very hot day, and you have a fan that blows upon you. And the coolness of that fan just provides you with 100 units of happiness. Or suppose you have some delicious cookies, and eating those cookies provides you with 100 units of happiness. In addition, performing that act provides 100 other people with one unit of happiness each. Suppose your fan blows a little bit outside of your room, so that in addition to cooling you off 100 units, it cools the people in the next room off one unit apiece. Or suppose that when you finish eating your 100 cookies, there are 100 cookies left over, and each of 100 people gets to have one cookie, and it brings them one unit of happiness. OK. So that's act one. It has a total of 200 units of happiness. You get 100 units, and each of 100 other people gets one unit.

So your choice is between performing that act and performing an act which I'm going to call act two, which has exactly the same effect for you, right? It brings you 100 units of happiness. So here you are with your 100 units of happiness. But in this case, if you made a slight change in the angle of your fan, for example, you would be just as cool as you were in the first case. But it would double the amount of happiness of the people on the outside, right? You angle this fan slightly differently. And instead of being cooled one unit, the people are cooled two units. Or instead of throwing out your trash at the end of eating your cookies so that people only get one unit of happiness, you leave the other cookies around so that everybody else gets two units of happiness. In this case, by performing an act which has no different consequences for you as far as happiness is concerned, you double the happiness of a hundred other people with respect to the act. So the question is simply this.
Given the choice between act one, which brings a total of 200 units of happiness, 100 units for you and one unit for each of 100 other people, or act two, which brings the same amount of happiness to you, but 200 units of happiness to others and hence a total of 300, what do you think? Push one if you think only act one is moral, that is, only the one where you get 100 units and everybody else gets one. Push two if you think only act two is moral, the one where you redirect your fan slightly or whatever it is that you do to double the happiness of those around you. Or push three if you think that either one of those is a moral act. OK. And I'm going to turn our timer on so that we have 10 seconds to see how it is that your first take on Mill's greatest happiness principle goes. And let's see how the numbers come out.

OK. So very few of you think that the moral act is the one whereby you get 100 units of happiness, and the 100 others get one unit. But you're roughly equally divided on the question of whether morality mandates that you redistribute your resources in such a way that they go also to others. So most of our discussion in the remaining slides will be concerned with when this 44 percent moves over to another place. But I'll be interested to see how all of this plays out.

OK. So that was our first case, the case where at no cost to yourself you can bring happiness to others. Let's now contrast exactly the same first case--you get 100 units of happiness, 100 others get one unit each, so there's a total of 200 units--with a second case, which we'll call act three, where in order to redirect the goods, you bring your own happiness down to 50 units. So in order to redirect your fan in such a way that the other people get two units each, you have a slight reduction in the amount of utility for you. But it's still the case that this is more beneficial overall. So in act one, you get 100 units of happiness, and other people get one each. In act three, you've reduced your happiness, you've redirected the fan, you're eating fewer of the cookies, but you've distributed it in such a way that others get their two units. OK.
So the question is: only act one, where you get 100 units, and everybody else gets one; only act three, where you get 50 units, and everybody else gets two, but the total is higher; or either one? And, again, we'll open polling with the ten second timer. And let's see how the numbers go.

All right. So a little bit of change over toward either act being moral. More of you think that it is morally required to increase the happiness of those around you when there's no harm to yourself than think it is required when there is some cost to yourself. Notice that Mill is very clear that what is morally required is number two here, that only the act which brings the greater amount of utility to the community as a whole is morally required.

Let's turn to a third case. The first version is the same as before. You get 100 units. Everybody else gets one. Now, in order to do the good for others, you have to experience some kind of disutility. You turn your fan totally away from yourself. But the result of that is that 100 others get three units each. So now the question is this. Is the act that is morally permitted of you, the act that is a moral act, the one that we've initially presented, or the one where you have some disutility, but other people get utility? Or are these of equal value? Notice the totals: 200 units versus 250 units. So the first case is our classic case; the second case is one where you experience some discomfort. But in exchange for that discomfort, other people, not you, experience some good. OK. Let's turn on the 10 second timer and see how this comes out.

OK. So in this case, it appears that very few of you are siding with Mill. A certain number of you are here, saying that what we need to do is to provide the greatest good for the greatest number. And a sizable percentage of you is coming to think that perhaps morality doesn't demand any sacrifices of you.

Let's go on. The next case is exactly like the last one, except it's somebody else who has 50 units of disutility in order to distribute three units of utility to others. So here's the case. Either you get 100 units of happiness, and others get one unit each, for a total of 200 units.
Or let's assume you preserve your 100 units of happiness here. We're leaving you out of the equation. And the question is this. Suppose you are distributing resources for society as a whole. Actually, in this case act one ought also to involve someone else: so it's either the case where someone else gets 100 units of happiness, and 100 others get one unit each, or a case where somebody else loses 50 units of happiness, but 100 others get three units each. OK, so let's replace this "you" in act one with someone else and ask the question of whether what morality demands is a distribution of resources across society which produces 200 units of good in this form, or a distribution of resources across society which produces 250 units of good in this form--minus 50 units of suffering by one in exchange for 300 units of benefit to others. Which one of those do you take to be what morality demands? And five, four, three, two, one. And let's see if there's any change from the previous case.

OK. All of a sudden, here we get a radical shifting of the graphs. Almost 50 percent of you are clear that the act that requires bringing suffering to one person, a reduction of utility, is not morally mandated. Later in the section of the class that we are in right now, we will consider the question of whether there's actually a fixed fact about where the baseline is, and whether this radical shift in people's psychology about what morality demands, which we get when we move from increasing utility to decreasing utility, is in fact picking up on an artificial difference.

Let's move to our final case. So our final case is one where either someone gets 100 units of happiness, and 100 others get one unit each, so there's 200 units of happiness, or a case--hmm--where someone gets 5,000 units of happiness taken away, but 100 other people get 500 units each, so that there are 45,000 units of happiness produced by the performance of act six. So the case here is either a place where nobody has anything bad going on, but the total units of happiness are only 200, or one person has a lot of suffering going on, but the total units of happiness are 45,000. OK. And let's put the poll on with our 10, nine, eight, seven, six seconds and see how it is that you come out on this question. All right.
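To make the arithmetic behind these polling cases explicit, the totals cited above can be checked with a minimal sketch like the following. This is illustrative only, not part of the lecture or its handouts: the unit values restate the hypotheticals above, the numbering follows the lecture's own labels where it gives them ("act one" through "act three," and "act six") and fills in "act four" and "act five" for the unlabeled cases, and the dictionary, function, and printed ranking are assumptions of the sketch.

# Happiness changes for each hypothetical act described in the lecture,
# one entry per affected person. "act four" and "act five" are inferred
# labels for the two cases the lecture leaves unnumbered.
acts = {
    "act one: you +100, 100 others +1 each": [100] + [1] * 100,
    "act two: you +100, 100 others +2 each": [100] + [2] * 100,
    "act three: you +50, 100 others +2 each": [50] + [2] * 100,
    "act four: you -50, 100 others +3 each": [-50] + [3] * 100,
    "act five: someone else -50, 100 others +3 each": [-50] + [3] * 100,
    "act six: someone else -5000, 100 others +500 each": [-5000] + [500] * 100,
}

def total_utility(changes):
    # The greatest happiness principle, read strictly: sum everyone's
    # happiness and ignore how it is distributed or who bears the loss.
    return sum(changes)

# Rank the acts the way a strict utilitarian would.
for name, changes in sorted(acts.items(), key=lambda kv: total_utility(kv[1]), reverse=True):
    print(f"{name}: {total_utility(changes)} units")

Run as written, this ranks act six highest at 45,000 units and act one lowest at 200, which is exactly the ordering a strict reading of the greatest happiness principle delivers; the polls are probing whether your judgments track that ordering.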
On this question, which I know already from many of your reading responses to the Omelas case, on which this is modeled, it seems clear to a lot of you that the suffering of one is not something that morality demands of us, even if the result is an increase in general utility.

Now, as you know, the Omelas story tells the story of a society where there is a community of people, each of whom has thousands and thousands of units of utility. They're incredibly happy in how they live. But that society exists as it does only because there is a child locked away whose suffering permits the society's joy. And as you know, in the story, when children reach adulthood, they are brought to see the suffering child. And most of them return to the community of which they were a part, aware of this, shaped by this, but willing to tolerate it. A smaller number of them, upon seeing this, leave the society altogether.

Now, what I want you to think about, in light of your answer a few minutes ago about what is demanded by morality, are some things that seem to have the structure of the Omelas story. I take it that at some point in the last 18 years or so, someone has let you in on the secret that the pleasure that comes from eating meat depends, as does the joy of Omelas, upon the suffering of a large number of non-human animals. I take it that you noticed last week and the week before, when the snow was falling on Yale's campus and the routes were cleared for you to get to classes, that the possibility of you walking across campus depended upon a large number of people whose lives are already difficult getting up very early in the morning and doing back-breaking shoveling work in the ice cold. I trust that somebody has let you in on the secret that the clothes that you wear, and from which you take a certain amount of pleasure, are in a great number of cases produced as a result of something quite close to the Omelas story, namely child labor. Indeed, I take it that most of you are aware that the structure of the modern world bears a rather shocking similarity to the Omelas story. The possibility of flourishing in the first world is in many ways a consequence of an inequitable structure with regard to the third world. Now, almost all of you gave an answer that said this sort of structure is at least schematically morally acceptable.
And the question is what is going on there. Le Guin in her story suggests that you, as college students, are at exactly the age where the salience of this may affect you most profoundly. So she writes--after being exposed to these sorts of facts--"often the young people go home in tears or in a tearless rage" when they've seen the child on whose suffering the fate of their society depends and face this terrible paradox. "They may brood over it for weeks or years. But as time goes by," she says, "they begin to realize that even if the child could be released, it would not get much good of its freedom: a little vague pleasure of warmth and food, no doubt, but little more."

Now, one of the interesting things about literature, in contrast to philosophy, is that it leaves it to you to interpret what's going on. And the fundamental question, I think, of the Omelas story is whether this sentence, "They begin to realize that even if the child could be released, it would not get much good of its freedom, a little vague pleasure of warmth and food, no doubt, but little more," is in fact true--or whether it is the sort of rationalization that recognition of one's comfort brings with it.

She goes on, perhaps explaining, perhaps protesting too much, to say the following: "It is too degraded and imbecile to know any real joy. It has been afraid for too long ever to be free of fear. Its habits are too uncouth for it to respond to humane treatment." Indeed, think about arguments about bringing democracies to countries with no tradition of democracy. "After so long, it would probably be wretched without walls about it to protect it, and darkness for its eyes, and its own excrement to sit in. Their tears at the bitter injustice dry when they begin to perceive the terrible justice of reality, and to accept it."

Now, I don't have an answer to which of the two readings that I proposed is the right one to make of the Le Guin case. Is she contending there, or helping you to recognize, that that early feeling of rage at the fact that your well-being depends upon the suffering of others is, in fact, an immature response to an inevitable structure of inequity in the world?
Or is she suggesting that in coming to think that way you are letting go of your only chance for moral behavior, that it's at the moment when you are profoundly exposed to injustice, and it hits you in the form of tears or rage, that you are in a position to bring that into your life?

She suggests, regardless, that living your life with your eyes open to the fact that your well-being depends upon the suffering of others is morally mandatory. "It is their tears and anger," she continues, "the trying of their generosity and the acceptance of their helplessness, which is perhaps the true source of the splendor of their lives. They know that they, like the child, are not free," that they live in a world of mutual interdependence. "They know compassion. It is because of their awareness of suffering in the world," she writes, "it is because of that child, that they are so gentle with their children. They know that if the wretched one were not there sniveling in the dark," if we were not provided with the resources that let the first world thrive as it does, "the other one, the flute player, could make no joyful music." All of the things that we benefit from, the greatness of this university, wouldn't be here. "No joyful music as the young riders line up in their beauty for the race in the sunlight of the first morning of summer."

So I want to leave you with that as one of the many things which we can take from the Omelas story, and as an introduction to what really goes into making a claim like the one Mill does. And what we'll talk about next class, in the context of Kant, are some systematic critiques of the utilitarian framework which are offered in the writings of Bernard Williams, and an alternative which is offered in the writings of Immanuel Kant. So I'll see you on Tuesday.