WEBVTT

00:00.680 --> 00:04.760
PROFESSOR: All right, so where
we are in the context of the

00:04.760 --> 00:09.350
course is on our second
lecture on punishment.

00:09.350 --> 00:13.670
So you'll recall that before the
break we had been thinking

00:13.670 --> 00:19.030
about what sorts of moral norms
are appropriate: What

00:19.030 --> 00:23.030
sort of principle underlies our

00:23.030 --> 00:25.730
specification of moral norms?

00:25.730 --> 00:30.740
And we're now looking at that
same cluster of questions from

00:30.740 --> 00:32.180
the opposite side.

00:32.180 --> 00:35.130
We're looking at that cluster
of questions from the

00:35.130 --> 00:40.380
perspective of what ought to go
on when somebody violates

00:40.379 --> 00:46.769
those norms, either qua moral
norms or qua legal norms as

00:46.770 --> 00:51.370
encoded in a society's
set of statutes.

00:51.370 --> 00:55.360
And we were looking at a number
of justifications that

00:55.360 --> 01:00.020
might be offered for the
practice of punishment.

01:00.020 --> 01:02.260
And we talked briefly, and I'll
say a little bit more

01:02.260 --> 01:05.530
today, about the bottom two
of these: restitution and

01:05.530 --> 01:06.720
rehabilitation.

01:06.720 --> 01:11.920
But we spent the bulk of the
second half of Tuesday's

01:11.920 --> 01:14.920
lecture thinking about
backward-looking

01:14.920 --> 01:19.270
justification, which in many
ways is analogous to the sort

01:19.270 --> 01:23.300
of justification of moral
constraint that we see in the

01:23.300 --> 01:25.230
deontological picture.

01:25.230 --> 01:30.010
What I want to do now is to look
at the first of these in

01:30.010 --> 01:34.350
some detail, and to think about
what a forward-looking

01:34.350 --> 01:40.030
or consequentialist picture
of punishment looks like.

01:40.030 --> 01:44.130
And to think about the ways in
which the problems that arise

01:44.130 --> 01:48.780
for a consequentialist account
of punishment seem to echo a

01:48.780 --> 01:52.100
number of the problems that
arise for a consequentialist

01:52.100 --> 01:54.740
account of morality
more generally.

01:54.740 --> 01:59.060
So just to remind you what the
characterization of punishment

01:59.060 --> 02:02.210
that we're working with looks
like, of which the most

02:02.210 --> 02:06.380
important clause is going to be
the first, we're making use

02:06.380 --> 02:10.550
of the classic mid-century
characterization of punishment

02:10.550 --> 02:14.440
that comes out of the
Anglo-American legal

02:14.440 --> 02:16.700
philosophical tradition.

02:16.700 --> 02:21.580
And what's key in thinking
through the justification of

02:21.580 --> 02:25.040
punishment from a
consequentialist perspective

02:25.040 --> 02:29.970
is to remember that punishment
is done in response to a legal

02:29.970 --> 02:33.430
offense, is done by a judge
to the offender, and so on.

02:33.430 --> 02:39.450
But it's also something that, as
part of what punishment is,

02:39.450 --> 02:44.880
involves the imposition of
unpleasantness and suffering

02:44.880 --> 02:47.750
on the person to whom
conditions two

02:47.750 --> 02:49.670
through five apply.

02:49.670 --> 02:54.620
And the question we're trying to
answer for ourselves is how

02:54.620 --> 02:59.540
a state could ever be justified
in bringing

02:59.540 --> 03:04.320
deliberate disutility to
one of its members.

03:04.320 --> 03:09.900
So what the harm reduction or
consequentialist outlook says

03:09.900 --> 03:15.760
is that the sole goal of, or
the sole justification for,

03:15.760 --> 03:22.020
punishment is to prevent or
deter future wrongdoing.

03:22.020 --> 03:26.340
Consequentialist accounts,
in their pure forms, are

03:26.340 --> 03:29.780
interested solely
in consequences.

03:29.780 --> 03:32.020
And to the extent that they're
concerned with the

03:32.020 --> 03:37.100
distribution of utility across
members of a society, what

03:37.100 --> 03:42.240
they're interested in is the
maximization of utility from

03:42.240 --> 03:44.730
that moment onwards.

03:44.730 --> 03:48.030
They're called future-directed
because though they may take

03:48.030 --> 03:51.630
the past into account to the
extent that the memories of

03:51.630 --> 03:55.180
individuals take the past into
account, they don't take the

03:55.180 --> 03:58.050
past into account as something
relevant to the

03:58.050 --> 04:00.760
calculation of utility.

04:00.760 --> 04:03.780
The sole goal of punishment is
to prevent or deter future

04:03.780 --> 04:04.640
wrongdoing.

04:04.640 --> 04:08.460
And the sole justification for
punishment is to prevent or

04:08.460 --> 04:10.960
deter future wrongdoing.

04:10.960 --> 04:18.690
Now if a utilitarian account is
to serve as a justification

04:18.690 --> 04:22.280
for punishment as a practice,
punishment being the

04:22.280 --> 04:26.860
deliberate bringing about of
disutility to a member of the

04:26.860 --> 04:32.820
community who has violated some
moral or legal norm, then

04:32.820 --> 04:38.730
it must be the case, for
it to be justified on

04:38.730 --> 04:44.970
consequentialist grounds, that
punishment is an effective,

04:44.970 --> 04:48.760
and indeed, on some accounts,
that punishment is the most

04:48.760 --> 04:53.540
effective mechanism for bringing
about the prevention

04:53.540 --> 04:56.390
or deterrence of future
wrongdoing.

04:56.390 --> 05:01.230
If there's some alternate way
of bringing about an equally

05:01.230 --> 05:06.520
beneficial result, then it
can't be justified on

05:06.520 --> 05:11.720
consequentialist grounds to
impose harm or suffering on an

05:11.720 --> 05:12.770
individual.

05:12.770 --> 05:16.350
That's what the consequentialist
picture says.

05:16.350 --> 05:19.620
And notice, this is the mirror
image of some of the

05:19.620 --> 05:22.980
perplexities that we found
ourselves getting into around

05:22.980 --> 05:28.180
things like the surgeon case,
where if all one takes into

05:28.180 --> 05:32.840
consideration are distributions
of utility, then

05:32.840 --> 05:36.990
a lot of the factors that go
into common sense reasoning

05:36.990 --> 05:43.250
about cases seem to fall out
with respect to their relevance.

05:43.250 --> 05:48.870
So the suggestion, if the
empirical hypothesis is

05:48.870 --> 05:53.660
correct, is that punishment is
the most effective mechanism

05:53.660 --> 05:57.430
for prevention or
incapacitation.

05:57.430 --> 06:02.080
That is, it's the best way of
precluding the possibility of

06:02.080 --> 06:07.430
wrongdoing on the part of the
perpetrator, and/or that it's

06:07.430 --> 06:12.720
the best mechanism for deterring
wrongdoing, either

06:12.720 --> 06:17.230
on the part of that individual
or on the part of others who--

06:17.230 --> 06:20.900
as the result of making public
the practice of punishment--

06:20.900 --> 06:23.680
come to recognize that what
you might think of as the

06:23.680 --> 06:26.150
cost-benefit analysis
of performing a

06:26.150 --> 06:28.050
particular act changes.

06:28.050 --> 06:30.740
So Jeremy Bentham
famously says--

06:30.740 --> 06:35.630
Jeremy Bentham was the
utilitarian predecessor of

06:35.630 --> 06:40.100
John Stuart Mill, the person who
originally articulated in

06:40.100 --> 06:45.090
the modern philosophical
tradition how it is that we

06:45.090 --> 06:46.700
should think of
consequentialism--

06:46.700 --> 06:51.100
so Jeremy Bentham famously said,
what punishment does is

06:51.100 --> 06:55.630
to change the calculus of costs
and benefits associated

06:55.630 --> 06:59.090
with a particular act of
wrongdoing, and to attach to

06:59.090 --> 07:03.270
something which is generally
beneficial some sort of cost,

07:03.270 --> 07:06.700
so that that can enter
into the calculus.

07:06.700 --> 07:11.020
In the literature on
conditioning and

07:11.020 --> 07:14.100
reinforcement, punishment is
something that changes the

07:14.100 --> 07:17.740
contingencies associated
with a particular act.

07:17.740 --> 07:21.890
It brings a penalty along with
a reward to something that

07:21.890 --> 07:24.100
typically carries
only a reward.

07:24.100 --> 07:27.620
Or it reduces the reward
associated with something that

07:27.620 --> 07:30.470
generally brings
a high reward.

07:30.470 --> 07:34.500
Now it should be clear to all
of you, because we went

07:34.500 --> 07:38.060
through the mirror image of
this in the positive case,

07:38.060 --> 07:42.290
that thinking merely in terms
of consequentialist

07:42.290 --> 07:48.340
justifications for punishment
seems to carry with it two

07:48.340 --> 07:53.000
kinds of problems. The first
problem is that the

07:53.000 --> 07:57.980
consequentialist justification
seems to under-generate

07:57.980 --> 08:02.460
reasons for punishment, if we
remember that punishment

08:02.460 --> 08:06.420
involves the deliberate bringing
about of a harm.

08:06.420 --> 08:11.520
So, for example, it may well
be, with respect to the

08:11.520 --> 08:17.590
question of incapacitation, that
if our sole goal is to

08:17.590 --> 08:22.840
incapacitate individuals who are
likely to commit crimes in

08:22.840 --> 08:26.180
the future that will bring
disutility to society as a

08:26.180 --> 08:32.100
whole, if our sole goal is to
incapacitate them, then there

08:32.100 --> 08:37.860
may well be equally effective
non-punitive alternatives.

08:37.860 --> 08:41.960
Remember condition one in
our characterization of

08:41.960 --> 08:43.320
punishment.

08:43.320 --> 08:45.630
Punishment involves

08:45.630 --> 08:49.190
state-imposed pain or suffering.

08:49.190 --> 08:53.430
It involves the deliberate
bringing about of disutility

08:53.430 --> 08:55.730
to an individual.

08:55.730 --> 09:01.650
Presumably, simple
incapacitation can be done in

09:01.650 --> 09:07.550
a way that does not involve
anything more than the minimum

09:07.550 --> 09:11.970
amount of imposition of
suffering on the individual.

09:11.970 --> 09:17.460
So any amount more than required
for incapacitation

09:17.455 --> 09:25.035
can't be justified on this
consequentialist ground.

09:25.040 --> 09:31.170
Moreover, it seems that there's
the possibility of a

09:31.170 --> 09:34.990
non-punitive alternative, not just
with regard to incapacitation,

09:34.990 --> 09:39.780
but also with regard to the
question of deterrence.

09:39.780 --> 09:45.020
Presumably, if one's real
concern is to reduce the crime

09:45.020 --> 09:49.750
rate, then there are things
that are a good deal less

09:49.750 --> 09:55.130
expensive than the prison system
that could bring about that

09:55.130 --> 09:57.310
desired goal.

09:57.310 --> 10:01.920
Creating social situations in
which people have access to

10:01.920 --> 10:06.720
education and access
to employment may--

10:06.720 --> 10:08.940
an empirical question--

10:08.940 --> 10:14.060
be a more effective means of
deterring crime than public

10:14.060 --> 10:15.210
punishment.

10:15.210 --> 10:18.740
Particularly if one of the
results of jailing

10:18.740 --> 10:23.260
individuals, many of whom are
parents, is to leave the next

10:23.260 --> 10:27.920
generation of children without
the kind of

10:27.920 --> 10:31.210
stable households that we
know, from our earlier

10:31.210 --> 10:34.620
discussions, are what allow
people to have

10:34.620 --> 10:36.200
well-ordered souls.

10:36.200 --> 10:39.740
Having many children grow up in
households that don't provide

10:39.740 --> 10:41.600
them with that sort
of stability

10:41.600 --> 10:44.410
might carry social costs.

10:44.410 --> 10:48.870
So let me reiterate, these
are empirical questions.

10:48.870 --> 10:53.540
It is an empirical question
whether the most effective

10:53.540 --> 10:58.890
form of incapacitation is one
that in addition involves the

10:58.890 --> 11:02.300
imposition of more suffering
than is required for

11:02.300 --> 11:04.370
incapacitation at the time.

11:04.370 --> 11:08.300
Perhaps that has better
long-term consequences in

11:08.300 --> 11:10.770
terms of preventing
future crime.

11:10.770 --> 11:15.690
Likewise, perhaps the most
effective form of deterrence

11:15.690 --> 11:20.530
is a kind of punishment that
brings with it a publicity

11:20.530 --> 11:24.440
that causes others to avoid
behaviors of that kind.

11:24.440 --> 11:30.360
But if it turns out that
prevention requires only simple

11:30.360 --> 11:32.950
incapacitation and that
deterrence seems more

11:32.950 --> 11:37.050
effective through some other
means, consequentialism can't

11:37.050 --> 11:39.890
give us a justification
for punishment.

11:39.890 --> 11:45.070
So the first problem with the
consequentialist argument, if

11:45.070 --> 11:47.040
what you're trying to find
is a justification for

11:47.040 --> 11:49.870
punishment, is that it
under-generates.

11:49.870 --> 11:55.620
The second problem is that in
certain cases it seems to

11:55.620 --> 11:58.500
over-generate punishment.

11:58.500 --> 12:00.310
So it seems--

12:00.310 --> 12:03.670
and we read about this in the
John Rawls piece that we read

12:03.670 --> 12:04.970
for last class--

12:04.970 --> 12:08.650
to license what's sometimes
called telishment.

12:08.650 --> 12:12.070
Where telishment means focusing
on the telos,

12:12.070 --> 12:16.720
focusing on the goal, focusing
on the end, rather than

12:16.720 --> 12:19.070
focusing on the process.

12:19.070 --> 12:22.160
So how might this go?

12:22.160 --> 12:23.680
Here's an example.

12:23.680 --> 12:25.670
Here's a community
of shepherds.

12:25.670 --> 12:26.910
They all look like Gyges.

12:26.910 --> 12:30.800
And there they all are
with their horses.

12:30.800 --> 12:41.440
In comes our standard bad guy
and steals two of the horses.

12:41.440 --> 12:45.150
The police come looking
for him.

12:45.150 --> 12:47.200
And he leaves town.

12:47.200 --> 12:51.690
So there's no one to punish
for stealing the horses.

12:51.690 --> 12:58.480
Poor Jim, unlucky, shows up
just at the time that the

12:58.480 --> 13:02.480
police have come looking
for a perpetrator.

13:02.480 --> 13:08.150
And the police realize that a
very effective way to prevent

13:08.150 --> 13:13.730
the stealing of future horses,
perhaps, would be to put Jim

13:13.730 --> 13:17.340
in jail and write a big
newspaper article about it:

13:17.335 --> 13:18.405
"Horse Thief Captured!

13:18.410 --> 13:24.260
Sentenced to Life in Prison."
That is, the deterrence effect

13:24.260 --> 13:32.650
that is demanded of punishment
seems, at least in principle,

13:32.650 --> 13:36.370
as if in certain cases it could
be achieved equally

13:36.370 --> 13:40.970
effectively by bringing
punishment, or something like

13:40.970 --> 13:46.840
punishment, to bear on an
individual who is not in fact

13:46.840 --> 13:49.230
the one who perpetrated
the crime.

13:49.230 --> 13:54.550
If the goal is deterrence, and
if the hypothesis that

13:54.550 --> 14:00.340
publicizing punishment is what
produces deterrence is correct, then it

14:00.335 --> 14:04.875
seems as if there aren't
resources in the conceptual

14:04.880 --> 14:09.570
repertoire of the
consequentialists for denying

14:09.570 --> 14:12.010
telishment.

14:12.010 --> 14:16.440
As with the standard cases of
the surgeon cutting up the

14:16.440 --> 14:19.930
patient, there are more
complicated moves that the

14:19.930 --> 14:21.830
consequentialists can make.

14:21.830 --> 14:25.330
And the suggestion is not that
it's impossible within a

14:25.330 --> 14:28.560
consequentialist framework to
rule out something like

14:28.560 --> 14:29.800
telishment.

14:29.800 --> 14:35.250
The thought is just that it's
important to be clear what

14:35.250 --> 14:39.310
resources are available to you
if this is the justification

14:39.310 --> 14:40.780
to which you're appealing.

14:40.780 --> 14:45.190
And to recognize that, at
least prima facie, the

14:45.190 --> 14:48.860
consequentialist justification
seems both to under-generate

14:48.860 --> 14:54.250
and over-generate things that
look like punishment in cases

14:54.250 --> 14:59.570
where, at least as encoded in
our inherited legal statutes,

14:59.570 --> 15:03.160
this violates the norms to which
people feel themselves

15:03.160 --> 15:04.730
intuitively drawn.

15:04.730 --> 15:07.670
So how might we get around
this problem?

15:07.670 --> 15:10.550
Remember there were problems,
it seemed, with the

15:10.550 --> 15:14.560
retributivist picture, and
problems it seems with the

15:14.560 --> 15:16.970
consequentialist picture.

15:16.970 --> 15:21.680
One of the standard ways of
getting around this is to

15:21.680 --> 15:25.910
introduce what might be called
a two-level theory.

15:25.910 --> 15:31.000
And we read an example of one of
the most sophisticated and

15:31.000 --> 15:33.930
influential two-level
theories in our

15:33.930 --> 15:36.300
readings for last class.

15:36.300 --> 15:40.170
So John Rawls, from whom we'll
hear again in a couple of

15:40.170 --> 15:43.090
weeks, the political
philosopher, wrote in the

15:43.090 --> 15:47.760
1950s a famous paper called "Two
Concepts of Rules," in

15:47.760 --> 15:53.200
which one of the things that
he analyzes is punishment.

15:53.200 --> 15:58.090
And what he suggests is that we
think about the question of

15:58.090 --> 16:01.600
the justification of punishment
in a more

16:01.600 --> 16:03.940
complicated way.

16:03.940 --> 16:08.450
That we think first about what
it is that justifies the

16:08.445 --> 16:13.425
practice itself: What makes it
legitimate for a society to

16:13.430 --> 16:15.910
have punishment?

16:15.910 --> 16:19.540
And the suggestion is that what
justifies the practice

16:19.540 --> 16:24.590
itself is that having such a
practice in place is something

16:24.590 --> 16:26.910
of societal utility.

16:26.910 --> 16:32.610
But that once we have set up the
practice, what justifies

16:32.610 --> 16:38.330
particular actions within the
practice is retribution.

16:38.332 --> 16:41.942
And he suggests, later in the
paper, that we think of this

16:41.940 --> 16:45.860
distinction between justifying
a practice and justifying

16:45.859 --> 16:50.249
actions within a practice, on
analogy with something like

16:50.250 --> 16:53.480
the rules of baseball.

16:53.480 --> 16:59.460
So when we set up the rules of
baseball, we can have a debate

16:59.460 --> 17:03.260
about whether baseball would be
a better game if there were

17:03.260 --> 17:05.920
three strikes or four strikes.

17:05.920 --> 17:09.100
Whether baseball would be a
better game if stealing bases

17:09.100 --> 17:11.090
is allowed or disallowed.

17:11.090 --> 17:16.360
Whether baseball would be a
better game if pitchers

17:16.360 --> 17:19.490
are also treated as batters.

17:19.490 --> 17:23.870
So that's the debate about the
nature of the practice.

17:23.870 --> 17:26.670
And you might think when we're
debating the nature of the

17:26.670 --> 17:31.600
practice, one sort of
consideration comes into play.

17:31.600 --> 17:36.320
But once we have the practice
in place, we don't have

17:36.320 --> 17:40.270
debates in the context of a game
about whether it would be

17:40.270 --> 17:44.760
better in that particular
instance for a batter to get

17:44.760 --> 17:46.340
four strikes.

17:46.340 --> 17:49.310
Or whether it would be better
in that particular instance

17:49.310 --> 17:53.440
for a stolen base to count
or not count as a way of

17:53.440 --> 17:56.090
advancing across the bases.

17:56.090 --> 17:59.970
So we can distinguish between
what it is that sets up our

17:59.970 --> 18:04.350
practice, and what it is that
happens once our practice has

18:04.350 --> 18:06.050
been set up.

18:06.050 --> 18:09.940
And Rawls' suggestion is that
what justifies the practice in

18:09.940 --> 18:14.060
the case of punishment is a
general picture of utility,

18:14.060 --> 18:18.460
whereas what justifies acts
within the practice is

18:18.460 --> 18:20.940
something like retribution.

18:20.940 --> 18:25.790
And the suggestion is that this
manages simultaneously to

18:25.790 --> 18:31.220
resolve two problems. It
resolves the problem of

18:31.220 --> 18:34.840
under-generation, in some sense,
because if the general

18:34.840 --> 18:37.340
practice isn't useful, then the

18:37.340 --> 18:39.890
practice will be abandoned.

18:39.886 --> 18:43.406
And it resolves more clearly
the problem of

18:43.410 --> 18:47.390
over-generation, the problem
of telishment, because it's

18:47.390 --> 18:51.700
ruled out as an act within
the practice.

18:51.700 --> 18:55.040
We've set up the rules of what
punishment involves.

18:55.040 --> 18:58.050
Those are justified on
utilitarian grounds.

18:58.050 --> 19:01.980
But within the practice we can't
do things like put Jim

19:01.980 --> 19:06.680
in jail as a way of deterring,
because that's prohibited

19:06.680 --> 19:09.010
within the practice.

19:09.010 --> 19:13.400
So it might seem that the
pluralist challenge, or the

19:13.400 --> 19:17.000
pluralist solution, to the
dilemma of how punishment

19:17.000 --> 19:20.180
might be justified, either
retributively or on

19:20.180 --> 19:24.920
consequentialist grounds, solves
the problem with which

19:24.920 --> 19:26.260
we were concerned.

19:26.260 --> 19:29.740
Which is, how can the state ever
be justified in bringing

19:29.740 --> 19:32.740
about harm to its citizens?

19:32.740 --> 19:37.410
But there are, I think, three
questions that can be raised

19:37.410 --> 19:40.360
even with regard to
this solution.

19:40.360 --> 19:44.140
So we might ask whether utility
alone, without appeal

19:44.140 --> 19:49.030
to retribution, can really
serve as the sole

19:49.030 --> 19:53.490
justification of the practice,
or whether in fact the

19:53.490 --> 19:57.230
under-generation problem is just
going to re-emerge at the

19:57.230 --> 19:59.840
level of justification
of the practice.

19:59.840 --> 20:03.860
We can ask whether retribution
alone, without appeal to

20:03.860 --> 20:08.970
utility, can serve to justify
actions within the practice,

20:08.970 --> 20:12.370
or whether capturing what it
is that we

20:12.370 --> 20:15.340
want punishment to do, even
once we've established the

20:15.340 --> 20:18.880
practice, ultimately requires
some appeal to utility.

20:18.880 --> 20:20.020
And I'll talk about
the psychology

20:20.020 --> 20:22.230
of this in a minute.

20:22.230 --> 20:24.870
And finally we might ask
whether, given the

20:24.870 --> 20:28.220
distinctness of the two
levels, we've provided

20:28.220 --> 20:32.420
anything like a coherent account
of what justifies

20:32.420 --> 20:33.340
punishment.

20:33.340 --> 20:38.200
If one sort of reason governs
the practice, and another sort

20:38.200 --> 20:42.900
of reason governs the
application of the practice,

20:42.900 --> 20:46.440
then even if each of them
independently is able to do

20:46.440 --> 20:49.940
that work, in fact, especially
if each of them independently

20:49.940 --> 20:53.390
is able to do that work, one
might wonder how the two

20:53.390 --> 20:56.950
together provide a
coherent account.

20:56.950 --> 21:00.810
So that closes what I want
to say about the general

21:00.810 --> 21:03.980
philosophical issues underlying

21:03.980 --> 21:05.990
punishment as a practice.

21:05.990 --> 21:09.430
We thought through two of the
standard justifications, and

21:09.430 --> 21:12.140
then a third which attempts
to reconcile them.

21:12.140 --> 21:15.800
And let me again remind you that
part of what we're doing

21:15.800 --> 21:18.890
here is thinking more generally
about moral

21:18.890 --> 21:20.130
justification.

21:20.130 --> 21:22.780
So all of the arguments that
we've considered in the

21:22.780 --> 21:27.170
context of punishment have
direct analogs in the positive

21:27.170 --> 21:28.840
mirror image of it.

21:28.840 --> 21:33.250
What I want to do in the last
half of lecture is to talk

21:33.250 --> 21:35.420
about three additional
questions.

21:35.420 --> 21:38.100
These are the readings that
we did for today.

21:38.100 --> 21:40.320
I want to talk about the
psychology and the

21:40.320 --> 21:46.880
psychological constraints that
seem to constrain any picture

21:46.880 --> 21:49.550
of punishment that we're going
to have that's going to

21:49.550 --> 21:53.150
accord with how it is that it
appears people intuitively

21:53.150 --> 21:57.540
respond to particular instances
of norm violation.

21:57.540 --> 22:00.000
I want to connect what we've
talked about in the context of

22:00.000 --> 22:02.860
punishment with the issues that
we talked about around

22:02.860 --> 22:06.920
luck at the end of our classes
before March break.

22:06.920 --> 22:10.020
And finally, I want to bring us
back to some Aristotelian

22:10.020 --> 22:13.660
themes about virtuous character
by talking about

22:13.660 --> 22:16.080
punishment and parenting.

22:16.076 --> 22:21.486
So there's been, in the last
thirty years or so, a vast

22:21.490 --> 22:26.320
body of empirical research
conducted by a number of

22:26.320 --> 22:30.220
extremely sophisticated social
psychologists who have looked

22:30.220 --> 22:34.950
both at people's responses to
hypothetical cases and at the

22:34.950 --> 22:39.770
legal codes of numerous
societies in an attempt to get

22:39.770 --> 22:44.090
not at the normative question,
what should justify

22:44.090 --> 22:48.430
punishment, but at an answer to
the descriptive question,

22:48.430 --> 22:54.700
what human psychological need
does punishment address.

22:54.700 --> 22:57.950
And in exploring these
questions, John Darley--

22:57.950 --> 23:00.470
from whom we've read selections
at a number of

23:00.470 --> 23:03.170
points in the course, including
for today--

23:03.170 --> 23:06.930
makes appeal in his discussions
to the very

23:06.930 --> 23:09.880
distinction that we've been
talking about with respect to

23:09.880 --> 23:11.790
the normative question.

23:11.790 --> 23:17.320
So, in looking to see what
factors affect people's decisions

23:17.320 --> 23:21.520
about punishment, you can
distinguish whether what

23:21.520 --> 23:25.190
they seem to have their
attention directed to are

23:25.190 --> 23:29.630
questions of things like just
desert and retribution.

23:29.630 --> 23:31.860
That is questions--

23:31.860 --> 23:37.690
does it affect how likely or
how severe their punishment

23:37.690 --> 23:42.490
will be, if it looks like the
individual in question

23:42.490 --> 23:46.620
intended to do a great amount of
harm, regardless of whether

23:46.620 --> 23:49.670
he or she succeeded at it?

23:49.670 --> 23:54.540
Does it affect people's
assessment of the bringing about

23:54.540 --> 23:59.500
of harm, if the individual, for
example, stole money to

23:59.500 --> 24:03.420
give to a charity, as opposed
to stole money to

24:03.420 --> 24:06.500
buy herself a Ferrari?

24:06.500 --> 24:09.790
If questions about what
was going on in the

24:09.790 --> 24:11.800
individual's mind--

24:11.800 --> 24:16.050
did the individual intend to
bring about the harm or not --

24:16.050 --> 24:21.680
if questions about what goals
the person had with respect to

24:21.680 --> 24:23.420
the proceeds of the crime--

24:23.420 --> 24:25.890
did they want to use them for
something prosocial or

24:25.890 --> 24:28.230
something antisocial-- make a difference,

24:28.230 --> 24:33.440
then it looks like one of the
kinds of considerations that's

24:33.440 --> 24:37.330
coming into play when people
reason about punishment is

24:37.330 --> 24:40.060
retribution.

24:40.060 --> 24:43.680
By contrast, if when people are
thinking about what kind

24:43.680 --> 24:49.170
of punishment to impose, what
they look at are things like,

24:49.170 --> 24:54.390
how likely is a crime like this
one to be detected, or

24:54.390 --> 24:59.760
how public is the act of
punishment going to be, then

24:59.760 --> 25:03.770
it seems like the underlying
psychological mechanisms

25:03.770 --> 25:08.760
behind punishment are ones that
are primarily sensitive

25:08.760 --> 25:11.370
to consequentialist
constraints.

25:11.370 --> 25:17.720
So in a series of studies over
the last several decades,

25:17.720 --> 25:22.830
psychologists have asked the
question, to what sort of

25:22.830 --> 25:27.470
variations are punishment
judgments sensitive?

25:27.470 --> 25:31.400
In general, when people are
assigning punishment, when

25:31.400 --> 25:34.440
people are assigning severity,
when people are deciding

25:34.440 --> 25:38.890
whether to punish or not, do
they, if given the chance to

25:38.890 --> 25:42.780
look for information, want
information, for example,

25:42.780 --> 25:46.770
about the criminal's state of
mind or about the criminal's

25:46.770 --> 25:48.330
motivation?

25:48.330 --> 25:52.660
Or do they want information
about how widely publicized

25:52.660 --> 25:57.080
the punishment is going to be
or how likely crimes of this

25:57.080 --> 26:00.670
sort are to be detected?

26:00.670 --> 26:05.010
And it turns out fairly
consistently that it's

26:05.010 --> 26:10.030
considerations of the first type
that seem to be driving

26:10.030 --> 26:12.550
people's responses.

26:12.550 --> 26:18.370
That is, that a utility calculus
doesn't seem to be

26:18.370 --> 26:24.110
the primary motivation when
the responses of individuals

26:24.110 --> 26:29.410
surveyed in psychology studies,
or penal codes, are looked at as

26:29.410 --> 26:32.670
indications of the
psychological mechanisms that

26:32.670 --> 26:34.680
underlie punishment.

26:34.680 --> 26:39.290
And we see further evidence in
favor of this hypothesis if we

26:39.290 --> 26:44.030
consider a phenomenon that's
sometimes known as altruistic

26:44.030 --> 26:47.450
or costly third-party
punishment.

26:47.450 --> 26:52.140
So these are cases where an
individual, A, punishes--

26:52.140 --> 26:56.170
that is brings about harms
and costs to--

26:56.170 --> 27:01.550
another individual, in a way
that, first of all, brings a

27:01.550 --> 27:08.630
cost to A, brings no direct
benefits to A, and concerns a

27:08.630 --> 27:13.510
norm violation that didn't
affect A in the first place.

27:13.510 --> 27:19.270
So for example, I'm standing in
a long line waiting to get

27:19.270 --> 27:26.630
my iPad 2, and a person cuts
in line behind me.

27:26.630 --> 27:29.210
Right-- so, he cuts
in line behind me.

27:29.210 --> 27:33.280
It doesn't affect my
access to the iPad.

27:33.280 --> 27:39.130
I might punish him by inviting
everybody that stands behind

27:39.130 --> 27:45.190
him to cut in front
of both of us.

27:45.190 --> 27:47.930
That makes me worse
off, right?

27:47.930 --> 27:52.090
I've just let everybody behind
me in line cut in front of me.

27:52.090 --> 27:56.060
It doesn't directly benefit
me in any way.

27:56.060 --> 28:00.800
And the thing for which I
am causing a harm to the

28:00.800 --> 28:05.020
individual behind me is not
something that concerned a

28:05.020 --> 28:07.280
norm violation directed at me.

28:07.280 --> 28:10.310
He cut in line behind me.

28:10.310 --> 28:15.000
And we see instance after
instance of behaviors with

28:15.000 --> 28:18.430
this structure in the
behavioral economics

28:18.430 --> 28:19.680
literature.

28:19.680 --> 28:22.690
It appears for whatever
reason.

28:22.686 --> 28:26.426
Perhaps because this is a way of
promoting a certain kind of

28:26.430 --> 28:29.110
evolutionarily stable
prosociality.

28:29.110 --> 28:34.720
Perhaps because norm violations
tap into heuristics

28:34.720 --> 28:37.100
that cause people to act
in certain ways.

28:37.100 --> 28:39.360
Perhaps for some third reason.

28:39.360 --> 28:44.870
It appears that in numerous,
both laboratory and public

28:44.870 --> 28:51.210
settings, individuals engage in
punitive acts that have the

28:51.210 --> 28:53.700
structure articulated above.

28:53.704 --> 28:57.364
Consequentialism, at least
in its simple form,

28:57.360 --> 29:00.230
can't explain that.

29:00.226 --> 29:05.436
Finally, state of mind seems to
play a large role in how it

29:05.440 --> 29:09.400
is that people respond to
situations that potentially

29:09.400 --> 29:11.250
involve punishment.

29:11.250 --> 29:15.250
So suppose poor Jim comes over
to my house, and hidden behind

29:15.250 --> 29:20.760
the door is my prize umbrella.

29:20.760 --> 29:25.580
And Jim knocks on my door, and
I say to Jim, "Come in." And

29:25.580 --> 29:30.160
Jim walks in the door and knocks
over my prize umbrella

29:30.160 --> 29:33.850
in a way that causes him to
stamp through it and poke a

29:33.850 --> 29:35.280
hole in my umbrella.

29:35.280 --> 29:42.660
And he says, "Oh no, your prize
umbrella." Case one.

29:42.660 --> 29:45.500
Case two, Jim shows
up at my house.

29:45.500 --> 29:49.560
And I say to him as he knocks
on the door, "Jim, Jim look

29:49.560 --> 29:52.480
out for my umbrella as you come
in!" And Jim opens the

29:52.480 --> 29:56.270
door and stomps on my prize
umbrella, putting a hole in

29:56.270 --> 30:03.450
it, and says, "Oh darn, your
umbrella!" Case number three,

30:03.450 --> 30:05.700
Jim knocks on my door.

30:05.700 --> 30:10.400
I say to him, "Jim, look out
for my umbrella!" Jim opens

30:10.400 --> 30:16.290
the door, stomps on my prize
umbrella, and looking down at

30:16.290 --> 30:19.330
his footwork says, "Yeah!

30:19.330 --> 30:22.790
I smashed your umbrella!"

30:22.790 --> 30:30.050
What sort of responses do
these three cases evoke?

30:30.050 --> 30:34.210
Here's our famous Bad-o-meter
that you will recall from our

30:34.210 --> 30:38.110
various trolley and
other cases.

30:38.110 --> 30:41.010
When Jim accidentally stomps
on my umbrella--

30:41.010 --> 30:44.490
the first case where I said,
"come in Jim," and he stomped

30:44.490 --> 30:48.830
on it and said, "Oh no, your
umbrella!" There's harm, but

30:48.830 --> 30:52.480
it's treated as pretty
low-level.

30:52.480 --> 30:56.440
When I say to Jim, "Look out,
my umbrella!" And Jim

30:56.440 --> 30:58.500
nonetheless opens the door and
stomps on it and says, "Oh

30:58.500 --> 31:03.280
darn, your umbrella!" he's
done something negligent.

31:03.276 --> 31:07.756
And there's a sense, on the part
of most subjects, that

31:07.760 --> 31:10.700
something slightly worse
has occurred in

31:10.700 --> 31:13.310
the general bad direction.

31:13.310 --> 31:16.940
And when Jim comes into my
house and stomps on my

31:16.940 --> 31:21.130
umbrella and proudly looks down
at what he's done, having

31:21.130 --> 31:25.210
done so, it seems,
intentionally, there is, on

31:25.210 --> 31:28.030
the part of most subjects,
a sense that

31:28.030 --> 31:31.120
something worse has happened.

31:31.116 --> 31:33.796
Consequentialism alone
can't explain this.

31:33.800 --> 31:39.410
In all three cases I've got my
poor stomped-on umbrella.

31:39.410 --> 31:44.700
What can explain this is a
response that individuals seem

31:44.700 --> 31:48.420
pretty persistently to have
across cultures, which is

31:48.420 --> 31:52.110
sometimes called the phenomenon
of moral outrage.

31:52.110 --> 31:55.790
In the accident case, there's
no moral outrage at all.

31:55.790 --> 31:59.620
Nobody thinks, that was morally
outrageous of Jim to

31:59.620 --> 32:01.890
step on my umbrella, given that
he didn't know it was

32:01.890 --> 32:05.330
behind the door and had just
come in in response to my

32:05.330 --> 32:07.730
request that he enter.

32:07.730 --> 32:11.020
In the negligent case, there's
some irritation, there's some

32:11.020 --> 32:14.710
moral outrage, there's a
feeling: "Jim, I told you my

32:14.710 --> 32:17.610
umbrella was there." But
the moral outrage

32:17.610 --> 32:19.910
is relatively low.

32:19.910 --> 32:24.380
But in the case where Jim did
something that strikes us as

32:24.380 --> 32:29.750
having been an intentional
bringing about of harm, in

32:29.750 --> 32:33.450
direct violation of my request
that he not do so, "Jim, look

32:33.450 --> 32:36.070
out for my umbrella!"--
there's a response

32:36.070 --> 32:39.260
of high moral outrage.

32:39.262 --> 32:43.472
And what Carlsmith and Darley
suggest, in the article that

32:43.470 --> 32:48.620
we read for today, is that these
attitudes of either no

32:48.620 --> 32:53.100
moral outrage, low moral
outrage, or high moral outrage

32:53.100 --> 32:56.890
direct our attention
to different

32:56.890 --> 33:00.410
parties in the episode.

33:00.410 --> 33:04.570
When there is an event that's
happened that doesn't produce

33:04.570 --> 33:09.080
moral outrage at all, there is,
except with respect to a

33:09.080 --> 33:12.850
general desire to make the
world a better place, no

33:12.850 --> 33:18.120
attention either to the victim
or to the perpetrator.

33:18.120 --> 33:19.590
Bad stuff happens.

33:19.590 --> 33:21.840
We might go about trying
to change the world.

33:21.840 --> 33:24.680
If there's an extra supply of
umbrellas in my car, I might

33:24.680 --> 33:26.620
bring one of them in.

33:26.620 --> 33:31.310
But in general, when things
happen by accident the

33:31.310 --> 33:35.100
perpetrator isn't a focus
of attention.

33:35.100 --> 33:38.540
And the victim is no more a
focus of attention than he or

33:38.540 --> 33:42.160
she would be if this had been
brought about as the result of

33:42.160 --> 33:46.000
a gust of wind having destroyed
the umbrella.

33:46.000 --> 33:51.260
In cases of low moral outrage,
attention is

33:51.260 --> 33:53.920
directed to the victim.

33:53.920 --> 34:00.420
There's a tendency to focus on
compensation, on the need to

34:00.420 --> 34:03.360
make things better.

34:03.360 --> 34:09.340
But there's no focus on the
perpetrator as someone in need

34:09.340 --> 34:13.150
of our focused punitive
attention.

34:13.150 --> 34:14.350
But in the case--

34:14.350 --> 34:17.070
and in a moment I'm going to ask
you to take out your clickers

34:17.065 --> 34:20.065
because we're going to talk
about intentional action.

34:20.070 --> 34:25.360
In the case where the act that
violates the moral norm is

34:25.362 --> 34:29.802
seen as having been intentional,
there is a

34:29.800 --> 34:36.050
tendency to focus not merely on
compensation, but also on

34:36.050 --> 34:37.560
punishment.
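
The attention pattern just described can be summarized as a small lookup table. This is only a sketch of the lecture's summary; the level labels and function name are my own shorthand, not Carlsmith and Darley's terminology.

```python
# Shorthand summary of the pattern described above: which parties become
# the focus of attention at each level of moral outrage.
# (Labels are my own, not Carlsmith and Darley's terminology.)
ATTENTION_BY_OUTRAGE = {
    "none": [],                         # accident: neither party is a focus
    "low":  ["victim"],                 # negligence: compensate the victim
    "high": ["victim", "perpetrator"],  # intentional: compensation and punishment
}

def focus_of_attention(outrage: str) -> list[str]:
    """Return the parties attended to at a given level of moral outrage."""
    return ATTENTION_BY_OUTRAGE[outrage]

print(focus_of_attention("high"))  # → ['victim', 'perpetrator']
```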

34:37.560 --> 34:41.260
This is an extraordinarily
resilient pattern, and one

34:41.255 --> 34:45.075
that seems very interesting
psychologically if we're

34:45.080 --> 34:50.250
trying to come up with moral
codes that will strike

34:50.250 --> 34:58.050
evolved human beings
as satisfactory.

34:58.050 --> 35:02.980
But one problem with this
picture is that it turns out

35:02.980 --> 35:08.130
that determining whether an
action was done intentionally

35:08.130 --> 35:10.900
may well be more complicated
than it

35:10.900 --> 35:13.500
had initially appeared.

35:13.500 --> 35:17.530
So my colleague, and your fellow
Yale professor, Joshua

35:17.525 --> 35:22.615
Knobe, has done a wonderful
series of studies on the

35:22.620 --> 35:28.120
question of what it is that
leads people to think of an

35:28.120 --> 35:32.390
action as having been
performed intentionally.

35:32.390 --> 35:34.650
And we're doing the study in
such a way that all of you are

35:34.650 --> 35:35.810
going to get both vignettes.

35:35.810 --> 35:39.300
So we won't get exactly the
distribution that's indicative

35:39.300 --> 35:40.420
of typical responses.

35:40.420 --> 35:43.700
But I think it's nonetheless
interesting for you to think

35:43.700 --> 35:44.350
through the cases.

35:44.350 --> 35:48.050
So how many of you have seen
these cases before?

35:48.050 --> 35:48.980
Just hands.

35:48.980 --> 35:52.040
OK, so about 5% to 10% of you.

35:52.040 --> 35:55.500
So Josh Knobe presents his
subjects with scenarios,

35:55.500 --> 35:57.460
vignettes like the following.

35:57.460 --> 35:59.530
The vice president of a company
went to the chairman

35:59.530 --> 36:00.840
of the board and said,
"We're thinking of

36:00.840 --> 36:02.580
starting a new program.

36:02.580 --> 36:05.660
It will help us increase
profits, but it will also harm

36:05.660 --> 36:09.370
the environment." The chairman
of the board answered, "I

36:09.370 --> 36:11.370
don't care at all about harming
the environment.

36:11.370 --> 36:14.420
I just want to make as
much profit as I can.

36:14.420 --> 36:17.850
Let's start the new program."
They started the new program.

36:17.850 --> 36:21.610
Sure enough, the environment
was harmed.

36:21.610 --> 36:26.490
The question is this: in
starting that program--

36:26.490 --> 36:29.590
"I don't care at all about
harming the environment, I

36:29.590 --> 36:32.100
just care about making
a profit" --

36:32.100 --> 36:36.170
is it the case that the
chairman harmed the

36:36.170 --> 36:38.490
environment intentionally?

36:38.490 --> 36:42.570
Did the chairman one, yes harm
the environment intentionally,

36:42.570 --> 36:46.090
two, no, did not harm the
environment intentionally?

36:46.090 --> 36:46.680
Huh?

36:46.680 --> 36:47.950
People didn't really have
their clickers out.

36:47.950 --> 36:49.350
There's only 70 of you.

36:49.350 --> 36:50.900
OK, let's see how it came out.

36:50.900 --> 36:55.460
So 77% of you think that he
harmed the environment

36:55.460 --> 36:58.010
intentionally.

36:58.010 --> 37:01.610
And we'll now try
the second case.

37:01.610 --> 37:04.450
Second case-- try to forget that
you just had the first. I

37:04.450 --> 37:07.460
know you can't do that, but
try to do it anyway.

37:07.460 --> 37:09.240
The vice president of a company
went to the chairman of

37:09.242 --> 37:10.512
the board and said "We're
thinking of

37:10.510 --> 37:12.420
starting a new program.

37:12.420 --> 37:14.290
It'll help increase profits.

37:14.290 --> 37:18.260
But it will also help the
environment." The chairman of

37:18.260 --> 37:21.020
the board answered, "I don't
care at all about helping the

37:21.020 --> 37:21.930
environment.

37:21.930 --> 37:24.880
I just want to make as
much profit as I can.

37:24.880 --> 37:27.860
Let's start the new program."
They started the new program.

37:27.860 --> 37:30.420
Sure enough, the environment
was helped.

37:30.420 --> 37:33.920
So the chairman says, "I don't
care at all about helping the

37:33.920 --> 37:34.810
environment.

37:34.810 --> 37:39.230
I just want to make a profit."
Question, did the chairman

37:39.230 --> 37:41.420
help the environment
intentionally?

37:41.420 --> 37:43.480
One, if yes.

37:43.480 --> 37:45.620
Two, if no.

37:45.620 --> 37:50.690
And let's see how the
numbers come out.

37:50.690 --> 37:51.350
Wow!

37:51.350 --> 37:52.700
You guys are amazing!

37:52.700 --> 37:56.730
All right, you came in even
cleaner than standard Knobe

37:56.730 --> 37:59.220
results, even though I gave
you the two cases

37:59.220 --> 38:01.360
sequentially, which should've
mitigated the effect.

38:01.360 --> 38:02.110
But there we go.

38:02.110 --> 38:05.960
So classically, when you present
this to people, in the

38:05.960 --> 38:10.600
first scenario where you guys
gave 77% yes he harmed it

38:10.600 --> 38:15.430
intentionally, roughly 83% of
people say that he did.

38:15.430 --> 38:17.880
Whereas, in the second case,
where you guys gave this

38:17.880 --> 38:21.600
amazing 89% that he didn't
help the environment

38:21.600 --> 38:25.590
intentionally, typically
this number is 23%.

38:25.590 --> 38:28.160
So that one's 67--

38:28.160 --> 38:29.450
sorry, 77.

38:29.450 --> 38:31.800
OK, now what's going on here?

38:31.795 --> 38:34.085
Let me remind you what
just happened.

38:34.090 --> 38:37.890
We had two identical
scenarios.

38:37.890 --> 38:42.700
The only difference is that we
changed the term "help" here

38:42.700 --> 38:46.140
to "harm" in the first case.

38:46.140 --> 38:53.320
In both cases, what the chairman
said is, "I don't care about

38:53.315 --> 38:55.505
the thing you think I did
intentionally in the first

38:55.510 --> 38:58.240
case and the thing you didn't
think I did intentionally in

38:58.240 --> 38:59.340
the second.

38:59.340 --> 39:04.820
I just care about making a
profit." So something that he

39:04.820 --> 39:11.590
was ignoring in the first case,
most of you thought is

39:11.590 --> 39:14.210
something he was intending
to do.

39:14.210 --> 39:18.760
Whereas something he was
ignoring in the second case,

39:18.760 --> 39:24.310
almost none of you thought
he was intending to do.

39:24.310 --> 39:31.220
That's perplexing if what's
going on here is that

39:31.220 --> 39:36.350
punishment is supposed to be
tracking intentional action.

39:36.350 --> 39:40.440
Because it looks like what kind
of action we take to be

39:40.440 --> 39:47.020
intentional is in some way
confounded with what sorts of

39:47.020 --> 39:51.360
actions we take to be
morally problematic.

39:51.360 --> 39:54.390
So that makes the whole question
of how we ought to

39:54.390 --> 39:58.570
think about punishment even
more complicated than it

39:58.570 --> 40:01.410
struck us already.

40:01.410 --> 40:06.490
But let's set that aside
temporarily and think about

40:06.490 --> 40:10.150
what implications there are from
the psychological results

40:10.150 --> 40:13.350
that we've just been studying.

40:13.350 --> 40:18.930
On the one hand, it looks like,
psychologically, paying

40:18.930 --> 40:23.260
attention to backward-looking
or retributivist reasons for

40:23.260 --> 40:29.260
punishment, to the rebalancing
of the scales of justice, is

40:29.260 --> 40:34.200
psychologically required for a
theory to feel satisfying.

40:34.200 --> 40:38.430
Or at the very least, that
simply looking forward and

40:38.430 --> 40:40.030
thinking about consequentialist

40:40.030 --> 40:44.320
considerations and utility
doesn't seem psychologically

40:44.320 --> 40:46.180
sufficient.

40:46.180 --> 40:50.950
So the question is whether,
taking human psychology

40:50.950 --> 40:55.110
seriously, there's an additional
alternative.

40:55.110 --> 41:00.590
And a number of psychologists
and philosophers have asked

41:00.590 --> 41:04.690
the question whether, if we
look neither backwards nor

41:04.690 --> 41:08.340
forwards, towards the
performance of the crime as

41:08.340 --> 41:10.970
something that demands a
rebalancing of the scales of

41:10.970 --> 41:14.940
justice, or towards the utility
which can be brought

41:14.940 --> 41:19.210
to society, but rather at the
things that we were calling

41:19.210 --> 41:22.470
three and four in our initial
characterization, at

41:22.470 --> 41:27.790
restitution and rehabilitation
as our motivations, we can

41:27.790 --> 41:32.000
somehow get a psychologically
satisfying account of

41:32.000 --> 41:37.780
punishment, where the goal is
not rebalancing or utility,

41:37.780 --> 41:42.200
but rather some sort
of reparation?

41:42.200 --> 41:44.160
This is sometimes called
restorative

41:44.160 --> 41:46.050
or reparative justice.

41:46.050 --> 41:49.590
And those of you who are
intrigued by it, I'm pointing

41:49.590 --> 41:52.420
you to a location where
you can read

41:52.420 --> 41:54.580
articles about that question.

41:54.580 --> 41:58.660
OK, so that closes the first
part of our discussion, the

41:58.660 --> 42:01.320
psychological implications
of punishment.

42:01.320 --> 42:04.070
And what I want to do in the
last five minutes of lecture

42:04.070 --> 42:06.930
is to talk about the interaction
with luck.

42:06.930 --> 42:09.910
And then we'll begin lecture
next Tuesday by talking about

42:09.910 --> 42:12.810
the connection in the context
of parenting.

42:12.810 --> 42:16.220
So we've been thinking
throughout this unit about

42:16.220 --> 42:20.380
cases like this, where luck
seems to play a role in

42:20.380 --> 42:21.550
consequences.

42:21.550 --> 42:25.990
When the person didn't
intend to do harm.

42:25.990 --> 42:29.250
Unlucky Cell Phone and
Unlucky Alert had no

42:29.250 --> 42:31.670
desire to harm the child.

42:31.670 --> 42:36.460
But of course, luck can play a
role in cases where somebody

42:36.460 --> 42:39.820
is deliberately trying
to bring about harm.

42:39.820 --> 42:45.160
So here's our standard bad
guy, who shows up in town

42:45.160 --> 42:48.240
where there's a potential
victim for him.

42:48.242 --> 42:52.512
And he pulls out his gun and
says, "Oh yeah, I'm gonna

42:52.510 --> 42:57.120
shoot that guy." And luck
is on his side.

42:57.120 --> 43:00.540
His intention plays out
in the way that he

43:00.540 --> 43:04.700
intended and he succeeds.

43:04.700 --> 43:12.130
His perfect analog, Unlucky
Shooter, shows up in town and

43:12.130 --> 43:18.280
says, "Oh yeah, I'm gonna shoot
that guy." But when he

43:18.280 --> 43:22.230
pulls the trigger,
unfortunately, just at the

43:22.230 --> 43:26.020
moment that his bullet would
have hit his intended victim,

43:26.020 --> 43:30.420
up comes the truck that was in
our last story, and stops the

43:30.420 --> 43:32.140
bullet in its tracks.

43:32.140 --> 43:37.600
So he doesn't succeed at
his intended crime.

43:37.600 --> 43:45.780
Notice that this is the mirror
analog to the previous case.

43:45.780 --> 43:49.830
The intention in the two
cases is the same.

43:49.826 --> 43:52.686
The action in the two
cases is the same.

43:52.690 --> 43:57.630
But in the first case the
criminal has been lucky with

43:57.630 --> 43:59.290
respect to his intended goals.

43:59.290 --> 44:01.540
He shot the guy that
he tried to shoot.

44:01.540 --> 44:06.420
Whereas in the second case the
criminal has been unlucky with

44:06.420 --> 44:07.440
respect to his intended goals.

44:07.440 --> 44:12.410
He didn't manage to
shoot his victim.

44:12.410 --> 44:18.910
The question is why we
punish this one more

44:18.910 --> 44:22.570
seriously than this one.

44:22.570 --> 44:30.520
What sort of justification could
possibly underlie this?

44:30.520 --> 44:35.860
And David Lewis, in the rather
complicated, I admit, paper

44:35.860 --> 44:38.910
that I had you read for today--
and I promise we will talk

44:38.910 --> 44:42.490
about it in sections to help you
understand what the logic of

44:42.490 --> 44:49.010
the argument is--
suggests that the only

44:49.010 --> 44:54.830
thing that could justify
treating this act as more

44:54.830 --> 45:00.730
punishment worthy than this one,
is if we think it would

45:00.730 --> 45:07.080
be all right to impose what
he calls a penal lottery.

45:07.080 --> 45:13.540
So what he says is, if we knew
objectively that when you

45:13.540 --> 45:16.950
engage in a shooting action
where you pull a trigger

45:16.950 --> 45:21.610
there's a one in ten chance that
your action will fail and

45:21.610 --> 45:27.330
a nine in ten chance that your
action will succeed, then

45:27.330 --> 45:32.320
punishing you is fundamentally,
in our current

45:32.320 --> 45:37.480
system, equivalent to having you
draw a straw from among a

45:37.480 --> 45:42.720
set of straws where there are
nine that send you to a long

45:42.720 --> 45:48.200
prison term and one that sends
you to a short one.
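
Lewis's claim that our actual practice amounts to a lottery is, at bottom, an expected-value point, which a short simulation can illustrate. This is a minimal sketch using the lecture's illustrative one-in-ten failure odds; the sentence lengths and function names are made up for the example.

```python
import random

# Sketch of Lewis's penal-lottery equivalence: punishing the realized
# outcome and drawing straws give the same distribution of sentences.
# The 9-in-10 / 1-in-10 odds come from the lecture's example; the
# sentence lengths below are invented for illustration.

LONG, SHORT = 20, 2  # hypothetical sentence lengths, in years

def punish_by_outcome(rng):
    # The world "draws the straw": 9 in 10 attempts succeed and draw
    # the long sentence; 1 in 10 are blocked by luck and draw the short one.
    return LONG if rng.random() < 0.9 else SHORT

def punish_by_straws(rng):
    # Explicit lottery: nine long-sentence straws, one short-sentence straw.
    straws = [LONG] * 9 + [SHORT]
    return rng.choice(straws)

rng = random.Random(0)
n = 100_000
avg_outcome = sum(punish_by_outcome(rng) for _ in range(n)) / n
avg_straws = sum(punish_by_straws(rng) for _ in range(n)) / n
# Both averages land close to the analytic expectation 0.9*20 + 0.1*2 = 18.2.
print(avg_outcome, avg_straws)
```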

45:48.200 --> 45:51.730
And he suggests that we can see
that our current practice

45:51.730 --> 45:55.730
is akin to that by bringing
you through a series of

45:55.730 --> 46:00.280
imaginary cases, each of which,
he suggests, has the

46:00.280 --> 46:05.850
same justification as
this straw drawing.

46:05.846 --> 46:10.116
So imagine a system where, once
we've established that the

46:10.115 --> 46:12.835
intent was equal in
the two cases,

46:12.840 --> 46:17.900
we have our two individuals
here just draw straws.

46:17.900 --> 46:20.250
Each of them has a one
in nine chance of

46:20.250 --> 46:22.210
getting off the hook--

46:22.210 --> 46:23.810
sorry a one in ten chance.

46:23.810 --> 46:27.300
And a nine in ten chance
of getting to jail.

46:27.300 --> 46:32.860
That corrects for luck.

46:32.860 --> 46:36.800
Equivalent to that, says
Lewis, is not that the

46:36.800 --> 46:40.230
individual draws the straw,
but that before the trial

46:40.230 --> 46:42.150
takes place, a court

46:42.150 --> 46:45.050
representative draws the straw.

46:45.050 --> 46:48.960
And though the jury is just
deciding, "did he intend to do

46:48.960 --> 46:52.330
it?" they don't know
in advance what the

46:52.330 --> 46:55.440
penalty will be.

46:55.440 --> 46:59.910
Equivalent to that, he suggests,
is that beforehand,

46:59.910 --> 47:03.460
the court representative draws
the straw and makes public

47:03.460 --> 47:09.160
that if they convict him of
intending to do the crime,

47:09.160 --> 47:13.770
then he will either receive
the nine-in-ten penalty, the more

47:13.770 --> 47:17.560
severe one, or the
one-in-ten penalty.

47:17.560 --> 47:22.240
Equivalent to that, suggests
Lewis, is that beforehand we

47:22.240 --> 47:27.350
provide a reenactment of the
crime, with similar odds, and

47:27.350 --> 47:31.940
if the victim dies in that case,
then the individual will

47:31.940 --> 47:36.030
get a more severe punishment.

47:36.034 --> 47:40.674
And, suggests Lewis, our
actual practice of just

47:40.670 --> 47:46.150
letting the world play itself
out, so that we punish the

47:46.150 --> 47:50.890
person whose crime attempt
succeeded more severely than

47:50.890 --> 47:56.010
the one whose crime attempt
didn't, is just a

47:56.010 --> 48:01.710
pre-enactment, morally
equivalent to a re-enactment,

48:01.710 --> 48:04.170
morally equivalent
to the drawing

48:04.170 --> 48:07.620
of straws in a lottery.

48:07.620 --> 48:12.700
So I'll close with that as a way
of trying to bring out yet

48:12.700 --> 48:18.110
again the perplexity of moral,
or perhaps, immoral luck.

48:18.110 --> 48:21.570
And we'll open next Tuesday with
our discussion of Kazdin

48:21.570 --> 48:25.670
and then move on to our
selection from the writings of

48:25.670 --> 48:27.540
Thomas Hobbes. And
I'll post that

48:27.540 --> 48:30.020
reading for you by tomorrow.

48:30.020 --> 48:31.270
Thanks.