PHIL 181: Philosophy and the Science of Human Nature

Lecture 4 - Parts of the Soul II

Overview

Professor Gendler begins with a demonstration of sampling bias and a discussion of the problems it raises for empirical psychology. The lecture then returns to divisions of the soul, focusing on examples from contemporary research. The first are dual-processing accounts of cognition, which are introduced along with a discussion of the Wason selection task and belief biases. Next, the influential research of Kahneman and Tversky on heuristics and biases is introduced alongside the famous Asian disease experiment. Finally, Professor Gendler introduces her own notion of alief and offers several examples that distinguish it from belief.

 

Philosophy and the Science of Human Nature

PHIL 181 - Lecture 4 - Parts of the Soul II

Chapter 1. Sampling Bias [00:00:00]

Professor Tamar Gendler: So what I would like you to do is to answer the following question. “Do you have…?”–Oh, now wait a second–this says polling closed. Let’s see whether we can get polling to open. “Do you…?”–No, that didn’t work. Oh, this is the most desperately horrible thing that I’ve ever had happen. OK. [laughter] Current slide. I’ve had horrible things happen that are worse than this, I’m sorry. I don’t want to belittle all the really horrible things that have ever happened to me. [laughter]

OK. So let’s try again. Click practicalities blah blah. OK. “Do you have a clicker yet? Polling is open.” OK. And there’s ten seconds left. You’ll see this little timeline. Eight, seven, six, five, four– ninety two of you, so many of you have clickers. OK. Let’s see what we learned. It looks like 97% of you have clickers. [laughter]

Now, I actually began with this exercise to make a point about psychological research. [laughter] We just made a classic and dangerous mistake, a mistake known as sampling bias. We used a measure which doesn’t give us accurate information about what we wanted to find out. We wanted to find out, what percentage of people in the room had clickers. And what we found out instead was, what percentage of people in the room with clickers had clickers. [laughter]  I don’t know who you guys are.
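To see the structure of the mistake, here is a toy simulation in Python (an editor's illustration with hypothetical numbers, not something from the lecture): if only clicker-owners can respond to a clicker poll, the poll reports 100% ownership no matter what the true rate is.

```python
import random

# Toy illustration of the clicker poll's sampling bias (hypothetical
# numbers): only students who own a clicker can respond to the poll.
random.seed(0)
population = [random.random() < 0.6 for _ in range(500)]  # True = owns a clicker

true_rate = sum(population) / len(population)

# The poll only ever hears from owners, so its estimate is always 100%.
respondents = [owns for owns in population if owns]
poll_estimate = sum(respondents) / len(respondents)

print(f"true ownership rate:   {true_rate:.0%}")    # roughly 60%
print(f"clicker-poll estimate: {poll_estimate:.0%}")  # 100%
```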

This error is an incredibly dangerous one, and it could have persisted. Suppose I then asked you whether you’re from the class of 2014, ‘13, ‘12, or ‘11, and discovered that 80% of you were freshmen and sophomores. There, again, I would have inherited exactly the sampling bias error that we just observed. Freshmen and sophomores have handed in their schedules already. Consequently, they’re certain about what classes they’re taking. Consequently, they’re more likely to have clickers already.

So when we do psychological empirical research, we need to be extraordinarily careful that the means by which we are choosing subjects for experiments are, in fact, means that give us an accurate sample of what it is that we’re interested in.

An article appeared recently in the journal Behavioral and Brain Sciences with the title, “The Weirdest People in the World”. And “WEIRD” here stands for Western, educated, industrialized, rich, and democratic. That is, American college students. People with access to online Internet polling. And what the article argued was that a great deal of psychological research that claims to establish general conclusions about human nature is biased in exactly the way my poll concerning what percentage of you have clickers was biased. It looks at a small sample of the population, those who are readily available to research laboratories on university campuses–for example, the introductory psych pool. And it bases its conclusions about human nature on that sample.

Throughout the semester we need to be attentive, when we read psychological studies which are making claims about human nature, to the fact that it is possible that some of the things which we are told apply to human beings in general have in fact been shown conclusively to apply only to WEIRD folk: Western, educated, industrialized, rich, and democratic. And in certain cases, the data would look very different if we looked at a different population.

This is also the case in the classical texts we used. The process that Plato and Aristotle took on for themselves didn’t look experimental in the way that experimental psychology looks experimental. But they looked at a sample of people around them, and on the basis of their experience, they drew conclusions about human nature.

So little reminder of something that I said in the first lecture that sometimes gets lost. In some ways, this is a class about philosophy and the science of human nature. And in other ways, this is a class about Western philosophy and the science of Western human nature. And we need to be attentive to that when it becomes relevant.

Chapter 2. Dual Processing Accounts of Cognition and the Wason Selection Task [00:05:58]

So what I want to do in today’s lecture is to return to the issue that we took up last class, which is the issue about the multi-part nature of the human soul. And you’ll recall that in the last lecture, we were introduced to Plato’s great analogy of reason, spirit, and appetite, as exemplified by a charioteer, a calm horse, and a wild horse. And also to a number of other distinctions. Left brain, right brain, which I managed to get wrong four times in a row, if I remember correctly. We were introduced to the difference between brainstem and upper brain. We were briefly introduced to Freud’s idea of superego, ego, and id. But we didn’t get to the work in the dual-processing tradition. And today’s lecture will take up where the last lecture left off, with another way in which it is typical to distinguish parts of the soul.

Let me say that there is very good reason to think that the research which I’m presenting today is not subject to the WEIRD objection. That is, there have been powerful cross-cultural demonstrations of nearly all the results which I’ll be talking about today. And there are also good evolutionary reasons to think that the two systems which dual systems theorists posit are, in fact, going to be part of any human being because of the evolutionary process, which all of us underwent.

So I wanted to start with a picture of Edward Thorndike not because he’s important, but just because he’s so fabulous-looking. This is from the New York Public Library Archive. Edward Thorndike was a late nineteenth/early twentieth century psychologist who did a lot of important associationist animal work. But he also did research, published in an article in 1922, that in some ways can be seen as the founding work of the reasoning tradition that Jonathan Evans described in the article that we read for last class.

So he describes, in his 1922 article, an experiment that he does, which is entitled “The Effect of Changed Data Upon Reasoning.” And what he’s interested in there is the question of whether problems that are posed to people that are formally identical, but that differ in how that formal material is presented, are processed differently.

So, for example, he asked people either, what is the square of x plus y (first, outer, inner, last, I think, is how you would do it), or, what is the square of b1 plus b2? Whereas people found the first question easy, they found the second question much more difficult. Success rates on the first [x/y] were up around 90%. Success rates on the second [b1/b2] were considerably lower. Or he asked them, what’s the square of a2x3, versus what’s the square of r18r112? And so on. Presenting people with problems that were formally identical but which differed in terms of the complexity of the characters used to represent them produced a massive decrement in performance.
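To make the formal identity concrete, here is the first pair worked out (an editor's rendering; the slide itself isn't reproduced in the transcript):

$$(x + y)^2 = x^2 + 2xy + y^2 \qquad\text{and}\qquad (b_1 + b_2)^2 = b_1^2 + 2b_1 b_2 + b_2^2$$

The expansion rule being applied is exactly the same; only the symbols filling the template differ.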

Fast-forward to a period in the 1960s, ’70s, and ’80s, in which people began to study syllogistic reasoning, and a number of special instances of this phenomenon emerged. So in the Jonathan Evans piece that you read for last class, you were presented with examples like this. A syllogism that was valid and believable–that is, whose formal properties guaranteed that if the premises were true, the conclusion was true, and where, as a matter of fact, the conclusion was true. And arguments that were valid but unbelievable–arguments where the structure of the argument guaranteed that if the premises were true, the conclusion was true, but where the conclusion was false.

So, for example, you might be told: No Greek tragedies are comedies. Some Greek plays are comedies. And asked whether it followed from that that some Greek plays are not Greek tragedies. No Greek tragedies are comedies, some Greek plays are comedies, therefore, some Greek plays are not Greek tragedies. And 90% of people were able to see that that argument was valid.

By contrast, you might be given an argument, equally valid, but with an unbelievable conclusion, like: No Russian novels are short. Some novels by Dostoevsky are short. Therefore, some novels by Dostoevsky are not Russian novels. That’s a valid argument. It’s a valid argument with a false conclusion, because one of the premises is false. But as a result of the conclusion being implausible, only 55% of people were able to recognize that the conclusion followed from the premises.

Notice, however, that these two arguments are structurally identical. They both have a form, no A’s are B’s, some C’s are B’s, therefore, some C’s are not A’s. Formal properties alone don’t determine our ability to judge validity. And, in fact, it is also true that formal properties alone don’t determine our ability to judge invalidity.
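Because both arguments share that form, their validity can be checked mechanically. Here is a minimal model-checking sketch in Python (an editor's illustration, not part of the lecture): it searches small set-theoretic models for a counterexample to the shared form and finds none.

```python
from itertools import product

def is_valid(premises, conclusion, domain_size=3):
    # Each element of the domain is described by a triple of booleans:
    # (is it in A?, is it in B?, is it in C?). For syllogistic forms, a
    # countermodel exists only if one with a domain this small does.
    for model in product(product([False, True], repeat=3), repeat=domain_size):
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # found a model where premises hold but conclusion fails
    return True

no_A_are_B   = lambda m: not any(a and b for a, b, c in m)  # "No A's are B's"
some_C_are_B = lambda m: any(c and b for a, b, c in m)      # "Some C's are B's"
some_C_not_A = lambda m: any(c and not a for a, b, c in m)  # "Some C's are not A's"

print(is_valid([no_A_are_B, some_C_are_B], some_C_not_A))   # prints True
```

Substituting Greek tragedies, comedies, and Greek plays (or the Dostoevsky content) changes nothing in this check, which is exactly the point: the two arguments are formally identical.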

So whereas valid arguments with plausible conclusions are judged to be valid roughly 90% of the time, we just heard that valid arguments with implausible conclusions are judged to be valid considerably less often. An opposite error arises in the case of invalid arguments. Invalid arguments with plausible conclusions are judged to be valid. It’s judged to be the case that the form of the argument guarantees the truth of the conclusion. Whereas it turns out that what guarantees the truth of the conclusion, in that case, is nothing other than facts about the world, as opposed to facts about the structure of the argument.

In light of this Thorndikean tradition, researcher after researcher came up with research paradigms that demonstrated what we’ve just been talking about. So, famously, the Wason selection task asks people to determine which of four cards they need to turn over to verify the truth of a statement.

So, for example, I might give you a statement, “if there’s an A on the one side, there’s a three on the other.” And I might present you with four cards, and ask you which ones you need to turn over. [image of 4 cards] Obviously, you need to turn over the A, and everybody recognizes that. You need to check whether there’s a three on the other side. Obviously, you don’t need to turn over the D. You know that the cards have a letter on one side, and a number on the other. So there’s no worry that there’s an A on the other side of it. But people have a tendency to think that you need to turn over the three, and that you don’t need to turn over the seven. But look out: right there on the other side of the seven was an A, and the statement turns out to be false. The ones you need to check are the A and the seven.

People find this task relatively difficult. But here’s a structurally identical task that people find relatively easy. “If a person is drinking beer, the person must be over 21.” I show you four cards. The beer drinker, the soda drinker, the over-21-year-old, and the 17-year-old. And every single one of you could, I take it, get a job as a bouncer, walk in, discover that the 17-year-old is drinking a beer, and thereby learn exactly what was hard to see in the earlier case: that the cards you need to turn over are the first and the last, rather than the first and the third.
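Here is a brute-force sketch of the card task's logic in Python (again an editor's illustration, not from the lecture). A visible face needs turning over exactly when some possible hidden face would falsify the rule "if there is an A on one side, there is a 3 on the other":

```python
letters = ["A", "D"]   # possible letter faces
numbers = [3, 7]       # possible number faces

def falsifies(letter, number):
    # The rule fails only for a card pairing an A with a non-3.
    return letter == "A" and number != 3

for face in ["A", "D", 3, 7]:
    if isinstance(face, str):
        must_turn = any(falsifies(face, n) for n in numbers)  # hidden side is a number
    else:
        must_turn = any(falsifies(l, face) for l in letters)  # hidden side is a letter
    print(face, "-> turn over" if must_turn else "-> leave")
```

Reading "drinking beer" for A and "over 21" for 3, the same check picks out the beer drinker and the 17-year-old.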

Now, there have been all sorts of explanations hypothesized for why it is that we find the second of these tasks easier than the first. Perhaps the most famous of these is a hypothesis advanced by the evolutionary psychologists Leda Cosmides and John Tooby, who argue that we have within ourselves what they call a cheater detection module, and that we’re enormously sensitive to cases that involve violations of normative rules.

So you notice that the second one is different from the first in two ways. One is that it’s socially embedded, whereas the first is purely abstract. And we’ll discover, as we continue our readings this semester, that social embedding awakens reasoning processes that aren’t present otherwise. In fact, we saw that last week with the eyes study, when we learned that people are more pro-social, more likely to engage in morally normative behavior, when they’re visually presented with eyes, in part because it awakens and activates a kind of social understanding which all of us have.

So the first difference between the bottom and the top is that the bottom one invokes sociality, whereas the top one is purely abstract. And the second difference between the bottom and top is, of course, that the bottom one is a normative rule. It’s about how things ought to be. Whereas the top one is a descriptive rule. It’s about how things are.

So the lesson that we can take away from the Wason selection task–and for those of you who are interested in it, there have been thousands and thousands of variations done on it, which are extremely interesting in sorting out exactly where people are good and not so good at the task–is the lesson that has been emphasized throughout this lecture and the previous one: that we have ways of processing information that don’t merely track formal properties.

And some of those ways of processing information involve bringing on line, so to speak, what Plato would call parts of the soul which had not been previously attentive to the situation. Now–is there a question in the back?

Student: Yeah, sorry. For the last one, what exactly were the subjects asked to do? I don’t really understand the experimental design.

Professor Tamar Gendler: So the experimental design for both of these cases was as follows: you’re given a sentence that you need to verify the truth of. So you’re asked, “is it true that if there’s an A on one side, there’s a three on the other? Is it true that if a person is drinking beer, the person must be over 21?” And you’re told that you have four items in front of you, and you’re asked, “which ones do you need to turn over to verify the statement?” So to verify the statement, “if there’s an A on one side, there’s a three on the other side,” you need to turn over the A and the seven, not the A and the three. And that should become obvious to you if you look at this case, where, to verify that if a person is drinking beer, the person must be over 21, you need to turn over the card belonging to the beer drinker, and the card belonging to the person who is in fact not over 21. So does that clarify? [nod from student] Excellent. OK.

So dual processing accounts attempt to provide a general explanation for what’s going on in the cases I’ve just described, and in the cases I’m about to describe. They suggest that we have two relatively autonomous mechanisms for processing information. They’re called all sorts of things, but what’s become the most standard way of describing them is to call them simply system one and system two.

So whereas system one is evolutionarily primitive–it makes use of parts of the brain that came into our evolutionary process relatively early in the game–system two is evolutionarily relatively recent; it involves higher cortical function. System one is unconscious, or preconscious, whereas system two is conscious. System one operates automatically, whereas system two is consciously controlled. System one is effortless; it happens without our trying. System two is effortful, in the sense that it involves an expenditure of cognitive energy; you have to pay attention. System one is super fast; it processes information almost instantaneously. System two is, relatively speaking, slow; the information that we get through system two takes considerably longer: seconds rather than milliseconds. System one is associative; it recognizes patterns in the world. System two is rule-based; it can apply principles. System one is, people sometimes say, reflexive; it happens without the kind of reflection that underlies system two.

So the distinction between system one and system two is the result of many decades of work by many people. Here’s a chart from a different paper by Jonathan Evans in which he enumerates–and this will be available to you on the v2 site–some of the many researchers whose work went into talking about system one and system two.

And I want to point out to you one important thing here, which is that it’s a bit misleading to speak of system one and system two as if they are individual things. System one is sometimes called the autonomous set of subsystems. The idea is that there is visual processing, and auditory processing, and processing that gives us very specific information about things like faces, or the average length of lines, or that enables us to recognize something as predator or prey. All of those systems have the characteristics that system one does. They’re quick, they’re unconscious, they’re evolutionarily primitive, they come online without reflection. But they do not, taken together, form a single coherent system.

So dual processing accounts are a way of trying to make sense of a set of phenomena, some of which have to do with the processing of reasoning, and some of which, as we learned in the absolutely lovely Nobel Prize speech of Daniel Kahneman, which I asked you to watch for today, take place in a more general domain of reasoning. And I’ve reproduced for you here Kahneman’s beautiful chart explaining his understanding of the relation between system one and system two, where he talks about the similarities between perception and intuition on the one hand and reasoning on the other. And again, I’ll leave this slide for you on the website.

Chapter 3. Kahneman and Tversky on Framing Effects [00:23:55]

So what I want to do in the next part of the lecture, is to move from the discussion of Evans, which in some ways was left over from last lecture though connected to this one, and talk a little bit about the work of Daniel Kahneman and his collaborator, Amos Tversky.

So you’re now going to get the second chance to use your clickers. We’re only doing one other try today, and if this one works, we’ll have a whole slew next Tuesday. OK, so this is the famous Asian disease problem from Kahneman and Tversky, and it runs as follows. A terrible disease has struck 600 people in your town. Without treatment, they’re all doomed. You are the mayor, and there are two courses of treatment available.

If your last name begins with the letters A through L, you’re going to need to read the information that I’m going to put in the green box. So with your head pointing over to this side, only read what happens in the green box. If your last name begins M through Z, you’re going to read the information in the blue box. And let me tell you, we will use these color conventions throughout, if this ends up working: the A through L group will have only numbers ‘1’ and ‘2’ for its answers, and the M through Z group is going to have numbers ‘3’ and ‘4’. So if you’re an M through Z-er, look at the blue box and get your fingers ready on three and four, and if you’re an A through L-er, look at the green box and get your fingers ready on one and two.

I asked Marvin Chun how to do slides like this, and this was his suggestion. OK. So ready? I’m going to tell you about plan A and plan B, so look at your side of the board and not the other. OK? So read about plan A. And now read about plan B. OK. Get your clickers out, and if you are on the green team, use ‘1’ to indicate that you’d choose plan A, and ‘2’ to indicate that you’d choose plan B. And if you’re on the blue team, use ‘3’ to indicate that you choose plan A and ‘4’ to indicate that you choose plan B.

OK. I’m going to put on the timer. We’re at 64… 66… we want zillions and zillions and zillions of responses. And let’s see how the numbers come out. OK. Here are our numbers. OK.

Those of you on the green team, 40% of you chose plan A. Those of you on the blue team, only 25% of you chose plan A. Ah, you know what? This is actually not quite right. The relative size of the bars is what’s relevant, but the display divides each group into 100. I need to learn a little bit more about how to use clickers. So let me now regroup and make my point again.

As you’ll notice, on the green team, the relative preference for plan A exceeded the relative preference for plan B. Whereas in the second group–which is the very small second half of the class; the problem is that 60% of you are in A through L, and only 40% of you are M through Z–the relative preference is for plan B rather than plan A. (If I could quickly multiply 26 by 0.4, we would be able to find out what absolute percentage that was.)

Notice, however, that the two versions of plan A are identical. There are 600 people. And on the green side, under plan A, 200 people will live, which means 400 people will die. And on the blue side, there are 600 people; if we go with plan A, 400 people will die, which means it’s certain that 200 people will live. However, the results, which you all showed, are, in fact, exactly the typical set of results.

Typically, people presented with a problem that involves a choice between certainty and probability framed in terms of its positive outcomes will go with the certain rather than the risky plan. Whereas people told the same thing, but prompted to focus on the certainty that 400 people will die, tend to go with the probabilistic option.

Notice, again, plan A and plan B on the two sides are exactly the same. Just one is framed in terms of who will live, the other in terms of who will die. And the result is almost a complete inversion of people’s preferences. And we get these sorts of framing effects over and over.
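For reference, in the standard published version of the problem (Tversky and Kahneman, 1981; the slides themselves aren't reproduced in the transcript), the certain plan saves exactly 200 of the 600, while the risky plan saves all 600 with probability 1/3 and no one otherwise. The expected outcomes are identical:

$$\mathbb{E}[\text{saved} \mid \text{risky}] = \tfrac{1}{3}\cdot 600 + \tfrac{2}{3}\cdot 0 = 200 = \mathbb{E}[\text{saved} \mid \text{certain}]$$

Only the frame, lives saved versus deaths, differs.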

Here’s a study by Kahneman and Tversky’s student, Eldar Shafir, from the early 1990s. You go to an ice cream store, and you’re hoping to get yourself two flavors of ice cream. One is a good flavor, the other is an excellent flavor, but it has high cholesterol. And you discover that you only have enough money to buy one of the ice creams.

So if I asked you which one you would choose, 28% of people choose the good flavor, and 72% of people choose the excellent flavor with high cholesterol. But if I ask you which one you give up, 55% of the people give up the one with the good flavor, and 45% of the people give up the one with the excellent flavor, high cholesterol. But these are exactly the same question: if you choose A, you give up B; if you choose B, you give up A. And yet the numbers don’t match up. When you’re asked which one you choose, the excellent looms large, and so you go for it, neglecting the high cholesterol. When you’re asked which one you give up, the high cholesterol looms larger.

This phenomenon occurs over and over again. Suppose you’re going to a movie, and when you get to the theater, you discover that you’ve lost something from your wallet. What you’ve lost from your wallet is either a $10 bill or a $10 movie ticket which you had purchased last night for the movie. You get to the theater, you open your wallet to go in, either, in the first case, to pull out a $10 bill to buy the ticket, or in the second case, to pull out your “Admit 1” ticket to let yourself in, and you discover that you’ve lost the item that would have enabled you to get into the theater. However, you have another $10 bill in your pocket, and the question is: do you buy another movie ticket?

For people who’ve lost a $10 bill on the way to the movie theater, 90% of them say, “I lost a $10 bill, but so what? I’m going to buy a new ticket.” For people who have lost a $10 movie ticket, which is of exactly the same value as a $10 bill, only 42% said that they would spend the $10 to buy another ticket.

Chapter 4. Alief [00:32:18]

So framing is one of the examples of a heuristic, or bias, which Kahneman and Tversky focus on in their work. We will return to some other examples later in the semester, when we read the work of Cass Sunstein. What I want to point you to now is a particular example, which is going to serve as our segue into the idea of alief. And that’s the distinction between frequency and probability.

So suppose you’re trying to get a red ball because getting a red ball will help you win a prize. And you have a choice about whether you want to draw from this box over here, which has nine white balls and one red one, or this box over here, which has eight red balls and 92 white ones. So here you have a 10% chance, here you have an 8% chance. And you’re going to be drawing from the boxes blindfolded. An image for which I don’t suggest doing a Google search. [laughter]

So, I found blindfolded justice. That did a little better. All right. So you have your choice. You’re trying to get a red ball: do you want to draw from this box, or do you want to draw from that box? Obviously, rationally, you’ve got a better chance over here. But people are in fact pulled in two directions. You have a 10% chance over here, but my goodness, there are eight balls over there! Eight! One! Eight! More! More! More!
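A quick simulation in Python (an editor's sketch, not from the lecture) confirms the arithmetic the pull is fighting against: the one-in-ten box wins more often than the eight-in-a-hundred box, even though the latter contains more red balls.

```python
import random

# Estimate the chance of drawing a red ball from each box by simulation.
def win_rate(red, total, trials=100_000):
    box = ["red"] * red + ["white"] * (total - red)
    return sum(random.choice(box) == "red" for _ in range(trials)) / trials

print(f"1 red of 10:  {win_rate(1, 10):.3f}")    # approx. 0.100
print(f"8 red of 100: {win_rate(8, 100):.3f}")   # approx. 0.080
```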

What’s going on here, I suggest, is that whereas you have a belief that here you have a 10% chance, you have what I call an alief that here there are eight, whereas over there, there is one. So alief is a notion that I’ve actually discussed on bloggingheads.tv with Paul Bloom. [series of images from bloggingheads] And though we were filming in different places–here I am in the game room at my house with our geo-puzzles, here’s Paul in the study–it appears that we were separated at birth, because that photo was taken at the same time as this one. All right. [laughter]

So suppose I take you to the Grand Canyon, and I bring you out on the glass walkway that extends 4,000 feet above the roaring river below. And you step out there with me, and you voluntarily remain there. I take it if you voluntarily remain on a glass surface 4,000 feet above a roaring river, you believe that surface is safe. Nonetheless, I will wager that most of you would shudder and shiver and shake. And you’d do so because you have what I would call an alief that says to you, “I am 4,000 feet in the air, with nothing holding me here, and I’m going to tremble.”

Or suppose we’re watching a Western movie–anybody recognize that gentleman? [picture of Ronald Reagan] Guys, he was president when you were born. When you watch a movie, a Western movie, and the bullets are flying off the stage, obviously you believe that you’re safe. You don’t think, “oh, what a good thing it was that the bullets didn’t come off the screen this time.” Nonetheless, particularly in 3D, you will bend your head down. If the green slime is coming off the screen, you’ll tremble in your seat. If Anna Karenina is about to die, you will cry. Not because you believe that you’re in danger, but because you have an alief.

How many of you set your watch five minutes fast, and then subtract back down? When you do it, you believe that it’s ten, but you have an alief: you look down at it–10:05! It enters your visual system, it gets processed really fast, and it says “10:05! Hurry!”

Or suppose you’re watching a rerun of your favorite team on television, and you know that if one of the guys on the team tries to steal second, he’s going to be thrown out. And so you yell at the television screen, “Don’t run! Don’t run!” Why? Because you believe that your voice is going to go through the television screen, back in time, [laughter] to first base to reach the runner? No. You have a belief that it’s a rerun, and your alief says “don’t run”.

Suppose you’re on a diet, and you see this beautiful piece of chocolate cake, and you have the belief that it’s undesirable. Your alief system, in the form of your Platonic horse, may nonetheless pull you towards it. [laughter] Suppose I present you with this delicious cake [picture of a cake that looks like a box of kitty litter]. Those are Tootsie rolls, this is a perfectly clean and sterilized pan, that’s coconut there. In fact, this one has the same ingredients as that one, exactly. You believe me, right? I’m your professor, you’re here, listening to what I tell you. I tell you, this is edible! Tootsie rolls, coconut. [laughter] Nonetheless, I take it that your alief system kicks into gear.

Suppose I ask you to sign this contract: “I hereby assert that my soul belongs only to you, O Satan.”  [laughter] And I write at the bottom, “This is not a legal contract. It’s just a prompt in a psychology experiment.” You will, nonetheless, be reluctant to sign. Not because you don’t believe that this is a legal contract, right? [high-pitched voice] “Oh my goodness, if I sign my soul over to the devil, I can tell it’s the devil–it’s parchment.” [laughter] No. You believe that there’s nothing to it, but nonetheless, you hesitate.

Suppose I take you to Monica Bonvicini’s bathroom, which is, as you can see, completely opaque from the outside. You stand outside this public restroom, you peer into it, you see that there’s no way to see inside. Your belief is that you are totally protected. Nonetheless, when you go in to use the facilities, it looks like that. [transparent walls] And your alief makes it rather difficult to do what you had gone in with the intention of doing.

Suppose we have a bag of sugar and two glasses of water. And you take a spoonful of the sugar, and you put one in this glass and label it “Sugar,” and one in this glass and label it “Poison.” You took the sugar, you put it in the glasses, you put the labels on it. Nonetheless, people are reluctant to drink from this glass. Moreover, they’re not just reluctant to drink from it when it says Poison. They’re reluctant to drink from it when they’ve written the words Not Poison on it. [laughter] Why? Because the word poison is there. Running into your alief system. “Good sugar. Mmm drink sugar. Poison? Poison? Don’t drink the poison!”–OK.

Suppose that I have a kitchen, and I’m interested in making some kitty litter cake, so I have my cake pans stored over here on the left. And a chef comes in and says to me, it would be much more efficient if the cake pan were over on the right. He moves it, and he’s very pleased with what he’s done. And when I ask him to get the cake pan, he says, “I’m so happy that we moved the cake pan to the right cabinet”–and then reaches for the left cabinet, exactly where it used to be. His belief is that it’s on the right. His alief is a lagging habit.

Any of you who have ever rearranged your room know this feeling. Any of you who has ever put your cell phone in your hand, and then looked for your cell phone, knows it too. I’ve had terrors of having lost my children, and then realized they were on my shoulder. [laughter]

So to have an alief is roughly to have a representationally mediated propensity to respond to an apparent stimulus in a particular way. Right? So with the apparent stimulus of the glass skywalk, or the apparent stimulus of the kitty litter cake, you have a propensity that’s either innate, as in the case of the glass, or the fudge shaped like feces, or habitual, as when you rearrange the kitchen. These are ways of responding to the world that activate the sort of lower-level systems that we’ve been talking about. And importantly–although we can recognize alief most easily by looking at these kinds of discordant cases, the cases where belief tells you to do one thing, and alief tells you to do the other–in fact, alief is active all the time.

Every time I’ve used my right thumb to push the key on this, I’ve done it out of alief. Fortunately, it’s one that corresponds with what I intend to do, but it is certainly the case that an enormous proportion of our actions are governed by alief. The question is this: given that I just showed you that there are hundreds and hundreds of ways of describing what I adverted to with the notion of alief, why introduce this new term?

The story has something to do with alief itself. So every 20 years or so, the United States government introduces a dollar coin. Here’s the one they introduced in 1921. Here’s the one they introduced in 1972. Here’s the one they introduced later in the 1980s. Here’s the one they introduced in 1980, with Susan B. Anthony. Here’s the Sacajawea one they introduced in 2003, and in a massive fit of public relations genius, here’s the Millard Fillmore golden coin, which will be issued soon.

What’s going on here? What’s going on here is that it’s hard to get people to make use of something if it doesn’t fit into the currency system which they have already. Dollar coins don’t fit naturally into the ways that Americans use money.

Likewise, talk of system one, system two, relatively autonomous systems, heuristics and biases, and so on, doesn’t fit naturally into the way that we have of talking about ourselves. We talk about ourselves in terms of beliefs and desires. And in order to make use of the gold coin that is the recognition of the multipartite soul, we need a notion that fits into our conceptual currency. And that’s the role that, my hope is, alief will play.

So alief is going to return to us in later lectures. We’ll hear about it again in the context of the harmony of the soul and in other domains; we’ll hear about heuristics and biases; and we’ll hear about the multipart soul. I look forward to seeing all of you on Tuesday for harmony and happiness.

[end of transcript]
