ECON 251: Financial Theory

Lecture 16

 - Backward Induction and Optimal Stopping Times

Overview

In the first part of the lecture we wrap up the previous discussion of implied default probabilities, showing how to calculate them quickly by using the same duality trick we used to compute forward interest rates, and showing how to interpret them as spreads in the forward rates. The main part of the lecture focuses on the powerful tool of backward induction, once used in the early 1900s by the mathematician Zermelo to prove the existence of an optimal strategy in chess. We explore its application in a series of optimal stopping problems, starting with examples quite distant from economics such as how to decide when it is time to stop dating and get married. In each case we find that the option to continue is surprisingly valuable.

 

Financial Theory

ECON 251 - Lecture 16 - Backward Induction and Optimal Stopping Times

Chapter 1. Calculating Default Probabilities [00:00:00]

Professor John Geanakoplos: We’re now at the stage where we’re considering the implications of uncertainty, so I hope that the subtlety, and surprise element of the class, will gradually pick up without increasing the difficulty. The complexity will pick up a little bit, but not the difficulty. It’s just you’ll have to keep a few more things in your head, but the mathematics isn’t any harder.

So we ended last time talking about default and inferring default probabilities, and so I just want to finish off that discussion. So suppose that at any stage of the tree, you know, lots of things can happen in the world. We’re always going to model the uncertainty in the future by a tree with different things happening, and at each of these nodes people are going to have a discount rate.

So maybe it’ll be r equals 20 percent, and here r could equal 15 percent, something like that, and we want to add to this the possibility that there’s default. These things keep going, and maybe there are payoffs at the end or payoffs along the way. At any point in the tree, like this one, we add a new possibility, which is the default possibility. So this happens–by the way, when do people default?

They never default before they have to make a payment. So when do they default, exactly when they’re supposed to make a payment. So suppose that this guy is going to default here when he’s going to make a payment. At every possible scenario he would default there. So we’ve got a very simple model of default, so not a very realistic one where the guy defaults in all of these following scenarios. So something’s just bad. Once he’s gotten here you know that he’s not going to make the payment the next period.

We further assume that not only does he default there, but he defaults on everything thereafter. So the payoff is just going to be 0 here. So this is going to be–originally we had probabilities p1, p2, p3, let’s say for the probabilities, now we’re going to have probabilities d for default and then 1 - d times all of these, right? So essentially what have we done?

We’ve simply replaced in our calculation of payoffs and present values, we’ve simply replaced these possibilities with probability p1, p2, p3. We added another possibility, but the payoffs are 0 here. Nothing’s going to happen from then on except 0, and we said that happened with probability d, which means presumably, all of these have to be scaled down by that so they still add up to 1. So essentially the point I’m trying to make is that default that leads to 0 payoffs thereafter is just like discounting more.

Why is that? Because whatever calculation you did for the value here of what the bond could possibly be worth there–it’s all the same numbers as there were before except we’ve multiplied by 1 - d, so it’s the same thing. So instead of the no-default value, 1 over (1 + r) times future payoffs, the value under this special kind of default is going to be (1 - d) times 1 over (1 + r) times the same future payoffs.

I could rewrite 1 - d as 1 over (1 + s) or something, and so then I really have just 1 over (1 + r) times 1 over (1 + s), which is just 1 over (1 + r + s + rs). So that’s going to equal 1 over (1 + r + s + rs) times future values. So the effect of this special kind of default–we just get 0 thereafter. The guy decides after this payment, “I’m not going to make any more payments. I’m defaulting from then on.” When valuing the future payoffs, that’s the same thing as discounting not by r, but by r + s plus a little r times s.
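
(A quick numerical sketch of the identity just stated, in Python; the rates r and d below are made-up illustrative numbers, not from the lecture.)

```python
# Check: scaling the one-period discount factor by the survival
# probability (1 - d) is the same as discounting at the single higher
# rate r + s + r*s, where the spread s is defined by 1 - d = 1/(1 + s).

r = 0.05  # riskless one-period rate (illustrative)
d = 0.03  # one-period default probability (illustrative)

s = d / (1 - d)                # spread implied by 1 - d = 1/(1 + s)
lhs = (1 - d) / (1 + r)        # survival probability times discount factor
rhs = 1 / (1 + r + s + r * s)  # one discount at the higher rate

print(lhs, rhs)  # identical up to rounding
print(d, s)      # s is close to d when d is small
```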

If r and s are small numbers this [r times s] is probably quite a small number. So default probabilities get mapped into what are called spreads. The way to evaluate it is just to multiply by 1 - d, which is the same thing as discounting by a higher number, and that higher number is very close to r + d, as a matter of fact, because 1 - d is 1 over (1 + s), and if d is very small s is going to be very close to d as well. So what’s the implication of this?

Consider the implication of this in a special case where we just have no uncertainty, except we have default. So here there could be d1 and 1 - d1, here there could be probability of default d2 and 1 - d2, probability of default d3 and 1 - d3, and we’ve got interest rates–so this will be iF0, iF1, iF2. So if you knew what the interest rate was going to be today, you knew what the interest rate was going to be tomorrow, you knew what the interest rate was going to be the day after tomorrow, there’s no uncertainty about interest rates, they’re perfectly anticipatable, but you know that there’s a probability of default each time.

So in stage 1 this guy might default before making his payment here, which case you’re just going to get 0. In stage 2 he might default instead of making his payment, won’t pay the coupon, he’ll just default, or in year 3 he might default. So what’s the implication of what we just said?

You can evaluate this bond, the payoffs of the bond, so let’s say it pays a coupon C, then C, then 100 + C. All right, the way you would evaluate that without default is recursively. You would have gotten P3 = 100 + C. Then you would have gone in your computer and said P2 is 1 over (1 + iF2) times (100 + C). So the value here is the 100 + C discounted by that forward rate, then P1 would have been (C + P2) divided by (1 + iF1).

So you take the value here times that, so this is the case where there’s no default, and P0 would have been (C + P1) over (1 + iF0). So that’s how you would have done it by backward induction.

But now that you know there’s a chance of default you can’t just divide by 1 + iF2. You have to multiply all these things by the probability of no default. So you’d have to multiply this by, if we change colors, 1 - d. You have to multiply this by 1 - d2, and you’d have to multiply this by (1 - d1) times (1 - d2), and this by 1 -, sorry, this would be 1 - d3.

This is (1 - d3) times (1 - d2), and this is (1 - d1) times (1 - d2) times (1 - d3). So this value would be the old value you got here multiplied by 1 - d3. This value is what you would have gotten here, but you’ve already scaled it down, so multiplied by 1 - d2, and this is 1 - d1. So what’s the upshot? So that gives you the price with default.

Student: Professor?

Professor John Geanakoplos: Yep?

Student: When you’re doing the backwards induction should you be tacking on like that or should we just take one at a time. So P0 should be multiplied by all 3, or isn’t it that you’re taking whatever you get as P1 as given and just multiply by…

Professor John Geanakoplos: Well, I could have done it two ways. I could have written–so what you’re suggesting is a better way would have been to say P2, the default bond P2, is going to be (1 - d3) times P3, which is also equal to P3 because there’s no default after here. The world just ends.

Then P1 is equal to (1 - d2) times P2, but that P2, remember, is already (1 - d3) times P3. And then P0 is going to be (1 - d1) times P1, but that P1 is equal to (1 - d2) times P2, which is (1 - d2) times (1 - d3) times P3, right? So all right, if you go from here to here the value here’s 100 + C, so if P3 is just 100 + C let’s leave it as 100 + C.

That’s if the guy actually pays. So the present value would just be–oh, then you have to divide all this by 1 + i2. Sorry, this is 1 + iF2. Oh, I’m making a mess of this. So usually you’ll go back from here to here by discounting by the interest rate, but now we’re going to have to also multiply by the probability that you don’t default to go back here. So we get a lower number. P2 is not just (100 + C) divided by (1 + iF2), that’s the discounting, you also have to multiply by the probability of no default.

Then when you go back one period further you have to discount again. So I should have divided this by 1 + iF1, you have to discount it, and also you have to multiply by the probability of no default. But the thing you’re bringing backwards is P2, which has already taken into account the probability of default the next time. Then when you go back one step further you have to do the whole thing again divided by 1 + iF0.

Student: You shouldn’t have a 1 - d2 in there as well, shouldn’t you just be discounting by 1 - d1 and 0?

Professor John Geanakoplos: This is P1 not P2.

Student: You’ve already discounted by 1 - d2.

Professor John Geanakoplos: No, but I’m switching the Ps on you. When we go from here the value here we just calculated was going to be P2, so here the value of P1 taking into account default is (1 - d2) times P2, which already takes into account the default next time, times P2 discounted by the interest rate here. So I’ve got the 1 - d2 here.

Now, when I discount back to here you’re saying, “How come the d2 is showing up when I’m just at d1?” That’s your question, right? So it doesn’t show up. It’s just P0 is (1 - d1)–oh, you’re asking this, you’re right–times P1 divided by (1 + iF0). That’s right, but if I plug in for P1, P1 already has the d2 in it–that’s where the d2 came from.

So it’s (1 - d1) times (1 - d2) times P2, and then P2 had a P3 in it. So I’ve got all the defaults in it. Are you with me now? So, sorry about that, so you were right, I said it wrong. So, but this isn’t the point. This was supposed to be obvious. I didn’t even think about it.

The next step is the thing that’s not obvious. Here are the potential cash flows. You’re discounting them by the interest rate. You also have to discount them again by the fact that the guy might not actually pay you. So that gives you a lower present value. The yellow P2 is less than the no-default white P2. When you discount again you’re discounting the yellow P2 by the interest rate here, the forward rate, and also by the fact that the guy might not pay, so you have to multiply by 1 - d2, and you keep moving that backwards.
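
(A short Python sketch of the recursion just described; the lecture fixes no numbers on the board, so the coupon, forward rates, and default probabilities below are purely illustrative.)

```python
# Backward induction for a 3-year coupon bond paying C, C, 100 + C.
# Without default, each step back discounts at the forward rate; with
# default that pays 0 forever after, each step back also multiplies by
# the survival probability 1 - d.

C = 10.0                          # annual coupon (illustrative)
iF0, iF1, iF2 = 0.05, 0.06, 0.07  # forward rates (illustrative)
d1, d2, d3 = 0.02, 0.03, 0.04     # default probabilities (illustrative)

# No-default prices, following the recursion on the board:
P3 = 100 + C
P2 = P3 / (1 + iF2)
P1 = (C + P2) / (1 + iF1)
P0 = (C + P1) / (1 + iF0)

# With default: discount AND scale by survival at every step back:
Q3 = 100 + C
Q2 = (1 - d3) * Q3 / (1 + iF2)
Q1 = (1 - d2) * (C + Q2) / (1 + iF1)
Q0 = (1 - d1) * (C + Q1) / (1 + iF0)

print(P0, Q0)  # the defaultable bond is worth less: Q0 < P0
```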

Chapter 2. Relationship between Defaults and Forward Rates [00:14:58]

So that was supposed to be obvious even though I made it sound complicated. What’s slightly subtler is just saying the same thing backwards which is, suppose I knew all these forward rates. Suppose I had a bunch of bonds, suppose I had American bonds, coupon bonds.

So the American coupon bonds are going to pay, you know, the 1-year pays a coupon C1 and has a face of 100 and has a price Pi1. The 2-year American bond has a coupon C2, a face of 100, and a price of Pi2. And let’s say the 5-year has something, C5, a face of 100 and a price Pi5.

Now from that we know that we can deduce what all the forwards are. We did that in the first class. So now suppose at the same time we have Argentina. Many Argentine sovereign bonds promise payments in dollars, by the way, because they’re trying to trade them internationally. So let’s say we also have the Argentina bonds C-hat1, 100, Pi-hat1, that’s the 1-year, down to the 5-year which is the Argentina C-hat5, 100 and Pi-hat5, its price.

Now, let’s suppose that Argentina could default whereas America can’t. So it’s quite likely that Pi-hat1 will be less than the American Pi1 and Pi-hat5 is going to be less than the American Pi5, because all these bonds might default. So if the coupons were the same, if C-hat1 was the same as C1 and C-hat5 was the same as C5, the fact that Argentina could default obviously would mean its bonds would trade for less than the American ones.

So the question is can you figure out the default probabilities very quickly in Argentina without having to do a lot of complicated calculations, and the answer is yes. And why is that? Because you could take this data and you could say–so we could just erase this here. We could say, assuming no default, we could explain these prices by finding, just like we did for America, the Argentine forwards, 1 + i-hatF0, 1 + i-hatF1, up to 1 + i-hatF4. So these are the Argentine forwards.

Now, these forwards would be much bigger than the American forwards. Why is that? Because the prices in Argentina are so much lower. If you’re assuming there is no default, assuming no default, contrary to fact, how could you explain all these very low prices? Well, you must think that in Argentina they’ve got very high interest rates and very high forwards, so they’re discounting more and that’s why they’ve got lower prices.

And we know how to get those forwards assuming there was no default. So the trick, I’m merely pointing out now, is that if we now go back and say, ah ha, Argentina doesn’t have different forwards because anyone in Argentina–the bonds are denominated in dollars precisely so that people can be crossover investors. An American can put his money in Argentina. An Argentine can put his money in America, so you can move your money to either place. So it must be that the forward rates can’t be different.

If you knew for sure you were going to get paid in Argentina you’d have to have the same forward rates as in America. So the reason these forward rates are higher is because there’s a chance of default. So what is the chance of default? I claim the chance of default is incredibly simple to find out. You’re supposed to realize this now; if I’d been clearer before you would see where I’m going. So it’s 1 - dt = what?

Student: <>

Professor John Geanakoplos: But in terms of forwards is what? It’s not Pi–this Pi, this isn’t the 0 price. This is the big price of the bonds. So it’s not Pi-hatt over Pit. What is it though?

Student: The ratio of forwards?

Student: Pit - <>.

Professor John Geanakoplos: This is going to be a bigger number than that, and in fact that ratio is the default probability. So this is assuming, remember, that if the Argentine bond defaults at this period, say, it’s never going to pay anything after that. You’re going to get 0 payoff and all the other Argentine bonds will also default. I claim that this [note: meaning, 1 - dt = (1 + iFt-1) over (1 + i-hatFt-1)] is going to be the easy way of getting the default probability.

And so the differences in the forwards are just explained by the default probabilities. If this [the denominator] is a higher number than that [the numerator], since 1 - d is approximately 1 over (1 + d), then basically if all the numbers i and d are small, i-hatFt-1 is approximately iFt-1, the American one, plus this default rate in Argentina. So the d should probably have a hat too, because I’m referring to Argentina with the hat.

So why is that true? I just argued it. How could that possibly be true? So you see what I’m claiming: you now have a very simple algorithm for inferring what Argentina default rates are. Again, I’m making a special assumption that when Argentina defaults you get nothing. That really isn’t the case. There’s some huge convention that happens: all the countries they defaulted on get together. They’ve got some big meeting, and someone like Brady invents some idea where they’ll owe less, and there will be a writing down of principal, by the way.

So whenever this happens there is recovery after a writing down of principal. So what all the countries do is they say, “Okay, we know that you can’t pay all that you owe us. We’ll settle for half of it. We’ll write down the principal and we’ll hold you to that half, or to a third of it.” So this is one of the things we curiously haven’t done in America, where all these homeowners can’t pay and we don’t write down their principal. We just throw them out of their houses.

But anyway, let’s say you wrote the principal down to 0 in that special case. You could easily infer from the prices of the Argentine bonds what the default probabilities were, by this formula. And so the question is, why is that true? We know how to calculate the forwards in America given the American data. That was one of the first things we did in class. We said that every financial company in the whole country is doing that.

Everybody has those forwards calculated. Now, if you’re given the Argentine data, which is, after all, just coupons and the prices of the bonds, you could find Argentine forwards assuming there’s no default. But there is default, so it must be that they have access to the American interest rates and forwards, but the Argentine bond might default.

Do you see what we did when we did this calculation? The difference between the backward induction in America from here to here was just discounting by the American forward–to go to Argentina we had to discount by the American forward and multiply by 1 - d, so discounting it again. So all I’m saying is that in the U.S. when we went backwards we just discounted by this thing. In Argentina when you go backwards you have to discount by this thing, so those things have to be the same.

So the Argentinian discount is like taking the–I hope I haven’t got the thing–so the American forwards are going to be less than the Argentine forwards. It’s going to be like that. So that’s it. There’s nothing else to show except that whenever you’re going backwards here you’re discounting–remember, you’re discounting by the interest rate times the probability that you’re actually going to pay off, so that’s what it is in Argentina.

Hang on. I hope I haven’t inverted one of these, yeah, exactly. So if you write 1 - d times the American thing in the denominator, so as I said, to do the discounting in Argentina at every step from going back from here to here what did we do in Argentina?

We simply took 1 - d-hat, that was the default rate–hat–in Argentina, and discounted it at the American forward. So that’s what I did here. So if I take 1 - d-hat, this is multiplied by 1, so I take 1 - d-hat multiplied by 1 over this. I just get one over the Argentine discount, and that’s how we calculated–that’s how we went backwards with our recursion just taking the interest rate, the discount 1 over (1 + i) times (1 - d) and that’s how we discounted going backwards.

And so therefore in Argentina if you’re forgetting that there’s default and you’re just thinking you have to discount at the right rate and you’re getting this discount you’re getting this number, but in reality you should have been taking this divided by that. So therefore figuring out this and knowing that tells you what this has to be. So it’s extremely simple to deduce what the market thinks Argentinian default probabilities are year by year if you make the added assumption that once they default they default completely.

And if you think you’re only going to get a little bit back, well then the calculation won’t change that much. Yeah?

Student: Can we also do it by using the price of 0s?

Professor John Geanakoplos: Yeah, so you could also do it by using the price of 0s, but to me the best thing is, the easiest thing is using the forwards, but you can also do it by 0s. So that’s all I wanted to say. As I said, the one last thing to say is that if these numbers are all small then 1 - d is approximately equal to 1 over (1 + d). If d is very small those are practically the same things. Then if you multiply 1 over (1 + d) by 1 over (1 + i) it’s almost 1 over (1 + d + i), so it’s almost this thing. It’s not quite literally true, but it’s very close to say that the gap between Argentinian forwards and American forwards is just the default probability in Argentina, and that is why it’s called a default spread.

You just add some spread to the interest rate. You can guess by the spread what the probability of default is. If it’s 8 percent interest there and 3 percent interest here somebody must think the probability of default is 5 percent there. That’s it.
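
(A Python sketch of that shortcut; the two forward curves below are invented for illustration, echoing the 8 percent versus 3 percent example.)

```python
# Implied default probabilities from two forward curves. Under the
# lecture's assumption that a default pays 0 forever after,
# 1 - d_t = (1 + iF_t) / (1 + iF_hat_t).

iF     = [0.03, 0.03, 0.03]  # American forwards (illustrative)
iF_hat = [0.08, 0.08, 0.08]  # Argentine forwards (illustrative)

for t, (i, ih) in enumerate(zip(iF, iF_hat), start=1):
    d = 1 - (1 + i) / (1 + ih)
    print(t, round(d, 4), round(ih - i, 4))
# Each year the exact d is about 0.0463, close to the 0.05 spread,
# which is the approximation just described.
```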

Chapter 3. Zermelo, Chess, and Backward Induction [00:28:09]

So let’s now move to a tree where you have to make decisions. So I’m going to now describe the method of backward induction which occurs over and over and over again, and we’ve used it a couple of times, but not in its subtlest form, so backward induction.

Now, who first invented the idea of backward induction? Well, the first person who spelled it out formally was Zermelo in 1910, I think, that’s within a couple of years–a famous mathematician, known for the Zermelo-Fraenkel axioms. And he proved that there’s an optimal strategy in chess by backward induction. So, for example, let’s take a game.

We always are on a tree, but now we’re going to use a slightly extended definition of a tree. A tree is going to look like this. So it’s a root, a finite number of branches from every–I don’t want to formally define a tree. You know what it sort of looks like, and there’s no reason why the number of branches has to be two or even has to be the same from every point. But the reason we’re going to extend it is, the node is going to be described by who moves.

So let’s say white is moving here and black is moving here. Now, let’s say the outcomes are a win for white, a win for black, a draw, or a draw. So the question is, so it’s a two-move chess game, white moves first up or down, and then after white moves black moves up or down and then the game ends. And depending on which position you reach, either it’s a win for white, a win for black, or a draw. So what should white do?

Assuming that black is a smart player, what should she do? So what did Zermelo do? He said not only is there an optimal strategy, but you know what the outcome should be with rational players. So Zermelo said if white goes up then black is clearly going to go down and win the game. So white ought to be thinking here, if I go up, the game, although it won’t end for another period, is already lost. So the value of the game is already 0.

So this method of backward induction attaches the value. Here we have values at the end, and so to figure out what the right thing to do is by backward induction you can propagate the values backwards. If black makes the right choice here the payoff is black gets the negative of white, so the right choices here are black could get negative 1 or could get 0, so black clearly wants to get 0.

So black could win the game by moving down, so black surely will move down. So I should think of the game as already lost here and pretend that I had a shorter tree with the final valuation of zero at this node. Similarly, if white goes down it doesn’t matter what black does the game is going to be drawn. So white should think to himself the game is already a draw if I go down.

And now white has an easy choice: do I want to move to a loss or do I want to move to a draw? So I could just pick a move for black here. Clearly white is going to go down and therefore with correct play the game is a draw. So by backward induction you figure out the correct play.

Now, why is this surprising? Because chess has an incredibly big tree, not an infinite tree, there are all these rules that keep it finite. If you reach the same position three times it’s considered a draw. If you make something like 50 moves in a row without a pawn moving it’s a draw. Whatever those rules are–I used to play chess quite a bit. I’ve even forgotten. But whatever those rules are they’re designed to make the game finite, so the tree is finite.

And so it’s impossible to see the whole tree, and how should you know what to do at the beginning? Well, you don’t know what to do at the beginning until you know what black’s going to do afterwards, and so what could happen later in the tree. But if you were fast enough to put the whole thing on a computer you could figure out what to do at the beginning, because your best move at the beginning depends on what you think black is going to do next, which depends on what he thinks you’re going to do after that, which depends on what you think he’ll do after that. But if the tree ends you can always go backward from the end to the beginning and figure out what to do at the very beginning.

So this is a familiar argument to all of you, I think. It was a beautiful argument, spelled out in mathematics in 1910, though it had been anticipated in chess.

The chess players, of course, all knew about it. Steinitz was world champion from something like 1886 to 1894; Lasker became the champion then. Steinitz wrote a bunch of books and stuff in which he said there’s a backward induction value to chess, but since we can’t figure that out, on general principles you can tell by looking at the configuration of pieces what the right possible move is, so you can have positional values, and then you can have the backward induction values. So for instance a positional value might tell you that having doubled pawns is a bad thing, having control of the center is a good thing, and you add up all those pluses and minuses and you get these positional values.

And so he said if you’ve got the right positional algorithm, right positional understanding, your positional sense of what to do, you only need to analyze one move deep. You can figure out what the best position’s going to be and move that way, and if you really understand the game properly that positional thinking, that strategic thinking, so it’s called strategic thinking, is going to give you the same decision as the exhaustive analysis of all the possibilities which was tactical thinking, so the two should amount to the same thing.

Now, in fact, people can’t do the full tactical thing and also they don’t have the full strategic understanding either, so they kind of mix strategy and tactics and that’s what makes the game interesting. So no one has ever written this, but I’m sure there’s an interesting study to be made about what games are interesting and they must be the kinds of games where there’s always a mixture of strategy and tactics.

In game theory, as we describe it in economics, there’s no such thing as strategy. All this is out. It’s all just backward induction, which is what I’m teaching you. So the way computers play chess, incidentally, was invented by Shannon, the famous father of information theory, an engineering professor.

So he said, well, you can’t look at the whole tree, which is too big in chess–all this extends way further; maybe you can only look two moves deep. So what Shannon recommended is look as far ahead as you can in your computer, and apply some positional thinking to evaluate the positions at the pseudo end of the tree. So it’s really not a win for white. Let’s just pretend white’s so far ahead that we’ll call it a win, and a loss, and a draw, and a draw. That’s just by looking at positional values.

And then having assigned those terminal nodes values, now by backward induction you can figure out what the value is here and exactly what the right first move to make is. And after white’s move black will come here, black can now look two moves deep, so black’s going to look from here all the way down here. He’s going to do his positional evaluator to these nodes and try and figure out what they’re worth and then do backward induction to figure out what his right move is.

Anyway, that’s basically the idea of all chess algorithms, and then they’ve gotten refined by saying–wait a minute, there are lots of refinements. So I used to be very interested in this. I don’t think I’ll talk more about it unless anyone wants to ask me something. So there’s the origin of backward induction, Zermelo’s proof. It’s obviously a big deal in chess, and the chess players all knew about it before Zermelo, but they didn’t write anything down as formal as Zermelo did.

Chapter 4. Optimal Stopping Games and Backward Induction [00:36:48]

So how does this apply to everything we do in economics? Well, I want to give a series of examples culminating in market examples, but starting off a little far from life. So the first one I want to give is the red and the black. So these are just two games. This is the first one I invented ten years ago, but I don’t think they’re–anyway, I think they aren’t that original. I thought they were when I invented them.

But anyway, the red and the black works like this. There’s a deck of 52 cards, 26 red and 26 black, which is all I care about the cards. And someone offers you a game and they say, okay, the deck is upside down, the cards have been shuffled, you can turn over a card and if it’s black I’ll pay you 1 dollar. If it’s red you have to pay me 1 dollar. So I’m offering you this chance to play this game and of course you can quit whenever you want to.

I can’t keep forcing you to play, so anytime you want to you can quit. So that’s the game, can stop whenever you want. And once you draw the card you throw it away.

And so all these examples are going to be examples of stopping games and you’ll see in economics that when you prepay on a mortgage or when you call a bond you’re stopping the thing, the contract’s ending.

Life is going on, but that contract is ending, so you want to know, when’s the right time to take an action like that? So red and black is a simple game like that where you turn over a card. If it’s black, you’re in the black: you win 1 dollar. If it’s red you lose 1 dollar, and you can stop whenever you want.

So you have an option, so call this an option, and most people totally underestimate the value of options. So let’s just figure out the optimal thing to do. What would you do in this game? Would you play if I gave you the chance to play? I think I did this on the very first day. Yes, you’re about to say something? Your hand twitched.

Student: I was going to say we have decreasing marginal utility as well, so assuming we have <> you wouldn’t play the game because you would derive less utility from winning 1 dollar than you would <> in magnitude that the loss <>.

Professor John Geanakoplos: So I’m going to now disagree with what you said, but it’s very interesting what he said. He said, look, if you draw a card at the beginning it’s 50/50 whether you’re going to win or not. If you win you get 1 dollar. If you lose you lose 1 dollar, 50/50 chance. If you’re a little bit afraid, if a 1 dollar loss is more important to you than a 1 dollar gain right away it’s not very good odds, I mean, it’s barely even, and if you’re a little bit risk averse and it’s barely even you shouldn’t play. But now, is it really barely even, this game? Yep.

Student: Well, I mean, I think you should play because even if you get the first 26 red, at that point just go to the end and you haven’t lost anything except the time you’ve spent playing the game. You might as well play on the off chance that you’ll get some black ones first.

Professor John Geanakoplos: Right, so you can’t possibly lose if you play this right. You can always go to the very end of the deck. We’re ignoring a good point. We’re ignoring your utility of time, so you can always go to the end of the deck and assure yourself of 0. So this is actually a pretty valuable option to be able to stop like he says.

If the first one’s black you could stop and then you’ve won 1 dollar. If a whole bunch of them are red and you lose, well, you can always go to the end of the deck and get zero. So you’re never going to lose and you have a chance of winning. So obviously you should play. Even if you are risk averse you should play, but now the question is can we tell exactly. Suppose you’re risk neutral? How many dollars do you expect to win would you guess? Yep?

Student: 0.

Professor John Geanakoplos: You’d expect to get 0. Now he just made an argument that you should expect more than 0, because for instance he said take this strategy. Pick a card. If it’s black you win 1 dollar, quit. You’re 1 dollar ahead. So 50 percent of the time you’re plus 1 dollar. If it’s red the first time just close your eyes and play to the end of the game and you’re going to get 0, because you’re going to win 26 times and lose 26 times. So the expected value is .5. So there’s one strategy that gets you 50 cents on average. You can’t lose and half the time you’ll get 1 dollar, but that may not be the best strategy.

Student: You can play a bunch of times and at worst you’ll break even and at best you could get all 26 black.

Professor John Geanakoplos: Right. So he’s saying this isn’t ambitious enough. This surely gets a half a dollar, but you could do much better. Like, let’s just wait. The first time suppose you get 1 dollar. Suppose you get black the first time, so that gives you 1 dollar. Now, the trouble is the deck is starting to turn against you. Now it’s 25 blacks and 26 reds. So what would you do then?

Student: I’d stop.

Professor John Geanakoplos: You’d stop or keep going? Well, the deck is against you, so now your very next draw is unfavorable. And, by the way, playing to the end of the deck is going to lose you 1 dollar because there are 25 black and 26 red, so this argument that if you just play to the end of the deck you’ll break even it’s not true after you’ve already taken a black one out, so you could lose it.

From then on you’re starting to run a little bit of a risk. So we’re ignoring risk aversion. We’re just caring about expected dollars. The fact is the deck is against you, so should you play or not? And so first reaction is hell no, the deck’s against me. Why should I draw another card?

But you still have the option of going to the end of the deck, so the most you could lose is 1 dollar if you went all the way to the end of the deck, and who knows, maybe you’ll get a run of more black cards in the beginning and make a lot more than 1 dollar. So you should choose another card. If you got red on the next card you’d be back to even, but now it’s 25/25, and by the previous argument it’s obvious you should pick another one because the worst you can do is break even from then on.

But what if you got two blacks in a row? Well, now the deck is way against you. It’s 26 red and 24 black, so now you’ve only got a 48 percent chance of drawing a black one the next time. The deck is going further against you. Should you really draw another card? It’s more likely to be red. Well, the answer is yes.

And suppose you got a black one again, meaning you’re three up, and now the deck is 26/23. Sorry, I went the other way, 26 red, 23 black. It’s getting further and further against you. Should you draw another card? Well, what you’ve got is you’ve got a bad deck working against you, but you’ve got this option working in favor of you. So the question is just how valuable is the option. And like I said, people always underestimate the value of options. And so–okay, go ahead.

Student: Don’t you want to play until your lose-condition is either balanced out or worse than your win condition?

Professor John Geanakoplos: Yes, but what is that condition?

Student: Thirteen blacks <>.

Professor John Geanakoplos: If you got ten blacks in a row you would keep drawing blacks, is that what you were saying?

Student: Well, I mean, at that point if you assume that that’s all <>, assuming that you pick another red and then play out the end of the game you could lose, it would be…

Professor John Geanakoplos: Thirteen more. So you’re saying you want to keep drawing blacks until if you play to the end of the deck you would lose as much as you’d won up until that point. So you want to never run the risk of losing more. But you see, that strategy would get you to quit after the first one. After the first black if you ran to the end of the–your strategy doesn’t make sense. By going to the end of the deck you’re always going to undo everything you’ve won until that point, because you’ll be zero no matter what if you go to the end of the deck. Yep?

Student: Shouldn’t you quit whenever you have 1 dollar, because say if you have more black cards than red cards then the number of black cards left in the…

Professor John Geanakoplos: Deck is less…

Student: Is lower than the number of red cards left in the deck, so you should quit whenever you have 1 dollar.

Professor John Geanakoplos: No. So he’s saying just the commonsensical thing. After you draw one black card he would quit because now the deck is against you. It’s 26 red and only 25 black. The deck’s against you. Why go on and play against an unfavorable deck? You’ve got your dollar, be satisfied and quit. That’s his recommendation, but that’s wrong and it’s because you’re doing what everybody does.

You underestimate the option. The option is incredibly valuable here, and now I just want to show how to compute what your optimal strategy is, and I think you’ll be surprised. You should keep drawing, and not just three times. If you got a fourth black card, so you’ve already made 4 dollars–it’s a sunk cost–and you’ve got this horrible deck, it’s 26, 22, should you keep playing? Yes you should. Yes you can.

If you get a fifth black card in a row you’re up 5 dollars. The deck is horribly against you. Should you keep playing? Yes you should. So anyway, it’s really shocking, I think. So now let’s just see how to compute this out so that we don’t have to argue about it. It’s just a little bit of mathematics and you just see how surprising this calculation is.

So how would you do it? Well, the key is to figure out how to put it into a tree. So I’m not going to draw the picture because it gets too complicated, but basically what you want to know is how valuable–remember the tree in backward induction? It was, take the thing at the end and then figure out by propagating the values backwards.

So if I have black and red here, and I’ve got 1 black card and no red cards, the value to me of that is what? I’m going to win 1 dollar for sure. If there’s only 1 black card left in the deck and no red cards I know I’m going to play to the end and get 1 dollar, and obviously the value of 2 black cards and no red cards is 2 dollars etcetera. And I also know, what’s the value to me of no black cards? You know at every stage what’s left in the deck because you’ve seen what came up before. So if you’re at the very end of the deck with no black cards and 1 red card what’s the value of that position to you?

Student: 0.

Professor John Geanakoplos: 0, not minus 1, 0, why is that? Because you’re going to quit, you’re not going to play and that’s the critical step, seeing that this is 0. Someone said minus 1.

That difference between 0 and minus 1, that’s the whole heart of the thing. So 0 and 2, the value of that is also 0. You’re just going to quit.

So what in general is the value? What is the value V(B, R) to you if there are B black cards and R red cards left? Well, a crucial step is that you can choose to quit by not playing. So this is the value from then on to you, with B black cards left and R red cards left. You could get 0 by quitting, or you could draw a card.

Now, what happens to you if you draw a card? Well, with probability B over (B + R) you win 1 dollar, right, but then you move on to the new deck, so I write 1 plus V of (B - 1, R). And with probability R over (B + R) you drew a red card, so that’s minus 1 plus V of (B, R - 1), the deck that has 1 less red card, and that’s it.

You either decide to stop or if you’ve decided to draw a card you know what the chances of getting a black card are. You look at the black cards, 26. I’m down there. It’s 21 out of 47. It sounds horrible, 21 out of 47, I win 1 dollar. 26 out of 47 I lose 1 dollar. So the immediate draw is terrible, but if I get a black card I move to this situation, and if I get a red card I move to this situation. So do you agree with me that that’s what the value’s going to be? Are there any questions?
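
(A Python sketch of that backward induction; the recursion is exactly the formula on the board, and it mirrors the spreadsheet discussed next.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def V(B, R):
    """Expected profit with B black and R red cards left, playing optimally."""
    if B == 0:
        return 0.0       # only red cards left: quit immediately
    if R == 0:
        return float(B)  # only black cards left: play them all out
    n = B + R
    draw = (B / n) * (1 + V(B - 1, R)) + (R / n) * (-1 + V(B, R - 1))
    return max(0.0, draw)  # you can always quit and lock in 0

print(V(1, 1))    # 0.5: the even 1-and-1 deck is already favorable
print(V(1, 2))    # 0.0: 1 black against 2 red is still worth playing
print(V(26, 26))  # about 2.62, the "2.6 dollars" quoted below
print(V(21, 26))  # still positive: draw even after five straight blacks
```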

This is a critical formula, a critical spot. Does everyone–Sophia you’re now in trouble. Somebody came and said hello to me after class and now I know a name, so does this formula make sense?

Student: Yes.

Professor John Geanakoplos: Kathleen, yes? Is it Katherine or Kathleen?

Student: Katherine.

Professor John Geanakoplos: Katherine, okay. Katherine, so you agree with this formula, right? But this formula’s the key. It’s just like our tree.

Once you know what the values are down here you can always go backwards and figure out the value here. So what is the tree? The tree is going to have–well, I’m going to do it on a computer. So now we can just do this on a computer. I hope I don’t have to do that. So you have this, by the way, it’s on the web. Oh no! Why are all these–oh okay, there aren’t question marks. So here it is.

So you can see that on this, I can do it with this, on this thing it’s–by the way, I did my own spreadsheet and an undergraduate last year thought it was so messy that she just redid it for me. So this is her doing. It looks much better than I did. So anyway, here are the number of cards. This is what the number of black cards left. This is the number of red cards left, and then when you go to the corresponding coordinate like this one, this is the value of the game when you have 1 black card and 1 red card.

Even though there’s an even deck it’s a favorable game to you. Why is that? Because you draw the first card, if it’s black, which happens with probability 50 percent, you win 1 dollar. So you’ve got half a dollar. If it’s red, which happens with 50 percent probability, you draw the next one and so you end up with 0. And by the way, if you got a black the first time, obviously you stop.

So you get a 50 percent chance of 1 dollar and stopping, or a 50 percent chance of going to the end and getting nothing, so its value is half a dollar, but that’s a bad way of calculating that number. She’s got a much better way of calculating it.

So what she said is if you had no red cards and only black cards the value is to go to the end of the deck and just win them all. So if you’ve got only red cards in the deck and the top line is no black cards obviously you should quit right away. So we’ve got this first thing trivially done.

Now, how do you figure out this thing? Well, if you look at the formula up there it’s just the formula we wrote, which is you could quit if you wanted to. So you have to take the max of 0 and what you get if you go on and draw–I can’t read what’s written there–which is the probability of getting a red over the total number of cards times losing 1 dollar, plus you then move. If you drew red then you go to–what did she do?

Student: <> the value.

Professor John Geanakoplos: C4, oh C4 is here, right. So sorry, I was getting confused with the cards. So C4 is this square. That’s the value in here. So C4 says if you draw a red card the first time, which happens with this probability, the number of reds over the total number of cards, you lose 1 dollar and then you move to the position C4, which is one back here, the one where you’ve got 1 less red card and just a black card left.

On the other hand, you could have drawn the first time, not the red one but the black one divided by the total number of cards, and you would have won 1 dollar, but then you would have moved to the position where you had 1 less black card, which is D3, which is up here.

So instead of doing the whole game, she says, half the time you win, but then you move over to here and get that value. Half the time you lose, and then you lose 1 dollar, and then you move over to here. So she’s done that.

That same formula appears in every box, so all you had to do was just copy it. It’s max of 0, and then the chance that you’re going to lose 1 dollar, which is the number here of reds over the totals times minus 1 dollar and then moving over to here. Or you could get the probability of winning 1 dollar with the black cards. So you win 1 dollar, but then you move up to here. So it’s very simple. So she’s done it.

And notice that although the deck is even at 1 card each, so it sounds like a fair game, it’s not. It’s a favorable game because you have an option, so you all understood that, but the thing is the option is much more valuable than you think. So let’s see what the value of the game is. It’s when you had 26 black cards and 26 red cards, so we have to go way over to here. Sorry. Where am I going? That’s not the right answer.

Here it is, 26, 26, the value of the game is 2.6 dollars, not just half a dollar. You wanted to quit, wherever you are, at half a dollar. He’s not looking up anymore. So he wanted to quit after the first draw, but it’s much better than that. Now the shocking thing, though, is this: with 26, 26 you have a favorable game and you should draw.

If you didn’t draw it’d be worth 0. So obviously you’re supposed to draw here. If you get a black card you’re going to go here. So here you’re down. You’ve gotten a card and you’ve got one more, you know, it wasn’t that likely you were going to get black, but if you did–50 percent chance you go here. So you win 1 dollar and now you’re at this position.

Now, if you’re supposed to stop at that point, what would you have done? If you were supposed to stop at that point you would have had value 0. So the fact that that number is positive is telling you even when the deck is against you, 25, you can’t see it, it’s 25 blacks and still 26 reds it’s still a favorable game. You should draw a card and if by some miracle you won 1 dollar you would have moved to here.

So now you’re 26 red and 24 black, but the game is still favorable. You should draw another card, and if you win again you’re here. Now you’ve gone one, two, three times you’ve drawn blacks. You should still draw another one. Four times getting blacks you should still draw another one. Five consecutive black draws the deck is now 26 against you and 21 in your favor. You should draw again. The game is still slightly favorable, and it just seems shocking that could be the case, but this is the proof that it’s the case.

So anyway, that illustrates the power of the option of being able to continue. And you’re going to work out the low numbers, 2 and 3, in the homework, and then you’ll see very clearly why it is that this option is just so powerful. It’s uncannily strong. So are there any questions about this? Yes?

Student: I’m still a little bit confused. I know that the option value is positive, but the probability actually is a little bit against you. So especially after five wins why do you want another draw, because yes, the option value is a little bit positive, but the probability is still against you?

Professor John Geanakoplos: This isn’t the option value. This is the value of playing. So it says the option value is more important than the fact that the cards are against you. So he’s asking, my TA is asking, how could it possibly be that the deck is now 26 red and 21 black totally against you and according to this calculation you should still draw? He can’t see the advantage in drawing because the odds are pretty high you’re going to get a red card next time.

Well, that’s true, you’re probably going to get a red card next time, but the thing is your downside is limited. Here’s a way of thinking about it. You’re up 5 dollars. You can never lose more than 5 dollars from that point on because you can always play to the end of the deck, right, which means at worst you lose the five back that you already won.

So there’s a downside–the downside of losing is limited here. On the other hand there’s a big upside to you. You might by some miracle draw 10 consecutive black ones at that point and then you could quit. So your upside is much bigger than your downside. Now, the upside is less probable than the downside so it’s not so obvious which is going to be bigger. Is the option value more important or is the fact that the deck is against you more important? It would be impossible to intuit the answer, but we don’t have to intuit it. We just proved it. We solved for the optimal strategy. Are there any other questions? It’s quite amazing, right, this–I’m going to pause for a second. Yes?

Student: How do you calculate the profit at a given position?

Professor John Geanakoplos: That’s what we just did. So let’s try it again.

Student: It may not be a total profit.

Professor John Geanakoplos: So what we did, this number V is the expected profit you’re going to make if you start with B black cards and R red cards. And now the intuitive mind figures that if B is less than R you’ve got an unfavorable deck and you should just quit, but that’s not the case.

You can figure out what the profit is, how, by doing backward induction. You couldn’t tell what the value of this bond is here with all these defaults and stuff until you started computing backwards until you got to here, so the same way here. We know at the edges it’s very obvious when all the cards are black or all the cards are red, that’s up here, it’s obvious what the value is. But if you have a position here you can figure out what the value is of being in that position of 1 and 1. You could quit and be 0, or you could say, what are my chances of getting a black card and winning 1 dollar?

If I get a black card then I move to this position, but I already figured out this position’s value because I’m doing backward induction, right? That’s got one less–sorry, if I draw a black card I go this way. It’s got 1 less black card and we already know that value of that position is 0.

So to figure out the value of this position I know the chance of getting a black card, then I’m going to end up in that position which is value 0. I won’t draw any more. Or I’m going to get a red card and then I’m going to move to this position over here whose value I’ve already computed. So that gives me the value here. How do I figure out the value here? Well, it’s now 2 reds and 1 black. So this looks really bad. Actually this position, the value of this I happen to know is 0.

How could it be, with 2 red cards and only 1 black card actually the value of the position is 0? Well, what do I do? The chances of getting a black card the first time are 1 third. So V of (1, 2) is going to be 1 third of getting a black card plus then I go to here which is no black cards left and just red cards, which obviously is 0, plus 2 thirds. 2 thirds of the time I get a red card and lose 1 dollar, but then I’m going to move to here with 1 red card where I have V of (1 and 1), which I’ve already figured out the answer to, right?

This V of (1 and 1) has value a half. So therefore V of (1 and 2) is going to have 2 thirds times (minus 1 + V of (1, 1)) in it, right? I just drew a black card so it’s no longer–no, I drew a red card, so it’s (1 and 1). So I started with 1 black and 2 reds. 1 third of the time I get a black card, 2 thirds of the time I get a red card, but after getting the red card the position is now 1 black and 1 red. The red card disappeared. That’s over here.

So I get a black card, I move to here. If I get a red card I move to there, but V of (1, 1) is worth a half, so that’s equal to 1 third times (1 + 0) + 2 thirds times (minus 1 + 1 half), which equals 1 third - 1 third which equals 0.

So starting at this point you’ve got 1 black card and 2 red cards. It looks horrible to pick a card, 2 thirds of the time you’re going to get the wrong card, but you still have a position that’s actually worth 0, no worse than quitting, because if you get that black card, which wins, you stop. If you get a red card you’re now in a position with an equal deck and that’s favorable for you, because if you get another black card you stop, and if you get a red card you keep playing until the end. So that’s it.

So how can you do this by backward induction? You have the stuff on the edges and then you solve for all these things along the side here, and having done that now I can solve for this one because I’ve got up and to the left. Now I do that row and then I can do this whole row, and then I can just by backward induction do the whole thing, and the computer does that instantly. So it figures out the value of every single node and it’s shocking what the answer is. So are there any other questions about this? Ben, do you– so…

Student: Yeah, I figured it out. I tell you initially I’m thinking about it. Well, maybe this value is actually an option value. For example if you choose to play this game and after you win or you lose then you’ll get an option to continue the game, and so I separated that option value to this.

Professor John Geanakoplos: Well, that is your option. The option is always to keep playing or to stop, but the value I wrote down is the value of the game to you, of being able to play the whole game however you want. So now let’s do another example. Yes?

Student: Would you call somebody rational who doesn’t play the game up to that point that you showed me, the point .05 <>…

Professor John Geanakoplos: Right, so if you got…

Student: My question is, is it preference to risk or is it actually the rationality of…

Professor John Geanakoplos: Right, so this is going to become very important very shortly. So his question is I just proved if you can call that a proof by computer, the computer proved that even if you got 5 blacks in a row you should still draw another card.

Of course things are quite risky now because there’s a very good chance you’re going to lose on that very next card. So he’s saying if you’re risk averse maybe you would stop there, and how can you distinguish somebody who’s risk averse from somebody who’s just dumb and can’t make the calculation.

So that’s going to be a question we’re going to take up in the very next class. But I would say that it’s usually because people are dumb and can’t make the calculation. So they just don’t realize how favorable the situation is they’re in by having this option to be able to play to the end, to stop when they want to stop.

Chapter 5. The Optimal Marriage Problem [01:06:47]

So let’s just do one more example. Suppose that you are undergraduates and you want to get married. You’ve been told that’s a good idea, and you–it’s going to be a very sexist thing, but anyway this is also a game I invented which turned out not to be as original as I thought. So I call this the optimal marriage problem. So let’s say you knew you were going to meet 1,000 women.

I’m telling it from the guy’s point of view. You’re going to meet 1,000 women, and you can’t tell each woman’s suitability until you meet her and talk to her; each woman’s suitability is uniformly distributed on (0, 1). So what do I mean by that? I mean you meet her, you talk to her, you get to know her, and before you met her you have no idea how suitable she’s going to be. After you’ve talked to her you understand how suitable she is. The best is 1. The worst is 0, and it could be a draw anywhere between 0 and 1.

Before you meet her you have no idea. After you meet her you know exactly how suitable she is, and there are going to be 1,000 of them that you could meet. The problem is that after you’ve talked to a woman you can marry her then, or you can move on, but once you’ve moved on you can never go back to her.

So you understand the problem. The problem is that let’s say the first woman is .95 or .90. You think, gosh how suitable. This is a great match, but I’ve got 999 more women to go. Maybe I’ll do better, and then you get 0s from then on, and so you’ve missed your .90, and so you’re going to end up marrying the last one who’s maybe a 0 for you. That doesn’t mean she’s a 0, just for you a 0. I’m trying.

Anyway, so what should your optimal strategy be and are you playing the optimal strategy. So what do you think, just intuitively, what’s the optimal strategy? Of course we’re going to do it by backward induction, but what do you think it’s going to look like, the optimal strategy? Yes?

Student: The further you get into the game the less selective you should be.

Professor John Geanakoplos: Right, that’s what’s going to happen. We’re going to prove this, but he’s exactly picked–he said, you set a threshold here at the beginning. You’ll marry her if she’s above some number. You keep to that threshold for a while. Then you haven’t married anyone and you’d say, oh my god, I’m running out of women and then your standards just collapse. Desperation sets in.

So that’s absolutely right, but the only interesting thing is to figure out how high the standard should be. So how high do you think it is at the beginning? What would you say the number is at the beginning? Now, let me give you a hint. If you divide up–here’s 1 and there’s 1,000 women, so here’s 0. So they’re randomly picked. So if you could look at all the women, and pick out the most suitable one, what would her suitability be?

Well, her suitability would be–so the top one on average will be something like 1,000 over 1,001. This is a famous problem. If you take N numbers picked randomly, uniformly on (0, 1), the top one on average is going to be N over (N + 1); if there are 1,000 women, it's 1,000 over 1,001. The second from the top is going to be 999 over 1,001.

So this very standard statistical result was actually derived by a former Yale professor in World War II. The Americans captured German tanks, which had serial numbers on them: the first tank was number 1, the second tank was number 2, the third tank they made was number 3. So we captured a bunch of them, and then we had to guess how many tanks they made. Anyhow, it's related to this idea that if the draws are uniformly distributed on (0, 1), the top one is going to be on average 1,000 over 1,001, the second 999 over 1,001, etcetera.
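The order-statistics fact he is quoting, written out (the notation is added here; it is not on the board): if you draw N numbers independently and uniformly on (0, 1), the k-th highest has expected value

\[
E\big[\text{$k$-th highest of } N\big] = \frac{N + 1 - k}{N + 1},
\qquad N = 1000:\ \ \frac{1000}{1001},\ \frac{999}{1001},\ \ldots
\]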

So what standard would you set for the first one? Right? You have to set some threshold here. Here’s one and here’s 0. By the end you’ll take the last woman you’ve got–you’re going to take her no matter what. So what should the threshold be? Well, it’s hard to tell. We can’t do it except by backward induction. So what would you guess?

Student: 1 [inaudible].

Professor John Geanakoplos: 1? Well, if you set the threshold at 1 you'll never take anyone, because the odds of getting exactly 1 are 0. So what would you guess?

Student: [inaudible]

Professor John Geanakoplos: So you’d set the threshold this high, 1,000 over 1,001. So that means you’re expecting to get as good a woman as if you could go to the very end and look at all of them, you never make a mistake.

I told you, there’s a chance you’ll make a mistake. The first one’s the best. It doesn’t quite come to your threshold and all the rest are worse then you end up with a disaster here. So you have a chance of not doing that well, so you’re setting too high a standard here, because you have a very good chance of saying no to all these women and then ending up with what’s–right? However, you’re on the right track.

So amazingly, this is the answer; this is the threshold. You should set the threshold at where you expect the second-highest match to be, and that is why there are so many novels about the other woman: because if you're playing optimally you should end up with the second-best woman, and there should be one other woman that at the end of your life you regret you didn't wait for, but only one other woman.

So anyway, I just want to prove this to you in the same way we proved it before, by solving backward for the optimal strategy. We can do this by backward induction; we know exactly what to do. It's one thing to say what you should do; it's another thing to prove that's what you should do, and I'm going to prove it now. So you can see by backward induction how easy it is to do these things. I'm going to have to take four minutes. If you can hang out for four minutes we'll get this.

So what happens with 2 women left? What's V of 2? What should you do? What should your threshold be with 2 women left? So here's the threshold. The threshold for 1 woman is 0: if it's the last woman there, whatever she is, that's it, you might as well marry her. It can't be negative. And then the value–unlike Herodotus–V of 1 is going to be a half, right, because if there's 1 woman left you're going to take her no matter what, and on average you'll get a half.

So the question is now, what’s the threshold when there are 2 women left and what’s your expected payoff? So what’s the threshold if there are 2 women left? You see the second to last woman, you talk to her, you find out how good the match is. You should take her if the match is above what?

Student: 1 over 1,001?

Professor John Geanakoplos: No, there’s only 1 woman left after her, so. So a half, right, because if you don’t take her you’re going to go to the last woman and on an average you’re going to get a half, so there’s no point in taking someone whose match is less than a half when the very next step you’re, on average, you’re going to get a half. So your threshold is a half. So what’s your expected quality of the match?

Well, with probability a half she's going to be above a half, and if she's above a half she'll be halfway between a half and 1, which is 3 quarters. And with probability a half she's going to be below a half, so you'll pass on her and go to the last one, and on average that's a half. So it's a half times 3 quarters, which is 3 eighths, plus a half times a half, which is 1 quarter, and 3 eighths + 1 quarter is 5 eighths. Now, what if there are 3 women? What should the threshold be with 3 women left, and what's the value of 3?
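Written out as an equation (notation added here for clarity, not on the board):

\[
V(2) \;=\; \tfrac{1}{2}\cdot\frac{1+\tfrac{1}{2}}{2} \;+\; \tfrac{1}{2}\cdot V(1)
\;=\; \tfrac{1}{2}\cdot\tfrac{3}{4} + \tfrac{1}{2}\cdot\tfrac{1}{2}
\;=\; \tfrac{3}{8} + \tfrac{1}{4} \;=\; \tfrac{5}{8}.
\]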

Student: 5 eighths.

Professor John Geanakoplos: So the threshold should be V of 2, which is 5 eighths, and what’s the value?

The value’s going to be, what’s the chance that you take the one you just meet? Well, the odds that she’s above a half [correction: above V(2)] is 3 eighths, so it’s 3 eighths times the average of 1, you know, half way between, so he’s above 5 eights, so she’s somewhere between 1 and 5 eighths.

So that’s going to be (1 + 5 eighths) over 2, plus 5 eighths of the time you pass on her and then you get 5 eighths. So that equals, if we just do that a little bit more generally, it’s 1 - V (2)–So that’s (1 - V (2)) times (1 + V (2)) divided by 2 + V (2) times V (2). So that’s the formula.

So in general, for V(t) you set the threshold at V(t-1). Then with probability 1 - V(t-1) she's above the threshold and you get (1 + V(t-1)) divided by 2 on average, and with probability V(t-1) you pass on her and go on to the next one, so you get just V(t-1).

So that’s just a formula. V(t) equals some function of V(t-1). So you can program that into a computer. So I’m ending now with this one picture. That’s the end of it. Sorry. I know I’ve gone over, but this is the last picture. It’ll only take a second. So 0, yield curve optimal marriage, so here it is, with one woman your value–oh shit, sorry.

With 1 woman the value is a half. So the number on the left is: with 1 match to go the value's a half, with 2 it's 5 eighths, with 3 it's what we just computed. So you can read it off for however many women you want.

Now, what I’ve done on the right number, is this N - 1 over N + 1. So that’s the second best woman how good she’d be on average for you, and as you go down further and further you see these numbers are getting to be the same. So this number and this number’s practically the same and if you go down to the very bottom you’ll see they’re identical. Up to an incredible number of decimal places these two numbers are the same.

So if there are enough women you're going to get exactly the second best, and it's going to be the problem of the other woman. But anyway, the point of all this was to illustrate just how powerful the option is. It's as if you could go to the very end and pick out the second-best one, even though you have to meet them sequentially.

[end of transcript]
