chem-125a: Freshman Organic Chemistry I
Lecture 37 - Potential Energy Surfaces, Transition State Theory and Reaction Mechanism [December 10, 2008]
Chapter 1. The Boltzmann Factor and Entropy against Traditional Views on Society [00:00:00]
Professor Michael McBride: Okay, so last time we were talking about exponents, and how statistics gives rise to them. And we looked last time at the Boltzmann factor, that e^(-H/RT), or e^(-ΔH/RT), which tells you the temperature dependence of the equilibrium constant. It gives you the equilibrium constant; remember, at room temperature it's 10 to the (3/4)ΔH, with ΔH in kilocalories per mole. Right? And we saw last time that that comes from counting arrangements of a fixed number of energy bits. And the important thing is that they're random, that all of them are equally likely. [Technical adjustment] That has interesting philosophical implications, I think. Do you remember at the start of the course we were talking about types of authority, and in particular, remember, in Hamlet, Act V, Scene II, he says, "There's a divinity that shapes our ends." So this was a traditional view in society. But then this guy came along, in 1859. Do you know who that is? Charles Darwin, with On the Origin of Species. So there was conflict between the religious fundamentalists and Darwinists that continues to this day.
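[Editor's note: the "3/4 rule" the professor quotes can be checked numerically. This sketch assumes R = 1.987×10⁻³ kcal/(mol·K) and T = 298 K, which the lecture does not state explicitly; the exact per-kilocalorie exponent comes out ≈ 0.73, the lecturer's "3/4".]

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K) -- assumed value
T = 298.0      # room temperature, K -- assumed value

def boltzmann_K(delta_H):
    """Equilibrium constant from the Boltzmann factor; delta_H in kcal/mol."""
    return math.exp(-delta_H / (R * T))

# Express the exponent in powers of ten per kilocalorie:
per_kcal = 1.0 / (math.log(10) * R * T)

print(per_kcal)           # ~0.73, i.e. the lecture's "3/4"
print(boltzmann_K(5.5))   # compare with 10**(-0.75 * 5.5)
```

So e^(-ΔH/RT) at room temperature is, to a good approximation, 10^(-(3/4)ΔH) with ΔH in kcal/mol.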
But I think they really miss the point, because Darwin had the idea that things were developing and getting better all the time. Right? So there's a certain compatibility between that and the sensibilities of the fundamentalists. But also, not long after, this guy came along, Boltzmann, and he said everything is driven just by randomness. There's no goal for anything. That should be the real conflict between religious fundamentalists and science: Boltzmann. But so few people understand what Boltzmann did that there's no reason for the conflict. At any rate, we saw where the Boltzmann factor came from, in terms of statistics. Then there's the entropy factor, which we saw last time. And notice that, as I've written it here, in the top equation, the one with Boltzmann, there's an R, and the one at the bottom has a k. Sometimes you see R's and sometimes you see k's. They're actually the same thing. The k is if you measure the energy per molecule, and R is if you measure the energy per mole. The only difference is that Avogadro's number is included in one of them and not the other. They're really the same thing, these two exponentials. But at any rate, we can cancel the T's and know that that e^(S/k) is a counting number, W; the number of molecular structures being grouped. And we saw that at the end of last time in terms of gauche- versus anti-butane. And remember that S = k ln W — which is the same equation, just taking the log of each side — is on Boltzmann's tomb.
But let's look at one more example, in terms of cyclohexane. Do you remember the cyclohexane conformers — [technical adjustment] — remember the cyclohexane conformers: if it's a chair, it's very rigid. Why is it rigid? Do you remember? Why is it hard to twist? With respect to the molecular model, why is it hard to twist? Eric? Why does this particular model, in which rotation around bonds is completely free, why is it hard to rotate to go like this? Why is there a click?
Professor Michael McBride: Pardon me?
Student: It has the optimal bending angles, 109.5.
Professor Michael McBride: Ah, it's the bending angle that has to flatten out when you do this. Remember we talked about that. But anyhow this one is very rigid. But if you click it, into the boat, then it's actually a twist-boat and it turns to all sorts of structures ever so easily. So there are many more accessible structures for a boat, than there are for a chair. The chair, you have this one and you have this one. Right? But for a boat, or a twisted, or the "flexible" form as it's often called, you have lots and lots of accessible structures. What's that going to mean in terms of entropy? That there are many more structures you're going to count for the flexible form than you count for the chair form, the rigid form. Okay? So if we look at — this is a picture of the energy with the chair at the minimum of energy. The twist-boat is 5.5 kilocalories above, but it has very small barriers among different twist-boat forms. So we would say there are few structures that are chair cyclohexane, but many structures that are the boat. And furthermore, if we look at it in terms of quantum mechanics, which is the real way to do it, we notice that when there's a steep, stiff valley, there are very few energy levels that are accessible. Right? But if the barriers are low, then there are many, many quantum levels. So you count many more quantum levels for the flexible form than you do for the chair form.
What that means is that, from either the red classical view or the green quantum view, there's a big statistical factor, an entropy factor, which turns out to be about 7-fold favoring the twist-boat over the chair. The chair is still favored; it's favored because of energy, that Boltzmann factor, e^(-ΔH/kT). But this reduces the bias in favor of the chair — which would otherwise be 10^(3/4 × 5.5), about 14,000; right? — it reduces it by that factor of 7. So instead of 14,000, the equilibrium constant is only about 2,000. Okay, so that's — but so we've been talking about entropy here in statistical terms. But remember earlier we talked about the fact that physical chemists actually measure entropy as something experimental, not just a counting exercise, and they do it beginning with something that's as close as possible to zero Kelvin. Because if you have a perfectly ordered crystal, at zero Kelvin, then there's only one structure you're talking about. Right? So the entropy is R times the log of one, which is zero. Okay? So it's zero. But then as you warm it up, as you put heat in, you measure the increase in this thing called entropy by how much heat you put in, at each temperature, as you warm up. And we talked about that already. And you'll notice qualitatively how this works is that floppy molecules, like the flexible form of cyclohexane, that have many closely spaced energy levels, absorb more energy, and they absorb it at lower temperatures, and thus they have more entropy when they get warmed. So whether you do it experimentally, by measuring the heat that gets absorbed, or whether you do it by counting quantum states, you get the same result; that a floppy molecule is favored in terms of entropy. We talked about that when we were talking about the barrier to ethane rotation.
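[Editor's note: the chair/twist-boat numbers quoted above can be reproduced with the 3/4 rule of thumb; the 5.5 kcal/mol gap and the ~7-fold entropy factor are the lecture's values.]

```python
delta_H = 5.5        # kcal/mol, twist-boat above chair (lecture value)
entropy_factor = 7   # ~7x more accessible states for the twist-boat (lecture value)

enthalpy_bias = 10 ** (0.75 * delta_H)   # bias favoring chair, ~14,000
K = enthalpy_bias / entropy_factor       # net equilibrium constant, ~2,000

print(round(enthalpy_bias), round(K))
```

Entropy does not overturn the chair's preference here; it just trims a factor-14,000 bias down to about 2,000.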
Chapter 2. The Statistical Basis of the Law of Mass Action [00:07:41]
Okay, so we've looked at the Boltzmann factor and this entropy factor, that come from counting. Notice that in the second case, truly we should be counting not structures but quantum states; how many states are there that count? And if you want to be really, really picky, you say the weighted number of quantum states; because ones that are higher in energy are not as populated as ones that are lower in energy. But we don't have to worry about that now. At any rate, those two factors, taken together, are what gives the equilibrium constant, which as we said before is e^(-ΔG/RT), free energy; that means it's both enthalpy, heat, and entropy, this counting thing. But there's more to it than that. There's also the statistics involved in the Law of Mass Action. And that comes from counting molecules per volume; that is, concentration. So you can count energy, you can count quantum states, and you can count molecules per volume. This is very easily understood.
The way it developed experimentally is that in the late 1700s there was an attempt to assemble a hierarchy of affinities; that some elements had more affinity than others. So if you had a compound that involved one element, and mixed it with an element that had higher affinity, that one with higher affinity would take away partners from the one with lower affinity. But then it was found, in the early 1800s, that it doesn't always go in that direction, because concentrations can change and make things go the wrong way, toward something that has a very low affinity, if it has very high concentration. And by the middle 1800s, this was formulated as an equilibrium constant, K, which was understood as the balance between forward reactions and reverse reactions. You've seen that k1/k-1 is the equilibrium constant. So concentration comes into it, in this thing called the Law of Mass Action. If you have two A's that can go to A2, then you have this equilibrium constant, written as the concentration of A2 divided by the square of the concentration of A.
Now why the exponent? Where does that exponent two come from? And sometimes it's three. Remember? Again, it's statistical and very, very easy to see. Here I used Excel to place a bunch of circles randomly on a two-dimensional plot. So it put 50 of these circles down. And we're going to count as dimers things where they overlap, where they touch one another. Okay? So there's — if you have 50 particles, this particular realization of randomness gave one dimer. But suppose we add another 50. Right? Now, in addition to the original dimer, there are 8 more. Okay? And now suppose I add another 50; now there are 19 dimers. Or another 50; now there are 35 dimers. Or another 50, and now there are 59 dimers. Now let's make a plot of how many dimers there are, compared to how many particles there are, undimerized ones. So there's the number of particles and the number of dimers, plotted; the data there. And you see what it is. It's a parabola; that the number of dimers is proportional to the square of the number of monomers. Now why should that be so? It's because as you increase the concentration, you increase the number of units there; obviously. That's by definition. But you also increase the fraction of the units that have a near neighbor, that are touching. So not only do you increase the number that could be dimers, you increase the fraction that are dimers. So there are two ways in which increasing the concentration increases the dimer. Right? So it's a square. That's where the exponent comes in. It's purely statistical, for random distribution. And then, of course, energy can enter in, entropy can enter in. That will change the K. But the exponent on the concentration — right? — is due just to this counting procedure. So you increase both the number and the fraction that are dimers.
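[Editor's note: the Excel experiment described above is easy to redo in code. This sketch scatters points in a unit square and counts overlapping pairs as "dimers"; the touching distance of 0.016 is an arbitrary choice tuned so that 50 points give about one dimer, roughly matching the lecture's picture. Averaged over many trials, going from 50 to 250 particles multiplies the dimer count by about 25 = (250/50)², not 5 — the statistical origin of the squared concentration.]

```python
import random
import itertools

def count_dimers(n, touch=0.016, rng=None):
    """Scatter n points uniformly in the unit square and count
    pairs closer than `touch` (two circles overlapping)."""
    rng = rng or random
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    return sum(1 for (x1, y1), (x2, y2) in itertools.combinations(pts, 2)
               if (x1 - x2) ** 2 + (y1 - y2) ** 2 < touch ** 2)

def mean_dimers(n, trials=200, seed=0):
    """Average dimer count over many random scatterings."""
    rng = random.Random(seed)
    return sum(count_dimers(n, rng=rng) for _ in range(trials)) / trials

m50 = mean_dimers(50)
m250 = mean_dimers(250)
print(m50, m250, m250 / m50)   # ratio is ~25, the square of 5
```

A single realization is noisy (the lecture's 50-point frame happened to give one dimer); the averaging makes the parabola visible.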
Okay, so now we know about equilibrium, statistics and exponents. We have particle distribution, that's the law of mass action; that's that square we just talked about. We have energy distribution, ΔH, the Boltzmann factor. And we have counting of quantum states, which has to do with entropy. Okay, so free energy determines what equilibrium is, what can happen. Right? And that ΔG remember includes both entropy and energy. But it doesn't tell you at all how quickly it will happen, what the kinetics of the reaction are. So for that purpose we're going to try visualizing reactions, to see if we can get a picture of what determines rates. And we're going to look at two kinds of things. We're going to look at classical trajectories, and at the potential energy surface, and collective concepts.
Chapter 3. Understanding Reaction Rates: The Potential Energy Surface and Collective Energies [00:13:13]
So first let's start with the potential energy surface. Here's a very simple potential energy surface. It's just a two-dimensional diagram. So you've seen it before. We plot the distance between two atoms, A-B, and the potential energy as a function of the distance. So this is a Morse-type curve. Okay, so we put a ball on it, and let that ball roll to map out different distances, as it moves — that's the horizontal axis — and the energy corresponding to that distance. So it rolls back and forth — right? — mapping out the change in geometry horizontally, and the change in [potential] energy vertically. Now we're going to look at a three-dimensional surface — or actually two geometrical dimensions, plus a third dimension coming out at you, the contours, which is the potential energy — of three particles now. But in order to describe the position of three particles, you need three distances; one to two, two to three, and three to one. Right? So that's too many dimensions to plot this way. So we're going to assume that they're all on a line. So we only have the distance between one and two, and the distance between two and three. So two coordinates will tell you what the arrangement of those three atoms is, if they're in just one dimension.
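[Editor's note: a Morse-type curve like the one described is simple to write down. The parameters below are assumed, representative values in the ballpark of H2 (well depth ~110 kcal/mol, bond length 0.741 Å); the lecture does not give numbers for this diagram.]

```python
import math

# Assumed, representative Morse parameters (roughly H2):
De = 109.5   # kcal/mol, well depth
re = 0.741   # angstroms, equilibrium bond length
a  = 1.94    # 1/angstrom, controls the stiffness of the well

def morse(r):
    """Morse potential: zero at r = re, rising to De as the bond breaks."""
    return De * (1.0 - math.exp(-a * (r - re))) ** 2

print(morse(re), morse(1.2), morse(10.0))  # bottom of well, part-way up, ~De
```

The steep inner wall (atoms scrunched together) and the flat plateau at large r (bond fully broken) are exactly the features the rolling ball maps out.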
Okay, so a linear triatomic, A-B-C. On the horizontal distance we have the distance A to B, on the vertical axis, B to C, and the contours tell us how low the energy is; the darker, the lower the energy. Okay, so this specifies the structure. So the position of a point specifies not the position of an atom but the position of a set of atoms. Right? So as we move one point around, we're moving two atoms, or changing two distances. But we can denote it with one point here, in two geometric dimensions. Okay, so let's look at several regions. Let's look at that valley on the right. What is that valley? When a point is in that valley, what's it describing? It says that the distance between A and B is large. Right? So A is far away from B. What does it say about the BC distance? It's short. It's the normal distance for a BC molecule. So what is it; when a point is in that valley, what geometry is it describing? It's describing a BC molecule, with an A atom at a great distance from it, depending on how far out we go to the right. Does everybody see that? One position of the point tells us where both — where all three atoms are, as long as they're on a line. Right? Now how about up there, on that plateau? What's happening there? What's the physical system? Maria, what do you say?
Student: All the atoms are separate from each other.
Professor Michael McBride: Yeah, the atoms are all separate from one another. C is far from B and A is far from B. So there the atoms are separated. And it's high in energy. You're not surprised that it takes energy to pull them apart. And there is a cliff. Right? If this were a hiking map, that would be a cliff. What does that cliff mean? Why is there a cliff there? Greg, what do you say? Why is there a cliff shown there? What geometry do points in that region describe?
Student: Closer together.
Professor Michael McBride: What's close together? Is A close to B?
Professor Michael McBride: Is A close to B? I couldn't hear.
Student: Closer maybe to B.
Professor Michael McBride: No. Here's the distance A-B. It's way out there, where that cliff is. Right? So A's far away. But what's happening? B and C are getting very close to one another. So that's a collision between B and C; much shorter than their bond distance. Right? They're being scrunched together in that region, and that's why the energy goes up very rapidly, because they're running into one another; closer than they like to be. There's a ridge — right? — as you go down there. If you were hiking, that would be a ridge. And there's a special point on the ridge, which is marked by the red cross. What would you call that if you were hiking? If you were in the valley that's labeled yellow, and you were thinking of getting to the other valley, what would you call that cross, if you were hiking?
Student: A peak.
Professor Michael McBride: Not a peak. Right? It comes — it gets high as it gets close to me here, and it gets high out on the plateau. It's actually low, as we go along the line, but it's high as we go perpendicular to the line. What would you call that if you were hiking? Some of you must hike.
Student: A pass.
Professor Michael McBride: Pardon me?
Student: A pass.
Professor Michael McBride: It's a pass, between one valley and the other valley. That's the way you would go if you were hiking, right? So you don't have to climb so high. Okay, so it's a pass. Or in chemistry we call it a transition state, or a transition structure. And it's like a potato chip. Right? It's a minimum in one direction, along the ridge, but it's a maximum in the path between the two valleys. Okay? So it's a very special point.
Okay, now let's look at this a little closer. Suppose we sliced this surface, we took a knife and sliced it along the red line there, horizontally, and then folded it back to look at the cross-section. Does everybody see what I'm saying? What would the cross-section look like? It's exactly that one we looked at before. So that involves stretching A and B, with C so far away that it's irrelevant. Right? So it's just stretching a diatomic molecule. Or if we sliced — so that's vibration of AB, with a distant spectator of C. Okay? Now suppose we sliced it that way and pulled it back. Now what is it? It's a vibrating BC molecule, with A far away. Right? So again, by choosing points on this, we can describe any geometry of these three points we want, as long as they're on a line. But a single point is a whole structure.
Okay, now let's look at a trajectory. We roll a ball on this surface, and it rolls up toward the pass but doesn't make it and comes back like that. So that's a trajectory that was unreactive; it didn't make it into the product valley, it didn't get across the ridge. And physically what it means is that A gets closer to BC. While BC is vibrating, it's moving up and down. Right? So BC is vibrating. A comes in and then bounces off again. Right? So it was an unsuccessful attempt of A to take B away from C. Or suppose we do this one. A approaches, and now C flies away from vibrating AB. Does everybody see that? You want me to say it again? Right? So as it comes along, B and C are standing still. It's not moving vertically, right? So B and C are at their normal distance, their standard distance. A comes in, whops into it, gets really close, runs into that cliff, really close to B, because it's the distance A-B. Right? And then it starts — AB is vibrating, and C is getting far away, as it goes down the product valley. So that's a reactive trajectory, where the first trajectory was unreactive. Notice that this is classical, rolling a marble on the thing, because A was approaching a non-vibrating BC. We know from quantum mechanics that BC can't just sit there. BC vibrates. So that's not a realistic trajectory, it's a classical model of a quantum mechanical system.
Now people have made surfaces like that for real systems, like H3, one hydrogen atom attacking a hydrogen molecule and taking one of the hydrogens away. Here it is. This was drawn. This is the first surface that was drawn, that I know of, by Henry Eyring at Princeton in 1935. And it had some — it has crazy angles. And the reason it has that angle is so that a marble rolling on it will behave according to the way the masses of the thing — so that the kinetics, the kinematics, are exactly right, so that it really does do the proper classical thing. If you roll a marble on here, it will trace Newton's Laws of Motion, for this H3 system. One thing that's unrealistic about it is that there's a minimum near the transition state. There's a little lake up there. That's just an artifact that came from the equation he used to approximate the surface. It's not real. There actually is a potato chip, not a lake up there at the pass. Some people have called it Lake Eyring. Okay?
But here's a more complicated surface that was — it says it was done by C. Parr, in his Ph.D. thesis at Caltech in 1969. This is for H-H-Br; again linear, but a hydrogen atom attacking HBr to take away its hydrogen. And this describes the construction of a model based on that surface, an actual physical model on which you can roll marbles. Right? So here it is, and the dimensions — 1.4 inches corresponds to an angstrom, and so on. So they actually constructed physical models to do this. And, as it happens, we have that physical model here. Here, this is it. [Technical adjustment]
Okay, and we have some marbles. So notice what's different about this one. I'll hold it up so you can see it. There's the surface. Right? And here's a slice through one. Right? That's H2 with — this valley is H2 with a Br atom. This valley is HBr, with an H atom. Right? So we can try rolling balls and see what happens here. So here I am far away. The H atom is far away from HBr. And I do that. And it's vibrating, but it doesn't succeed. Right? I can do it a little bit more. And most reactions aren't successful. Oh close. Okay? And as it comes out, it may be vibrating when it comes out, or it may not be vibrating when it comes — oh.
What's the cross? The cross is that transition structure, right? — or the transition state. Right? So you could do every possible trajectory. You could have different velocities at which A is colliding with BC, and you can have different amounts of vibration, and different phases of vibration. I could start it going this way, or I could start it going this way. And if I did every possible one of those, then I could average over all them and see how many of them succeeded, right? In reaction. And that would allow me to predict the rate of this process. [Technical adjustment]
Okay, so people have done that kind of thing. But studying lots of random trajectories provides too much detail. No one really cares about all those individual trajectories. What you care about is what fraction succeed. And if you could do that in a simpler way than rolling it a zillion times and averaging, then that would be a better approach. So it's better to summarize this thing statistically using collective terms; not individual paths but collective terms, such as enthalpy and entropy. And you can do it in this way. Okay, you have a steepest descent path from the pass. Suppose you slice this surface with a knife perpendicular to the screen, along that path, and then fold it out to make it flat, and look at the cross-section of it. Okay, and now we'll tip it up and it looks like that. Does everybody see how it looks like that?
Okay, we sliced along the path that goes up over the transition state. Okay, so we have potential energy as a function of distance along the reaction coordinate. But of course we're going to summarize a whole bunch of things here. Nothing rolls exactly along that path. Right? So we're going to lose some of the specificity of the reaction coordinate by grouping things. We're not going to take a trajectory, but a sequence of three species. The first species is the starting material; the second species is the transition state; and the third species is the product. And now instead of having an explicit meaning for every geometry along that path, what we're going to do is associate each of those starting materials, transition state and products, with a certain enthalpy and a certain entropy; that is, a certain free energy. So we have — instead we're going to plot free energy vertically, incorporate entropy — that is, how loose these things are — as well as enthalpy there; energy. And now we just have three values of free energy: the free energy of the starting material; the free energy of the transition state; the free energy of the product.
And now we can use those — but we've lost a lot of specificity in what the reaction coordinate is. It's just a sequence now of three species. But we can now use free energy to determine what can happen. We already could do that. But how rapidly? And you do that with the theory that Eyring made up when he drew that surface in 1935, called Transition State Theory. You assume that the rate — and we talked about this before — in units per second, is 10^13 times the concentration of the transition state. So the idea is that things that are in the transition state are moving on at the rate of 10^13/second. So if you know how many are at the transition state, then you know how many are going to product; 10^13 times that, per second. Right? And you know how to do an equilibrium; we already do that. All we do is use that special double dagger, ‡, to mean the difference in energy going from the starting material to the transition state; the difference in free energy.
Now we can calculate that equilibrium constant. We know how much starting material there is; the Law of Mass Action. We know how much transition state there is in equilibrium with that starting material. Right? So that e^(-ΔG‡/RT) is the equilibrium constant for getting to the transition state; multiply it by the concentration of the starting material, and you have the concentration of the transition state. Multiply it by 10^13, and you know how fast it's going. Right? So it assumes that there's a universal rate constant for transition states going to the product; 10^13/second. That's about how fast things vibrate. So things won't stay on the potato chip, they'll roll off, and the rate at which they roll off is 10^13/second. This is, of course, an approximation. It isn't correct. Because there's not true equilibrium between the transition state and the starting material. For true equilibrium the same number must be going in and out. Right? But when things get to the transition state, often they keep going, they don't come back. So it's not really a rigorous theory, but it's a very, very helpful theory for approximate purposes. And that's what I say here. Okay?
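[Editor's note: the transition-state-theory recipe just described can be sketched in a few lines. The 10^13/s prefactor is the lecture's universal vibration rate; the 20 kcal/mol barrier in the example is a hypothetical value chosen for illustration, not one from the lecture.]

```python
import math

R = 1.987e-3       # gas constant, kcal/(mol*K) -- assumed value
PREFACTOR = 1e13   # per second; the lecture's universal vibration rate

def tst_rate_constant(dG_ddagger, T=298.0):
    """Eyring-style rate constant: 10^13/s times the equilibrium
    fraction at the transition state, e^(-dG_ddagger/RT)."""
    return PREFACTOR * math.exp(-dG_ddagger / (R * T))

# Hypothetical 20 kcal/mol barrier at room temperature:
print(tst_rate_constant(20.0))   # a few hundredths per second
```

So a 20 kcal/mol free-energy barrier means roughly one event every minute at room temperature; each extra kilocalorie costs about a factor of 10^(3/4).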
Chapter 4. Free Radical Halogenations: Predicting Reaction Equilibria and Rates [00:29:40]
So using energies to predict equilibria and rates for one-step reactions; free radical halogenations. And let me look, we do have time to go through this stuff, I think; at least most of it. We've already seen this, that you break a chlorine molecule into two atoms. Then you do single electron curved arrows; take the hydrogen away from methyl — from methane, to give methyl, and then it attacks chlorine — we've seen this before — and the chlorine atom comes back to constitute a chain reaction. Okay? So it's catalytic in radicals. Okay, now we break and make bonds in this. So we've already seen average bond energies that might tell us how hard it is to break a bond — right? — if that's what we have to do to get to the transition state. So we may be on the way to getting rates. But are these average bond energies real bond energies, or are they just a trick for reckoning the enthalpy, the total energy, the heat of formation or whatever, of a particular molecule? And we talked about this before and saw that mostly it is a trick; that individual bond energies change. Right?
So average bond energies are not what you want to use for this purpose; although they might be in the ballpark of the right numbers, but they're not the right numbers. What you really need are bond dissociation energies; the actual energy it takes to break a specific bond. And we've talked about how you could get that from spectroscopy. Those are real. The average bond energies are just a way of calculating molecular energy. So Appendix II of the Streitwieser and Heathcock book shows various specific bonds between the groups in column A and the atoms or groups in row A there, in the top row. And those were the best values when that book was published; the best values as of 2003 are a little bit changed from that, and were tabulated by Barney Ellison, whose picture I showed you the time before last.
So here's a table from Blanksby and Ellison, that shows a bunch of these things and how well they're known. The H2 bond dissociation energy is 104.206, plus or minus 0.003 kilocalories/mol. So it's very well-known. So some of them are very, very well-known, and some of them are known only approximately. Let's look at a few of them. So H2 104.2; HF 136; HCl 103; HBr 87; HI 71. So as you go down the halogens, the bonds get weaker and weaker. Why should that be? Why do larger halogens give weaker bonds? It's because they have poorer overlap with hydrogen, at normal bond distances, because their orbitals are very diffuse; they don't have high numbers in the region where the hydrogen is. And the energy match is also unfavorable. So if you have HF, with good overlap and a very low-lying F orbital, then you get lots of stabilization; the sum of those two red arrows is big, a strong bond. And if it's HI, the overlap isn't so good, and the energy match is better. But still the amount by which the electrons go down is not as much as with HF. So we can see qualitatively why that is. So less electron stabilization means a weaker bond. And we're going to talk about this more next semester. This is just to give you a flavor of it.
And here, in Table II, we see there's the same trend in bonding with methyl groups; that the fluorine-methyl bond is 115, but the iodine-methyl bond is only 58. But if you go across hydrogen with the different alkyls — methyl, ethyl, isopropyl, t-butyl — they're all very close to 100 kilocalories/mol. But not the same, and that will turn out to make a difference; as we'll see early next semester. And if you look at some of the other kinds of radicals, you see there are differences. There are special cases for vinyl, allyl — remember allyl alcohol — phenyl and benzyl. And I'll show you just a little bit about that. Are these unusual bond dissociation energy values due to unusual bonds or unusual radicals? That is, as the standard we have: here's the starting material; here's the bond; here's the product as the bond's broken, forming the radicals. So we can get a strong bond either by making the bond strong or by making the radicals bad; either way increases the barrier. Right? Similarly we can make a weak one, either by making the bond weak or making the radical stable. Is this something that it's meaningful to talk about?
Well let's look at it in the case of these special cases. So vinyl, you see, is 110 kilocalories/mol. It's the bond to H. So it's a very strong bond. Why? Is there something special about the vinyl radical? There's no special stabilization. There's no overlap between that singly occupied orbital, which is a σ orbital, and the double bond, which is a π orbital. There's nothing especially stable about the vinyl radical. But if you look at the bond in the starting material, the C-H bond, it has sp2 hybridization of the carbon. So that one's a strong bond. That one's hard to break, because the bond is unusually strong; not because the radicals are unusually unstable. Okay? Or if you look at phenyl, it's the same deal. You have a σ SOMO, and the π bonds, the low-lying LUMOs that might stabilize that singly occupied orbital's electron, are perpendicular, or orthogonal, to it. So again it's hard to break that bond. On the other hand, the allyl — and remember allyl alcohol we were talking about — here's an allyl radical. Now there's overlap, π overlap between the SOMO and the π* and the π. It turns out — and we'll talk about this more next time — that when you mix those two, the π* on the right and the SOMO in the middle, you get stabilization. You get the same stabilization by mixing the π with the SOMO, and if you mix both of them with the SOMO, you get that structure. But this isn't the time to talk about that. But at any rate, the starting bond is normal. It's an sp3 C-H bond, a normal bond. But in this case the radical is unusually stable. So it's easy to break that one; 10 kilocalories easier than for normal C-H bonds. And the same for benzyl.
Okay, now the possibility of doing a halogenation. Let's look first, from the point of view of the equilibrium constant, at the difference in energy between starting material and product. So we look at what bonds are broken, and what bonds are formed. The red bonds are broken, the green bonds are formed. So there'll be a cost for breaking the red bonds and there'll be a return for making the green bonds. And let's see how big it is. Okay, we're going to do it for fluorine, chlorine, bromine and iodine. Okay, in every case we're breaking a C-H bond, 105 kilocalories/mol, and we're breaking a halogen-halogen bond. But those are different for the different halogens; although interestingly not monotonic. It's not a smooth progression, it goes up and then down again. And the cost is the sum of those two; what it's going to cost to break those bonds.
How about making bonds? We're going to make the C-X bond and we're going to make the H-X bond. So the returns will be those. And now we see whether we can make a living doing this. What will the profit be? Right? So in the case of fluorine, the return of 251 is much greater than the cost of 142. So it's really exothermic. Right? For chlorine, the profit is only 19. Bromine is only 9, and iodine is minus 12. What does that mean? It means you can't do this reaction with iodine. The equilibrium lies in the wrong direction. Okay, so already the equilibrium constant, and how energy relates to it, tells you that something is impossible to do. But the others seem to be possible. But will they happen? How fast will they be? Do you have to wait until the end of the universe in order for this to happen?
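The cost-and-return bookkeeping above can be sketched in a few lines of Python. The bond-dissociation energies below are approximate textbook values, an assumption rather than the exact numbers from the lecture slide, so the profits come out within a few kilocalories of the ones quoted; the qualitative conclusion (exothermic for F, Cl and Br, endothermic for I) is the same:

```python
# Thermochemistry of CH4 + X2 -> CH3-X + H-X, done by bond counting.
# Bond-dissociation energies in kcal/mol: approximate textbook values
# (an assumption; different tables shift these by a few kcal/mol).
BDE = {
    "C-H": 105,
    "F-F": 38,  "Cl-Cl": 58, "Br-Br": 46, "I-I": 36,
    "C-F": 115, "C-Cl": 84,  "C-Br": 70,  "C-I": 57,
    "H-F": 136, "H-Cl": 103, "H-Br": 87,  "H-I": 71,
}

def profit(X):
    """(cost, return, profit) in kcal/mol; positive profit = exothermic."""
    cost = BDE["C-H"] + BDE[X + "-" + X]   # red bonds broken
    ret = BDE["C-" + X] + BDE["H-" + X]    # green bonds formed
    return cost, ret, ret - cost

for X in ("F", "Cl", "Br", "I"):
    cost, ret, net = profit(X)
    print(f"{X:2s}: cost {cost:3d}, return {ret:3d}, profit {net:+d} kcal/mol")
```

With these values fluorination is hugely downhill, chlorination and bromination are modestly downhill, and iodination is uphill, so the equilibrium rules out iodine before we ever ask about rate.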
Well, let's look at the rate, which will depend on the mechanism. The equilibrium constant doesn't depend on the path; it's just how high the starting material is and how high the product is. But how fast you get there depends on the barriers you have to cross. So let's suppose — let's just try breaking two bonds, changing partners, and forming two bonds. Okay? Then, to get to the barrier, we have to break two bonds. So that cost is what we're going to have to spend in order to get to the barrier. So is this a plausible mechanism? How fast would the reaction be, if we used that mechanism? Well, at room temperature, say 300 Kelvin, it's 10 to the minus 3/4ths of how high it is to get there, times 10^13 per second. But notice that 10^13 times 10^-106 (106 being 3/4ths of 142) is 10^-93 per second. 10^93 seconds is a long, long time. I don't know how old the universe is, but I suspect that's longer. Right? So what about this mechanism? Plausible? Implausible? How could you make it happen? Shai?
Student: Change the temperature.
Professor Michael McBride: Pardon me?
Student: Change the temperature.
Professor Michael McBride: Change the temperature; because we can change that 3/4ths to a much smaller number by increasing the temperature. At room temperature there's no way to do that one. But if we go to 3000 Kelvin, then it's 250 reactions per second. So that's quite feasible, as long as nothing else happens; something else might be even faster. Right? But at least this mechanism could work, if you were in a flame, say, or in Professor Chupka's oven, that we talked about last time.
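The rule of thumb behind those numbers — 10^13 attempts per second times a Boltzmann factor of 10 to the minus 3/4ths of the barrier (in kcal/mol) at 300 K, with the exponent shrinking as 1/T — is easy to put in code. This is a sketch of the lecture's estimate, not a real transition-state-theory calculation:

```python
def rate_per_second(barrier_kcal, T=300.0):
    """Crude rate estimate: 10^13/s attempt frequency times a Boltzmann
    factor of 10^(-(3/4) * barrier) at 300 K; the exponent scales as 1/T.
    (The 3/4-per-kcal rule is the lecture's room-temperature shorthand for
    1/(2.303*R*T).)"""
    return 1e13 * 10.0 ** (-0.75 * barrier_kcal * (300.0 / T))

print(rate_per_second(142, 300))   # effectively zero (~10^-93 per second)
print(rate_per_second(142, 3000))  # a couple hundred per second: feasible
```

Raising T from 300 to 3000 K cuts the exponent tenfold, which is why a flame or Professor Chupka's oven turns a never-happens reaction into one that runs hundreds of times a second.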
Okay, now let's look at Eyring's H + H2 here. So we want to get from this valley to the other valley. How do we do it? This mechanism would be to break a bond and then — so dissociation and then association. But it's very slow to get up there, right? — to get up to that plateau of breaking a bond. There's a much easier way to get from one valley to the other. What is it? Go through the pass. Right? So instead of doing that slow reaction, you can do this much faster reaction, making a new bond as you break the old one. So you don't have to pay all the cost of breaking the old one, and you're getting something back for it. So you can have a free-radical chain substitution, where you have an X atom — and we've talked about this before. It takes the H away from R. Then the R group takes away X from X2. And it can just go round and round.
There's this machine, very much like the machine that Professor Sharpless talked about, that goes around and around. You feed in starting material, and the products come out. And it's a catalyst; you don't have to go to such high energies as you normally would have to. So the possibility of halogenation — here we looked at it at equilibrium and saw that, with iodine, forget it; the others look okay. And for a mechanism with a reasonable rate, we could exchange those two columns, so that we make the H-X as we break CH3-H — so the X atom helps you do that — and that generates a CH3 group, which helps the reaction on the right. So now, step one, how much energy do you have to put in? You're paying 105 but you're getting back 136. So that's great. Right? The first step seems plausible now; although you don't know how high the barrier is that you have to get over from starting material to product. But at least it's much easier.
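The same bond-counting splits the overall profit into the two chain-propagation steps. As above, the bond-dissociation energies are approximate textbook values (an assumption), so the step enthalpies land within a few kcal/mol of the lecture's; the pattern — hydrogen abstraction downhill for fluorine (pay 105, get back 136) but slightly uphill for chlorine and bromine, and the halogen-transfer step downhill for every halogen — comes out the same:

```python
# Per-step enthalpies for the free-radical chain, in kcal/mol
# (negative = downhill/exothermic), using approximate textbook bond
# energies (an assumption):
#   step 1:  X. + CH4  -> H-X + CH3.
#   step 2:  CH3. + X2 -> CH3-X + X.
BDE = {
    "C-H": 105,
    "F-F": 38,  "Cl-Cl": 58, "Br-Br": 46, "I-I": 36,
    "C-F": 115, "C-Cl": 84,  "C-Br": 70,  "C-I": 57,
    "H-F": 136, "H-Cl": 103, "H-Br": 87,  "H-I": 71,
}

def step1(X):  # break C-H, make H-X
    return BDE["C-H"] - BDE["H-" + X]

def step2(X):  # break X-X, make C-X
    return BDE[X + "-" + X] - BDE["C-" + X]

for X in ("F", "Cl", "Br", "I"):
    print(f"{X:2s}: step 1 {step1(X):+4d}, step 2 {step2(X):+4d} kcal/mol")
```

The two steps add up to the overall enthalpy change, which is the point of the catalytic cycle: the X atom and the CH3 radical are regenerated each time around, and no single step has to pay the full bond-breaking cost.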
So for the top one, it looks good all around. The chlorine and bromine will be a little touchy on their first steps, because they're uphill in energy. But step two is good in all cases. Okay, so what we need to do is be able to predict activation energy. But Rome wasn't built in a day, and that's going to happen next semester, to get into this and to talk in more detail about these catalytic reactions. Notice that this is exactly the kind of thing Sharpless was talking about. If you could find a way of lowering one of these barriers, then the catalytic cycle would work. He was saying, remember, that it has to be democratic; that all the steps, as you go around the cycle, have to go at the same rate, or else everything stops up and waits before it tries to get over some big step. So you have to have all the steps be fast; and that's why he was saying that that diisopropylethylamine helped the formation of omeprazole.
Chapter 5. A Summary of the First Semester [00:43:02]
Well, so we've been doing organic chemistry this semester. Some people might doubt that. Next semester there'll be no doubt; everyone will agree that what we do is organic chemistry. We're going to talk about the chemistry of functional groups and sugars and amino acids and all the carbonyl groups and esters; all these things. But there's a reason that it's been a little different this semester. We've talked mostly about physical-organic chemistry. And the reason is because of where I came from; I did my Ph.D. with, and in fact took a course as an undergraduate from, Paul Bartlett at Harvard. And as you've seen from our common ancestry, he was a physical-organic chemist; more interested in how reactions work than in what you can make. Okay, so there's this book that came out in 1939 and went through at least three editions (I have the Second and Third Editions here), which is called The Nature of the Chemical Bond. And this was a fabulously influential book; even if it is the one that said it was 126 kilocalories/mol to take carbon away from graphite, when we know it's 171. So Pauling was arguably the most influential chemist of the Twentieth Century. He got two Nobel Prizes.
[A tape is played]
Recording of Professor Jack Dunitz: At the time when I was reading that book I was wondering whether chemistry was really as interesting as I had hoped it was going to be. And I think I was almost ready to give it up and do something else. I didn't care very much for this chemistry which was full of facts and recipes and very little thought in it, very little intellectual structure. And Pauling's book gave me a glimpse of what the future of chemistry was going to be and particularly, perhaps, my future.
Professor Michael McBride: Right, okay. So that's what we've been doing the first semester, is this kind of thing, thinking about the chemical bond. We started with wondering, with Newton, whether there was an atomic force law, and then we looked to see if we could see bonds, or feel them. And then we tried to understand bonding and reactivity, through the Schrödinger equation; which most people believe is the fundamental way to understand things properly. And then we learned how chemists learned to treasure the molecular model. Things like this, that turned out to be really, really useful tools — composition, constitution, configuration, and conformation — and finally energy. Right?
So I hope this semester, even if we haven't done as much organic chemistry as some people would care for, that at least we've raised some big questions; like, how does science know things? Right? Or, compared to what? Those are the really big questions. But even if we're only focusing on the specific content of the course, interested in bonds, here are two questions for you to think about. Were chemical bonds discovered, or were they invented? To what extent are chemical bonds real? Or to what extent are they a figment of a chemist's imagination? This is the kind of thing we've been aiming at all semester. So you should be in a position to think about this now. Or would we even have chemical bonds without our own chemical forebears? So we've looked at how the idea of bonds and these different properties of bonds developed. But suppose chemistry were developing in some other solar system. Right? What would happen there? Suppose people discovered the Schrödinger equation before they had the idea of bonds. Would bonds have been necessary? Is chemistry going to evolve such that all you need do is put stuff into a computer, solve quantum mechanics, and forget about bonds? Okay? The answers to these questions aren't obvious, but they're good ones to think about. So this is the end, and good luck on the final. Are you fired up?
Professor Michael McBride: Are you ready to go? Good. Thanks.
[end of transcript]