Hindsight and Risk-based Decision Making
Last weekend I was watching an NFL game (I forget which one) and the team, which already had a solid lead, was considering going for a touchdown rather than a field goal on fourth and goal. The announcer was going "Bad idea, bad decision. Take the field goal and the sure points. You don't want to risk getting the other team back in the game with the emotional lift of stopping you on fourth and goal." Well, the team went for it and made the touchdown, after which the announcer said "I guess it was a good decision after all."
But was it? If you hit a nineteen in blackjack and pull a deuce, was it a good decision? If you place an even-money bet that a fair die will come up "6", and it does, was that a good decision? I would say no. I would argue that both were bad decisions, despite the fact that they happened to yield positive results for the decision-maker. The reason is that, given the information the decision-maker had at the time, both moves had a worse expected value than the alternative that was passed up.
I won't bore my audience with too long a digression into expected value and decision trees. Suffice it to say that the standard approach to making decisions under uncertainty is to list the possible outcomes of each decision, assign a value and a probability to each outcome, and then sum the probability-weighted values. The decision with the highest sum is the one you would expect, on average, to yield the highest value. Take the die-roll bet above. If you bet a dollar, you win a dollar on a roll of "6", a 16.7% probability, and lose a dollar on a roll of 1-5, an 83.3% probability. The value of the "don't bet" decision is zero. The value of the "bet" decision is (16.7% x $1) + (83.3% x -$1) = -$0.67. So "don't bet" is the better decision, since zero beats the negative expected value of betting. Here is a more complete discussion of the decision tree process.
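In code, the comparison amounts to nothing more than summing probability-weighted payoffs for each branch of the tree; a minimal sketch using the $1 stake from the example:

```python
# Expected value of a decision: sum of probability * payoff over its outcomes.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# "Bet": win $1 on a 6 (1/6 chance), lose $1 on 1-5 (5/6 chance).
bet = [(1/6, 1.0), (5/6, -1.0)]
# "Don't bet": nothing happens either way.
no_bet = [(1.0, 0.0)]

print(f"EV of betting:     {expected_value(bet):+.2f}")     # about -0.67
print(f"EV of not betting: {expected_value(no_bet):+.2f}")  # +0.00
```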
A couple of provisos:
- When the situation is more complex, the trick of course is to assign the right values and probabilities. We can assign these exactly for cards and dice, but it's a little harder for something in the business world, like, say, Enron's decision to enter the broadband business. But managers are paid the big bucks to do their best, and they have tools at their disposal to manage their lack of information. For example, once you build a base case, you can ask questions like "OK, I am not sure about the size of the broadband market, but how large would it have to be to offset the risk involved?"
- Like many real-world processes, things get a bit squirrelly as you approach the asymptotes: really small-probability events, particularly those with very large financial values (positive or negative) attached. Small-probability positive events are essentially a lottery, and many people buy lottery tickets even though we know the expected value is less than the price. I play blackjack too, despite a negative expected value, because I get non-monetary benefits from the play. Small-probability negative events are called disasters, and are things we insure against. Many times the decision to buy insurance has a negative expected value, but we do it anyway because we sleep better at night knowing that, while we may be throwing away a little expected value, we have pre-empted an event that would bankrupt us (see the sketch below). Here we get into interesting topics of risk profiles and risk tolerance, which I will avoid.
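To put that insurance trade-off in concrete terms, here is a minimal sketch with made-up numbers: a $250 premium against a one-in-a-thousand chance of a $200,000 loss.

```python
# Made-up numbers purely for illustration.
premium = 250.0
p_disaster = 1 / 1000
loss = 200_000.0

ev_insured = -premium              # you pay the premium no matter what
ev_uninsured = -p_disaster * loss  # expected loss if you go without coverage

print(f"EV insured:           {ev_insured:>12,.2f}")    #      -250.00
print(f"EV uninsured:         {ev_uninsured:>12,.2f}")  #      -200.00 (better on average)

# ...but the worst case is what keeps you up at night:
print(f"Worst case insured:   {-premium:>12,.2f}")      #      -250.00
print(f"Worst case uninsured: {-loss:>12,.2f}")         #  -200,000.00 (ruin)
```

On average the insurance "loses" $50 a year, which is exactly the bit of expected value you throw away to cap the downside.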
Unfortunately, in evaluating historical decisions, we often ignore the state of facts and risks the decision-maker faced at the time of the decision. We argue Meade should have pursued Lee harder after Gettysburg, because we know now that Lee's army got trapped behind a swollen river. The Chargers shouldn't have traded half their assets** to move up one spot in the draft to get Ryan Leaf. And Enron should not have entered the broadband business. We treat the decision-makers in each of these cases as boneheads today (we even threw Skilling in jail, as much for his failed business decision as for any fraud). But all of these evaluations are based on the outcomes, not on what the decision-makers were facing at the time. Meade had been in charge of the army for less than a week, had driven Lee from a battlefield for the first time ever, and had a primary charge of defending Washington. It is hard to believe today, but Peyton Manning and Ryan Leaf were considered nearly equivalent in quality in the '98 draft, and the Chargers trade might have been perfectly appropriate if they had actually gotten a Manning-quality quarterback. Enron's vision of broadband looked like it would become an enormous business, which in fact it did, just five years too late for them.
** The Chargers traded an inventory of picks and players to the Arizona Cardinals, who, true to form, did nothing with this goldmine. The Cowboys, by contrast, arguably built a whole dynasty in the '90s off the slew of picks they got in the Herschel Walker trade with Minnesota.
Jody:
What of Bayesian reasoning?
In general, suppose the probabilities of outcomes for choices A and B are unknown, but estimates of the probabilities exist. Suppose that, based on your current estimates, you should choose A, yet you choose B and B is successful. Then suppose that incorporating this outcome into your probability estimates makes B the right choice. Then you could plausibly say "that's a bad idea" beforehand and "that was a good idea after all" afterward.
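A rough numerical sketch of that first scenario, with completely made-up numbers (a Beta prior on B's success rate, updated after the one observed success):

```python
# Hypothetical numbers: A succeeds 52% of the time (assumed known), while the
# prior belief about B is Beta(5, 5), i.e. a mean success rate of 50%.
def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

p_a = 0.52
alpha, beta = 5, 5

print(f"Before: P(B succeeds) ~ {beta_mean(alpha, beta):.2f} vs. A at {p_a} -> choose A")

# B is chosen anyway and succeeds once; the Bayesian update adds 1 to alpha.
alpha += 1
print(f"After:  P(B succeeds) ~ {beta_mean(alpha, beta):.2f} vs. A at {p_a} -> now choose B")
```

With these particular numbers, one observed success is enough to flip the estimate in B's favor, which is the "that was a good idea after all" case.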
Or, as another scenario, suppose there are two people, a decider and an observer, each of whom has his own private estimate of the probable outcomes. From the observer's priors, choice B is bad, but from the decider's priors, choice B is good. When the observer sees B implemented, he'll say that's a bad idea. But if the decider has the better estimates, then it's likely the observer is wrong about the decider making a bad decision.
Arguably, the football scenario that kicks off this post is like the second scenario. The announcer has likely built up probability estimates for teams in general, while the coach is working from estimates based on experience with his own team, so the coach should have the better estimates.
Though perhaps I'm reading too much into it...
January 19, 2007, 4:16 pm
Ray G:
Or, more simply put, the coach knows certain things about his fourth-and-goal situation: his team's ability in such situations, the psychology in play at that exact moment of the game that only those on the field can understand, what he has learned from studying the other team in such situations, what to expect from the other team if his side does fail, and so on.
So the coach's decision was far from a 50/50 toss up, whereas the announcer's summation was exactly that, a coin toss.
January 19, 2007, 6:09 pm
Mesa EconoGuy:
I like this scenario, football risk evaluation. Probability-weighted decision trees are a good first step, but options pricing theory also comes into play here:
Time to expiration (Dirk Koetter and Andy Reid being the most recent obvious examples): you cannot rely upon the "no bet" option late in the game, because you are dangerously close to "expiration." This affects your behavior (or it should), because you must score, and time is running out.
Here’s Black-Scholes:
http://www.hoadley.net/options/bs.htm
Accordingly, you must weight the lower-success outcome more.
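If it helps to see the time-decay point in actual numbers rather than by clicking through, here is a minimal Black-Scholes sketch with made-up inputs; it is only meant to show how value evaporates as expiration approaches, not to model a football game:

```python
# Standard Black-Scholes price of a European call, same stock and strike
# throughout; only the time remaining changes. All inputs are hypothetical.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

for T in (1.0, 0.25, 0.02):  # a year, a quarter, about a week to expiration
    print(f"T = {T:>4}: call value = {bs_call(100, 100, 0.05, 0.20, T):.2f}")
```

The shorter the remaining time, the less any "wait for a better spot" option is worth, which is the point of the analogy.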
I have no idea how to frame the "go-for-two" argument in the first half.
January 19, 2007, 6:14 pm
Sol:
In fact, I remember hearing about a study a few years back that said NFL coaches should go for touchdowns in these circumstances more than they do. Apparently, on average, your odds of scoring on the play, plus your odds of scoring if they stop you, they start from their own one-yard line, and you stop them and get the ball back, are much better than the odds that they stop you and manage to score after starting from their own one-yard line.
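A rough sketch of that kind of comparison, with every probability made up purely for illustration (the real study's numbers would differ, and this ignores field position after a kickoff):

```python
# Hypothetical probabilities; only the shape of the comparison matters here.
p_convert = 0.55           # score the TD on the 4th-and-goal play
p_score_after_stop = 0.30  # they stop you at the 1, you get it back and score
p_they_drive_field = 0.10  # they stop you at the 1 and drive the length of the field

ev_go = p_convert * 7 + (1 - p_convert) * (p_score_after_stop * 7
                                           - p_they_drive_field * 7)
ev_kick = 0.98 * 3         # a chip-shot field goal is nearly automatic

print(f"Go for it: ~{ev_go:.1f} expected points")    # ~4.5
print(f"Kick:      ~{ev_kick:.1f} expected points")  # ~2.9
```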
If that analysis is correct, then the announcers are merely perpetuating old stereotypes about the best thing to do in those circumstances. Probably because it's very easy to remember the dramatic circumstances where a heroic goal-line stand changes the direction of the game, easy to forget the more common case where the dramatic goal-line stand does absolutely nothing to the overall flow of the game, and easy to forget those cases where the difference between a field goal and a touchdown there would have won the game, if the coach hadn't settled for 3.
January 20, 2007, 6:12 am
Brad Warbiany:
All,
You're right, in that from the announcer's standpoint, he probably doesn't know enough about the situation to accurately make a call on whether it's a good decision.
That doesn't change Warren's point, which is that the decision is either right or wrong based on the probabilities involved. You don't even necessarily have to make the decision based on the right information for it to be the right decision; the point is simply that there is a right decision and a wrong one.
As an example, take poker. Let's say I'm holding a moderately decent hand, and another player bets big into me. I had noticed that earlier in the game, his mouth twitched when he bet big, and that he was bluffing on that hand. So I determine that based on the "tell" I discovered, I think he's bluffing. I call, he was bluffing, and I win the pot. Did I make the right decision? Maybe, maybe not. Maybe he just had a bunch of coffee that morning and was prone to twitching. Maybe he gets nervous on every big bet and twitches, whether he has good cards or not. Maybe he only bluffs 10% of the time when he's twitching like that, and while I made a wrong decision, I still got lucky with my preferred result.
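That poker spot reduces to the same probability-weighted arithmetic; a minimal sketch with made-up pot sizes, ignoring the chance that my hand is good even when he isn't bluffing:

```python
# Hypothetical pot and bet sizes.
pot = 100.0  # what's already in the middle
bet = 50.0   # what it costs me to call

def ev_of_call(p_bluff):
    # If he's bluffing I win the pot plus his bet; otherwise I lose my call.
    return p_bluff * (pot + bet) - (1 - p_bluff) * bet

for p in (0.10, 0.25, 0.50):
    print(f"P(bluff) = {p:.0%}: EV of calling = {ev_of_call(p):+.1f}")
```

At a 10% bluff frequency the call is clearly wrong (about -30 here) even when it happens to win, which is exactly the "wrong decision, lucky result" case.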
I see the same thing in that TV show, Deal or No Deal. You're given a fairly simple probability decision on each turn: you have to decide whether it's better to take the sure offer or to bet on opening another case. After a player takes the deal, they always show "what would have happened" based on the next case the player would have opened, but in reality, if the next case would have been the $0.01 case, that doesn't change whether the decision to take the deal was correct. Based on the information you had, the case could just as easily have been the $1,000,000 case; the decision is right or wrong as a matter of probability, regardless of the outcome.
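That comparison is easy to put in numbers; a minimal sketch with hypothetical remaining case values and a hypothetical banker offer:

```python
# Hypothetical values still in play and a hypothetical offer.
remaining_cases = [0.01, 100, 10_000, 1_000_000]
offer = 180_000.0

ev_keep_playing = sum(remaining_cases) / len(remaining_cases)

print(f"EV of the remaining cases: {ev_keep_playing:>12,.2f}")  # ~252,525
print(f"Banker's offer:            {offer:>12,.2f}")

# The deal is judged against this comparison (and the player's risk tolerance),
# not against whichever case happened to be opened next.
```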
January 20, 2007, 7:47 am
Jody:
Brad, Deal or No Deal doesn't work as an example, since the probabilities are precisely known.
January 20, 2007, 12:31 pm
Brad Warbiany:
Jody,
The probabilities aren't precisely known by most of the contestants (as far as I can tell). Frankly I think most of them are going on "gut instinct" rather than any real understanding of probability.
January 20, 2007, 8:42 pm