Yesterday I spouted off loosely about some behavioral economic (a mix of economics and psych) ideas, but I didn't mention where I'm picking them up. They are being gleaned from a course I'm auditing at Harvard, taught by Sendhil Mullainathan, who would probably cringe to know how I'm bastardizing (aka dumbing down) the theories of minds miles greater and more subtly nuanced than mine. Let's just hope he's not reading.
The particular concept I was writing about - that we tend to stick to our beliefs by spinning new information in a way that reinforces those beliefs - is related to a very cool concept that Sendhil was talking about a couple weeks ago. It's so cool, it's worth mucking up again from my memory.
Turns out our propensity to stick to our guns may not just be a self-serving, ego-stroking mechanism to make us feel good. It could be that it's just a by-product of a couple of logical flaws - or "biases," as behavioral economists term them - that we humans have.
Here's the bias in question: we like to see patterns. Or to be more precise, we like to see information we know about the big picture show up in the little picture.
Here's an example: If you were asked - out of all the families that had five kids in the US, what percentage had them in this pattern - BGGBG - vs - GGGGG - you'd probably say the first pattern was more frequent than the second (as does almost everyone who's asked this question). A professional statistician, however, would tell you that both patterns are equally probable - and they'd be right. The reason you (and I) would guess the BGGBG pattern is because, well, we know that there are about 50/50 girls to boys and so it's just really, really weird to see 100% girls in one family. That's because you and I make the error of thinking just because the odds of something happening in an entire population are 50/50, those same odds should exist in a small sample (they don't). If you really wanted to test this theory out, you could flip a coin a few thousand times and see that any string of five flips (say, HTTHT) is no more likely than another (like HHHHH).
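If you don't feel like flipping an actual coin a few thousand times, here's a rough little simulation that makes the same point (just a Python sketch of my own, with an arbitrary number of trials - nothing official from the class):

```python
import random

TRIALS = 1_000_000

def random_family():
    # Each child is independently a boy (B) or girl (G) with 50/50 odds.
    return "".join(random.choice("BG") for _ in range(5))

counts = {"BGGBG": 0, "GGGGG": 0}
for _ in range(TRIALS):
    family = random_family()
    if family in counts:
        counts[family] += 1

# Both patterns show up in roughly 1 in 32 families, since any specific
# five-child sequence has probability (1/2)**5.
for pattern, count in counts.items():
    print(pattern, count / TRIALS)
```

Both patterns come out at about three percent of families - one in thirty-two - no matter how weird the all-girl one feels.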
One place this bias shows up is in something called "the gambler's fallacy." If I ask you to bet on the flip of a coin, it's 50/50 whether you'd bet heads or tails, right? But if we do a few rounds of coin tosses, and heads came up three times in a row, what would you bet would be the next toss? Tails, right? (Be honest.) You, like 99% of humanity, think "Well, I know that half of all coin tosses turn up heads and half tails, so if I've seen a lot of heads, a tails has to turn up soon, right?" Wrong. It's the same bias as above - we think the fact that 50% of all tosses come up heads should be seen in small samples as well as large samples. Again, it ain't the way it happens.
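Same caveat as before - this is just my own toy simulation - but you can check the coin's lack of memory, too:

```python
import random

TRIALS = 1_000_000
flips = [random.choice("HT") for _ in range(TRIALS)]

streaks = 0      # times we saw three heads in a row
heads_after = 0  # times the very next flip was heads anyway

for i in range(3, TRIALS):
    if flips[i - 3:i] == ["H", "H", "H"]:
        streaks += 1
        if flips[i] == "H":
            heads_after += 1

# Prints roughly 0.5: three heads in a row tells the next flip nothing.
print(heads_after / streaks)
```

The ratio hovers right around one half; the "overdue" tails never gets any more likely.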
But there's another bias gamblers have which seems to be the opposite of the gambler's fallacy. This one has its own special name too: the "hot hand." Like its name suggests, the "hot hand" says that gamblers will often fall under the illusion that they are on a roll, having a lucky streak. If we were still playing our coin toss game and you kept seeing heads, you might begin to think - "okay, let's keep putting our money on heads." How can this be so? How can a gambler go from thinking that the winds have got to shift to thinking their wave is going to ride forever?
The two biases aren't as contradictory as they seem. In fact, they could be different manifestations of the same bias. The difference is only in the pattern that the gambler expects to see. If he's looking for the pattern HTHTHT, he'll fall for the gambler's fallacy. If he thinks he's on a roll of HHHH..., he'll get a hot hand. The interesting question is when and how he'll make the switch from one to the other. Behavioral economists seem to believe that we'll stick with our expectation of one pattern for a long time, long past the point where the evidence shows our expectations are way off. When the evidence builds up to a breaking point, though, we'll finally give up on our bias. Sadly, as the gambler's example suggests, we just jump to another set of expectations that are likely to be equally false.
That brings us back to yesterday's post. It's not just at the roulette table that we are this irrational. It's probably the case in our everyday beliefs. If we know something to be "true," we'll keep expecting (and looking for) evidence that supports our belief. It'll take a large heap of contradictory evidence (hurled at us successively at high speed) in order to unhook us from our beliefs. And when we finally snap out of it? Well, we just find a new belief - and pattern - to look for and, of course, continue to see.
Sunday, October 11, 2009
A loooong time in posting. So long, in fact, I am surprised that I managed to find my way back to this blog (thank you blogger.com for remembering me).
Kool-Aid Konfidential thoughts have been swilling in my head since my last post eight months ago, but job searching, grad school applying and other lame excusing has kept me from putting thoughts to text box. Instead, I've fulfilled my proselytizing urges with overly eager responses to friends' emails and, worse, Facebook status updates. I return here to relieve my friends of future unsolicited spoutings.
Today, I am desisting from responding to one such FB thread. Friend Andrew posted a clever image (above) from my new favorite blog, simplecomplexity.com, and people got to thinking: is this really how information and confusion dance together?
At first glance, it looks about right. Take health care spending. With no information about how our health care system works (or fails to work), you'd be completely at a loss on how to bring down runaway spending. Then you get some information: your friend gives you a convincing explanation of, say, defensive medicine, and you say, aha! I know how to fix health care: cap medical malpractice suits. Great, no confusion. But then another friend tells you, no, no, no, medical malpractice suits aren't the problem; it's the fact that doctors are incentivized to do more procedures, not necessarily to cure illness. Sounds good too, but now who is right? We are back to confusion. Finally, consider that you have a lot of other clever friends with other plausible explanations; that just means you'll get more and more confused.
Of course, Andrew's curve suggests a macro view of this micro example. Considering we aren't just concerned with one question (like health care policy) but have plenty of other concerns (the right diet, where to send our kids to kindergarten, what's the meaning of life), we can see there are lots of opportunities to get bewildered.
And yet, we don't all walk around in a daze of confusion. Is that because we all stopped at "a little learning" and aren't taking in new information?
The work of behavioral economists suggests so. It's not exactly the case that we stop taking information in, though; rather, we are selective about the information we absorb and how we absorb it. We all have self-serving biases that either make us hear only what we want to hear or, when a bit of unappealing information slaps us in the face, spin it to our good fortune.
There comes a time, however, when we get hit with so much information that contradicts our view that we break out of our certainty. What happens then? Do we become open minded? Do we live in uncertainty and confusion, now having learned that hard-held views can be wrong? Unfortunately (or fortunately) no; instead of hanging out in the unknown, we wend our way back to certainty - but this time with our new viewpoint. This can go on for a while, getting us - what I think is - our real confusion curve: