Monday, February 28, 2011

beauty contests and citizen legislatures

What if we got rid of our 435 representatives and 100 senators - and instead let 535 randomly picked Americans decide how to run the country?

At first glance, it seems a terrifying prospect. What, you might ask, would a first-grade teacher, a Starbucks barista or a marketing director (each as likely to get picked as anyone else) know about deciding budget matters, immigration policy, national security - or any other matter our lawmakers muddle over in a given year? (That is, if you're not cynical enough to ask the same question about politicians.)

But citizen legislatures were, after all, how democracy got started. And today there's a growing movement demonstrating that, when properly briefed, a roomful of random Joes and Jills can be remarkably sensible - even more sensible than a chamber of elected leaders.

Joe Klein gives a snapshot of one of those successful demonstrations in, of all places, China. Led by James Fishkin, the granddaddy of "deliberative democracy," 175 participants representing a city of 120,000 (60% of whom are farmers) decide which projects should get precedence and how to allocate the budget. Over three days they go through three rounds of being briefed by experts (with different views) and deliberating. In the end, according to Fishkin, "the public is quite smart if you give it a chance."

Politicians might be that smart if given a chance as well. Many of the obstacles to true deliberation among elected lawmakers have been mentioned before in this blog. When politicians are beholden to interest groups to finance their next campaign - and when every vote and floor speech is open for scrutiny - they are left with little room for open, frank discussion or compromise.

But even if politicians didn't have to raise a dime for reelection and they shut the press out of Capitol Hill, elected lawmakers might still fall short of our 535 random deliberators.

Politicians' problem is a variation of Keynes' beauty contest problem. In the market, Keynes said, prices aren't set by how much people think a stock is worth; rather, they depend on how much people think other people think it's worth. It's as if you were asked to put money on which leggy lady would win a beauty contest; you wouldn't bet on the one you thought loveliest, but on the one you thought everyone else would vote for.

In Keynes' beauty contest, as in the market, contestants and stocks end up with distorted valuations. Similarly, in elected legislatures, policy preferences get distorted. Politicians don't necessarily (or ever) vote for the laws they think wisest; they vote for the laws they think their constituents will think are wisest. It's guesswork at best. But even if they could get a precise poll, that poll wouldn't represent what voters would prefer if they had time and access to full information, since their constituents aren't sitting down with expert briefings and the trade-offs each bill involves.
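Game theorists usually teach this dynamic with the "guess 2/3 of the average" game, a stripped-down stand-in for Keynes' contest. Here's a minimal sketch in Python - the reasoning depths and the starting "honest" guess of 50 are made up purely for illustration, not anything from Keynes or Fishkin:

    # Keynes' beauty contest in its classroom form: everyone tries to guess
    # 2/3 of the average guess. A level-0 player just says what she honestly
    # thinks; every deeper level announces 2/3 of what it expects the level
    # below to say. Depths and the naive starting guess are arbitrary.

    def level_k_guess(k, naive_guess=50.0, factor=2/3):
        """What a player reasoning k levels deep announces."""
        guess = naive_guess          # level 0: your own honest valuation
        for _ in range(k):
            guess *= factor          # level k: 2/3 of level k-1's announcement
        return guess

    if __name__ == "__main__":
        for k in range(6):
            print(f"level-{k} player announces {level_k_guess(k):5.1f}")
        # Announcements slide toward zero: each player reports a guess about
        # other people's guesses rather than an honest valuation -- the same
        # distortion the post attributes to poll-chasing legislators.

The point of the toy isn't the particular numbers; it's that once everyone is guessing about everyone else's guesses, the announced answers drift away from anyone's actual judgment.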

Our hypothetical congress of citizens - who don't care about getting elected - doesn't need to do any guesswork. Instead of asking "which policy will others like?" they can go directly to "which policy do I think is best?" Unlike traders in Keynes' market - or politicians chasing polls - they'll be less apt to inflate some policies while discounting others.

Monday, February 21, 2011

the party of inflammatory rhetoric

There's a good chance that, if you're liberal, you looked at the title of this post and assumed it refers to the Republican party. Same goes if you lean conservative; you probably guessed you were about to read about Democrats. That is, if you're like the people on my Facebook feed.

After the Giffords shooting in Tucson last month, I posted a status update about "inflammatory rhetoric" that sparked a lively discussion among my friends - one that included quite a bit of inflammatory rhetoric itself. My mostly liberal friends agreed on one thing: the GOP and the Tea Party were the main purveyors of violently charged language in our national discourse. My minority of conservative friends had a very different view of reality. They saw the left as equally - or more - guilty of upping the ante on combative political dialogue.

None of these people were kooks. They all sincerely believed that their party was just as civil as - or more civil than - the other, and that the other party shouldered the blame for making our politics as hyper-partisan and vitriolic as it is today.

How, one has to wonder, could reasonable people see the world so radically differently?

There are at least two explanations for why we see the other party as more rhetorically irresponsible than ours.

For one, we are less likely to come across the nasty rhetoric from our own side - or at least not as much of it as we see from the other political party. In today's diversified media world, more than ever we pick our information streams - watching the network, reading the blogs and hanging out with the people who share our political viewpoint. And, in a phenomenon Eli Pariser calls the "filter bubble," more and more of the information that filters up to us - from our Facebook feeds or through personalization algorithms online - also conforms to our perspective. In that self-filtered and algorithm-filtered world, it comes as no surprise that you are more likely to see all the examples of the other party behaving badly, and less likely to be served up naughty examples from your own side of the political spectrum.
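To see how a personalization loop feeds on itself, here's a toy model - not Facebook's or anyone else's actual algorithm; the lean scores, update rule and numbers are invented purely for illustration:

    # A toy personalization loop. Stories have a political lean in [-1, 1];
    # the "feed" serves the stories most similar to what the reader has
    # clicked before, and the reader clicks the served story closest to her
    # own lean. All values and the update rule are made up for illustration.
    import random

    random.seed(1)
    USER_LEAN = 0.7          # where this reader actually sits, -1 left to +1 right
    profile = 0.0            # the ranker's current estimate of what she likes

    for day in range(1, 16):
        stories = [random.uniform(-1, 1) for _ in range(50)]
        # serve the 10 stories most similar to the learned profile
        feed = sorted(stories, key=lambda lean: abs(lean - profile))[:10]
        # she clicks the served story closest to her own lean
        clicked = min(feed, key=lambda lean: abs(lean - USER_LEAN))
        profile = 0.5 * profile + 0.5 * clicked   # the ranker learns from the click
        print(f"day {day:2d}: feed average {sum(feed)/len(feed):+.2f}, "
              f"ranker profile {profile:+.2f}")
    # The feed starts out centrist and, click by click, converges on the
    # reader's own lean - the "filter bubble" in miniature.

Nothing in the loop is malicious; the ranker is just serving what gets clicked, and within a couple of weeks of simulated days the feed has drifted to one side.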

But even when we do come across examples of our political compadres getting incendiary, we may not "see" them.

Last year I attended a left-leaning graduate program in public policy which, like many programs, had a student listserv to announce events, group meetings, etc. One day I was taken aback by an invitation to a pro-immigration rally that suggested that "real" Americans would not support strict immigration policies. Given how many years liberals griped about suggestions that they were not "real Americans" for opposing Bush policies, I would have expected an uproar over the offensive email. But not a peep. None of my liberal friends noticed its offensiveness until I pointed it out to them. My conservative classmates, however, did notice. But by that time they were used to being dissed by the culture of the school.

Both my liberal and my conservative classmates were falling victim to confirmation bias. As its name suggests, confirmation bias means we tend to believe information that confirms our pre-existing view of the world. Information that doesn't fit that worldview either gets molded to fit, gets rejected or just gets edited out. When that email came across my Democratic friends' inboxes and they saw that it was about a pro-immigration event, they were disposed to think the author's opinions were correct and that she would be speaking civilly. Any comment offensive to conservatives would go unnoticed. Republican students, however, were primed to see opinions they would disagree with and, likely, a dismissive attitude toward conservatives. The uncivil comment would jump out at them.

Each saw what they expected to see.

Of course, it's not just in isolated incidents that confirmation bias and "filter bubbles" come to bear. Every day we see the transgressions of our enemies and miss the slips of our friends. Added up over the years, it's no longer a surprise that Democrats think Republicans are rude thugs and Republicans find Democrats to be patronizing prigs. That is, indeed, the picture each sees.

Which image is correct? Unfortunately that's not a question that can be answered. The best one could say is both, and neither.

fabulous confabulation

I've written this before - and will likely write it many times again - but by far the strongest factor in determining what you believe is the beliefs of the people around you. You are who you hang out with.

"Social proof," as social psychologists call the phenomenon, happens for many reasons. It's easier to do what your friends do because it saves you the time and mental energy of trying to figure out on your own what's "right" to do. It also keeps you from getting flack from those friends if you happen to disagree with them. But regardless of why we are so impressionable, about 90%-99% of our beliefs, preferences and values come by virtue of who are families are, where we went to school, the city we live in etc. (and that's being generous).

That may seem obvious to some. But for most of us, the idea that 99% of our beliefs and preferences aren't the product of independent thought may rankle. It feels like I'm doing my own thinking, but you're telling me I'm merely parroting my peers? Yup.

The reason it rankles is that our brain doesn't tell us we're just following the crowd; it makes us believe we've arrived at our own conclusions as the result of reasoned thought. That is, we confabulate.

A 2003 study by Geoffrey Cohen offers a stark example of social proof and confabulation at work. He gave a group of liberal and conservative participants two proposals for a welfare policy, one with generous benefits and one with stringent benefits. As expected, liberal participants liked the generous policy while the conservative subjects tended to prefer the stringent version. But then Cohen switched things up: he presented the generous policy as a Republican-supported initiative and the strict one as a Democratic plan, with arguments from either side tossed in. When he did, as social proof would predict, liberals were more likely to back the strict-benefits plan and conservatives the generous one.

Even though the participants were obviously swayed by which party supported the plans, that's not what they said happened. Instead subjects all reported that their preference was based upon the details of the proposal, their personal beliefs about government and their experiences with welfare.

Were they lying? Did they know their views were swayed by social proof but were ashamed to admit it? Anything's possible, but more likely the confabulation happened under the radar. Without missing a beat, their brains made up a rationale that suited their view of themselves as rational, independently thinking beings. Little did they - or any of us - know that we should never trust what our brains tell us.

Friday, February 18, 2011

a sweet strategy for liberals?

One wing of political psychology aims to ferret out any root differences between liberals and conservatives. Even though I'm not a big fan of this school of research (I find it more interesting to look for the behaviors we all share), a popular - and persuasive - theory is that liberals and conservatives differ on how they prioritize five core values.

Jonathan Haidt's Moral Foundations Theory says we all make moral judgments using some combination of five values: caring for others, fairness, loyalty, respect for authority and purity. Liberals, however, tend to place caring and fairness at the forefront of their moral decisions, while conservatives give more weight to loyalty, respect and purity.


Like other preferences, moral values combine a little rational thought with a lot of emotion. But when it comes to purity, the emotional factor takes on a visceral bent. The opposite of sensing purity is feeling disgust - and it turns out that people who are quick to feel disgust also tend to identify as conservative.

The connection isn't just metaphorical; it's physiological. So much so that researchers can manipulate your "disgustometer" by feeding you bitter or sweet foods. Serve us broccoli rabe and we're likely to judge our fellow humans more harshly. Make that a Krispy Kreme donut and we feel more kindly toward each other.

So, Madame President - maybe you want to reconsider your campaign to get Americans eating less sugar. Or at least, while you're at it, encourage us to swap the high-fructose corn syrup for some Splenda.

when the subconscious slips out

A great compilation of crossed-wires hijacking the brain:


more evidence the mind believes what it wants to believe

In the last post I suggested that when the brain is prepared to expect something, it will imperceptibly adjust its perceptions to "see" what it expects to see.

It will also "feel" what it expects to feel, according to a new study.

We're all familiar with the placebo effect: patients who pop sugar pills instead of real meds will often report their symptoms gone (more often than patients who take nothing at all). Now researchers have shown the reverse effect: if you give a patient a drug but don't tell them, they may not perceive any positive effects.

The researchers inflicted some pain (via heat) on a group of willing subjects, who were also hooked up to an IV drip. Without any painkillers, the subjects reported pain at a level of 66 (out of a maximum of 100). When a strong painkiller was added to the drip - without their knowing - their pain levels dropped to 55, and when they were told they had received a strong painkiller they reported pain at 39. That much is consistent with the "placebo effect": the knowledge that you're getting meds can be as effective as, or more effective than, the drug itself in reducing symptoms.

What struck the researchers, though, was what happened when they told subjects they had stopped giving the painkiller but in fact continued to drip it in. Subjects said their pain went back up to 64. They might as well not have been receiving any painkiller at all.

Pain is probably one of the most subjective of sensations, so it may not be a surprise that our experience of pain is "all in our head." Nonetheless, if our perception of pain can be entirely determined by our expectations of pain, then what other "perceptions" of ours may be partly, if not wholly, pre-set by our minds?

Tuesday, February 15, 2011

filling in the blanks

I'm doing a little research on "civil dialogue," trying to figure out why it is we humans have such trouble being polite when we disagree with each other.

One of the reasons I keep running into has to do with what may be our most fundamental bias: we have a hard time disagreeing because we really don't understand how an intelligent, informed and unbiased person could possibly not agree with us.

The belief that we see the world "as it is" - as opposed to the world from a particular perspective - is what some psychologists call "naive realism." Cognitive and social psychologists tell us that we don't perceive reality; we instead perceive our brain's interpretation of the billions of bits of data our senses send us.

That "interpretation" process is, of course, invisible. We're just aware of the end result - the "reality" our brain shows us. What we walk away with is the interpretation; the details get left behind.

This happens even in the most trivial of ways.

Say, for instance, I show you the image below and ask you what the second figure is. You'll tell me it's a B, yes?


Now I show your friend the next image and ask him the same question.

He's going to say it's a 13 - and be pretty darn certain about it. Of course you've both looked at the identical figure but quite sensibly "seen" very different things.

That's because you've both just been bamboozled by your brain. Not that your brain was trying to trick you; it was just doing what it does best - quickly analyzing information by referencing its vast store of patterns (or schemas). In fact, if you hand your brain a big "A" on one side of a page and a big "C" on the other, it already has a good idea of what it'll find in between. Even if the "B" you show it looks kind of funny, or is obscured in some way, your brain will "fill in" or otherwise modify the picture so that a "B" is all it could be.

Now let's say I ask you and your friend to remember what you saw and to compare notes the next day. I tell you you've seen the same image, but when you come together you realize that I'm obviously lying and your friend saw a different image - either that, or your friend is partially blind or is messing with you. After a bit, because you and your friend are clever people, you may begin to wonder whether it's possible you did see the same image - and then it'll occur to you that a B and a 13 can look very similar, perhaps even identical.

Everything gets smoothed out between you and your friend. You apologize for calling each other morons and move on. But let's say, instead of looking at a B/13 from different contexts, you were looking at a football play from opposite sides of a stadium (and rooting for different teams), or you were watching a congressional hearing from different sides of the aisle. The levels of "interpretation" become infinitely more complex. The day after, opponents will compare notes and be baffled at each other's take on the event. Unlike you and your friend, they will probably never get to the point where they wonder how they could have observed the same event yet "seen" different outcomes. Instead, they'll think the other was partially blind or deaf - or too biased by their beliefs to really "see" what happened.

Saturday, February 12, 2011

committing to Scientology

A couple months back I wrote about Cialdini's "commitment" principle: we rarely leap to new beliefs or actions, but if we first make a small commitment toward a belief or action, we're more apt to take larger and larger steps later on. That's how an organization that asks you to sign a petition for a cause one day will eventually get you to write a check for that cause.

The story of Paul Haggis' immersion in Scientology, in last week's New Yorker, is an example of the commitment principle writ large. Higher-ups in the church are given access to the "O.T." materials, the secret documents that explain how, 75 million years ago, an intergalactic leader named Xenu wiped out humans, transforming their souls into Thetans that now populate our bodies (or something like that). It's zany stuff, which makes it really hard to believe someone as sharp as Haggis could ever buy it. But by the time Haggis got to read the O.T. materials, he was already too committed:
'“The process of induction is so long and slow that you really do convince yourself of the truth of some of these things that don’t make sense,” Haggis told me. Although he refused to specify the contents of O.T. materials, on the ground that it offended Scientologists, he said, “If they’d sprung this stuff on me when I first walked in the door, I just would have laughed and left right away.” But by the time Haggis approached the O.T. III material he’d already been through several years of auditing. His wife was deeply involved in the church, as was his sister Kathy. Moreover, his first writing jobs had come through Scientology connections. He was now entrenched in the community. Success stories in the Scientology magazine Advance! added an aura of reality to the church’s claims. Haggis admits, “I was looking forward to enhanced abilities.” Moreover, he had invested a lot of money in the program. The incentive to believe was high.'

Thursday, February 10, 2011

machine vs. man

NASA released a report this week that is a social psychologist's and complexity theorist's wet dream.

The engineers at the geekiest of federal agencies were called on to weigh in on Toyota's sudden-acceleration mystery of 2009. After recalling cars suspected of having sticky accelerators and obstructing floor mats, Toyota faced a final round of complaints blaming an electronic glitch - "sudden unintended acceleration" - which sped cars out of control and some drivers to their deaths. Toyota stood by its computerized driving system and blamed human error for the "runaway" cars. Consumer groups and victims (many now taking their claims to court) continue to fault Toyota's computerized cars.

NASA's engineers have now sided with Toyota's. After sifting through 280,000 lines of software code, examining 78 incidents (58 of which had "black boxes" with pre-crash info) and subjecting 9 cars to extra tests - in a $1.5 million study that took 10 months - NASA concluded that any sudden acceleration happened because humans slammed on the accelerator when they meant to slam on the brakes.

Consumers are having none of it. They claim that NASA tested too few cars: if only 1 in 100,000 cars is expected to get glitchy, testing 9 cars is bound to show negative results. And then there's the testimony of drivers like Rhonda Smith: "I looked at my feet - and I know it wasn't the floor mat - and they were firmly planted on the brake. I still stand by the truth that I told. I do firmly believe that there is a vehicle defect that they've just not found."

With two such stark - and decidedly rational and plausible - points of view, you gotta wonder what's going on here. Who's right and who's making stuff up?

Social psychologists and complexity theorists might tell you that that is an answer we will never have. There are just too many things that could be going on.
  • Complexity and the machine. 280,000 lines of code is a lot of code. (By comparison, the space shuttle has 420,000 lines of code.) Even though NASA's own engineers combed through Toyota's code for bugs and found none, that doesn't mean one didn't slip by - or that a mini-mutation doesn't get picked up each time a new car's computer is produced. (If you've ever returned a faulty Dell or iPhone, this should be obvious.)
  • Complexity, the machine and the perfect storm. Even if every car came off the line with pristine code, the car's system hasn't been tested in every possible set of circumstances. Unpredictable, catastrophic things happen when a new combination of factors occurs at once - a "perfect storm." The code may work in all weather conditions, at all speeds, with any fuel level, with any other individual electronic feature running, and for all maneuvers - but fail in severe humidity, at 73 mph, with the tank 10% full, while the windshield wipers are on and you're signaling left and slowing down to make a hard left turn. That might be the moment the code runs into its one error. (A back-of-the-envelope sketch of just how many combinations there are follows this list.)
  • Complexity and the human brain. What's hardest to imagine is that Rhonda from above, and other drivers like her, could so steadfastly swear that their feet were on the brakes when, as Toyota and NASA claim, they were actually slamming the gas pedal. But brains, especially in moments of panic, have a way of getting glitchy themselves. We are all capable of seeing things - or of missing things that are right in front of our faces. Cops see guns that don't exist and lifeguards fail to see bodies lying at the bottom of pools. It's not a vision problem; our minds have an uncanny power to see what they believe exists, not what really does.
  • Complexity and human society. One of the things that helps us see things that aren't real is hearing about them in the news. I haven't seen any reports, but it wouldn't be surprising if claims of "sudden unintended acceleration" shot up once the first news stories appeared. Just as imagined illnesses can sweep through a school, so can imagined phenomena work their way through society. The complexity of human relations could also have something to do with NASA's report; groups have a way of steering themselves to false consensus through "groupthink." NASA is not immune; one of the classic case studies on groupthink comes from NASA itself, when its engineers okayed the launch of the space shuttle Challenger in 1986.
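How big is "every possible set of circumstances"? Here's the back-of-the-envelope count promised above - every factor and the number of levels for each are invented for illustration, and a real vehicle has far more of both:

    # Rough count of distinct driving situations an exhaustive test would
    # have to cover. Every factor and level count below is invented for
    # illustration; a real car has far more of both.
    from math import prod

    factor_levels = {
        "speed (5 mph bins)": 30,
        "fuel level (10% bins)": 10,
        "ambient temperature (5 degree bins)": 20,
        "humidity (10% bins)": 10,
        "wipers / lights / A/C / cruise, each on or off": 2 ** 4,
        "gear": 6,
        "maneuver (accelerating, braking, turning, ...)": 12,
    }

    combinations = prod(factor_levels.values())      # about 69 million
    print(f"{combinations:,} distinct situations")
    print(f"at one test per minute, nonstop: {combinations / 60 / 24 / 365:,.0f} years")

Even this toy list lands around 69 million combinations - on the order of 130 years of round-the-clock testing at a minute per test - which is why a rare-combination bug can sail through any finite test program.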

for those who think we're going to hell in a handbasket

I grew up in a city with one of the nation's highest crime rates: where subways were graffiti-covered, urine-aromaed and only to be used between the hours of 8am and 8pm (for those who valued their lives); where apartment owners didn't just have an extra bolt or two, but were fools not to have a metal bar jammed between the floorboards and the front door at night; and where getting mugged at knife-point was a rite of passage for every pre-teen.

In that same city today, violent crime has dropped by two-thirds; subways are clean, air-conditioned and packed until the early hours; the door bolts are not only gone, but New Yorkers are even known to leave their apartment doors unlocked; and the closest the average New Yorker gets to violence is an irate chihuahua at the nail salon.

So it can be disorienting when you come across an older New Yorker who shakes their head in dismay and complains about how the city is falling apart, times are tough and it's only going to get worse from here on out. In a city where every statistic and (almost) every neighborhood irrefutably demonstrates night and day improvement over the past 30 years, how could anybody imagine that the city is going downhill?

The answer, of course, is human nature. For whatever reason, we are programmed to romanticize the "good ol' days" and bemoan the sad trajectory of our communal future.

That's why it's good to get an occasional reminder that, globally speaking, things have been looking up over the past 50 years. Here's one comforting reminder from Alex Mack at Cato:


Tuesday, February 8, 2011

more evidence of polarization

As if it was needed:

  • G.W. Bush, Year 4
  • G.W. Bush, Year 5
  • G.W. Bush, Year 6
  • Obama, Year 2
  • G.W. Bush, Year 7
  • Obama, Year 1
  • Clinton, Year 4
  • G.W. Bush, Year 8
  • Reagan, Year 4
  • G.W. Bush, Year 3
(While my first guess about these numbers is that they might reflect an increase in Independents, which would leave the parties naturally more polarized - that doesn't look like it's the case.)

Monday, February 7, 2011

in defense of gaffes and ethnic jokes

There are few ways to better please the media than flubbing a fact or phrase (especially if you have a reputation as a non-serious individual) or coming off as a racist.

This week Christina Aguilera gratified the media with the former, rewriting the words of the national anthem during the Super Bowl, while BBC presenter Richard Hammond was the subject of the latter type of story when he suggested Mexican cars would be "lazy, feckless, flatulent, overweight, leaning against a fence asleep looking at a cactus with a blanket with a hole in the middle on as a coat."

The two stories are catnip for bloggers, tweeters and - when the media response has become a story in itself - for mainstream journalists. No need for extensive research or, really, any kind of analysis; just report the offending remark and watch the clicks roll in.

Of course, the popularity of such stories lies in two of our basic instincts: intellectual superiority and moral outrage.

The two hardly need explaining. Who, after all, does not get enjoyment out of seeing others screw up? (My guess is that some monks, a few geniuses and maybe some autistic people are the few who don't.) And who doesn't feel their blood boiling when they experience some injustice? (Except, of course, for sociopaths.)

But even while moral outrage has its societal benefits and intellectual superiority can feel so darn good, the two reactions have their drawbacks. For one, they let us too easily dismiss people - and their ideas - for mistakes that could easily be made by anyone. More importantly, they let us get distracted (focusing on missteps instead of substance) and encourage us to think in black and white rather than look for the complexity in life.

Thankfully, Aguilera and the BBC didn't take their public pummeling in silence. Christina asked us to look past the mistake and consider the sentiment behind her rendition of the national anthem. The BBC reminded us that humor is supposed to mess around with our ideas of right and wrong and that words shouldn't be taken out of context (in this case, a comedy show).

Sunday, February 6, 2011

making policy on sand-piles

Seed Magazine posts an interview with Joshua Cooper Ramo, strategy consultant and author of The Age of the Unthinkable, offering a handy metaphor for the growing complexity and instability of our world: sand-piles.

If you think back to the pleasure and frustration of building sand hills, castles and sarcophaguses on the beach, you know what he means. Sprinkled sand accumulates at the top of a pile and rolls down the sides in a predictable way - until an unpredictable fissure sets off a mini-avalanche or sand-glacier that crumbles your work. When and how the pile reaches its breaking point is not just a mystery to beach-goers; physicists' computer models can't handle that degree of complexity well enough to make accurate predictions either.
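The physicists' standard toy for this is the Bak-Tang-Wiesenfeld sandpile model, presumably the kind of model the interview alludes to. A minimal sketch, with the grid size and number of grains chosen arbitrarily:

    # A minimal Bak-Tang-Wiesenfeld sandpile: drop grains one at a time onto
    # a grid; any cell holding 4 or more grains topples, sending one grain to
    # each neighbor (grains falling off the edge are lost). Identical drops
    # produce avalanches of wildly different sizes. Grid size and the number
    # of drops are arbitrary choices for illustration.
    import random

    N = 20                                     # a 20 x 20 table of sand
    grid = [[0] * N for _ in range(N)]

    def drop_grain():
        """Add one grain at a random spot, relax the pile, return avalanche size."""
        r, c = random.randrange(N), random.randrange(N)
        grid[r][c] += 1
        unstable = [(r, c)] if grid[r][c] >= 4 else []
        topples = 0
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:                 # already relaxed by an earlier pass
                continue
            grid[i][j] -= 4
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:   # edge grains fall off the table
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        return topples

    random.seed(0)
    sizes = [drop_grain() for _ in range(20000)]
    print(f"median avalanche: {sorted(sizes)[len(sizes) // 2]} topples")
    print(f"largest avalanche: {max(sizes)} topples")

Most grains land without incident; every so often one grain, indistinguishable from the rest, sets off an avalanche that rearranges a big chunk of the pile - and nothing about the grain itself tells you which kind of drop it will be.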

In human history, grains of innovation have always built up our sandpile of civilization - while occasional cracks (financial crises, war, famine) set progress back a few years (or centuries). Nothing has changed today, except for the fact that the cycle of build-up and bust has been accelerated via information technology and globalization. Instead of expecting sudden tectonic shifts in the status quo every few centuries, we should be looking for them every couple of decades - or years.

But Ramo doesn't just advise that we be on the look-out for unpredictable changes; he suggests policy advisors factor in ever-shifting systems when designing policy. That is, instead of passing laws to fix problems once and for all, laws may need to be re-visited and tweaked often to adjust to the changing world.

That's wise advice for lawmakers, but advice it's hard to imagine politicians heeding. Preaching complexity and more involved government doesn't win many votes. To give lawmakers that latitude, we either have to convince Americans to embrace complexity - or we have to move to a more technocratic model (with agency work moved out of the public eye). Again, it's hard to see either happening in the foreseeable future.


Friday, February 4, 2011

twitter-bashing Gladwell

Malcolm Gladwell is at it again. After deflating the Twitter-revolution bubble on the pages of the New Yorker in October, this week he pooh-poohed the role of Twitter and other social networking technologies in fomenting the revolt in Egypt. He is now the official Twitter-debunker.

For the guy who practically invented the term "tipping point," it's a bizarre role.

Gladwell's argument, in the broadest of brush-strokes, is that Tweets do not make revolutions. "Deep roots and strong ties" do. In network theory, "strong" ties are those you make with people in your close circle; "weak" ties cross clusters of friends. According to Gladwell, people only start movements with people they know and trust; no matter how many Facebook friends we have, without real face-to-face relationships we'll be ineffective activists. Conversely, with strong connections and long-simmering frustrations, people can bring down Iron Curtains without the aid of Web 3.0.

The main problem with Gladwell's attack on the Twitterati is that he's battling straw men. No one claims that online networking on its own can spur a social movement; most internet cheerleaders simply claim that new information-sharing technologies can speed up broad political action. Given the pace with which Egypt, Jordan and Yemen pulled together forceful demonstrations after Tunisia set the model, that lesser claim is hard to dispute.

But if anyone were to argue that some, if not all, revolutions are more than accelerated by Twitter - that they are triggered by information sharing tools - you would think it would be Gladwell himself.

Clay Shirky lays the foundation for what such an argument would look like. Working off Susanne Lohmann's study of East Germany in 1989, Shirky suggests that mass movements don't happen just when everyone is deeply discontented with the status quo (whether the status quo is a dictator or institutional racism). They don't even happen when everyone knows that everybody else shares their discontent. But when everyone knows that everyone knows that everybody's pissed off - that's when group action starts.

This is what Shirky says the military calls "shared awareness." You can imagine, Shirky describes, looking at a fire and noticing that others see the fire too - but it's only when you make eye contact with the other onlookers that you can all go into coordinated action.

That moment of group "eye contact" is exactly the kind of spark you'd think Gladwell would dig. It's the kind of tipping point that triggers an "information cascade," to use network theory terminology. Without shared awareness, a nation of profoundly miserable people who know everyone else is equally unhappy could go on being miserable for a very long time, if not forever. Think of North Korea.

North Korea is, of course, the extreme case. But extreme examples are telling. In a nation with virtually no information sharing, it's clear that the point of "shared awareness" may never be reached. On the other hand, the Susanne Lohmann study and pretty much every revolution before 1990 make it clear that revolutions likewise don't need high-speed internet to get triggered.
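Network theorists often illustrate that tipping dynamic with a threshold model in the spirit of Granovetter - a simplification, not Lohmann's actual model - in which each person joins a protest only once enough visible others already have. The population size, thresholds and "visibility" levels below are invented for illustration:

    # A Granovetter-style threshold sketch (a stand-in for Lohmann's cascade
    # story, not her model). Each person joins the protest only once the
    # number of protesters she can SEE exceeds her personal threshold;
    # "visibility" stands in for shared awareness (state TV vs. Twitter).
    import random

    random.seed(42)
    N = 1000
    # how many visible protesters person i needs to see before joining
    thresholds = [random.randint(0, 300) for _ in range(N)]

    def run_cascade(visibility):
        """visibility = fraction of current protesters each person can observe."""
        protesting = sum(1 for t in thresholds if t == 0)   # the fearless hard core
        while True:
            visible = int(protesting * visibility)
            new_total = sum(1 for t in thresholds if t <= visible)
            if new_total == protesting:                     # no one new joins
                return protesting
            protesting = new_total

    for visibility in (0.05, 0.2, 0.5, 1.0):
        print(f"visibility {visibility:4.0%}: "
              f"{run_cascade(visibility):4d} of {N} end up in the street")

With low visibility the hard core protests alone and the cascade never starts; raise how much of the crowd everyone can see and the very same population of grievances tips into a mass movement. The discontent doesn't change - only the awareness of it does.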

Egyptians no doubt had other means of tipping each other off to "knowing they all knew." The big question is whether those other means - such as Al Jazeera, friend networks or the Muslim Brotherhood - would have been enough on their own to bring Egypt to shared awareness. It's a question that will never have an answer - but it's not implausible to argue that, without the internet, Egyptians might have had to wait many years longer before tipping each other into revolution.

entering a post-Beck era?

Whatever you think of his politics, most would agree that Glenn Beck is a newsman who prefers provoking emotions to objectively reporting the facts.

So for advocates of sensible journalism, it comes as good news that Beck's audience is on the wane - down to 1.8 million nightly viewers from his peak of 2.8 million.

But where have those viewers gone? No one knows, but there are a few possibilities.

1) They've tuned out - permanently. In the rosiest scenario, Beck viewers grew tired of conspiracy theories and fear-mongering - perhaps realizing that the United States was not, in fact, collapsing under socialism - and opted to spend more of their precious free time, if not reading The Economist, then at least watching more Family Guy reruns.

2) They've tuned out - for the time being. More likely, Beck's viewers haven't had an awakening and don't suddenly see Beck for the provocateur he is; but now that Republicans and the Tea Party have made gains in public office, they just don't feel the need to go to Beck for reassurance. Voices on the extreme tend to get more notice when their followers are freaked out. The lefty blogosphere lost a lot of its steam once Obama got into office; reassured that libertarians and born-again Christians were not going to take over America, liberals could put politics, and political blogs, to the side and get back to their day jobs. After the congressional elections in November, Beck viewers may be similarly mollified, to the dismay of Beck's advertisers.

3) They've gone to more vitriolic pastures. Most worrisome, it could be that Beck viewers haven't gone on permanent or temporary hiatus - but they've found pundits who stoke their fears even more. Even the most fantastical conspiracies can seem humdrum after you've heard them enough. Could it be that Beck listeners started to get bored with his rants, finding them too conventional and safe - and they're now listening to some guy that makes Beck seem positively Cronkite-ian?

Again, there's no way to tell which of the three scenarios is the truth.

Option #1 would be nice if it were true, but it goes against everything that is known about human nature. People rarely change their habits or their opinions; the idea that a million people suddenly realized Beck isn't credible is itself hard to believe. Humans, sadly, more often move farther to the extremes than tack back to the center, especially when we surround ourselves with people who think like we do - so #3 is probably closer to the truth.

But I'll put most of my money on option #2. Supporting evidence: the steepest decline in viewers came among 25-to-54-year-olds - that is, those most likely to have jobs, social lives and other interests calling them away from Beck-watching. With other demands on their time, this set would only be drawn to Beck at times of high political urgency (i.e., when the Democrats have full control of Congress and the White House), and would likewise cool off on Glenn when politics felt less dire. The best thing that could happen to Glenn would be a resounding victory for the Dems in 2012. (As a Democrat myself, I must admit I hope he gets that lucky.)

Thursday, February 3, 2011

justify your pleasure

NYC took one more step toward becoming our collective mother yesterday: the City Council voted 36-12 to ban cigarettes from the city's parks and beaches.

Predictably, there was no righteous roar from the public, no unleashing of civil libertarian anger clamoring that the Big Apple's nanny state had, this time, gone too far. I will go out on a limb and say New Yorkers' tepid response had less to do with an appreciation of the health concerns or the wisdom of the legislation and almost everything to do with the fact that a) they don't smoke and b) they don't like the smell of smoke.

Of course, this is not what the average New Yorker will tell you. Being a bright crowd, they'll probably come up with some seemingly plausible arguments for why banning butts from parks and beaches is a justifiable health measure. They may even toss in a statement about "protecting the rights of non-smokers." None of their individual arguments will add up (there's no evidence that, outside of living or working with a smoker, second-hand smoke causes disease in any appreciable way), but that won't stop the insistence that the ban is justified, because my dear New Yorkers will have fallen victim to the "nebulous badness" bias.

I don't know if anyone has identified this bias before, so let me outline its contours: something or someone has a number of attributes that are kind-of bad, but even though each attribute on its own wouldn't justify eliminating the thing or person in question, when you add them all up, the thing/person has got to go.

We saw this in the run-up to and wake of the Iraq invasion. Saddam was a bad-ass dictator (true, but the US wasn't invading the many other bad-ass dictators who were our allies or who didn't sit atop oil-rich nations). Saddam was developing nukes that he could use to destabilize the region or even hand over to terrorists (there was no hard evidence of nukes, and even less evidence or logic as to why Saddam would give any to Islamists who hated him). Saddam was an unpredictable, suicidal, rogue leader (even though his history showed him to be calculating and, again, even though the US doesn't go around plucking off rogue leaders). The list could go on. But in spite of the fact that there was no one clear reason to overthrow Saddam, when you added up all the half-baked reasons people were left with a feeling of "Well, he's just really bad, so I guess we've got to take him out."

Now imagine the "nebulous badness" bias at work in a criminal trial. "Okay, so there's no hard evidence that this guy killed his father. But did you see him in the courtroom? He didn't look so upset about his dad's death. Hasn't he also committed other violent crimes before? And he hasn't paid child support for years. Let's face it; this guy's bad news. Society's clearly better off with him locked up - and, you know, he's probably better off too. Maybe he'll get a GED or find Jesus. When you think about it, how could anyone sensible argue to let this guy go free?"

Our justice system, thankfully, frowns upon convictions based on "nebulous badness." It prefers proof that the accused has broken a specific law before removing him or her from society.

Legislatures don't have such strict standards, of course. Laws are passed all the time in order to improve the common good, without any need to "prove" that the common good will, indeed, be improved. But even so, lawmakers are expected to have reasonable justification for their laws.

NYC city councilors, lacking any such reasonable justification for banning cigarettes from everywhere except sidewalks and private apartments, have relied instead on "nebulous badness." Let's face it, cigarettes are smelly and they kill. They may not kill people who happen to be sitting 10 feet downwind on a public beach, but why quibble - the stuff is bad.

It would be preferable if the city were more honest. They're not banning smoking to protect the lives of innocent New Yorkers. They are banning something that the majority of New Yorkers find unpleasant, and couching that ban in language about public health and individual rights so it feels less like "tyranny of the majority" and more like "the right thing to do."

Wednesday, February 2, 2011

or maybe corruption empowers

Yesterday, when asking why politicians are so ethically challenged, I flippantly suggested that it's more likely that power leads to corruption than that corruption leads to power (at least in a democracy).

It turns out I'm wrong, according to a group of Dutch researchers. They tested a few scenarios in which people either followed or broke the rules. The rule breakers - those who, for example, tapped their cigarette ashes onto the floor or fudged the bookkeeping - were viewed as more powerful than obedient rule followers.

If we're naturally impressed by people who act like the rules don't apply to them, that's bad news for democracy - or any institution where an individual's success depends on others' perceptions. Could it be that we rally behind people who think they're above the law? No wonder, then, when they get into office that they're so willing to ignore the law.

Tuesday, February 1, 2011

power corrupts, but for a good cause

Politicians, few would dispute, are a spineless, duplicitous and conniving class, all too willing to give in to corrupting forces (illegal or otherwise).

While that much is clear, it's harder to say why politicians are more unethical than your average citizen. It could be that the profession attracts people who lack moral fiber. More likely, as Lord Acton's adage has it, it's power itself that makes politicians ethically challenged.

A few researchers from the University of Richmond have a generous view of why this might be the case. It's not that, once in power, politicians simply do whatever they can to keep their clutches on it. As usual with the human brain, it's far more complicated than that. According to the Richmond researchers, leaders of a group tend to inflate the importance of their group's goals (compared to mere members of that group). In doing so, leaders are also more willing to justify unethical behavior as a means to achieving their group's ends. More disturbingly still, the loftier they judge their group's goals, the more easily they are able to justify immoral behavior.

This, of course, is the scary thing about the human brain; we rarely (if ever) do things that we think are outright bad - but we often do bad things in the name of the good (and so they don't seem so bad, really). It's also why pragmatists get uneasy with ideologues and zealots; heaven on earth is a nice idea, but getting there can be an ugly process.