Your Brain Hates You, and Other Hazards of Metrics

I love measuring things.

That’s not terribly special, of course. Human beings generally love knowing how much that is and which one is more and am I faster and fun stuff like that. We measure our economies, our jogging, our page loads, our friends, our clickthroughs, our sleep cycles, our faves and our carbon footprints… We invent buzzwords like “the quantified life” and “big data” to describe our relationships with those numbers and graphs and goals, and we build whole jobs and companies and industries around making them make sense.

Which is great, because measuring things helps us recognize problems and learn and improve and grow. Knowledge is power! Yay! Except, of course, when it isn’t.

The funny thing is that measuring something almost inevitably causes strange and unexpected stuff to happen. It’s not that measuring is bad, of course. The real problem is the human brain.

“I wanted to go canoeing this week, but I realized I wouldn’t get any FitBit points.” —My co-worker, having a horrible realization

People-brains are fantastic organs full of great tricks, but they’re also littered with hard-wired shortcuts, biases, and ruts — pitfalls that can sabotage well-meaning metrics-driven approaches to problem solving.

My plan would’ve worked if it wasn’t for those meddling humans

In 1924, the Western Electric Company wanted to figure out how to make their factory workers more productive. With the help of management experts from the Harvard Business School, they turned the Hawthorne Works plant in Cicero, Illinois into a giant A/B test. Over the course of nine years, they repeatedly changed factory conditions and dutifully recorded the results. It was a data lover’s dream come true!

In one particular test, they changed the lighting levels in the factory every week. Would bright lights keep everyone on their toes, or would dim lighting lead to more relaxed, focused workers? The answer was clear: Yes. Both. Or… no? Maybe neither?

Crap.

Factory output, it turns out, went up when they brightened the lights. It also stayed up when they dimmed them, and stayed up when they returned to normal. And then, when the experiment ended, factory output drifted back to normal. The same pattern played out in other experiments, as well.

The sad truth the Harvard and Western Electric researchers discovered came to be known as the Hawthorne Effect, a form of observation bias. Simply knowing that they’re being measured changes people’s behavior, skewing attempts to gather actionable data.

Oh, don’t worry — it gets worse.

Looking into research on cognitive biases yields piles of well-researched pitfalls that should give any decision maker pause.

Humans tend to rank information and events we can easily remember as more relevant than the less noteworthy stuff — it’s called the Availability Heuristic. The effect can cascade, too: if you spend all of your time reading about startups that make it big, you’ll find it easier to recall success stories, leading to rosier predictions even when they’re unmerited. This bias can also skew the value we place on easy-to-access stats like follows, comment counts, and the seductively simple default view in Google Analytics.

Trying to figure out how to measure something complicated? Be careful — “close enough” metrics rarely stay that way. Goodhart’s Law, Campbell’s Law, and a host of other nerdy postulates all describe the same principle: If you care about X, but can only measure Y, it doesn’t matter how closely related they seem — people will inevitably game the system by focusing on Y. Test scores as a measure of intelligence, gross sales as a measure of business health, and lines of code as a measure of developer productivity are all familiar examples.
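If you want to see how fast a proxy drifts away from the thing you actually care about, here’s a toy simulation of the lines-of-code example. Everything in it is invented for illustration (the numbers, the two work “strategies,” the usefulness scores); it just shows the shape of the problem, not data from any real team.

```python
import random

def true_value(feature):
    """The thing we actually care about: does the work help anyone?"""
    return feature["usefulness"]

def proxy_metric(feature):
    """The thing we can easily count: lines of code shipped."""
    return feature["lines_of_code"]

def build_something_useful():
    # Honest work: real value, modest line counts.
    return {"usefulness": random.uniform(5, 10),
            "lines_of_code": random.randint(50, 200)}

def pad_the_diff():
    # Gaming the metric: huge diffs, almost no value.
    return {"usefulness": random.uniform(0, 1),
            "lines_of_code": random.randint(500, 1000)}

honest = [build_something_useful() for _ in range(52)]  # a year of weeks
gamed = [pad_the_diff() for _ in range(52)]

print("Honest team -> proxy:", sum(map(proxy_metric, honest)),
      "| real value:", round(sum(map(true_value, honest))))
print("Gaming team -> proxy:", sum(map(proxy_metric, gamed)),
      "| real value:", round(sum(map(true_value, gamed))))
```

Run it and the “gaming” team wins on the proxy by a mile while delivering a fraction of the value. The metric didn’t lie, exactly; it just stopped meaning what we wanted it to mean the moment we started rewarding it.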

Don’t hate the player

Excellent books like Eli Pariser’s The Filter Bubble, Clay Johnson’s The Information Diet, and David Boyle’s The Sum of Our Discontent all explore the cultural effects of our fixation on metrics. And they generally agree that the dangers don’t lie in measurements, tests, metrics, or numbers in and of themselves.

Rather, the danger lies in ignoring our own human hiccups and assuming that the data we gather and present can be trusted and understood without hard work and lots of humility.

In my work with clients in the web world, that’s the kind of effort that separates tactics from strategy. Increasing the number of blog posts published on a web site, boosting ad impressions, or convincing more users to like the company Facebook page are all easy in isolation. Keeping a steady focus on why we’re doing those things and whether we’re accomplishing our broader goals, though — that’s what can keep us from chasing easy but deceptive measures of success.

In conclusion, my Klout score is down

Tricky or not, I still love measuring stuff. My first contribution to an open source project was a product ratings engine. I wrote a web crawler to compare the performance of Kickstarter projects for fun! Prompted by a co-worker’s bad day, I designed a tool to track the emotional health of isolated remote workers. And these days, I’m elbows deep in mad-scientist plans to monitor my cats’ kibble intake with a RESTful API.
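For the curious, a purely hypothetical sketch of what that kibble API might look like, using Flask. The endpoint paths, the field names, and the in-memory “database” are all invented for this post; the real mad-scientist plans are still on the whiteboard.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
meals = []  # in-memory store; a real version would want an actual database

@app.route("/cats/<cat>/meals", methods=["POST"])
def log_meal(cat):
    """Record how many grams of kibble one cat just ate."""
    meal = {"cat": cat, "grams": request.get_json()["grams"]}
    meals.append(meal)
    return jsonify(meal), 201

@app.route("/cats/<cat>/meals", methods=["GET"])
def list_meals(cat):
    """Return every logged meal for one cat."""
    return jsonify([m for m in meals if m["cat"] == cat])

if __name__ == "__main__":
    app.run()
```

(And yes, I’m fully aware that the moment I start graphing grams-per-day, the Hawthorne Effect will kick in and I’ll start judging my cats. Some lessons you have to learn twice.)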

But when important decisions have to be made, I have to temper that natural enthusiasm. I ask people with different perspectives and experience to check my assumptions. I force myself to argue against my own data-driven conclusions, as honestly as I can. Most importantly, I try to ensure that the big-picture goals are crystal clear. We’re only human, after all, and outsmarting our own brains is a difficult prospect. Admitting our own biases and respecting the limits of our own understanding — those are the first steps to making better decisions.
