Do Your Analytics Cheat the Truth?

Here is an excerpt from an article written by Michael Schrage for Harvard Business Review and the HBR Blog Network. To read the complete article, check out the wealth of free resources, obtain subscription information, or sign up for HBR email alerts, please click here.

*      *      *

Everyone’s heard the truism that there are lies, damned lies, and statistics. But sitting through a welter of analytics-driven top-management presentations provokes me into proposing a cynical revision: There are liars, damned liars, and statisticians.

The rise of analytics-informed insight and decision-making is welcome. The disingenuous and deceptive manner in which many of these statistics are presented is not. I’m simultaneously stunned and disappointed by how egregiously manipulative these analytics have become at the very highest levels of enterprise oversight. The only thing more surprising — and more disappointing — is how unwilling or unable so many senior executives are to ask simple questions about the analytics they see.

At one financial services firm, for example, call center analytics showed spike after spike of negative customer satisfaction numbers. Hold times and “problem resolution” times had noticeably increased. The presenting executive clearly sought greater funding and training for her group. The implied threat was that the firm’s reputation for swift and responsive service was at risk.

Three simple but pointed questions later, her analytic gamesmanship became clear. What had been presented as a disturbing customer service trend was, in large part, due to a policy change affecting about 20% of the firm’s newly retired customers. Between their age, possible tax implications, and an approval process requiring coordination with another department, these calls frequently stretched beyond 35 to 45 minutes.

What made the situation worse (and what might explain why the presenter chose not to break out the data) was a management decision not to route those calls to a specially trained team but to allow any customer representative to process the query instead. The additional delays propagated through the queue and undermined the entire function’s performance.

Every single one of the presenter’s numbers was technically accurate. But they were aggregated in a manner that made it look as if the function were under-resourced. The analytics deliberately concealed the outlier segment statistically responsible for making the numbers dramatically worse.
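The arithmetic behind this kind of concealment is simple to sketch. With invented numbers (the article does not give actual handle times), a small outlier segment of long calls can drag the blended average far above what routine callers actually experience:

```python
# Hypothetical illustration (all numbers invented, not from the article):
# how aggregating call-handle times lets one customer segment make an
# entire function look under-resourced.
routine_calls = [6.0] * 80    # 80% of calls: ~6-minute handle time
retiree_calls = [40.0] * 20   # 20% of calls: complex, ~40-minute handle time

all_calls = routine_calls + retiree_calls
aggregate_avg = sum(all_calls) / len(all_calls)
routine_avg = sum(routine_calls) / len(routine_calls)

print(f"Blended average:     {aggregate_avg:.1f} min")  # 12.8 min -- looks alarming
print(f"Routine calls alone: {routine_avg:.1f} min")    # 6.0 min -- the function is fine
```

Presenting only the 12.8-minute blended figure is technically accurate, yet it hides that four out of five callers were served quickly; breaking out the segments tells the opposite story.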

More damning was a simple queuing theory simulation demonstrating that if the call center had made even marginal changes in how it chose to manage that exceptional 20%, the aggregate call center performance numbers would have been virtually unaffected. Poor management, not systems underinvestment, was the real root cause.
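The kind of what-if simulation described above can be sketched in a few lines. This is a minimal single-queue, multi-server model with invented parameters (call volumes, handle times, and staffing levels are all assumptions, not figures from the article); it compares average waits when the complex 20% of calls shares the general queue against routing them to a dedicated team of the same overall headcount:

```python
# Minimal queuing what-if sketch (all parameters invented): compare pooled
# routing of complex calls against a dedicated team, holding headcount fixed.
import heapq
import random

def simulate(arrivals, service_times, n_servers):
    """FCFS single queue with n_servers identical agents; returns mean wait."""
    free_at = [0.0] * n_servers           # time each agent next becomes free
    heapq.heapify(free_at)
    waits = []
    for arrive, service in zip(arrivals, service_times):
        start = max(arrive, free_at[0])   # earliest available agent
        waits.append(start - arrive)
        heapq.heapreplace(free_at, start + service)
    return sum(waits) / len(waits)

random.seed(42)
n = 5000
arrivals, t = [], 0.0
for _ in range(n):
    t += random.expovariate(1.0)          # roughly one call per minute
    arrivals.append(t)

# 20% of calls are complex (mean 40 min), 80% routine (mean 6 min)
is_complex = [random.random() < 0.2 for _ in range(n)]
services = [random.expovariate(1 / 40) if k else random.expovariate(1 / 6)
            for k in is_complex]

# Scenario A: all 16 agents share one pool; any agent takes any call
pooled_wait = simulate(arrivals, services, 16)

# Scenario B: same 16 agents split -- 6 on routine calls, 10 on complex ones
routine = [(a, s) for a, s, k in zip(arrivals, services, is_complex) if not k]
complexq = [(a, s) for a, s, k in zip(arrivals, services, is_complex) if k]
routine_wait = simulate([a for a, _ in routine], [s for _, s in routine], 6)
complex_wait = simulate([a for a, _ in complexq], [s for _, s in complexq], 10)

print(f"Pooled, all calls:      avg wait {pooled_wait:.1f} min")
print(f"Dedicated, routine:     avg wait {routine_wait:.1f} min")
print(f"Dedicated, complex:     avg wait {complex_wait:.1f} min")
```

Running scenarios like this side by side is exactly the kind of inquiry the presenter's aggregated numbers discouraged: it separates what the routine 80% of callers experience from what the exceptional 20% experience, and shows how sensitive (or insensitive) the aggregates are to how that 20% is routed.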

Increasingly, I observe statistical sophisticates indulging in analytic advocacy — that is, the numbers are deployed to influence and win arguments rather than identify underlying dynamics and generate insight. This is particularly disturbing because while the analytics — in the strictest technical sense — accurately portray a situation, they do so in a way that discourages useful inquiry.

I always insist that analytics presentations and presenters explicitly identify the outliers, how they were defined and dealt with, and — most importantly — what the analytics would look like if they didn’t exist. It’s astonishing what you find when you make the outliers as important as the aggregates and averages in understanding the analytics.

*      *      *

Here is a direct link to the complete article.

Michael Schrage, a research fellow at MIT Sloan School’s Center for Digital Business, is the author of the books Serious Play (HBR Press), Who Do You Want Your Customers to Become? (HBR Press) and The Innovator’s Hypothesis (MIT Press).
