The myth of actionability
A few weeks back, Gary Angel from SEMphonic published an oddly titled post called “Why 100% Conversion is a Very Bad Thing” in which he calls into question the whole notion that a key performance indicator (KPI) is only good if a change in the indicator suggests a specific action that can be taken. Gary calls this kind of thinking “the myth of actionability” and says:
“The myth of actionability is conventional wisdom in web analytics – and it suggests that you shouldn’t report on anything unless changes in the measured value can be directly addressed by specific actions. In other words, if you can’t answer the question “What would I do if the value changed up/down?” then you shouldn’t report on the measure. This criteria is designed to eliminate lots of useless data from report sets and insure that what is in report sets has substantive value.
Unfortunately, I believe the criteria of actionability is unsound in almost every way: being both wrong-headed about the purpose of reporting and impossible to actually satisfy in the real-world.”
Obviously Gary is not one to pull punches. Unsound, wrong-headed, impossible … yowch!
Gary calls the myth of actionability “conventional wisdom” and I absolutely agree with him. Everywhere you go, when people are working out key performance indicators and building dashboards, the basis for inclusion or exclusion is usually “is there some action that a change in this metric will encourage us to take?”
Where does this kind of thinking arise? Well, let’s look at page 10 of The Big Book of Key Performance Indicators by Eric T. Peterson. In the section titled “What is a Key Performance Indicator?”, under the subsection on “Action”, in 2006 I explicitly stated:
“Key performance indicators should either drive action or provide a warm, comforting feeling to the reader; they should never be met with a blank stare. Ask yourself “If this number improves by 10 percent who should I congratulate?” and “If this number declines by 10 percent who should I scream at?” If you don’t have a good answer for both questions, likely the metric is interesting but not a key performance indicator. There is enough data in the world already. What most people need is data that helps them make decisions. If you’re only providing raw data, you’re part of the problem. If you’re providing clearly actionable data, you’re part of the solution. If you discover you’re already doing the latter (being part of the solution), give yourself a hug.”
Hmmm, it seems like I am one of the sources of “the myth of actionability” Gary is railing against. But it gets worse; while I was at JupiterResearch I published and presented a number of times on the subject of key performance indicators, and every time I talked about the subject, I stated unequivocally that the “core” of a good key performance indicator was its ability to drive action.
Remember, Gary said “Unsound, wrong-headed, impossible …”
Now, I don’t feel the need to defend myself, not because I disagree with Gary, but rather because I think Gary (and perhaps other folks) has taken the interpretation of “needs to drive action” to an unreasonable extreme. Let’s quickly have a look at the history of Eric T. Peterson’s guidance on key performance indicators:
- In 2004, in my first book, Web Analytics Demystified, I wrote in Chapter 15: Bringing it All Together Using Key Performance Indicators that “the most common complaint about Web analytics data and the applications that provide said data is that there is simply “too much information”; too many graphs, too many charts, too many options, too many variables—too much for the average user to understand and make use of.”
- In 2005, in my second book, Web Site Measurement Hacks, I wrote in Hack #94: Use Key Performance Indicators that “the best KPIs are those that, when people look at them and realize that they’ve gone down from week to week, make people freak out and call meetings.” I also said, relevant to Gary’s complaint regarding the establishment of which indicators to use, “if you’re thinking about a number but cannot think of any action you would take if that number absolutely tanks, set that number aside.”
- In 2006, in my third book, The Big Book of Key Performance Indicators, I wrote “key performance indicators should either drive action or provide a warm, comforting feeling to the reader; they should never be met with a blank stare. Ask yourself “If this number improves by 10 percent who should I congratulate?” and “If this number declines by 10 percent who should I scream at?” If you don’t have a good answer for both questions, likely the metric is interesting but not a key performance indicator.”
If I’m wrong, at least I’m consistent, huh?
When I first started pushing the idea that indicators needed to be tied to some type of reasonable action, my statements were a direct response to the dominant paradigm at the time: that all the information you needed to run your online business was contained in the hundreds of reports all web analytics applications generate, and all you needed to do was find the right data and take the appropriate action.
The problem I saw with this was, well, almost nobody was being successful with this strategy. Not only were most companies hamstrung by data overload leading to analysis paralysis, but senior managers were asking for relevant data from the web analytics systems and not getting particularly satisfying responses from the people running those systems. Relatively boring metrics like “page views” and “visits” were being pushed up the food chain, but except in rare cases, increases in page views and visits were only loosely tied to increasing business success.
And so I proposed an Occam’s Razor for web analytics reporting, one that mandated that companies actually carefully consider the metrics bound for widespread distribution, and choose those metrics based on their ability to generate some action.
I never said, and I’m not sure anyone really says, that the “actions” to be taken were as granular and spuriously precise as “if this metric declines, reduce your PPC spending by 10% per 3-point decline observed.” Web analytics just doesn’t work that way, folks, and here I agree with Gary when he writes:
“No single measurement can ever suggest an action – cannot, in fact, even be interpreted directionally as either good or bad. Only in the context of a complete view of the business system (and the knowledge that all other things are equal or heading in some specific direction) can a judgement be made about the meaning of single measure. I think this make it clear that no one measure can ever really be “actionable” when taken in isolation. And if no one measure is actionable, then surely the criteria of actionability is fruitless.”
So let me clarify my position, as I am perhaps the high priest of the “cult of actionability”:
- At design time, key performance indicators should be included or excluded from a hierarchical reporting strategy as outlined in The Big Book of Key Performance Indicators based on the likelihood that the indicator will spur some type of action in the organization when the indicator unexpectedly changes.
- The action the organization would take, when unexpected change occurs, is never precise. The action is nearly always “conduct additional analysis” at which time the indicator’s definition provides at least the nominal basis for the starting point of the analysis.
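To make the second point concrete, here is a minimal sketch of what “unexpected change spurs additional analysis” might look like in practice. Everything here — the function name, the conversion-rate figures, and the two-standard-deviation threshold — is an illustrative assumption of mine, not anything prescribed in the book or in Gary’s post; the only point is that the trigger is “go investigate,” never an automatic business action.

```python
# Hypothetical sketch: flag a KPI for follow-up analysis when its latest
# value deviates unexpectedly from its recent history. The 2-sigma
# threshold and all names are illustrative assumptions.
from statistics import mean, stdev

def needs_analysis(history, latest, sigmas=2.0):
    """Return True when `latest` falls outside `sigmas` standard
    deviations of the historical values -- an unexpected change that
    should spur additional analysis, not an automatic action."""
    if len(history) < 2:
        return False  # not enough history to define "expected"
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return latest != mu  # flat history: any movement is unexpected
    return abs(latest - mu) > sigmas * sd

# A stable weekly conversion-rate history: a sudden drop gets flagged
# for review, while a value inside the expected band does not.
weekly_conversion = [2.1, 2.0, 2.2, 2.1, 2.0]
print(needs_analysis(weekly_conversion, 1.2))  # True - investigate
print(needs_analysis(weekly_conversion, 2.1))  # False - within range
```

Note that the flag carries no prescription beyond “conduct additional analysis” — the KPI’s definition tells the analyst where to start looking, which is exactly the nominal basis described above.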
At the end of the day, my view on key performance indicators is that they are intended to promote the visibility of web analytics throughout the organization, especially to the upper echelons where it is increasingly unlikely that traditional web analytics reports will be given the attention they deserve.
By creating a reasonable set of metrics and indicators, derived directly from the site’s business objectives and supporting click-stream activities, and then delivering said metrics throughout the organization with serious thought to definition, presentation, and potential for action, companies have been shown to significantly improve the level of attention given to web analytics data.
All of this helps to directly combat Gary’s observation that he too is “often disappointed in the report sets [SEMphonic] generate[s].” At the end of the day, regardless of which side of the fence you’re on, I believe we all agree that the central goal of web analytics is to help the business make better decisions. We do this by continually refining the web analytics business process and striving to better educate decision makers about the actions they can take to improve the web site. We repeat as necessary and hopefully go to bed happy.
Anyway, I’m a huge fan of Gary Angel so I hope we can continue this debate. What do you think? Am I crazy? Is Gary crazy? Or, like so much in our industry, is the reality something between the lines?