Reporting

The "K" in "KPI" is not for "1,000"

At the core of any effective performance measurement process are key performance indicators, or KPIs.

Did you catch the redundancy in that statement? Performance measurement uses performance indicators. What gets my goat — because it drives report bloat and the scheduled production of an unnecessary sea of data — is how often the “K” in KPI gets ignored. More times than I can count, I’ve been sent a “list of KPIs” that, rather than being a set of 3-5 measures with established targets, is a barfed-out list of metrics and data:

[Image: “Barfing Metrics” — a sprawling list of metrics and data]

I had a self-humoring epiphany last week that, perhaps, marketers get confused by the acronym and think that “K” stands for “1,000” rather than for “key!” Through that lens, perhaps they’re falling short — lists of 20 or 30 KPIs are still well short of 1,000! My favorite response to my idle epiphany (shared on Twitter, of course, because that’s what Twitter is for, right?) was from Eric Matisoff:

[Image: tweet from Eric Matisoff about “key KPIs”]

Not only did I realize that I’d seen the phrase “key KPIs” used myself…I then saw it in writing again two days later!

NO, people! No. No. NO!!!

This bothers me (obviously!) — not just when it happens, but the fact that it happens so often. So, why does it happen, and what can we do about it?

The History of Digital Analytics Does Not Help

As an industry, we are stuck with a pretty persistent albatross of history. When I started in web analytics, the data we had access to was generated once a month when our web analytics platform (Netgenesis) crunched through the server log files and published several hundred reports as static HTML pages. The analysts needed to know what those reports were so that they could quickly find the ones that would be most useful in answering the business questions at hand. When no such report was in the monthly list of published reports, we would either dive into a cumbersome (hours to run a simple query) ad hoc analysis tool, configure a new report to be added to the monthly list, or both.

We might look at the new report once or twice over the next few months…but the report never went away.

It got to the point where it took the first 10 days of each month for the ever-growing list of monthly reports to be published by the tool. In many cases, data from those reports was getting pulled into other reports with data from other sources. We got to that dreaded point where the report for any given month was often not published until 3 weeks into the following month. Egad!

But, in some ways, it was our only option. We didn’t have the quick and efficient access to ad hoc queries of the data that we now have on many fronts. So, the reports were, really, mini data marts. High-latency, expensive, and low-value mini data marts, but mini data marts nonetheless. Somehow, though, we often still seem to be stuck with that mindset: a recurring report is the one shot we have to pull all the data we might want to look at. That’s silly. And inefficient. Our monthly (or on-demand) performance measurement reports need to be short (one screen), clear (organized around business goals), and readily interpreted (an “at a glance” read of whether goals are being met or not).
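As a minimal sketch of what that “at a glance” read might look like if you scripted it yourself (the KPI names, actuals, and targets below are hypothetical examples, not from any real report):

```python
# Minimal sketch: an "at a glance" goal read for a short KPI list.
# KPI names, actuals, and targets are hypothetical examples.

def kpi_status(actual, target):
    """Return a simple met/missed flag for one KPI versus its target."""
    return "MET" if actual >= target else "MISSED"

kpis = {
    "Qualified leads": (1200, 1000),  # (actual, target)
    "Email signups":   (430, 500),
    "Revenue ($K)":    (310, 300),
}

for name, (actual, target) in kpis.items():
    print(f"{name}: {actual} vs. target {target} -> {kpi_status(actual, target)}")
```

The point of the sketch is the shape of the output, not the tooling: 3-5 rows, each tied to a goal, each readable in a second.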

KPIs Are Actually Quite Simple (if not Easy) to Identify

KPIs are the core of performance measurement. They’re not there for analysis (although they may be the jumping off point that triggers analysis). They’re not the only data that anyone can ever look at. They’re not even the only data that will go on a dashboard (but they will get much more prominent treatment than other metrics on the dashboard). I use the “two magic questions” to identify KPIs:

  1. What are we trying to achieve?
  2. How will we know if we’re doing that?

The answer to the second question is our list of KPIs, but we have to clearly and concisely articulate what we’re trying to achieve first! And that question gets skipped as often as Lindsay Lohan dons an ankle monitor.

I like to think of the answer to the first question as the conversation I would have with a company executive when we find ourselves riding on an elevator and making idle chit chat. She asks, “What are you working on these days?” I (the marketer) respond:

  • “Rolling out our presence on Twitter.”
  • “Creating a new microsite for our latest campaign.”
  • “Redesigning the home page of the site.”
  • “Expanding our paid media investment to Facebook.”

She then asks, “What’s that going to do for us?” (This is the first of the two magic questions.) I’m not going to start spouting metrics. I’m going to answer the question succinctly in a way that expresses the value to the business:

  • “With Twitter, we’re working to put our brand and our brand’s personality in the minds of more consumers by engaging with them in a positive, timely, and meaningful way.”
  • “We will be giving consumers who find out about our new product through any channel a place to go to get more detailed information so that they can purchase with confidence.”
  • “We will make visitors to our home page more aware of the services we offer, rather than just the products we sell.”
  • “We will introduce potential customers to our brand efficiently by targeting consumers who have a profile and interests that make them likely targets for our products.”

As marketers, we actually tend to suck at having a ready and repeatable answer to that question. If we have that, then we’re 75% of the way to identifying a short list of meaningful KPIs, because candidate KPIs can then be viewed through the lens of whether they are actually appropriate metrics for measuring what we’re trying to achieve.

A KPI Without a Target Is Not a KPI

“Visits is one of our KPIs, and we had 225,000 visits to the site last month.”

Is that good? Bad? Who knows? In the absence of an explicitly articulated target, we simply look at how the KPI changed from the prior month and, perhaps, how it compared to the same month in the prior year. That’s fine…if the target established for the KPI was based on one of these historical baselines. All too often, though, there is no agreement and alignment around what the target is.

If we accept that KPIs have to explicitly have targets set (and those targets aren’t necessarily fixed numbers — they can be based on some expected growth percentage or comparison), then the list of KPIs automatically gets shorter. Setting targets takes thought and effort, so it’s not practical to set targets for 25 different metrics. If we home in on 3-5 KPIs, then we can gnash our teeth about the lack of historical baselines or industry benchmarks to use in setting targets…and then set targets anyway! We will roll up our sleeves, get creative, realize that there is a SWAG aspect of setting the target…and then set a target that we will use as an appropriate frame of reference going forward. It’s not an impossible exercise, nor is it one that takes an undue amount of time.
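As a concrete sketch of a growth-based target — the baseline and the 10% growth rate here are illustrative assumptions, exactly the kind of SWAG the exercise requires:

```python
# Minimal sketch: deriving a KPI target from a historical baseline plus an
# agreed-upon (SWAG) growth rate, rather than leaving the KPI target-less.
# The baseline and growth rate are hypothetical examples.

def growth_target(baseline, expected_growth):
    """Target = last period's value grown by the agreed-upon percentage."""
    return baseline * (1 + expected_growth)

last_month_visits = 225_000
target = growth_target(last_month_visits, 0.10)  # stakeholders agreed on 10%
print(f"Target: {target:,.0f} visits")  # prints "Target: 247,500 visits"
```

The arithmetic is trivial on purpose; the hard (and valuable) part is getting stakeholders to agree on the 10% before the month starts.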

Did I Mention that “K” Is for “Key”?

Perhaps it is a quixotic quest, but I’ll take any company I can get in this battle for sanity. Let’s get the “key” back in KPIs! If you’re up for saddling up and tilting at this particular windmill, feel free to snag a copy of my performance measurement planning template as one of your armaments!



The Ugly Truth About Benchmarks

Why Do We Want Benchmarks in the First Place?

As Garrison Keillor says every week, in Lake Wobegon, “all the kids are above average.” If we can simply be “above average,” then we know we’re pulling away from mediocrity. And that’s what we want with benchmarks — we want to know what “average” is so that we know the exact height of the bar we must clear to claim success (if not necessarily supremacy). It’s something to aim for that must be attainable, because others have attained it.

We’re surrounded by benchmarks in our personal lives, too: doctors tell us how our weight, blood pressure, and cholesterol compare to benchmarks for healthy people of the same age, gender, and height; standardized test scores in schools are compared to statewide benchmarks; salary surveys tell us (generally in a flawed way) benchmarks for pay for others in our field. We’re used to benchmarks, and we want to use them to set targets for the key performance indicators (KPIs) for our marketing initiatives.

Benchmark = Target…right?

All too often, I run up against someone who equates a benchmark with a target. That’s dangerous for two reasons:

  • Benchmarks are a reasonable sanity check, but targets should be driven by what success will really look like — where does a particular metric need to be in order to justify the investment required to get there?
  • If targets are solely driven by benchmarks, then it’s an easy (if faulty) deductive leap to believe that, in the absence of a benchmark, no target can be set.

So, resolved: benchmarks are not targets.

The Benchmarks We Most Want Are the Ones We Can’t Realistically Have

The easiest and, in most cases, most relevant and useful benchmarks generally come from your own historical data. If you’re considering an initiative that will improve a certain metric, then your track record with that metric is a fantastic baseline input into target-setting. Since that data is usually readily available, it gets used. It’s when a totally new initiative is launching — a Facebook page, a mobile app, a community contest — that we get the most anxious about what a “reasonable target” is and, therefore, launch a quest to find benchmarks.

The problem is that these are most often the benchmarks that are least likely to be available. Or, if they are available, there is so much variability inside the data set that it’s hard to put much stock in the data.

Even with something as massively established as email marketing, getting a reasonable benchmark for something as common as open rate has a lot of underlying variables mucking up the data:

  • The type of email — newsletter vs. general promotion vs. targeted promotion vs. something else
  • The target of the email — internal house list vs. rented list, for instance
  • The specific industry and consumer type the emails target
  • The email platform in use and how it captures and calculates open rate
  • The basic deliverability of the emails included in the benchmark, as driven by content, email platform, and user type

If all of these factors are at work with something as established as email, then what does that mean for a relatively new and evolving medium like social media or mobile? Almost every time we launch a new Facebook page, we get asked what the “benchmark is for new fan growth.” In that case, the single biggest driver of fans — outside of brands that have a massive number of rabidly enthusiastic customers — is the promotion of the page, be it through Facebook advertising, through channels the brand already owns (email database, web site, TV advertising, etc.), or through paid promotion elsewhere. It’s an unsatisfactory reality…but it’s reality nevertheless.

Should We Just Abandon All Hope, Then?

There are some cases where relevant and appropriate benchmarks are available. For instance, Google Analytics provides benchmark data for common web metrics based on sites of “similar size” and in a user-selectable site category/industry. Twitalyzer can be used to gather benchmarks using all of the tracked users who fall into a given “community.” Email marketing platforms often do provide benchmark data by industry, but they can fall short on the critical “email type” front. When benchmarks are available, by all means use them as an input!

In the absence of available benchmarks, meaningful targets can absolutely still be set. It’s just largely a matter of ferreting out stakeholder expectations. Expectations almost always exist, even when they are claimed not to: in one (real) example, a stakeholder who claimed to have no expectations was nonetheless “shocked” by the result — and, if there truly were no expectations, then there would have been no “shock.”

The expectations that exist may not be precise, but, with a little bit of probing, you can generally find a range: below it, the initiative will undoubtedly be judged disappointing; above it, the initiative will certainly be judged a success. Starting with that range, narrowing it down as best you can, and getting agreement on the target range from all of the key stakeholders is just smart performance measurement.
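That range-based framing can be sketched in a few lines (the thresholds below are hypothetical — say, stakeholders agree that fewer than 2,000 new fans would disappoint and more than 5,000 would clearly be a success):

```python
# Minimal sketch: judging a result against a stakeholder-agreed target range.
# Below the range = disappointing; above it = success; inside = in between.
# The threshold values are hypothetical examples.

def judge(actual, low, high):
    """Classify an outcome against an agreed low/high expectation range."""
    if actual < low:
        return "disappointing"
    if actual > high:
        return "success"
    return "within expectations"

print(judge(6200, low=2000, high=5000))  # prints "success"
```

Getting the `low` and `high` agreed upon up front is the real work; the classification afterward is mechanical.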


Department Store KPIs (an analogy)

A couple of weeks ago, I had a conversation with the newest member of the analytics team at Resource Interactive, Matt Coen. I shared with him my “Measuring digital marketing is like measuring the Mississippi River” analogy, and he, in turn, shared with me his department store analogy. I’m a big fan of using stories and analogies to get across fundamental measurement concepts, so, with his permission, I’m passing along his perspective (and, of course, in the translation from a verbal story to the written word, I’m finding that I’m taking some liberties!).

The story is a great illustration of two things:

  • How key performance indicators (KPIs) generally cannot live in isolation – driving a single KPI to a certain result is easy, but businesses operate on more than one dimension (for instance, total sales can be boosted by dropping the price well below cost…but that kills profitability)
  • Why no company can have a single set of KPIs. The appropriate KPIs depend on what and who is being measured.

On to the Story

Let’s take a fictional department store. At this store, each department has a department manager who is responsible for all aspects of the department, including the department’s P&L. In addition, all of the departments have a KPI regarding inventory turnover – if any product sits on the shelves for too long, the store loses money. All of the departments have this KPI because, overall, the store has an inventory turnover KPI.

The office supplies department manager is seeing his inventory turnover suffer, and, by digging into the data, he realizes that pens are killing him – no one is buying them, and it’s hurting his turnover rate.

He goes to the store manager and tells him, “I’m having trouble moving pens, and that’s hurting my inventory turnover rate. You may not be seeing it at the overall store level, but it’s got to be negatively impacting that KPI. I need to move pens to the checkout line display.”

The manager scratches his head and agrees to the change – inventory turnover is one of his KPIs, the department manager is being data-driven, and he’s even come to the store manager with a proposed solution! Woo-hoo! He promptly instructs his team to remove the candy from the checkout lines and replace it with pens.

Sure enough, pen sales pick up, and the department manager is thrilled.

But, the candy department manager immediately shows up in the store manager’s office and tells him, “My sales are way below target. When I developed my forecast, it was with the assumption that candy would be at the checkout lines. It’s a major impulse buy and that’s where 25% of my department sales occur!”

The store manager really didn’t need this additional headache. He was already seeing a dip in the overall store margin, and he’d realized that he might have acted too hastily when responding to the office supplies department manager’s request, because, not only is candy much more of an impulse buy – so the increase in pen sales didn’t make up for the loss in candy sales – but candy is a higher margin product.

When the store manager agreed to the change, he was making a decision based on how it would impact someone else’s KPIs. And, he focused on a single KPI – inventory turnover – rather than complementary KPIs – inventory turnover and margin.

This analogy can be applied to any number of marketing scenarios. An easy one is a web site, where the owner of a niche site section makes a case for featuring that section very prominently on the home page (the department store checkout line display) in the interest of driving more traffic to his section.

It’s a useful tale!