Reporting vs. Analysis
In my mind, all too often, we erroneously equate “reporting” with “analysis.” This can lead to a lot of wasted cycles spent spinning confusedly through reams of data or, worse, to the belief that we “took action from the data” just because we converted a spreadsheet into a chart.
A former colleague of mine, Shane Stephens, and I sat down a few years ago and decided that there are really three different ways to use data:
Operational Reporting
This is when data is being reported at a high frequency and, often, at a very granular level, with a discretely defined role in a given process. A daily report of all bookings from the prior day for a given salesperson’s territory is one example (it’s only a good example if the salesperson’s process includes reviewing that list each day and following up with any customers who need to be checked in on once they’ve placed an order). A call center report that breaks down wait times by different controllable factors is another example — used for adjusting staffing throughout the week, for instance. We even included an invoice-“printing” system as an operational report — it’s a highly detailed, highly structured report that gets sent to a customer letting him/her know what payment is due.
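Just to make that concrete: if I were hacking together that bookings report myself, it might look something like the sketch below. The table, column names, and territory are all made up for illustration; the point is just how granular and narrowly scoped the output is.

```python
# A minimal sketch of a daily operational bookings report for one territory.
# The table, column names, and territory are illustrative assumptions.
import pandas as pd

bookings = pd.DataFrame({
    "order_id":  [1001, 1002, 1003, 1004],
    "customer":  ["Acme Co", "Binford", "Initech", "Cyberdyne"],
    "territory": ["Northeast", "Northeast", "Southwest", "Northeast"],
    "booked_on": pd.to_datetime(["2009-06-01", "2009-06-01", "2009-06-01", "2009-05-31"]),
    "amount":    [12500.00, 4300.00, 9750.00, 22000.00],
})

prior_day = pd.Timestamp("2009-06-01")  # in practice, yesterday's date

# The whole report: every order booked the prior day in this rep's territory,
# listed line by line -- granular, frequent, and tied to a specific follow-up step.
daily_report = bookings[
    (bookings["booked_on"] == prior_day) & (bookings["territory"] == "Northeast")
].sort_values("amount", ascending=False)

print(daily_report.to_string(index=False))
```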
That’s all I’ll write about operational reporting — in a lot of ways, it’s pretty simple. Trouble arises, though, when someone starts to repurpose such a report: “I get a daily detail report of bookings for my region, so I’m just going to combine all of those into a spreadsheet to see what my bookings to date for the quarter are.” Or, same compilation, but, “…so I can analyze sales in my territory.” This winds up being darn cumbersome and can create all sorts of issues with data interpretation and application. Maybe I’ll come back to that later.
Metrics Reporting
Metrics reporting typically has aggregated data: total bookings for a territory, total bookings for the company, lead-to-sales conversion rate, etc. Key Performance Indicators (KPIs) are always metrics, but not all metrics are necessarily KPIs. Metrics are very different from operational reports. And, they’re a lot easier to turn into vortexes of wasted energy.
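For contrast with the operational sketch above, a metrics report built from the same sort of data rolls everything up into a handful of aggregates. Again, the tables and numbers below are invented purely for illustration.

```python
# A minimal sketch of metrics reporting: the same kind of bookings data,
# rolled up into aggregates instead of listed line by line. All numbers invented.
import pandas as pd

bookings = pd.DataFrame({
    "territory": ["Northeast", "Northeast", "Southwest", "Northeast"],
    "amount":    [12500.00, 4300.00, 9750.00, 22000.00],
})
funnel = pd.DataFrame({
    "territory": ["Northeast", "Southwest"],
    "leads":     [140, 90],
    "sales":     [21, 9],
}).set_index("territory")

metrics = pd.DataFrame({
    "total_bookings":  bookings.groupby("territory")["amount"].sum(),
    "conversion_rate": funnel["sales"] / funnel["leads"],
})

print(metrics)
# A KPI would be one of these metrics that has an owner and a target attached.
```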
There are at least two ways that I’ve seen people get into a death spiral with metrics:
- Confusing metrics with analysis. They’re wildly different…which should become evident by the end of this post.
- Starting with the data when determining metrics instead of starting with objectives.
I’ll tackle the latter first. The “easy” way to get data quickly is to start out by asking what data is easily available, and then choosing your metrics from that list. This is just wrong, wrong, WRONG! It’s tempting to do, and even experienced analysts who know what a slippery slope this is can easily fall into the trap. But, it’s still WRONG! (not that I have strong opinions here…)
In the long run, the right place to start when determining metrics is with what you’re trying to accomplish in business terms rather than data terms: “We’re trying to improve the effectiveness of our direct marketing efforts,” “We’re trying to grow the company,” “We’re trying to make the company more profitable,” “We’re trying to improve the user experience on our Web site.” A couple of these teeter on the edge of sounding like they’re in “data terms.” For instance, isn’t “grow the company” the same as “increase revenue?” Maybe. Maybe not.
The next step is to tighten up the definitions of what your objectives are. Still, stay away from thinking about the data. Think about how you would explain to your spouse, a friend, or a peer in another department what it is you are really trying to accomplish.
Random aside: A lot of business articles and books claim that, to establish good metrics, you have to start at the very top level of the company and then drill down to more detailed/granular metrics. That’s one of the fundamental premises of the Balanced Scorecard, I think. I like a lot about the Balanced Scorecard approach…but not this piece of it. It may work for some companies, but, in my experience, it’s just too much to try to start at the highest level and drill all the way down before you get to any usable metrics. Rather, if the top levels of an organization have clearly articulated the company’s vision, strategy, and high-level tactics, I’m all for empowering individual departments to figure out what they should be achieving and then starting there with the metrics. This does mean there needs to be some validation of metrics once they’re settled upon in order to ensure alignment. But, I’ve never seen a department (or even a project — project metrics should follow the same approach) that doesn’t get 85% of the way there by just knowing the company, understanding their role, and then working through their own proposed metrics.
Back to the main point on metrics. Once you have really, really clear, tight objectives, you can sit back and brainstorm on how to measure your progress towards them. What you’ll find is that there are some objectives that can be measured very easily with a single metric. But, with other objectives, there will not be a perfect metric. In those cases, you can shoot for one or more proxies for the objective. This is actually a good sign — it means you’ve got some clear objectives that are hard to measure. That’s a damn sight better than having clear metrics with objectives that are hard to articulate!
You’re not quite done with metrics at this point. It’s absolutely critical that you set targets for each metric. It’s really tempting to want to “just measure it for a while, because we don’t even have a baseline.” Resist the temptation! If you can get your hands on some set of historical data within the next day or two, fine, wait for it; I won’t be that persnickety. But, if not, set a target anyway! Come up with a number that is so high/good that you don’t need any historical data to know you’d be thrilled to hit it. Come up with the opposite — a number that would be so low/bad that you would know there’s a problem. Start working from both directions to see how much you can close the gap before you’re in “no idea”-land. Then, split the difference. Sure, you may be WAY off, but it’s going to be a much more useful discussion once you have actual data to put against it.
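If it helps to see that “work from both ends” idea spelled out, here’s a toy version. Every number in it is invented; the only point is the mechanics of bracketing and splitting the difference.

```python
# A toy illustration of bracketing a target with no historical data.
# All numbers are invented; the metric is a made-up lead-to-sales conversion rate.

thrilled = 0.40   # so good we'd celebrate without needing a baseline
alarming = 0.05   # so bad we'd know something is broken

# Work from both directions, tightening each bound as far as judgment allows...
thrilled = 0.25   # on reflection, we'd still be thrilled at 25%
alarming = 0.10   # ...and anything under 10% still clearly signals a problem

# ...then split the difference and call that the provisional target.
target = (thrilled + alarming) / 2
print(f"Provisional target: {target:.1%}")  # possibly WAY off, but now there's a number to argue with
```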
The last step is a validation step, really. For each metric, ask yourself what you would do if you missed the target. Are there actions that you (or your department) can and would take if you missed the target? Or, would you simply go and tell another department that they have a problem (Oopsy! That means it’s a metric for them — not for you; it’s their call as to whether they should use it!). Would you cross your fingers and wait for another month and hope the number looks better then? If that’s the case, you’re admitting that you either don’t know how to actually impact the number or you can’t impact the number. It’s not a valid metric.
Enough on that (starting to think I bit off more than I should’ve with my first real entry here).
Moving on to…
Analysis
Analysis is very different from metrics reporting. While metrics reporting is all about measuring the performance of a person, a department, a process, a project, or a company…and knowing what corrective action to take if there is a performance issue…analysis is about trying to figure out what’s going on with something.
The best way to approach analysis is to start with a hypothesis. If you don’t have a clear hypothesis, you’ll find yourself going in even worse circles than if you started with the data when identifying your metrics. Put simply:
- Start with a clear hypothesis
- Ask yourself what action you will take if the hypothesis is disproven or not disproven. If there are not clearly different actions…then you’re wasting your time. It might be a fun analysis, but it’s not going to be particularly worthwhile (contrary to popular belief, data mining, which is one form of analysis, is not simply a case of, “dump all the data into a fancy tool and see what it spits back out that you can use” — you need hypotheses for data mining!)
- Develop an approach that would enable you to disprove the hypothesis with as little data as possible
- Get that data…and only that data
- Perform the analysis
It’s tempting to pull extra data just so it’s there. And, that’s okay, as long as you don’t expand the scope of the data-pulling dramatically. Generally, just remember that it is a lot easier to sequence together a series of small analyses (if we disprove hypothesis X, then we will test sub-hypothesis Y) than to try to do it all in one fell swoop.
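To put a little flesh on those bones, here’s a bare-bones sketch of what that flow can look like in practice. The scenario, the numbers, and the choice of a simple two-proportion test are all just assumptions for illustration, not a prescription; the point is that the hypothesis, the decision tied to it, and the minimal data all come before any number-crunching.

```python
# A bare-bones sketch of hypothesis-driven analysis (made-up scenario and data).
# Hypothesis: "the new email subject line converts better than the old one."
# Actions tied to the outcome: keep the new subject line, or revert to the old one.
from statsmodels.stats.proportion import proportions_ztest

# Only the data needed to test this one hypothesis: sends and conversions per variant.
conversions = [130, 96]      # [new subject line, old subject line]
sends       = [2400, 2350]

# One-sided test of whether the new variant's conversion rate is higher.
z_stat, p_value = proportions_ztest(conversions, sends, alternative="larger")

if p_value < 0.05:
    print(f"p = {p_value:.3f}: hypothesis survives; keep the new subject line")
else:
    print(f"p = {p_value:.3f}: can't rule out chance; revert and test the next sub-hypothesis")
```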
That’s all for now!