Analysis, Reporting

What is "Analysis?"

Stephen Few had a recent post, Can Computers Analyze Data?, that started: “Since ‘business analytics’ has come into vogue, like all newly popular technologies, everyone is talking about it but few are defining what it is.” Few’s post was largely a riff on an article by Merv Adrian on the BeyeNETWORK: Today’s ‘Analytic Applications’ — Misnamed and Mistargeted. Few takes issue (rightly so) with Adrian’s implied definition of the terms “analysis” and “analytics.” Adrian outlines some fair criticisms of BI tool vendors, but Few’s beef with his definitions is justified.

Few defines data analysis as “what we do to make sense of data.” I actually think that is a bit too broad, but I agree with him that analysis, by definition, requires human beings.

With data “coming into vogue,” it’s hard to walk through a Marketing department without hearing references to “data mining” and “analytics.” Given the marketing departments I tend to walk through, and given what I know of their overall data maturity, this is often analogous to someone filling the ice cube trays in their freezer with water and speaking about it in terms of the third law of thermodynamics.

I’ve got a 3-year-old daughter, and it’s through her that I’ve discovered the Fancy Nancy series of books, in which the main character likes to be elegant and sophisticated well beyond her single-digit age. She regularly uses a word and then qualifies it as “that’s a fancy way to say…” a simpler word. For instance, she notes that “perplexed” is a fancy word for “mixed up.”

“Analytics” is a Fancy Nancy word. “Web analytics” is a wild misnomer. Most web analysts will tell you there’s a lot of work to do with just basic web site measurement. And, that work is seldom what I would consider “analytics.” As cliché as it is, you can think about data usage as a pyramid, with metrics forming the foundation and analysis (and analytics) being built on top of them.

Metrics Analysis Pyramid

There are two main types of data usage:

  • Metrics / Reporting — this is the foundation of using data effectively; it’s the way you assess whether you are meeting your objectives and achieving meaningful outcomes. Key Performance Indicators (KPIs) live squarely in the world of metrics (KPIs are a fancy way to say “meaningful metrics”). Avinash Kaushik defines KPIs brilliantly: “Measures that help you understand how you are doing against your objectives.” Metrics are backward-looking. They answer the question: “Did I achieve what I set out to do?” They are assessed against targets that were set long before the latest report was pulled. Without metrics, analysis is meaningless.
  • Analysis — analysis is all about hypothesis testing. The key with analysis is that you must have a clear objective, you must have clearly articulated hypotheses, and, unless you are simply looking to throw time and money away, you must validate that the analysis will lead to different future actions based on different possible outcomes. Analysis tends to be backward-looking as well — asking, “Why did that happen?” — but with the expectation that, once you understand why something happened, you will take different future actions using that knowledge.

So, what about “analytics?” I asked that question of the manager of a very successful business intelligence department some years back. Her take has always resonated with me: “analytics” are forward-looking and are explicitly intended to be predictive. So, in my pyramid view, analytics is at the top of the structure — it’s “advanced analysis,” in many ways. While analysis may be performed by anyone with a spreadsheet, and hypotheses can be tested using basic charts and graphs, analytics gets into a more rigorous statistical world: more complex analysis that requires more sophisticated techniques, often using larger data sets and looking for results that are much more subtle. AND, using those results, in many cases, to build a predictive model that is truly forward-looking.
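To make the pyramid concrete, here is a minimal sketch — in Python, with entirely made-up metric names and numbers, not anything from a real report. The reporting layer simply compares the latest value of a metric against a target that was set in advance; the “analytics” layer fits a crude trend to the history and makes a forward-looking prediction.

```python
# Toy illustration of the pyramid's layers -- all numbers are invented.

# Reporting / metrics (backward-looking): did we hit the target we set earlier?
monthly_visits = [12_000, 13_500, 14_200, 15_100]  # hypothetical history
target = 15_000
print("Hit this month's target:", monthly_visits[-1] >= target)

# "Analytics" (forward-looking): fit a simple linear trend, predict next month.
n = len(monthly_visits)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(monthly_visits) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, monthly_visits)) / sum(
    (x - x_mean) ** 2 for x in xs
)
intercept = y_mean - slope * x_mean
print("Predicted visits next month:", round(intercept + slope * n))
```

Real analytics obviously involves far more statistical rigor than a four-point trend line, but the distinction is the same: the bottom of the pyramid assesses the past against a target, while the top models the future.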

The key is that the foundation of your business (whether it’s the entire company, or just your department, or even just your own individual role) is your vision. From your vision comes your strategy. From your strategy come your objectives and your tactics. If you’re looking to use data, the best place to start is with those objectives — how can you measure whether you are meeting them, and, with the measures you settle on, what is the threshold at which you would consider the objective achieved? Attempting to do any analysis (much less analytics!) before really nailing down a solid foundation of objectives-oriented metrics is like trying to build a pyramid from the top down. It won’t work.

Reporting

Performance Measurement — Starting in the Middle

Like a lot of American companies, Nationwide (the car insurance business, as well as the various other Nationwide businesses) goes into semi-shutdown mode between Christmas and New Year’s. I like racking up some serious consecutive days off as much as the next guy…but it’s also awfully enjoyable to head into work for at least a few days during that period. This year, I’m a new employee, so I don’t have a lot of vacation built up, anyway, and, even though the company would let me go into deficit on the vacation front, I just don’t roll that way. As it is, with one day of vacation, I’m getting back-to-back four-day weekends, and the six days I’ve been in the office while most people are out…have been really productive!

I’m a month-and-a-half into my new job, which means I’m really starting to get my sea legs as to what’s what. And, that means I’m well aware of the tornado of activity that is going to hit when the masses return to work on January 5th. So, in addition to mailbox cleanup, training catch-up, focused effort on some core projects, and the like, I’ve been working on nailing down the objectives for my area for 2009. In the end, this gets to performance measurement on several levels: of me, of the members of my team, of my manager and his organization, and so on. And that’s where “start in the middle” comes into play.

There are balanced scorecard (and other BPM) theoreticians who argue that the only way to set up a good set of performance measures is to start at the absolute highest levels of the organization — the C-suite — and then drill down deeper and deeper from there with ever-more granular objectives and measures until you get down to each individual employee. Maybe this can work, but I’ve never seen that approach make it more than two steps out of the ivory tower whence it was proclaimed.

At the other extreme, I have seen organizations start with the individual performer, or at the team level, and build from what they already measure on a regular basis. The risk there — and I’ve definitely run into this — is that performance measures can wind up driven by what’s easy to measure and largely divorced from any real connection to meaningful objectives for the organization.

Nationwide has a performance measurement structure that, I’m sure, is not all that unique among large companies. But, it’s effective, in that it combines both of the above approaches to get to something meaningful and useful. In my case:

  • There is an element of the performance measurement that is tied to corporate values — values are (or should be) universal in the company and important to the company’s consistent behavior and decision-making, so that’s a good element to drive from the corporate level
  • Departmental objectives — nailing down high-level objectives for the department, which then get “drilled down” as appropriate and weighted appropriately at the group and individual level; these objectives are almost exclusively outcome-based (see my take on outputs vs. outcomes)
  • Team/individual objectives — a good chunk of these are drilldowns from the departmental objectives. But, they also reflect the tactics of how those objectives will be met and, in my mind, can include output measures in addition to outcome measures. 

What I’ve been working on is the team objectives. I have a good sense of the main departmental objectives that I’m helping to drive, so that’s good — that’s “the middle” referenced in the title of this post.

The document I’m working in has six columns:

  • Objectives — the handful of key objectives for my team; I’m at four right now, but I suspect there will be a fifth (and this doesn’t count the values-oriented corporate objective or some of the departmental objectives that I will need to support, but which aren’t core to my daily work)
  • Measures — there is a one-to-many relationship of objectives to measures, and these are simply the things I will measure that tie to the objective; the multiple measures are geared towards addressing different facets of the objective (e.g., quality, scope, budget, etc.)
  • Weight — all objectives are not created equal; in my case, for 2009, I’ve got one objective that dominates, a couple of objectives that are fairly important but not dominant, and an objective that is a lower priority, yet is still a valid and necessary objective
  • Targets — these are three columns where, for each measure, we define the range of values for: 1) Does Not Meet Expectations, 2) Achieves Expectations, and 3) Exceeds Expectations

It’s tempting to try to fill in all the columns for each objective at once. That’s a mistake. The best bet is to fill in each column completely — for every objective — before moving on to the next column.
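As a rough illustration of how those columns hang together — the objective names, weights, and ranges below are hypothetical, not my actual 2009 objectives — the structure is essentially a one-to-many mapping of objectives to measures, with a weight per objective and three expectation bands per measure:

```python
# Hypothetical sketch of the scorecard structure -- none of these objectives,
# weights, or ranges are real; they just show how the columns relate.
scorecard = {
    "Improve report adoption": {
        "weight": 0.5,  # the dominant objective
        "measures": [
            # (measure name, "Achieves" threshold, "Exceeds" threshold)
            ("Monthly active report users", 100, 150),
            ("Stakeholder satisfaction (1-5)", 3.5, 4.5),
        ],
    },
    "Stand up automated dashboards": {
        "weight": 0.3,
        "measures": [("Dashboards in production", 2, 4)],
    },
    "Document data definitions": {
        "weight": 0.2,
        "measures": [("Metrics with signed-off definitions", 10, 20)],
    },
}

def rate(value, achieves_at, exceeds_at):
    """Map a measured value onto the three expectation bands."""
    if value >= exceeds_at:
        return "Exceeds Expectations"
    if value >= achieves_at:
        return "Achieves Expectations"
    return "Does Not Meet Expectations"

# Example: 130 monthly active users lands in "Achieves Expectations."
name, achieves, exceeds = scorecard["Improve report adoption"]["measures"][0]
print(name, "->", rate(130, achieves, exceeds))
```

Filling this in column by column — all the objectives, then all the measures, then the weights, then the target ranges — keeps any single objective from getting over-engineered before the full picture exists.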

This is also freakishly similar to the process we semi-organically developed when I was at National Instruments working on key metrics for individual groups. Performance measurement maturity-wise, Nationwide is ahead of National Instruments (but it is a much larger and much older company, so that is to be expected), in that these metrics are tied to compensation, and there are systems in place to consistently apply the same basic framework across the enterprise.

This exercise kills more than one bird with a single slingshot load:

  • Performance measurement for myself and members of my team — the weights assigned are for the entire team; when it comes to individuals (myself included), it’s largely a matter of shifting the weights around; everyone on my team will have all of these objectives, but, in some cases, their role is really to just provide limited support for an objective that someone else is really owning and driving, so the weight of each objective will vary dramatically from person to person (a rough sketch of this weight-shifting follows the list)
  • Roles and responsibilities for team members — this is tightly related to the above, but is slightly different, in that the performance measurement and objectives are geared towards “What do you need to achieve?” and it’s useful to think through “…and how are we going to do that?”
  • Alignment with partner groups — my team works closely with IT, as well as with a number of different business areas. This concise set of objectives is a great alignment tool, since achieving most of my objectives requires collaboration with other groups; we need to check that their objectives are in line with ours. If they’re not, it’s better to have the discussion now rather than halfway through the coming year, when “inexplicable” friction has developed between the teams because they don’t share priorities
  • Identifying the good and the bad — if done correctly (and, frankly, my team’s are AWESOME), then we’ll be able to check up on our progress fairly regularly throughout the year. At the end of 2009, it’s almost a given that we will have “Does Not Meet Expectations” for some of our measures. By homing in on where we missed, we’ll be able to focus on why that was and how we can correct it going forward.
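Here is the weight-shifting idea from the first bullet above as a tiny sketch — the names, weights, and year-end ratings are all invented. Everyone carries the same objectives; only the weights (and therefore the weighted roll-up) differ by person:

```python
# Hypothetical sketch: the team shares one set of objectives, but each person's
# weights shift depending on what they own versus merely support.
objectives = ["Report adoption", "Dashboard delivery", "Partner alignment"]

weights = {
    "me":        [0.5, 0.3, 0.2],
    "analyst_a": [0.2, 0.6, 0.2],  # owns dashboard delivery
    "analyst_b": [0.6, 0.1, 0.3],  # owns report adoption
}

# Year-end ratings per objective on a 1-3 scale
# (1 = Does Not Meet, 2 = Achieves, 3 = Exceeds) -- invented for illustration.
ratings = {
    "me":        [3, 2, 2],
    "analyst_a": [2, 3, 2],
    "analyst_b": [2, 2, 3],
}

for person, w in weights.items():
    overall = sum(wi * ri for wi, ri in zip(w, ratings[person]))
    print(f"{person}: weighted rating {overall:.2f}")
```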

It’s a great exercise, and it’s probably the work from this lull period that will have an impact the farthest into 2009.

I’ll let you know how things shake out!

Reporting, Social Media

Social Media Success Metrics. Or…at Least Objectives.

Jeremiah Owyang has a post on his Web Strategist blog titled Why Your Social Media Plan should have Success Metrics. Based on the URL of the post, it looks like Owyang initially titled the entry “Why Your Social Media Plan should Indicate What Does Success Look Like.” Admittedly, the original title is a bit clunky. But, in the cleanup, he actually oversimplified the main point of his post, which is that it’s important to have some clear idea of why you’re tackling social media and some idea of what you’re hoping to get out of it. He includes some examples:

A few examples of what success could look like for you:

  • We were able to learn something about customers we’ve never know before
  • We were able to tell our story to customers and they shared it with others
  • A blogging program where there are more customers talking back in comments than posts
  • An online community where customers are self-supporting each other and costs are reduced
  • We learn a lot from this experimental program, and pave the way for future projects, that could still be a success metric
  • We gain experience with a new way of two-way communication
  • We connect with a handful of customers like never before as they talk back and we listen
  • We learned something from customers that we didn’t know before

One of the commenters correctly pointed out that none of these examples were “metrics” per se. I say, “Cool!” Owyang’s point is spot on — be clear on why you’re tackling social media. And, you know what? If it’s, “Because I don’t understand it and don’t ‘get’ it and figure the best way to learn is to dive in and do it,” then that’s okay! Of course, if that is the only reason you are dipping your phalanges into social media, then you should also set a target date for when you’re going to evaluate whether you are going to continue — with more focused objectives — or whether you are going to reduce your focus on it.

The metrics will come. Sometimes, they’re not crisp, clean, perfect metrics. That’s okay. I’m a fan of proxy measures, as well as the occasional use of subjective measures. Quantitative measures that aren’t tied to clear objectives, on the other hand, drive me bonkers.

So, what are my objectives with this part of my personal social media experimenting? Very simply, they’re as follows:

  • See if I can “do” it — post with some level of substance on a sustained basis
  • Give myself an outlet for expressing my opinions and frustrations about data usage (when it’s not appropriate to express them directly to the person who triggered the need for an outlet)
  • Learn about blogging technologies

The jury is still a bit out on the first objective, but it’s looking like the answer is, “I can.”

I am clearly hitting the second objective (and will continue to do so).

I’ve become intimate with both Blogger and WordPress, and have dabbled with Technorati, Feedburner, Yahoo! Pipes, and any number of social networking and social bookmarking platforms, so I’d say I’m well on my way to the third.

I’m not feeling the need to reset my objectives just yet.