Reporting

Performance Measurement — Starting in the Middle

Like a lot of American companies, Nationwide (the car insurance company, along with the various other Nationwide businesses) goes into semi-shutdown mode between Christmas and New Year’s. I like racking up some serious consecutive days off as much as the next guy…but it’s also awfully enjoyable to head into work for at least a few days during that period. This year, I’m a new employee, so I don’t have a lot of vacation built up anyway, and, even though the company would let me go into deficit on the vacation front, I just don’t roll that way. As it is, with one day of vacation, I’m getting back-to-back four-day weekends, and the six days I’ve been in the office when most people are out…have been really productive!

I’m a month-and-a-half into my new job, which means I’m really starting to get my sea legs as to what’s what. And that means I’m well aware of the tornado of activity that is going to hit when the masses return to work on January 5th. So, in addition to mailbox cleanup, training catch-up, focused effort on some core projects, and the like, I’ve been working on nailing down the objectives for my area for 2009. In the end, this gets to performance measurement on several levels: of me, of the members of my team, of my manager and his organization, and so on. And that’s where “start in the middle” has come into play.

There are balanced scorecard (and other BPM) theoreticians who argue that the only way to set up a good set of performance measures is to start at the absolute highest levels of the organization — the C-suite — and then drill down deeper and deeper from there with ever-more granular objectives and measures until you get down to each individual employee. Maybe this can work, but I’ve never seen that approach make it more than two steps out of the ivory tower from whence it was proclaimed.

On the other extreme, I have seen organizations start with the individual performer, or at the team level, and start with what they measure on a regular basis. The risk there — and I’ve definitely run into this — is that performance measures can wind up driven by what’s easy to measure and largely divorced from any real connection to measuring meaningful objectives for the organization.

Nationwide has a performance measurement structure that, I’m sure, is not all that unusual among large companies. But it’s effective, in that it combines both of the above approaches to get to something meaningful and useful. In my case:

  • There is an element of the performance measurement that is tied to corporate values — values are (or should be) universal in the company and important to the company’s consistent behavior and decision-making, so that’s a good element to drive from the corporate level
  • Departmental objectives — nailing down high-level objectives for the department, which then get “drilled down” as appropriate and weighted appropriately at the group and individual level; these objectives are almost exclusively outcome-based (see my take on outputs vs. outcomes)
  • Team/individual objectives — a good chunk of these are drilldowns from the departmental objectives. But, they also reflect the tactics of how those objectives will be met and, in my mind, can include output measures in addition to outcome measures. 

What I’ve been working on is the team objectives. I have a good sense of the main departmental objectives that I’m helping to drive, so that’s good — that’s “the middle” referenced in the title of this post.

The document I’m working to has six columns:

  • Objectives — the handful of key objectives for my team; I’m at four right now, but I suspect there will be a fifth (and this doesn’t count the values-oriented corporate objective or some of the departmental objectives that I will need to support, but which aren’t core to my daily work)
  • Measures — there is a one-to-many relationship of objectives to measures, and these are simply what I will measure that ties to the objective; the multiple measures are geared towards addressing different facets of the objective (e.g., quality, scope, budget, etc.)
  • Weight — all objectives are not created equal; in my case, for 2009, I’ve got one objective that dominates, a couple of objectives that are fairly important but not dominant, and an objective that is a lower priority, yet is still a valid and necessary objective
  • Targets — these are three columns where, for each measure, we define the range of values for: 1) Does Not Meet Expectations, 2) Achieves Expectations, and 3) Exceeds Expectations

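As a rough illustration of how a document like this rolls up, here is a minimal sketch. The objective names, the weights, and the 1-to-3 rating scale are all hypothetical — invented for illustration, not Nationwide’s actual framework:

```python
# Hypothetical weighted-objective rollup. The objectives, weights, and the
# 1-3 rating scale (1 = Does Not Meet, 2 = Achieves, 3 = Exceeds
# Expectations) are invented for illustration.

objectives = [
    # (objective, weight, year-end rating on the 1-3 scale)
    ("Dominant objective",       0.50, 3),
    ("Important objective A",    0.20, 2),
    ("Important objective B",    0.20, 2),
    ("Lower-priority objective", 0.10, 1),
]

# Weights should total 100% for any one person or team.
assert abs(sum(w for _, w, _ in objectives) - 1.0) < 1e-9

overall = sum(w * rating for _, w, rating in objectives)
print(f"Overall weighted rating: {overall:.2f}")  # 2.40
```

Shifting the weights around for each individual, as I describe below for team members, just means swapping in a different set of weights against the same list of objectives.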
It’s tempting to try to fill in all the columns for each objective at once. That’s a mistake. The best bet is to fill in one column completely before moving on to the next.

This is also freakishly similar to the process we semi-organically developed when I was at National Instruments working on key metrics for individual groups. Performance measurement maturity-wise, Nationwide is ahead of National Instruments (but it is a much larger and much older company, so that is to be expected), in that these metrics are tied to compensation, and there are systems in place to consistently apply the same basic framework across the enterprise.

This exercise kills more than one bird with a single slingshot load:

  • Performance measurement for myself and members of my team — the weights assigned are for the entire team; when it comes to individuals (myself included), it’s largely a matter of shifting the weights around; everyone on my team will have all of these objectives, but, in some cases, their role is really to just provide limited support for an objective that someone else is really owning and driving, so the weight of each objective will vary dramatically from person to person
  • Roles and responsibilities for team members — this is tightly related to the above, but is slightly different, in that the performance measurement and objectives are geared towards, “What do you need to achieve,” and it’s useful to think through “…and how are we going to do that?”
  • Alignment with partner groups — my team works closely with IT, as well as with a number of different business areas. This concise set of objectives is a great alignment tool, since achieving most of my objectives requires collaboration with other groups; we need to check that their objectives are in line with ours. If they’re not, it’s better to have the discussion now rather than halfway through the coming year, when “inexplicable” friction has developed between the teams because they don’t share priorities
  • Identifying the good and the bad — if done correctly (and, frankly, my team’s are AWESOME), then we’ll be able to check up on our progress fairly regularly throughout the year. At the end of 2009, it’s almost a given that we will have “Does Not Meet Expectations” for some of our measures. By homing in on where we missed, we’ll be able to focus on why that was and how we can correct it going forward.

It’s a great exercise, and, of all the work I did during this lull period, it’s probably what will have an impact the farthest into 2009.

I’ll let you know how things shake out!

Reporting, Social Media

Social Media ROI: Stop the Insanity!

I’ve taken a run at this before…but my assertion that the emperor has no clothes didn’t stick. Either that, or the dozens of people who read this blog simply agree with me in principle, but don’t really think it’s worth the effort to raise a stink.

Regardless, I’m not quite ready to let it go. And I do think this is important. Connie Bensen’s recent post (cross-posted on the Marketing 2.0 blog) on the subject had me cheering…and crying…at the same time!

Maybe it’s because I’ve had the good fortune to know and work with some incredibly sharp CFO-types in my day. Most notably, for my entire eight years at National Instruments, the CFO (not necessarily his official title the whole time, but that was his role) was Alex Davern — a diminutively statured, prematurely white-haired Irishman who arguably knows the company’s business and market as well or better than anyone else in the company. He is a numbers guy by training…who gets that numbers are a tool, a darn important tool, but not the be-all end-all.

I had to sit down with — or stand up in front of — Alex on several occasions to push initiatives that had a hefty price tag, and for which I was a champion or at least a key stakeholder — a web content management system, a web analytics tool, and a customer data integration initiative. I never had to pitch a social media initiative to Alex, and I don’t know exactly how I would have done it. But, I seriously doubt that I would have pitched that “ROI is Return on Influence when it comes to social media.” I can feel the pain in my legs as I write this, just imagining myself being taken down at the knees by his Irish brogue.

Here’s the deal. Let’s back up to ROI as return on investment. Return. On. Investment. It’s a formula:

ROI = Return / Investment

Both numbers have the same unit of measure — let’s go with US dollars — so that the end result is a straight-up ratio, measured as a percentage. This is a bit of an oversimplification, and there are scads of ways to actually calculate ROI. A pretty common one is to use “net income” as the Return and “book value of assets” as the Investment. With me so far? You acquired the assets along the way, and they have some worth (let’s not go down the path of whether you might have spent more…or less…to acquire them than their “book value”). The return is how much money they made for you.
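To make the arithmetic concrete, here’s a minimal sketch of that common variant — net income as the Return, book value of assets as the Investment. The function name and the dollar figures are invented for illustration:

```python
def roi(return_dollars: float, investment_dollars: float) -> float:
    """Return on Investment as a percentage; both inputs must share a unit (e.g., US dollars)."""
    return return_dollars / investment_dollars * 100

# Net income as the Return, book value of assets as the Investment.
# Figures are made up for illustration.
net_income = 150_000.0
book_value_of_assets = 1_000_000.0
print(f"ROI: {roi(net_income, book_value_of_assets):.1f}%")  # ROI: 15.0%
```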

Now, let’s look at ROI as “Return on Influence” (I’ll skip “Return on Interaction” here — I can get plenty verbose without a repetitive example):

ROI = Return / Influence

Hmmm… The construct starts to break down on several fronts. First off, you’re going to have a hard time measuring both of these in like units. That’s sorta’ the point of all of the debate on ROI — “influence” is hard to quantify. But, that’s not actually the main beef I have on this front. At the end of the day, your return is still “what value did we garner from our social media efforts?” Maybe that isn’t measured in direct monetary terms. But, really, is this whole discussion about mapping the level of Influence to some Return, or, rather, is it about assessing the Influence that you garner from some Investment? A more appropriate (conceptual) formula would be:

IOI = Influence / Investment

But, IOI, as pleasantly symmetrical as it is, really doesn’t get us very far, does it? So, let’s go back to Alex as a proxy for the Finance-oriented decision-makers in your company. You have two options when making your case for social media investment:

  • The Cutesy Option — waltz in with an opening that, frankly, is a bit patronizing: “What you have to understand about ROI when it comes to social media is that ROI is really Return on Influence rather than Return on Investment”
  • The Value Option — know your business (chances are the Finance person does); know your company’s strategy; know the challenges your company is facing; frame your pitch in those terms

Obviously, I’m a proponent of the second. I don’t really have a problem with starting the discussion with, “Trying to do an ROI calculation on a social media investment is, at best, extremely difficult and, at worst, not possible. But, there is real value to the business, and that’s what I’m going to talk about with you. And, I’ll talk about how we can quantify that value and the results we think we can achieve.”

Connie’s post has a great list to work from for that case. But…more on that in my next post.

Oh, yeah. The picture at the beginning of this post. And the title. Susan Powter, people! Stop the insanity!!!