
Performance Measurement — Starting in the Middle

Like a lot of American companies, Nationwide (the car insurance business, as well as the various other Nationwide businesses) goes into semi-shutdown mode between Christmas and New Year's. I like racking up some serious consecutive days off as much as the next guy…but it's also awfully enjoyable to head into work for at least a few days during that period. This year, I'm a new employee, so I don't have a lot of vacation built up anyway, and, even though the company would let me go into deficit on the vacation front, I just don't roll that way. As it is, with one day of vacation, I'm getting back-to-back four-day weekends, and the six days I've been in the office when most people are out…have been really productive!

I'm a month-and-a-half into my new job, which means I'm really starting to get my sea legs as to what's what. And, that means I'm well aware of the tornado of activity that is going to hit when the masses return to work on January 5th. So, in addition to mailbox cleanup, training catch-up, focused effort on some core projects, and the like, I've been working on nailing down the objectives for my area for 2009. In the end, this gets to performance measurement on several levels: of me, of the members of my team, of my manager and his organization, and so on. And that's where "start in the middle" comes into play.

There are balanced scorecard (and other BPM) theoreticians who argue that the only way to set up a good set of performance measures is to start at the absolute highest levels of the organization — the C-suite — and then drill down deeper and deeper from there with ever-more granular objectives and measures until you get down to each individual employee. Maybe this can work, but I've never seen that approach make it more than two steps out of the ivory tower from whence it was proclaimed.

On the other extreme, I have seen organizations start at the individual-performer or team level, building from what they already measure on a regular basis. The risk there — and I've definitely run into this — is that performance measures can wind up driven by what's easy to measure and largely divorced from any real connection to measuring meaningful objectives for the organization.

Nationwide has a performance measurement structure that, I'm sure, is not all that unique among large companies. But, it's effective, in that it combines both of the above approaches to get to something meaningful and useful. In my case:

  • Corporate values — an element of the performance measurement is tied to corporate values; values are (or should be) universal in the company and important to the company's consistent behavior and decision-making, so that's a good element to drive from the corporate level
  • Departmental objectives — nailing down high-level objectives for the department, which then get “drilled down” as appropriate and weighted appropriately at the group and individual level; these objectives are almost exclusively outcome-based (see my take on outputs vs. outcomes)
  • Team/individual objectives — a good chunk of these are drilldowns from the departmental objectives. But, they also reflect the tactics of how those objectives will be met and, in my mind, can include output measures in addition to outcome measures. 

What I’ve been working on is the team objectives. I have a good sense of the main departmental objectives that I’m helping to drive, so that’s good — that’s “the middle” referenced in the title of this post.

The document I’m working to has six columns:

  • Objectives — the handful of key objectives for my team; I’m at four right now, but I suspect there will be a fifth (and this doesn’t count the values-oriented corporate objective or some of the departmental objectives that I will need to support, but which aren’t core to my daily work)
  • Measures — there is a one-to-many relationship of objectives to measures, and these are simply what I will measure that ties to the objective; the multiple measures are geared towards addressing different facets of the objective (e.g., quality, scope, budget, etc.)
  • Weight — all objectives are not created equal; in my case, for 2009, I’ve got one objective that dominates, a couple of objectives that are fairly important but not dominant, and an objective that is a lower priority, yet is still a valid and necessary objective
  • Targets — these are three columns where, for each measure, we define the range of values for: 1) Does Not Meet Expectations, 2) Achieves Expectations, and 3) Exceeds Expectations

It's tempting to try to fill in all the columns for each objective at once. That's a mistake. The best bet is to complete a single column across all of the objectives first, then move on to the next column.

This is also freakishly similar to the process we semi-organically developed when I was at National Instruments working on key metrics for individual groups. Performance measurement maturity-wise, Nationwide is ahead of National Instruments (but it is a much larger and much older company, so that is to be expected), in that these metrics are tied to compensation, and there are systems in place to consistently apply the same basic framework across the enterprise.

This exercise kills more than one bird with a single slingshot load:

  • Performance measurement for myself and members of my team — the weights assigned are for the entire team; when it comes to individuals (myself included), it’s largely a matter of shifting the weights around; everyone on my team will have all of these objectives, but, in some cases, their role is really to just provide limited support for an objective that someone else is really owning and driving, so the weight of each objective will vary dramatically from person to person
  • Roles and responsibilities for team members — this is tightly related to the above, but is slightly different, in that the performance measurement and objectives are geared towards, “What do you need to achieve,” and it’s useful to think through “…and how are we going to do that?”
  • Alignment with partner groups — my team works closely with IT, as well as with a number of different business areas. This concise set of objectives is a great alignment tool, since achieving most of my objectives requires collaboration with other groups; we need to check that their objectives are in line with ours. If they’re not, it’s better to have the discussion now rather than halfway through the coming year when “inexplicable” friction has developed between the teams because they don’t share priorities
  • Identifying the good and the bad — if done correctly (and, frankly, my team’s are AWESOME), then we’ll be able to check up on our progress fairly regularly throughout the year. At the end of 2009, it’s almost a given that we will land in “Does Not Meet Expectations” for some of our measures. By homing in on where we missed, we’ll be able to focus on why that was and how we can correct it going forward.
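The first bullet above, keeping the same set of objectives but shifting the weights per person, amounts to a simple reweighting. A minimal sketch, where the function name and the multiplier convention are hypothetical:

```python
def personalize_weights(team_weights, emphasis):
    """Shift team-level objective weights for one person, then renormalize.

    `team_weights` maps objective name -> weight (summing to 1.0);
    `emphasis` maps objective name -> multiplier: above 1.0 for objectives
    this person owns and drives, below 1.0 where they only provide
    limited support. Objectives not listed keep their team weight.
    """
    raw = {name: w * emphasis.get(name, 1.0)
           for name, w in team_weights.items()}
    total = sum(raw.values())
    # Renormalize so each person's weights still sum to 1.0
    return {name: w / total for name, w in raw.items()}
```

For example, doubling the emphasis on one objective pushes its personal weight above the team weight while proportionally shrinking the others, so everyone still carries the full set of objectives.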

It’s a great exercise, and is probably the work that I did in this lull period that will have the impact the farthest into 2009.

I’ll let you know how things shake out!
