Analysis, Reporting

Performance Measurement vs. Analysis

I’ve picked up some new terminology over the course of the past few weeks thanks to an intermediate statistics class I’m taking. Specifically, what inspired this post is the distinction between two types of statistical studies, as defined by one of the fathers of statistical process control, W. Edwards Deming. There’s a Wikipedia entry that actually defines them, and the point of making the distinction, quite well:

  • Enumerative study: A statistical study in which action will be taken on the material in the frame being studied.
  • Analytic study: A statistical study in which action will be taken on the process or cause-system that produced the frame being studied. The aim being to improve practice in the future.

…In other words, an enumerative study is a statistical study in which the focus is on judgment of results, and an analytic study is one in which the focus is on improvement of the process or system which created the results being evaluated and which will continue creating results in the future. A statistical study can be enumerative or analytic, but it cannot be both.

I’ve now been at three different schools in three different states where one of the favorite examples used for processes and process control is a process for producing plastic yogurt cups. I don’t know if Yoplait just pumps an insane amount of funding into academia-based research, or if there is some other reason, but I’ll go ahead and perpetuate it by using the same example here:

  • Enumerative study — imagine that the yogurt cup manufacturer is contractually bound to provide shipments where less than 0.1% of the cups are defective. Imagine, also, that fully testing a cup requires destroying it in the process of the test. Using statistics, the manufacturer can pull a sample from each shipment, test those cups, and, if the sampling is set up properly, predict with reasonable confidence the proportion of defective cups in the entire shipment (see the sketch after this list). If the prediction exceeds 0.1%, then the entire shipment can be scrapped rather than risking a contract breach. The same test would be conducted on each shipment.
  • Analytic study — now, suppose the yogurt cup manufacturer finds that he is scrapping one shipment in five based on the process described in the enumerative study. This isn’t a financially viable way to continue. So, he decides to conduct a study to try to determine what factors in his process are causing cups to come out defective. In this case, he may set up a very different study — isolating as many factors in the process as he can to see if he can identify where the trouble spots in the process itself are and fix them.
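
If you want to see the enumerative arithmetic in action, here is a minimal sketch in Python. The sample size, confidence level, and use of a Clopper-Pearson bound are my own illustrative assumptions, not anything prescribed by Deming or by the yogurt cup example:

```python
from scipy.stats import beta

def upper_defect_bound(defects: int, sample_size: int, alpha: float = 0.05) -> float:
    """One-sided Clopper-Pearson upper confidence bound on the shipment's
    defect proportion, based on a destructively tested sample."""
    if defects == sample_size:
        return 1.0
    return beta.ppf(1 - alpha, defects + 1, sample_size - defects)

# Hypothetical shipment: 3,000 cups pulled and destroyed, 0 found defective.
bound = upper_defect_bound(defects=0, sample_size=3000)
print(f"95% upper bound on defect rate: {bound:.4%}")  # ~0.0998%

# The enumerative decision: act on *this* shipment only.
if bound > 0.001:  # the contractual limit of 0.1%
    print("Scrap the shipment")
else:
    print("Ship it")
```

Note how demanding a 0.1% contractual limit is: even a completely defect-free sample has to be quite large before the upper bound dips below it.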

It’s not an either/or scenario. Even if an analytic study (or series of studies) enables him to improve the process, he will likely still need to continue the enumerative studies to identify bad batches when they do occur.

In the class, we have talked about how, in marketing, we are much more often faced with analytic situations rather than enumerative ones. I don’t think this is the case. As I’ve mulled it over, it seems like enumerative studies are typically about performance measurement, while analytic studies are about diagnostics and continuous improvement. See if the following table makes sense:

Enumerative                        Analytic
Performance measurement            Analysis for continuous improvement
How did we do in the past?         How can we do better in the future?
Report                             Analysis

Achievement tests administered to schoolchildren are more enumerative than analytic — they are not geared towards determining which teaching techniques work better or worse, or even to provide the student with information about what to focus on and how going forward. They are merely an assessment of the student’s knowledge. In aggregate, they can be used as an assessment of a teacher’s effectiveness, or a school’s, or a school district’s, or even a state’s.

“But…wait!” you cry! “If an achievement test can be used to identify which teachers are performing better than others, then your so-called ‘process’ can be improved by simply getting rid of the lowest performing teachers, and that’s inherently an analytic outcome!” Maybe so…but I don’t think so. It simply assumes that each teacher is either good, bad, or somewhere in between. Achievement tests do nothing to indicate why a bad teacher is a bad teacher and a good teacher is a good teacher. Now, if the results of the achievement tests are used to identify a sample of good and bad teachers, and then they are observed and studied, then we’re back to an analytic scenario.

Let’s look at a marketing campaign. All too often, we throw out that we want to “measure the results of the campaign.” My claim is that there are two very distinct purposes for doing so…and both the measurement methods and the type of action to be taken are very different:

  • Enumerative/performance measurement — Did the campaign perform as it was planned? Did we achieve the results we expected? Did the people who planned and executed the campaign deliver on what was expected of them?
  • Analytic/analysis — What aspects of the campaign were the most/least effective? What learnings can we take forward to the next campaign so that we will achieve better results the next time?

In practice, you will want to do both. And, you will have to do both at the same time. I would argue that you need to think about the two different types and purposes as separate animals, though, rather than expecting to “measure the results” and muddle them together.

Reporting

Performance Measurement — Starting in the Middle

Like a lot of American companies, Nationwide (the car insurance business, as well as the various other Nationwide businesses) goes into semi-shutdown mode between Christmas and New Year’s. I like racking up some serious consecutive days off as much as the next guy…but it’s also awfully enjoyable to head into work for at least a few days during that period. This year, I’m a new employee, so I don’t have a lot of vacation built up, anyway, and, even though the company would let me go into deficit on the vacation front, I just don’t roll that way. As it is, with one day of vacation, I’m getting back-to-back four-day weekends, and the six days I’ve been in the office while most people are out…have been really productive!

I’m a month-and-a-half into my new job, which means I’m really starting to get my sea legs as to what’s what. And, that means I’m well aware of the tornado of activity that is going to hit when the masses return to work on January 5th. So, in addition to mailbox cleanup, training catch-up, focused effort on some core projects, and the like, I’ve been working on nailing down the objectives for my area for 2009. In the end, this gets to performance measurement on several levels: of me, of the members of my team, of my manager and his organization, and so on. And that’s where “start in the middle” comes into play.

There are balanced scorecard (and other BPM) theoreticians who argue that the only way to set up a good set of performance measures is to start at the absolute highest levels of the organization — the C-suite — and then drill down deeper and deeper from there with ever-more-granular objectives and measures until you get down to each individual employee. Maybe this can work, but I’ve never seen that approach make it more than two steps out of the ivory tower from whence it was proclaimed.

On the other extreme, I have seen organizations begin with the individual performer, or at the team level, starting with what they already measure on a regular basis. The risk there — and I’ve definitely run into this — is that performance measures can wind up driven by what’s easy to measure and largely divorced from any real connection to measuring meaningful objectives for the organization.

Nationwide has a performance measurement structure that, I’m sure, is not all that unique among large companies. But, it’s effective, in that it combines both of the above approaches to get to something meaningful and useful. In my case:

  • There is an element of the performance measurement that is tied to corporate values — values are (or should be) universal in the company and important to the company’s consistent behavior and decision-making, so that’s a good element to drive from the corporate level
  • Departmental objectives — nailing down high-level objectives for the department, which then get “drilled down” as appropriate and weighted appropriately at the group and individual level; these objectives are almost exclusively outcome-based (see my take on outputs vs. outcomes)
  • Team/individual objectives — a good chunk of these are drilldowns from the departmental objectives. But, they also reflect the tactics of how those objectives will be met and, in my mind, can include output measures in addition to outcome measures. 

What I’ve been working on is the team objectives. I have a good sense of the main departmental objectives that I’m helping to drive, so that’s good — that’s “the middle” referenced in the title of this post.

The document I’m working in has six columns:

  • Objectives — the handful of key objectives for my team; I’m at four right now, but I suspect there will be a fifth (and this doesn’t count the values-oriented corporate objective or some of the departmental objectives that I will need to support, but which aren’t core to my daily work)
  • Measures — there is a one-to-many relationship of objectives to measures, and these are simply what I will measure that ties to the objective; the multiple measures are geared towards addressing different facets of the objective (e.g., quality, scope, budget, etc.)
  • Weight — all objectives are not created equal; in my case, for 2009, I’ve got one objective that dominates, a couple of objectives that are fairly important but not dominant, and an objective that is a lower priority, yet is still a valid and necessary objective
  • Targets — these are three columns where, for each measure, we define the range of values for: 1) Does Not Meet Expectations, 2) Achieves Expectations, and 3) Exceeds Expectations (a toy sketch of how weights, measures, and target ranges roll up follows this list)
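
To make the mechanics concrete, here is that toy sketch of how the columns might roll up into a single rating. The objective names, weights, measures, and target ranges below are all invented for illustration; they are not my actual 2009 objectives:

```python
def rate(actual, achieves_min, exceeds_min):
    """Map an actual value onto the three target ranges:
    1 = Does Not Meet, 2 = Achieves, 3 = Exceeds (higher is better)."""
    if actual >= exceeds_min:
        return 3
    return 2 if actual >= achieves_min else 1

# Invented team objectives: (objective, weight, [measures]).
# Each measure is (name, actual, achieves_min, exceeds_min).
objectives = [
    ("Deliver core project",   0.50, [("Milestones on time (%)", 92, 85, 95),
                                      ("Stakeholder rating (1-5)", 4.2, 3.5, 4.5)]),
    ("Improve data quality",   0.30, [("Error-rate reduction (%)", 18, 10, 25)]),
    ("Support partner groups", 0.20, [("Requests closed on time (%)", 97, 90, 98)]),
]

overall = 0.0
for name, weight, measures in objectives:
    # One-to-many: average the measure ratings within each objective.
    score = sum(rate(actual, lo, hi) for _, actual, lo, hi in measures) / len(measures)
    overall += weight * score
    print(f"{name}: {score:.2f} (weight {weight:.0%})")

print(f"Weighted overall rating: {overall:.2f} (1 = Does Not Meet, 3 = Exceeds)")
```

The real version lives in a document, not code, but the arithmetic is the same: rate each measure against its three ranges, roll the measures up to their objective, and weight the objectives.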

It’s tempting to try to fill in all the columns for each objective at once. That’s a mistake. The best bet is to fill in each column for all of the objectives first, then move on to the next column.

This is also freakishly similar to the process we semi-organically developed when I was at National Instruments working on key metrics for individual groups. Performance measurement maturity-wise, Nationwide is ahead of National Instruments (but it is a much larger and much older company, so that is to be expected), in that these metrics are tied to compensation, and there are systems in place to consistently apply the same basic framework across the enterprise.

This exercise kills more than one bird with a single slingshot load:

  • Performance measurement for myself and members of my team — the weights assigned are for the entire team; when it comes to individuals (myself included), it’s largely a matter of shifting the weights around; everyone on my team will have all of these objectives, but, in some cases, their role is really to just provide limited support for an objective that someone else is really owning and driving, so the weight of each objective will vary dramatically from person to person
  • Roles and responsibilities for team members — this is tightly related to the above, but is slightly different, in that the performance measurement and objectives are geared towards, “What do you need to achieve,” and it’s useful to think through “…and how are we going to do that?”
  • Alignment with partner groups — my team works closely with IT, as well as with a number of different business areas. This concise set of objectives is a great alignment tool, since achieving most of my objectives requires collaboration with other groups; we need to check that their objectives are in line with ours. If they’re not, it’s better to have the discussion now rather than halfway through the coming year, when “inexplicable” friction has developed between the teams because they don’t share priorities
  • Identifying the good and the bad — if done correctly (and, frankly, my team’s are AWESOME), then we’ll be able to check up on our progress fairly regularly throughout the year. At the end of 2009, it’s almost a given that we will have “Did not achieve” for some of our measures. By homing in on where we missed, we’ll be able to focus on why that was and how we can correct it going forward.

It’s a great exercise, and, of all the work I did in this lull period, it’s probably the piece that will have an impact reaching the farthest into 2009.

I’ll let you know how things shake out!

Presentation, Reporting

Dashboard Design Part 3 of 3: An Iterative Tale

On Monday, we covered the first chapter of this bedtime tale of dashboard creation: a cutesy approach that made the dashboard into a straight-up reflection of our sales funnel. Last night, we followed that up with the next performance management tracking beast — a scorecard that had lots of detail (too much, really) and too much equality across the various metrics. Tonight’s tale is where we find a happy ending, so snuggle in, kids, and I’ll tell you about…

Version 3 – Hey…Windows Was a Total POS until 3.1…So I’m Not Feeling Too Bad!

(What’s “POS?” Um…go ask your mother. But don’t tell her you heard the term from me!)

As it turned out, versions 1 and 2, combined with some of the process evolution the business had undergone and some data visualization research and experimentation, meant that I was a week’s worth of evenings and a decent chunk of one weekend away from something that actually works:

Some of the keys that make this work:

  • Heavy focus on Few’s Tufte-derived “data-pixel ratio” — asking the question for everything on the dashboard: “If it’s not white space, does it have a real purpose for being on the dashboard?” And, only including elements where the answer is, “Yes.”
  • Recognition that all metrics aren’t equal — I seriously beefed up the most critical, end-of-the-day metrics (almost too much – there’s a plan for the one bar chart to be scaled down in the future once a couple other metrics are available)
  • The exact number of what we did six months ago isn’t important — I added sparklines (with targets when available) so that the only specific number shown is the month-to-date value for the metric; the sparkline shows how the metric has been trending relative to target
  • Pro-rating the targets — it made for formulas that were a bit hairier, but each target line now assumes linear growth over the course of the month; the target on Day 5 of a 30-day month is 1/6 of the total target for the month (see the sketch after this list)
  • Simplification of alerts — instead of red/yellow/green…we went to red/not red; this really makes the trouble spots jump out
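
For the curious, the pro-rating and the red/not-red check boil down to arithmetic like this (a Python sketch with made-up numbers; the real formulas lived in Excel):

```python
from datetime import date
import calendar

def prorated_target(total_target: float, as_of: date) -> float:
    """Linear pro-rating: on day d of an n-day month, the target is
    d/n of the month's total (Day 5 of a 30-day month -> 1/6)."""
    days_in_month = calendar.monthrange(as_of.year, as_of.month)[1]
    return total_target * as_of.day / days_in_month

def alert(actual: float, total_target: float, as_of: date) -> str:
    """Simplified red/not-red check against the pro-rated target."""
    return "RED" if actual < prorated_target(total_target, as_of) else "not red"

# Hypothetical metric: 900 leads targeted for April (a 30-day month).
as_of = date(2009, 4, 5)
print(prorated_target(900, as_of))                       # 150.0, i.e., 1/6 of 900
print(alert(actual=120, total_target=900, as_of=as_of))  # RED
```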

Even as I was developing the dashboard, a couple of things clued me in that I was on a good track:

  • I saw data that was important…but that was out of whack or out of date; this spawned some investigations that yielded good results
  • As I circulated the approach for feedback, I started getting questions about specific peaks/valleys/alerts on the dashboard – people wound up skipping the feedback about the dashboard design itself and jumping right to using the data

It took a couple of weeks to get all of the details ironed out, and I took the opportunity to start a new Access database. The one I had been building on for the past year still works and I still use it, but I’d inadvertently built in clunkiness and overhead along the way. Starting “from scratch” was essentially a minor re-architecting of the platform…but in a way that was quick, clean and manageable.

My Takeaways

Looking back, and telling you this story, has given me a chance to reflect on what the key learnings are from this experience. In some cases, the learning has been a reinforcement of what I already knew. In others, they were new (to me) ideas:

  • Don’t Stop after Version 1 — obviously, this is a key takeaway from this story, but it’s worth noting. In college, I studied to be an architect, and a problem that I always had over the course of a semester-long design project was that, while some of my peers (many of whom are now successful practicing architects) wound up with designs in the final review that looked radically different from what they started with, I spent most of the semester simply tweaking and tuning whatever I’d come up with in the first version of my design. At the same time, those peers could demonstrate that their core vision for their projects was apparent in every design, even if it manifested itself very differently from start to finish. This is a useful analogy for dashboard design — don’t treat the dashboard as “done” just because it’s produced and automated, and don’t consider it a “win” simply because it delivered value. It’s got to deliver the value you intended, and deliver it well, to truly be finished…and then the business can and will evolve, which will drive further modifications.
  • Democratizing Data Visualization Is a “Punt” — in both of the first two dashboards, I had a single visualization approach and I applied that to all of the data. This meant that the data was shoe-horned into whatever that paradigm was, regardless of whether it was data that mattered more as a trend vs. data that mattered more as a snapshot, whether it was data that was a leading indicator  vs. data that was a direct reflection of this month’s results, or whether the data was a metric that tied directly to the business plan vs. data that was “interesting” but not necessarily core to our planning. The third iteration finally broke out of this framework, and the results were startlingly positive.
  • Be Selective about Detailed Data — especially in the second version of the scorecard, we included too much granularity, which made the report overwhelming. To make it useful, the consumers of the dashboard needed to actually take the data and chart it. One of the worst things a data analyst can do is provide a report that requires additional manipulation to draw any conclusions.
  • Targets Matter(!!!) — I’ve mounted various targets-oriented soapboxes in the past, but this experience did nothing if it didn’t shore up that soapbox. The second and third iterations of the dashboard/scorecard included targets for many of the metrics, and this was useful. In some cases, we missed the targets so badly that we had to go back and re-set them. That’s okay. It forced a discussion about whether our assumptions about our business model were valid. We didn’t simply adjust the targets to make them easier to hit — we revisited the underlying business plan based on the realities of our business. This spawned a number of real and needed initiatives.

Will There Be Another Book in the Series?

Even though I am pleased with where the dashboard is today, the story is not finished. Specifically:

  • As I’ve alluded to, there is some missing data here, and there are some process changes in our business that, once completed, will drive some changes to the dashboard; overall, they will make the dashboard more useful
  • As much of a fan as I am of our Excel/Access solution…it has its limitations. I’ve said from the beginning that I was doing functional prototyping. It’s built well enough with Access as a poor man’s operational data store and Excel as the data visualization engine that we can use this for a while…but I also view it as being the basis of requirements for an enterprise BI tool (in this regard, it jibes with a parallel initiative that is client-facing for us). Currently, the dashboard gets updated with current data when either the Director of Finance or I check it out of SharePoint and click a button. It’s not really a web-based dashboard, it doesn’t allow drilling down to detailed data, and it doesn’t have automated “push” capabilities. These are all improvements that I can’t deliver with the current platform.
  • I don’t know what I don’t know. Do you see any areas of concern or flaws with the iteration described in this post? Have you seen something like this fail…or can you identify why it would fail in your organization?

I don’t know when this next book will be written, but you’ll read it here first!

I hope you’ve enjoyed this tale. Or, if nothing else, it’s done that which is critical for any good bedtime story: it’s put you to sleep!  🙂

Presentation, Reporting

Dashboard Design Part 2 of 3: An Iterative Tale

Yesterday, I described my first shot at developing a weekly corporate dashboard for my current company. It was based on the concept of the sales funnel and, while a lot of good came out of the exercise…it was of no use as a corporate performance management tool.

Tonight’s bedtime story will be chapter 2, where the initial beast was slain and a new beast was created in its place. Gather around, kids, and we’ll explore the new and improved beast…

Version 2: A Partner in Crime and a Christmas Tree Scorecard

Several months after the initial dashboard had died an abrupt and appropriate death, we found ourselves backing into looking at monthly trends on a regular basis for a variety of areas of the business. I was involved, as was our Director of Finance. I honestly don’t remember exactly how it happened, but a soft decree hit both of us that we needed to be circulating that data amongst the management team on a weekly basis.

Now, several very positive things had happened by this point that made the task doable:

  • We’d rolled into a new year, and the budgeting and planning that led up to the new year led to a business plan with more specific targets being set around key areas of the business
  • We had cleaned up our processes — the reality of them rather than simply the theory; they were still far from perfect, but they had moved in the right direction to at least have some level of consistency
  • We had achieved greater agreement/buy-in/understanding that there was underlying and necessary complexity in our business, both our business model and our business processes

Although I would still say we failed, we at least failed forward.

As I recall, the Director of Finance took a first cut at the new scorecard, as he was much more in the thick of things when it came to providing the monthly data to the executive team. I then spent a few evenings filling in some holes and doing some formatting and macro work so that we had a one-page scorecard that showed rolling month-to-month results for a number of metrics. These metrics still flowed loosely from the top to the bottom of a marketing and sales funnel:

Some things we did right:

  • Our IT organization had been very receptive to my “this is a nuisance”-type requests over the preceding months and had taken a number of steps to make much of the data more accessible to me much more efficiently (my “data update” routine dropped from tying up my computer for over an hour to taking under 5 minutes); “my” data for the scorecard was still pulled from the same underlying Access database, but it was pulled using a whole new set of queries
  • We incorporated a more comprehensive set of metrics — going beyond simply Sales and Marketing metrics to capture some key Operations data
  • We accepted that we needed to pull some data from the ERP system — the Director of Finance would handle this and had it down to a 5-minute exercise on his end
  • Because we had targets for many of the metrics, we were able to use conditional formatting to highlight what was on track and what wasn’t. And, we added a macro that would show/hide the targets to make it easy to reduce the clutter on the scorecard (although it was still cluttered even with the targets hidden)
  • We reported historical data — the totals for each past month, as well as the color-coding of where that month ended up relative to its target.
  • We allowed a few metrics that did not have targets set — this offended my purist sensibilities, and, honestly, it was the least useful data, but it was appropriate to include in some cases.

We even included limited “drilldown” capability — hyperlinks next to different rows in the scorecard (not shown in the image above) that, when clicked, jumped to another worksheet that had more granular detail.

But the scorecard was still a failure.

We found ourselves updating it once a week and pulling it up for review in a management meeting…and increasingly not discussing it at all. As a matter of fact, just how abstract-but-not-useful this weekly exercise had become really hit home when we got to version 3…and quickly realized how much of the data we had let lapse when it came to updates.

So, what was wrong with it? Several things:

  • Too much detailed data — because we had forsaken graphical elements almost entirely, we were able to cram a lot of data into a tabular grid. We found ourselves including some metrics to make the scorecard “complete” simply because we could – for instance, if we included total leads and, as a separate metric, leads who were entirely new to the company, then, for the sake of symmetry, we included the number of leads for the month who were already in our database: new + existing = total. This was redundant and unnecessary
  • We treated all of the metrics the same — everything was represented as a monthly total, be it the number of leads generated, the number of opportunities closed, the amount of revenue booked, or the headcount for the company; we didn’t think about what really made sense – we just presented it all equally
  • No pro-rating of the targets — we had a simple red/yellow/green scheme for the conditional formatting alerts; but, we compared the actuals for each metric to the total targets for the month; this meant that, for the first half of the month, virtually every metric was in the red

Pretty quickly, I saw that version 2 represented some improvements from version 1, but, somehow, wasn’t really any better at helping us assess the business.

At that point, we fell into a pretty common trap for data analysts: once a report has stabilized, we find a way to streamline its production and automate it as much as possible, simply to remove the tedium of its creation. I’ve got countless examples from my own experience where a BI or web analytics tool can automate the creation and e-mailing of reports. Once it’s automated, the cost to produce it each day/week/month goes virtually to zero, so there is no motivation to go back and ask, “Is this of any real value?” Avinash Kaushik calls this being a “reporting squirrel” (see Rule #3 in his post: Six Rules for Creating A Data-Driven Boss) or a “data puke” (see Filter #1 in his post: Consultants, Analysts: Present Impactful Analysis, Insightful Reports), and it’s one of the worst places to find yourself.

Even though I was semi-aware of what had happened, the truth is that we would likely still be cruising along producing this weekly scorecard save for two things:

  • What was acceptable for internal consumption was not acceptable for the reports we provided to our clients. The other almost-full-time analyst in the company and I had embarked on some aggressive self-education when it came to data visualization best practices; we started trolling The Dashboard Spy site, we read some Stephen Few, we poked around in the new visualization features of Excel 2007, and generally started a vigorous internal effort to overhaul the reporting we were providing to our clients (and to ourselves as our own clients)
  • The weekly meeting where the managers reviewed the scorecard got replaced with an “as-needed” meeting, with the decision that the scorecard would still be prepared and presented weekly…to the entire company

So, what really happened was that fear of being humiliated internally spurred another hasty revision of the scorecard…and its evolution into more of a dashboard.

And that, kids, will be the subject of tomorrow’s bedtime tale. But, as you snuggle under your comforter and burrow your head into your pillow, think about the approach I’ve described here. Do you use something similar that actually works? If so, why? What problems do you see with this approach? What do you like?

Presentation, Reporting

Dashboard Design Part 1 of 3: An Iterative Tale

One of my responsibilities when I joined my current company was to institute some level of corporate performance management through the use of KPIs and a scorecard or dashboard. It’s a small company, and it was a fun task. In the end, it took me over a year to get to something that really seems to work. On the one hand, that’s embarrassing. On the other hand, it was a side project that never got a big chunk of my bandwidth. And, like many small companies, we have been fairly dynamic when it comes to nailing down and articulating the strategies we are using to drive the company.

Looking back, there have been three very distinct versions of the corporate scorecard/dashboard. What drove them, what worked about them, and what didn’t work about them, makes for an interesting story. So gather around, children, and I will regale you with the tale of this sordid adventure. Actually, we don’t have time to go through the whole story tonight, so we’ll hit one chapter a day for the next three days.

If you want to click on your flashlight and pull the covers over your head and do a little extra reading after I turn off the light, Avinash Kaushik has a recent post that was timely for me to read as I worked up this bedtime tale: Consultants, Analysts: Present Impactful Analysis, Insightful Reports. The post has the seven “filters” Avinash developed as he judged a WAA competition, and it’s a bit skewed towards web analytics reporting…but, as usual, it’s pretty easy to extrapolate his thoughts to a broader arena. The first iteration of our corporate dashboard would have gotten hammered by most of his filters. Where we are today (which we’ll get to in due time), isn’t perfect, but it’s much, much better when assessed against these filters.

One key piece of background here is that the technology I’ve had available to me throughout this whole process does not include any of the big “enterprise BI” tools. All three of the iterations were delivered using Excel 2003 and Access 2003, with some hooks into several different backend systems.

That was fine with me for a couple of reasons:

  • It allowed me to produce and iterate on the design quickly and independently – I didn’t need to pull in IT resources for drawn-out development work
  • It was cheap – I didn’t need to invest in any technology beyond what was already on my computer

So, let’s dive in, shall we?

Version 1: The “Clever” Approach As I Learned the Data and the Business

I rolled out the first iteration of a corporate dashboard within a month of starting the job. I took a lot of what I was told about our strategy and objectives at face value and looked at the exercise as being a way to cut my teeth on the company’s data, as well as a way to show that I could produce.

The dashboard I came up with was based on the sales funnel paradigm. We had clearly defined and deployed stages (or so I thought) in the progression of a prospect from the point of being simply a lead all the way through being an opportunity and becoming revenue. We believed that what we needed to keep an eye on week to week was pretty simple:

  • How many people were in each stage
  • How many had moved from one stage to another

We had a well-defined…theoretical…sales funnel. We had Marketing feeding leads into that funnel. Sure, the data in our CRM wasn’t perfect, but by reporting off of it, we would drive improvements in the data integrity by highlighting the occasional wart and inconsistency. Right…?

I crafted the report below. Simply put, the numbers in each box represented the number of leads/opportunities at that stage of our funnel, and the number in each arrow between boxes represented the number who had moved from one box to the next over the prior week.
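
As a rough sketch of the arithmetic behind the boxes and arrows, assuming weekly snapshots of each lead’s funnel stage keyed by a lead ID (a simplification of what actually came out of the CRM via Access), it amounted to something like this:

```python
from collections import Counter

# Hypothetical weekly snapshots: lead_id -> funnel stage.
last_week = {101: "Lead", 102: "Lead", 103: "Opportunity", 104: "Opportunity"}
this_week = {101: "Opportunity", 102: "Lead", 103: "Closed Won",
             104: "Opportunity", 105: "Lead"}  # 105 is brand new

# The boxes: how many leads/opportunities are in each stage right now.
print(Counter(this_week.values()))

# The arrows: how many moved from one stage to another over the week.
moves = Counter(
    (last_week[lead], stage)
    for lead, stage in this_week.items()
    if lead in last_week and last_week[lead] != stage
)
for (src, dst), count in moves.items():
    print(f"{src} -> {dst}: {count}")

# Mid-funnel entries and exits: leads that appeared or vanished entirely.
entered = this_week.keys() - last_week.keys()
exited = last_week.keys() - this_week.keys()
print(f"entered: {len(entered)}, exited: {len(exited)}")
```

Those last two lines hint at the mid-funnel entries and exits that, as you’ll see below, broke the clean funnel picture.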

High fives all around!

Except…

It became apparent almost immediately that the report was next to useless when it came to its intended purpose:

  • It turned out, our theoretical funnel really didn’t match reality – our funnel had all sorts of opportunities entering and exiting mid-funnel…and there was generally a reasonable explanation each time that happened.
  • There were no targets for any of these numbers – I’d quietly raised this point up front, but was rebuffed with the even-then familiar refrain: “We can’t set a target until we look at the data for a while.” But…no targets were ever set. Partly because…
  • “Time” was poorly represented – the arrows represented a snapshot of movement over the prior week…but no trending information was available
  • Much of the data didn’t “match” the data in the CRM – while the data was coming from the underlying database tables in the CRM, I had to do some cleanup and massaging to make it truly fit the funnel paradigm. Between that and the fact that I was only refreshing my data once/week, a comparison of a report in the CRM to my weekly report invariably invited questions as to why the numbers were different. I could always explain why, and I was always “right,” and it wasn’t exactly that people didn’t trust my report…but it just made them question the overall point a little bit more.
  • I had access to the data in some of our systems…but not all of them; most importantly, the data in our ERP system was not readily accessible, either through scheduled report exports or an ODBC connection; and, at the end of the day…that’s where several of our KPIs (in reality…if not named as such) lived; back to my first point, there were theoretical ways to get financial data out of our CRM…but, in practice, there was often a wide gulf between the two.

As I labored to address some of these issues, I wound up with several versions of the report that, tactically, did a decent job…but made the report more confusing.

The sorts of things I tried included:

  • Adding arrows and numbers that would conditionally appear/disappear in light gray that showed non-standard entries/exits from the funnel
  • Adding information within each box to indicate how it compared to the prior week (still not a “trend,” but at least a week-over-week comparison)
  • Adding moving averages for many of the numbers
  • Adding a total for the prior 12 weeks for many of the numbers (a rough sketch of these rolling calculations appears after this list)
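
None of these calculations were exotic. In pandas terms, with made-up weekly numbers standing in for the real data, the week-over-week comparison, moving average, and trailing 12-week total look like this:

```python
import pandas as pd

# Made-up weekly lead counts standing in for the real numbers.
leads = pd.Series([40, 52, 47, 61, 55, 49, 58, 63, 51, 57, 60, 66, 59],
                  name="weekly_leads")

# Week-over-week comparison (still not a trend, but a start).
wow_change = leads.diff()

# Moving average to smooth out week-to-week noise.
moving_avg = leads.rolling(window=4).mean()

# Total for the prior 12 weeks.
trailing_12wk = leads.rolling(window=12).sum()

print(pd.DataFrame({"leads": leads, "wow": wow_change,
                    "4wk_avg": moving_avg, "12wk_total": trailing_12wk}))
```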

All told, I had five different iterations on this concept — each time taking feedback as to what it was lacking or where it was confusing and trying to address it.

To no avail.

Even as I look back on the different iterations now, it’s clear that each iteration introduced as many new issues as it addressed existing ones.

Still, some real good had come of the exercise:

  • I understood the data and our processes quite well — tracking down why certain opportunities behaved a certain way gave me a firehose sip of knowledge into our internal sales processes
  • With next to zero out-of-pocket technology investment, I’d built a semi-automated process for aggregating and reporting the data — I had to run a macro in MS Access that took ~1 hour to run (it was pulling data across the Internet from our SaaS CRM) and then do a “Refresh All” in Excel; I still had a little bit of manual work each week, so it took me ~30 minutes each time I produced the report
  • I’d built some credibility and trust with IT — as I dove in to try to understand the data and processes, I was quickly asking intelligent questions and, on occasion, uncovering minor system bugs

Unfortunately, none of these were really the primary intended goal of the dashboard. The report really just wasn’t of much use to anyone. This came to a head one afternoon after I’d been dutifully producing it each week (and scratching my head as to what it was telling me) when the CEO, in a fit of polite but real pique, popped off, “You know…nobody actually looks at this report! It doesn’t tell us anything useful!” To which I replied, “I couldn’t agree more!” And stopped producing it.

A few months passed, and I focused more of my efforts on helping clean up our processes and doing various ad hoc analyses — using the knowledge and technology I had picked up through the initial dashboard development, most assuredly…but the idea of a dashboard/scorecard migrated to the back burner.

Tomorrow, kiddies, as I tuck you in at night, I’ll tell the tale of Version 2 — a scorecard with targets! As you drift off to sleep though, ponder this version. What would you have done differently? What problems with it do you see? Is there anything that looks like it holds promise?