Dashboard Design Part 2 of 3: An Iterative Tale
Yesterday, I described my first shot at developing a weekly corporate dashboard for my current company. It was based on the concept of the sales funnel and, while a lot of good came out of the exercise…it was of no use as a corporate performance management tool.
Tonight’s bedtime story will be chapter 2, where the initial beast was slain and a new beast was created in its place. Gather around, kids, and we’ll explore the new and improved beast…
Version 2: A Partner in Crime and a Christmas Tree Scorecard
Several months after the initial dashboard had died an abrupt and appropriate death, we found ourselves backing into regularly reviewing monthly trends for a variety of areas of the business. I was involved, as was our Director of Finance. I honestly don’t remember exactly how it happened, but a soft decree came down to both of us that we needed to be circulating that data amongst the management team on a weekly basis.
Now, several very positive things had happened by this point that made the task doable:
- We’d rolled into a new year, and the budgeting and planning leading up to it produced a business plan with more specific targets set around key areas of the business
- We had cleaned up our processes — the reality of them rather than simply the theory; they were still far from perfect, but they had moved in the right direction to at least have some level of consistency
- We had achieved greater agreement/buy-in/understanding that there was underlying and necessary complexity in our business, both our business model and our business processes
Although I would still say we failed, we at least failed forward.
As I recall, the Director of Finance took a first cut at the new scorecard, as he was much more in the thick of things when it came to providing the monthly data to the executive team. I then spent a few evenings filling in some holes and doing some formatting and macro work so that we had a one-page scorecard that showed rolling month-to-month results for a number of metrics. These metrics still flowed loosely from the top to the bottom of a marketing and sales funnel.
Some things we did right:
- Our IT organization had been very receptive to my “this is a nuisance”-type requests over the preceding months and had taken a number of steps to make much of the data accessible to me far more efficiently (my “data update” routine dropped from tying up my computer for over an hour to taking under 5 minutes); “my” data for the scorecard was still pulled from the same underlying Access database, but it was pulled using a whole new set of queries
- We incorporated a more comprehensive set of metrics, going beyond simply Sales and Marketing metrics to capture some key Operations data
- We accepted that we needed to pull some data from the ERP system; the Director of Finance would handle this and had it down to a 5-minute exercise on his end
- Because we had targets for many of the metrics, we were able to use conditional formatting to highlight what was on track and what wasn’t (the basic red/yellow/green logic is sketched below). And, we added a macro that would show/hide the targets to make it easy to reduce the clutter on the scorecard (although it was still cluttered even with the targets hidden)
- We reported historical data: the totals for each past month, as well as the color-coding of where that month ended up relative to its target
- We allowed a few metrics that did not have targets set, offending my purist sensibilities; honestly, this was the least useful data, but it was appropriate to include in some cases
We even included limited “drilldown” capability — hyperlinks next to different rows in the scorecard (not shown in the image above) that, when clicked, jumped to another worksheet that had more granular detail.
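For the curious, the conditional formatting boiled down to comparing each metric’s actual value against its target and assigning a red/yellow/green status. Here’s a minimal Python sketch of that logic; we actually did it with Excel’s built-in conditional formatting and a small macro, and the 90%/75% thresholds below are illustrative assumptions rather than the cutoffs we used:

```python
# Minimal sketch of the red/yellow/green status logic the scorecard's
# conditional formatting encoded. Threshold values are illustrative
# assumptions, not our actual cutoffs.
GREEN_THRESHOLD = 0.90   # at or above 90% of target reads as "on track"
YELLOW_THRESHOLD = 0.75  # between 75% and 90% reads as "worth watching"

def status(actual, target):
    """Return 'green', 'yellow', or 'red' for a metric versus its target."""
    if target is None or target == 0:
        return "no target"  # a few metrics had no target set
    ratio = actual / target
    if ratio >= GREEN_THRESHOLD:
        return "green"
    if ratio >= YELLOW_THRESHOLD:
        return "yellow"
    return "red"

# Example: 48 leads against a monthly target of 60 is 80% of target
print(status(48, 60))  # -> 'yellow'
```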
But the scorecard was still a failure.
We found ourselves updating it once a week and pulling it up for review in a management meeting…and increasingly not discussing it at all. As a matter of fact, just how abstract-but-not-useful a picture this weekly exercise had become only really became clear when we got to version 3…and quickly realized how much of the data we had let lapse when it came to updates.
So, what was wrong with it? Several things:
- Too much detailed data: because we had forsaken graphical elements almost entirely, we were able to cram a lot of data into a tabular grid. We found ourselves including some metrics simply because we could, just to make the scorecard “complete” – for instance, if we included total leads and, as a separate metric, leads who were entirely new to the company, then, for the sake of symmetry, we also included the number of leads for the month who were already in our database: new + existing = total. This was redundant and unnecessary
- We treated all of the metrics the same: everything was represented as a monthly total, be it the number of leads generated, the number of opportunities closed, the amount of revenue booked, or the headcount for the company; we didn’t think about what presentation really made sense for each metric, we just presented them all equally
- No pro-rating of the targets: we had a simple red/yellow/green scheme for the conditional formatting alerts, but we compared the actuals for each metric to the full-month targets. This meant that, for the first half of the month, virtually every metric was in the red (see the sketch below for the kind of pro-rating we should have applied)
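Pro-rating would have been a small fix. Here’s a hedged illustration of scaling a full-month target by the fraction of the month that has elapsed; again, this is a sketch of the idea, not something we actually implemented at the time:

```python
import calendar
from datetime import date

def prorated_target(full_month_target, as_of=None):
    """Scale a full-month target by the fraction of the month elapsed."""
    as_of = as_of or date.today()
    days_in_month = calendar.monthrange(as_of.year, as_of.month)[1]
    return full_month_target * (as_of.day / days_in_month)

# Example: halfway through a 30-day month, a full-month target of 60
# leads pro-rates to a target of 30.
print(prorated_target(60, date(2008, 6, 15)))  # -> 30.0
```

Using the illustrative thresholds from the earlier sketch, 28 leads at mid-month would show red against the full-month target of 60 (about 47%) but green against the pro-rated target of 30 (about 93%), which is a much fairer read on whether we were on pace.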
Pretty quickly, I saw that version 2 represented some improvements over version 1 but, somehow, wasn’t really any better at helping us assess the business.
At that point, we fell into a pretty common trap for data analysts: once a report has stabilized, we find a way to streamline its production and automate it as much as possible simply to remove the tedium of creating it. I’ve got countless examples from my own experience where a BI or web analytics tool has the ability to automate the creation and e-mailing of reports. Once a report is automated, the cost to produce it each day/week/month drops to virtually zero, so there is no motivation to go back and ask, “Is this of any real value?” Avinash Kaushik calls this being a “reporting squirrel” (see Rule #3 in his post: Six Rules for Creating A Data-Driven Boss) or a “data puke” (see Filter #1 in his post: Consultants, Analysts: Present Impactful Analysis, Insightful Reports), and it’s one of the worst places to find yourself.
Even though I was semi-aware of what had happened, the truth is that we would likely still be cruising along producing this weekly scorecard save for two things:
- What was acceptable for internal consumption was not acceptable for the reports we provided to our clients. The other almost-full-time analyst in the company and I had embarked on some aggressive self-education when it came to data visualization best practices; we started trolling The Dashboard Spy site, we read some Stephen Few, we poked around in the new visualization features of Excel 2007, and generally started a vigorous internal effort to overhaul the reporting we were providing to our clients (and to ourselves as our own clients)
- The weekly meeting where the managers reviewed the scorecard got replaced with an “as-needed” meeting, with the decision that the scorecard would still be prepared and presented weekly…to the entire company
So, what really happened was that fear of being humiliated internally spurred another hasty revision of the scorecard…and its evolution into more of a dashboard.
And that, kids, will be the subject of tomorrow’s bedtime tale. But, as you snuggle under your comforter and burrow your head into your pillow, think about the approach I’ve described here. Do you use something similar that actually works? If so, why? What problems do you see with this approach? What do you like?