Dashboard Design Part 1 of 3: An Iterative Tale
One of my responsibilities when I joined my current company was to institute some level of corporate performance management through the use of KPIs and a scorecard or dashboard. It’s a small company, and it was a fun task. In the end, it took me over a year to get to something that really seems to work. On the one hand, that’s embarrassing. On the other hand, it was a side project that never got a big chunk of my bandwidth. And, like many small companies, we have been fairly dynamic when it comes to nailing down and articulating the strategies we are using to drive the company.
Looking back, there have been three very distinct versions of the corporate scorecard/dashboard. What drove them, what worked about them, and what didn’t work about them make for an interesting story. So gather around, children, and I will regale you with the tale of this sordid adventure. Actually, we don’t have time to go through the whole story tonight, so we’ll hit one chapter a day for the next three days.
If you want to click on your flashlight and pull the covers over your head and do a little extra reading after I turn off the light, Avinash Kaushik has a recent post that was timely for me to read as I worked up this bedtime tale: Consultants, Analysts: Present Impactful Analysis, Insightful Reports. The post has the seven “filters” Avinash developed as he judged a WAA competition, and it’s a bit skewed towards web analytics reporting…but, as usual, it’s pretty easy to extrapolate his thoughts to a broader arena. The first iteration of our corporate dashboard would have gotten hammered by most of his filters. Where we are today (which we’ll get to in due time), isn’t perfect, but it’s much, much better when assessed against these filters.
One key piece of background here is that the technology I’ve had available to me throughout this whole process does not include any of the big “enterprise BI” tools. All three of the iterations were delivered using Excel 2003 and Access 2003, with some hooks into several different backend systems.
That was fine with me for a couple of reasons:
- It allowed me to produce and iterate on the design quickly and independently – I didn’t need to pull in IT resources for drawn-out development work
- It was cheap – I didn’t need to invest in any technology beyond what was already on my computer
So, let’s dive in, shall we?
Version 1: The “Clever” Approach As I Learned the Data and the Business
I rolled out the first iteration of a corporate dashboard within a month of starting the job. I took a lot of what I was told about our strategy and objectives at face value and looked at the exercise as being a way to cut my teeth on the company’s data, as well as a way to show that I could produce.
The dashboard I came up with was based on the sales funnel paradigm. We had clearly defined and deployed stages (or so I thought) in the progression of a prospect from the point of being simply a lead all the way through being an opportunity and becoming revenue. We believed that what we needed to keep an eye on week to week was pretty simple:
- How many people were in each stage
- How many had moved from one stage to another
We had a well-defined…theoretical…sales funnel. We had Marketing feeding leads into that funnel. Sure, the data in our CRM wasn’t perfect, but by reporting off of it, we would drive improvements in the data integrity by highlighting the occasional wart and inconsistency. Right…?
I crafted the report below. Simply put, the number in each box represented the count of leads/opportunities at that stage of our funnel, and the number on each arrow between boxes represented how many had moved from one box to another over the prior week.
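For the mechanically inclined, the underlying computation was nothing fancy. Here’s a minimal sketch in Python (the real version lived in Access queries and Excel formulas, and the stage names and data shape here are hypothetical):

```python
from collections import Counter

# Hypothetical weekly snapshots: opportunity ID -> funnel stage.
# (The real data came out of CRM tables via Access; these names are made up.)
last_week = {"opp-1": "Lead", "opp-2": "Qualified", "opp-3": "Lead"}
this_week = {"opp-1": "Qualified", "opp-2": "Proposal", "opp-3": "Lead"}

# The number in each box: how many opportunities sit in each stage now.
stage_counts = Counter(this_week.values())

# The number on each arrow: how many moved between a given pair of
# stages since the prior snapshot.
transitions = Counter(
    (last_week[opp], stage)
    for opp, stage in this_week.items()
    if opp in last_week and last_week[opp] != stage
)

print(stage_counts)  # Counter({'Qualified': 1, 'Proposal': 1, 'Lead': 1})
print(transitions)   # Counter({('Lead', 'Qualified'): 1, ('Qualified', 'Proposal'): 1})
```

Note that anything entering or exiting mid-funnel doesn’t fit this neat pairwise model at all, which, as you’ll see in a moment, turned out to matter.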
High fives all around!
Except…
It became apparent almost immediately that the report was next to useless when it came to its intended purpose:
- It turned out that our theoretical funnel really didn’t match reality – our funnel had all sorts of opportunities entering and exiting mid-funnel…and there was generally a reasonable explanation each time that happened.
- There were no targets for any of these numbers – I’d quietly raised this point up front, but was rebuffed with the even-then familiar refrain: “We can’t set a target until we look at the data for a while.” But…no targets were ever set. Partly because…
- “Time” was poorly represented – the arrows represented a snapshot of movement over the prior week…but no trending information was available
- Much of the data didn’t “match” the data in the CRM – while the data was coming from the underlying database tables in the CRM, I had to do some cleanup and massaging to make it truly fit the funnel paradigm. Between that and the fact that I was only refreshing my data once/week, a comparison of a report in the CRM to my weekly report invariably invited questions as to why the numbers were different. I could always explain why, and I was always “right,” and it wasn’t exactly that people didn’t trust my report…but it just made them question the overall point a little bit more.
- I had access to the data in some of our systems…but not all of them. Most importantly, our ERP system’s data was not readily accessible through either scheduled report exports or an ODBC connection…and, at the end of the day, that’s where several of our KPIs (in reality, if not in name) lived. Echoing my first point, there were theoretical ways to get financial data out of our CRM…but, in practice, there was often a wide gulf between the two.
As I labored to address some of these issues, I wound up with several versions of the report that, tactically, did a decent job…but made the report more confusing.
The sorts of things I tried included:
- Adding light gray arrows and numbers that conditionally appeared/disappeared to show non-standard entries into and exits from the funnel
- Adding information within each box to indicate how it compared to the prior week (still not a “trend,” but at least a week-over-week comparison)
- Adding moving averages for many of the numbers
- Adding a total for the prior 12 weeks for many of the numbers (a sketch of these last two calculations follows this list)
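Both of those last two additions boil down to simple trailing-window calculations over the weekly history. A minimal sketch in Python, assuming a plain list of weekly counts (the actual versions were Excel formulas, and the numbers here are made up):

```python
def moving_average(values, window=4):
    """Trailing moving average; None until a full window of weeks exists."""
    return [
        sum(values[i - window + 1 : i + 1]) / window if i >= window - 1 else None
        for i in range(len(values))
    ]

def trailing_total(values, window=12):
    """Rolling sum over the prior `window` weeks (shorter at the start)."""
    return [sum(values[max(0, i - window + 1) : i + 1]) for i in range(len(values))]

weekly_new_leads = [40, 52, 47, 61, 38, 55]  # hypothetical weekly counts
print(moving_average(weekly_new_leads))     # [None, None, None, 50.0, 49.5, 50.25]
print(trailing_total(weekly_new_leads, 4))  # [40, 92, 139, 200, 198, 201]
```

In Excel terms, these amounted to nothing more than AVERAGE and SUM over trailing ranges of the weekly columns.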
All told, I had five different iterations on this concept — each time taking feedback as to what it was lacking or where it was confusing and trying to address it.
To no avail.
Even as I look back on the different iterations now, it’s clear that each iteration introduced as many new issues as it addressed existing ones.
Still, some real good had come of the exercise:
- I understood the data and our processes quite well – tracking down why certain opportunities behaved a certain way gave me a sip from the firehose of our internal sales processes
- With next to zero out-of-pocket technology investment, I’d built a semi-automated process for aggregating and reporting the data – I had to run an MS Access macro that took ~1 hour (it was pulling data across the Internet from our SaaS CRM) and then do a “Refresh All” in Excel; I still had a little bit of manual work each week, so each report took me ~30 minutes of hands-on time to produce
- I’d built some credibility and trust with IT – as I dove in to try to understand the data and processes, I was quickly asking intelligent questions and, on occasion, uncovering minor system bugs
Unfortunately, none of these were really the primary intended goal of the dashboard. The report really just wasn’t of much use to anyone. This came to a head one afternoon after I’d been dutifully producing it each week (and scratching my head as to what it was telling me) when the CEO, in a fit of polite but real pique, popped off, “You know…nobody actually looks at this report! It doesn’t tell us anything useful!” To which I replied, “I couldn’t agree more!” And stopped producing it.
A few months passed, and I focused more of my efforts on helping clean up our processes and doing various ad hoc analyses – using the knowledge and technology I had picked up through the initial dashboard development, most assuredly…but the idea of a dashboard/scorecard migrated to the back burner.
Tomorrow, kiddies, as I tuck you in at night, I’ll tell the tale of Version 2 — a scorecard with targets! As you drift off to sleep though, ponder this version. What would you have done differently? What problems with it do you see? Is there anything that looks like it holds promise?