Analytics Strategy, Excel Tips, Presentation

Data Visualization Tips and Concepts (Monish Datta calls it "stellar")*

Columbus Web Analytics Wednesday was sponsored by Resource Interactive last week, and it was, as usual, a fun and engaging event:

Web Analytics Wednesday -- Attendees settling in

We tried a new venue — the Winking Lizard on Bethel Road — and were pretty pleased with the accommodations (private room, private bar, very reasonable prices), so I expect we’ll be back.

Relatively new dad Bryan Cristina had a child care conflict with his wife…so he brought along Isabella (who was phenomenally calm and well-behaved, and is cute as a button!):

Bryan and Isabella

I presented on a topic I’m fairly passionate about — data visualization. The presentation was well-received (Monish Datta really did tweet that it was “stellar”) and generated a lot of good discussion. I had several requests for copies of the presentation, so I’ve modified it slightly to make it more Slideshare-friendly and posted it. If you click through on the embedded version below, you can see the notes for each slide by clicking on the “Notes on Slide X” tab underneath the slideshow, or you can download the file itself (PowerPoint 2007), which includes notes with each slide. (I think you might have to create or log in to a Slideshare account, which it looks like you can do quickly using Facebook Connect.)

I had fun putting the presentation together, as this is definitely a topic that I’m passionate about!

* The “Monish Datta” reference in the title of this post, while accurate, is driven by my never-ending quest to dominate search rankings for searches for Monish. I’m doing okay, but not exactly dominating.


Reporting

The Perfect Dashboard: Three Pieces of Information

I’ve been spending a lot of time lately working on dashboards — different dashboards for different purposes for different clients, with a heavy emphasis on making dashboards that can be efficiently updated. I’m finding that I keep coming back to two key principles:

  • A dashboard, by definition, fits on a single page — this is straight out of Stephen Few’s book Information Dashboard Design: The Effective Visual Communication of Data; I was skeptical that this was really possible when I first read it, but I’ve increasingly become a believer…with the caveat that there is ancillary data that can be provided with a dashboard as backup/easy drilldowns
  • The dashboard must include logic to dynamically highlight the areas that require the most attention.

The second principle is the focus of this post.

Actionable Metrics

It’s become cliché to observe that data must be converted to information that drives action. I’ve got no argument with that, but, all too often, the people who make that observation would also see this statement as blasphemy:

Most metrics should drive no action most of the time

Any good performance measurement system is based on a structured set of meaningful metrics. Each of those metrics has a target set, either as a hard number, as a comparison to a prior period, as a comparison to some industry measure, or something else.

Here’s the key, though: most of those metrics will likely come in within their target range most of the time! That’s a good thing, because it is rare that a business is equipped to chase more than a handful of issues at once.
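A minimal sketch of that idea (the metric names, actuals, and target ranges below are hypothetical, not from any real client): with sensible target ranges, most metrics pass silently and only the exceptions surface.

```python
# Hypothetical metrics with target ranges; only out-of-range ones
# should surface on the conceptual dashboard.
metrics = {
    "Visits":           (10500, (9000, 12000)),   # actual, (low, high)
    "Conversion Rate":  (0.018, (0.020, 0.035)),  # below target range
    "Avg. Order Value": (86.0,  (75.0, 95.0)),
    "Email Open Rate":  (0.31,  (0.18, 0.28)),    # above target range
}

def needs_attention(actual, target_range):
    low, high = target_range
    return actual < low or actual > high

alerts = {name: actual for name, (actual, rng) in metrics.items()
          if needs_attention(actual, rng)}

# Most metrics drive no action; only two surface here.
print(alerts)  # {'Conversion Rate': 0.018, 'Email Open Rate': 0.31}
```

Note that "off target in a good way" (the open rate) surfaces too, which is exactly the behavior the conceptual dashboard wants.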

A Conceptual (If Not Realistic) Dashboard

At the end of the day, when your user looks at a dashboard, here’s what they really are hoping to get:

Conceptual Dashboard

This is as actionable as it gets:

  • Only the areas that are performing well outside of expectations are shown
  • What’s actually happening is stated in plain English
  • The person viewing the dashboard has a concise list of what he/she needs to start looking into immediately

Will your users ever tell you this is what they’re looking for? No! And, if asked, the reasons why not would include:

  • “I need to see everything that is going on — not just the stuff that is performing outside targets (…because I’m just not comfortable trusting that we designed a dashboard that is good enough to catch all the things that really matter).”
  • “My boss is likely to ask me about her specific pet metric…so I need to have that information at my fingertips, even if it’s not going to drive me to take new action.”
  • “I need to see all of the data so that I can identify patterns and correlations across different aspects of the marketing program.”

Arguing any of these points is an exercise in futility. The explosion of data that is available, the fact that not a day passes without a Major Marketing Mind talking about how important it is for us to leverage the wealth of data at our fingertips, and the fact that humans are inherently distrustful of automation until they have seen it working successfully for an extended period of time all mean that a dashboard, in practice, has to include a decent chunk of information that will not drive any new action.

But the Concept Is Still Useful

I believe the conceptual dashboard above is a useful guiding vision. At the end of the day, we want to provide, and our users want to receive, information that is clear and concise, which the dashboard above certainly is. If we morph the concept above just a little bit, though, we get a dashboard that is only slightly less impactful but heads off all of the concerns listed earlier:

Conceptual Dashboard

Get the idea? The same highlights pop, but additional data is included, and it all still fits on a single page. Obviously, the real dashboard would be one step further diluted from this by presenting actual metrics — numbers, sparklines, etc. But, by working hard to keep all of the on-target data as muted as possible, some clever use of bold and color through conditional formatting can still make what’s important pop.

Parting Thoughts and Clarifications

Any dashboard, whether it’s managed through an enterprise BI tool, through Microsoft Excel, or even through PowerPoint, should be designed so that the structure of the dashboard does not change from one reporting period to the next — the same metrics appear in the same place week in and week out. BUT, within that structure, there should be a concerted effort to make sure that the metrics that are the farthest off target (usually the ones that are the farthest off target in a bad way, but if something is off the charts in a good way, that needs to be highlighted as well) are what the user’s eye is drawn to. And, furthermore, those are the metrics that warrant the first pass of drilling down to look for root causes.
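That “farthest off target” logic can be sketched in a few lines (metric names and numbers are illustrative, not from any real dashboard): compute each metric’s deviation from target, then let the largest absolute deviations drive the emphasis, while the layout itself stays fixed.

```python
# Keep the dashboard structure fixed, but compute how far each metric
# is off target (in either direction) so conditional formatting can
# emphasize the worst offenders first.
metrics = [
    ("Leads",         480,   500),   # (name, actual, target)
    ("Opportunities",  42,    60),
    ("Revenue",     91000, 90000),
    ("Win Rate",     0.18,  0.25),
]

def pct_off_target(actual, target):
    return (actual - target) / target

# Order of emphasis: largest absolute deviation first; the layout
# never changes, only which cells get the bold/red treatment.
emphasis = sorted(metrics,
                  key=lambda m: abs(pct_off_target(m[1], m[2])),
                  reverse=True)
for name, actual, target in emphasis:
    print(f"{name}: {pct_off_target(actual, target):+.0%}")
```

Here “Opportunities” (30% under target) and “Win Rate” (28% under) would get the visual weight, and they are also the first candidates for root-cause drilldown.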

Presentation, Reporting

Dashboard Design Part 3 of 3: An Iterative Tale

On Monday, we covered the first chapter of this bedtime tale of dashboard creation: a cutesy approach that made the dashboard into a straight-up reflection of our sales funnel. Last night, we followed that up with the next performance management tracking beast — a scorecard that had lots of (too much) detail and too much equality across the various metrics. Tonight’s tale is where we find a happy ending, so snuggle in, kids, and I’ll tell you about…

Version 3 – Hey…Windows Was a Total POS until 3.1…So I’m Not Feeling Too Bad!

(What’s “POS?” Um…go ask your mother. But don’t tell her you heard the term from me!)

As it turned out, versions 1 and 2, combined with some of the process evolution the business had undergone and some data visualization research and experimentation, meant that I was a week’s worth of evenings and a decent chunk of one weekend away from something that actually works:

Some of the keys that make this work:

  • Heavy focus on Few’s Tufte-derived “data-pixel ratio” — asking the question for everything on the dashboard: “If it’s not white space, does it have a real purpose for being on the dashboard?” And, only including elements where the answer is, “Yes.”
  • Recognition that all metrics aren’t equal — I seriously beefed up the most critical, end-of-the-day metrics (almost too much — there’s a plan for the one bar chart to be scaled down in the future once a couple other metrics are available)
  • The exact number of what we did six months ago isn’t important — I added sparklines (with targets when available) so that the only specific number shown is the month-to-date value for the metric; the sparkline shows how the metric has been trending relative to target
  • Pro-rating the targets — it made for formulas that were a bit hairier, but each target line now assumes linear growth over the course of the month; the target on Day 5 of a 30-day month is 1/6 of the total target for the month
  • Simplification of alerts — instead of red/yellow/green…we went to red/not red; this really makes the trouble spots jump out
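The pro-rating and red/not-red logic above boils down to very little code. Here is a sketch (the function names and numbers are mine, not from the actual workbook, where this lived in Excel formulas):

```python
from datetime import date
import calendar

def prorated_target(monthly_target: float, as_of: date) -> float:
    """Linear pro-rating: the target through day d of an n-day month
    is d/n of the full monthly target."""
    days_in_month = calendar.monthrange(as_of.year, as_of.month)[1]
    return monthly_target * as_of.day / days_in_month

def is_red(actual: float, monthly_target: float, as_of: date) -> bool:
    """Binary red/not-red alert: red only when the month-to-date
    actual trails the pro-rated target."""
    return actual < prorated_target(monthly_target, as_of)

# Day 5 of a 30-day month -> 1/6 of the monthly target
print(prorated_target(600, date(2009, 6, 5)))  # 100.0
print(is_red(80, 600, date(2009, 6, 5)))       # True: behind pace
print(is_red(120, 600, date(2009, 6, 5)))      # False: on pace
```

The payoff of the pro-rating is that an alert on Day 5 means “behind pace,” not “hasn’t hit the full-month number yet.”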

Even as I was developing the dashboard, a couple of things clued me in that I was on a good track:

  • I saw data that was important…but that was out of whack or out of date; this spawned some investigations that yielded good results
  • As I circulated the approach for feedback, I started getting questions about specific peaks/valleys/alerts on the dashboard – people wound up skipping the feedback about the dashboard design itself and jumping right to using the data

It took a couple of weeks to get all of the details ironed out, and I took the opportunity to start a new Access database. The one I had been building on for the past year still works and I still use it, but I’d inadvertently built in clunkiness and overhead along the way. Starting “from scratch” was essentially a minor re-architecting of the platform…but in a way that was quick, clean and manageable.

My Takeaways

Looking back, and telling you this story, has given me a chance to reflect on what the key learnings are from this experience. In some cases, the learning has been a reinforcement of what I already knew. In others, they were new (to me) ideas:

  • Don’t Stop after Version 1 — obviously, this is a key takeaway from this story, but it’s worth noting. In college, I studied to be an architect, and a problem that I always had over the course of a semester-long design project was that, while some of my peers (many of whom are now successful practicing architects) wound up with designs in the final review that looked radically different from what they started with, I spent most of the semester simply tweaking and tuning whatever I’d come up with in the first version of my design. At the same time, these peers could demonstrate that their core vision for their projects was apparent in all designs, even if it manifested itself very differently from start to finish. This is a useful analogy for dashboard design — don’t treat the dashboard as “done” just because it’s produced and automated, and don’t consider it a “win” simply because it delivered value. It’s got to deliver the value you intended, and deliver it well, to truly be finished…and then the business can and will evolve, which will drive further modifications.
  • Democratizing Data Visualization Is a “Punt” — in both of the first two dashboards, I had a single visualization approach and I applied that to all of the data. This meant that the data was shoe-horned into whatever that paradigm was, regardless of whether it was data that mattered more as a trend vs. data that mattered more as a snapshot, whether it was data that was a leading indicator  vs. data that was a direct reflection of this month’s results, or whether the data was a metric that tied directly to the business plan vs. data that was “interesting” but not necessarily core to our planning. The third iteration finally broke out of this framework, and the results were startlingly positive.
  • Be Selective about Detailed Data — especially in the second version of the scorecard, we included too much granularity, which made the report overwhelming. To make it useful, the consumers of the dashboard needed to actually take the data and chart it. One of the worst things a data analyst can do is provide a report that requires additional manipulation to draw any conclusions.
  • Targets Matter(!!!) — I’ve mounted various targets-oriented soapboxes in the past, but this experience did nothing if it didn’t shore up that soapbox. The second and third iterations of the dashboard/scorecard included targets for many of the metrics, and this was useful. In some cases, we missed the targets so badly that we had to go back and re-set them. That’s okay. It forced a discussion about whether our assumptions about our business model were valid. We didn’t simply adjust the targets to make them easier to hit — we revisited the underlying business plan based on the realities of our business. This spawned a number of real and needed initiatives.

Will There Be Another Book in the Series?

Even though I am pleased with where the dashboard is today, the story is not finished. Specifically:

  • As I’ve alluded to, there is some missing data here, and there are some process changes in our business that, once completed, will drive some changes to the dashboard; overall, they will make the dashboard more useful
  • As much of a fan as I am of our Excel/Access solution…it has its limitations. I’ve said from the beginning that I was doing functional prototyping. It’s built well enough with Access as a poor man’s operational data store and Excel as the data visualization engine that we can use this for a while…but I also view it as being the basis of requirements for an enterprise BI tool (in this regard, it jibes with a parallel initiative that is client-facing for us). Currently, the dashboard gets updated with current data when either the Director of Finance or I check it out of Sharepoint and click a button. It’s not really a web-based dashboard, it doesn’t allow drilling down to detailed data, and it doesn’t have automated “push” capabilities. These are all improvements that I can’t deliver with the current platform.
  • I don’t know what I don’t know. Do you see any areas of concern or flaws with the iteration described in this post? Have you seen something like this fail…or can you identify why it would fail in your organization?

I don’t know when this next book will be written, but you’ll read it here first!

I hope you’ve enjoyed this tale. Or, if nothing else, it’s done that which is critical for any good bedtime story: it’s put you to sleep!  🙂

Presentation, Reporting

Dashboard Design Part 2 of 3: An Iterative Tale

Yesterday, I described my first shot at developing a weekly corporate dashboard for my current company. It was based on the concept of the sales funnel and, while a lot of good came out of the exercise…it was of no use as a corporate performance management tool.

Tonight’s bedtime story will be chapter 2, where the initial beast was slain and a new beast was created in its place. Gather around, kids, and we’ll explore the new and improved beast…

Version 2: A Partner in Crime and a Christmas Tree Scorecard

Several months after the initial dashboard had died an abrupt and appropriate death, we found ourselves backing into looking at monthly trends on a regular basis for a variety of areas of the business. I was involved, as was our Director of Finance. I honestly don’t remember exactly how it happened, but a soft decree hit both of us that we needed to be circulating that data amongst the management team on a weekly basis.

Now, several very positive things had happened by this point that made the task doable:

  • We’d rolled into a new year, and the budgeting and planning that led up to the new year led to a business plan with more specific targets being set around key areas of the business
  • We had cleaned up our processes — the reality of them rather than simply the theory; they were still far from perfect, but they had moved in the right direction to at least have some level of consistency
  • We had achieved greater agreement/buy-in/understanding that there was underlying and necessary complexity in our business, both our business model and our business processes

Although I would still say we failed, we at least failed forward.

As I recall, the Director of Finance took a first cut at the new scorecard, as he was much more in the thick of things when it came to providing the monthly data to the executive team. I then spent a few evenings filling in some holes and doing some formatting and macro work so that we had a one-page scorecard that showed rolling month-to-month results for a number of metrics. These metrics still flowed loosely from the top to the bottom of a marketing and sales funnel:

Some things we did right:

  • Our IT organization had been very receptive to my “this is a nuisance”-type requests over the preceding months and had taken a number of steps to make much of the data more accessible to me much more efficiently (my “data update” routine dropped from taking my computer over an hour to complete to taking under 5 minutes); “my” data for the scorecard was still pulled from the same underlying Access database, but it was pulled using a whole new set of queries
  • We incorporated a more comprehensive set of metrics — going beyond simply Sales and Marketing metrics to capture some key Operations data
  • We accepted that we needed to pull some data from the ERP system — the Director of Finance would handle this and had it down to a 5-minute exercise on his end
  • Because we had targets for many of the metrics, we were able to use conditional formatting to highlight what was on track and what wasn’t. And, we added a macro that would show/hide the targets to make it easy to reduce the clutter on the scorecard (although it was still cluttered even with the targets hidden)
  • We reported historical data — the totals for each past month, as well as the color-coding of where that month ended up relative to its target.
  • We allowed a few metrics that did not have targets set — offending my purist sensibilities, and, honestly, this was the least useful data, but it was appropriate to include in some cases.

We even included limited “drilldown” capability — hyperlinks next to different rows in the scorecard (not shown in the image above) that, when clicked, jumped to another worksheet that had more granular detail.

But the scorecard was still a failure.

We found ourselves updating it once a week and pulling it up for review in a management meeting…and increasingly not discussing it at all. As a matter of fact, just how abstract-but-not-useful this weekly exercise had become really hit home when we got to version 3…and quickly realized how much of the data we had let lapse when it came to updates.

So, what was wrong with it? Several things:

  • Too much detailed data — because we had forsaken graphical elements almost entirely, we were able to cram a lot of data into a tabular grid. We found ourselves including some metrics to make the scorecard “complete” simply because we could — for instance, if we included total leads and, as a separate metric, leads who were entirely new to the company, then, for the sake of symmetry, we included the number of leads for the month who were already in our database: new + existing = total. This was redundant and unnecessary
  • We treated all of the metrics the same — everything was represented as a monthly total, be it the number of leads generated, the number of opportunities closed, the amount of revenue booked, or the headcount for the company; we didn’t think about what really made sense — we just presented it all equally
  • No pro-rating of the targets — we had a simple red/yellow/green scheme for the conditional formatting alerts, but we compared the actuals for each metric to the total targets for the month; this meant that, for the first half of the month, virtually every metric was in the red

Pretty quickly, I saw that version 2 represented some improvements from version 1, but, somehow, wasn’t really any better at helping us assess the business.

At that point, we fell into a pretty common trap for data analysts: once a report has stabilized, we find a way to streamline its production and automate it as much as possible simply to remove the tedium of its creation. I’ve got countless examples from my own experience where a BI or web analytics tool has the ability to automate the creation and e-mailing of reports. Once it’s automated, the cost to produce it each day/week/month goes virtually to zero, so there is no motivation to go back and ask, “Is this of any real value?” Avinash Kaushik calls this being a “reporting squirrel” (see Rule #3 in his post Six Rules for Creating A Data-Driven Boss) or a “data puke” (see Filter #1 in his post Consultants, Analysts: Present Impactful Analysis, Insightful Reports), and it’s one of the worst places to find yourself.

Even though I was semi-aware of what had happened, the truth is that we would likely still be cruising along producing this weekly scorecard save for two things:

  • What was acceptable for internal consumption was not acceptable for the reports we provided to our clients. The other almost-full-time analyst in the company and I had embarked on some aggressive self-education when it came to data visualization best practices; we started trolling The Dashboard Spy site, we read some Stephen Few, we poked around in the new visualization features of Excel 2007, and generally started a vigorous internal effort to overhaul the reporting we were providing to our clients (and to ourselves as our own clients)
  • The weekly meeting where the managers reviewed the scorecard got replaced with an “as-needed” meeting, with the decision that the scorecard would still be prepared and presented weekly…to the entire company

So, what really happened was that fear of being humiliated internally spurred another hasty revision of the scorecard…and its evolution into more of a dashboard.

And that, kids, will be the subject of tomorrow’s bedtime tale. But, as you snuggle under your comforter and burrow your head into your pillow, think about the approach I’ve described here. Do you use something similar that actually works? If so, why? What problems do you see with this approach? What do you like?

Presentation, Reporting

Dashboard Design Part 1 of 3: An Iterative Tale

One of my responsibilities when I joined my current company was to institute some level of corporate performance management through the use of KPIs and a scorecard or dashboard. It’s a small company, and it was a fun task. In the end, it took me over a year to get to something that really seems to work. On the one hand, that’s embarrassing. On the other hand, it was a side project that never got a big chunk of my bandwidth. And, like many small companies, we have been fairly dynamic when it comes to nailing down and articulating the strategies we are using to drive the company.

Looking back, there have been three very distinct versions of the corporate scorecard/dashboard. What drove them, what worked about them, and what didn’t work about them, makes for an interesting story. So gather around, children, and I will regale you with the tale of this sordid adventure. Actually, we don’t have time to go through the whole story tonight, so we’ll hit one chapter a day for the next three days.

If you want to click on your flashlight and pull the covers over your head and do a little extra reading after I turn off the light, Avinash Kaushik has a recent post that was timely for me to read as I worked up this bedtime tale: Consultants, Analysts: Present Impactful Analysis, Insightful Reports. The post has the seven “filters” Avinash developed as he judged a WAA competition, and it’s a bit skewed towards web analytics reporting…but, as usual, it’s pretty easy to extrapolate his thoughts to a broader arena. The first iteration of our corporate dashboard would have gotten hammered by most of his filters. Where we are today (which we’ll get to in due time), isn’t perfect, but it’s much, much better when assessed against these filters.

One key piece of background here is that the technology I’ve had available to me throughout this whole process does not include any of the big “enterprise BI” tools. All three of the iterations were delivered using Excel 2003 and Access 2003, with some hooks into several different backend systems.

That was fine with me for a couple of reasons:

  • It allowed me to produce and iterate on the design quickly and independently – I didn’t need to pull in IT resources for drawn-out development work
  • It was cheap – I didn’t need to invest in any technology beyond what was already on my computer

So, let’s dive in, shall we?

Version 1: The “Clever” Approach As I Learned the Data and the Business

I rolled out the first iteration of a corporate dashboard within a month of starting the job. I took a lot of what I was told about our strategy and objectives at face value and looked at the exercise as being a way to cut my teeth on the company’s data, as well as a way to show that I could produce.

The dashboard I came up with was based on the sales funnel paradigm. We had clearly defined and deployed stages (or so I thought) in the progression of a prospect from the point of being simply a lead all the way through being an opportunity and becoming revenue. We believed that what we needed to keep an eye on week to week was pretty simple:

  • How many people were in each stage
  • How many had moved from one stage to another

We had a well-defined…theoretical…sales funnel. We had Marketing feeding leads into that funnel. Sure, the data in our CRM wasn’t perfect, but by reporting off of it, we would drive improvements in the data integrity by highlighting the occasional wart and inconsistency. Right…?

I crafted the report below. Simply put, the numbers in each box represented the number of leads/opportunities at that stage of our funnel, and the number in each arrow between a box represented the number who had moved from one box to another over the prior week.

High fives all around!

Except…

It became apparent almost immediately that the report was next to useless when it came to its intended purpose:

  • It turned out that our theoretical funnel really didn’t match reality — our funnel had all sorts of opportunities entering and exiting mid-funnel…and there was generally a reasonable explanation each time that happened.
  • There were no targets for any of these numbers – I’d quietly raised this point up front, but was rebuffed with the even-then familiar refrain: “We can’t set a target until we look at the data for a while.” But…no targets were ever set. Partly because…
  • “Time” was poorly represented – the arrows represented a snapshot of movement over the prior week…but no trending information was available
  • Much of the data didn’t “match” the data in the CRM – while the data was coming from the underlying database tables in the CRM, I had to do some cleanup and massaging to make it truly fit the funnel paradigm. Between that and the fact that I was only refreshing my data once/week, a comparison of a report in the CRM to my weekly report invariably invited questions as to why the numbers were different. I could always explain why, and I was always “right,” and it wasn’t exactly that people didn’t trust my report…but it just made them question the overall point a little bit more.
  • I had access to the data in some of our systems…but not all of them; most importantly, our ERP system was not something that had data that was readily accessible either through scheduled report exports or an ODBC connection; and, at the end of the day…that’s where several of our KPIs (in reality…if not named as such) lived; back to my first point, there were theoretical ways to get financial data out of our CRM…but, in practice, there was often a wide gulf between the two.

As I labored to address some of these issues, I wound up with several versions of the report that, tactically, did a decent job…but made the report more confusing.

The sorts of things I tried included:

  • Adding arrows and numbers that would conditionally appear/disappear in light gray that showed non-standard entries/exits from the funnel
  • Adding information within each box to indicate how it compared to the prior week (still not a “trend,” but at least a week-over-week comparison)
  • Adding moving averages for many of the numbers
  • Adding a total for the prior 12 weeks for many of the numbers
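In spirit, those add-ons (week-over-week comparison, moving averages, trailing totals) were simple arithmetic on the weekly series. A sketch with made-up numbers:

```python
# Weekly counts for one funnel stage (illustrative numbers only).
weekly = [34, 41, 38, 45, 52, 47, 44, 51, 49, 55, 53, 58, 61]

# Week-over-week comparison: this week vs. last week.
wow_change = weekly[-1] - weekly[-2]

# Trailing 4-week moving average, to smooth the noise.
moving_avg = sum(weekly[-4:]) / 4

# Total for the prior 12 weeks.
twelve_week_total = sum(weekly[-12:])

print(wow_change)         # 3
print(moving_avg)         # 56.75
print(twelve_week_total)  # 594
```

Each number is easy to compute; the problem described above wasn’t the arithmetic, it was that layering all of them onto one funnel diagram added clutter faster than it added insight.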

All told, I had five different iterations on this concept — each time taking feedback as to what it was lacking or where it was confusing and trying to address it.

To no avail.

Even as I look back on the different iterations now, it’s clear that each iteration introduced as many new issues as it addressed existing ones.

Still, some real good had come of the exercise:

  • I understood the data and our processes quite well — tracking down why certain opportunities behaved a certain way gave me a firehose sip of knowledge about our internal sales processes
  • With next to zero out-of-pocket technology investment, I’d built a semi-automated process for aggregating and reporting the data — I had to run a macro in MS Access that took ~1 hour (it was pulling data across the Internet from our SaaS CRM) and then do a “Refresh All” in Excel; I still had a little bit of manual work each week, so it took me ~30 minutes each time I produced the report
  • I’d built some credibility and trust with IT — as I dove in to try to understand the data and processes, I was quickly asking intelligent questions and, on occasion, uncovering minor system bugs

Unfortunately, none of these were really the primary intended goal of the dashboard. The report really just wasn’t of much use to anyone. This came to a head one afternoon after I’d been dutifully producing it each week (and scratching my head as to what it was telling me) when the CEO, in a fit of polite but real pique, popped off, “You know…nobody actually looks at this report! It doesn’t tell us anything useful!” To which I replied, “I couldn’t agree more!” And stopped producing it.

A few months passed, and I focused more of my efforts on helping clean up our processes and doing various ad hoc analyses — using the knowledge and technology I had picked up through the initial dashboard development, most assuredly…but the idea of a dashboard/scorecard migrated to the back burner.

Tomorrow, kiddies, as I tuck you in at night, I’ll tell the tale of Version 2 — a scorecard with targets! As you drift off to sleep though, ponder this version. What would you have done differently? What problems with it do you see? Is there anything that looks like it holds promise?

Analysis, Presentation, Reporting

The "Action Dashboard" — Avinash Mounts My Favorite Soapbox

Avinash Kaushik has a great post today titled The “Action Dashboard” (An Alternative to Crappy Dashboards). As usual, Avinash is spot-on with his observations about how to make data truly useful. He provides a pretty interesting 4-quadrant dashboard framework (as a transitional step to an even more powerful dashboard). I’ve gotten red in the face more times than I care to count when it comes to trying to get some of the concepts he presents across. It’s a slow process that requires quite a bit of patience. For a more complete take on my thoughts, check out my post over on the Bulldog Solutions blog.

And, yes, I’m posting here and pointing to another post that I wrote on a completely different blog. We’ve recently re-launched the Bulldog Solutions blog — new platform, and, we hope, with a more focused purpose and strategy. What I haven’t fully worked out yet is how to determine when to post here and when to post there…and when to post here AND there (like this post).

It may be that we find out that we’re not quite as ready to be as transparent as we ought to be over on the corporate blog, in which case this blog may get some posts that are more “my fringe opinion” than will fly on the corporate blog. I don’t know. We’ll see. I know I’m not the first person to face the challenge of contributing to multiple blogs (I’ve also got my wife’s and my personal blog…but that one’s pretty easy to carve off).