An Aspirational Report Structure
The life of an analyst invariably includes responsibility for some set of recurring reports: daily, weekly, monthly, and even quarterly. I hate reports. Or, to be more precise, I have come to hate the term “report.” (I’ve also developed something of an aversion to the term “analysis,” but that’s a topic for another day.)
To make a bold claim: corporate cultures force employees to develop cognitive dissonance when it comes to recurring reports:
- We believe we need to get them, that they’re supposed to be lengthy, that emailing them as PowerPoint presentations makes sense, and that we’re supposed to use them to drive the business forward
- We often don’t actually look at them when they arrive in our inboxes, and, when we do, we simply scroll through each slide, look at it long enough to know what it’s saying, and then close the file and get on with our lives
It’s cognitive dissonance because we believe we need something…even though, in practice, it’s not something we get much value out of.
Occasionally, this cognitive dissonance actually bubbles up into our consciousness — unrecognized for what it is — and results in a conversation with the analyst or the analytics manager that is, essentially, a single statement:
“I get this report each month, and it has the data I asked for in it, but it doesn’t include any meaningful insights or recommendations that actually help me drive the business.”
I’ve seen or heard a statement to this effect more times than I can count. I’ve also seen the typical reaction: the report gets longer! Analysts are pleasers by nature. If we hear “the report isn’t working,” and that’s articulated as an “it’s missing something” critique (missing insights, missing recommendations), then the “obvious” way to fix it is to “add the missing stuff.”
In my mind, I look like this when this happens (apologies to any kiddos reading this who don’t get the Susan Powter reference):
Performance Measurement vs. Hypothesis Validation
I’ve been beating a fairly persistent drum over the past few years, and it’s a pretty simple rhythm. But it’s also, apparently, a somewhat radical departure from long-established corporate norms that have not evolved, even though the tools we have for analytics have advanced dramatically.
This tune I’ve been pounding out has an easy, if not particularly melodic, claim as its chorus: there are fundamentally two (and only two) ways that analytics can actually be used to drive a business forward: performance measurement and hypothesis validation.
There’s a temptation to look at this framing and make the leap that “performance measurement” is “reporting” and that “hypothesis validation” is “analysis.” I’d be fine with that leap…if that were how most businesspeople actually defined the words. And they don’t.
Consider the following requests or statements made every day in businesses around the world:
- “Be sure to include insights and recommendations in the monthly report!”
- “The report must include analysis of what happened.”
- “Can you analyze the results of the recent campaign and tell me if it was successful?”
- “I need a weekly insights report for the web site.”
Ick. Ugh. And phooey! These all represent a conflation of performance measurement and hypothesis validation. I hate the fact that both “hypothesis” and “validation” are four-syllable, fancy-pants-sounding words, but it’s hard to argue that they’re not more descriptive and precise than “analysis.”
My Dreamy-Dream State of Information Delivery
Let me wave my magic wand and describe how I believe analytics stuff should be delivered. It’s pretty simple:
- The only recurring reports are single-screen dashboards with KPIs (with targets) organized under clearly worded business goals. They are automated or semi-automated and are pushed out to stakeholders with minimal latency on whatever schedule makes sense (a rough automation sketch follows this list). If it’s a weekly report that runs Sunday through Saturday, and the recipient actively uses email, it hits their inbox (embedded in the email body — NOT as an attachment) on Sunday morning at 12:01 AM. It’s got clear indicators for each KPI as to whether the metric is on track or not. It does not include any qualitative commentary, insights, or recommendations (see my last post).
- Additional information is delivered in an ad hoc fashion and is driven entirely by what hypotheses were prioritized for validation, when they were validated, and who actually cares about the results. The core “results” are delivered on no more than five slides or pages (and, ideally, fit on 1 or 2). When appropriate, there is a supplemental appendix, a separate detailed writeup that goes into the methodology and detailed results, and/or a reasonably cleanly formatted spreadsheet with the deeper data.
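To make the “pushed out with minimal latency, embedded in the email body” idea concrete, here is a minimal sketch of what that automation could look like. It is only a sketch, and the specifics are my assumptions, not anything prescribed above: it presumes the dashboard has already been rendered to a PNG, the SMTP host, sender, and recipient addresses are placeholders, and the Sunday 12:01 AM schedule would come from cron or whatever job scheduler you already use.

```python
# Minimal sketch: push a rendered KPI dashboard embedded in the email body.
# Assumptions (placeholders, not from the post): dashboard.png already exists,
# and SMTP_HOST / SENDER / RECIPIENTS are stand-ins for real values.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

SMTP_HOST = "smtp.example.com"              # placeholder
SENDER = "analytics@example.com"            # placeholder
RECIPIENTS = ["stakeholders@example.com"]   # placeholder distribution list


def send_dashboard(png_path: str, period_label: str) -> None:
    """Email the rendered dashboard inline (in the body, NOT as an attachment)."""
    msg = MIMEMultipart("related")
    msg["Subject"] = f"KPI dashboard: {period_label}"
    msg["From"] = SENDER
    msg["To"] = ", ".join(RECIPIENTS)

    # The HTML body references the image by Content-ID so it renders inline.
    html = (
        f"<p>Performance measurement dashboard for {period_label}. "
        "Reply with any questions or comments.</p>"
        '<img src="cid:dashboard">'
    )
    msg.attach(MIMEText(html, "html"))

    with open(png_path, "rb") as f:
        img = MIMEImage(f.read())
    img.add_header("Content-ID", "<dashboard>")
    img.add_header("Content-Disposition", "inline", filename="dashboard.png")
    msg.attach(img)

    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    # A cron entry of "1 0 * * 0" would fire this at 12:01 AM every Sunday.
    send_dashboard("dashboard.png", "the week of Sunday through Saturday")
```

The point isn’t the particular libraries; it’s the delivery pattern: the dashboard arrives in the body of the email, with nothing to download or open, on a schedule with essentially no latency.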
“OMG, Tim! Are you actually saying that the only regularly scheduled reporting we should be doing are 1-page dashboards? And, are you saying that we should never be presenting more than 5 slides of information when we present analysis results?”
Yup. That’s my dreamy dream world.
But, sadly, although it’s plain common sense — both for saving analysts time and for preventing narcolepsy-by-PowerPoint — that’s too radical a shift for most companies.
So…
A Recurring Report Structure that Deviously Moves in This Direction
Let’s get a little pragmatic. Harry Potter had less trouble taking out Voldemort than he would have had if he’d tried to drastically curtail the volume and length of corporate monthly reports.
So, let’s accept that monthly reports are here to stay for the foreseeable future. How can we make them better? Well, we have to combine performance measurement and hypothesis validation. So, first, I still say we do a low-latency, one-page, automated or quasi-automated distribution of the performance measurement dashboard. The intro to that email is pretty simple:
“Below is the performance measurement dashboard for X for <timeframe>. We are in the process of developing the full report. Let us know if you have any specific questions or comments based on the dashboard itself.”
That’s it. It’s in the stakeholders’ inboxes on the first day of the month. Everyone knows at a glance if anything has gone awry performance-wise. If something in that limited set of data is unexpected, the recipients are asked to quickly respond. In many cases, someone on the distribution list will already know the root cause (“Oh. Yeah. We turned off paid search 3 weeks into the month. We forgot to let everyone know we’d done that.”). In other cases, a KPI’s miss of its target will be cause for immediate concern, and the root cause will not be immediately known. That’s good! Better to have the alarm raised on Day 1 rather than on Day 9, when a “full report” is finally made available! The more people hypothesizing early about the root cause, the better!
So, then, what does the “full report” that goes out several days or a week later look like? Like this:
- Title slide
- Dashboard — the exact same one that was sent out on the first of the month
- A slide listing all of the questions (hypotheses — in plain English!) that were tackled during the previous month, with an indicator of whether each question was definitively answered and an indicator of whether action should be taken on it. Some of these questions may have been spawned by the initial dashboard (attempts to get to the root cause of a problem that manifested itself there). It’s a simple table; an illustrative example follows this list.
- One slide for each question that was definitively answered. The title of the slide is the question. Big, bold text underneath provides the answer. More big, bold text summarizes the recommended action (“No action warranted” is an acceptable recommended action, as long as there is a different potential answer to the question that would have led to actual action). The body of the slide is clean, clear context (including a chart or two as warranted) providing the essence of why the answer is what it is.
- A single slide of “(Preliminary) questions to be addressed this month.” Some of the questions that were not definitively answered (from slide 3) may be included here if additional work will likely support answering them.
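For illustration only, here is what that question-tracking slide (slide 3) might look like. The questions themselves are hypothetical examples I’m inventing to show the format; the structure is simply the three columns.

| Question (hypothesis) | Definitively answered? | Action warranted? |
|---|---|---|
| Did the mid-month dip in orders come from the paid search pause flagged in the dashboard? | Yes | No |
| Did the new landing page lift email sign-ups? | Yes | Yes (roll it out to the remaining campaigns) |
| Why did the mobile bounce rate spike in week 3? | Not yet | TBD |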
That’s it. It’s still pretty radical, but it can safely be labeled a “monthly report” because it has a defined structure and has multiple pages!
The last slide is “Preliminary” because, if the report is presented to stakeholders, this is the opportunity to look ahead from an analytics perspective. You have your audience’s attention (because you’ve delivered such useful information in an easy-to-consume way), and you want to now collaborate with them to make sure that you are spending your time as efficiently as possible. It quickly becomes clear that the last slide in this month’s report is the basis for the third slide in next month’s report (other questions will come in over the course of the month that will adjust the final list of questions answered), and it will help stakeholders learn that analysis isn’t merely an after-the-fact (after the end of the month) exercise — it starts by looking ahead to what the business wants to learn!
A note on one-time reports
Sometimes, a campaign runs for a short enough period of time that there is only a single “report” rather than a recurring weekly or monthly one. In these situations, the structure is nearly identical; the only change is that the “questions” in the last slide are limited to “questions to be answered through future campaigns.”
Still too radical?
What do you think? Would this report structure work in your organization? (It’s not a theoretical construct — I’ve used it with multiple clients, and it’s the direction I try to subtly evolve toward whenever I inherit some other report structure.)