3-Legged Stool of Effective Analytics: Plan, Measure, Analyze
Several weeks ago, Stéphane Hamel wrote a post that got me all re-smitten with his thought process. In the post, he postulated that there are three heads of online analytics, which he mapped to the three different skillsets needed to conduct online analytics effectively: business acumen, technical (tools) knowledge, and analysis. And, he made the claim that no one person will ever excel at all three, which led to his case for building out teams of “analysts” who have complementary strengths.
I’ve had several unrelated experiences with different clients and internal teams of late that have led me to try to capture, in a similar fashion, the three-legged stool of an online analytics program. Just as others have started tacking on additional components to Stéphane’s three skillsets, I’m sure my three-legged stool will quickly become a traditional chair…then some sort of six-legged oddity. But, I’d be thrilled if I could consistently communicate the basics to my non-analyst co-workers and clients.
I hold to a pretty strict distinction between “measurement and reporting” and “analysis,” and I firmly believe there is value in “reporting,” as long as that reporting is appropriately set up and applied.
Just as I believe that reporting should generally occur either as a one-time event (campaign wrap-up, for instance) or at regular intervals, I firmly believe that testing and analysis should not be forced into a recurring schedule. It’s fine (desirable) to be always conducting analysis, but the world of “present the results of your analysis — and your insights and recommendations therein — once/month on the first Wednesday of the month” is utterly asinine. Yet…it’s a mindset with which a depressing majority of companies operate.
Reporting Done Poorly…Which Is an Unfortunately Ubiquitous Habit
I’ve been client side. I’ve been agency side. I’ve done a decent amount of reading on human nature as it relates to organizational change. My sad conclusion:
The business world has conditioned itself to confuse “cumbersome decks of data” with “reporting done well.”
It happens again and again. And again. And…again! It goes like this:
- Someone asks for some data in a report.
- Someone else (the analyst) pulls the data.
- The data raises some additional questions, so the first person asks for more data.
- The analyst pulls more data.
- The initial requestor finds this data useful, so he/she requests that the same data be pulled on a recurring schedule.
- The analyst starts pulling and compiling the data on a regular schedule.
- The requestor starts sharing the report with colleagues. The colleagues see that the report certainly should be useful, but they’re not quite sure that it’s telling them anything they can act on. They assume that it’s because there is not enough data, so they ask the analyst to add in yet more data to the report.
- The report begins to grow.
- The recipients now have a very large report to flip through, and, frankly, they don’t have time month in and month out to go through it. They assume their colleagues are, though, so they keep their mouths shut so as to not advertise that the report isn’t actually helping them make decisions. Occasionally, they leaf through it until they see something that spikes or dips, and they casually comment on it. It shows that they’re reading the report!
- No one tells the analyst that the report has grown too cumbersome, because they all assume that the report must be driving action somewhere. After all, it takes two weeks of every month to produce, and no one else is speaking up that it is too much to manage or act on!
- The analyst (now a team of analysts) and the recipients gradually move on to other jobs at other companies. At this point, they’re conditioned that part of their job is to produce or receive cumbersome piles of data on a regular basis. Over time, it actually seems odd to not be receiving a large report. So, if someone steps up and asks the naked emperor question: “How are you using this report to actually make decisions and drive the business?”…well…that’s a threatening question indeed!
In the services industry, there is the concept of a “facilitating good.” If you’re selling brainpower and thought, the theory goes, and you’re billing out smart people at a hefty rate, then you damn well better leave behind a thick binder of something to demonstrate that all of that knowledge and consultation was more than mere ephemera!
And, on the client side, if the last 6 consultancies and agencies that you worked with all diligently delivered 40-slide PowerPoint decks or 80-page reports, then, by golly, you’re going to look askance at the consultant who shows up and aims for actionable concision!
Nonetheless, I will continue my quixotic quest to bring sanity to the world. So, on to the three legs of my analytics stool…
First, Plan (Dammit!!!)
Get a room full of experienced analysts together and ask them where any good analytics program or initiative starts, and you’ll get a unanimous response that it starts: 1) at the beginning of the initiative, and 2) with some form of rigorous planning.
The most critical question to answer during analytics planning is: “How are we going to know if we’re successful?” Of course, you can’t answer that question if you haven’t also answered the question: “What are we trying to accomplish?” Those are the two questions that I wrote about in this Getting to Great KPIs post.
Of course, there are other components of analytics planning:
- Where will the data come from that we’ll use?
- What other metrics — beyond the KPIs — will we need to capture?
- What additional data considerations need to be factored into the effort to ensure that we are positioned for effective analysis and optimization down the road?
- What (if any) special tagging, tracking, or monitoring do we need to put into place (and who/how will that happen)?
- What are the known limitations of the data?
- What are our assumptions about the effort?
- …and more
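For teams that want to make that planning concrete, here is a minimal sketch of what capturing the answers in a structured form might look like. Everything in it is hypothetical: the objective, the KPI names and targets, the data sources, and the caveats are placeholders, not a prescription.

```python
# A hypothetical measurement plan captured as plain data.
# All names, targets, sources, and caveats below are illustrative placeholders.
measurement_plan = {
    "objective": "Increase qualified leads from organic search",
    "kpis": [
        {"name": "lead_form_submissions", "target": 250, "period": "monthly"},
        {"name": "organic_visit_to_lead_rate", "target": 0.03, "period": "monthly"},
    ],
    "supporting_metrics": ["organic_visits", "landing_page_bounce_rate"],
    "data_sources": ["web analytics tool", "CRM export"],
    "tagging_needs": ["campaign tracking parameters on all promoted links"],
    "known_limitations": ["lead source is self-reported for a chunk of CRM records"],
    "assumptions": ["no major site redesign during the measurement window"],
}
```

Even if no code ever touches this structure, writing the plan down at this level of specificity forces the “How are we going to know if we’re successful?” conversation to happen before the data starts flowing.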
In my experience both agency-side and client-side, this step regularly gets skipped like it’s a smooth, round rock in the hand of an adolescent male standing on the shore of a lake on a windless day.
An offshoot of the planning is the actual tagging/tracking/monitoring configuration…but I consider that an extension of the planning, as it may or may not be required, depending on the nature of the initiative.
Next, Measure and Report
Yup. Measurement’s important. That’s how you know if you’re performing at, above, or below your KPI targets.
Here’s where I start to get into debates, both inside the analytics industry and outside. I strongly believe that it is perfectly acceptable to deliver reports without accompanying insights and analysis. Ideally, reports are automated. If they’re not automated, they’re produced damn quickly and efficiently.
Dashboards — the most popular form of reports — have a pretty simple purpose: provide an at-a-glance view of what has happened since the last update and ensure that any anomalies jump out. More often than not, there won’t be anomalies, so there is nothing that needs to be analyzed based on the report! That’s okay!
I was discussing this concept with a co-worker recently, and, in response to my claim that reports should simply get delivered with minimal latency and, at best, a note that says, “Hey, I noticed this apparent anomaly that might be important. I’m going to look into it, but if you (recipient) have any ideas as to what might be going on, I’d love to get your thoughts,” she responded:
I think this makes sense, but wouldn’t we provide some analysis as to the “why” on the monthly reports?
I immediately went to the “dashboard in your car” analogy (I know — it breaks down on a lot of fronts, but it works here) with my response:
You don’t look at your fuel gauge when you get in the car every day and ask, “Why is the needle pointing where it is?” You take a quick look, make sure it’s not pegged on empty, and then go about your day.
That’s measurement. It may spawn analysis, but, often, it does not. And that’s to be expected!
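To make the “anomalies jump out” idea a bit more tangible, here is a minimal sketch of the kind of automated check that could ride along with a recurring report. It is an assumption-laden illustration: the z-score threshold and the monthly conversion-rate numbers are made up, and a real implementation would pull the history from whatever reporting platform is in play.

```python
from statistics import mean, stdev

def flag_anomaly(history, latest, z_threshold=2.0):
    """Flag the latest KPI value if it falls more than z_threshold
    standard deviations from the mean of the historical values."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical monthly conversion rates; most months, nothing fires.
prior_months = [0.031, 0.029, 0.033, 0.030, 0.032, 0.028]
this_month = 0.022

if flag_anomaly(prior_months, this_month):
    print("Apparent anomaly -- worth a note and a follow-up look.")
else:
    print("Within normal range -- ship the report; no analysis required.")
```

Most periods, a check like this stays quiet, the report goes out with minimal latency, and nobody is forced to manufacture an insight.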
Which Brings Us to Testing and Analysis
Analysis requires (or, at least, is much more likely to yield value efficiently when it follows) solid planning and KPI-centric measurement. But, the timing of analysis shouldn’t be forced into a fixed schedule.
When it comes to timing, here’s the crux of the biscuit: sometimes, the best way to answer a business question is through analyzing historical data. Sometimes, the best way to answer a question is through go-forward testing. Sometimes, it’s a combination of the two (develop a theory based on the historical data, but then test it by making a change in the future and monitoring the results). Sometimes the analysis can be conducted very quickly. Other times, the analysis requires a large chunk of analyst time and may take days or weeks to complete.
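As one illustration of the go-forward testing path, here is a minimal sketch of checking whether a change actually moved a conversion rate. The visit and conversion counts are invented, and a real test program would add proper experiment design, sample-size planning, and guardrails; this just shows the shape of the question being answered.

```python
from math import sqrt

def two_proportion_z(conv_a, visits_a, conv_b, visits_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    return (p_b - p_a) / se

# Hypothetical counts: control experience vs. the changed experience.
z = two_proportion_z(conv_a=180, visits_a=6000, conv_b=231, visits_b=6000)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at the 95% level
```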
Facilitating collaboration with the various stakeholders and managing the analysis projects (multiple analyses in flight at once, starting and concluding asynchronously based on each effort’s unique nature) can absolutely fall under the purview of the analyst (again referencing Stéphane’s post, this should be an analyst with a strong “head” for business acumen).
In Conclusion…(I promise!)
There is a fundamental flaw in any approach to using data that attempts to bundle scheduled reporting with analysis. It forces efforts to find “actionable insights” in a context where there may very well be none. And, it perpetuates an assumption that it’s simply a matter of pointing an analyst at data and waiting for him/her to find insights and make recommendations.
I’ve certainly run into business users who flee from any effort to engage directly when it comes to analytics. They hide behind their inboxes, lobbing notes like, “You’re the analyst. YOU tell me what my business problem is and make recommendations from your analysis!” I’m sure some of these users had one too many (and one is “too many”) interactions with an analyst who wanted to explain the difference between a page view and a visit, or who wanted to collaboratively sift through a 50-page deck of charts and tables. That’s not good, and that analyst should be flogged (unless he/she is less than two years out of college and can claim to have not known any better). But, using data to effectively inform decisions is a collaborative effort. It needs to start early (planning), it needs to have clear, concise performance measurement (KPI-driven dashboards), and it needs to have flexibility to drive the timing and approach of analyses that deliver meaningful results.