Gilligan's Unified Theory of Analytics (Requests)
The bane of many analysts’ existence is that they find themselves in a world where the majority of their day is spent on the receiving end of a steady flow of vague, unfocused, and misguided requests:
“I don’t know what I don’t know, so can you just analyze the traffic to the site and summarize your insights?”
“Can I get a weekly report showing top pages?”
“I need a report from Google Analytics that tells me the gender breakdown for the site.”
“Can you break down all of our metrics by: new vs. returning visitors, weekend vs. weekday visitors, working hours vs. non-working hours visitors, and affiliate vs. display vs. paid search vs. organic search vs. email visitors? I think there might be something interesting there.”
“Can you do an analysis that tells me why the numbers I looked at were worse this month than last?”
“Can you pull some data to prove that we need to add cross-selling to our cart?”
“We rolled out a new campaign last week. Can you do some analysis to show the ROI we delivered with it?”
“What was traffic last month?”
“I need to get a weekly report with all of the data so I can do an analysis each week to find insights.”
The list goes on and on. And, in various ways, they’re all examples of well-intended requests that lead us down the Nefarious Path to Reporting Monkeydom. It’s not that the requests are inherently bad. The issue is that, while they are simple to state, they often lack context and lack focus as to what value fulfilling the request will deliver. That leads to the analyst spending time on requests that never should have been worked on at all, making risky assumptions as to the underlying need, and over-analyzing in an effort to cover all possible bases.
I’ve given this a lot of thought for a lot of years (I’m not exaggerating — see the first real post I wrote on this blog almost six years ago…and then look at the number of navel-gazing pingbacks to it in the comments). And, I’ve become increasingly convinced that there are two root causes for not-good requests being lobbed to the analytics team:
- A misperception that “getting the data” is the first step in any analysis — a belief that surprising and actionable insights will pretty much emerge automagically once the raw data is obtained.
- A lack of clarity on the different types and purposes of analytics requests — this is an education issue (and an education that has to be 80% “show” and 20% “tell”).
I think I’m getting close to some useful ways to address both of these issues in a consistent, process-driven way (meaning analysts spend more time applying their brainpower to delivering business value!).
Before You Say I’m Missing the Point Entirely…
The content in this post is, I hope, what this blog has apparently gotten a reputation for — it’s aimed at articulating ideas and thoughts that are directly applicable in practice. So, I’m not going to touch on any of the truths (which are true!) that are more philosophical than directly actionable:
- Analysts need to build strong partnerships with their business stakeholders
- Analysts have to focus on delivering business value rather than just delivering analysis
- Analysts have to stop “presenting data” and, instead, “effectively communicate actionable data-informed stories.”
All of these are 100% true! But, that’s a focus on how the analyst should develop their own skills, and this post is more of a process-oriented one.
With that, I’ll move on to the three types of analytics requests.
Hypothesis Testing: High Value and SEXY!
Hands-down, testing and validation of hypotheses is the sexiest and, if done well, highest value way for an analyst to contribute to their organization. Any analysis — regardless of whether it uses A/B or multivariate testing, web analytics, voice of the customer data, or even secondary research — is most effective when it is framed as an effort to disprove or fail to disprove a specific hypothesis. This is actually a topic I’m going to go into a lot of detail (with templates and tools) on during one of the eMetrics San Francisco sessions I’m presenting in a couple of weeks.
The bitch when it comes to getting really good hypotheses is that “hypothesis” is not a word that marketers jump up and down with excitement over. Here’s how I’m starting to work around that: by asking business users to frame their testing and analysis requests in two parts:
Part 1: “I believe…[some idea]”
Part 2: “If I am right, we will…[take some action]”
This construct does a couple of things:
- It forces some clarity around the idea or question. Even if the requestor says, “Look. I really have NO IDEA if it’s ‘A’ or ‘B’!” you can respond with, “It doesn’t really matter. Pick one and articulate what you will do if that one is true. If you wouldn’t do anything different if that one is true, then pick the other one.”
- It forces a little bit of thought on the part of the requestor as to the actionability of the analysis.
And…it does this in plain, non-scary English.
So, great. It’s a hypothesis. But, how do you decide which hypotheses to tackle first? Prioritization is messy. It always is and it always will be. Rather than falling back on the simplistic theory of “effort and expected impact” for the analysis, how about tackling it with a bit more sophistication:
- What is the best approach to testing this hypothesis (web analytics, social media analysis, A/B testing, site survey data analysis, usability testing, …)? That will inform who in your organization would be best suited to conduct the analysis, and it will inform the level of effort required.
- What is the likelihood that the hypothesis will be shown to be true? Frankly, if someone is on a fishing expedition and has a hypothesis that making the background of the home page flash in contrasting colors will somehow improve results…common sense would say, “That’s a dumb idea. Maybe we don’t need to prove it if we have hypotheses that our experience says are probably better ones to validate.”
- What is the likelihood that we actually will take action if we validate the hypothesis? You’ve got a great hypothesis about shortening the length of your registration form…but the registration system is so ancient and fragile that any time a developer even tries to check the code out to work on it, the production code breaks. Or…political winds are blowing such that, even if you prove that always having an intrusive splash page pop up when someone comes to your home page is hurting the site…it’s not going to change.
- What will be the effort (time and resources) to validate the hypothesis? Now, you damn well better have nailed down a basic approach before answering this. But, if it’s going to take an hour to test the hypothesis, even if it’s a bit of a flier, it may be worth doing. If it’s going to take 40 hours, it might not be.
- What is the business value if this hypothesis gets validated (and acted upon)? This is the “impact” one, but I like “value” over “impact” because it’s a little looser.
I’ve had good results when taking criteria along these lines and building a simple scoring system — assigning High, Medium, Low, or Unknown for each one, and then plugging in weighted scores for each value for each criterion. The formula won’t automatically prioritize the hypotheses, but it does give you a list that is sortable in a logical way. It, at least, reveals the “top candidates” and the “stinkers.”
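To make that a little more concrete, here is a minimal sketch of what such a scoring system could look like in Python. The criteria names, weights, and point values are all assumptions for illustration (and they only cover a subset of the criteria above); calibrate them to your own organization.

```python
# A minimal sketch of a weighted hypothesis-scoring system.
# Criteria names, weights, and point values are illustrative assumptions.

# Points assigned to each rating (Unknown deliberately sits between Low and Medium).
RATING_POINTS = {"High": 3, "Medium": 2, "Low": 1, "Unknown": 1.5}

# Relative weight of each prioritization criterion.
CRITERIA_WEIGHTS = {
    "likelihood_true": 1.0,        # How likely is the hypothesis to be validated?
    "likelihood_of_action": 2.0,   # Will we actually act on the result?
    "effort": 1.5,                 # Inverted below: lower effort should score higher.
    "business_value": 2.5,         # Value if validated and acted upon.
}

def score_hypothesis(ratings: dict) -> float:
    """Turn High/Medium/Low/Unknown ratings into a single sortable score."""
    total = 0.0
    for criterion, weight in CRITERIA_WEIGHTS.items():
        points = RATING_POINTS[ratings[criterion]]
        # For effort, "High" is bad, so flip the scale.
        if criterion == "effort":
            points = max(RATING_POINTS.values()) + min(RATING_POINTS.values()) - points
        total += weight * points
    return total

hypotheses = [
    {"name": "Shorten the registration form",
     "ratings": {"likelihood_true": "High", "likelihood_of_action": "Low",
                 "effort": "High", "business_value": "High"}},
    {"name": "Reorder navigation for SEO-driven visitors",
     "ratings": {"likelihood_true": "Medium", "likelihood_of_action": "High",
                 "effort": "Low", "business_value": "Medium"}},
]

# Sort so the "top candidates" float up and the "stinkers" sink.
for h in sorted(hypotheses, key=lambda h: score_hypothesis(h["ratings"]), reverse=True):
    print(f'{score_hypothesis(h["ratings"]):5.1f}  {h["name"]}')
```

The only design choice worth calling out: effort is inverted so that low-effort hypotheses score higher, and “Unknown” lands between Low and Medium rather than at zero.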
Performance Measurement (think “Reporting”)
Analysts can provide a lot of value by setting up automated (or near-automated) performance measurement dashboards and reports. These are recurring (hypothesis testing is not — once you test a hypothesis, you don’t need to keep retesting it unless you make a change that warrants revisiting it).
Any recurring report* should be goal- and KPI-oriented. KPIs and some basic contextual/supporting metrics should go on the dashboard, and targets need to be set (and set up such that alerts are triggered when a KPI slips; there’s a minimal sketch of that sort of check a bit further down). Figuring out what should go on a well-designed dashboard comes down to answering two questions:
- What are we trying to achieve? (What are our business goals for this thing we will be reporting on?)
- How will we know that we’re doing that? (What are our KPIs?)
They need to get asked and answered in order, and that’s oftentimes a messier exercise than we’d like it to be. Analysts can play a strong role in getting these questions appropriately answered…but that’s a topic for another time.
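On the “alerts are triggered when a KPI slips” point above, here is a minimal sketch of the kind of check a dashboard or scheduled job could run. The KPI names, targets, and tolerance are hypothetical.

```python
# A minimal sketch of a KPI-vs-target check that could back dashboard alerts.
# KPI names, current values, targets, and the tolerance are hypothetical.

KPI_TARGETS = {
    "conversion_rate": 0.025,   # 2.5% target
    "revenue_per_visit": 1.80,  # dollars
}

TOLERANCE = 0.10  # alert if a KPI falls more than 10% below its target

def kpis_needing_attention(current_values: dict) -> list:
    """Return the KPIs that have slipped past the tolerance threshold."""
    alerts = []
    for kpi, target in KPI_TARGETS.items():
        actual = current_values.get(kpi)
        if actual is not None and actual < target * (1 - TOLERANCE):
            alerts.append(f"{kpi}: {actual:.3f} vs. target {target:.3f}")
    return alerts

# Example run with made-up numbers pulled from whatever reporting API you use.
print(kpis_needing_attention({"conversion_rate": 0.021, "revenue_per_visit": 1.85}))
# -> ['conversion_rate: 0.021 vs. target 0.025']
```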
Every other recurring report that is requested should be linkable back to a dashboard (“I have KPIs for my paid search performance, so I’d like to always get a list of the keywords and their individual performance so I have that as a quick reference if a KPI changes drastically.”).
Having said that, a lot of tools can be set up to automatically spit out all sorts of data on a recurring basis. I resist the temptation to say, “Hey…if it’s only going to take me 5 minutes to set it up, I shouldn’t waste my time trying to validate its value.” But, it can be hard not to appear obstructionist in those situations, so, sometimes, the fastest route is the best. Even if, deep down, you know you’re delivering something that will get looked at the first 2-3 times it goes out…and will never be viewed again.
Quick Data Requests — Very Risky Territory (but needed)
So, what’s left? That would be requests of the “What was traffic to the site last month?” ilk. There’s a gross misperception when it comes to “quick” requests that there is a strong correlation between the amount of time required to make the request and the amount of time required to fulfill the request. Whenever someone tells me they have a “quick question,” I playfully warn them that the length of the question tends to be inversely correlated to the time and effort required to provide an answer.
Here’s something I’ve only loosely tested when it comes to these sorts of requests. But, all indications are that I’m going to be embarking on a journey to formalize the intake and management of these requests in the very near future, so I’m going to go ahead and write my current thinking down here (please leave a comment with feedback!).
First, there is how the request should be structured — the information I try to grab as the request comes in:
- The basics — who is making the request and when the data is needed; you can even include a “priority” field…the rest of the request info should help vet out if that priority is accurate.
- A brief (255 characters or so) articulation of the request — if it can’t be articulated briefly, it probably falls into one of the other two categories above. OR…it’s actually a dozen “quick requests” trying to be lumped together into a single one. (Wag your finger. Say, “Tsk, tsk!”)
- An identification of what the request will be used for — there are basically three options, and, behind the scenes, those options are an indication as to the value and priority of the request:
- General information — Low Value (“I’m curious,” “It would be interesting — but not necessarily actionable — to know…”)
- To aid with hypothesis development — Medium Value (“I have an idea about SEO-driven visitors who reach our shopping cart, but I want to know how many visits fall into that segment before I flesh it out.”)
- To make a specific decision — High Value
- The timeframe to be included in the data — it’s funny how often requests come in that want some simple metric…but don’t say for when!
- The actual data details — this can be a longer field; ideally, it would be in “dimensions and metrics” terminology…but that’s a bit much to expect many requestors to understand.
- Desired delivery format — a multi-select with several options:
- Raw data in Excel
- Visualized summary in Excel
- Presentation-ready slides
- Documentation on how to self-service similar data pulls in the future
The more options selected for the delivery format, obviously, the higher the effort required to fulfill the request.
All of this information can be collected with a pretty simple, clean, non-intimidating intake form. The goal isn’t to make it hard to make requests, but there is some value in forcing a little bit of thought rather than the requestor being able to simply dash off a quickly-written email and then wait for the analyst to fill in the many blanks in the request.
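For what it’s worth, here is a rough sketch of that intake structure expressed as a simple data model. The field names, enumerations, and the purpose-to-value mapping are my assumptions based on the list above, not a prescription.

```python
# A rough sketch of the quick-data-request intake structure described above.
# Field names, enums, and the purpose-to-value mapping are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Purpose(Enum):
    GENERAL_INFORMATION = "Low Value"
    HYPOTHESIS_DEVELOPMENT = "Medium Value"
    SPECIFIC_DECISION = "High Value"

class DeliveryFormat(Enum):
    RAW_DATA_EXCEL = "Raw data in Excel"
    VISUALIZED_SUMMARY_EXCEL = "Visualized summary in Excel"
    PRESENTATION_SLIDES = "Presentation-ready slides"
    SELF_SERVICE_DOCS = "Documentation for self-service pulls"

@dataclass
class QuickDataRequest:
    requestor: str
    needed_by: date
    summary: str                      # brief articulation, ~255 characters
    purpose: Purpose                  # drives value/priority behind the scenes
    timeframe: str                    # the date range the data should cover
    data_details: str                 # ideally in dimensions-and-metrics terms
    delivery_formats: list = field(default_factory=list)  # multi-select

    @property
    def priority_hint(self) -> str:
        return self.purpose.value

# Hypothetical example request.
request = QuickDataRequest(
    requestor="Jane in paid search",
    needed_by=date(2014, 4, 15),
    summary="Visits to the cart from organic search, by week, last quarter",
    purpose=Purpose.HYPOTHESIS_DEVELOPMENT,
    timeframe="Last full quarter",
    data_details="Sessions by week, segmented to organic search + cart page",
    delivery_formats=[DeliveryFormat.VISUALIZED_SUMMARY_EXCEL],
)
print(request.priority_hint)  # -> "Medium Value"
```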
But that’s just the first step.
The next step is to actually assess the request. This is the sort of thing, generally, an analyst needs to do, and it covers two main areas:
- Is the request clear? If not, then some follow-up with the requestor is required (ideally, in a system that allows this to happen as comments or a discussion linked to the original request — Jira, SharePoint, Lotus Notes, etc.)
- What will the effort be to pull the data? This can be a simple High/Medium/Low with hours ranges assigned as they make sense to each classification.
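And, as a trivial illustration of that second assessment, the effort classification can be as simple as a lookup like the one below; the hour thresholds are purely hypothetical.

```python
# A trivial sketch of mapping estimated hours to an effort classification.
# The hour thresholds are purely hypothetical.
def classify_effort(estimated_hours: float) -> str:
    if estimated_hours <= 2:
        return "Low"
    if estimated_hours <= 8:
        return "Medium"
    return "High"

print(classify_effort(1.5))   # -> "Low"
print(classify_effort(12))    # -> "High"
```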
At that point, there is still some level of traffic management required: SLAs based on priority and effort, perhaps, and a part of the organization oriented toward cranking out those requests as efficiently as possible.
The key here is to be pretty clear that these are not analysis requests. Generally speaking, it’s a request for data for a valid reason, but, in order to conduct an analysis, a hypothesis is required, and that doesn’t fit in this bucket.
So, THEN…Your Analytics Program Investment
If the analytics and optimization organization is framed across these three main types of services, then conscious investment decisions can be made:
- What is the maximum % of the analytics program cost that should be devoted to Quick Data Requests? Hopefully, not much (20-25%?).
- How much goes to performance measurement? Also, hopefully, not much — this may require some investment in automation tools, but, once smart analysts have been involved in defining and designing the main dashboards and reports, that is work that should be automated. Analysts are too scarce to be spending their time on weekly or monthly data exports and formatting.
- How much investment will be made in hypothesis testing? This is the highest-value work, so, ideally, this is where the bulk of the investment goes.
A process that captures all three types of effort in a discrete and trackable way enables reporting back out on the value delivered by the organization (a rough sketch of that reporting follows the list):
- Hypothesis testing — reporting is the number of hypotheses tested and the business value delivered from what was learned
- Performance measurement — reporting is the level of investment; this needs to be done…and it needs to be done efficiently
- Quick data requests — reporting is output-based: number of requests received, average turnaround time. In a way, this reporting is highlighting that this work is “just pulling data” — accountability for that data delivering business value really falls to the requestors. Of course, you have to gently communicate that or you won’t look like much of a team player, now, will you?
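If all three types of work are logged in one place, that reporting can be as mechanical as the sketch below; the log structure and the example records are illustrative assumptions.

```python
# A minimal sketch of reporting across the three buckets from a shared work log.
# The log structure and the example records are illustrative assumptions.
from statistics import mean

work_log = [
    {"type": "hypothesis", "hypotheses_tested": 1, "value_notes": "Validated; cart change shipped"},
    {"type": "hypothesis", "hypotheses_tested": 1, "value_notes": "Disproved; avoided a redesign"},
    {"type": "performance_measurement", "hours": 3},
    {"type": "quick_data_request", "turnaround_days": 2},
    {"type": "quick_data_request", "turnaround_days": 5},
]

hypotheses_tested = sum(i["hypotheses_tested"] for i in work_log if i["type"] == "hypothesis")
measurement_hours = sum(i["hours"] for i in work_log if i["type"] == "performance_measurement")
quick_requests = [i for i in work_log if i["type"] == "quick_data_request"]

print(f"Hypotheses tested: {hypotheses_tested}")
print(f"Performance measurement investment: {measurement_hours} hours")
print(f"Quick data requests: {len(quick_requests)}, "
      f"avg. turnaround {mean(i['turnaround_days'] for i in quick_requests):.1f} days")
```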
Over time, shifting an organization to think in terms of actionable and testable hypotheses is the goal — more hypotheses, fewer quick data requests!
And, of course, this approach sets up the potential to truly close the loop and follow through on any analysis/report/request delivered through a Digital Insight Management program (and, possibly, platform — like Sweetspot, which I haven’t used, personally, but which I love the concept of).
What Do You Think?
Does this make sense? It’s not exactly my opus, but, as I’ve hastily banged it out this evening, I realize that it includes many of the approaches that have brought me the most success in my analytics career, as well as many of the structures that have helped me head off the many ways I’ve screwed up and had failures along the way.
I’d love your thoughts!
*Of course, there are always valid exceptions.