A Pragmatic Approach to "Test and Learn"
“We’re going to use a ‘test and learn’ approach” has become as common a buzzphrase as “We’re going to be data-driven” and “We’re going to derive actionable insights.” I’m not a fan of buzzphrases. They tend to originate as statements of an aspirational goal and then quickly morph into being treated as accepted reality. When someone like Eric Peterson steps up and delves into one of these buzzphrases, I do figurative backflips of joy.
The devil is in the details (which is not only a buzzphrase, but a full-on cliché…but it’s true!). And, the details come down to the right people with a valid process using capable tools. When it comes to “test and learn,” the gap between concept and actual implementation often seems to be a true chasm.
The concept: use a combination of A/B (and multivariate) testing and the analysis of historical data to test hypotheses. Then, based on whether each hypothesis is disproven or withstands the test, take appropriate action to drive continuous improvement.
The actual implementation: HiPPOs (the Highest Paid Person’s Opinions), lack of clarity on what the KPIs are (without KPIs to optimize against, there can be no optimization), limited resources, over-focus on a specific technique or tool as “the answer,” inability to coordinate/align between marketers/designers/strategists/analysts, analyses resulting in “light gray or dark gray” conclusions rather than “black or white” ones, and so on.
On top of the challenges that have always existed, even in the “simple” world of a brand’s digital presence being primarily limited to its web site and the drivers of traffic to that site (SEO, SEM, banner ads, affiliate programs), we now operate in a world that includes social media. And most of a brand’s social media activity cannot be A/B tested in a classical sense, so that tried-and-true (but, alas, still too rare) technique is not available.
None of these challenges means that “test and learn” is an unattainable ideal. But they do mean that a strong process with a diligent steward (read: an analyst who is willing to expend some bandwidth as a project manager) is in order. For reasons we’ll cover at a later date, I’m working on codifying such a process, based on what has (and hasn’t) worked for me in past and current roles. Here we go!
Step 1: Develop a Structured, Living Learning List
Step 1 is key. When we talk about learning in a digital data context, we’re talking about a never-ending process. This isn’t “Algebra I,” where a syllabus can be developed once, locked down, and then pulled out semester after semester for each new set of incoming students. Rather, we’re talking about a list that will grow over time. Use Excel. Use MS Access. Use a spreadsheet in Google Drive. Or, get fancy, and use SharePoint or Jive or any of a gazillion knowledge management platforms. It doesn’t really matter. But having a centralized, living, taggable, and trackable list of “learning possibilities” is critical. Otherwise, great ideas can be fleeting — lost to a tragedy of poor timing and imperfect human memory.
This is a list of learning opportunities that any stakeholder (core or extended) proposes as a useful target for testing and analysis. Here’s a start for what should be captured for each item on the list:
- A title for the learning opportunity
- The name of the person who submitted it
- The date it was submitted
- A description of the question being asked
- The potential business impact (High/Medium/Low) that answering the question would enable
Those are “core” pieces of information that the person submitting the opportunity needs to provide. In addition, the list needs to include some other fields to be populated over time:
- The status of the question (open, in work, cancelled/rejected, completed, etc.)
- The date the question was cancelled or completed
- What sort of testing or analysis would be required to answer the question (historical data analysis, secondary research, primary research, A/B or multivariate testing, in-market experimentation, etc.)
- The level of effort / time estimated to answer the question
- A summary of the results of the analysis (the “answer” to the question) and where the full analysis can be found
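To make the structure concrete, here’s a minimal sketch of what a single item on the list might look like if it were tracked in code rather than in a spreadsheet. The field names here are my own illustrative choices, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class LearningOpportunity:
    # "Core" fields supplied by the submitter
    title: str                     # a title for the learning opportunity
    submitted_by: str              # who submitted it
    submitted_on: date             # when it was submitted
    question: str                  # the question being asked
    business_impact: str           # "High", "Medium", or "Low"
    # Fields populated over time as the item is worked
    status: str = "open"           # open, in work, cancelled/rejected, completed
    closed_on: Optional[date] = None
    method: Optional[str] = None   # e.g., "historical data analysis", "A/B test"
    effort_estimate: Optional[str] = None
    results_summary: Optional[str] = None
    results_location: Optional[str] = None  # where the full analysis lives
```

Whether the list actually lives in Excel, Google Drive, or SharePoint, the point is the same: every item carries the same fields, so the list stays filterable and trackable over time.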
Once you’ve settled on where this list will live, who will maintain it, and exactly what fields it will contain, it’s time to move on to…
Step 2: Capture Ideas from Stakeholders
A fairly common delusion in business is that analysts have access to all of the data and have tools at their disposal that will crunch that data in such a way that insights magically emerge. It doesn’t work that way. Analysis is an exercise in asking smart and valid business questions and then using the specifics of each question to frame and execute an analysis to get an answer.
Analysts are only ONE source for smart and valid business questions!
The reason we set up the list in Step 1 the way we did was so that we can capture more questions than we could possibly ever answer. That’s a fantastic situation in which to be, because it means you have a vast pool of learning opportunities to draw from, rather than scrambling around to find one or two things worth analyzing.
The idea is to make it very easy for any stakeholder who has an idea or a question to quickly and easily get it recorded and available for consideration for analysis: the developer who read a Mashable article that inspired a thought about the current web site, the designer who was torn between two treatments of the global navigation on the site, the marketer who knows an upcoming campaign will be a litmus test as to whether a particular channel is worth the company’s investment, etc.
This doesn’t mean this process is all about volume. The “input form” (the first bulleted list above) should force some basic consideration on the part of the submitter to qualify the idea. The fact is, the people in the organization who are most interested in being data-informed, and in moving the business forward, will be the ones who engage in this quasi-crowdsourced learning process.
Step 3: Develop a Means of Prioritizing the Ideas
While Step 2 is intended to be as egalitarian as possible, the act of prioritizing the ideas is, by necessity, much less so. The HiPP is the HiPP for a reason (I say HiPP here rather than HiPPO because we’re talking about the Highest Paid Person rather than her specific Opinions), and the person who is accountable for the budget that pays for the analyst deserves a greater say in where the analyst spends her time.
First, of course, there needs to be some set of agreed-to criteria for prioritizing the ideas. The specifics will vary, but these will likely include:
- The likely impact to the business if the question is successfully answered (including how quickly the organization will be able to act on the results and the cost to the organization to do so)
- The long-term applicability of the results of the exercise
- The expected time and cost to conduct the analysis
An assessment of these criteria should be captured and recorded in the list developed in Step 1. But these are inherently subjective assessments, so it still comes back to people-driven decision making as to what gets tested and analyzed and when. There are several ways to tackle this — the approaches listed below have all worked for me, in one form or another, over the years, but I’m sure there are others:
- Have a core team of stakeholders regularly review the list of questions and decide which ones to tackle and when (see Step 5)
- Set up a way for everyone who submitted ideas to also vote on ideas — a “thumbs-up” for an idea moves it up the list such that the more people who give it a boost, the more likely it is to be a question that gets tackled
- A modified combination of the above is to vary the voting weight of each person; I’ve been through exercises where everyone is given an amount of (fake) money that they get to “invest” in the questions they would like to see answered. They can invest in as few or as many questions as they like. With this approach, the key decision makers / budget owners can be given more money to invest (a sketch of how the tallying might work follows this list).
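Here’s a minimal sketch of how that investment-based tallying might work. All of the stakeholder names, question names, and dollar amounts are purely illustrative:

```python
# Each stakeholder gets a (fake) budget to invest; budget owners get more.
budgets = {"analyst": 100, "marketer": 100, "budget_owner": 300}

# Hypothetical investments, in fake dollars, across open questions.
investments = {
    "analyst":      {"nav-redesign": 60, "email-cadence": 40},
    "marketer":     {"email-cadence": 100},
    "budget_owner": {"nav-redesign": 200, "channel-roi": 100},
}

def prioritize(investments, budgets):
    """Tally total investment per question; highest total = highest priority."""
    totals = {}
    for person, picks in investments.items():
        if sum(picks.values()) > budgets[person]:
            raise ValueError(f"{person} invested more than their budget")
        for question, amount in picks.items():
            totals[question] = totals.get(question, 0) + amount
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(prioritize(investments, budgets))
# [('nav-redesign', 260), ('email-cadence', 140), ('channel-roi', 100)]
```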
This is one of the trickier aspects of managing the program, because it requires that the right people are actively engaged with the process and available to participate on a recurring basis (see Step 5).
Step 4: Have a Clear Start/End for Each Analysis
Step 4 and Step 5 are all about process and rigor. Each accepted/approved learning opportunity should be treated as a mini-project and managed as such. In most cases, the analyst can also be the project manager, but some efforts may warrant a professional project manager. Key elements of project managing the analysis include:
- Identifying all of the people who will be involved or impacted in some way (a RASCI matrix is handy for this)
- Development of a work breakdown structure — a list of all of the steps that will be required to complete the test or analysis, as well as the sequence of that effort
- Development of a project schedule — when each task on the work breakdown structure will be tackled (in some cases, there will be a “wait and see” period that needs to be built into the schedule; with social media, especially, it’s common to need to explicitly change a tactic for a period of time — a week or two — and then evaluate the results of that change; in these cases, there are periods of time when there is no actual analysis “work” being performed on the project)
- Establishment and publication of key milestones
- A plan for communication — who will be updated, when, and how
- A commitment to regularly checking the project schedule against the actual work completed
“Egad!” you exclaim. “All of THAT is supposed to be done by the analyst?” Well, yes. It doesn’t have to be a huge deal. For many analyses, this can all be done in an Excel spreadsheet with a few recurring Outlook or Google Calendar reminders. In many cases, the test or analysis may be so small that the “schedule” is a single day. But, having the diligence to think through the what, the who, the how, and the when — and then managing the work to the results of that thinking — brings rigor to the work and credibility to the process. It prevents wheel-spinning, and, by feeding back to the list from Step 1, helps build an inventory of discrete, completed work over time that can then be used to assess the overall effectiveness of the analytics program and ensure that what has been learned in the past gets applied in the future.
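For those inclined to go a step beyond Excel, even the plan-versus-actual check can be a few lines of code. A sketch, with hypothetical tasks and dates:

```python
from datetime import date

# A minimal work breakdown structure: (task, planned completion, done?)
schedule = [
    ("Pull 12 months of historical site data",  date(2013, 5, 6),  True),
    ("Segment visits by campaign source",       date(2013, 5, 8),  True),
    ("Wait-and-see period after tactic change", date(2013, 5, 22), False),
    ("Summarize results; update learning list", date(2013, 5, 24), False),
]

def check_schedule(schedule, today):
    """Flag any task that is past its planned date but not yet complete."""
    for task, planned, done in schedule:
        if not done and today > planned:
            print(f"BEHIND: {task} (planned {planned})")

check_schedule(schedule, date(2013, 5, 23))
# BEHIND: Wait-and-see period after tactic change (planned 2013-05-22)
```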
Step 5: Develop a Fixed Cadence for Updates
As I noted at the beginning of Step 4, that step and this one are complementary. And, in some ways, they are in tension:
- Step 4 — recognize that each analysis is different and unique and has its own schedule; that schedule may be a single day, it may be a week, it may be several months. Treat it as such and manage each effort as a small project
- Step 5 — establish a fixed cadence for providing communication, updates, and assessment of the overall program
The fixed cadence may be daily (I’ve never had a case where that was warranted, but the Agile development methodology dictates daily stand-ups, so there may be analytics programs where it makes sense), weekly, or even monthly. Having been at an agency for the last three years, monthly was often the most frequent cadence I could manage. This fixed cadence can include the delivery of recurring performance measurement results (KPI-driven dashboards) if those require an in-person review. But the focus of these communications should be on the learning plan:
- What questions from the list have been answered since the last update
- What questions are currently being worked on and how (historical data analysis, A/B test, adjustment of digital tactics for a fixed period of time to measure results, etc.)
- What new questions have come in for consideration
If this cadence includes a formal meeting with the stakeholders, which is ideal, then a discussion that generates new questions, as well as the prioritization of new questions, can also be part of this meeting.
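If the list from Step 1 lives somewhere code can reach it, assembling this update can be largely automated. A sketch that reuses the LearningOpportunity structure from the Step 1 example (again, an illustration rather than a prescription):

```python
from datetime import date

def cadence_update(items, since):
    """Group learning-list items into the three buckets for the update."""
    answered = [i for i in items
                if i.status == "completed" and i.closed_on and i.closed_on >= since]
    in_work = [i for i in items if i.status == "in work"]
    new = [i for i in items if i.status == "open" and i.submitted_on >= since]

    print("Answered since last update:", [i.title for i in answered])
    print("Currently in work:", [(i.title, i.method) for i in in_work])
    print("New for consideration:", [i.title for i in new])
```

Pair that with whatever recurring dashboard delivery is already happening, and the fixed-cadence meeting has its agenda.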
“Test and Learn” Is the Core of Analysis
In addition to laying out a practical process for effectively driving continuous learning, I hope I have also illustrated that “testing” is inextricably bound with “analysis” and vice versa. We can’t treat testing as being limited to A/B and multivariate testing and analysis as being limited to historical data. To truly learn in a way that delivers business value, the focus has to be on the business questions. Depending on the question, the best way to answer the question should be selected from a comprehensive arsenal at the analyst’s disposal, and the overall process has to be rigorously managed.