Digital and Social Measurement Based on Causal Models
Working for an agency that does exclusively digital marketing work, with a heavy emphasis on emerging channels such as mobile and social media, I’m constantly trying to figure out how best to measure the effectiveness of the work we do in a way that is meaningful enough to analyze and optimize our efforts.
Fairly regularly, I’m drawn into work where the team has unrealistic expectations of the degree to which I can accurately quantify the impact of their initiatives on their top (or bottom) line. I’ve come at these discussions from a variety of angles:
- Acknowledging why expectations are inflated — what I’ve termed The Frustration Gap
- Articulating the myriad ways that media mix modeling (MMM) has started to crumble
- Developing a handy analogy based on the Mississippi River to explain the complexity of today’s measurement reality
- Putting forth a framework for social media measurement that makes the distinction between measuring the performance of individual channels and measuring the overall brand outcomes that result from these channels working in concert
This post is largely an evolution of the last link above. It’s something I’ve been exploring over the past six months, and it was strongly reinforced when I read John Lovett’s recent book. As I’ve been doing measurement planning (measurement strategy? marketing optimization planning?) with clients, it has turned out to be quite useful whenever I have the opportunity to apply it.
Initially, I referred to this approach as developing a “logical model” (that’s even what I called it towards the end of my second post that referenced John’s book), but that was a bit bothersome, since “logical model” has a very specific meaning in the world of database design. Then, a couple of months ago, I stumbled on an old Harvard Business Review paper about using non-financial measures for performance measurement, and that paper introduced the same concept, but referred to it as a “causal model.” I like it!
How It Works
The concept is straightforward and not particularly time-consuming. It’s a great exercise for ensuring everyone involved is aligned on why a particular initiative is being kicked off, it sets up meaningful optimization work as individual tactics and campaigns are implemented, and it positions you to demonstrate a link (correlation) between marketing activities and business results.
This approach acknowledges that there is no existing master model that shows exactly how a brand’s target consumers interact with and respond to brand activity. The process starts with more “art” than “science”: knowledge of the brand’s target consumers and their behaviors, knowledge of emerging channels and where they’re best suited (e.g., a QR code on a billboard on a busy highway…not typically a good match), and a hefty dose of strategic thought.
The exact structure of this sort of model varies widely from situation to situation, but I like to have my measurable objectives (what we think we’re going to achieve through the initiative or program, and what we believe has underlying business value) listed on the left side of the page, and then build linkages from those to a more definitive business outcome on the right.
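To make that left-to-right structure a bit more concrete, here’s a minimal sketch, in Python, of how such a model could be captured as a simple chain of “this leads to that.” Everything in it is hypothetical — the objectives, linkages, and KPIs are illustrative examples, not pulled from any real engagement or prescribed by any particular tool.

```python
# Hypothetical causal model for an illustrative social-content initiative.
# Each node maps to the downstream node(s) it is believed to drive,
# reading left (measurable objectives) to right (business outcome).
causal_model = {
    "Grow engaged social following": ["Increase qualified site traffic"],
    "Publish helpful how-to content": ["Increase qualified site traffic"],
    "Increase qualified site traffic": ["Grow email opt-ins"],
    "Grow email opt-ins": ["Increase online sales"],  # rightmost: business outcome
}

# KPIs chosen to measure each objective against its "next step" in the model.
kpis = {
    "Grow engaged social following": "engagement rate per post",
    "Publish helpful how-to content": "content shares per week",
    "Increase qualified site traffic": "social-referred visits with >1 page view",
    "Grow email opt-ins": "opt-ins per 1,000 social-referred visits",
}

def trace_to_outcome(node, model, path=None):
    """Walk the model from an objective out to the business outcome it links to."""
    path = (path or []) + [node]
    downstream = model.get(node, [])
    if not downstream:  # no further links: we've reached the business outcome
        return path
    return trace_to_outcome(downstream[0], model, path)  # follow the first link (sketch only)

for objective, kpi in kpis.items():
    chain = " -> ".join(trace_to_outcome(objective, causal_model))
    print(f"{objective} [KPI: {kpi}]\n    {chain}\n")
```

However you choose to capture it (a whiteboard works just as well), the point is the same: each objective should trace, in a small number of believable steps, to the business outcome on the right.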
It should fit on a single page, and it requires input from multiple stakeholders. Ultimately, it can be a simple illustration of “why we’re doing this” for anyone to review and critique. If there are some pretty big leaps required, or if there are numerous steps along the way to get to tangible business value, that raises the question: “Is this really worth doing?” It’s an easy litmus test as to whether an initiative makes sense.
What I’ve found is that this exercise can actually alter the original objectives in the planning stage, which is a much better time and place to alter them than once execution is well under way!
Once the model is agreed to, you can focus on measuring and optimizing to the outputs from the base objectives, using KPIs that are appropriate for both the objective and the “next step” in the causal model.
And, over time, the performance of those KPIs can be correlated with the downstream components of the causal model to validate (and adjust) the model itself.
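One simple starting point for that validation is to line up the periodic values of a KPI against the downstream outcome it’s supposed to drive and look at the correlation at a few time lags. Here’s a minimal sketch with made-up weekly data — the column names and figures are purely hypothetical, and a real check would use the brand’s own reporting data and something sturdier than a single correlation coefficient:

```python
import pandas as pd

# Hypothetical weekly data for one link in the model: an upstream KPI
# (engagement rate) and a downstream outcome (email opt-ins). Numbers are made up.
df = pd.DataFrame({
    "engagement_rate": [2.1, 2.4, 2.3, 2.9, 3.1, 3.0, 3.4, 3.8, 3.7, 4.1],
    "email_opt_ins":   [120, 118, 131, 129, 142, 155, 151, 160, 172, 169],
})

# Correlate the KPI with the outcome at a few lags, since upstream activity
# often takes a few weeks to show up downstream.
for lag in range(4):
    r = df["engagement_rate"].corr(df["email_opt_ins"].shift(-lag))
    print(f"lag of {lag} week(s): r = {r:.2f}")
```

If the relationship holds up, it supports that link in the model; if it doesn’t, that’s the cue to adjust either the linkage or the KPI.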
This all gets back to the key point: measurement and analytics is a combination of art and science. Initially, it’s more art than science; the science is then used to refine, validate, and inform the art.