When There's Just Not Enough Good Data to Draw a Conclusion
I ran into an interesting, but not uncommon, scenario this week. Someone was trying to do a back-of-the-napkin calculation to assess the relationship between leads and revenue for a company. This is probably the most common relationship that marketers want to find. In the seven years I’ve been in Marketing, I have never seen a company actually pull this off as a really tight correlation, but I’ve seen many, many people try.
There are lots of reasons for this: long and variable sales cycles, many marketing-driven interactions with a lead before Sales even starts working it, changing sales processes, changing product offerings, changing marketing processes, and, yes, revenue that comes in from sources that did not originate with Marketing.
So, obviously, this is a futile exercise, right?
Not exactly.
Back-of-the-napkin analyses, where one person takes macro numbers, makes reasonable assumptions, and then comes up with a result, do have value, even if they give a data purist heartburn. They do a couple of things:
- They force that one person to step back and think about the moving parts in the process. This can and should spark discussions and questions. Some of those questions are answerable (e.g., “How many times does Marketing touch our prospects who ultimately turn into opportunities for Sales?” “How long does it take for a new prospect who ultimately becomes a Sales opportunity to make that conversion?”), and, as long as they’re vetted to make sure the answers will be actionable in some way, these questions are worth trying to answer.
- If multiple people come at the same question with their own napkins, making their own assumptions based on their own experience and expertise, then all of those napkins can be laid on the same conference room table and discussed. That’s a Wisdom of Crowds-type opportunity. Everyone is all but guaranteed to come up with different answers, but the answers that are similar are interesting, because they were arrived at through different approaches. The outliers can prompt a useful discussion of what assumptions and approaches were taken, which will broaden the perspective of all of the napkin analysts and build consensus as to what is going on with the business. The sketch after this list shows what laying two napkins side by side might look like.
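To make that concrete, here’s a minimal sketch of what one of these napkins might look like written down as code. Every name and number in it (napkin_revenue, monthly_leads, lead_to_opp_rate, and so on) is a hypothetical assumption I made up for illustration, not a figure from the scenario above; the point is only that naming the assumptions explicitly makes them easy to compare across analysts.

```python
# A hypothetical back-of-the-napkin model: leads -> opportunities -> deals -> revenue.
# Every number below is an illustrative assumption, not real data.

def napkin_revenue(monthly_leads, lead_to_opp_rate, opp_win_rate, avg_deal_size):
    """Estimate monthly marketing-sourced revenue from a handful of macro inputs."""
    opportunities = monthly_leads * lead_to_opp_rate  # leads that Sales accepts
    deals = opportunities * opp_win_rate              # opportunities that close
    return deals * avg_deal_size                      # closed deals -> revenue

# Two analysts' napkins, each built from their own assumptions:
napkins = {
    "analyst_a": napkin_revenue(monthly_leads=2_000, lead_to_opp_rate=0.05,
                                opp_win_rate=0.25, avg_deal_size=40_000),
    "analyst_b": napkin_revenue(monthly_leads=1_800, lead_to_opp_rate=0.08,
                                opp_win_rate=0.20, avg_deal_size=35_000),
}

for name, estimate in napkins.items():
    print(f"{name}: ${estimate:,.0f} estimated monthly revenue")
```

If the two estimates land close together despite different inputs, that convergence is itself informative; if they diverge wildly, the variable-by-variable comparison tells you exactly which assumption to argue about at the conference room table.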
Are these analyses valid from a statistically rigorous perspective? Probably not. But, if they drive agreement on what a valid analysis should be, and if they highlight trouble spots that are preventing that analysis from happening, then they can still drive positive change.
In the case that spawned this post, we stumbled across one risk of these types of analyses that you need to be wary of. The person who did the analysis didn’t think the result he got could possibly be “right,” so he wanted to dive deeper into the data to try to get an accurate answer. He listed the assumptions he’d made and requested, basically, that we go through and validate them. For each one, though, we could unequivocally say that his instincts and experience, which drove his assumptions, were going to be much more valid than the data, for various reasons. So, instead, I did my own back-of-the-napkin analysis with my own assumptions and my own experience…and arrived at a result pretty similar to his.
That seemed to work.