Sometimes, it doesn't make sense to look at the data
(YET AGAIN aiming for a briefer entry)
We got into a discussion today at work about how we were going to optimize certain aspects of our marketing efforts. Specifically, we were discussing adjusting some of our campaigns so that the leads they generate are segmented in a way that could trigger different sales follow-up processes. As one of the data folk, and the only one in the meeting, all eyes turned to me with that, “Well, Tim, what can you do to tell us how we should tweak and tune these aspects?”
I’m all for using data wherever possible, but this was a classic case where the data was a rathole waiting to happen. In this case, we were discussing aspects of the campaigns that have only been formalized for the past few months. And, we’re a relatively small company. While we’ve dramatically increased the number of leads we pass to Sales each month, we’re still playing well in the 4-digit range for total volume, and well in the single digits for the number of discrete campaigns we execute each month. That’s a paucity of data when it comes to looking back and trying to glean insights across multiple dimensions at once.
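To put some rough numbers on that (these are made up for illustration, not our actual figures): even a few thousand leads a month gets thin fast once you start slicing by campaign and segment. A quick back-of-the-envelope sketch in Python:

```python
import math

# Hypothetical illustration (made-up numbers, not actual figures):
# a few thousand leads a month, spread across a handful of campaigns and
# then split again by lead segment, leaves surprisingly little to learn from.
leads_per_month = 2000
campaigns = 8
segments_per_campaign = 4

cell_size = leads_per_month / (campaigns * segments_per_campaign)
print(f"Average leads per campaign/segment cell: {cell_size:.0f}")  # ~62

# With ~62 leads in a cell and, say, a 10% conversion rate, the 95% confidence
# interval on that rate (normal approximation) swamps the differences we would
# realistically be trying to detect.
n, p = cell_size, 0.10
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Observed rate: {p:.0%} +/- {margin:.1%}")  # roughly +/- 7.4 points
```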
In this case, it makes a lot more sense to pursue two approaches simultaneously. For the very-near-term changes, we need to talk to the experienced Sales and Marketing personnel at the company and treat their experience as a data source. At the same time, we need to start some pretty rudimentary A/B testing, applying some basic design of experiments to the process.
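For what it’s worth, the “rudimentary” part really can be rudimentary. Here’s a minimal sketch of what evaluating one of those A/B tests could look like, with entirely hypothetical counts (a plain two-proportion z-test, not anything we’ve actually run):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Entirely hypothetical counts: leads and conversions for two campaign variants
p_a, p_b, z, p_value = two_proportion_ztest(conv_a=48, n_a=400, conv_b=66, n_b=410)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z: {z:.2f}  p-value: {p_value:.3f}")
```

Nothing fancy, but even something this simple forces the conversation about how big a difference we would need to see, and how long we would have to wait, before anyone should act on the result.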
Now, an academic or a theorist may say the first part of this is a horrible idea, because those experienced personnel may well have deeply-ingrained myths as to what works and what doesn’t. And, there’s a chance — although a small chance — that this is true. But, there’s a twofold benefit to getting their opinions and running with them: 1) they’re more likely right than they are wrong, and 2) that gets their buy-in and shows them that you value their input. That’s the “human nature” element of things — build the relationship and trust first, so that when your A/B testing turns up some interesting results, they might actually agree to help implement them. Even better, pull those same people in to help you pick what tests to do when, AND pull them in to help interpret the results.
It’s just darn naive to view the world as a place where data is so abundant, so clean, and so clear-cut that it will spit out results that are so irrefutable that there’s no need to apply any soft skills to manage change.