You Might Be Overanalyzing If…
I was working with a client last week who was looking to update their lead scoring. This wasn’t any fancy-schmancy multidimensional lead scoring — it was plain ol’ pick-a-few-fields-and-assign-’em-some-values lead scoring. Which is a great place to start.
In this case, the company was in the process of streamlining their registration form on one area of their web site. This was an experiment to see if we could improve the form’s conversion rate by reducing the number and complexity of the fields visitors were required to fill out. We took a good hard look at the fields and asked two things: 1) Do we really need to know this information up front? 2) Is the information “easy” to provide? (An “Industry” list with 25 options was deemed “hard,” because the visitor had to scan through the whole list and then make a judgment call as to which industry best fit their situation.)
The result was that we combined a couple of fields, removed a couple of fields, and reworded one question and the possible answers. So far, so good. The kicker was that these changes, while still giving us all of the same underlying information that the company was using to assess the quality of their leads, required changing the lead scoring formula. The formula was going from having three variables to two, because two of the scored variables had been merged into a single, much shorter, much clearer field.
We interrupt this blog entry to provide an aside on cognitive dissonance
The company’s existing three-variable lead score was fairly problematic. When qualitatively assessing a batch of leads, the Sales organization could always pick out a number of high-scoring leads whom they were not interested in calling, and a number of low-scoring leads whom they absolutely wanted to reach out to. “Our lead score is pretty awful,” was the general consensus.
At the same time, the lead score was used at an aggregate level — by the same people — to assess the results from various lead generating activities. “We had 35 leads that scored over 1,000! This event was great!”
We’ll go with the Wiktionary definition of cognitive dissonance: “a conflict or anxiety resulting from inconsistencies between one’s beliefs and one’s actions or other beliefs.” In this case, the conflict was between a strongly held belief that the lead scoring was fatally flawed and an equally strongly held belief that the lead score was a great way to assess the results of lead gen efforts.
Initially, we (I) actually let the latter belief prevail, and I struggled to come up with a new lead scoring formula and value weighting that would assess each lead as similarly as possible to the old scoring system.
And I kept hitting dead ends.
It then occurred to me that, by going through the exercise of streamlining the fields, we had actually gained some valuable insight into what the Sales organization did and did not see as important qualification criteria for the leads that were sent to them.
So, I started over.
The two scored fields that we were planning to continue to capture were “Job Role” and “Annual Revenue.” Job role was a hybrid of job title and department — a short list that really homed in on the types of people who were most likely to be influencers or decision-makers when it came to the company’s services. We’d discovered, while getting to those fields on the registration form, that if a company had greater than $25 million in annual revenue, the Sales organization wanted to talk to them regardless of the contact’s role in the company. Likewise, there were a handful of job roles that Sales wanted to talk to regardless of the (reported) annual revenue.

So, we started by making sure that those “trump” values would put the lead over the qualification threshold regardless of the other field’s value. We then worked backwards from there to the mid-tier values — values that would qualify the lead only if the other field was also promising. And so on from there. This was much more an exercise in logic than an exercise in analysis (a sketch of the approach follows below). But it made more sense than the lead score it was replacing.
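To make the logic concrete, here is a minimal sketch of that tiered approach in Python. The specific role names, revenue cutoffs, and mid-tier values are hypothetical stand-ins; only the structure (trump values first, then mid-tier combinations) reflects what we actually built:

```python
# A minimal sketch of the two-field qualification logic. The role
# names and revenue bands below are hypothetical; only the structure
# (trump values, then mid-tier combinations) mirrors what we built.

# Trump values: either one alone qualifies the lead.
TRUMP_ROLES = {"VP of Marketing", "CMO"}   # hypothetical roles
TRUMP_REVENUE = 25_000_000                 # >$25M qualifies outright

# Mid-tier values: promising, but only in combination with each other.
MID_ROLES = {"Marketing Manager", "Web Analyst"}  # hypothetical roles
MID_REVENUE = 5_000_000                           # hypothetical cutoff

def is_qualified(job_role: str, annual_revenue: float) -> bool:
    """Return True if the lead clears the qualification threshold."""
    # Trump values first: either field alone can qualify the lead.
    if job_role in TRUMP_ROLES or annual_revenue > TRUMP_REVENUE:
        return True
    # Mid-tier: both fields have to look promising.
    if job_role in MID_ROLES and annual_revenue > MID_REVENUE:
        return True
    # Everything else falls below the threshold.
    return False
```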
As a check, we compared a sample of leads using both the old and new scoring methods. We highlighted a random set of leads that would have moved from below the qualification threshold in the old scoring system to above it in the new, and vice versa. The majority of these shifts made sense. And, overall, it looked like we would be qualifying a slightly higher percentage of leads under the new system. We patted ourselves on the back, summarized the changes, the logic, and the before-vs-after results…and headed down to Sales to make sure they were looped in and could identify any gaping holes in our logic.
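The check itself was mechanical. A sketch of the kind of comparison we ran (the old formula isn’t reproduced here, so `old_score` and `old_threshold` are stand-ins):

```python
# Compare qualification under the old formula vs. the new logic for a
# sample of leads. old_score and old_threshold stand in for the
# original three-variable formula, which isn't shown here.
def compare_scoring(leads, old_score, old_threshold):
    moved_up, moved_down = [], []
    for lead in leads:
        was_qualified = old_score(lead) >= old_threshold
        now_qualified = is_qualified(lead["job_role"], lead["annual_revenue"])
        if now_qualified and not was_qualified:
            moved_up.append(lead)    # newly qualified under the new logic
        elif was_qualified and not now_qualified:
            moved_down.append(lead)  # newly disqualified
    # Eyeball both lists: do the shifts make sense?
    return moved_up, moved_down
```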
Instead, when we walked them through it, they homed in on two things:
- The slight increase in leads that would be qualified using the new system
- One lead who had a very low-level job title…at a >$1 billion company — she was not qualified under the old system but became qualified under the new
Things then got a bit ugly, which was unfortunate. Cognitive dissonance again. The old system let plenty of not-good leads through to Sales and kept just as many good leads out. And it was not really fixable by simply tweaking the formula. It was broken.
The new system took input directly from the Sales organization and, using the two attributes they cared about the most, applied a logical approach. But, lead scoring is not perfect. The only way to have a “perfect” lead score is to ask your leads 50 questions, check the veracity of all of their answers, and build up a very complex system for taking all of those variables into account. In a way, multidimensional lead scoring is a step in that direction…without putting an undue burden on the lead to answer so many questions, and without requiring a PhD and a Cray supercomputer to develop the right formula.
But lead scoring is really just intended to identify the “best” leads, to disqualify the clearly bad leads, and to leave a pretty big gray area where the quality of the lead simply isn’t known. It’s then up to the individual situation to determine where in that gray area to put the qualification threshold. The higher the threshold, the fewer false positives and the more false negatives there will be. The lower the threshold, the fewer false negatives, but the more false positives.
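To make that trade-off concrete, here is a toy illustration. The scores and the “actually good” labels are invented; only the direction of the trade matters:

```python
# Toy illustration of the threshold trade-off. Each pair is a lead's
# score and a (made-up) after-the-fact judgment of whether it was good.
leads = [(1200, True), (950, True), (900, False), (600, True), (400, False)]

for threshold in (500, 900, 1100):
    false_pos = sum(1 for s, good in leads if s >= threshold and not good)
    false_neg = sum(1 for s, good in leads if s < threshold and good)
    print(f"threshold={threshold}: {false_pos} false positives, "
          f"{false_neg} false negatives")
# Raising the threshold trims false positives but blocks more good leads.
```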
“Analysis paralysis” is a cliché, but it’s a well-warranted one. Looking for perfection when you shouldn’t expect it to exist can be crippling.