The Most Important ‘Benchmarks’ Are Your Own
Companies typically expend unnecessary energy comparing themselves to others. What analyst hasn’t been asked: “What’s a typical conversion rate?” Unfortunately, conversion rates vary dramatically—by vertical, by product, by purchase cycle, by site design—and the benchmark data that is available is typically so generic that it is essentially useless.
To explain how benchmarks fail to provide insight, let’s pretend instead of conversion rate we’re talking about a new runner.* Sarah is starting a “Couch to 5K.” In planning her training, Sarah might wonder, “What’s the average running pace?” It’s an understandable question—she wants to understand her progress in context. However, a runner’s pace depends on their level of fitness, physical attributes, terrain, distance, weather and more. In theory, there could be some value in a benchmark pace for a runner just like Sarah: same age, size, fitness level, training schedule, terrain, even climate. Ideally, this data would be trended, so she could understand that in Week 4 of her training plan, most people “just like her” are running an 11:30 min/mile. By Week 8, she should be at 10:30.
However, that’s not how benchmarks work. Benchmark data is typically highly aggregated, spanning everyone from professional runners to complete beginners, each running in such a wide variety of conditions that the average tells Sarah nothing. Why useless? Because a benchmark is only helpful if it’s truly comparable.
But let’s say Sarah had this (highly generic) ‘average pace’ available to her. How would it make any difference to her progress towards a 5K? If her starting pace was slower than the average, she would set goals to slowly increase it. But even if her starting pace was faster than the average, she wouldn’t pat herself on the back and stop training. She would still set goals to slowly increase it. Thus, her actions would be the same regardless of what the data told her.
That’s the issue with benchmarks. Knowing them makes no difference to the actions the business takes, and data is only useful if it drives action. Back to a business context: Let’s say your conversion rate is 3%, and the industry average is 2.5%. No business is going to stop trying to optimize just because they’re above a generic average.
Ultimately, the goal of a business is to drive maximum profit. This means continually optimizing for your KPIs, guided by your historic performance, and taking into account your business model and users. So instead of getting sidetracked by benchmarks, help your stakeholders by focusing on progress against internal measures.
First, set up a discussion to review the historical trends for your KPIs. Come armed with not only the historical data, but also explanations for any spikes or troughs. Be sure to call out:
- Any major traffic acquisition efforts, especially lower-qualified paid efforts, because they might have negatively affected historical conversion rates.
- Any previous site projects aimed at increasing conversion, and their impact.
With this information, you can work with stakeholders to set a tangible goal to track toward, and brainstorm marketing campaigns and optimization opportunities to get you there. For example, you might aim to increase the conversion rate by 0.5% each quarter, or set an end-of-year goal of 2.6%. Your historical review is critical to keep you honest in your goal setting. After all, doubling your conversion rate is a pretty unrealistic goal if you have barely moved the needle in two years!
Be sure to test and measure (and document!) your attempts to optimize this rate, to quantify the impact of changes. (For example, “Removing the phone number from our form increased the conversion rate by 5%.”) This will help you track what actually moves the needle as you progress toward your goal.
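To make the “test and measure” step concrete, here’s a minimal sketch in Python of how you might quantify the lift from a change like the form tweak above. All the traffic and conversion numbers are made up for illustration, and the significance check uses a standard two-proportion z-test (one common choice, not the only one) to gauge whether the observed difference is more than noise.

```python
from math import sqrt, erf

def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversion rate = converting visitors / total visitors."""
    return conversions / visitors

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference between two rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical A/B test: control form vs. form without the phone-number field
control = {"visitors": 10_000, "conversions": 300}    # 3.0%
variant = {"visitors": 10_000, "conversions": 360}    # 3.6%

rate_a = conversion_rate(control["conversions"], control["visitors"])
rate_b = conversion_rate(variant["conversions"], variant["visitors"])
relative_lift = (rate_b - rate_a) / rate_a            # lift relative to control

z, p = two_proportion_z_test(control["conversions"], control["visitors"],
                             variant["conversions"], variant["visitors"])
print(f"Control {rate_a:.2%}, variant {rate_b:.2%}, "
      f"lift {relative_lift:+.1%}, p = {p:.3f}")
```

Reporting the relative lift alongside a p-value (or a confidence interval) is what lets you document, credibly, that a given change moved the needle rather than just wobbled with normal variance.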
* Awesome fitness analogy courtesy of Tim Patten.