A sample of how my visitor engagement index drives insights
While I have not had time to write Part V of my series on measuring visitor engagement, I wanted to take a few minutes to address some comments folks have made about the metric recently. It’s very encouraging to see folks like Gary Angel and Daniel Markus pushing the conversation about measuring engagement forward, as I can think of few people more qualified to critique this work.
Gary Angel, who had very nice things to say about the metric, commented that in some areas the metric is biased, specifically towards search engines and specific types of content. Gary is concerned that the Brand Index will unfairly favor search engines (given that one component is searches for brand-specific terms like “eric t. peterson” and “web analytics demystified”). I examined this effect, and it turns out that “branded searches” make up only a small part of the index for my site, but Gary makes an excellent point: unnecessary bias should be removed from the index whenever possible. As such, in my current calculation I have removed this weighting from the Brand Index, redefining the index to count only direct sessions (non-search, non-referred).
Score one for Gary.
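For readers who like to see the arithmetic, here is a minimal sketch of the revised Brand Index, assuming a simple session model where each session carries a referrer field. The names and data are mine for illustration only, not Visual Site’s actual API:

```python
# Hypothetical sketch: the revised Brand Index as the share of a visitor's
# sessions that arrive directly (no search engine, no referring site).
# Field and function names are illustrative, not any vendor's actual API.

def brand_index(sessions):
    """Fraction of sessions with no referrer (direct: non-search, non-referred)."""
    if not sessions:
        return 0.0
    direct = sum(1 for s in sessions if s.get("referrer") is None)
    return direct / len(sessions)

visitor_sessions = [
    {"referrer": None},                        # typed the URL or used a bookmark
    {"referrer": "google.com/search?q=brand"}, # arrived via a branded search
    {"referrer": None},
]
print(f"{brand_index(visitor_sessions):.2f}")  # 0.67: two of three sessions were direct
```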
Gary also commented that:
“… if I’m using my metric to measure the “engagement” produced by visitors who used a specific part of a site (like the blog or the press releases), it’s vitally important that my metric not include a strong built in bias toward one of the areas (like blogging). Some analysts might argue that this represents a flaw in the metric Eric proposes. I don’t think so. Every metric carries with it some biases – and no metric is appropriate to every situation.”
This is a good point, one that a handful of other folks made when they critiqued the metric early on. The problem I have with removing the Blog Index (the ratio of blog-reading sessions to all sessions) is the evidence that my weblog is a prime driver of engagement with my site and my overall web analytics brand: over the last 12 months, weblog subscribers were nearly 400 percent more likely to have returned to the site recently than non-readers, and visitors not subscribed to my blog (e.g., in Bloglines or Google Reader) but still reading blog content were 300 percent more likely to have returned recently.
Score one for Eric.
One thing worth noting: the way I am using Visual Site to measure weblog readership and subscription, this activity does not show up as traditional “page views” unless the reader A) reads the post on my web site or B) clicks through to the web site (at which time the post appears as a session “referrer”). Visual Site is able to track external RSS and XML-based content using a non-page view event (something I call “reads”). Not all web analytics systems afford their operators this flexibility, so I thought it would be worth bringing up. This is part of the reason that the Blog Index needs to be a separate index, not part of the Click Depth Index as some have suggested.
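To make the distinction concrete, here is a hedged sketch of how a Blog Index might be computed when feed “reads” are captured as their own event type. The event model below is purely illustrative and does not reflect Visual Site’s actual implementation:

```python
# Hedged sketch of a Blog Index when RSS "reads" are tracked as their own
# event type rather than as page views. The event model is illustrative only
# and does not reflect Visual Site's actual implementation.

def blog_index(sessions):
    """Fraction of sessions including at least one blog page view or feed read."""
    if not sessions:
        return 0.0
    blog_sessions = sum(
        1 for s in sessions
        if any(e["type"] in ("blog_pageview", "feed_read") for e in s["events"])
    )
    return blog_sessions / len(sessions)

sessions = [
    {"events": [{"type": "feed_read"}]},  # post read in an RSS aggregator
    {"events": [{"type": "pageview"}]},   # ordinary, non-blog session
    {"events": [{"type": "blog_pageview"}, {"type": "pageview"}]},
]
print(f"{blog_index(sessions):.2f}")  # 0.67: two of three sessions touched the blog
```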
But enough about Gary … Daniel Markus posted what I surmise to be a very nice post about my visitor engagement metric at Marketing Facts late last week, in which he called my calculation “the mother of all Web Analytics KPIs.” The post is entirely in Dutch and my Dutch is horrible, so I wrote to Daniel and asked for a rough translation. While there were many good comments about the metric, the post raised two concerns:
- The calculation is complicated and difficult to understand.
- There was some question of the utility of this metric, essentially calling into question the overall “actionability” (not a word) of visitor engagement.
Regarding the complexity of the calculation: as Gary has so eloquently stated any number of times, no indicator or metric is of any use without an understanding of its components, its definition, and its inherent biases. Clearly the onus is on the web analyst to explain the metric and its definition to any audience to which they present engagement data, especially given the complete lack of formality around measuring “engagement” (at least until you started reading my posts on the subject).
The latter concern is valid given the complexity of the calculation, but it misses the point of the metric. There are any number of loose definitions of “engagement” floating around in our community: duration, page views, average page views per session, sessions per visitor, etc. But none of these more easily understood (note: not easily interpreted) metrics, in my mind, captures the essence of an engaged visitor.
Visitor engagement has to be examined across diverse criteria; simple assessments do not work. To wit:
- To say that session duration is a good measure of engagement is fine, unless the visitor never returns to the site.
- To say that a high number of page views is a good measure of engagement is fine, unless the visitor runs up those page views in a very short period of time and could not realistically have read the content.
- To say that recency of visit is a good measure of engagement is fine, unless the visitor has only looked at your home page and left.
- To say that direct visits are a good measure of engagement is fine, unless those direct visits lead to short sessions with few pages viewed and the visitors never return.
I believe that the complexity of the calculation is where visitor engagement derives its value. For practitioners who are lucky enough to have access to a platform that can actually make this calculation and who are willing to take the time to explain to their audience what the metric measures and what its limitations and biases are, the metric can yield insights that would be unlikely to fall out of “traditional” web analytics.
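To give a sense of the general shape of the calculation without re-deriving it here, the sketch below averages several component indices, each normalized to fall between 0 and 1. The component names and example values are placeholders standing in for the precise definitions laid out earlier in the series:

```python
# A sketch of the general shape of the calculation: several component indices,
# each normalized to the range 0-1, averaged into a single engagement score.
# The components and example values below are placeholders, not the precise
# definitions from the series.

def engagement(indices):
    """Mean of the component indices, each expected to fall between 0 and 1."""
    return sum(indices.values()) / len(indices)

visitor = {
    "click_depth": 0.45,  # pages per session relative to a site-specific bar
    "duration":    0.25,  # session length relative to a site-specific bar
    "recency":     0.80,  # how recently the visitor last returned
    "brand":       0.67,  # share of direct (non-search, non-referred) sessions
    "blog":        0.33,  # share of blog-reading sessions
}
print(f"{engagement(visitor):.1%}")  # 50.0%: this visitor counts as "highly engaged"
```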
I will leave you with an example of how I am deriving small insights from my measurement of visitor engagement.
Marshall Sponder is the WebMetricsGuru blogger and, all in all, a pretty nice guy. He and I had a little tiff a while back over Avinash’s web analytics blogger index (something Avinash has stopped doing for some reason …) when I was less than complimentary about the volume of web analytics posts he produced relative to his blogging in general. Examining traffic metrics from Marshall’s blog, I would interpret the value of having a good relationship with him based on a set of commonly understood data:
Almost no volume and no books sold. Come on Marshall, let’s see a nice recommendation for Analytics Demystified already! 😉
But wait, what if I have a closer look at the measured engagement of the visitors he’s been sending to my site:
While my “average” visitor to the site is only 24.2 percent engaged, visitors from Marshall’s posts are nearly 40 percent engaged with my site and, more importantly, of these visitors almost 10 percent are “highly engaged” (50 percent engagement or better).
Marshall may not be selling books yet, but I have the nagging feeling that if he tried even just a little, he could probably drive pretty good numbers given the engagement of the audience he refers.
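For the curious, here is a rough sketch of how a comparison like the one above might be produced, assuming you already have a per-visitor engagement score: segment visitors by referring site, then compare averages and the highly engaged share. The data here is entirely made up:

```python
# Hypothetical sketch of the referrer comparison above: segment visitors by
# referring site, then compare average engagement and the share of "highly
# engaged" (50 percent or better) visitors. All data here is made up.

def segment_report(visitors, referrer):
    """Average engagement and highly-engaged share for one referring site."""
    segment = [v["engagement"] for v in visitors if v["referrer"] == referrer]
    if not segment:
        return 0.0, 0.0
    average = sum(segment) / len(segment)
    highly_engaged = sum(1 for e in segment if e >= 0.50) / len(segment)
    return average, highly_engaged

visitors = [
    {"referrer": "webmetricsguru.com", "engagement": 0.55},
    {"referrer": "webmetricsguru.com", "engagement": 0.25},
    {"referrer": "google.com",         "engagement": 0.10},
]
avg, high = segment_report(visitors, "webmetricsguru.com")
print(f"{avg:.1%} average, {high:.1%} highly engaged")  # 40.0% average, 50.0% highly engaged
```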
Now just imagine that you were running a million- or billion-dollar business, looking for new opportunities on the Internet. You have hundreds, if not thousands, of sites sending you visitor traffic all day, every day. Maybe some of these people make purchases, but maybe you have nothing for them to purchase … how do you decide who to spend more time with and who to ignore?
Me, I’m going to write nice things about Marshall Sponder, and if the folks from e-consultancy call me and want to do another interview, I’m taking that call right away! How’s that for a KPI defining an action?