Analysis, Social Media

If the Data Looks Too Amazing to Be True…

I’ve hauled out this same anecdote off and on for the past decade:

Back in the early aughts [I’m not Canadian, but I know a few of ’em], I was the business owner of the web analytics tool for a high-tech B2B company. We were running NetGenesis (remember NetGenesis? I still have nightmares), which was a log file analysis tool that generated 100 or so reports each month and published them as static HTML pages. It took a week for all of the reports to process and publish, but, once published, they were available to anyone in the company via a web interface. One of the product marcoms walked past my cubicle one day early in the month, then stopped, backed up, and stuck his head in: “Did you see what happened to traffic to <the most visited page on our site other than the home page> last month?” I indicated I had not. We pulled up the appropriate report, and he pointed to a step function in the traffic that had occurred mid-month — traffic had jumped 3X and stayed there for the remainder of the month.

“I made a couple of changes to the meta data on the page earlier in the month. This really shows how critical SEO is! I shared it with the weekly product marketing meeting [which the VP of Marketing attended most weeks].”

I got a sinking feeling in my stomach, told him I wanted to look into it a little bit, and sent him on his way. I then pulled up the ad hoc analysis tool, did some digging, and quickly discovered that a pretty suspicious-looking user-agent seemed to be driving an enormous amount of traffic. It turned out that Gomez was trying to sell into the company and had just set up their agent to ping that page so they could get some ‘real’ data for an upcoming sales demo. Since it was a logfile-based tool, and since the Gomez user-agent wasn’t one that we were filtering out, that traffic looked like normal, human-based traffic. When the traffic from that user-agent was filtered out, the actual overall visits to the page had not shown any perceptible change. I explained this to the product marcom, and he then had to do some backtracking on his claims of a wild SEO success (which he had continued to make in the course of the few hours since we’d first chatted and I’d cautioned him that I was skeptical of the data). The moral of the story: if the data looks too dramatic to be true, it probably is!

This anecdote is an example of The Myth of the Step Function (planned to be covered in more detail in Chapter 10 of the book I’ll likely never get around to writing) — the unrealistic expectation that analytics can regularly deliver deep and powerful insights that lead to immediate and drastic business impact. And, the corollary to that myth is the irrational acceptance of data that shows such a step function.

Any time I do training or a presentation on measurement and analytics, I touch on this topic. In an agency environment, I want our client managers and strategists to be comfortable with web analytics and social media analytics data. I even want them to be comfortable exploring the data on their own, when it makes sense. But (or, really, it’s more like “BUT”), I implore them: if they see anything that really surprises them, seek out an analyst to review the data before sharing it with the client. More often than not, the “surprise” will be a case of one of two things:

  • A misunderstanding of the data
  • A data integrity issue

All of this is to say, I know this stuff. I have had multiple experiences where someone has jumped to a wholly erroneous conclusion when looking at data that they did not understand or that was simply bad data. I’d even go so far as to say it’s one of my Top Five Pieces of Personal Data Wisdom!

And yet…

When I did a quick and simple data pull from an online listening tool last week, I had only the slightest of pauses before jumping to a conclusion that was patently erroneous.

Maybe it’s good to get burned every so often. And, I’m much happier to be burned by a frivolous data analysis shared with the web analytics community than to be burned by a data analysis for a paying client. It’s tedious to do data checks — it’s right up there with proofreading blog posts! — and it’s human nature to want to race to the top of the roof and start hollering when a truly unexpected result (or a more-dramatic-than-expected affirming result) comes out of an analysis.

For me, though, this was a good reminder that taking a breath, slowing down, and validating the data is an unskippable step.

Analytics Strategy

The Privacy Apogee

The biggest topic that you will grapple with in 2011 is consumer privacy. We are at the most liberal and lenient point of consumer privacy in the history of time, primarily because consumers spew digital data with reckless abandon with each click, like, Tweet, share, and update. Consumers are barely aware of the digital footprints they’re creating, and we don’t know how to handle it. There are no rules here.

Consumers are racing to new digital media at breakneck speeds to be early adopters of the next best thing and are literally addicted to digital. Our obsession is so ravenous that almost half of smartphone users will wake up in the middle of the night to check for digital updates. It’s not their fault, really; in fact, I include myself in this frantic race to get the newest browser, the latest app, or to connect with nearly anyone who asks. Heck, I downloaded the Owner’s Manual to a Hyundai on my iPad within seconds of watching a TV commercial just because I could. I have no idea what data Hyundai now has on me and if or when I’ll start receiving ads or emails containing must-have offers for a car that I probably won’t ever buy (although it looks sweet!). My point is that we’re on the precipice of a substantive change in the way that consumer data is collected and utilized. If we (and by “we” I mean we digital measurers, organizations and institutions) don’t get our acts together in the first quarter of 2011, then we will have regulation forced upon us.

In my opinion, the number one most critical component for even getting off the ground with privacy protection is education. We must educate consumers, organizations, developers and governments to have a meaningful conversation about privacy. If we fall short of that, ignorance about how data is collected, how it’s used, and who uses it will prevail, and data collection will continue to be vilified by consumers and media sources that don’t know What They Know.

To that end, I’m working on a concept that I’m calling the Privacy Apogee.

Those of you who are up to speed on your celestial mechanics will know that an apogee is the farthest point of an orbit from Earth. What I seek to explore is the farthest point of ethical data collection from a consumer. My working diagram above depicts your average consumer at the epicenter of privacy and the way we track his digital activities using technology that extends from innocuous to invasive. My plan is to flesh out this concept with current tracking capabilities and potential consumer benefits. Moreover, I intend to create a blueprint for accountability. Ultimately, the goal is to produce an infographic that conveys several things:

For consumers – The Privacy Apogee will illustrate data tracking capabilities that exist today and highlight some of the benefits of opting in to these tracking practices.

For developers – It will offer guidance on what data to collect and how to communicate data collection, storage, and utilization practices in clear language.

For organizations – The Privacy Apogee will illustrate just how far is too far by showing what’s technically possible and what’s ethically defensible.

In creating this work, I hope to educate and inform the masses by offering a public service that will open some eyes to the critical imperative for self-regulation before we have governmental mandates forced upon us. The Privacy Apogee will illustrate current technological capabilities for tracking consumers’ digital actions and offer both positive and negative repercussions of those actions.

So back to you Captain Blackbeak…I’m listening and this is what I’m doing to create change. It’s a change in perception. A change in education. And a change in direction for our industry. But like you, I cannot do this alone and need the support and mindshare of our industry. With the help of my partner Eric and the industry #measure pros out there, my goal is to crowdsource this idea to ensure that I’ve fully considered the technology capabilities and the benefits of tracking practices. So I need your help. The Web Analyst’s Code of Ethics is one part of this, but I’ll be working to define the pros and cons of data collection and the methods by which we accomplish our task. Stay tuned for more, as this is just the beginning…

But in the meantime, what do you think?

Analytics Strategy, Social Media

Is It Just Me, or Are There a Lot of #measure Tweets These Days?

<Standard “good golly I haven’t been blogging with my planned weekly frequency / been busy / try to get back on track in 2011” disclaimer omitted>

Update: This update almost warrants deleting this entire post…but I’m going to leave it up, anyway. See Michele Hinojosa’s comment below for a link to an Archivist archive of #measure tweets that goes back to May 2010; it doesn’t show anything like the spike the data below shows, and it shows an average monthly tweet volume of roughly 3X what the November spike below shows. Kevin Hillstrom also created a TwapperKeeper archive back in early November 2010, and the count of tweets in that archive to date looks to be in line with what the Archivist archive is showing. So…wholly invalid data and conclusion below!!!

Corry Prohens’s holiday e-greeting email included a list of his “best of” for web analytics for 2010, and he really nailed it. That just further validates what all web analysts know: Corry is, indeed, “Recruiter Man” for our profession. He’s planning to turn the email into a blog post, so I’ll sit back and wait for that. But, I did suggest that the #measure hashtag probably deserved some sort of shout out (I actually dubbed #measure my “web analytics superhero-sans-cape” in my interview as part of Emer Kirrane‘s “silly series”).

That got me to thinking: how much, really, has the #measure community grown since its formal rollout in late July 2009 via an Eric Peterson blog post?

10 minutes in my handy-dandy online listening platform, and I had a nice plot of messages by month:

Yowza! My immediate speculation is that the jump that started in October was directly related to the Washington, D.C. eMetrics conference in the first week of October — the in-person discussions of social media, combined with the continuing adoption of smartphones, combined with the live tweeting that occurred at the conference itself (non-Twitter users at the conference picking up on how Twitter was being effectively used by their peers). That’s certainly a testable hypothesis…but it’s not one I’m going to test right now (add a comment if you’ve got a competing hypothesis or two — maybe I will dive a little deeper if we get some nice competing theories to try out; this will definitely — the horror! — fall in the “interesting but not actionable” category, so, shhhh!!!, don’t point your business users to this post!).

It’s also possible that the data is not totally valid — gotta love the messiness of social media! I’d love to have someone else do a quick “conversation volume” analysis of #measure tweets to see if similar results crop up. Unfortunately, Twitter doesn’t make that sort of historical data available, I shut off my #measure RSS feed archive a few months ago, and, apparently, no one (myself included) ever set up a TwapperKeeper archive for it. So, I can’t immediately think of an alternative source to use to check the data.

Thoughts? Observations? Harsh criticisms? Comment spammers (I know I can always count on you to chime in, you automated, Akismet-busting robots, you!)?


Analytics Strategy

Santa Puts Aprimo Under the Tree!

2011 is shaping up to be the year of big marketing. And luckily for us measurers, smart marketing is founded in data and measurement. With IBM’s recent acquisition rampage and now Teradata’s plans to buy Aprimo, there is unprecedented choice for integrated enterprise marketing solutions. Teradata announced today its intention to buy the Enterprise Marketing Management leader for $525M with a closing date anticipated for sometime in Q1 2011. It’s a smart move in my opinion because the days of big data management and the ability to harness the consumer data firehose for elevated marketing are upon us.

On the executive briefing this morning, I posed a question, asking if this acquisition was a response to IBM’s recent buying spree, and the answer was a definitive no. Bill Godfrey, Aprimo’s Chief Executive Officer, quickly pointed out that Aprimo’s technology set covers 8 categories and that only one competes directly with the IBM/Unica offering. He insisted, “This is not a copy-cat move,” with mild umbrage. Mr. Godfrey went on to eloquently explain that the merger pursues an independent strategy that brings a unified platform covering a very broad end-to-end spectrum of functionality. While the story sounded familiar, it’s a good one. It leverages the database storage and business analytics capabilities of Teradata and layers the marketing management and operations proficiency of Aprimo on top. This enterprise-ready integrated solution fuels a marketer’s paradise where insights are churned from data, which pumps intelligent life into automated marketing. All this happens within a closed-loop system that improves over time. Sounds rosy, doesn’t it? To paraphrase Teradata’s CMO Darryl McDonald, “The combined solution will help accelerate revenue-generating campaigns and leverage data for strategic insights and quick response.”

Keep in mind that this isn’t entirely new territory for Teradata, which has been offering marketing products to its customers for some time. With IWI (Integrated Web Intelligence) and TRM (Teradata Relationship Manager), it’s already servicing digital data integration and intelligent marketing for its customers. Yet, it will be interesting to see how many existing clients and new organizations adopt this complete functionality. My hunch is that this stack is not for the faint of heart nor the bootstrapped organization. It will work best with deeply integrated datasets, stored within big iron and activated using some complex Marketing Resource Management capabilities. All things that both Teradata and Aprimo excel at. But fair warning: Mom & Pop shops need not apply. However, if you’re a large enterprise looking to accelerate your marketing prowess, then this may be the solution you’ve had on your wish list all these years.

While integrating these technologies may take a while, and the promise of an end-to-end solution is no trivial pledge, I’m bullish on the deal. This is a step forward for marketers because it has the potential to deliver the ERP system they never had. It still doesn’t cover everything, but the combined solution sure does handle some critical moving parts.

Congrats to everyone at Aprimo for building an attractive offering and to Teradata for recognizing it. And Happy Holidays to all!

Adobe Analytics, General

Tracking Form Errors (Part 3)

(Estimated Time to Read this Post = 4 Minutes)

In this series of blog posts, I have been talking about how to see what types of Form Errors your website visitors are receiving so you can improve conversion. So far, we have learned how to see how many Form Errors your website is getting, which fields are causing those errors, and how many Form Errors you get per Form and per Visit. As my regular readers know, I like to go beyond the basics, so now we are going to kick it up a notch and get into some real fun stuff. Fasten your seat belts!

Which Fields on Which Forms?
In my first post of this series I shared a simplistic way to learn which form fields caused errors using a List sProp. However, correlating this to specific forms was a bit trickier. Here I will show how to do this, even if you don’t have Discover. The trick here is to set a Form Errors eVar that stores all of the fields which had an error when the Form Errors Success Event (described in the previous post) is set. Since eVars allow longer values than sProps (255 characters vs. 100), this should be possible for most forms that aren’t too long (which they shouldn’t be anyway!). I like to do this by concatenating the field values into one long string with a separator between each field.
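As a sketch of the beacon (the event and variable numbers are assumptions; map them to your own report suite):

    // Fields that failed validation on this submit attempt (hypothetical values).
    var errorFields = ["job title", "e-mail address", "phone #"];
    s.linkTrackVars = "events,eVar10,prop10";
    s.linkTrackEvents = "event5";
    s.events = "event5";               // assumed Form Errors Success Event
    s.eVar10 = errorFields.join("|");  // assumed Form Errors eVar: "job title|e-mail address|phone #"
    s.prop10 = errorFields.join(",");  // the List sProp from the first post, if you set both
    s.tl(true, "o", "Form Error");     // custom link call, so no extra page view

Here is an example of the report you want to have: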

This report will look a bit like the one I described in the previous post, but as you will see, it is much more powerful since it is in the conversion area and can take advantage of Conversion Subrelations. Besides being able to see which combination of field errors are troubling users, you can open your Form ID reports, find a specific form and then break it down by this new Form Error eVar to see the specific form fields causing problems by form as shown here:

Using this report, we can see that for the first form shown above, 66% of the time that visitors get a Form Error, they had eight form field errors (or left them blank). This data, when coupled with observational data from a tool like ClickTale, can be invaluable in driving increased form conversions!

What % of Required Form Fields Have Errors?

While the above report, which shows Form Field Errors by Form, is powerful, one question it doesn’t answer is: how many of the required fields on my forms are not being filled out by users? The answer to this question can help you figure out which fields should/shouldn’t be required. To answer it, what you want to do is look at each form that loads on your website, calculate how many fields the user received an error for, and then divide that number by the total number of required form fields. For example, if you have a form with eight required fields, and the current user received two errors on that form, the calculation would be 2/8 or 25%. You should then pass this 25% value to an eVar when you are setting the Form Errors Success Event. Once you do this for all forms, you will have a report that looks like the one shown here. Using this report we can see that the highest number of Form Errors are cases where users are getting errors on every field (which is most likely people leaving all fields blank). Maybe our users don’t realize that these fields are required, and we can do some testing to create a better experience or reduce the number of required fields?
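As a sketch, the submit-time calculation described above might look like this (the event and eVar numbers are assumptions):

    // 2 of 8 required fields errored, so pass "25%" to an assumed eVar.
    var requiredFieldCount = 8;     // required fields on this form
    var errorFieldCount = 2;        // required fields that errored on this attempt
    var pctOfRequired = Math.round((errorFieldCount / requiredFieldCount) * 100) + "%";
    s.linkTrackVars = "events,eVar11";
    s.linkTrackEvents = "event5";
    s.events = "event5";            // set alongside the Form Errors Success Event
    s.eVar11 = pctOfRequired;       // assumed "% of Required Fields with Errors" eVar
    s.tl(true, "o", "Form Error");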

If we want to see which forms are the ones that have the highest 100% Form Field Error Rate, all we need to do is break the above report down by Form ID:

Finally, if you are doing a good job of grouping your website forms using SAINT Classifications, you can see some super-cool reports. In the following report, I have grouped all of my website forms into high-level buckets of Demo and Free Trial. Then I broke this report down by the percentage of required fields that result in Form Errors.

You can see here that most website visitors on Demo forms are getting errors for 100% of the fields (probably leaving them blank!), while for the Free Trial, the largest percentage of required fields with errors is 10%. Interesting data indeed!
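For reference, the SAINT classification file behind a grouping like this might look something like the following (tab-delimited, with hypothetical Form IDs):

    Key              Form Group
    demo-form-01     Demo
    demo-form-02     Demo
    trial-form-01    Free Trial
    trial-form-02    Free Trial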

Final Thoughts
In this post, we have covered some advanced ways to see which fields produce errors, how to break those errors down by form, and how to identify which forms have the highest required-field error rates. These reports can provide an enormous amount of insight into what is happening on your forms with respect to errors, and once you understand your visitors’ form behavior, you can apply these learnings to all forms on your site. In my next post, I will cover a tangentially related item (related to Forms, but not as much about Form Errors) that I think is super-cool.

Between this post and the last post, hopefully you have some food for thought when it comes to tracking how your website forms are doing so you improve your conversion rates…

General

Commerce Department and WAA Code of Ethics

Thanks to Tim Evans I was alerted to a report about the Commerce Department weighing in on privacy issues online.  Suffice to say I agree with the direction Commerce is giving the Obama administration.  Specifically the idea that, according to CNN’s Money, “the government ‘enlist the expertise and knowledge of the private sector’ to create ‘voluntary codes of conduct that promote informed consent and safeguard personal information.'”

More or less exactly what John Lovett and I proposed back in September of this year.

I have started reaching out to the media on this point — that we in the digital measurement community are already taking matters into our own hands and stepping up — but we could use your help! Please, if you know anyone in the press, send a link to this blog post along to them and help spread the word that as a community we can take responsibility for our own actions and we are willing to do what is right for consumers around the globe.

This issue affects all of us in the digital measurement sector — vendors, consultants, and practitioners alike. Please help us create awareness about our efforts.

Here is the link to the near-final draft of the Web Analysts Code of Ethics:

  • Last chance to shape the Web Analysts Code of Ethics (WAA Blog)

The Standards sub-committee for the Code of Ethics met yesterday and as I publish this blog post John Lovett is presenting the final version to the Web Analytics Association Board of Directors.  We expect the Code to be available to sign at the WAA web site in the coming weeks.

Adobe Analytics

Phew…!

Phew. It’s been a crazy few weeks for me lately. At the moment, we just put up the tree, the kids are all quiet, and I’m drinking a glass of red. It’s one of the rare moments of solace I get these days…and it’s gone…the littlest one is squirmy with hiccups.

Okay, I’m back. Made a bottle and made the handoff to Mommy. I haven’t blogged in a long while and there’s so much to say, but I just haven’t had time. So here’s the johnlovett highlight reel for Fall 2010:

  • We welcomed a new baby into our home. And that makes three. Three boys, that is. I always thought the jump from one kid to two was really no problem. But, I can tell you that increasing the number of kids another 33% 50% is a big jump indeed. [The 33% designates the percentage of quantitative reasoning skills I’ve lost in the past month.] Our house is busier than ever with an 18-month-old climbing the walls and an eldest brother at five running the show. Everybody is happy and healthy, so I’m immensely grateful for the lack of sleep and craziness.
  • I’m writing a book for Wiley on Social Media Metrics. And it’s one of the hardest things I’ve ever done. I’ve got the story in my head and know what I want to write, yet cranking out 40-page chapters every other week is really tough. I’m nearly halfway through my manuscript and I love the way it’s coming together. Although, if you’ve got a social analytics story of smashing success, miserable failure or sheer brilliance, I’d love to talk with you. I could always use more.
  • My business is off-the-charts busy. Looking back on twelve months since joining Demystified, I couldn’t be happier. It’s been a great year and the work I’m doing is motivating me to maintain work-a-holic proportions. Since Labor Day I spent 8 weeks on the road visiting clients, working on changing our industry and speaking at events from coast to coast, with a business trip to Italy as a big November finale. I made it home with four days to spare before the baby was born. Whew.
  • And I’m happier than I’ve ever been. Who knew that chaos could be so rewarding? I always knew this was the case, but I love my job and I truly love the #measure industry. As measurers of digital media, our roles are about to become indispensable. We’re on the precipice of a big data explosion and we’ll have the skills to float to the top. Big data is going to rush like a flood over enterprises and marketers alike and we measurers will be ready to slice and dice our way to sensibility. I like our chances.

More to follow on all these topics as I’m working three concurrent projects, writing two white papers and working through book chapters at present… Oh, and it’s my turn to change diapers, so I’m out.

Talk to y’all soon.
John

Adobe Analytics

Tracking Form Errors (Part 2)

In my last post, I started the process of identifying which form fields were producing the most errors. In this post, I will cover some related topics that will allow you to quantify how often you are getting Form Errors and how effective, in general, your forms are at converting website visitors.

How Many Form Errors Are You Producing?
While the solution I identified in my last post showed which form fields had more errors than others, in the web analytics space, we like hard, concrete numbers! Therefore, I would recommend that you set a Success Event each time website visitors encounter at least one form error (assuming you do validation when the Form Submit button is clicked). By setting a Success Event, you will have a nice chart that shows you the overall trend of Form Errors as shown here:

If you are passing a Name or ID for each form you have on your website, you can also use this Success Event to see which forms are getting the most number of errors like this:

In addition, you can set an Alert for the overall Form Error metric or for a specific Form Name/ID:
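Stepping back to implementation, here is a minimal sketch of the submit-time tracking that powers all of the above (the event number, eVar number, and validation function are assumptions, not the exact implementation):

    // Fire an assumed Form Errors Success Event when submit-time validation fails.
    function trackFormErrors(form) {
      var errors = validateForm(form);    // your existing validation routine
      if (errors.length > 0) {
        s.linkTrackVars = "events,eVar20";
        s.linkTrackEvents = "event5";
        s.events = "event5";              // assumed Form Errors Success Event
        s.eVar20 = form.id || form.name;  // assumed Form Name/ID variable
        s.tl(true, "o", "Form Error");    // custom link call, so no extra page view
      }
      return errors.length === 0;         // allow submission only when clean
    }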

 

How Is Each Form Doing?
While knowing how many Errors a form gets is cool, as is often the case, we in the web analytics field care more about ratios! In the report above, it is alarming to see that the first form had 85 Form Errors, but how do we know whether that is good or bad? If we create a Calculated Metric to compare Form Errors to Form Views, we can see how many Form Errors visitors had in relation to each time the same Form was viewed. Based upon the data below, we can see a wide range of Form Error percentages depending upon the form:
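The Calculated Metric itself is a simple ratio; as a sketch, assuming the events sit in these slots:

    Form Error Rate = Form Errors (event5) / Form Views (event4)

Formatting the metric as a percent in its definition makes the report read directly as an error rate per Form View.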


Some of these percentages are quite high and represent amazing opportunities to do testing to see if they can be improved! In addition, when you create a calculated metric, besides just seeing it in an eVar report like the one above, you can also see it as a standalone metric. This means that you can see the overall trend of Form Errors per Form View (or Visit) to see if we are getting better or worse over time. This might make a great KPI for the team focused on Forms and Form Completions:

Final Thoughts
In my last post I covered a simple way to see which fields are causing problems for your visitors. In this post, I showed you how to quantify your Form Errors, see how much of an issue you may have and even see which Forms have the most Errors. In my next post I will show you some advanced ways to see which fields are causing errors and how to break this down by Form. Stay tuned!

Between this post and the last post, hopefully you have some food for thought when it comes to tracking how your website forms are doing so you improve your conversion rates…

Analysis, Reporting

Reporting: You Can't Analyze or Optimize without It

Three separate observations from three separate co-workers over the past two weeks all resonated with me when it comes to the fundamentals of effective analytics:

  • As we discussed an internal “Analytics 101” class — the bulk of the class focuses on the ins and outs of establishing clear objectives and valid KPIs — a senior executive observed: “The class may be mislabeled. The subject is really more about effective client service delivery — the students may see this as ‘something analysts do,’ when it’s really a key component to doing great work by making sure we are 100% aligned with our clients as to what it is we’re trying to achieve.”
  • A note added by another co-worker to the latest update to the material for that very course said: “If you don’t set targets for success up front, someone else will set them for you after the fact.”
  • Finally, a third co-worker, while working on a client project and grappling with extremely fuzzy objectives, observed: “If you’ve got really loose objectives, you actually have subjectives, and those are damn tough to measure.”

It struck me that these comments were three sides to the same coin, and it got me to thinking about how often I find myself talking about performance measurement as a fundamental building block for conducting meaningful analysis.

“Reporting” is starting to be a dirty word in our industry, which is unfortunate. Reporting in and of itself is extremely valuable, and even necessary, if it is done right.

Before singing the praises of reporting, let’s review some common reporting approaches that give the practice a bad name:

  • Being a “report monkey” (or “reporting squirrel” if you’re an Avinash devotee) — just taking data requests willy-nilly, pulling the numbers, and returning them to the requestor
  • Providing “all the data” — exercises of listing out every possible permutation/slicing of a data set, and then providing a many-worksheeted spreadsheet to end users so that they can “get any data they want”
  • Believing that, if a report costs nothing to generate, then there is no harm in sending it — automation is a double-edged sword, because it can make it very easy to just set up a bad report and have it hit users’ inboxes again and again without adding value (while destroying the analyst’s credibility as a value-adding member of the organization)

None of these, though, are reasons to simply toss reporting aside altogether. My claim?

If you don’t have a useful performance measurement report, you have stacked the deck against yourself when it comes to delivering useful analyses.

Let’s walk through a logic model:

  1. Optimization and analysis are ways to test, learn, and drive better results in the future than you drove in the past
  2. In order to compare the past to the future (an A/B test is a “past vs. future” comparison because the incumbent represents the “past” and both the incumbent and the challenger represent “potential futures”), you have to be able to quantify “better results”
  3. Quantifying “better results” means establishing clear and meaningful measures for those results
  4. In order for measures to be meaningful, they have to be linked to meaningful objectives
  5. If you have meaningful objectives and meaningful measures, then you have established a framework for meaningfully monitoring performance over time
  6. In order for the organization to align and stay aligned, it’s incredibly helpful to actually report performance over time using that framework, quod erat demonstrandum (or, Q.E.D., if you want to use the common abbreviation — how in the hell the actual Latin words, including the correct spelling, were not only something I picked up in high school geometry in Sour Lake, TX, but that has actually stuck with me for over two decades is just one of those mysteries of the brain…)

So, let’s not just bash reporting out of hand, okay? Entirely too many marketing organizations, initiatives, and campaigns lack truly crystallized objectives. Without clear objectives, there really can’t be effective measurement. Without effective measurement, there cannot be meaningful analysis. Effective measurement, at its best, is a succinct, well-structured, well-visualized report.


Adobe Analytics, General

Tracking Form Errors (Part 1)

Almost all websites have forms. Whether you are a B2B/Lead generation site, an eCommerce site, a travel site, etc… you most likely have forms. More importantly, you have people who don’t fill out your forms correctly and get some sort of error message. While error messages are a fact of life, in the web analytics/optimization world these are painful since you work so hard to get people to your site, to read your content and then agree to give you personal information. That is a lot of time and money spent only to have someone potentially abandon because they have problems with your forms. This represents your “low hanging fruit” so to speak – people who have already decided they like you and want to give you their information! In this series of posts, I am going to share some techniques for seeing how much of a problem your website has with form errors and in the next few posts I will cover some more advanced things you can do to diagnose these form error issues.


Which Fields Produce the Most Errors?
The first step in diagnosing form error issues is understanding which form fields are causing issues. Unfortunately, since a user might receive more than one error message, you have to pass multiple values into a SiteCatalyst variable. This can be done using the Products variable, but since that is often already being used for more important purposes, I suggest that you use a List Traffic Variable (sProp) to capture these values. Unfortunately, List sProps are not well documented and have some specific limitations (see Knowledge Base ID# 2305). All you need to know is that List sProps allow you to pass in delimited values, and when you view them in the sProp report, these values will be split out. Let’s look at an example. Here we see a form that a user has attempted to submit without filling out some required fields. What we want to do is capture which fields this user messed up (which could mean an incorrect value or a blank field) so we can see which ones are messed up most often. In this case, we see that the form errors are related to Job Title, E-mail Address, Phone #, Company Name and the MSA checkbox.

So in this case we can use a List sProp to capture the fields giving us errors. Here is how it would look in the JavaScript Debugger:
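In rough terms, the beacon would look like this (the prop number is an assumption, and the delimiter must match whatever is configured for the List sProp in the Admin Console):

    // A sketch, not the exact implementation -- s.prop10 is an assumed List sProp slot.
    // Each comma-delimited value becomes its own line item in the sProp report.
    s.prop10 = "us:job title,us:e-mail address,us:phone #,us:company name,us:msa checkbox";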

Unfortunately, List sProps are still constrained to the 100 character limit, so if you have long forms, you are out of luck (or you can capture only the most important form fields). Once you have captured the fields, you can open the sProp report and you will see something that looks like this:

In this case, we can see that we are getting the greatest number of errors on the Phone Number form field on the US website (I have added the site since forms exist on multiple sites). I could also filter this sProp report for just US or Japan form fields by using a text search of “us:” or “jp:” as needed. This report should help steer you in the right direction when it comes to fixing basic form field issues.

Correlating Form Field Errors to Forms
Once you have seen which form field errors, the next logical question is to see which forms had which errors. Unfortunately, one of the limitations of List sProps is that they cannot be used in Traffic Data Correlations. Therefore, if you want to breakdown form field errors by Form, you will need to use the Discover product as shown here:

If you don’t have access to Discover and seeing this type of breakdown is important to you, you may want to consider using the Products variable instead of a List sProp since the Products variable comes with full Subrelations by default (though this implementation will be significantly more difficult). I will also be covering a different way to approach this in my next post so stay tuned!

Final Thoughts
If you are not currently tracking form field errors, hopefully this will give you some ideas on how you can start the process of seeing where you are tripping up your visitors. Keep in mind that this post is just a start and that the next few posts will go into more advanced stuff you can do and how you can identify your biggest opportunities for improving conversion.

Analytics Strategy, Conferences/Community, General

FTC "Do Not Track?" Bring it on …

As the hubbub around consumer privacy continues, I was gently prodded by a friend to pipe up in the conversation.  While my feelings about how we have ended up in this position are pretty clear, and while my partner John and I have proposed what we believe is a step in the right direction regarding online privacy and the digital measurement community, it seems that some type of ban or limitation on online tracking is becoming inevitable.

Without getting political or debating the reality of what we can and cannot know about online visitors I have a single word response to the FTC:

Whatever.

Before you accuse me of changing my stripes or going completely nuts consider this: If the FTC is able to somehow pull off the creation of a universal opt-out mechanism, and if the browser developers support this mechanism despite clear and compelling reasons not to, and if consumers actually widely adopt the mechanism — all pretty big “ifs” in my humble opinion — then I believe the digital measurement industry will do what I have already described as inevitable:

We will hold a revolution!

Since my tenure at JupiterResearch back in 2005 I have been telling anyone who would listen to stop worrying about counting every visitor, visit, and page view and instead start thinking about statistically relevant samples, confidence intervals, and the algorithmic use of data to conduct analysis.  Yes, you need to work to ensure data quality — of course you do — but you don’t have to do it at the expense of your sanity, your reputation, or your job …

See, it turns out in our community it doesn’t really matter whether we are able to measure 100% of the population, 90% of the population, or even 80% of the population — what matters is that we are able to analyze our visitor populations and that we are able to draw reasonable conclusions from that analysis.  Oh, we have to be empowered to conduct analysis as well, but that’s a whole other problem …
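To make that concrete, here is a back-of-the-envelope sketch (with hypothetical numbers) of how tight an estimate you can get from an incomplete sample:

    // 95% confidence interval for a conversion rate, normal approximation.
    function conversionCI(conversions, visits, z) {
      var p = conversions / visits;
      var se = Math.sqrt(p * (1 - p) / visits);
      return { low: p - z * se, high: p + z * se };
    }

    // Suppose opt-outs leave you measuring 80,000 of 100,000 visits,
    // and you observe 2,300 conversions in the measured portion:
    var ci = conversionCI(2300, 80000, 1.96);
    // ci is roughly { low: 0.0276, high: 0.0299 } -- about 2.9%, give or take
    // 0.12 points, which is plenty tight enough to draw reasonable conclusions.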

Statistical analysis of the data … trust me, it’s going to be all the rage in a few years. I’m not saying this simply because I have a white paper describing the third generation of digital measurement tools that will empower this type of analysis … although I would encourage you to download and read “The Coming Revolution in Web Analytics” (freely available thanks to the generous folks at SAS!)

I’m saying this because every day I see the writing on the wall.  Data volumes are increasing, data sources are increasing, and demands for insights are increasing, all while professional journalists, politicians, and political appointees are supposedly protecting our “God-given right to surf the Internet in peace” without any regard to the businesses, employees, and investors who depend to a greater or lesser degree on web-collected data to provide a service, pay their bills, and make a profit …

Okay, sorry, that was editorializing.  My bad.

Still, rather than wring our hands and gripe about how much the credit card companies know (which is a silly argument given that credit card companies provide tangible value in exchange for the data they collect … it’s called “money”) I believe it is time to do three things:

  1. Suck it up.
  2. Hold yourself to a higher standard.
  3. Buy “Statistics in Plain English” and start reading.

The good news is that we have access to lots and lots of great statistical analysis of sampled data today — we just might not realize it.

Have I mentioned Excel, Tableau, and R?  Hopefully by now you get the gist … statistics is already all around us all the time, perhaps just not exactly where we expect it or, in the context of lower rates of data collection, where we will ultimately need it to be.

Perhaps the most encouraging evidence that we will be able to make this shift is the increasing attention the digital world is getting from traditional business intelligence market leaders like Teradata, FICO, IBM, and SAS.  I, for one, am more or less convinced that the gap between “web analytics” and “Analytics” is about to be closed even further … and here’s one guy that seems to agree with me.

We don’t need to thumb our noses at the privacy people — quite the opposite, and to this end John and I will be sitting down with a representative from the Center for Democracy & Technology and Adobe’s Chief Privacy Officer MeMe Rasmussen at the next eMetrics in San Francisco! We also don’t need to stick our heads back in the sand and hope this issue will simply go away — it won’t, trust me.

We need to prepare.

Prepare by committing yourself to not being that scary data miner that consumers are supposedly so afraid of; prepare by improving your data quality to the extent that you are able; and prepare by starting to communicate to leadership that it really doesn’t matter if you can count every visitor, every visit, and every page view — what matters is your ability to analyze data using the tools at your disposal to deliver value back to the business.

If you’re not sure how to do that, call us.

Viva la revolution!

DISCLOSURE: I mentioned and linked to lots of vendors in this post which I normally do not do. Some are clients of Analytics Demystified, others are not. If you have concerns about why we linked to one company and not another please don’t hesitate to email me directly.

Adobe Analytics, General

A/B Test Bounce Rates

(Estimated Time to Read this Post = 4 Minutes)

In the past, I have written about Bounce Rates, Traffic Source Bounce Rates, Segment Bounce Rates and Site Wide Bounce Rates. In the latter, I even promised I was finished writing about Bounce Rates, but, alas, I have yet another Bounce Rate installment. I was recently in a conversation with a peer and she asked me how they could see the bounce rates of the various landing page A/B tests they were running via Test&Target. I told her that this was easy to do if you follow my instructions in the Segment Bounce Rate post, but she asked if I could write a brief post with more specifics, so here it is…

Why A/B Bounce Rates?
Before getting into the solution, let’s revisit why this is of interest. Test&Target (and other tools like GWO) are wonderful when it comes to optimizing landing pages. They allow you to alter content/creative elements and see what works and what doesn’t. I have seen many cases where clients have used tools like Test&Target to change content based upon what brought the user to the website (e.g., Search Keyword) or demographic information (e.g., Location). Regardless of the reason you want to test, if it is a landing page, one of the questions you often get asked is related to Bounce Rate. Understanding how many people saw “Version A” of a test and bounced vs. those who saw “Version B” and bounced usually comes up for discussion. To answer this question using Omniture/Adobe tools you have the following options:

  • Create a unique page name for each test variation and use the regular Pages report and Bounce rate metric. However, this can get very messy, so unless your website is small, I don’t recommend this approach.
  • Use ASI or Discover to build a segment for people coming from “Version A” or “Version B” and then compare the bounce rates. This is a viable option if you have access to these tools and are well versed in Segmentation.
  • Attempt to track Bounce Rates from within Test&Target. This does not come out-of-the-box, but if you have mboxes on all of the pages the landing page links to, I have heard of some people setting conversion events on the landing page and the subsequent pages, but I don’t think this is for novices (if you are interested, I’m sure @brianthawkins could figure out a way to hack this together!)
  • Do what I suggest below!

Implementing A/B Bounce Rates
Luckily, implementing this in SiteCatalyst is relatively simple. All you need to do is the following:

  1. Enable a new Traffic Variable (sProp)
  2. In this new sProp, concatenate the Test&Target ID and the Page Name on each page of your website
  3. Enable Pathing on the new sProp
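Step 2 might look like this on the page (the sProp number is an assumption, and the test ID would come from your Test&Target mbox):

    // A sketch: join the test ID to the page name in an assumed sProp.
    var testId = "18964:1:0";               // example Test&Target ID (as in the report below)
    s.prop15 = testId + "|" + s.pageName;   // e.g. "18964:1:0|us:home page"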

That’s it! By concatenating the Test&Target ID and the Page Name, you create a unique join between the two and can find the combination of the Test ID you care about and the page name that you expect them to have landed on. Once you find this combination in the report, you can add your Bounce Rate Calculated Metric (Single Access/Entries – which hopefully you already have as a Global Calculated Metric) and you are done. Here is an example of a report:

In this report, you have all of the ID’s associated with the US Home Page, how many Entries each received and the associated Bounce Rate. If you wanted, you could perform a search for the specific Test&Target Test ID you care about and then your report would be limited to just those ID’s. In the example above, we have multiple tests taking place on the US Home Page. However, in the following example we can see a case where there is just one test taking place on the UK Home Page and the associated Bounce Rate of each:

Other Cool Stuff
But wait…there’s more! Since you have enabled Pathing on this new A/B Test sProp, there are some other cool things you can do. First, you can look at a trended view of the report above to see how the Bounce Rate fluctuates during the course of the test. To do this, simply switch to the trended view and choose your time frame:

Another benefit of having Pathing enabled on this sProp is that you can see how visitors from various tests navigated your site using all of the out-of-the-box Pathing reports. Here is an example of a next page flow for one of the tests:

You can run the preceding report for each test variation and compare the path flows to see if one version pushes people more often to the places you want them to go. Another report you could run is a Fall-Out report which can show you how often people from a specific test made it through your desired checkpoints:

In this example, instead of seeing how the general population falls-out from the Home Page to a Product Page and then to a Form Page, we can limit the funnel to only those people who were part of Test ID “18964:1:0.” I like to run this report and the corresponding one for the other test version(s) and add them all to a SiteCatalyst Dashboard where I can see the fall-out rates side by side.

Final Thoughts
As you can see, by doing a little up-front work, you can add an enormous amount of insight into how your A/B tests are performing on your site including Bounce Rates, Next Page Flows, Fall-Out, etc…Enjoy!

Analysis, Social Media

Twitter Analytics — Turmoil Abounds, and I'm a Skeptic

Last week was a little crazy on the Twitter front, with two related — but very different —  analytics-oriented announcements hitting the ‘net within 24 hours of each other. Let’s take a look.

Selling Tweet Access

On Wednesday, Twitter announced they would be selling access to varying volumes of tweets, with 50% of all tweets being available for the low, low price </sarcasm> of $360,000/year. It appears there will be a variety of options, with “50%” being the maximum tweet volume, but with other options in the offing to get 5% of all tweets, 10% of all tweets, or all tweets/references/retweets that are tied to a specific user. All of these sound like they’re going to come with some pretty tight usage constraints, including that they can’t be resold and that the actual tweet content can’t be published.

Twitter has made an API available almost from the moment the service was created. That’s one of the reasons the service grew so explosively — developers were able to quickly build a range of interfaces to the tool that were better than what Twitter’s development team was able to create. But, the API came with limitations — a very tight limit on how often an application could get updates, and a tight limit on just how many updates could be pushed/pulled at once.

As various Twitter analytics-type services began to crop up, Twitter opened up a “garden hose” option — developers could contact Twitter, show that they had a legitimate service with a legitimate need, and they could get access to more tweets more often through the API. Services like Twitalyzer, TweetReach, and Klout jumped all over that option and have built out robust and useful solutions over the course of the last 6-12 months. Now it looks like Twitter is looking to coil up the garden hose, which could spell a permanent end to the growing season for these services. This will be a shame if it comes to pass.

For a steep price, these paid options from Twitter will have limited use: limited to some basic monitoring/listening and some basic performance measurement. Even with the $360K/year option, providing half of the tweets seems problematic when you consider Twitter from a social graph perspective — in theory, half of the network ripple from any given tweet will be lost, or, more confusingly, will crop up as a 2nd or 3rd degree effect with no ability to trace it back to its source because the path-to-the-source passes through the “unavailable 50%!”

This data also won’t be of much use as a listen-and-respond tool. Imagine a brand that has a fantastic ability to monitor Twitter and appropriately engage and respond…but appears schizophrenic because they’re operating with one eye closed (and paying a pretty penny to do even that!). To be clear, for any given brand or user, only a tiny fraction of all tweets are actually of interest, but that tiny fraction is going to be spread across 100% of the Twitterverse, so only having access to a 5%, 10%, or even 50% sample means that relevant tweets will be missed.

Online listening platforms — Radian6, SM2, Buzzmetrics, Crimson Hexagon, Sysomos, etc. — may actually have deep enough pockets to pay for these tweets to improve their own underlying data…but they will have to significantly alter the services they provide in order to comply with the usage guidelines for the data.

Ugh.

Twitter Analytics

On Thursday, Mashable reported that Twitter Analytics was being tested by selected users. Unfortunately, I’m not one of those users (<sniff><sob>), so I’m limited to descriptions in the Mashable article. Between that article and Pete Cashmore’s (Mashable CEO) editorial on cnn.com, I’ve got pretty low expectations for Twitter Analytics.

Both pieces seem somewhat naive in that they overplay the value to brands that Facebook has delivered with Facebook Insights, and they confuse “pretty graphs” with “valuable data.” All I can think to do is rattle off a series of reactions from the limited information I’ve been able to dig up:

  • Replies/references over time: um…thanks, but that’s always been something that’s pretty easy to get at, so no real value there.
  • Follows/unfollows: this seems to be taking a page directly from Facebook Insights with its new fans/removed fans reporting (which, by the way, never agrees with the “Total Fans” data available in the same report, but I digress…); this has marginal value — in practice, unless a user is really pissing off followers or baiting them to follow with a very specific promotional giveaway (“Follow us and retweet this and you’ll be entered to win a BRAND NEW CAR!!!”), there’s probably not going to be a big spike in unfollows, and it isn’t that hard to trend “total followers” over time, so I can’t get too excited about this, either
  • Unfollows (cont’d.): “tweets that cause people to unfollow” is another apparent feature of Twitter analytics. Really? Was that something that someone living on planet Earth came up with? This sounds nifty initially, but, in practice, isn’t going to be of much use. If a user posts offensive, highly political (for a non-political figure user), or obnoxiously self-promoting tweets…he’s going to lose followers. I don’t think “analytics” will really be needed to figure out the root cause (if it was a single tweet) driving a precipitous follower drop. Common sense should suffice for that.
  • Retweets: this is like references, in that it’s not really that hard to track, and I wouldn’t be surprised at all if Twitter Analytics only counts retweets that use the official Twitter retweet functionality, rather than using a looser definition that includes “RT @<username>” occurrences (which are retweets that are often more valuable, because they can include additional commentary/endorsement by the retweeters)
  • Impressions: I’m expecting a simplistic definition of impressions that is based just on the number of followers, which is misleading, because most users of Twitter see only a fraction of the tweets that cross their stream. Twitalyzer calculates an “effective reach” and Klout calculates a “true reach” — both make an attempt to factor in how receptive followers are to messages from the user. None of these measures is going to be perfect, but I’m happier relying on companies whose sole focus is analytics to tinker with a formula than I am with the “owner” of the data coming up with a formula that they think makes sense.

With the screen caps I’ve seen, there is no apparent “export data” button, and that’s a back-breaker. Just as Facebook Insights is woefully devoid of data export capabilities (the “old interface” enables data export…but not of some of the most useful data, and API access to the Facebook Insights data doesn’t exist, as best as I’ve been able to determine), Twitter looks like they may be yet another technology vendor who doesn’t understand that “their” dashboard is destined to be inadequate. I’m always going to want to combine Twitter data with data that Twitter doesn’t have when it comes to evaluating Twitter performance. For instance, I’m going to want to include referrals from Twitter to my web site, as well as short URL click data in my reporting and analysis.

Ikong Fu speculated during an exchange (on Twitter) that Twitter may also, at some point, include their internal calculations of a user’s influence in Twitter Analytics:

I didn’t realize that Twitter was calculating an internal reputation score. It makes sense, though, that that would be included when they make recommendations of who else a user might want to follow. I found a post from Twitter’s blog back in July that announced the rollout of “follow suggestions,” and that post indicated these were based on “algorithms…built by our user relevance team.” The only detail the post provided was that these suggestions were “based on several factors, including people you follow and the people they follow.” That sounds more like a social graph analysis (“If you’re following 10 people who are all following the same person who you are not following, then we’re going to recommend that you follow that person”) than an analysis of each user’s overall influence/quality. Again…I’m more comfortable with third-party companies who are fully focused on this measurement and who make their algorithms transparent providing me with that information than I am with Twitter in that role.

So, Where Does This Leave Us?

Maybe, for once, I’m just seeing a partially filled glass of data as being half empty rather than half full (okay, so that’s the way I view most things — I’m pessimistic by nature). In the absence of more information, though, I’m forced to think that, just as I was headed towards analytics amour when it came to Twitter data, Twitter is making some unfortunate moves and rapidly smudging the luster right off of that budding relationship.

Or, maybe, I’m unfairly pre-judging. Time will tell.

Social Media

WAW Recap: Marketing to Hispanics Using Social Media

We jumped a little afield of web analytics at this month’s Columbus Web Analytics Wednesday: Why Marketing to Hispanics Using Social Media Works. The event was hosted and sponsored by Social Media Spanish, and it was chock full of good information. Natasha Pongonis and Eric Diaz presented a host of statistics about the growth of the Hispanic population in the U.S., the many ways that Hispanics are heavier users of social media than the population as a whole, and how smart brands are targeting Hispanics using social media. They posted the full presentation on the Social Media Spanish blog, and it’s definitely worth checking out.

It’s a complex topic, which Natasha Pongonis highlighted early on with this chart (yes, sadly, it’s a pie chart) showing a breakdown of Hispanics in the U.S. (click to view a larger version):

One of the takeaways here was that there are a significant number of American Hispanics who prefer to communicate in English…and a significant number who prefer to communicate in Spanish! Some of the discussion later in the presentation centered on this challenge — simply “targeting Hispanics” is too broad of a classification, as even something as basic as which language to use in that targeting varies!

It’s a challenge: with social media platforms evolving rapidly in conjunction with evolving consumer expectations, marketers are faced with pros and cons of just about any strategy using these tools.

Not only was the content great, but Social Media Spanish secured a great venue, great food, and even a real photographer! I brought my camera, but Alison Horn really took some great shots, which she has posted as a set on Flickr. Check ’em out!

Adobe Analytics, Analytics Strategy, General

Tracking Lead Gen Forms by Page Name

Every once in a while, as a web analyst, I get frustrated by stuff and feel like there has to be a better way to do what I am trying to do. Many times you are able to find a better way; oftentimes you are not. In this case, I had a particular challenge and did find a cool way to solve it. You may not have the same problem, but, if for no other reason than to get it off my chest, I am writing this as a way to exhale and bask in my happiness of solving a web analytics problem…

My Recent Problem
So what was the recent problem I was facing that got me all bent out of shape? It had to do with Lead Generation forms, which are a staple of B2B websites like mine. Let me explain. Many websites out there, especially B2B websites, have Lead Generation as their primary objective. In past blog posts, I have discussed how you can track Form Views, Form Completes and Form Completion Rates. However, over time, your website may end up with lots of forms (we have hundreds at Salesforce.com!). In a perfect world, each website form would have a unique identifier so you can see completion rates independently. That isn’t asking too much, is it? However, as I have learned, we rarely live in a perfect world!

Through some work I did in SiteCatalyst, I found that our [supposedly unique] form identifier codes were being copied to multiple pages on multiple websites. While this causes no problems from a functionality standpoint – visitors can still complete forms – what I found was that the same Form ID used in the US was also being used in the UK, India, China, etc… Therefore, when I ran our Form reports and looked at Form Views, Form Completes and Form Completion Rate by Form ID, I had no idea that I was looking at data for multiple countries. For example, if you look at this report, nothing seems out of the ordinary, right?

However, look what happened when I broke this report (last row of above report) down by a Page Name eVar:

At first, I thought I was going crazy! How can this unique Form ID be passed into SiteCatalyst on eleven different form pages on nine country sites? This caused me to dig deeper, so I did a DataWarehouse report of Form ID’s by Page Name and found that an astounding number of Form Pages on our global websites shared ID’s. Suddenly, I panicked and realized that whenever I had been reporting on how Forms were performing, I was really reporting on how they were performing across several pages on multiple websites. In the example above, I realized that the 34.669% Form Completion Rate I was reporting for the US version of the form in question was really including data from forms with the same ID residing on websites in Germany, China, Mexico, etc… While the majority was coming from the form I was expecting, 22% was coming from other pages! Not good!

The Solution
So there I was. Stuck in web analytics hell, reporting something different than I thought I was. What do you do? The logical solution would be to do an audit and make sure each Form page on the website had a truly unique ID. However, that is easier said than done when your web development team is already swamped. Also, even if you somehow manage to fix all of the ID’s, what is preventing these ID’s from getting duplicated again? We looked at all types of process/technology solutions and then realized that there is an easy way to fix this by doing a little SiteCatalyst trickery.

So what did we do? We simply replaced the Form ID eVar value with a new value that concatenated the Page Name and the Form ID on every Form Page and Form Confirmation Page. By concatenating the Page Name value, even if the same Form ID was used on multiple pages, the concatenated value would still be unique. For example, the old Form ID report looked like the one above:

But the new version looked like this:

With this new & improved report, when I was reporting for a particular form on a particular site/page, I could search by the form pagename and be sure I was only looking at results from that page. Also, a cool side benefit of this approach is that you could add a Form ID to the search function to quickly find all pages that had the same Form ID in case you ever did want to clean up your Form ID’s:

Implementation Gotcha!
However, there is one tricky part of this solution. While it is certainly easy to concatenate the s.pagename value with the Form ID on the Form page, what about the Form Confirmation page? The Form Confirmation page is where you should be setting your Form Completion Success Event and that page is going to have a different pagename. If your Form ID report doesn’t have the same Page Name + Form ID value for both the Form View and Form Complete Success Event, you cannot use a Form Completion Rate Calculated Metric. For this reason, you need to use the Previous Value Plug-in to pass the previous pagename on the Form Confirmation page. Doing this will allow you to pass the name of the “Form View” page on both the Form View and Form Complete page of your site so you have the same page name value merged with the Form ID.
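
Here is a rough sketch of what that page code might look like. The eVar number, the event numbers, and the formID variable are all hypothetical (your implementation will differ), and the Previous Value Plug-in itself must already be included in your s_code file:

// Form page: concatenate the page name and the [hypothetical] form ID
s.eVar5 = s.pagename + " | " + formID;
s.events = "event1"; // hypothetical "Form View" Success Event

// Form Confirmation page: grab the PREVIOUS page's name (the form page)
// via the Previous Value Plug-in so both events share the same eVar value.
// getPreviousValue() is typically called on every page (e.g., in s_doPlugins)
// so that the "gpv_pn" cookie always holds the prior page's name.
s.eVar5 = s.getPreviousValue(s.pagename, "gpv_pn", "") + " | " + formID;
s.events = "event2"; // hypothetical "Form Complete" Success Event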

A Few More Things
Finally, while the Form ID report above serves this particular function, it is not very glamorous and it might not be the most user-friendly report for your users. If you want to provide a friendlier experience, you can do the following with SAINT Classifications:

  1. Classify the Form ID value by its Page Name so your users can see Form Views, Form Completions and the Form Completion Rate by Page Name
  2. Classify the Form ID value by the Form ID if for some reason you want to go back to seeing the report you had previously

Final Thoughts
Well there you have it. A very specific solution to a specific problem I encountered. If you have Lead Generation Forms on your website, maybe it will help you out one day. If not, thanks for letting me get this out of my system!

Analytics Strategy, Conferences/Community, General

Are you in Atlanta? I will be, next week!

Just a quick note to those of you in the greater Atlanta (GA) metropolitan region to let you know I will be in town next week working and participating in two awesome events:

  1. A blow-out Atlanta Web Analytics Wednesday, sponsored by the fine folks at Unica, where I will be moderating a “practitioner panel” with Delta, Home Depot, and How Stuff Works. Sadly, we only had room for 60-odd people and that list filled up almost right away … but you can write to Jane Kell and ask to be put on the waiting list just in case people have to back out.
  2. A presentation to Atlanta CHI on “Getting to Know Your Users Using Data” at the Georgia Tech Research Institute Conference Center. As this is a somewhat mixed audience, my presentation will be a high-level walk through the systems and processes we all leverage on a daily basis.

As far as I know the CHI event is not sold out and costs $35 for the general public, $10 for students with ID, and nothing (free!) if you are a member of Atlanta CHI!

For those of you not in Atlanta, apologies for this utterly useless blog post. Hopefully I will make it to your town soon and can make it up to you …

Reporting, Social Media

Twitter Performance Measurement with (a Heavy Reliance on) Twitalyzer

My Analyzing Twitter — Practical Analysis post a few weeks ago wound up sparking a handful of fantastic and informative conversations (“conversations” in the new media use of the term: blog comments, e-mails, and Twitter exchanges in addition to one actual telephone discussion). That’s sort of the point of social media, right? The fact that I can now use these discussions as an example of why social media has real value isn’t going to convince people who view it just as a way to tell the world the minutiae of your life, because they would point out that gazing at one’s navel to better understand navel-gazing…is still just navel-gazing. So, yeah, if a brand knows that 145 million consumers have signed up for Twitter and knows that it is welcome to leverage Twitter as a marketing channel, but just doesn’t fundamentally believe that it’s a channel to at least consider using, then neither anecdotes nor good-but-not-perfect data is going to be convincing.

Many brands, though, are convinced that Twitter is a channel they should use and are willing to put some level of resources towards it. But, the question still remains: “How do we most effectively measure the results of our investment?” Everything in Twitter occurs at a micro level — 140 characters at a time. A single promotion with a direct response purchase CTA can be measured, certainly, but that’s an overly myopic perspective. So, what is a brand to do? For starters, it’s important to recognize there are (at least) three fundamentally different types of “measurement” of Twitter:

  • Performance measurement — measuring progress towards specific objectives of the Twitter investment
  • Analysis and optimization — identifying opportunities to improve performance in the channel
  • Listening (and responding) — this is an area where social media has really started blurring the line between traditional outbound marketing, PR, consumer research, and even a brand’s web site; with Twitter, there is the opportunity to gather data (tweets) in near real-time and then respond to and engage with selected tweets…and whose job is that?

The kicker is that all three of these types of “measurement” can use the same underlying data set and, in many cases, the same basic tools (with traditional web analytics, both performance measurement and analysis often use the same web analytics platform, and plenty of marketers don’t understand the difference between the two…but I’m going to maintain some  self-discipline and avoid pursuing that tangent here!).

This post is devoted to Twitter performance measurement, with a heavy, heavy dose of  Twitalyzer as a recommended key component of that approach. Have I done an exhaustive assessment of all of the self-proclaimed Twitter analytics tools on the market? No. I’ll leave that to Forrester analysts. I’ve gone deep with one online listening platform and have done a cursory survey of a mid-sized list of tools and found them generally lacking in either the flexibility or the specificity I needed (I will touch on at least one other tool in a future post that I think complements Twitalyzer well, but I need to do some more digging there first). Twitalyzer was (and continues to be) designed and developed by a couple of guys with serious web analytics chops — Eric Peterson and Jeff Katz. They’ve built the tool with that mindset — the need for it to have flexibility, to trend data, to track measures against pre-established targets, and to calculate metrics that are reasonably intuitive to understand. They’ve also established a business model where there is “unlimited use” at whichever plan level you sign up for — there is no fixed number of reports that can be run each month, because, generally, you want to see a report’s results and iterate on the setup a few times before you get it tuned to what you really need. So, there’s all of that going for it before you actually dive into the capabilities.

One more time: this is not a comprehensive post of everything you can do with Twitalyzer. That would be like trying to write a post about all the things you can do with Google Analytics, which is more of a book than a post. For a comprehensive Twitalyzer guide, you can read the 55-page Twitalyzer handbook.

Metrics vs Measures

The Twitalyzer documentation makes a clear distinction between “metrics” and “measures,” and the distinction has nothing to do with whether the type of data is useful or not. Measures are simply straight-up data points that you could largely get by simply looking at your account at any point in time — following count, follower count, number of lists the user is included on, number of tweets, number of replies, number of retweets, etc. Metrics, on the other hand, are calculated based on several measures and include things like influence, clout, velocity, and impact. Obviously, metrics have some level of subjectivity in the definition, but there are a number of them available, and everywhere a metric is used, you are one click away from an explanation of what goes into calculating it. The first trick is choosing which measures and metrics tie the most closely to your objectives for being on Twitter (“increase brand awareness” is a very different objective from “increase customer loyalty by deepening consumer engagement”). The second trick is ensuring that the necessary stakeholders in the Twitter effort buy into them as valid indicators of performance.

For both metrics and measures, Twitalyzer provides trended data…as best they can. Twitalyzer is like most web analytics packages in that historical data is not magically available when you first start using the tool. Now, the reason for that being the case is very different for Twitalyzer than it is for web analytics tools. Basically, Twitter does not allow unlimited queries of unlimited size into unlimited date ranges. So, Twitalyzer doesn’t pull all of its measures and calculate all of the metrics for a user unless someone asks the tool to. The tool can be “asked” in two ways:

  • Someone twitalyzes a username (you get more data if it’s an account that you can log into, but Twitalyzer pulls a decent level of data even for “unauthenticated” accounts)
  • All of the tracked users in a paid account get analyzed at least once a day

When Twitalyzer assesses an account, the tool looks at the last 7 days of data. So, as I understand it, if you’re a paid user, then any “trend” data you look at is, essentially, showing a rolling 7-day average for the account (if you’re not a paid user, you could still go to the site each day and twitalyze your username and get the same result…but if you really want to do that, then suck it up and pay $10/month — it’ll be considerably cheaper if you have even the most basic understanding of the concept of opportunity costs). This makes sense, in that it reasonably smooths out the data.

Useful Measures

There isn’t any real magic to the measures, but the consistent capture of them with a paid account is handy. And, what’s nice about measures is that anyone who is using Twitter sees most of the measures any time they go to their page, so they are clearly understood. Some measures that you should consider (picking and choosing — selectivity is key!) include:

  • Followers — this is an easy one, but it’s the simplest indication as to whether consumers are interested in interacting with your brand through Twitter; and if your follower count ever starts declining, you’ve got a very, very sick canary in your Twitter coal mine — consumers who, at one time, did want to interact with you are actively deciding they no longer want to do so; that’s bad
  • Lists — the number of lists the user is a member of is another measure I like, because each list membership is an occasion where a reasonably sophisticated Twitter user has stopped to think about his/her relationship with your brand, has categorized that relationship, AND has the ability to then share that category with other users.
  • Replies/References — if other Twitter users are aware of your presence and are actually referencing it (“@<username>”), that’s generally a good thing (although, clearly, if that upticks dramatically and those references are very negative, then that’s not a good thing)
  • Retweets — people are paying attention to what you’re saying through Twitter, and they’re interested in it enough to pass the information along

Twitalyzer actually measures unique references and unique retweets (e.g., if another user references the tracked account 3 times, that is 3 references but only 1 unique reference — think visits vs. page views in web analytics), but, as best as I can tell, doesn’t make those measures directly available for reporting. Instead, they get used in some of the calculated metrics.

A few other measures to consider that you won’t necessarily get from Twitalyzer include:

  • Referrals to your site — there are two flavors of this, and you should consider both: referrals from twitter.com to your site (are Twitter users sharing links to your site overall?), and clickthroughs on specific links you posted (which you can track through campaign tracking, manually through a URL shortener service like bit.ly or goo.gl, or through Twitalyzer)
  • Conversions from referrals — this is the next step beyond simply referrals to your site and is more the “meaningful conversion” (not necessarily a purchase, but it could be) of those referrals once they arrive on your site
  • Volume and sentiment of discussions about your brand/products — Twitalyzer does this to a certain extent, but it does it best when the brand and the username are the same, and I’m inclined to look to online listening platforms as a more robust way to measure this for now

Calculated Metrics

Now, the calculated metrics are where things really get interesting. Each calculated metric is pretty clearly defined (and, thankfully, there is nary a Greek character in any of the definitions, which makes them, I believe, easier for most marketers to swallow and digest). This isn’t an exhaustive list of the available metrics, but the ones I’m most drawn to as potential performance measurement metrics are:

  • Impact — this combines the number of followers the user has, how often the user tweets, the number of unique references to the user, and the frequency with which the user is uniquely retweeted and uniquely retweets others’ tweets; this metric gets calculated for other Twitter users as well and can really help focus a brand’s listening and responding…but that’s a subject for another post
  • Influence — a measure of the likelihood that a tweet by the user will be referenced or retweeted
  • Engagement — a lot of brands still simply “shout their message” out to the Twitterverse and never (or seldom) reference or reply to other users; Twitalyzer calculates engagement as a ratio of how often the brand references other users compared to how often other users reference the brand; so, this is a performance measure that is highly influenced by the basic approach to Twitter a brand takes, and many brands have an engagement metric value of 0%. It’s an easy metric to change…as long as a brand wants to do so
  • Effective Reach — this combines the user’s influence score and follower count with the influence score and follower count of each user who retweeted the user’s tweet to “determine a likely and realistic representation of any user’s reach in Twitter at any given time.” Very slick.

There are a number of other calculated metrics, but these are the ones I’m most jazzed about from a performance measurement standpoint. (I’m totally on the fence both with Twitalyzer’s Clout metric and Klout’s Klout score, which Twitalyzer pulls into their interface — there’s a nice bit of musing on the Klout score in an AdAge article from 30-Sep-2010, but the jury is still out for me.)

Setting Goals

Okay, so the next nifty aspect of Twitalyzer when it comes to performance measurement is that you can set goals for specific metrics:

Once a goal is set, it then gets included on trend charts when viewing a specific metric. “But…what goal should I set for myself? What’s ‘normal?’ What’s ‘good?’” I know those questions will come, and the answer isn’t really any better than it is for people who want to know what the “industry benchmark for an email clickthrough rate” is. It’s a big fat “it depends!” But, once you assess your purpose for using Twitter, translate that into clear objectives, and determine which metrics make the most sense, it’s pretty easy to identify where you want to “get better.” Set a goal higher than where you are now, and then track progress (Twitalyzer also includes a “recommendations” area that makes specific notes about ways you can alter your Twitter behavior to improve the scores — the metrics are specifically designed so that the way to “game” the metrics…is by being a better Twitter citizen, which means you’re not really gaming the system).

I’d love to have the ability to set goals for any measure in the tool, but, in practice, I don’t expect to do any regular performance reporting directly from Twitalyzer’s interface for several reasons:

  • There are measures that I’ll want to include from other sources
  • The current version of the tool doesn’t have the flexibility I need to put together a single page dashboard with just the measures and metrics I care about for any given account — the interface is one of the cleanest and easiest to use that I’ve seen on any tool, but, as I’ve written about before, I have a high bar for what I’d need the interface to do in order for the tool itself to actually be my ultimate dashboard

Overall, though, goal-setting = good, and I appreciate Eric’s self-admitted attempt to continue to steer the world of marketing performance measurement to a place where marketers not only establish the right metrics, but they set targets for them as well, even if they have to set the targets based on some level of gut instinct. You are never more objective about what it is you can accomplish than you are before you try to accomplish it!

But, Remember, That’s not All!

So, this post has turned into something of a Twitalyzer lovefest. Here’s the kicker: the features covered in this post are the least interesting/exciting aspects of the tool. Hopefully, I’ll manage to knock out another post or two on actually doing analysis with the tool and how I can easily see it being integrated into a daily process for driving a brand’s Twitter investment. Twitalyzer is focussed on Twitter and getting the most relevant information for the channel directly out of the API, unlike online listening platforms that cover all digital/social channels and, in many cases, are based on text mining of massive volumes of data (which, as I understand it, is generally purchased from one of a small handful of web content aggregators). It’s been designed by marketing analysts — not by social media, PR, or market research people.  It’s pretty cool and does a lot considering how young it is (and the 4.0 beta is apparently just around the corner). Like any digital analytics tool, it’s going to have a hard time keeping up with the rapid evolution of the channel itself, but it’s one helluva start!

Analytics Strategy, Conferences/Community

Updated “Web Analysts Code of Ethics”

Just in case you hadn’t seen this already I wanted to call your attention to the updated (version 2) “Web Analysts Code of Ethics” over at the Web Analytics Association blog. John Lovett and the members of the Standards Subcommittee did a wonderful job condensing my original work down into a more easily digested document.

The committee is still looking for comments on this version so please, please head over, read the update, and let us know what you think.

Thanks to John and the WAA for making this happen for all of us!

Adobe Analytics, General

Tracking Recurring Revenue

Recently, I had the pleasure of meeting a web analyst whose business relied on a subscription model. In a subscription model, you often sell a product initially and then there is a subsequent Recurring Revenue stream (normally monthly). During the conversation, I explained how I would address this in SiteCatalyst and, since it is a somewhat advanced concept, I thought I would share the same info here in case there are blog readers out there who also have Recurring Revenue models.

Why Is Recurring Revenue A Challenge?
So why is tracking Recurring Revenue in SiteCatalyst difficult? As always, I like to explain through an example. Let’s imagine that you sell a popular CRM product that has an initial sale price and then a monthly subscription. A visitor comes to your website from a Bing keyword of “CRM” and they end up purchasing your product for $10,000. You can track this $10,000 sale online and attribute it to the Bing keyword. However, what do you do after one month? Let’s say the customer pays $1,000 each month after the initial $10,000. How do you attribute the recurring monthly $1,000 to the Bing keyword that originally brought the customer? Many clients I have seen stop at the initial sale, but this is problematic. What if there are some marketing campaigns that bring in a lot of initial sales, but those campaigns produce customers who quit the subscription after two months? Perhaps there are other marketing campaigns that generate lower initial sale amounts, but result in customers who are retained for several years. How do you compare “apples to apples” in this case if you cannot tie both initial and subscription revenue to the original marketing campaign?

The answer for most clients is to simply pass the original marketing campaign to their back-end system and do all of the reporting outside of a tool like SiteCatalyst. However, this has the following negative consequences:

  1. As a web analyst, you are now out of the loop, which is not good for your program (or your career!)
  2. There are hundreds of online, web-only data points that you know about the original sale (e.g., visit number, internal search terms used, internal promos used, etc…). Are you going to pass all of these data points to your company’s data warehouse and do analysis there? If you are a big company, that may be possible, but what if you are a small or mid-sized business?

If you are like me, you like to have, at a minimum, all important data in SiteCatalyst so you can have a seat at the table. With this in mind, the following section will describe what you need to do if you want to get Recurring Revenue into your SiteCatalyst implementation so it can be tied to the same data points as the initial sale.

Recurring Revenue in SiteCatalyst Reports
If you have read my past blog posts or even just my last post on Product Returns, you may know that one of my favorite SiteCatalyst features is Transaction ID (I suggest re-reading this post!). At a high level, Transaction ID allows you to set an ID associated with a transaction and later upload offline metrics that are dynamically associated with any Conversion Variable (eVar) values which were active at the time the Transaction ID was set. Just as Transaction ID was important to solving our Product Returns issue, it can be used similarly to solve the aforementioned Recurring Revenue challenge.

When visitors make their initial subscription purchase, you can set a Transaction ID value on the confirmation page. Doing this allows you to establish a “key” that can be used later to upload Recurring Revenue and tie it to all of the eVar values associated with the original sale. Keep in mind that you will have to work with your Adobe account manager to get Transaction ID set up. Additionally, Transaction ID values are normally only retained for 90 days, but in this case you will need to work with your account manager to get that extended perpetually (or for as long as you want to include Recurring Revenue). For example, if you want to associate two years of revenue with the eVar values that contributed to the original sale, then your Transaction ID data must persist for two years.
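
As a minimal sketch, the subscription confirmation page might set something like this (the ID value is made up; typically it would be your order or contract number):

// Subscription purchase confirmation page
s.transactionID = "ABC12345"; // must match the ID used in later Recurring Revenue uploads
s.events = "purchase";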

Once you have Transaction ID enabled and have started passing Transaction ID’s for online purchases, the next step is to create a new “Recurring Revenue” [Currency] Incrementor Event. Transaction ID uploads are similar to Data Sources and, as such, can only import data as Incrementor Success Events. Setting up a new Incrementor Event is easily done through the Admin Console.

Once you have Transaction ID set up and your new Recurring Revenue currency Success Event, you need to generate a Data Sources template file that you can upload on a monthly/weekly/daily basis. This file will consist of each subscription account that is still active and the amount of [monthly] Recurring Revenue that should be recorded for each date. Normally, you will have subscriptions that expire on varying dates, so you may decide to upload a file on a daily basis representing all those who are starting a new subscription cycle on that date. The Data Sources template that you create should have the following columns:

  1. Date – Use the date that the new subscription cycle starts, not the date of the original sale. This date will determine which month the Recurring Revenue will appear in when using SiteCatalyst. NOTE: Please keep in mind that there is currently a SiteCatalyst restriction that you cannot upload a Transaction ID file that has dates spanning more than 90 days. You can upload dates that are more than 90 days old, but the date ranges for the entire file upload cannot be more than 90 days (kind of lame in my opinion!).
  2. Transaction ID – The ID set when the initial subscription sale took place
  3. Product Name/ID – Same value that was passed to the Products Variable during the original online subscription purchase
  4. Recurring Revenue – Amount that the client will be charged for the next subscription cycle

When you are done, your Data Sources upload file might look something like this:
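
As a made-up illustration (all IDs, product names, and amounts below are invented), a few rows of such a file might contain:

Date          Transaction ID    Product Name/ID    Recurring Revenue
10/01/2010    ABC12345          CRM Product        1000.00
10/02/2010    ABC12399          CRM Product        1000.00
10/02/2010    ABC12407          Service Suite      500.00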

Seeing Recurring Revenue in SiteCatalyst Reports
Once you have successfully uploaded some Recurring Revenue data, it is time to see how all of this looks in SiteCatalyst. To do this, open the Products report and add Revenue and our new Recurring Revenue metrics. The report should look like this:

Next, you can create a Calculated Metric which combines the two metrics to create a “Total Revenue” metric as shown here:
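
The definition of that Calculated Metric is simply the sum of the two Success Events:

Total Revenue = Revenue + Recurring Revenue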

Finally, since Transaction ID allows you to apply the eVar values that were associated with the original transaction to the new Recurring Revenue data, you can use Subrelation breakdowns of the Products report by Campaign and see both Revenue and Recurring Revenue (and the Total Revenue Calculated Metric)!

In the above report, we can see an example of the quandary I described in the beginning of this post. When we break down our first product (Sales Cloud) by marketing campaign, if we look at the Revenue column, it looks like we should be focusing our marketing spend on Bing Branded Keywords. However, when we add our Recurring Revenue, we can see that the majority of our Recurring Revenue and most of our Total Revenue is coming from E-mail. Perhaps that is the best place to concentrate our marketing budget…

Final Thoughts
So there you have it. A simple, yet [hopefully] effective approach for making sure that you show all Revenue you are helping generate whether it takes place during the initial sale or subsequently… If you have comments/questions, please leave a comment here…

Analysis, Social Media

Four Ways that Media Mix Modeling (MMM) Is Broken

Many companies rely on some form of media mix modeling (or “marketing mix modeling”) to determine the optimal mix of their advertising spend. With the growth of “digital” media and the explosion of social media, these models are starting to break down. That puts many marketing executives in a tough bind:

  1. Marketing, like all business functions, must be data-driven — more so now than ever
  2. Digital is the “most measurable medium ever” (although there are wild misperceptions as to what this really means)
  3. Ergo, digital media investments must be precisely measured to quantify impact on the bottom line

For companies that have built up a heavy reliance on media mix modeling (MMM), the solution seems easy: simply incorporate digital media into the model! What those of us who live and breathe this world recognize (and lament over drinks at various conferences for data geeks) is that this “simple” solution simply doesn’t work. Publicly, we say, “Well…er…it’s problematic, but we’re working on it, and the modeling techniques are going to catch up soon.”

My take: don’t hold your breath that MMM is going to catch up — even if it catches up to today’s reality, it will already be behind, because digital/social/mobile will have continued its explosive evolution (and complexity to model).

Believe it or not, I’m not saying that MMM should be completely abandoned. It still has its place, I think, but there are a lot of things it’s going to really, really struggle to address. I’d actually like to see companies who provide MMM services weigh in on what that place is. At eMetrics earlier this month, I attended a session where the speaker did just that. Skip ahead to the last section to find out who!

Geographic Test/Control Data

Both traditional and digital marketing have a mix of geo-specific capabilities. The cost of TV, radio, print, and out-of-home (OOH) marketing provides an imperative to geo-target when appropriate (or simply to minimize the peanut butter effect of spreading a limited investment so thinly that it doesn’t have an impact anywhere). Many digital channels, though, such as web sites and Facebook pages, are geared towards being “available to everyone.” Other channels – SEM, banner ads, and email, for instance – can be geo-targeted, but there often isn’t a cost/benefit reason to do so. Without different geographic placements of marketing, the impact on sales in “exposed areas” vs. “unexposed areas” cannot be teased out:

Cross-Channel Interaction

While marketers have long known that multi-channel campaigns produce a whole that is greater than the sum of the parts, the sheer complexity that digital has introduced into the equation forces MMM to guess at attribution. For example, we know (or, at least, we strongly suspect) that a large TV advertising campaign will not only provide a lift in sales, but it will also produce a lift in searches for a brand. Those increased searches will increase SEM results, which will drive traffic to the brand’s web site. Consumers who visit the site can then be added to a retargeting campaign. Those are four different marketing channels that all require investment…but which one gets the credit when the consumer buys?

This is both a data capture question and a business rules question. Entire companies (Clearsaleing being the one that I hear the most about) have been built just to address the data capture and the application of business rules. While they provide the tools, they’re a long way from really being able to capture data across the entire continuum of a consumer’s experience. The business rules question is just as significant — most marketers’ heads will explode if they’re asked to figure out what the “right” attribution is (and simply trying different attribution models won’t answer the question — different models will show different channels being the “best”). Is this a new career option for Philosophy majors, perhaps?

Fragmentation of Consumer Experiences

This one is related to the cross-channel interaction issue described above, but it’s another lens applied to the same underlying challenge. Consumer behavior is evolving — there are exponentially more channels through which consumers can receive brand exposure (I picked up the phrase “cross-channel consumer” at eMetrics, which is in the running for my favorite three-word phrase of 2010!). Some of these channels operate both as push and pull, whereas traditional media is almost exclusively “push” (marketers push their messages out to consumers through advertising):

We’re now working with an equation that has wayyyyyyyy more variables than the formulas we were trying to solve when MMM first came onto the scene, and each variable has a smaller individual effect. HAL? Can you help? This is actually beyond a question of simply “more processing power.” It’s more like predicting what the weather will be next week — even with meteoric advancements in processing power and a near limitless ability to collect data, the models are still imprecise.

Self-Fulfilling Mix

Finally, there is a chicken-and-egg problem. While there are reams of secondary research documenting the shifting of consumer behavior from offline to online consumption…many brands still disproportionately invest in offline marketing. It’s understandable — they’re waiting for the data to be able to “prove” that digital marketing works (and prove it with an unrealistic degree of accuracy — digital is held to a higher standard than offline media, and the “confusion of precision with accuracy” syndrome is alive and well). But, when digital marketing investments are overly tentative (and those investments are spread across a multitude of digital channels), the true impact of digital can’t be detected because it’s dwarfed by the impact of the massive — if less efficient — investments in offline marketing:

If I shoot a pumpkin simultaneously with a $1,500 shotgun and a $30 BB gun and ask an observer to tell me how much of an impact the BB gun had…

So, Should We Just Start Operating on Faith and Instinct?

I wrote early in this post that I think MMM has its place. I don’t fully understand what that place is, but the credibility of anyone whose bread is buttered by their MMM book of business who stands up and says, “Folks, MMM has some issues,” immediately skyrockets. That’s exactly what Steve Tobias from Marketing Management Analytics (MMA) did at eMetrics. In his session, “Marketing Mix Modeling: How to Make Digital Work for a True ROI,” he talked at length about many of the same challenges I’ve described in this post (albeit in greater detail and without the use of cartoon-y diagrams). But, he went on to lay out how MMA is using traditional MMM in conjunction with panel-based data (in his examples, he used comScore for the analysis) to get “true ROI” measurement. All I’ve seen is that presentation, so I don’t have direct experience with MMA’s work in action, but I liked what I heard!

Adobe Analytics, General, Industry Analysis

Our Engagement Metric in use at Philly.com

Those of you who have read my blog for long know that I have written a tremendous amount about measures of visitor engagement online. In addition to numerous blog posts, we have published a 50-page white paper describing how to measure visitor engagement, and every year I give a half-dozen presentations on the subject. Unlike some people who seem to fear new ideas and others who disapprove of anything they themselves do not create, I have long been a champion for evolving our use of metrics in web analytics to satisfy business needs.

But don’t take my word for it, read about how the nice folks at Philly.com are using a near complete version of my calculation to better understand their audience.

Cool, huh?

The thing I love about this article is that Philly.com is openly talking about their use of my engagement metric.  What’s better is that their sharing prompted another super-great organization (PBS) to comment that they too have been using my engagement metric for years.

Awesome.

I have been honored to work with several companies in the past three years who have implemented my metric and variations thereof, but most treat the metric as a competitive secret. Given that most are in the hard-pressed and hyper-competitive online media world, I understand, but I’m certainly happy to see Philly.com and Chris Meares share their story with the world.

Anyway, check out the article and, if you’re brave, download our white paper on visitor engagement and give it a read. If you are in media and are stuck trying to figure out how to get web analytics to work for you (instead of the other way around) give me a call. I’m more than happy to discuss how our measure of engagement might be able to help your business grow.

Analytics Strategy

My Letter To The C-Suite

The following originally appeared in Exact Target’s 10 Ideas To Turn Into Results report. It’s part of their Letters to the C-Suite series, and this is my letter…

To The Executive Team:

Do you even know who your customers are anymore? Chances are, you probably don’t. You catch fleeting glimpses of them as they open your emails or pop onto your website for a quick visit. You might even momentarily engage with them when they drop into your store to browse around or see your products firsthand. Or maybe you meet them ever so briefly as they feign interest in your brand by “liking” something you posted on Facebook.

If you’re doing it right, your business is collecting feedback across many customer touch points.

But you only really hear them when they shout from the rooftops, irate and full of vim. That’s probably where you begin to learn what’s on their minds. But do you even know that it’s the same person who was showing you all that love during your last promotion? Probably not.

In actuality, few companies really know their customers. Whether your customers are end users or other businesses, how they interact with your brand, where they discover new information, and how they communicate is changing at an astounding rate. Customers are increasingly unaffected by traditional marketing conventions, and their tolerance for redundant messaging, static content, and conflicting brand information is nonexistent. They don’t see your organization like you do—in departmentalized silos of categories, products, business units, and operating divisions. To them, you’re just that brand they either love, hate, or treat with ambivalence. That is, until you knock their socks off by impressing them with your service, support, and relevance. Yet, to really deliver value to your customers, you need to get to know them. This starts by remembering the interactions you have with them and building off of these activities.

Digital communication is the new reality, and treating customers through digital channels is synonymous with how you’d treat someone you meet in person. Listen to what they’re saying and respond with appropriate dialog. But most importantly, remember these things (because upon your next conversation, your customer might just remember you):

• Your memory of customers exists at the database level.

• By maintaining customer profiles and appending them with attributes that contain history, activity, and propensity (among other things), you can truly begin to have meaningful interactions.

• To do this effectively, the database must contain information from all your touch points. This includes transactional systems, web analytics, call centers, mobile devices, social media, ATMs, stores, email systems, and whatever else you’re using to reach out.

Bringing your data together through integrations enables you to achieve a holistic picture of your customers. A little scared by this? Well, you should be. Customer behaviors are going to fundamentally change the way you engage with your audience. If you’re not equipped, they’re going to take their conversations (and their wallets) elsewhere. By integrating your data, you open opportunities for new customer dialogs.

Take my word for it—it’s happening NOW.

Your Agent For Change,
John Lovett

Analysis, Analytics Strategy, Reporting, Social Media

Analyzing Twitter — Practical Analysis

In my last post, I grabbed tweets with the “#emetrics” hashtag and did some analysis on them. One of the comments on that post asked what social tools I use for analysis — paid and free. Getting a bit more focussed than that, I thought it might be interesting to write up what free tools I use for Twitter analysis. There are lots of posts on “Twitter tools,” and I’ve spent more time than I like to admit sifting through them and trying to find ones that give me information I can really use. This, in some ways, is another one of those posts, except I’m going to provide a short list of tools I actually do use on a regular basis and how and why I use them.

What Kind of Analysis Are We Talking About?

I’m primarily focussed on the measurement and analysis of consumer brands on Twitter rather than on the measurement of one’s personal brand (e.g., @tgwilson). While there is some overlap, there are some things that make these fundamentally different. With that in mind, there are really three different lenses through which Twitter can be viewed, and they’re all important:

  • The brand’s Twitter account(s) — this is analysis of followers, lists, replies, retweets, and overall tweet reach
  • References of the brand or a campaign on Twitter — not necessarily mentions of @<brand>, but references to the brand in tweet content
  • References to specific topics that are relevant to the brand as a way to connect with consumers — at Resource Interactive, we call this a “shared passion,” and the nature of Twitter makes this particularly messy, but, to whatever level it’s feasible, it’s worth doing

While all three of these areas can also be applied in a competitor analysis, this is the only mention (almost) I’m going to make of that  — some of the techniques described here make sense and some don’t when it comes to analyzing the competition.

And, one final note to qualify the rest of this post: this is not about “online listening” in the sense that it’s not really about identifying specific tweets that need a timely response (or a timely retweet). It’s much more about ways to gain visibility into what is going on in Twitter that is relevant to the brand, as well as whether the time spent investing in Twitter is providing meaningful results. Online listening tools can play a part in that…but we’ll cover that later in this post.

Capturing Tweets?

When it comes to Twitter analysis, it’s hard to get too far without having a nice little repository of tweets themselves.  Unfortunately, Twitter has never made an endless history of tweets available for mining (or available for anything, for that matter). And, while the Library of Congress is archiving tweets, as far as I know, they haven’t opened up an API to allow analysts to mine them. On top of that, there are various limits to how often and how much data can be pulled in at one time through the Twitter API. As a consumer, I suppose I have to like that there are these limitations. As a data guy, it gets a little frustrating.

Two options that I’ve at least looked at or heard about on this front…but haven’t really cracked:

  • Twapper Keeper — this is a free service for setting up a tweet archive based on a hashtag, a search, or a specific user. In theory, it’s great. But, when I used it for my eMetrics tweet analysis, I stumbled into some kinks — the file download format is .tar (which just means you have to have a utility that can uncompress that format), and the date format changed throughout the data, so getting all of the tweets’ dates readable took some heavy string manipulation
  • R — this is an open source statistics package, and I talked to a fellow several months ago who had used it to hook into Twitter data and do some pretty intriguing stuff. I downloaded it and poked around in the documentation a bit…but didn’t make it much farther than that

I also looked into just pulling Tweets directly into Excel or Access through a web query. It looks like I was a little late for that — Chandoo documented how to use Excel as a Twitter client, but then reported that Twitter made a change that means that approach no longer works as of September 2010.

So, for now, the best way I’ve found to reliably capture tweets for analysis is with RSS and Microsoft Outlook:

  1. Perform a search for the Twitter username, a keyword, or a hashtag from http://search.twitter.com (or, if you just want to archive tweets for a specific user, just go to the user’s Twitter page)
  2. Copy the URL for the RSS for the search (or the user) — an example URL follows these steps
  3. Add a new RSS feed in MS Outlook and paste in the URL
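
(At the time of this writing, the feed URL for a search takes a form like http://search.twitter.com/search.atom?q=%23emetrics, where the q parameter is your URL-encoded search term.)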

From that point forward, assuming Outlook is updating periodically, the RSS feeds will all be captured.

There’s one more little trick: customize the view to make it more Excel/export-friendly. In Outlook 2007, go to View » Current View » Customize Current View » Fields. I typically remove everything except From, Subject, and Received. Then go to View » Current View » Format Columns and change the Received column format from Best Fit to the dd-Mmm-yy format. Finally, remove the grouping. This gives you a nice, flat view of the data. You can then simply select all the tweets you’re interested in, press <Ctrl>-<C>, and then paste them straight into Excel.

I haven’t tried this with hundreds of thousands of tweets, but it’s worked great for targeted searches where there are several thousand tweets.

Total Tweets, Replies, Retweets

While replies and retweets certainly aren’t enough to give you the ultimate ROI of your Twitter presence, they’re completely valid measures of whether you are engaging your followers (and, potentially, their followers). Setting up an RSS feed as described above based on a search for the Twitter username (without the “@”) will pick up both all tweets by that account and all tweets that reference that account.

It’s then a pretty straightforward exercise to add columns to a spreadsheet to classify tweets any number of ways by some use of the IF, ISERROR, and FIND functions. These can be used to quickly flag each tweet as a reply, a retweet, a tweet by the brand, or any mix of things (see the example formulas following this list):

  • Tweet by the brand — the “From” value is the brand’s Twitter username
  • Retweet — tweet contains the string “RT @<username>”
  • Reply — tweet is not a retweet and contains the string “@<username>”
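
For example, with the tweet text in column B and a hypothetical brand username of “acmebrand,” a retweet flag and a reply flag could be calculated as follows (note that FIND is case-sensitive; swap in SEARCH if you want case-insensitive matching):

=IF(ISERROR(FIND("RT @acmebrand",B2)),0,1)

=IF(AND(ISERROR(FIND("RT @acmebrand",B2)),NOT(ISERROR(FIND("@acmebrand",B2)))),1,0)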

Depending on how you’re looking at the data, you can add a column to roll up the date — changing the tweet date to be the tweet week (e.g., all tweets from 10/17/2010 to 10/23/2010 are given a date of 10/17/2010) or the tweet month. To convert a date into the appropriate week (assuming you want the week to start on Sunday):

=C1-WEEKDAY(C1)+1

To convert the date to the appropriate month (the first day of the month):

=DATE(YEAR(C1),MONTH(C1),1)

C1, of course, is the cell with the tweet date.

Then, a pivot table or two later, and you have trendable counts for each of these classifications.

This same basic technique can be used with other RSS feeds and altered formulas to track competitor mentions, mentions of the brand (which may not match the brand’s Twitter username exactly), mention of specific products, etc.

Followers and Lists

Like replies and retweets, simply counting the number of followers you have isn’t a direct measure of business impact, but it is a measure of whether consumers are sufficiently engaged with your brand. Unfortunately, there are not exactly great options for tracking net follower growth over time. The “best” two options I’ve used:

  • Twitter Counter — this site provides historical counts of followers…but the changes in that historical data tend to be suspiciously evenly distributed. It’s better than nothing if you don’t have a time machine handy. (See the Twitalyzer note at the end of this post — I may be changing tools for this soon!)
  • Check the account manually — getting into a rhythm of just checking an account’s total followers is the best way I’ve found to accurately track total followers over time; in theory a script could be written and scheduled that would automatically check this on a recurring basis, but that’s not something I’ve tackled

I also like to check lists and keep track of how many lists the Twitter account is included on. This is a measure, in my mind, of whether followers of the account are sufficiently interested in the brand or the content that they want to carve it off into a subset of their total followers so they are less likely to miss those tweets and/or because they see the Twitter stream as being part of a particular “set of experts.” Twitalyzer looks like it trends list membership over time, but, since I just discovered that it now does that, I can’t stand up and say, “I use that!” I may very well start!

Referrals to the Brand’s Site

This doesn’t always apply, but, if the account represents a brand, and the brand has a web site where the consumer can meaningfully engage with the brand in some way, then referrals from Twitter to the site are a measure of whether Twitter is a meaningful traffic driver. There are fundamentally two types of referrals here:

  • Referrals from tweeted links by the brand’s Twitter account that refer back to the site — these can be tracked by a short URL (such as bit.ly), by adding campaign tracking parameters to the URL so the site’s web analytics tool can identify the traffic as a brand-triggered Twitter referral, or both (see the example URL following this list). The campaign tracking is what is key, because it enables measuring more than simply “clicks”: whether the visitors are first-time visitors to the site or returning visitors, how deeply they engaged with the site, and whether they took any meaningful action (conversions) on the site
  • “Organic” referrals — overall referrals to the site from twitter.com. Depending on which web analytics tool you are using on your site, this may or may not include the clickthroughs from links tweeted by the brand.
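
As an example, if your site uses Google Analytics, a campaign-tagged link (the parameter values here are made up) might look like this before being run through a URL shortener:

http://www.example.com/landing-page?utm_source=twitter&utm_medium=social&utm_campaign=fall_promo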

By looking at referral traffic, you can measure both the volume of traffic to the site and the relative quality of the traffic when compared to other referral sources for the site.

(If the volume of that traffic is sufficiently high to warrant the effort, you may even consider targeting content on the landing page(s) for Twitter referral traffic to try to engage visitors more effectively — you know the visitor is engaged with social media, so why not test some secondary content on the page to see if you can use that knowledge to deliver more relevant content and CTAs?)

Word Clouds with Wordle

While this isn’t a technique for performance management, it’s hard to resist the opportunity to do a qualitative assessment of the tweets to look for any emerging or hot topics that warrant further investigation. Because all of the tweets have been captured, a word cloud can be interesting (see my eMetrics post for an example). Hands-down, Wordle makes the nicest word clouds out there. I just wish it was easier to save and re-use configuration settings.

One note here: you don’t want to just take all of the tweet content and drop it straight into Wordle, as the search criteria you used for the tweets will dwarf all of the other words. If you first drop the tweets into Word, you can then do a series of search and replaces (which you can record as a macro if you’re going to repeat the analysis over time) — replace the search terms, “RT,” and any other terms that you know will be dominant-but-not-interesting with blanks.

Not Exactly the Holy Grail…

Do all of these techniques, when appropriately combined, provide near-perfect measurement of Twitter? Absolutely not. Not even close. But, they’re cheap, they do have meaning, and they beat the tar out of not measuring at all. If I had to pick one tool that I was going to bet on that I’d be using inside of six months for more comprehensive performance measurement of Twitter, it would be Twitalyzer. It sure looks like it’s come a long way in the 6-9 months since I last gave it a look. What it does now that it didn’t do initially:

  • Offers a much larger set of measures — you can pick and choose which measures make sense for your Twitter strategy
  • Provides clear definitions of how each metric is calculated (less obfuscated than the definitions used by Klout)
  • Allows trending of the metrics (including Lists and Followers).

Twitalyzer, like Klout, and Twitter Counter and countless other tools, is centered on the Twitter account itself. As I’ve described here, there is more going on in Twitter that matters to your brand than just direct engagement with your Twitter account and the social graph of your followers. Online listening tools such as Nielsen Buzzmetrics can provide keyword-based monitoring of Twitter for brand mentions and sentiment — this is not online listening per se, really, but it is using online listening tools for measurement.

For the foreseeable future, “measuring Twitter” is going to require a mix of tools. As long as the mix and metrics are grounded in clear objectives and meaningful measures, that’s okay. Isn’t it?

Adobe Analytics, General

Tracking Product Returns

If you sell products or services on your website, you are probably working diligently to be sure that you are tracking the appropriate Orders, Units and Revenue associated with each product sold (you may even be doing some advanced stuff like I described here). Doing this allows you to see all sorts of wonderful things, like what pages lead to sales and what online campaigns have better ROI than others. However, one formidable challenge that web analysts don’t like to talk about is Product Returns. How often have you bought something online only to ship it back or return it to a brick & mortar store associated with the website? If customers return products in significant enough numbers, all of the great online data you have collected may be inaccurate.

I have seen some companies apply a “rule of thumb (dumb?)” in which they discount sales by 20% across the board to account for returns, but how does that help you determine if a specific marketing campaign is good or bad when your SiteCatalyst reports only show the good? By not tying product returns directly to their corresponding online sales, your web analytics reports will be inherently flawed. The truth is that I have seen very few clients adequately address this issue, so I thought I would suggest what I think is an appropriate way to deal with Product Returns. Even if you don’t sell things through a shopping cart, I encourage you to read this post, as its principles are applicable to any situation in which you have an online success that is later retracted in some manner offline.

Tracking Product Returns
The best way to understand the tracking of Product Returns is through an example. Let’s pretend that you are a web analyst for Apple and a first-time visitor comes to the website from the Google paid search keyword “ipod” and purchases two iPods for $50 each. In SiteCatalyst, when we open the Products report, we would see $100 for the Product labeled “ipod,” and if we broke it down by visit number, the same $100 would be attributed to visit number one. So far, so good…

However, let’s imagine that one of these iPods was returned to a local Apple Store. Now our reality has changed. The paid search keyword and first visit combination has now only led to $50, but SiteCatalyst still shows $100. If we create calculated metrics to compare our Revenue per Marketing Spend, our real ROI just got cut in half, but this is not reflected in SiteCatalyst. This might cause us to misallocate marketing dollars to campaigns that look good at first, but in reality are not as profitable as others when Product Returns are added to the mix (especially if we automate Paid Search using SearchCenter!). I don’t know about you, but I certainly wouldn’t want to be the one telling my boss to invest in marketing campaigns that turn out to be “duds!”

So how do we fix this mess? In order to track Product Returns, you’ll need to re-familiarize yourself with the Transaction ID feature of SiteCatalyst (I suggest re-reading this post!). At a high level, Transaction ID allows you to set an ID associated with a transaction and later upload offline metrics that are dynamically associated with any Conversion Variable (eVar) values which were active at the time the Transaction ID was set (phew!). In this case, you need to ensure that you are setting a Transaction ID value when the original online sale takes place. By doing this, you create a “key” that will allow you to upload Product Return data later and back it out of its corresponding online sale. Keep in mind that you will have to work with your Adobe account manager to get Transaction ID set up, and you’ll want to be sure that the Transaction ID table persists for as long a time frame as you require to upload return data (the default is 90 days, but this can be extended).
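To make the “key” idea concrete, here is a toy sketch in Python of the join that Transaction ID enables. This is purely conceptual (it is not how Adobe implements the feature), but it shows why the ID lets a return land against the original eVar values:

    # At purchase time, the eVar context is stored under the Transaction ID
    online_transactions = {
        "TXN1001": {"paid_search_keyword": "ipod", "visit_number": 1},
    }

    # Weeks later, a return comes in from the store, keyed by the same ID
    product_return = {"transaction_id": "TXN1001", "product": "ipod", "amount": 50.00}

    # The upload "joins" the return back to the original context, so the
    # $50 return is attributed to the "ipod" keyword and visit number one
    context = online_transactions[product_return["transaction_id"]]
    print(product_return["amount"], "attributed to", context)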

Once you have Transaction ID enabled and have started passing Transaction IDs for online purchases, the next step is to create a new “Product Return Amount” [Currency] Incrementor Event. Transaction ID uploads are similar to Data Sources and, as such, import data as Incrementor Success Events. Setting up a new Incrementor Event is easily done through the Admin Console.

Once you have Transaction ID set up and your new Product Return Amount currency Success Event, you need to use Data Sources to generate a Product Returns template which you can populate and upload on a daily/weekly/monthly basis. This file will contain the following columns:

  1. Date – I suggest you use the date that the original purchase took place, not the date of the return, so there is no lag time. NOTE: Please keep in mind that there is currently a SiteCatalyst restriction that you cannot upload a Transaction ID file that has dates spanning more than 90 days. You can upload dates that are more than 90 days old, but the date ranges for the entire file upload cannot be more than 90 days (kind of lame in my opinion!).
  2. Transaction ID – The ID associated with the original online sale
  3. Product Name/ID – Same value that is passed to the Products Variable during the original online purchase
  4. Product Return Amount – This is the total $$ amount per product that is being returned

When you are done, your upload file might look something like this:
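If your order management system can export return records, generating the file is easy to script. A rough sketch in Python (the headers and the tab-delimited layout are illustrative; match them to the template that Data Sources generates for you):

    import csv

    # Hypothetical return records pulled from an order management system:
    # (original purchase date, transaction ID, product, return amount)
    returns = [
        ("01/15/2011", "TXN1001", "ipod", "50.00"),
        ("01/22/2011", "TXN1002", "macbook", "999.00"),
    ]

    # Data Sources upload files are plain tab-delimited text
    with open("product_returns.txt", "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["Date", "Transaction ID", "Product", "Product Return Amount"])
        writer.writerows(returns)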

Using Product Returns in SiteCatalyst Reports
Once you have successfully uploaded some Product Return data, it is time to see how all of this looks in SiteCatalyst. To do this, open the Products report and add Revenue and our new Product Return Amount metrics. The report should look like this:

Next, we can create a Calculated Metric which subtracts the Product Return Amount from Revenue to create a “Net Revenue” metric as shown here:
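(To ground that in the earlier example: the original sale recorded $100 of Revenue, the upload added a $50 Product Return Amount, and Net Revenue = $100 - $50 = $50, which matches what actually happened.)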

Finally, since Transaction ID allows you to apply the eVar values that were associated with the original transaction to the new “return” transaction, you can even use Subrelation breakdowns for the Products report by Visit Number and see both Revenue and Returns (and the Calculated Metric) by Visit Number (pretty cool, huh?)!

Orders & Units (Advanced)
For those who are a bit more advanced, I wanted to let you know that the above solution does not back out Orders or Units for Product Returns. Backing out Orders is a bit tricky since you may only want to remove the Order if the entire Order is returned. Units are a bit easier, as you can simply create a second Incrementor Success Event for “Returned Units” as we did above. However, I suggest that you start with Revenue since most of your questions around Product Returns will be related to Revenue.

Finally, SiteCatalyst does provide an out-of-the-box Data Sources template for Product Returns, which can be found in the Data Sources Manager:

However, I have not used this template myself and I would have the following potential concerns:

  1. I don’t believe that this template uses Transaction ID, which can be problematic, as you will be unable to use the Product Return metric with all of your pre-existing eVar reports
  2. It looks like this template uses the same Revenue, Orders and Units Success Events to back out Product Return data. I feel like this can be a recipe for disaster if something goes wrong. With my approach, the worst-case scenario (if you upload some bad Product Return data) is that your new Product Return metrics are temporarily off. If you use the standard Revenue, Orders and Units metrics, a mistake can be fatal and hidden amongst your normal online metrics (I never mess with Revenue, Orders or Units!).

For these reasons, I suggest you talk to your Adobe Account Manager if you want to pursue this route.

Final Thoughts
So if Product Returns are something that you have to deal with, the above is my suggested way to handle them. Those of you who work in Retail day in and day out may have come up with some other ways to deal with Product Returns, so if there are other best practices out there, please leave a comment here. Thanks!

Analytics Strategy, Social Media

eMetrics Washington, D.C. 2010 — Fun with Twitter

I took a run at the #emetrics tweets to see if anything interesting turned up. Rather than jump into Nielsen Buzzmetrics, which was an option, I just took the raw tweets from the event and did some basic slicing and dicing of them.

[Update: I’ve uploaded the raw data — cleaned up a bit and with some date/time parsing work included — in case you’d like to take another run at analyzing the data set. It’s linked to here as an Excel 2007 file]

The Basics of the Analysis

I constrained the analysis to tweets that occurred between October 4, 2010, and October 6, 2010, which were the core days of the conference. While tweets occurred both before and after this date range, these were the days that most attendees were on-site and attending sessions.

To capture the tweets, I set up a Twapper Keeper archive for all tweets that included the #emetrics hashtag. I certainly could have simply set up an RSS feed and used Outlook to capture the tweets, which is what I do for some of our clients, but I thought this was a good way to give Twapper Keeper a try.

The basic stats: 1,041 tweets from 218 different users (not all of these users were in attendance, as this analysis included all retweets, as well as messages to attendees from people who were not there but were attending in spirit).

Twapper Keeper

Twapper Keeper is free, and it’s useful. The timestamps were inconsistently formatted and/or missing for some of the tweets. I don’t know if that’s a Twapper Keeper issue, a Twitter API issue, or some combination. The tool does have a nice export function that got the data into a comma-delimited format, which is really the main thing I was looking for!

Twitter Tools Used

Personally, I’ve pretty much settled on HootSuite — both the web site and the Droid app — for both following Twitter streams and for tweeting. I was curious about what tools the folks tweeting about eMetrics used. Here’s how it shook out:

So, HootSuite and TweetDeck really dominated.

Most Active Users

On average, each user who tweeted about eMetrics tweeted 4.8 times on the topic. But, this is a little misleading — there were a handful of very prolific users and a pretty long tail when you look at the distribution.

June Li and Michele Hinojosa were by far the most active users tweeting at the conference, together accounting for 23% of all tweets directly (and another 11% through replies and retweets to their tweets, which isn’t reflected in the chart below — tweet often, tweet with relevancy, and your reach expands!):

Tweet Volume by Hour

So, what sessions were hot (…among people tweeting)? The following is a breakdown of tweets by hour for each day of the conference:
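For anyone who grabs the raw data file linked above and wants to reproduce these counts, both breakdowns are simple group-bys. A rough sketch in Python using pandas, where the column names (“user”, “timestamp”) are assumptions to adjust to the actual export:

    import pandas as pd

    # Column names are assumptions -- rename to match the actual export
    tweets = pd.read_excel("emetrics_tweets.xlsx", parse_dates=["timestamp"])

    # Most active users: tweets per user, descending
    by_user = tweets["user"].value_counts()
    print(by_user.head(10))
    print("Average tweets per user:",
          round(len(tweets) / tweets["user"].nunique(), 1))

    # Tweet volume by day and hour of the conference
    by_hour = tweets.groupby(
        [tweets["timestamp"].dt.date, tweets["timestamp"].dt.hour]).size()
    print(by_hour)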

Interestingly, the biggest spike (11:00 AM on Monday) was not during a keynote. Rather, it was during a set of breakout sessions. From looking at the tweets themselves, these were primarily from the Social Media Metrics Framework Faceoff session that featured John Lovett of Web Analytics Demystified and Seth Duncan of Context Analytics. Of course, given the nature of the session, it makes sense that the most prolific users of Twitter attending the conference would be attending that session and sharing the information with others on Twitter!

The 2:00 peak on Monday occurred during the Vendor Line-Up session, which was a rapid-fire and entertaining overview of many of the exhibiting vendors (an Elvis impersonator and a CEO donning a colonial-era wig are going to generate some buzz).

There was quite a fall-off after the first day in overall tweets. Tweeting fatigue? Less compelling content? I don’t know.

Tweet Content

A real challenge for listening to social media is trying to pick up hot topics from unstructured 140-character data. I continue to believe that word clouds hold promise there…although I can’t really justify why a word frequency bar chart wouldn’t do the job just as well.

Below is a word cloud created using Wordle from all 1,041 tweets used in this analysis. I took all of the tweets, dropped them into MS Word, and then did a handful of search-and-replaces to remove the following words/characters:

  • #emetrics
  • data
  • measure
  • RT

These were words that would come through with a very strong signal and dominate potentially more interesting information. Note: I did not include the username for the person who tweeted. So, occurrences of @usernames were replies and retweets only.

Here’s the word cloud:

What jumped out at me was the high occurrence of usernames in this cloud. This appears to be a combination of the volume of tweets from that user (opening up opportunities for replies and retweets) and the “web analytics celebrity” of the user. The Expedia keynote clearly drove some interest, but no vendor generated enough buzz to drive discussion volume sufficient to bubble up here.

As I promised in my initial write-up from eMetrics, I wasn’t necessarily expecting this analysis to yield great insight. But, it did drive me to some action — I’ve added a few people to the list of people I follow!

Analytics Strategy

Gilligan's eMetrics Recap — Washington, D.C. 2010

I attended the eMetrics Marketing Optimization Summit earlier this week in D.C., and this post is my attempt to hash out my highlights from the experience. Of all the conferences I’ve attended (I’m not a major conference attendee, but I’m starting to realize that, by sheer dint of advancing age, I’m starting to rack up “experience” in all sorts of areas by happenstance alone), this was one that I walked away from without having picked up on any sort of unintended conference theme. Normally, any industry conference is abuzz about something, and that simply didn’t seem to be the case with this one.

(In case you missed it, the paragraph above was a warning that this post will not have a unifying thread! Let’s plunge ahead nonetheless!)

Voice of the Customer

It’s good to see VOC vendors aggressively engaging the “traditional web analytics” audience. Without making any direct effort, I repeatedly tripped over Foresee Results, iPerceptions, OpinionLabs, and CRM Metrix in keynotes, sessions, the exhibit hall, and over meals.

My takeaway? It’s a confusing space. Check back in 12-18 months and maybe I’ll be able to pull off a post that provides a useful comparison of their approaches. If I had my ‘druthers, we’d pull off some sort of bracketed Lincoln-Douglas style debate at a future eMetrics, where these vendors were forced to engage each other directly and the audience voted on who gets to advance. The point wouldn’t be to judge which tool is “better” (I’m pretty sure each tool is best-in-class for some subset of situations…although I know at least one of the vendors above who would vigorously tell me this is not the case), but to declare a winner of each matchup so that we would get a series of informative one-on-one debates between different vendors.

Cool Technology

I generally struggle to make my way around an exhibit hall, so I didn’t come anywhere close to covering all of the vendors. This wasn’t helped by the fact that I talked to a couple of exhibitors early on that were spectacularly unappealing, which wasn’t exactly a great motivator for continuing the process. There were, however, several tools that intrigued me:

  • Ensighten – if you’re reading this blog, then chances are you read “real” blogs, too, and you likely caught that Eric Peterson recently wrote a paper on Tag Management Systems (sponsored by Ensighten). It’s worth a read. Ensighten was originally developed in-house at Stratigent and then spun off as a separate business with Josh Manion at the helm. Their corny (but highly effective) schtick at the conference was that they were starting a “tagolution” (a tagging revolution). That gave them high visibility…but I think they’ve got the goods to back it up. Put simply, you deploy the Ensighten JavaScript on your site instead of all of the other tags you need (web analytics, media tracking, VOC tools, etc.). When the page loads, that JavaScript makes a call to Ensighten, which returns all of the tags that need to be executed. Basically, you get to manage your tags without touching the content on your site directly. And, according to Josh, page performance actually improves in most cases (he had a good explanation as to why — counter-intuitive as it seems). Very cool stuff. Currently, they’re targeting major brands, and the price point reflects this – “six figures” was the response when I asked about cost for deploying the solution on a handful of domains. Ouch.
  • DialogCentral – this is actually an app/service from OpinionLabs, and I have no idea what kind of traction it will get. But, as I stood chatting with the OpinionLabs CIO, I pulled out my Droid and had a complete DialogCentral experience in under a minute. The concept? Location-based services as a replacement for “tell us what you think” postcards at physical establishments. You fire up their app (iPhone) or their mobile site (dialogcentral.com will redirect to the mobile site if you visit it with a mobile device). DialogCentral then pulls up your location and nearby establishments (think Foursquare, Gowalla, Brightkite-type functionality to this point), and then lets you type in feedback for the establishment. That feedback then gets sent to the establishment, regardless of whether the venue is a DialogCentral customer. Obviously, their hope is that companies will sign on as customers and actually promote the feedback mechanism in-store, at which point the feedback pipeline gets much smoother. It’s an intriguing idea — a twist-o’-the-old on all of the different “publicly comment on this establishment” aspects of existing services.
  • Clicktale – these guys have been around for a while, and I was vaguely familiar with them, but got an in-depth demo. They use client-side code (which, presumably, could be managed through Ensighten — I’m just sayin’…) to record intra-page mouse movements and clicks. They then use that data to enable “replays” of the activity as well as to generate page-level heatmaps of activities and mouse placement. Their claim (substantiated by research) is that mouse movements are a pretty tight proxy for eye movement, so you get a much lower cost / broadly collected set of (virtual) eye-tracking data. And, the tool has all sorts of triggering and filtering capabilities to enable homing in on subsets of activity. Pretty cool stuff.
  • ShufflePoint – this wasn’t an exhibiting vendor, but, rather, the main gist of one of the last sessions of the conference. The tool is a poor man’s virtual-data-mart enabler. Basically, it’s an interface to a variety of tool APIs (Google Analytics, Google Adwords, Constant Contact, etc. – Facebook and Twitter are apparently in the pipeline) that allows you to build queries and then embed those queries in Excel. I’ve played around with the Google Analytics API enough to get it hooked into Excel and pulling data…and know that I’m not a programmer. Josh Katinger of Accession Media was the presenter, and he struck me as being super-pragmatic, obsessive about efficiency, and pretty much bullshit-free (I found out after I got to the airport that a good friend of mine from Austin, Kristin Farwell, actually goes wayyy back with Josh, and she confirmed that this was an accurate read). We’ll be giving ShufflePoint a look!

Social Media Measurement

I was expecting to hear a lot more on social media measurement at the conference…but it really wasn’t covered in-depth. Jim Sterne kicked off with a keynote on the subject (he did recently publish a book on the topic, which is now sitting on my nightstand awaiting a read). And, there was a small panel early on the first day where John Lovett got to discuss the framework (which is fantastic) he developed with Jeremiah Owyang this past spring. But, other than that, there really wasn’t much on the subject.

MMM and Cross-Channel Analytics

Steve Tobias from Marketing Management Analytics conducted a session that focused on the challenges of marketing mix modeling (MMM) in a digital world. I felt pretty smart as he listed multiple reasons why MMM struggles to effectively incorporate digital and social media, because many of his points mirrored what I’ve put together on the exact same subject (to be clear, he didn’t get his content from me!). It was good to get validation on that front from a true expert on the subject.

Where things got interesting, though, was when Steve talked about how his company is dealing with these challenges by supplementing their MMM work (their core strength) with “cross-channel analytics.” By “cross-channel analytics,” he meant panel-based measurement. Again, I felt kinda’ smart (and, really, it’s all about me and my feelings, isn’t it?), as I keep thinking (and I’ve got this in some internal presentations, too), that panel-based measurement is going to be key in truly getting a handle on cross-channel/cross-device consumer interactions and their impact.

The People

One of the main reasons to go to a conference like eMetrics is the people — catching up with people you know, meeting people you’ve only “known” digitally, and meeting people you didn’t know at all.

For me, it was great to again get to chat with Hemen Patel from CRM Metrix, John Lovett from Analytics Demystified, Corry Prohens from IQ Workforce, and the whole Foresee Results gang (Eric F., Eric H., Chris, Maggie,…and more). And, it wound up being a really special treat to see Michelle Rutan, who I take credit for putting on the web analytics career path way back when we worked at National Instruments together…and she was presenting (as an amusing aside, I credit Michelle’s husband, Ryan — although they weren’t even dating at the time — as being pretty key to helping me understand the mechanics of page tagging; he’s credited by name in one of the most popular posts on this blog)!

I actually got to meet Stéphane Hamel in person, which was a huge treat (I saw a lot of other web analytics celebrities, but never wound up in any sort of conversation with them — maybe next time), as well as Jennifer Day, who I’ve swapped tweets with for a while.

Digital Analytics folk are good peeps. That’s all there is to it.

Twitter (and Twapper Keeper) Means More to Come!

I actually managed to have the presence of mind to set up a Twapper Keeper archive for #emetrics shortly before the conference started, and I’m hoping to have a little fun with that in the next week or two. We’ll see if any insights (I’m not promising actionable insights, as I’ve decided that term is wildly overused) emerge. I picked up a few new people to follow just based on the thoroughness and on-pointed-ness of their tweets — check out @michelehinojosa (who also is blogging her eMetrics takeaways) if you’re looking to expand the list of people you follow.

It was a good conference!

Adobe Analytics, Analytics Strategy, General, Reporting

Presentations from Analytics Demystified

This week is somewhat bittersweet for me because it marks the very first time I have missed an eMetrics in the United States since the conference began. And while I’m certainly bummed to miss the event, knowing that my partner John is there representing the business makes all the difference in the world. If you’re at eMetrics this week, please look for John (or tweet him at @johnlovett) and say hello.

If you’re like me and not going to the conference, perhaps I can interest you in one of the four (!!!) webcasts and live events I am presenting this week:

  • On Tuesday, October 5th I will be presenting my “Web Analytics 201” session to the fine folks at the Nonprofit Technology Network (NTEN), who we partner with on The Analysis Exchange. You need to be an NTEN member to sign up, but if you are, I’d love to talk with you!
  • On Wednesday, October 6th I will be doing a free webcast for all our friends in Europe talking about our “no excuses” approach towards measuring engagement in the online world. The webcast is sponsored by Nedstat (now part of comScore), and all attendees will get a free copy of our recent white paper on the same topic.
  • Also on Wednesday, October 6th (although at a slightly more normal time for me) I will be presenting our Mobile (and Multi-channel) Measurement Framework with both our sponsor OpinionLab and a little consumer electronics retailer you may have heard of … Best Buy! The webcast is open to everyone and all attendees will also get a copy of our similarly themed white paper.
  • On Thursday, October 7th I will be at the Portland Intensive Social Media workshop presenting with Dean McBeth (of Old Spice fame) and Hallie Janssen from Anvil Media. I will be presenting John and Jeremiah’s Social Marketing Analytics framework and am pretty excited about the event!

All in all, it promises to be a very busy week presenting content, so I hope to hear from some of you on the calls or see you in person on Thursday.

Adobe Analytics, General

Hidden SiteCatalyst Features

(Estimated Time to Read this Post = 4 Minutes)

One of the funny things about SiteCatalyst (you will notice I can’t yet bring myself to call it Adobe SiteCatalyst!) is that there are some really cool features that are hidden. In some cases, it almost seems like someone has gone out of their way to hide them, but I like to look at these “hidden gems” as a sort of rite of passage. In this post I will share some of the ones I have found and hope that maybe you know of others so that all of us can learn! Also, if you haven’t read my old blog post on SiteCatalyst Time Savers, I encourage you to do so!

The Magic Triangle and Checkboxes
If you are like me, it may seem like you spend most of your day adding/removing metrics from reports! This can be a very time-consuming process, so you might as well be as efficient as possible. However, I often find that new SiteCatalyst users add extra steps to the process because they don’t know a few easy tricks in the Metrics window. The first trick is that you can change the column that is used for sorting by clicking the [very] little triangle next to each metric. It amazes me how many people add metrics, wait for the report to load, and then click on a column to sort and wait for the report to load again! Multiply that by twenty reports and it becomes a real time suck! Instead, simply click the triangle until it turns green (soon to be Adobe red?) and you are done!

But wait! There’s more…You will also notice that there are a bunch of check boxes next to each metric. Those check boxes are used to choose which metrics you want to graph with your report. You don’t have to graph every metric in the report, which may confuse your audience. Also, I find that many clients don’t take advantage of the fact that you can display two graphs per report. To do this, all you need to do is check one of the boxes on the left side and one on the right. This is helpful if some of your metrics are numbers and others are percentages. It is the closest SiteCatalyst comes to the secondary axis you may be used to in Excel.

Remove Subrelation BreakDowns
If you frequently use eVar Subrelation reports, you may find that after breaking one eVar down by another eVar, you want to go back to the report before it was subrelated. For example, let’s say you have opened a Traffic Driver report and broken it down by Offer Type as shown here:

Now let’s say you change the date range and some other report settings and then decide you want to just see Traffic Driver Type by itself again. Unfortunately, if you use the trusty “Back” button in your browser, you will have to re-do all of those customized settings. However, there are actually two ways to remove this subrelation without losing any work.

The first way to do this is to click on the “Broken Down by:” link shown in red above. Once you click on this, you will see a list of all of your variables and you can choose the bottom-most one labeled “None.” The other way is to click the green magnifying glass icon you used to create the Subrelation and do the same thing as shown here:

Double Your Searching Pleasure
Another thing I have noticed that a lot of SiteCatalyst users don’t know is that you can add search criteria to two different variables if you are using a Subrelation report. To do this, click on the Advanced Search link and then you can use the drop down boxes to choose which variable/search term combination you want:

In this case, I have chosen to filter for all Traffic Driver Types containing “SEO” and can proceed to enter my search criteria for Offer Type…

Inherit Segments
If you use DataWarehouse or ASI, you probably spend a lot of time creating Segments. If so, you may find times where you want to re-use some parts of a segment you have already created. When I first started using SiteCatalyst, I did this by printing out my segments and re-creating them manually. This is both time-consuming and prone to error, so I found the trick to do this more efficiently:

There it is! See how easily you can copy an existing segment? Do you see it? If not, would you believe me if I told you that there are actually two different ways to re-use segments on the above screen?

The first way to do this is to click the icon to the right of the Segment title. This will pop up a new window which allows you to pick an existing segment you want your new segment to be based upon.

The second way to do this is to use the Segment Library. You can access the library by clicking its name next to the “Components” item. The Library is used to store commonly used segment building blocks. In the example below, I have created a Page View container that looks for Pages where the IP Address Geography is in the United States. By dragging this over to the Library, I can re-use this anytime I am creating a new segment.

Filter Report Suites
If you have Admin rights to your SiteCatalyst implementation and deal with a lot of report suites, the Admin Console quickly becomes one of your best friends. One of the most time-consuming Admin Console tasks is finding the report suites you are looking for among all of your report suites. I frequently see people scanning up and down over and over, hunting for the report suites they need. Fortunately, there is a much better way that is somewhat hidden – the “Saved Searches” feature of the Admin Console. With this feature, you can create a filter that finds report suites matching your criteria; even report suites you add later will show up in the saved search if they match.

Here is a real-life example. When I joined Salesforce.com, we had a lot of report suites and I began creating new ones. When I created the new report suites, I simply added the word “New” to their titles. Once I did this, I clicked on the “Add” link within the Saved Searches area of the report suite manager and created a “Saved Search” rule like this:

Keep in mind that this is a very basic rule. You can actually add multiple criteria items and can build rules that take into account any of the following report suite criteria:

Lastly, if you are an Admin, be sure to read my past blog post with even more Admin Console Tips.

Final Thoughts
So there you have it. As you can see, none of these items are critical showstoppers, but I have found that knowing them can help speed up your day and give you SiteCatalyst bragging rights! Do you know of others? If so, please share them here as comments!

Analytics Strategy, Excel Tips, Presentation

Data Visualization Tips and Concepts (Monish Datta calls it "stellar")*

Columbus Web Analytics Wednesday was sponsored by Resource Interactive last week, and it was, as usual, a fun and engaging event:

Web Analytics Wednesday -- Attendees settling in

We tried a new venue — the Winking Lizard on Bethel Road — and were pretty pleased with the accommodations (private room, private bar, very reasonable prices), so I expect we’ll be back.

Relatively new dad Bryan Cristina had a child care conflict with his wife…so he brought along Isabella (who was phenomenally calm and well-behaved, and is cute as a button!):

Bryan and Isabella

I presented on a topic I’m fairly passionate about — data visualization. The presentation was well-received (Monish Datta really did tweet that it was “stellar”) and generated a lot of good discussion. I had several requests for copies of the presentation, so I’ve modified it slightly to make it more Slideshare-friendly and posted it. If you click through on the embedded version below, you can see the notes for each slide by clicking on the “Notes on Slide X” tab underneath the slideshow, or you can download the file itself (PowerPoint 2007), which includes notes with each slide (I think you might have to create/log in to a Slideshare account, which it looks like you can do quickly using Facebook Connect).

I had fun putting the presentation together, as this is definitely a topic that I’m passionate about!

* The “Monish Datta” reference in the title of this post, while accurate, is driven by my never-ending quest to dominate search rankings for searches for Monish. I’m doing okay, but not exactly dominating.

Reporting

Department Store KPIs (an analogy)

A couple of weeks ago, I had a conversation with the newest member of the analytics team at Resource Interactive, Matt Coen. I shared with him my “Measuring digital marketing is like measuring the Mississippi River” analogy, and he, in turn, shared with me his department store analogy. I’m a big fan of using stories and analogies to get across fundamental measurement concepts, so, with his permission, I’m passing along his perspective (and, of course, in the translation from a verbal story to the written word, I’m finding that I’m taking some liberties!).

The story is a great illustration of two things:

  • How key performance indicators (KPIs) generally cannot live in isolation – driving a single KPI to a certain result is easy, but businesses operate on more than one dimension (for instance, total sales can be boosted by dropping the price well below cost…but that kills profitability)
  • Why no company can have a single set of KPIs. The appropriate KPIs depend on what and who is being measured.

Onto the Story

Let’s take a fictional department store. At this store, each department has a department manager who is responsible for all aspects of the department, including the department’s P&L. In addition, all of the departments have a KPI regarding inventory turnover – if any product sits on the shelves for too long, the store loses money. All of the departments have this KPI because, overall, the store has an inventory turnover KPI.

The office supplies department manager is seeing his inventory turnover suffer, and, by digging into the data, he realizes that pens are killing him – no one is buying them, and it’s hurting his turnover rate.

He goes to the store manager and tells him, “I’m having trouble moving pens, and that’s hurting my inventory turnover rate. You may not be seeing it at the overall store level, but it’s got to be negatively impacting that KPI. I need to move pens to the checkout line display.”

The manager scratches his head and agrees to the change – inventory turnover is one of his KPIs, the department manager is being data-driven, and he’s even come to the store manager with a proposed solution! Woo-hoo! He promptly instructs his team to remove the candy from the checkout lines and replace it with pens.

Sure enough, pen sales pick up, and the department manager is thrilled.

But, the candy department manager immediately shows up in the store manager’s office and tells him, “My sales are way below target. When I developed my forecast, it was with the assumption that candy would be at the checkout lines. It’s a major impulse buy and that’s where 25% of my department sales occur!”

The store manager really didn’t need this additional headache. He was already seeing a dip in the overall store margin, and he’d realized that he might have acted too hastily when responding to the office supplies department manager’s request: not only is candy much more of an impulse buy – so the increase in pen sales didn’t make up for the loss in candy sales – but candy is also a higher-margin product.

When the store manager agreed to the change, he was making a decision based on how it would impact someone else’s KPIs. And, he focused on a single KPI – inventory turnover – rather than complementary KPIs – inventory turnover and margin.

This analogy can be applied to any number of marketing scenarios. An easy one is a web site, where the owner of a niche site section makes a case for featuring that section very prominently on the home page (the department store checkout line display) in the interest of driving more traffic to his section.

It’s a useful tale!

Analytics Strategy

Minimize Robot Traffic

Robots are cool. I like robots when they build cars, try to plug oil spills, and clean carpets. The only types of robots I don’t like are the ones that hit websites repeatedly and throw off my precious web analytics data! Do you have a problem with these types of robots? Would you know how to tell if you do? I find that many web analytics customers don’t even know how to check this, so in this post I will share what I do to monitor robots and hope that others out there will share other ways they deal with robots.

Why Should I Care About Robots?
This is often the first question I get. Who cares? Here are my reasons for caring about minimizing robots hitting your site:

  1. If you use Visits or Unique Visitors as part of any of your website KPIs (i.e. Revenue/Unique Visitor), you should care because robots are inflating your denominator and dragging your conversion rates down
  2. If you are tasked with reducing Bounce Rates on your site, you should care, as robots will often be seen as bounces
  3. Omniture (and other web analytics vendors) often bill you by website traffic (server calls), so you may be paying $$$ for junk data
  4. Oftentimes web analytics KPIs have razor-thin differences month over month, and having a lot of garbage data can mean the difference between making a good and a bad website business decision

Do I Have a Problem?
The first step is to identify if you have a problem with robots. Unfortunately, SiteCatalyst does not currently have an “out-of-the-box” way to alert you if you have a problem (@VaBeachKevin has added this to the Idea Exchange so please vote!), but in the meantime, here is my step-by-step approach to determining this:

  • Create a recurring DataWarehouse report that sends you Page Views and Visitors for each IP address hitting your site (if you store the Omniture Visitor ID in an sProp, I would use that in place of IP address). This can be daily, weekly or monthly depending on how much traffic your website receives. I sometimes add the Country/City as well (you’ll see why later).

  • When you receive this report, it should look something like this:

  • Once you have the data, create a calculation which divides Page Views by Visitors and then sort by that column (if you have a lot of data from different days/weeks, you can create a pivot table); see the sketch after this list for a scripted version. The result should look like the report below, where you will start to see which IP addresses are viewing a lot of pages on your site per visitor. Keep in mind that this doesn’t mean they are all bad. It is common for small companies or individuals to share IP addresses. The goal of this step is just to identify the IP addresses that might be issues. In the example below, you can see that the top two IP addresses appear to be a bit different than the rest. While it may make you feel good that these unique visitors liked your website so much they viewed thousands of pages each, you might be fooling yourself!

  • Once you have this list, I like to do some research on the top IP address offenders. You can do this via a basic Whois IP lookup or you can invest in a reverse IP lookup service.
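If you’d rather script the pivot-table step, the ratio calculation is a few lines in Python with pandas. A sketch, with the column names assumed to match the DataWarehouse export:

    import pandas as pd

    # Assumed columns in the export: "IP Address", "Page Views", "Visitors"
    df = pd.read_csv("datawarehouse_export.csv")

    # Pages per visitor is the tell-tale ratio; suspiciously high values
    # are worth a Whois lookup before you label them robots
    df["Pages per Visitor"] = df["Page Views"] / df["Visitors"]
    suspects = df.sort_values("Pages per Visitor", ascending=False).head(20)
    print(suspects[["IP Address", "Page Views", "Visitors", "Pages per Visitor"]])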

What Do I Do If I Find Robots?
If after reviewing the top offending IP addresses you find that you do, in fact, have a robot hitting your site, you have a few options:

  1. Work with your IT group to exclude these IP addresses from hitting your website. This is your best option since it will be the most reliable and reduce your web analytics server call cost.
  2. Work with Omniture’s Engineering Services team to create a DB Vista Rule that will move these website hits to a new report suite so it will not pollute your data. The best part of this option is that you don’t have to engage with your IT team and you can add/remove IP addresses anytime you want via FTP. Unfortunately, you will still be hit with server call charges for this (not to mention the cost of the DB Vista Rule!), but if you also pass data to Omniture Discover, you might save money there by not passing bad data to Discover.
  3. Work with Omniture’s Engineering Services team to build a custom solution for dealing with robots…

Employee Traffic
While I don’t want to imply that your co-workers are robots, I wanted to mention employee traffic in this post as well, since it is tangentially related. I find that many Omniture customers don’t exclude their own employees from their web analytics reports. This can be a huge mistake if you have a lot of employees or have employees who actively use the website. For example, at my employer (Salesforce.com), we use our website to log into our internal systems, which are all run on Salesforce.com! This means that we have thousands of employees hitting our website every day to log in to our “cloud” applications, and that traffic should not count towards our marketing/website goals. Therefore, we manually exclude all employee traffic by IP address to keep it from impacting our KPIs. While we don’t consider this to be robot traffic, we address it in the same manner by passing employee traffic to its own report suite. One cool by-product of placing employee traffic in its own report suite is that you can see how often your own employees are using your website, so you can show management that the dollars they give you serve multiple audiences!

Final Thoughts
As I stated in the beginning of this post, this is just one way to investigate and deal with robots. If you have other techniques, please share them here! Thanks!

Analytics Strategy

Free white paper on Tag Management Systems

This last week I was in London thanks to the good graces of our friends at Tealeaf to deliver a keynote speech at their EMEA customer conference. After the event, both reporters and conference attendees asked me “What is the most important technology trend in web analytics today?”

I have been asked this hundreds of times in my career as an analyst and consultant and the answer used to be tricky. In the past I’ve opined “multichannel integration”, “segmentation”, “application usability” and even “none, it’s about people and process, not technology.”

This time, however, my answer was clear: Tag Management Systems.

Tag management has become a nightmare for many companies. As we outlined in our white paper with ObservePoint on the need for a “Chief Data Officer,” tagging and data collection have gotten out of control in companies of all sizes. Information Technology supports one set of tag-based tools, marketing deploys their own stuff via content management systems (CMS), advertising and other individual stakeholders drop their tags, and before you know it you have a dozen or more scripts included at various points across your site.

In a way, because of fragmentation in the marketplace, this situation was inevitable. But it doesn’t have to be this way.

Emerging tag management systems (TMS) are rapidly transforming the data capture and technology deployment landscape, replacing inefficient, individual installations with a “one stop shop” able to manage any number of tag-based technologies via a single user interface. Early adopters of these systems are reporting a profound transformation of both their ability to manage data capture and their relationship with Information Technology.

From a web analytics perspective this is what we call a “win/win.”

One of these vendors is Ensighten, a company founded by a group of folks who have a long established reputation in digital measurement. Their CEO Josh Manion and I go back pretty far, and so when he told me about their platform I was immediately intrigued but somewhat skeptical.

I had already seen nearly all of the competing solutions in the market and walked away with a variety of concerns. I’d even gone so far as trying to establish an “Open Tag Alliance” initiative with one vendor, which unfortunately collapsed due to time constraints.

Needless to say, I was impressed with Ensighten, so much so that I asked Josh if we could partner with him (something we rarely do with technology vendors). He agreed, and so we are proud to announce that we are the first deployment and integration partner to sign up with Ensighten.

In support of our partnership we agreed to write a paper detailing what we see as the advantages of tag management systems. Titled “The Myth of the Universal Tag and the Future of Digital Data Collection”, this short paper outlines the need for TMS, the rationale behind deployments, and the opportunity for return on the investment. The paper is freely available now at the Ensighten web site or you can write us directly for a complimentary copy.

Readers should note that there are a handful of tag management solutions in the market today. At Analytics Demystified we believe the growth in the sector is validation of the opportunity — each of these companies has a good story to tell and an expanding customer base.

You should consider a tag management system if you are:

  • Frustrated with the “one tag, one project, one timeline” model of tag deployment;
  • Switching vendors and looking to gain leverage over future deployments;
  • Heavily invested in Flash but have long struggled to measure the technology;
  • Managing globally distributed sites but have little centralized control over tags;
  • Looking to add Q/A and workflow management to your tag deployments;
  • Concerned at all about the quality and accuracy of your web analytics data.

If any of these criteria apply to you, I would strongly encourage you to give John or me a call. We’ll be more than happy to walk you through the current tag management system vendor landscape at no charge and point you towards whatever solution seems right for you.

We welcome you to the age of tag management systems and we hope you will join us in welcoming Ensighten to the market.

Analysis, Reporting

Dear Technology Vendor, Your Dashboard Sucks (and it’s not your fault)

Working in measurement and analytics at a digital marketing agency, I find myself working with a seemingly (at times) countless number of technology platforms – most of them are measurement platforms (web analytics, social media analytics, online listening), but many of them are operational systems that, by their nature, collect data that needs to be reported and analyzed (email platforms, marketing automation and CRM platforms, gamification systems, social media moderation systems, and so on). And, not only do I get to work with these systems in action, but one of the many fun things about my job is that I constantly get to explore new and emerging platforms as well.

During a recent presentation by one of our technology partners, I had a minor out-of-body experience where I saw this dopey-voiced Texan turn into something of a crotchety crank. He (I) fairly politely, and with (I hope) a healthy serving of humor poured over the exchange, lit into the CEO. I didn’t know where it came from…except I did (when I pondered the exchange afterwards).

When it comes to reporting, technology vendors fall into the age-old trap of, “When all you have is a hammer, all the world looks like a nail.” The myopia these vendors display varies considerably – some are much more aware of where they fit in the overall marketing ecosystem than others – but they consistently don blinders when it comes to their data and their dashboards.

The most important data to their customers, they assume, is the data within their system. Sure, they know that there are other systems in play that are generating some useful supplemental data, and that’s fantastic! “All” the customer needs to do is use the vendor’s (cumbersome) integration tools to bring the relevant subsets of that data into their system. “Sure, you can bring customer data from your CRM system into our web analytics environment. I’ll just start writing up a statement of work for the professional services you’ll need to do that! What? You want data from our system to be fed into your CRM system, too? I’ll get an SOW rolling for that at the same time! Did I mention that my youngest child just got into an Ivy League school? Up until five minutes ago, I was sweating how we were going to pay for it!”

The vendors – their sales teams – tout their “reporting and analytics” capabilities. They frequently lead off their demos with a view of their “dashboards” and tout how easy and intuitive the dashboard interface is! What they’re really telling their prospective customers, though, is, “You’ll have one more system you’ll have to go to to get the data you need to be an effective marketer.” <groan>

Never mind the fact that these “dashboards” are always data visualization abominations. Never mind the fact that they require new users to climb a steep learning curve. Never mind that they are fundamentally centered around the “unit of analysis” that the system is built for (a content management system’s dashboard is content-centric, while a CRM system’s dashboard is customer-centric). They only provide access to a fraction of the data that the marketer really cares about most of the time.

Clearly, these platforms need to provide easy access to their data. I’m not really arguing that dashboard and reporting tools shouldn’t be built into these systems. What I am claiming is that vendors need to stop believing (and stop selling) that this is where their customers will glean the bulk of their marketing insights. In most cases, they won’t. Their customers are going to export the data from that system and combine it (or at least look at it side by side) with data from other systems. That’s how they’re going to really get a handle on what is happening.

The CEO with whom I had the out-of-body experience that triggered this post quickly and smartly turned my challenge back on me: “Well, what is it, ideally, that you would want?” I watched myself spout out an answer that, now 24 hours later, still holds up. Here are my requirements, and they apply to any technology vendor who offers a dashboard (including web analytics platforms, which, even though they exist purely as data capture/reporting/analysis systems…still consistently fall short when it comes to providing meaningful dashboards – partly due to lousy flexibility and data visualization, which they can control, and partly due to the lack of integration with all other relevant data sources, which they really can’t):

Within your tool, I want to be able to build a report that I can customize in four ways:

  • Define the specific dimensions in the output
  • Define the specific measures to include in the output
  • Define the time range for the data (including a “user-defined” option – more on that in a minute)
  • Define whether I want detailed data or aggregated data, and, if aggregated, the granularity of the trending of that data over time (daily, weekly, monthly, etc.)

Then, I want that report to give me a URL – an https one, ideally – onto which I can tack login credentials such that the URL will return the data I want any time I refresh it. I want to be able to drop that URL into any standalone reporting environment – my data warehouse ETL process, my MS Access database, or even my MS Excel spreadsheet – to get the data I want returned to me. And I want to be able to pass a date range in with that request so that I can pull back the range of data I actually need.
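In other words, something as simple as the request below. Everything in this sketch (the URL, the credential parameter, the query parameters) is hypothetical; the point is the shape of the request, not a real vendor API:

    import requests

    # A stand-in for whatever report URL the vendor would hand me
    REPORT_URL = "https://vendor.example.com/reports/my-custom-report"

    params = {
        "token": "MY_API_TOKEN",     # credentials tacked onto the URL
        "start_date": "2011-01-01",  # the user-defined date range
        "end_date": "2011-01-31",
        "granularity": "daily",      # detailed vs. aggregated output
    }

    response = requests.get(REPORT_URL, params=params, timeout=30)
    response.raise_for_status()

    # From here, the data can flow into Excel, Access, or a data warehouse ETL job
    print(response.text)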

Sure, in some situations, I’m going to want to hook into your data more efficiently than through a secure http request – if I’m looking to pull down monstrous data sets on a regular basis – but let’s cross that “API plus professional services” bridge when we get to it, okay?

I’m never going to use your dashboard. I’m going to build my own. And it’s going to have your data and data from multiple other platforms (some of them might even be your competitors), and it’s going to be organized in a way that is meaningful to my business, and it’s going to be useful.

Stop over-hyping your dashboards. You’re just setting yourselves up for frustrated customers.

It’s a fantasy, I realize, but it’s my fantasy.

Analytics Strategy, Conferences/Community, General

Web Analysts Code of Ethics …

Following up on last week’s thread about how the web analytics industry is on the cusp of becoming our own worst enemy as the tide of public opinion increasingly turns against online and behavioral analytics, I wanted to make good on my offer to help the Web Analytics Association. I fully support the efforts of the Association to create a solid community for web analytics professionals around the world and have long been a contributor to their work, be it turning the Web Analytics Forum (at Yahoo! Groups) over to WAA management, opening the doors for WAA participation in Web Analytics Wednesday, or providing other “behind the scenes” support when asked.

To this end I composed a preliminary “Web Analysts Code of Ethics” that I had planned to work on here in my blog (with you all) and then turn over to the Web Analytics Association. Much to my surprise, according to my partner John Lovett (who is a Board member), the Board of Directors loved the preliminary code and asked to have it published at the Web Analytics Association blog.

Easy enough, and so I would like to redirect all of you over to the Association blog, where I and the WAA both would like to hear what you have to say about this early effort. The comments have already started over there, and of course if you’re more comfortable commenting here then by all means, I welcome that.

As I mentioned a few times in my recent Beyond Web Analytics podcast (not live until early on September 13th), I believe that we need to start advocating on our own behalf, and I see this code as one small step in the right direction. Hopefully the WAA Standards Committee, the Board, and all of you out there, whether you’re in the Association or not, will join me in this effort to help the wider world understand what we all do (and what we will and will not do).

So go do two things right now:

  1. Read and comment on my “Web Analysts Code of Ethics” at the WAA Blog
  2. Listen to my interview with Adam Greco and Rudi Shumpert at Beyond Web Analytics

Adobe Analytics, Analytics Strategy, General

Internal Search Term Click-Through & Exit Rates

Recently, I was re-reading one of Avinash Kaushik’s older blog posts on tracking Internal Search Term Exit Rates and realized that I had never discussed how to report on this using Omniture SiteCatalyst. In a past Internal Search post, I covered many different things you can do to track internal search on your site, but did not cover ways to see which terms are doing well and which are not. In this post I will share how you can see this so you can determine which search terms need help…

Why Track Internal Search Term Click-Through & Exit Rates?
So why should you do this? In the era of Google, we are all slowly being trained to find things through search. Many of my past clients saw the percentage of website visitors using search rise over the past few years. In addition, Internal Search and Voice of Customer tools are some of the few out there where you can see the intent of your visitors. Unfortunately, most websites have horrible Internal Search results, which can lead to site exits. In my previous Internal Search post I demonstrated how to track your Search Results Page Exit Rate, but that only shows you whether you have a problem or not. If you do have a high Search Results Page Exit Rate, the next logical step is to determine which search terms your users think have relevant search results and which do not. Note that this is not meant to show you which terms lead directly to website exits, but rather which terms cause visitors to use or not use the search results you offer them after they search on a particular term.

How Do You Track Internal Search Term Click-Through & Exit Rates?
Ok, so how do you do this? Follow these implementation steps:

  • Make sure that you are setting a Success Event when visitors conduct Internal Searches on your website. Hopefully you are already doing this so in many cases this step will be done!
  • Make sure that you are capturing the Internal Search Term the visitor searched upon in an eVar variable. Again, you should be doing this (if not, shame on you!).
  • Here is where we get into uncharted territory. The next step is to set a new Success Event when visitors click on one of the items on the search results page. Depending upon the technology you use for Internal Search, this could be hard or easy. Regardless of how you actually code it, the key here is to set the second Success Event (I call it Internal Search Results Clicks) only if visitors click on a search result item (not if they click on a second page of search results or go to another page through other navigation). It is also critically important that you only set this Search Results Clicks Success Event once per search term! Do not set it every time a visitor clicks on one of the search results after using the “Back” button. If you don’t do this correctly, your Click-Through and Exit Rates will be off. This could take a few iterations to get right, but stick with it!
  • Once you have both the Internal Searches and Internal Search Results Clicks Success Events set, you can create a Calculated Metric that divides Internal Search Results Clicks by Internal Searches to see the Internal Search Click-Through Rate as shown here:

  • From there you can create the converse metric, which subtracts the Internal Search Click-Through Rate from “1” to come up with the Internal Search Exit Rate as shown here:

  • After this is done, you can open the Internal Search Term eVar report and add all three metrics so you see a report like this:
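As promised in the implementation steps, here is a minimal sketch of what the result-click tagging might look like, assuming an old-school JavaScript (H-code) implementation. The report suite ID, the event/eVar numbers, and the alreadyCredited/markCredited helpers are all hypothetical placeholders (the helpers stand in for whatever once-per-term guard you build, cookie-based or otherwise):

// Hypothetical sketch: eVar1 = Internal Search Term,
// event2 = Internal Search Results Clicks (credited once per term).
function trackSearchResultClick(searchTerm) {
  var s = s_gi('myreportsuite');          // your report suite ID
  s.linkTrackVars = 'eVar1';
  s.linkTrackEvents = s.events = '';
  s.eVar1 = searchTerm.toLowerCase();
  // Credit the Click-Through only once per search term so "Back"-button
  // re-clicks don't inflate the Click-Through and Exit Rates:
  if (!alreadyCredited(searchTerm)) {
    s.linkTrackVars = 'eVar1,events';
    s.linkTrackEvents = s.events = 'event2';
    markCredited(searchTerm);
  }
  s.tl(true, 'o', 'Internal Search Result Click');
}

If Internal Searches is event1, the two calculated metrics above are then just event2/event1 (Click-Through Rate) and 1-(event2/event1) (Exit Rate).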

In this case, it looks like the “Zune” Internal Search term might need some different search result content, as it has a much higher exit rate than the others. Another cool thing you can do is create a report that trends the Internal Search CTR % or Exit % for specific Internal Search terms so you can see whether they have been good/bad over time. Also, if you use SAINT Classifications to group your Internal Search Terms into buckets, you can see the report above for groups of Internal Search terms. If you vote for my idea in the Idea Exchange, you may eventually be able to set SiteCatalyst Alerts to be notified if your top Internal Search Terms have spikes in their Click-Through or Exit Rates. You can also segment your data to see how the Internal Search rates differ when people come from Paid Search vs. SEO, etc… and even use Test&Target to try out different promotional banners on your search results page…

Finally, don’t forget that when you create a new calculated metric like the Internal Search CTR % metric described above, you also get the bonus of seeing this metric across your entire website under the My Calc Metrics area of the SiteCatalyst toolbar. Simply find this new metric and click on it and you can see your overall Internal Search Click-Through Rate regardless of internal search term. Your report will look something like this:

For The True Web Analyst Geeks
If you were bothered when I mentioned above that you should only set the Search Results Clicks Success Event once per search term, then you are my kind of person (please apply for a job with me!)! You were probably saying to yourself: “If I only count once per search term, how will I know which search terms get visitors to click on multiple search result links?” Right you are! That could be valuable information. If you want to see that as well, all you have to do is set a second Success Event each time a visitor clicks on a search result [I call this Internal Search Result Clicks (All)]. Then you can compare how often visitors clicked at least one search result for a term to how many search result clicks that term generated in total. Here is a sample report:

In this example, you can see that the search term “api” had one click only in either scenario, but the search term “chatter” had people click on it 100% of the time and 5 times they clicked on two search result items. If you want, you can create another Calculated Metric that divides the Internal Search Result Clicks (All) by the # of Internal Searches to see how many search result clicks each term averages. In the case of “chatter” above, it would be 2.25 search result clicks per search!
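(To check the math on “chatter”: working backwards, there must have been 4 searches, each of which produced at least one result click, plus the 5 extra clicks on additional results, for 9 total clicks; 9 / 4 = 2.25.)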

Final Thoughts
If Internal Search is important to your site, make sure you are tracking it adequately so you can improve it and increase your overall website conversion. Do you have any other cool Internal Search tracking tips I haven’t covered? If so, leave a comment here…

Analytics Strategy

Acquisitions Aplenty! comScore Buys Nedstat

We’re certainly on an acquisition hot streak here in our cozy little measurement industry. This week marked yet another buy-up of a web analytics company: Netherlands-based Nedstat was acquired by comScore. The sale price was reported at $36.7 million USD, which brings the tally of measurement buy-outs, including the $1.8 billion Omniture acquisition last year, to nearly $2.5 billion by my count. Those are some good multiples on revenue, since my Forrester Web Analytics Forecast didn’t peg market spending to hit even $1 billion until sometime in 2015. Granted, Omniture, Unica, and to some extent Coremetrics were offering more than just web analytics in their product portfolios. But regardless, measurement technologies are all the rage these days and, finally, big businesses are taking note of the value of web analytics.

Foreshadowing

Some might say that comScore and Nedstat, while serving similar industries for different purposes, were running on parallel paths and that an acquisition was a plausible outcome. But before I dive into that hypothesis, first I’ll toot my own horn by mentioning that I went on record predicting this one. The good fellas at Beyond Web Analytics interviewed me on the topic of market consolidation just after the IBM acquisition of Unica and we had a good chat about it here on the podcast. The closing question asked me to look into my crystal ball and guess who would be the next acquirer in the analytics market. While I didn’t guess that it would be comScore, I did speculate that there are some very interesting and valuable technologies that exist in Europe. I mentioned both Webtrekk in Germany and Nedstat as companies that would make appealing acquisition targets. Clearly comScore must have been listening (c’mon, I jest). But one of my clients across the pond also mentioned a couple of weeks ago that Nedstat’s CEO was quoted in a German newspaper as saying that there is no longer a place for a dedicated web analytics company in this environment. I’ve been saying this since early 2009, but coming from a chief officer of a successful technology operation…Foreshadowing indeed.

The Red Herring

So, bright and early on the morning of the acquisition, my friend Jodi McDermott reached out to me with the news by pointing me to the press release on the deal, and I owe her a big thanks for that. When we spoke later that morning, along with comScore founder and CEO Magid Abraham, the first question Jodi asked me was, “Were you surprised?” Now, the dirty little secret is that analysts can never show surprise, but heck yeah I was surprised that comScore was the buyer!?! I didn’t anticipate comScore because of their Unified Digital Measurement (UDM) solution, which currently handles over 500 billion transactions per month and is growing rapidly. So, they already had their own tag-based measurement solution. Additionally, just under a year ago comScore announced a strategic partnership with Omniture to deliver a newly created Media Metrix 360 solution predicated on UDM that would leverage a hybrid combination of Omniture page tags and comScore’s panel-based measurement.

It was brilliant actually, and demonstrated the first significant attempt to bring together advertising measurement with site-side data. Yet, just a month after this partnership was announced, Omniture was snatched up by Adobe, and I can only speculate that the momentum on the partnership was stymied. Don’t get me wrong, Media Metrix 360 still exists, and clients like Martha Stewart and the Wall Street Journal add marquee status to the initiative. Thus, I would expect that comScore will support Media Metrix 360 by continuing the partnership with Adobe’s Omniture Business Unit as well as continuing development on their own proprietary solution. Whatever they choose to do, these efforts – their own hybrid UDM tags and the Omniture relationship – created a red herring for me that had me looking elsewhere. Now the real question is… Was Nielsen surprised, and how will they counter? Sorry friends, my crystal ball is not that good.

The Plot Twister

I saved the best for last because here’s where the plot starts to get really interesting. comScore has stated that its acquisition interest in Nedstat is to better serve the media and publishing industries. Web analytics and site-side measurement have long been focused on the transaction, and sites that don’t have traditional online transactions are left to quantify success by custom-fitting solutions to meet their needs. With most web analytics solutions you’re forced to follow the conversion funnel through to a transaction (or not) and attempt to tie things together or launch remarketing efforts from there. But when there’s no transaction at the end of the visit, many traditional web metrics have very little resonance for the business.

Nedstat has long been focused on key topics like engagement and rich media measurement – metrics that matter to publishers. Now, with the backing of comScore, which has a stronghold within many media companies (not to mention a reserved line item in their budgets), they can create a very different value proposition for media companies looking to quantify metrics for their advertisers as well as optimize the experience for their visitors. I tend to agree with Magid, who stated that this new paradigm for publishers is likely to create a natural segmentation in the market. The stalwart web analytics firms (albeit in their current incarnations) Omniture, Coremetrics, and Unica are working toward analytical systems that feed marketing automation. Now we’ve got the potential for something entirely different.

For these reasons I’m bullish on the acquisition. We have a new opportunity for web analytics where site-side measurement meets audience (panel based) measurement. It’s the collision course that many have been talking about. And it sets the stage for propelling measurement into next generation devices, apps and mobile platforms that don’t have transactional elements. It’s still too soon to say how this will play out, but I applaud Magid, Gian and the comScore team on their vision for creating a new measurement paradigm. And a big congrats goes out to Michael, Michiel, Fred, Ulrike and the entire Nedstat team for building a globally attractive solution. Bravo.

But these are just my thoughts…I may be way off…I may be crazy. Readers, do you agree that this new duo can impact enterprise measurement on a new level? I’d love to know what others think.

Analytics Strategy

Congratulations to Nedstat and comScore!

This summer’s web analytics acquisition season has heated up to the point where, when my phone rings and it’s John (who wakes up hours earlier) saying “comScore has acquired Nedstat,” my response is “of course they have!” Not to say this isn’t an exciting acquisition, but wow, the vendor landscape has changed a bunch this year …

John spent the morning on the phone with our friend Jodi McDermott and comScore’s CEO Dr. Magid Abraham talking about the decision for comScore to get more deeply into client-side measurement technology and he promises to have a more comprehensive post up in a day or two. I only wanted to weigh in and say “congratulations” to the entire team at Nedstat!

I have been lucky enough to have worked with Michael, Fred, Michiel, and Ulrike on a number of occasions and have produced two white papers with them (one on video, the other on mobile) with a third coming out in a few weeks. The management team and everyone I have interacted with at Nedstat are wonderful people and they will definitely add great value and expertise to the comScore family.

Again, watch for more detail and analysis from John in the coming days and congratulations to the comScore and Nedstat families from all of us here at Analytics Demystified.

Analytics Strategy, Conferences/Community, General

We are our own worst enemy …

Back in February of this year, in partnership with BPA Worldwide, Analytics Demystified published a white paper detailing the risks associated with the use of Flash Local Shared Objects (LSOs) in digital measurement. Titled “The Use of Flash Objects in Visitor Tracking: Brilliant Idea or Risky Business?”, the paper drilled down into how some companies are using Flash LSOs and offered the following guidance:

  1. Do not use Flash to reset browser cookies
  2. Disclose the use of Local Shared Objects
  3. Allow site visitors to disable Local Shared Objects

The first piece of advice turns out to be pretty important, since companies are now being sued over their use of Flash to reset browser cookies. MTV, ESPN, MySpace, Hulu, ABC, NBC, Disney, and others are being dragged into a lawsuit based on their use of Quantcast and Clearspring, which were identified by Soltani, et al. as using Flash LSOs to reset deleted browser cookies. The lawsuits allege a “pattern of covert online surveillance” and seek class action status.

Yikes.

Fortunately for Adobe, they do not seem to be one of the targets in these suits, which makes sense considering the position the company has taken regarding the use of Flash. In my interview back in April of this year with MeMe Rasmussen, Adobe’s Chief Privacy Officer, she explicitly stated:

“… the position we outlined in the FTC Comment on condemning the misuse of local storage, was specific to the practice of restoring browser cookies without user knowledge and express consent.  We believe that there are opportunities to provide value to our customers by combining Omniture solutions with Flash technology while honoring consumers’ privacy expectations.”

On the topic of consumer privacy and web analytics, following up my partner John’s response to the Wall Street Journal article on online privacy (“Be still my analytical heart”), I recently wrote a piece for Audience Development Magazine titled “You are all evil …” While a little tongue-in-cheek the article encourages marketers and business owners to:

  1. Have a rock-solid privacy policy
  2. Not use tracking software they don’t understand
  3. Know exactly what tracking software they have deployed
  4. Have a clear answer for “how and why do you track us?”
  5. Be transparent as hell when anybody asks what you’re doing

As I reflect back on the guidance we have provided in the past year, I run the risk of becoming quite depressed. None of our recommendations are surprising, revolutionary, or particularly earth-shattering … but not nearly enough companies are doing most of these very simple things. Given this, one possible outcome is becoming increasingly apparent …

We are going to get screwed.

Go back to Walt Mossberg’s 2005 assertion that “cookies are spyware” and the related conversation around cookie deletion and you will see a clear pattern: media (ostensibly acting in the best interest of consumers) points out that what we do is somehow devious … and we more or less ignore the problem, hoping it will go away.

My friend Bob Page once referred to something he called the “Data Chernobyl” … an unexpected and massive meltdown in consumer trust associated with the data that we collect, store, and use to make business decisions. When you think about it for just a little bit, the idea is terrifying … because everything we do depends entirely on our ability to collect, store, and use information about consumer behavior on the Internet.

Our livelihoods depend on everyone ignoring the fact that we track, understanding why we track, or getting something tangible out of the tracking we do. Sadly, we have never offered anything tangible, we have never really made an effort to explain what we do in the court of public opinion, and it is increasingly clear that the bright light shining on our trade isn’t going to fade anytime soon.

What’s worse is that we are collecting even more information across mobile, social, and other emerging channels, perfecting our ability to integrate that data into over-arching consumer data warehouses, and occasionally using techniques that even the most hard-hearted of web analysts get all geeked-out about.

We have become our own worst enemy.

Now, as I declared in the Audience Development piece, I simply do not believe that consumers are as freaked out about tracking online as the media makes them out to be … the data I have seen just doesn’t support that conclusion. But consumers aren’t the real problem: the real problem is the media, lawyers, and potentially the Federal Government. All three of these groups continue to generate page views, make money, and “protect the common man” (sic) by throwing our industry under the bus … and we aren’t doing anything in our defense.

Dumb, dumb, dumb.

People much smarter than I am have repeatedly stated that they don’t want to engage the media or “privacy police” in a conversation that they cannot possibly win. To a small extent this makes sense, but at some point I wonder if we are going to collectively end up looking like my four year old when he knows he’s made a mistake. My son gets away with it because he’s awesomely cute and I love him, but I am beginning to think the collective web analytics industry is not going to get away with mumbling and making lame excuses for much longer.

The advertising industry has the IAB and NAI, both of which appear to be responding to articles, lawsuits, and Congressional investigation on many of these issues. (If you haven’t seen it yet, have a look at this amazing “privacy matters” campaign the IAB is running.) But we are not the advertising industry, we are the web analytics and digital measurement industry, and we need to have our own voice, our own lobby, and our own representation.

Since the framework for this already exists, I am officially asking that the Web Analytics Association formalize and finalize their Industry Advocacy program and represent the digital measurement community in the forum of public opinion.

I have already volunteered to help with this effort under the Presidency of Alex Langshur and reiterate that commitment to the current Board of Directors.  The WAA needs to bring together corporate members and key practitioner representatives to quickly hash out a clear, concise, and practical position on the relationship between digital measurement technology and consumers. The current WAA Board is in perhaps the best position in years to make the decision to represent the needs of our community … but decisive action is required.

Without the WAA’s leadership on this issue I fear that over time we will lose the battle of public opinion and my tongue in cheek assessment of the “evilness” of our industry will be far less funny than it seems today.

Let’s not let that happen.

We are an awesome industry full of brilliant people.  The work we do is some of the most valuable but least understood in the interactive world.  I believe it is time to come out of the closet, accurately describe the value of the work we do, and stop shying away from a conversation we feel is stacked against us and a battle we are unsure that we can win.  If we don’t try, without a doubt, we will remain our own worst enemy.

Analytics Strategy, Conferences/Community

Do not miss this year's X Change conference!

What a crazy week it has been, what with client visits with John here in the West, web casts with the fine folks at Tealeaf and Unica, and the end of summer fast approaching at the Peterson household. I was so busy I wasn’t able to pay close attention to our X Change registrations and when I looked just now I realized something …

X Change 2010 is damn near sold out.

Thanks to some quick thinking from Joel, Grace, and Gary over at Semphonic we have a few more seats available than last year, but with nearly a month to go before we convene in Monterey, California at the beautiful Monterey Plaza Resort and Spa we have sold more seats than last year and last year was completely sold out!

You’re not gonna miss the X Change because you waited too long to sign up, are you? You aren’t going to risk missing out on the chance to discuss digital measurement in our intense and intimate conversation format with practice leaders and managers from amazing brands like Best Buy, ESPN, Expedia, Facebook, MTV, New York Times, Lowes, Turner Broadcasting, HP, Salesforce.com, Nike, Charles Schwab, Comcast, eBay, and NBC Universal, are you?

Seriously, don’t miss out.

I only wish I could make a list of all the great participant companies coming to this year’s event … but I can’t.  If I could you would see that by coming to this year’s X Change you would be joining some of the most respected brands in technology, media, healthcare, advertising, software, and retail in the world.  Worse, you would realize that missing X Change means not getting to hear first-hand how some of the greatest minds in the digital measurement industry are getting it done today.

You would be bummed.

Don’t be bummed, come to X Change 2010, September 20, 21, and 22 in Monterey, California. Register at our web site now or contact me directly for more information.

Adobe Analytics, General

Tracking Navigation

Unless your website is very basic, odds are that you use some sort of navigation to help visitors find website content. Usually navigation is in the header or left side of web pages. Inevitably, there will be times when you are asked how often and in what ways visitors are using navigation. In this post I will cover some common navigation questions and how to answer them.

Common Questions
So what are the common questions you may get around navigation? Here are some that I have been asked over the years:

  1. Which individual navigation links are clicked the most?
  2. Which navigation areas are clicked the most? This is usually related to the main section area, not individual links.
  3. From which pages are visitors using each navigation link?
  4. For what percent of website visits is navigation used?
  5. In what order do website visitors use navigation links?
  6. Which navigation links lead to key website success milestones being accomplished?

The following will show how to answer each of these questions:

Which individual navigation links are clicked the most?
In this scenario, people are looking to see which detailed navigation links are clicked the most. In the image below, this would represent such links as “Sales Cloud 2,” “Service Cloud 2,” “Custom Cloud 2,” etc…

To answer this question, you should have your developer write code that will pass the name of the link to a Traffic Variable (sProp) when a visitor clicks on each link in your navigation. In addition, I highly recommend that you have them include the high-level navigation area in the value passed to the sProp. For example, when a visitor clicks on “Sales Cloud 2” in the example above, I would pass the value of “products:sales cloud 2” (I always use lower case since sProps are case-sensitive) to the sProp. Passing the high-level area will ensure that your data is clean as there are times when the same link can occur more than once in a navigation structure. When this is complete, you can view a report that looks like this:
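The click handler feeding that report might look something like this minimal sketch, assuming an old-school JavaScript (H-code) implementation (the report suite ID, sProp number, and function name are all hypothetical):

// Hypothetical sketch: prop10 = Top Navigation Links sProp.
function trackNavClick(navArea, linkName) {
  var s = s_gi('myreportsuite');    // your report suite ID
  s.linkTrackVars = 'prop10';
  // Prefix the link name with its high-level area, all lower case:
  s.prop10 = (navArea + ':' + linkName).toLowerCase();
  s.tl(true, 'o', 'Navigation Click');
}

// Clicking "Sales Cloud 2" under "Products" records "products:sales cloud 2":
// trackNavClick('Products', 'Sales Cloud 2');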

Which navigation areas are clicked the most?
In this question, people are generally asking to see (in the example above) whether the “Products” section is clicked more than the “About” section and, if so, by how much. The good news is that if you have done the previous step correctly, you can answer this question by creating a SAINT Classification which rolls up the values in the preceding report into higher-level buckets. You can create this classification easily by exporting the above report to Microsoft Excel, splitting the column by the separator, and using the first part as the high-level navigation name. Here is what your SAINT file might look like:
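In essence, it is just a two-column, tab-delimited file, with the raw sProp value as the key and the high-level bucket as the classification (a real SAINT template adds a few header rows above this; the rows below simply illustrate the idea using values from this post):

Key	Navigation Area
products:sales cloud 2	products
products:service cloud 2	products
customers:india customers	customers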

After you create and process this SAINT file you will be able to see a new high-level navigation report that looks like this:

From which pages are visitors using each navigation link?
In this scenario, people at your company may want to know which top navigation link is most commonly clicked from the home page or from another page on your site. To see this, you need to have set up a Previous Page sProp. This sProp passes the name of the previous page to the current page, which allows you to create Traffic Data Correlations between it and any other Traffic Variable. In this case, once we have a Previous Page sProp, we can correlate it to the Top Navigation Link sProp shown above to see what navigation links are clicked from each page. For example, I can open up the Previous Page sProp within a report suite and then break it down by the new Top Navigation sProp…

…to see a report like this:

In this case, we can see that the “customers:india customers” Top Navigation link was only clicked 482 times from the home page.
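As an aside: if you don’t already have a Previous Page sProp, one common approach is the getPreviousValue plugin from the standard SiteCatalyst plugin library, called on every page load. A one-line sketch, where the prop number and cookie name are hypothetical placeholders:

// Inside s_doPlugins(s): carry the prior page name into the Previous Page sProp.
s.prop11 = s.getPreviousValue(s.pageName, 'gpv_p11', '');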

In addition, since this uses a correlation and correlations are bi-directional, you can also use this to find out all of the pages from which visitors clicked on a specific navigation link:

In this case, we can see that the “customers:india customers” link was clicked a total of 957 times and then see the breakdown of pages visitors were on when they clicked it. This can help your content people understand when visitors are reaching for the navigation… Finally, if you look closely, you can see that the “SFDC:in:homepage” shows the same 482 clicks referenced above, but in this case we can see that it accounts for 50% of all clicks this link gets across the entire website…

For what percent of website visits is navigation used?
In some cases, you may be asked how often website navigation is used (in general). One easy way to figure this out is to look at the total Page Views from the first SiteCatalyst report shown in this post and divide it by the number of website Visits. This can be done easily using the ExcelClient, where you can pull a Visits data block and the report above and divide the two. However, if you think you might need this on a recurring basis and if trending is important, I will show you another way to do this. When visitors click on navigation links, in addition to passing the link name to a Traffic Variable as shown above, set a “Navigation Clicks” Success Event. Once you have a Success Event, you can create a Calculated Metric that divides Navigation Clicks by Visits as shown here…

…which will allow you to see a report like this:
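On the tagging side, the Navigation Clicks Success Event just rides along with the sProp in the click-handler sketch from earlier; a hypothetical extension (event5 is a placeholder):

// Extending the earlier hypothetical handler to also count Navigation Clicks:
s.linkTrackVars = 'prop10,events';
s.linkTrackEvents = s.events = 'event5';   // event5 = Navigation Clicks
s.prop10 = (navArea + ':' + linkName).toLowerCase();
s.tl(true, 'o', 'Navigation Click');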

In what order do website visitors use navigation links?
If you are redesigning your navigation, a useful piece of data is the order in which visitors click on navigation links. Do they always click on the first items in the list? The ones that are farthest to the left? Fortunately, if you have implemented the items above, you can see this by simply enabling Pathing on the new Navigation Links sProp created above. This will allow you to view the Pathing reports including a Next Page Flow and Previous Page Flow just for navigation items:

Which navigation links lead to key website success milestones being accomplished?
Finally, I will occasionally be asked which navigation links are contributing to success. To answer this question, all you have to do is enable Participation for your key metrics on the Navigation links sProp described above. This will allow you to add a Participation metric to the first report shown above to see which links were in the flow of your key website Success Events.

Final Thoughts
Well, there you have it. Everything you wanted to know about tracking your website navigation, but were afraid to ask! If you have any comments/questions, use the form below.

Excel Tips

Excel Dynamic Named Ranges = Never Manually Updating Your Charts

[This post was written in 2010. I’ve made a new version of the post that takes advantage of Excel tables, which simplified the process a bit (it’s still kinda’ complicated). That post is available here.]

[This post is about dynamic named ranges in Excel 2007. I’m seeing a lot of referral traffic to this post searching for Excel 2010. If you’re simply looking for where you define or modify named ranges in Excel 2010 (as one commenter indicated in response to an earlier version of this update), it’s on the Formulas tab in the Defined Names area — Name Manager. If you are looking for other Excel 2010-specific information that this post doesn’t cover, please leave a quick comment as to what the change/issue is that led you to the search. Thanks.]

I’ve had a pretty good run of theoretical posts about the nature of marketing measurement of late, so it seemed like I was due for a more down-in-the-weeds-Excel-efficiency-tactics write-up. This blog isn’t really focused on all of the myriad ways that Excel can be contorted to represent data effectively, but I’m a big believer in using tools as effectively as possible to remove as much rote report generation as possible. There are lots of blogs devoted entirely to Excel tips and tricks. My favorite on that front is Jon Peltier’s (if you get intrigued by this post, hop over and peruse a slew of other ways to have charts dynamically update).

This post describes (and includes a downloadable file of the example) a technique that we use extensively to make short work of updating recurring reports. Here are the criteria I was working against:

  • User-selectable report date
  • User-selectable range of data to include in the chart
  • Single date/range selection to update multiple charts at once
  • No need to touch the chart itself
  • Reporting of the most recent value (think sparklines, where you want to show the last x data values in a small chart, and then report the last value explicitly as a number)
  • No use of third-party plug-ins — one of these days, I’ll get around to playing with the various Excel add-ons like those offered by Tableau Software and XLCubed (or even the Peltier Tech add-ins, which are targeted but made by one of the top 3 most authoritative Excel resources on the ‘net), but that adds just the slightest of barriers and, again, isn’t needed for this exercise
  • No macros used — I don’t have anything against macros, but they introduce privacy concerns, version compatibility issues, odd little warnings, and, in this case, aren’t needed

The example shown here is pretty basic, but the approach scales really well.

Sound like fun?

Setting Up the Basics

One key here is to separate the presentation layer from the data layer. I like to just have the first worksheet as the presentation layer — let’s name it Dashboard — and the second worksheet as the data layer — let’s call that Data. (Note: I abhor many, many things about Excel’s default settings, but, to keep the example as familiar as possible, I’m going to leave those alone. This basic approach is one of the core components in the dashboards I work on every day, and it can be applied to a much more robust visualization of data than is represented here. See An Excel Dashboard Widget for a look at my thoughts on dashboard visualization.)

Data Tab Setup — Part 1

This is a slightly iterative process that starts with the setup of the Data tab. On that worksheet, we’ll use the first column to list our dates — these could be days, weeks, months, whatever (they can be changed at any time and the whole approach still works). For the purposes of this example, we’ll go with months. Let’s leave the first row alone — this is where we will populate the “current value,” which we’ll get to later. I like to use a simple shading schema to clearly denote which cells will get updated with data and which ones never really need to be touched. And, in this example, let’s say we’ve got three different metrics that we’re updating: Revenue, Orders, and Web Traffic. This approach can be scaled to include dozens of metrics, but three should illustrate the point. That leaves us with a Data tab that looks like this:

While we’re on this tab, we should go ahead and define some named cells and some named ranges. We’ll name the cell in the first row of each metric column as the current value for that metric (the cells don’t have to be named cells, but it makes for easier, safer updating of the dashboard as the complexity grows). Name each cell by clicking on the cell, then clicking in the cell address at the top left and typing in the cell name. It’s important to have consistent naming conventions, so we’ll go with <metric>_Current for this (it works out to have the metric identified first, with the qualifier/type after — just trust me!). The screen capture below shows this being done for the cell where the current value for Orders will go, but this needs to be done for Revenue and Web Traffic as well (I just remove the space for Web Traffic — WebTraffic_Current).

And, we’re definitely going to want to have the whole range of data on the tab available to us. Let’s call this MainData and define it by going to Formulas » Name Manager and clicking on New (this is Excel 2007; it’s somewhere else, and a bit easier to find, in Excel 2003). Define a new range with a Workbook scope that encompasses all the columns and all of the rows of data (starting at row 3):

There are lots of ways to dynamically define MainData. You can just drag a big area if you want, but this is a slightly more elegant approach. I’m not going to go into the nuts and bolts of why this formula works, but you can look up the OFFSET and COUNTA functions and figure it out if you’re so inclined (the short version: the range is anchored at Data!$A$3, its height comes from counting the non-blank cells in column A less the two rows above the data, and its width comes from counting the non-blank cells in row 2):

=OFFSET(Data!$A$3,0,0,COUNTA(Data!$A:$A)-2,COUNTA(Data!$2:$2))

We’ll also want a named range that just includes the list of months — create that the same way as MainData, but call it DateSelector and use a slightly different formula:

=OFFSET(Data!$A$3,0,0,COUNTA(Data!$A:$A)-2,1)

And, of course, we’ll actually need data — this would come later, but I’ve gone ahead and dropped some fictitious stuff in there:

That’s it for the Data tab for now…but we’ll be back!

Dashboard Tab Setup — Part 1

Now we jump over to the Dashboard worksheet and set up a couple of dropdowns — one is the report period selector, and the other is the report range (how many months to include in the chart) selector. Start by setting up some labels with dropdowns (I normally put these off to the side and outside the print range… but that doesn’t sit nicely with the screen resolution I like to work with on this blog):

Then, set up the dropdowns using Excel data validation:

First, the report period. Click in cell C1, select Data » Data Validation, choose List, and then reference the named range of months we set up earlier, DateSelector:

When you click OK, you will have a dropdown in cell C1 that contains all of the available months. This is a critical cell — it’s what we’ll use to select the date we want to key off of for reporting, and it’s what we’ll use to look up the data. So, we need to make it a named cell — ReportPeriod:

Now, let’s do a similar operation for the report range — this tells the spreadsheet how many months to include in each chart. Click in cell C3, select Data » Data Validation, choose List, and then enter the different values you want as options (I’ve used 3, 6, 9, and 12 here, but any list of integers will work):

And, let’s name that cell ReportRange:

Does this seem like a lot of work? It can be a bit of a hassle on the initial setup, but it will pay huge dividends as the report gets updated each day, week, or month. Trust me!

Before we leave this tab, go ahead and select a value in each dropdown — this will make it easier to check the formulas in the next step.

Data Tab Setup — Part 2

Now is where the fun begins. We’re going to go back over to the Data worksheet and start setting up some additional named ranges. We’ve got MainData, which is the full range of data. We want to look at the currently selected Report Period (the named cell called ReportPeriod) and find the value for each metric that is in the same row as that report period. That will give us the “Current” value for each metric. All you need to do is put the exact same formula in each of the three “Current” cells:

=VLOOKUP(ReportPeriod,MainData,COLUMN())

In this example, these are the values for each of the three arguments:

  • ReportPeriod — Jul-09, the value we selected on the Dashboard tab
  • MainData — this is the full set of data, including the list of months in column A
  • COLUMN() — this is 2, the column that the current metric is listed in (this function resolves to “3” for Orders and to “4” for Web Traffic)

So, the formula simply takes the currently selected month, finds the row with that value in the data array, and then moves over to the column that matches the current column of the formula:

Slick, huh? And, because the ReportPeriod data validation dropdown on the Dashboard worksheet is referencing the first column of the data on the Data tab, the VLOOKUP will always be able to find a matching value. (Read that last sentence again if it didn’t sink in — it’s a nifty little way of ensuring the robustness of the report)

This little bit of cleverness is really just a setup for the next step, which is setting up the data ranges that we’re going to chart. Conceptually, it’s very similar to what we did to find the current metric value, but we want to select the range of data that ends with that value and goes backwards by the number of months specified by ReportRange. So, in the values we selected above, Jul-09 and “6,” we basically want to be able to chart the following range of data:

We’ll do this by defining a named range called Revenue_Range (note how this has a similar naming convention to Revenue_Current, the name we gave the cell with the single value — this comes in handy for keeping track of things when setting up the dashboard). We can’t use VLOOKUP, because that function doesn’t really work with arrays and ranges of data. Instead, we’ll use a combination of the MATCH function (which is sort of like VLOOKUP on steroids) and the INDEX function (which is a handy way to grab a range of cells). Pull your hat down and fasten your seatbelt, as this one gets a little scary. Ultimately, the formula looks like this:

=INDEX(MainData,MATCH(ReportPeriod,DateSelector)-ReportRange+1, COLUMN(Revenue_Current)):INDEX(MainData, MATCH(ReportPeriod,DateSelector), COLUMN(Revenue_Current))

It’s really not that bad when you break it down. I promise!

Working from the outside in, you’ve got a couple of INDEX() functions. Think of those as being INDEX(First Cell) and INDEX(Last Cell).

The range is defined, in pseudocode, as simply:

=INDEX(First Cell):INDEX(Last Cell)

The Last Cell calculation is slightly simpler to understand. As a matter of fact, this is really just trying to identify the cell location (not the value in the cell) of the current value for revenue — very similar to what we did with the VLOOKUP function earlier. The INDEX function has three arguments: INDEX(array,row_num,column_num). Here’s how those are getting populated:

  • array — this is simply set to MainData, the full range of data
  • row_num — this is the row number within the array that we want to use; we’ll come back to that in just a minute
  • column_num — we use a similar trick that we used on the Revenue_Current function, in that we use the COLUMN() formula; but, since we set up this range simply as a named range (as opposed to being a value in a cell), we can’t leave the value of the function blank; so, we populate the function with the argument of Revenue_Current — we want to grab the column that is the same column as where the current revenue value is populated in the top row.

Now, back to how we determine the row_num value. We do this using the MATCH function, which we need to use on a 1-dimensional array rather than a 2-dimensional array (MainData is a 2-dimensional array). All we want this function to return is the number of the row in the MainData array for the currently selected report period, which, as it turns out, is the same row as the currently selected report period in the DateSelector range. The formula is pretty simple:

MATCH(ReportPeriod,DateSelector)

The formula looks in the DateSelector range for the ReportPeriod value and finds it…in the seventh row of the array. So, row_num is set to 7.

INDEX(First Cell) is almost identical to INDEX(Last Cell), except the row_num value needs to be set to 2 instead of 7 — that will make the full range match the ReportRange value of 6. So, row_num is calculated as:

MATCH(ReportPeriod,DateSelector)-ReportRange+1

(The “+1” is needed so that the range includes exactly ReportRange cells, counting both endpoints.)

Now, that’s not all that scary, is it? We just need to drop the full formula into a named range called Revenue_Range by selecting Formulas » Name Manager » New, naming the range Revenue_Range, and inserting the formula:

=INDEX(MainData,MATCH(ReportPeriod,DateSelector)-ReportRange+1, COLUMN(Revenue_Current)):INDEX(MainData, MATCH(ReportPeriod,DateSelector), COLUMN(Revenue_Current))

The whole formula is there, even if you can’t see it!

Repeat this last step to create two more named ranges with slightly different formulas (the differences are the _Current names referenced):

  • Orders_Range: =INDEX(MainData,MATCH(ReportPeriod,DateSelector)-ReportRange+1,COLUMN(Orders_Current)):INDEX(MainData,MATCH(ReportPeriod,DateSelector),COLUMN(Orders_Current))
  • WebTraffic_Range: =INDEX(MainData,MATCH(ReportPeriod,DateSelector)-ReportRange+1,COLUMN(WebTraffic_Current)):INDEX(MainData,MATCH(ReportPeriod,DateSelector),COLUMN(WebTraffic_Current))

Tip: After creating one of these named ranges, while still in the Name Manager, you can select the range and click into the formula box, and the current range of cells defined by the formula will show up with a blinking dotted line around them.

You’re getting sooooooo close, so hang in there! In order for the chart labels to show up correctly, we need to make one more named range. We’ll call it Date_Range and define it with the following formula (this is just like the earlier _Range formulas, but we know we want to pull the dates from the first column, so, rather than using the COLUMN() formula, we simply use a constant, “1”):

=INDEX(MainData,MATCH(ReportPeriod,DateSelector)-ReportRange+1,1):INDEX(MainData, MATCH(ReportPeriod,DateSelector),1)

If you want, you can fiddle around with the different settings on the Dashboard tab and watch how both the “Current” values and (if you get into Name Manager) the _Range areas change.

OR…you can move on to the final step, where it all comes together!

Dashboard Tab Setup — Part 2 (the final step)

It’s back over to the Dashboard worksheet to wrap things up.

Insert a 2-D Line chart and resize it to be less than totally obnoxious. It will just be a blank box initially:

Right-click on the chart and select Select Data. Click to Add a new series and enter “Revenue” (without the quotes — Excel will add those for you) as the series name and the following formula for the series values:

=DynamicCharts_Example.xlsx!Revenue_Range

(Change the name of the workbook if that’s not what your workbook is named)

Click to edit the axis labels and enter a similar formula:

=DynamicCharts_Example.xlsx!Date_Range

You will now have an absolutely horrid looking chart (thank you, Excel!):

Tighten it up with some level of formatting (if you just can’t stand to wait, you can go ahead and start flipping the dropdowns to different settings), drop “=ReportPeriod” into cell E6 and “=Revenue_Current” into cell E7, and you will wind up with something that looks like this:

Okay, so that still looks pretty horrid…but this isn’t a post about data visualization, and I’m trying to make the example as illustrative as possible. In practice, we use this technique to populate a slew of sparklines (no x-axis labels) and a couple of bar charts, as well as some additional calculated values for each metric.

Adding charts for Orders and Web Traffic is a little easier than creating the initial chart. Just copy the Revenue chart a couple of times (if you hold down <Ctrl>-<Shift> and then click and drag the chart, it will make a copy and keep that copy aligned with the original chart).

Then, simply click on the data line in the chart and look up at the formula box. You will see a formula that looks something like this:

=SERIES(“Revenue“,DynamicCharts_Example.xlsx!Date_Range, DynamicCharts_Example.xlsx!Revenue_Range,1)

Change the bolded text, “Revenue,” to be “Orders” and the chart will update.

Repeat for a Web Traffic chart, and you’ll wind up with something like this:

And…for the magic…

<drum rollllllllllll>

Change the dropdowns and watch the charts update!

So, is it worth it? Not if you’re going to produce one report a couple of times and move on. But, if you’re in a situation where you have a lot of recurring, standardized reports (not as mindless report monkeys — these should be well-structured, well-validated, actionable performance measurement tools), then the payoff will hit pretty quickly. Updating the report is simply a matter of updating the data on the Data tab (some of which could even be done automatically, depending on the data source and the API availability); then the Report Period dropdown on the Dashboard tab can be changed to the new report period, and the charts get automatically updated! You can then spend your time analyzing and interpreting the results. Often, this means going back and digging for more data to supplement the report… but I’m teetering on the verge of a much larger topic, so I’ll stop…

As an added bonus, you can hide the Data tab and distribute the spreadsheet itself, enabling your end users to flip back and forth between different date ranges — a poor man’s BI tool, if ever there was one (in practice, there will seldom be any real insight gleaned from this limited number of adjustable dropdowns, and that’s not the reason to set them up in the first place).

I was curious as to what it would take to create this example from scratch and document it as I went. As it’s turned out, this is a lonnnnnnngggg post. But, if you’ve skimmed it, get the gist, and want to start fiddling around with the example used here, feel free to download it!

Happy dynamic charting!

Analytics Strategy

IBM buys Unica in $480M Deal

So the beauty of one public company buying another is that they usually hold an analyst call to explain the rationale. Props to IBM for holding this call just two short hours after the news broke and for giving a few of us a chance to pepper them with tough questions. On the call, Craig Hayman, General Manager of IBM Business Solutions (within the IBM Software Solutions Group), and Yuchun Lee, founder and CEO of Unica, shared an insider’s perspective on the deal. In fact, Craig even shared the code name “Amaru,” which was his secret-squirrel moniker for referring to the deal internally before it was done.

So here’s the scoop.

The IBM acquisition of Unica was largely driven by a recognized need for enterprises to get closer to their customers by understanding their experiences and interactions across a broad network of channels and customer touch points. They’ll accomplish this by using analytics technologies, building single-view profiles of customers, and delivering marketing process improvements.

Sounds a little like markety-speak doesn’t it? Well, regardless it’s still a pretty good story and one that I hope IBM is able to pull off. It’s actually similar to the one that Adobe told after the Omniture acquisition with perhaps more of an automation spin.

What does this mean for Web Analytics?

When Joe Stanhope of Forrester fame deftly asked how IBM planned to rationalize the overlap between NetInsight and the recent Coremetrics acquisition, the response was dominated by the word “synergies,” which they see a lot of between these firms. Yuchun rightly went on to describe NetInsight as only one product in the Unica portfolio and to note that Coremetrics and NetInsight serve different segments within the web analytics market. He explained that NetInsight has strength in the on-premise solution market (which it does) and that its ability to leverage web analytics within an online datamart is also differentiated (while not entirely unique in the market at large, it’s true when compared to Coremetrics; NetInsight uses a relational database construct for storing and accessing clickstream data). Yuchun also pointed out that Coremetrics has strength when it comes to collecting high-volume, high-transaction data. This is a result of Coremetrics’ robust infrastructure, which they’ve been building to collect and deliver this data at scale without incurring exorbitant expenses (and they were doing a damn good job of this).

All in all, the comments about the synergies concluded by stating that both tools would accelerate the benefits of deep customer insights for IBM’s clients. Nonetheless, it will be very interesting to see which features and functions emerge from a combined solution of two web analytics powerhouses.

What does this mean for IBM?

As much as I’d like to think that analytics is the epicenter of the business world, this deal is about multi-channel campaign management and marketing automation. IBM is without question on a buying spree. They snatched up SPSS, Coremetrics, DataCap, Sterling Commerce and now Unica in short order. Presumably this is all part of IBM’s strategic growth plan that earmarked a whopping $20 billion for acquisitions through 2015. But from a web analytics perspective, this acquisition didn’t occur because of the NetInsight product. Don’t get me wrong, I’m a big fan of the technology, but Unica’s campaign automation solutions and interactive marketing prowess within the marketplace surely made them a tasty morsel for IBM to gobble.

The newly acquired Unica technology will sit within the Software Group business, or, more specifically, the IBM Software Solutions Group. Yuchun will own the BU within the Software Solutions Group, which also holds WebSphere Commerce, Coremetrics, Cognos, and about a bazillion other software solutions. But as we learned on the call today, Craig Hayman will work to build out frameworks and the connections between this multitude of solutions.

What does this mean for clients?

So, when I look at the big picture, my speculation is that IBM is furthering the bifurcation of the marketplace in yet another direction that separates the “haves” from the “have nots.” What I mean by this is slightly different from what Eric described in his bifurcation of the analytics market as a separation of tools based on the level of experience of each user. He puts technologies like Adobe Omniture’s Discover and Coremetrics’ Explore into the exclusive camp of highly skilled analysts who are capable of performing true analysis on digital data sets. The rest of the population is left with simpler, yet still capable, tools (not meant in a disparaging way) like Google Analytics that are intuitive and require little training to begin garnering insights. While I agree with Eric, a new twist in this divide may also be developing on a financial level.

As we know, Google Analytics is free and enterprise analytics can quickly run into six – even seven – digit figures in a hurry. My thoughts on this financial divide and IBM’s perpetuation of it stem from Sam Palmisano’s scoff at the notion of consumer technologies dominating the enterprise. Clearly the IBM acquisition moves dictate that a set of tools designed for the professional marketer will be vastly different from the solutions accessible to consumers on the street. Thus, I see this as yet another wedge in the bifurcated divide between large enterprises, the ones that typically purchase software from the likes of IBM, Oracle, and SAS, and small and mid-sized companies who are forced to use a different toolset primarily because of price.

So at first blush, if you’re a big enterprise this all sounds pretty good. IBM and Unica join forces, which isn’t too much of a stretch as there’s also some history here…IBM is a Unica customer using campaign management, marketing resource management and other services to bring about a “marketing transformation” within their own organization (at least that’s how Craig Hayman put it). And Unica has also been OEM’ing IBM solutions for some time. The acquisition extends the growth trajectory that Unica was already on and helps to bring together IBM’s end-to-end story that they call “Blue Washing” (Err…hope that code word was okay for public consumption).

But, the acquisition also acts as a good milestone for IBM, which is assembling all the key ingredients for a leading enterprise solution: Sterling Commerce connects to the back end, Coremetrics offers deep insights into customer behavior and segments, and now Unica delivers a marketing management solution. It’s hard to argue that they’re not crafting a very compelling story for marketing professionals.

Yet, if you’re a mid-sized business or even a small organization…the IBM “Blueness” may have just distanced itself even further into the stratosphere.

Reporting, Social Media

Social Media ROI: Forrester Delivers the Voice of Reason and Reality

All sorts of agencies, social media technology companies, and analyst firms have hit on a lead generation gold mine: write a paper, conduct a webinar, or host an event that includes “ROI” and “social media” in any combination with any set of connecting articles and prepositions, and the masses will come! The beauty of B2B marketing is that the title and description of any such content is all that really needs to be compelling to get someone to fill out a registration form — the content itself can totally under-deliver…and it’s too late for the consumers of it to remove themselves as leads when they realize that’s the case!

Of the dozens of webinars I’ve attended, blog posts I’ve read, and white papers I’ve perused that fall into this “social media ROI” bucket, not a single one has actually delivered content about calculating a true return on investment in a valid and realistic way based on social media investments. That’s not to say they don’t have good content, but they all wind up with the same basic position: have clear objectives for your social media efforts, establish a set of relevant KPIs/metrics based on those objectives, and then measure them!

When a paper titled The ROI of Social Media Marketing (available behind a registration form from Crowd Factory — see the first paragraph above!) written by Forrester analyst Augie Ray (and others) came across my inbox by way of eMarketer last week, I had low expectations. I scanned it quickly and homed in on the following tip late in the paper:

Don’t use the term “ROI” unless you are referencing financial returns. ROI has an established and understood meaning — it is a financial measure, not a synonym for the word “results.” Marketers who promise ROI may be setting expectations that cannot be delivered by social measures.

Bingo! But, then, what is up with the title of the paper? Was there intense internal pressure at Forrester to write something about calculating social media ROI? Did Ray protest, but then finally cave and write a spot-on paper with an overpromising title…and then slip in an ironic paragraph to poke a little fun? I don’t know, but I loudly read out the above when I saw it (to the mild chagrin of everyone within 50 feet of my desk; I’m known in the office for periodic rants about the over-hyping of ROI, so I mostly just generated bemused eyerolls).
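For the record, the established meaning in question is the textbook financial calculation: ROI = (gain from investment - cost of investment) / cost of investment. If you can’t put credible dollar figures on both the gain and the cost, whatever you’re reporting may be useful, but it isn’t ROI.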

The idea the paper posits is to take inspiration from the balanced scorecard framework — not taken to any sort of extreme, but pointing out that social media impacts multiple facets of a brand’s performance. Ray neither presses for a full-blown, down-to-the-individual-performer application of balanced scorecard concepts, nor does he stick to the specific four dimensions of a pure balanced scorecard approach. What he does put forth is highly practical, though!

The four dimensions Ray suggests are:

  • Financial perspective (the only dimension that does map directly to a classic balanced scorecard approach) — revenue and cost savings directly attributable to social media
  • Brand perspective — classic brand measures such as awareness, preference, purchase intent, etc.
  • Risk management perspective — “not about creating positive ROI but reducing unforeseen negative ROI in the future” — a social media presence and engaged customers improve a brand’s ability to respond in a crisis; in theory, this has real value that can be estimated
  • Digital perspective — measuring the impact of social media on digital outcomes such as web site traffic, fan page growth, and so on; Ray points out, “In isolation, digital metrics provide a weak assessment of actual business results, but when used in concert with the other perspectives within a balanced marketing scorecard, they become more powerful and relevant.” Right on!!!

The paper is chock full of some fantastic little gems.

Which isn’t to say I agree with everything it says. One specific quibble is that, when discussing the financial perspective, the paper notes that media mix modeling (MMM) is one option for quantifying the financial impact of individual social media channels; while Ray notes that this is an expensive measurement technique, that’s actually an understatement — MMM is breaking down with the explosion of digital and social media…but that’s a subject for a whole other post! [Update: I finally got around to writing that post.]

At the end of the day, social media is complicated. It’s not measurable through a simple formula. It can strengthen a brand and drive long-term results that can’t be measured in a simplistic direct response model. Taking a nuanced look at measuring your social media marketing results through several different perspectives makes sense!

Analytics Strategy

Webtrends Acquires Transpond: Analytics, Meet Apps

The burning question on many a web analyst’s mind today is likely…Who is Transpond? That’s the question I asked when I first learned of Webtrends’ plan to acquire the San Francisco-based application development platform vendor.

Today the acquisition closed and word is out. At first glance, this may sound like a left turn for web analytics and perhaps it is. But in my mind it’s an interesting acquisition that’s headed in a positive direction. It also demonstrates that Webtrends isn’t afraid to make bold moves and assert its innovative position in the social analytics realm.

The History

Transpond was founded in 2007 as iWidgets, back when widgets were all the rage (here’s a view from the Wayback Machine). The company got off the ground with a $4M investment in early 2009, but found that they were limited by their chosen iWidgets moniker and went through a rebranding exercise in the summer of 2009 to become Transpond. All the while, they’ve been providing application development tools for companies to build and deliver apps on mobile devices like the iPhone and Android phones, web platforms like Facebook, and even TV apps for connected televisions. Transpond offers do-it-yourself development of applications such as quizzes, polls, games and interactive commerce for distribution across multiple digital channels. They also provide development support if you’re looking for some expert dev resources to really make your apps sing. Under the new ownership of Webtrends, all of these capabilities will be folded into the Webtrends Apps offering and, presumably, reporting will become available within the Webtrends Analytics 9 interface.

What’s in it for Webtrends?

So, you may be asking yourself, why is Webtrends interested in this company? Well, the way I see it, Webtrends is tuned into the fact that more and more organizations are developing content that will live and breathe off-site. That is, apps that are not contained within your primary web presence. Whether it’s on a mobile phone, Facebook or the next new platform, users are interacting with your content and each other off-site. That’s a domain that web analytics has traditionally not been able to capture without some fancy footwork because most web analytics solutions rely on tracking contained within the pages of your primary web sites. While tracking within apps is not new either, this acquisition opens up the possibility of integrating behavior with applications that exist off your site into the data soup that is digital analytics. It’s really a logical extension of the analytics technology.

Why is this Cool?

What’s also really appealing about this technology from a development perspective is that the platform allows companies to build apps and deliver them across multiple platforms in a consistent manner. Thus, you have the ability to build once and deliver to many, in whatever format users choose to consume the content. Plus, when you bake measurement and analytics into the apps, you really have a means to evaluate interaction and compare across channels.

While this is certainly a new direction for web analytics acquisitions, I for one like the purchase and look forward to seeing Webtrends execute on the delivery of this new solution. Webtrends has the distinction of being one of the first pioneers in web analytics and also the last independent vendor left standing. I’m pleased to see that this old dog ain’t afraid to learn new tricks.

Congratulations go out to Alex, Casey, Justin and the Webtrends team on the innovative move and to Peter Yared and Charles Christolini of Transpond for closing the deal.

Adobe Analytics, General

Previous Page Variable

I believe that every SiteCatalyst implementation should have a Previous Page sProp. There! I said it (I feel like I am channeling Avinash!). In past blog posts I have touched upon the use of a Previous Page sProp, but I feel like I have not done it justice and wanted to take time to explain it in greater detail. In this post, I will describe why I think this variable should always be set and provide some examples of its use.

Why You Need a Previous Page sProp
I find that in the web analytics world, I often receive the following question:

“What page was the visitor on when he/she _______?”

You can fill in the blank with many things. Here is a list of the ones I have been asked:

  • …searched for this phrase in our internal search box…
  • …clicked on a button to go to a web lead form…
  • …downloaded a white paper…
  • …added products to the shopping cart…
  • …clicked on a banner advertisement…
  • …started using the ROI calculator…
  • …clicked to fill out a website survey…

I could go on for days and never come to the end of these types of questions! People want to know this information because it helps them get inside the head of their visitors. Oftentimes it leads to navigation or content changes. Regardless of the reason, I assure you that you will be asked this question at some point, and the truth is that it is not easy to answer with out-of-the-box functionality (i.e., Pathing). The good news is that setting the Previous Page sProp is easy and will pay great dividends down the road…

How To Set the Previous Page sProp
Setting the Previous Page sProp could not be easier. All you have to do is use the Previous Value JavaScript Plug-in to pass the previous page name to a new Traffic Variable (sProp). You can even see a detailed description of the code for this in Ben Gaines’ great Summit blog post. If you need help, call your Omniture Account Manager, Omniture Consulting or ClientCare.
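
For reference, here is a minimal sketch of what that plug-in call might look like in your s_code.js file (the sProp number, prop10, and the cookie name are hypothetical, and the getPreviousValue plug-in code itself must be included in your JavaScript file):

    /* Save the current page name on every page and write the value
       captured on the prior page into the Previous Page sProp */
    s.prop10 = s.getPreviousValue(s.pageName, 'gpv_pn', '');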

Once you have your JavaScript setup to pass the Previous Page Name to the sProp, you need to enable a Traffic Data Correlation to any sProp for which you want to create a breakdown. For example, if you want to see what pages visitors were on when they searched for a particular internal search term, you would correlate the Previous Page Name sProp with the Internal Search Term sProp…

…so you can see a report like this:

In addition, if you are familiar with correlations, you may recall that they are bi-directional, so in addition to seeing the pages people searched for specific terms from, you can also see the converse. In this case, that would mean seeing all of the internal search terms visitors searched for on a specific page:

As you can see here, we see the same “4” searches for the phrase “chatter” from the selected page as we saw in the first internal search term report (in this case I am just using Internal Search as an example, but if you want to learn more check out my Internal Search post).

One Is Usually Enough
However, one word of caution: I have seen many clients implement several Previous Page sProps, and I am not a fan of doing this, as I will now explain. Let’s say you want to see what page people were on when they searched on a specific search term (as described above) and you also want to see what page they were on when they downloaded files on your site. A lot of people will set two Previous Page sProps in this situation – one for the search term and one for the file downloads. In my opinion, this just wastes a variable, wastes correlations and causes confusion for your users. The truth is that all you need is one Previous Page sProp to answer both questions. Since on each page there will be one and only one Previous Page value, there is really no reason to capture it multiple times.

I have seen some clients who have chosen to pass the Previous Page Name to an eVar. There are some interesting uses of this. For example, if you want to see what pages visitors were on when they added a specific product to the shopping cart, you can pass the Product Name to the Products Variable, set the Cart Add Success Event and the Previous Page Name to an eVar. The main issue you will run into is that Conversion Subrelations are an “all or nothing” proposition, so you can only do breakdowns by eVars that have Full Subrelations.
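
As a hedged sketch of that cart-add scenario (continuing the hypothetical prop10 example from the sketch above; the eVar number and product name are likewise hypothetical):

    s.events = 'scAdd';                // Cart Add Success Event
    s.products = ';Example Product';   // the product being added
    s.eVar20 = s.prop10;               // reuse the Previous Page value captured by the plug-in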

One final tip that I will throw out there is to consider having your developer pass a value of “[NO PREVIOUS PAGE AVAILABLE]” (or something similar) to the Previous Page sProp on entry pages (or any other time no Previous Page is available). I find that this is easier than dealing with questions around “Unspecified” in correlation reports and it is easier to remove this value using the search box than it is to hide the “Unspecified” values.
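
A quick sketch of that defaulting logic, again assuming the hypothetical prop10 from above:

    // On an entry page there is no previous page, so the plug-in returns an
    // empty value; substitute a friendly label instead of "Unspecified"
    if (!s.prop10) {
      s.prop10 = '[NO PREVIOUS PAGE AVAILABLE]';
    }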

Final Thoughts
As I mentioned in the beginning, I highly recommend that you have a Previous Page sProp for all of your key report suites and add correlations as needed. If you have any questions/comments, feel free to leave them here…

Adobe Analytics, Analytics Strategy, General, Reporting

Our Mobile Measurement Framework is now available

Today I am really excited to announce the publication of our framework for mobile and multi-channel reporting, sponsored by OpinionLab. You can download the report for free from the OpinionLab web site in exchange for your name and email address.

This paper builds on our “Truth About Mobile Analytics” paper we published with our friends at Nedstat last year and focuses on both measurement in mobile applications and, more importantly, a cross-channel measurement framework built around interactions, engagement, and consumer-generated feedback.

  • Interactions occur in every channel, digital or not. Online and on mobile sites we call these “visits” (although that is a made-up word for interactions); in mobile apps the interaction starts when you click the icon and ends when you click “close”; in SMS it starts when you receive the message; on the phone it starts when you dial, and in stores interactions start when you walk up to an employee.
  • Engagement is simply “more valuable” interactions. Regardless of your particular belief about the definition of engagement, we all know it when we see it. Online it happens after some number of minutes, or clicks, or sessions, or whatever; in mobile apps it happens when you’ve clicked enough buttons; on SMS it happens when you respond to the message; on the phone it starts when you begin a conversation, and the same is true in a physical store.  We say engagement is “more valuable” because without engagement, value is unlikely to manifest.
  • Positive Feedback happens when you do a really, really good job. Measuring feedback is a critical “miss” for far too many organizations. Apple’s App Store and the value of its star-rating system have essentially proven that there are massive financial differences associated with positive and negative experiences … but most companies still make the mistake of ignoring qualitative feedback altogether.

These three incredibly simple metrics can be applied to every one of your channels, your sub-channels, and your sub-sub-channels (if you like). When applied, you can create an apples-to-apples comparison between your web, mobile web, mobile apps, video, social, etc. efforts.

Then you can apply cost data, and you’re really in business.

I don’t want to say much more than that but I would really, really encourage you all to download and read this free white paper. When we put something like this out — something we believe has the power to really transform the way everyone thinks about the metrics they use to run their business, and something that has the potential to force dashboards everywhere to be scrapped and started over — we’d really like your collective feedback.

DOWNLOAD THE WHITE PAPER NOW

Thanks to Mark, Rick, Rand, and the entire team at OpinionLab for sponsoring this work. If you’re the one person reading my blog that hasn’t seen their application in action, head on over to their site and have a look.

Analytics Strategy, Conferences/Community

Analysis Exchange members going to X Change 2010

Earlier this year after we launched the Analysis Exchange we put out our first challenge to the membership. We asked people to “be exceptional” in their participation, to step up and make a difference by working harder than expected, by bringing crazy passion to their work, and by participating in unexpected ways.  In exchange for “being exceptional” we said we would provide a complimentary pass to one mentor and one student to this year’s X Change conference in Monterey, California September 20, 21, and 22.

Today I am pleased to announce your exceptional winners of this challenge.

While nearly everyone who has participated in Analysis Exchange thus far has really blown my mind with their energy, their commitment, and their willingness to do something special for the larger web analytics and nonprofit communities, five people really stood out in the crowd.

  • Sarah DeAtley, Mentor from Washington who worked like crazy to sign up her fellow Seattle-ites and continues to evangelize for the effort;
  • Victor Acquah, Mentor from Virginia who participated in both our Alpha and Beta tests with PBS and provided tremendously valuable feedback;
  • Jason Thompson, Mentor from Utah who has stepped up repeatedly to mentor projects and has helped a great deal to spread the word;
  • Jan Alden Cornish, Student from California who has not only participated in multiple projects but has been an invaluable source of ideas and feedback;
  • Michael Healy, Student from California who has helped out on numerous projects and who really understands what Analysis Exchange is trying to do.

Unfortunately not everyone would have been able to make the X Change this year due to previous commitments; fortunately that made our job selecting the finalists nominally easier.  To make our final decision we asked everyone to send us a short paragraph describing “what they have learned” in Analysis Exchange to date.  Here is what we heard back:

From Jason Thompson:

“First let me say that with the amazing cast of students and mentors that make up the Analysis Exchange, I am truly humbled to be considered for this honor.

The Analysis Exchange has reminded me that what makes us truly rich is not the contents of our wallets or how much money we have in our bank accounts, whoa….sorry…started channeling Tyler Durden there for a second, but what makes us truly rich are the relationships we have in life.

So what have I learned thus far? I have learned that we are all students and that if we are open, there are many great lessons for us to be taught.

I didn’t join the AE with the thought of getting anything in return, although I have been given many wonderful gifts through my participation. I joined because the AE provided me with the opportunity to give back to an industry that has given me so much.

I honestly feel a little bit weird writing this email, and the humble Jason in me says this opportunity should go to the person who would benefit from it the most.”

From Jan Alden Cornish:

“My two Analysis Exchange student projects have demonstrated several key takeaways. First, project management principles are paramount. The project kick-off meeting should reinforce the Analysis Exchange project priorities:

  • Schedule: short-term project
  • Cost: limited resource availability, with the expectation that the student have the most flexibility
  • Scope: the project deliverables given the constraints

The project kick-off meeting should confirm the core roles and responsibilities of the project team members.

  • Organizational Lead: the key business stakeholder who approves the project scope and project deliverables. The organizational lead may be the project manager.
  • Mentor: the web analysis expert / consultant. The mentor may be the project manager.
  • Student: the primary execution resource

The project team has to rapidly converge on meeting times for reviewing and also for approving deliverables. The team needs to determine web / audio conferencing details: Skype, GoToMeeting, etc.

The first milestone is approving the project charter, which provides the scope definition and defines the team members’ roles and responsibilities. Scope definition may require more than one meeting, particularly if the business plan of the nonprofit organization does not clearly set forth measurable expectations for their web presence. For example, the organizational lead may be accountable for other time-critical projects (e.g., migration to a new content management system). The Analysis Exchange project may run concurrently with other projects within the organization that impact its web presence (e.g., an outsourced web redesign).

A second key takeaway for me was that the nonprofit world is a microcosm of the real world. Thus, risk management is key. A risk is a potential issue which might adversely impact the success of the project. Risks may be categorized as technical or organizational. The web implementation may not allow certain analysis questions to be answered. Nonprofits often leverage third-party platforms for key business functions such as e-newsletter management, volunteer recruitment, and e-commerce. These functions might not be tightly integrated from an analytics reporting standpoint. They may “roll up” to other departments in the nonprofit’s organization. The nonprofit’s web implementation also may have a limited deployment of Google Analytics. Other web analysis tools that are deployed may not offer similar functionality to Google Analytics.

Armed with these takeaways from my first two Analysis Exchange projects, I eagerly look forward to my next project.”

And from Michael Healy:

“Superhuman effort isn’t worth a damn unless it achieves results. – E. Shackleton

In my experience thus far with the Analysis Exchange I learned that the bounce rate, page views, time on page and every other web metric pretty much aren’t worth anything. More accurately, they aren’t worth anything to the client unless they start to solve a business problem.

The fact that the organizations in the Analysis Exchange aren’t selling anything per se, but instead are providing a non-profit service to others, presented a few challenges. Working with Gordon Holstlander, of the Circle Drive Alliance Church of Saskatoon, and Michael Helbling, my excellent mentor, I learned how to work together as a team to move beyond the prima facie challenge.

Our project involved what appeared to be a simple analysis of the home page real estate to determine the best usage of the page. I built out several personas of usage for the CDAC website which showed dynamic access to information, with different goals at different times of the day, week and year. Answering ‘it depends’ to Gordon’s original question delighted my Econ brain to no end.

Moving beyond the population of people who already accessed the site, I was able to show Gordon how to do a simple Google Trends search for the Saskatoon area. An examination of the entire CDAC website also revealed a great source of underutilized content. Both were passed on to the client for future SEO usage.

The biggest lesson in the Analysis Exchange thus far has been the open dialog and client relationships developed. Websites can be very personal things, with people at non-profits often pouring countless hours into improving them. Michael facilitated an exchange of ideas with Gordon and me, such that when I made my presentation all parties were open to improvements. That is a lesson I will lean on for the rest of my career.”

As you can see we are honored to have members who are so thoughtful, intelligent, and especially in Jan’s case, precise! In the end the decision was nearly impossible to make … until Michael Healy mentioned that he would be coming to the X Change regardless of the outcome.  To smooth that path we are sending Michael our “maximum discount” code for the conference and will ensure that he drinks his fill at the bar each evening.

Which leaves us with Jason Thompson and Jan Alden Cornish, this year’s Analysis Exchange at the X Change contest winners!

Jason and Jan will be coming to the X Change compliments of Analytics Demystified and Semphonic, co-hosts for the conference. I hope you will all join us in comments congratulating Jason, Jan, and all of our distinguished members! And if you are lucky enough to be joining us at this year’s X Change conference make sure to find Jason and Jan and congratulate them in person, shake their hands, and ask them about their experience in this effort.

The Analysis Exchange is always looking for more volunteer students, mentors, and nonprofit organizations. The X Change conference will be held September 20, 21, and 22 at the beautiful Monterey Plaza Hotel and Spa in Monterey, California.

Analytics Strategy

Web’s Goldmine – or – Consumer Jackpot?

This weekend the Wall Street Journal produced a well-researched article called The Web’s New Gold Mine: Your Secrets. Apparently, it’s the first in a series of articles about Internet tracking practices. It’s entirely informative and chock full of quotes, anecdotes, video and interesting visuals. I highly recommend giving this article a read if you subscribe to the WSJ, or encourage you to join the discussion on their blog. However, I take serious issue with the bias inherent within this first article. The author, Julia Angwin, uses phraseology like “the business of spying on consumers” and “…details about her, all to be put up for sale for a tenth of a penny”. Clearly, the conclusion drawn by the author and presented to readers is that tracking solutions are spawned from malice. I vehemently disagree.

While it’s true that some tracking can be used for devious purposes, the majority of uses are fully anonymous and serve to benefit end users enormously. The reality is that media fragmentation, facilitated by the Internet, has forced advertisers to compete for our attention. To do this, they’re hawking their wares in a significantly more relevant way. By serving up advertising content that’s based on activity, propensity and preference, they are saving us from the irrelevant fire hose of most advertising. Without being coarse, I find the fact that some consumers are self-conscious about and sensitive to advertising that’s targeted to their browsing activity trivial. It’s trivial compared to the benefits that targeting delivers to the rest of us.

I’ve got more to say on this topic, a lot more in fact, but I’ll stop short for now. My closing thought is that, while the author of The Web’s New Gold Mine may see the art and science of tracking as a boon for advertisers… I see it as a significant win for consumers. A jackpot, perhaps. I hope and expect that my online and offline interactions with brands will get increasingly better and more relevant as my interactions continue. Tracking will enable this to happen. But, that’s just me…I’d love to know what you think.

Analytics Strategy, Reporting, Social Media

Marketing Measurement and the Mississippi River

At least once a week in my role at Resource Interactive, I get asked some flavor of this basic question: “How do I measure the impact of my digital/social media investment?” It’s a fair question, but the answer (or, in some cases, the impetus for the question) is complicated and, often, is related to the frustration gap — the logical leap that, since digital marketing is the most measurable marketing medium of all time, it enables a near-perfect linkage between marketing investments and financial results.

It’s no fun to be the bearer of Reality Tidings when asked the question, especially when it’s easy to sound like the reason we can’t make a clean linkage is that it’s really hard or we just aren’t smart enough to do so. There are countless sharp, well-funded people in the marketing industry trying to answer this exact question, and, to date, there is a pretty strong consensus when you get a group of these people together:

  1. We all wish we had “the answer”
  2. The evolution of consumers and the growth of social media adoption have made “the answer” more elusive rather than less
  3. “The answer” is not something that is just around the corner — we’re chipping away at the challenge, but the increasing fragmentation of consumer experiences, and the explosion of channels available for marketers to engage with those consumers, is constantly increasing the complexity of “the question”

That’s not an easy message to convey.

So, How’s That Explanation Working Out for Ya’?

It’s a tough row to hoe — not just being a data guy who expends a disproportionate amount of energy, time, and brainpower trying to find a clean way to come at this measurement, but trying to concisely explain the complexity. Of late, I’ve landed on an analogy that seems to hold up pretty well: measuring marketing is like measuring the Mississippi River.

If you are tasked with measuring the Mississippi, you can head to New Orleans, don hip waders, load up a rucksack with instruments, and measure all sorts of things at the river’s mouth: flow volume, fish count, contaminants, etc. That’s analogous to measuring a brand’s overall marketing results: brand awareness, share of voice in the industry, customer satisfaction, revenue, profitability, etc. The explosion of digital and social media actually makes some of this measurement easier and cheaper than ever before through the emergence of various online listening and social media analytics platforms.

While these “mouth of the river” measures are useful information — they are measures of the final outcome that really matters (both in the case of the Mississippi and brand marketing) — how actionable are they, really? As soon as results are reported, the obvious questions come: “But, what’s causing those results?”

What causes the Mississippi River to flow at a certain rate, with a certain number of fish, with a certain level of a certain contaminant where it empties into the Gulf of Mexico? It’s the combination of all that is happening upstream…and the Mississippi’s headwaters reach from Montana (and even western Canada) all the way to Pennsylvania! The myriad headwaters come together many times over — they interact with each other just as different marketing channels interact with and amplify each other — in thousands of ways over time.

If we’re looking to make the Mississippi cleaner, we could travel to western Kansas and check the cleanliness of the Smoky Hill River. If it’s dirtier than we think it should be, we can work to clean it up. But, will that actually make the Mississippi noticeably cleaner? Logic tells us that it certainly can’t hurt! But, rational thought also tells us that that is just one small piece in an almost incomprehensibly complex puzzle.

With marketing, we have a comparably complex ecosystem at work. We can measure the growth of our Facebook page’s fans, but how is that interacting with our Twitter feed and our web site and our TV advertising and blog posts that reference us and reviews of our products on retailer sites and our banner ads and our SEO efforts and our affiliate programs and our competitors’ presence in all of these areas and… ugh! At a high level, a marketer’s Mississippi River looks like this:

Not only does each of the “managed tactics” represent dozens or even hundreds of individual activities, but environmental factors can be a Mack truck that dwarfs all of the careful planning and investment:

  • Cultural trends — do you really think that the Silly Bandz explosion was carefully orchestrated and planned by Silly Bandz marketers? (The CEO of Silly Bandz certainly thinks so — I’m skeptical that there wasn’t a healthy dose of luck involved.)
  • Economic factors — during a global recession, most businesses suffer, and successful marketing is often marketing that manages to simply help keep the company afloat
  • Competition — if you are a major oil producer, and one of the top players in your market inadvertently starts dumping an unfathomable amount of crude into the Gulf of Mexico, your brand begins to look better by comparison (although your industry as a whole suffers on the public perception front)

“It’s complicated” is something of an understatement when trying to accurately measure either the Mississippi River or marketing!

So, We Just Throw Up Our Hands and Give Up?

Just because we cannot practically achieve the Holy Grail of measurement doesn’t mean that we can’t be data driven or that we can’t quantify the impact of our investments — it just means that we have to take a structured, disciplined approach to the effort and accept (and embrace) that marketing measurement is both art and science. In the Mississippi River example, there are really three fundamentally different measurement approaches:

  • Measure the river where it flows into the Gulf of Mexico
  • Measure all (or many) of the tributaries that feed into each other and, ultimately, into the main river
  • Model the whole river system by gathering and crunching a lot of data

The first two approaches are reasonably straightforward. The third gets complex, expensive, and time-consuming.

For marketers — and I’m just going to focus on digital marketing here, as that’s complex enough! — we’ve got an analogous set of options (as it should be…or I wouldn’t be calling this an analogy!):

Measuring the direct and cross-channel effect of each tactic on the overall brand outcomes is nirvana — that’s what we’d like to be able to do in some reasonably reliable and straightforward way. And, we’d like that to be able to factor in offline tactics and even environmental factors. For now, the most promising approach is to use panel-based measurement for this — take a sufficiently large panel of volunteers (we’re talking tens or hundreds of thousands of people here) who agree to have their exposure to different media tracked, and then map that exposure to brand results: unaided recall of the brand, purchase intent, and even actual purchases. But, even to do this in an incomplete and crude fashion is currently an expensive proposition. That doesn’t mean it’s not an investment worth making — it just means it’s not practical in many, many situations.

However, we can combine the other two approaches — measurement of tactics (tactics include both always-on channels such as a Facebook page or a web site, as well as campaigns that may or may not cut across multiple channels) and measurement of brand results. The key here is to have clearly defined objectives at the brand level and to align your tactic-level measurement with those same objectives. I’m not going to spend time here expanding on clear definition of objectives, but if you’re looking for some interesting thinking there, take a look at John Lovett and Jeremiah Owyang’s white paper on social marketing analytics. They list four basic objectives that social media can support. At the overall brand level, I think there are basically eight possible objectives that a consumer brand might be tackling (with room for any brand to have one or two niche objectives that aren’t included in that list) — and, realistically, focusing in on about half that many is smart business. But I said I wasn’t going to expand on objectives…

What is important is to apply the same objectives at the brand and the tactic level — each tactic isn’t necessarily intended to drive all of the brand’s objectives, so being clear as to which objectives are not expected to  be supported by a given tactic can help set appropriate expectations.

Just because the objectives should align between the tactic and the brand-level measurement does NOT mean that the measures used to track progress against each objective should be the same. For instance, if one of your objectives is to increase engagement with consumers, at the brand level, this may be measured by the volume and sentiment of conversations occurring online about the brand (online listening platforms enable this measurement in near real-time). For the brand’s Facebook page (a tactic), which shares the objective, the measure may, instead, be the number of comments and likes for content posted on the page.

But…How Does That Really Help?

By using objectives to align the measurement of tactics and the measurement of the brand, you wind up with a powerful performance measurement tool:

As simplistic and extreme examples, consider the situation where all of your tactics are performing swimmingly, but the brand overall is suffering. This might be the result of a Mack truck environmental factor — which, hopefully, you are well aware of because you are a savvy marketer and are paying attention to the environment in which you are operating. If not, then you should consider revisiting your overall strategy — do you have the wrong tactics in place to support the brand outcomes you hope to achieve?

On the other hand, consider a situation where the brand overall is suffering and the tactics as a whole are suffering. In that case, you might have a perfectly fine strategy, but your tactical execution is weak. The first order of business is to get the tactics clicking along as designed and see if the brand results improve (in a sense, this is a preferable situation, as it is generally easier to adjust and improve tactics than it is to overhaul a strategy).

In practice, we’re seldom working in a world where things are as black and white (or as green and red) as this conceptual scenario. But, it can certainly be the case that macro-level measurement of an objective — say, increasing brand awareness — is suffering while the individual tactics are performing fine. Let’s say you heavily invested in your Facebook page as the primary tactic to drive brand awareness. The page has been growing total fans and unique page views at a rapid clip, but your overall brand awareness is not changing. You may realize that you’re starting from a very small number of fans on Facebook, and your expectation that that tactic will heavily drive overall brand awareness is not realistic — you need to introduce additional tactics to really move the brand-level awareness needle.

In the End, It’s Art AND Science

Among marketing measurement practitioners, the phrase “it’s art and science” is oft-invoked. It sounds downright cliché…yet it is true and it’s something that many marketers struggle to come to terms with. Look at marketing strategy development and execution this way:

“The data” is never going to generate a strategy — knowing your customers, your company, your competition, and a bevy of other qualitative factors should all be included in the development or refinement of your strategy. Certainly, data can inform and influence the strategy, but it cannot generate a strategy on its own. Performance measurement, though, is all about science — at its best, it is the quantitative and objective measurement of progress towards a set of objectives through the tracking of pre-defined direct and proxy measures. Dashboards can identify trouble spots and can trigger alerts, but their root causes and remediation may or may not be determined from the data — qualitative knowledge and hypothesizing (“arts”) are often just as valuable as drilling deeper into the data.

It’s a fun world we live in — lots of data that can be very valuable and can drive both the efficiency and effectiveness of marketing investments. It just can’t quite deliver nirvana in an inexpensive, easy-to-use, web-based, real-time dashboard! 🙂

Adobe Analytics, Analytics Strategy, General

Validating Orders & Revenue

I recently received an e-mail from a blog reader who was having issues tying their Orders in SiteCatalyst to Orders in their back-end system. Here is a snippet from the e-mail:

I have a little issue in my own SiteCatalyst setup that I recently discovered. Sad for me I had trusted the number of Orders for each day’s Conversion Funnel and recently I decided to validate the numbers in SiteCatalyst against what our back-end system has. SiteCatalyst is 5%-10% understated each day which makes for a heck of a difference at the end of the month! I’d rather be understated than overstated, but can you give me some ideas where I should look first?

Unfortunately, this is an all-too-common problem out there. In this post I am going to share some ideas on how you can tackle this Order/Revenue validation issue head-on and make sure you can trust your critical Orders/Revenue data in SiteCatalyst.

Order ID eVar
If you have an online shopping cart, you should already be setting the s.purchaseID variable with a unique Order ID when an Order takes place on the website. This variable is used by SiteCatalyst to ensure Order uniqueness. Unfortunately, the downside of this variable is that it is not readily available in the SiteCatalyst interface. It is available in DataWarehouse but not in regular SiteCatalyst reports or Discover. Carmen Sutter (@c_sutter) has submitted an idea in the Idea Exchange to change this, but until then, I recommend that you set what I call an Order ID eVar variable. To do this, all you need to do is set the same value you pass to the PurchaseID variable to a custom eVar. This will allow you to see all Orders and Revenue by Order ID from within SiteCatalyst and Discover as you would any other eVar. Once you have done this, you can open up this new Order ID eVar and add your Orders or Revenue Success Event as needed:
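
For reference, a minimal sketch of the order confirmation page code (eVar15 is a hypothetical slot for the Order ID eVar, and the order ID itself would come from your commerce platform):

    var orderId = '1234567';   // unique Order ID generated by your commerce platform
    s.purchaseID = orderId;    // used by SiteCatalyst to ensure Order uniqueness
    s.eVar15 = orderId;        // same value, now reportable like any other eVar
    s.events = 'purchase';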

In the example above, we can see that most Order IDs have only one Order associated with them, which is what we want. However, in this case, we can see that one ID was counted twice. That may require some research, and I like to schedule a report like the one above to be sent to me weekly so I can make sure nothing strange is going on.

Data Sources Setup
However, while adding an Order ID eVar is helpful in seeing if you are over-counting Orders in SiteCatalyst, it won’t tell you if you are under-counting Orders or how close your SiteCatalyst data is to your back-end systems. To do this, I recommend you use Data Sources. As a quick refresher, Data Sources allows you to import external data/metrics into SiteCatalyst (see post link for more details). In this case, I recommend that you import a file from your back-end system into SiteCatalyst which contains your unique Order ID, the number of Orders (which should always be “1”) and the Revenue Amount. When you import data via Data Sources, you include the date that you want the data to be associated with, so it doesn’t matter if you import the data on a daily, weekly or monthly basis. That said, the more frequently you upload it, the better, so you can find issues quickly.

Here are step-by-step instructions on how to do this:

  • Create the Order ID eVar described above
  • Create two new Incrementer Success Events and name them “Back-End Orders” (Type=Numeric) and “Back-End Revenue” (Type=Currency)
  • Create a new Data Sources upload template (ClientCare or Omniture Consulting can assist with this). You want to be sure to map the two new “Back-End” Success Events to the Data Sources template. Even more critical is that you include the newly created Order ID eVar in the Data Sources template. If you do not do this, then you will not be able to see these two new Back-End metrics in the same Order ID eVar report that you have in SiteCatalyst (more on this later).

  • When you are done, you should have a Data Sources template that looks something like this:

  • Now all you have to do is work with your developers to have this file sent via FTP to the Data Sources FTP on a regular basis.
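
As a purely illustrative example (the dates, Order IDs and amounts below are made up, and your exact column headers will come from the generated template), the uploaded file might look something like this:

    Date         Order ID (eVar)   Back-End Orders   Back-End Revenue
    07/15/2010   1234567           1                 125.50
    07/15/2010   1234568           1                 89.99
    07/16/2010   1234569           1                 240.00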

The Payoff
So by now, you are probably saying to yourself: “That’s a lot of work!” No argument here! However, hang with me as I share what the ultimate payoff is for doing this. As you recall, our primary objective was to see if our online Order and Revenue data was matching what our back-end systems indicated. Now that we have the Order ID eVar and two new “Back-End” Order and Revenue metrics, we have everything we need. This is where the fun begins and we put it all together!

All you have to do now is to open the new Order ID eVar report and add all of the relevant metrics. First, we will add the SiteCatalyst Orders and Revenue so we can see online Orders and Revenue by Order ID:

Next, we will add the two new “Back-End” metrics to the report and, since we were smart enough to include the Order ID eVar value in the Data Sources upload, SiteCatalyst knows which “Back-End” Order ID and dates line up with our online data:

Cool, huh? As far as SiteCatalyst is concerned, these offline metrics are connected to your Order ID eVar values just as if they had happened online. Using this report, we can see if there are any differences between our online and offline data. In the example above, it looks like the “Back-End” system had an order with $2,350 in revenue that wasn’t captured online. Having this information makes it much easier to troubleshoot order submission issues. You can even use DataWarehouse or Discover (only if you use Transaction ID Data Sources) to break down Order ID by browser, domain, IP address, etc… to see if you can figure out what is happening. In addition, you can export this data to Excel and look at the totals to see how far off you are in general.

Finally, for the true SiteCatalyst geeks, you can create a Calculated Metric that divides Orders by Back-End Orders and/or Revenue by Back-End Revenue to see a trended percentage showing how far off each is, and set up Alerts to notify you if they deviate too much! When you take into account this level of assurance, all of a sudden the Data Sources work above might not seem like that much effort in the long run!

Final Thoughts
If you sell products online, nothing is more critical than believing in your key metrics. Even if you don’t sell online, the same principles here can be applied to lead generation forms, subscriptions or any other metrics you store in SiteCatalyst and also in your back-end systems.

General

Sad to see Aurelie Pols go …

I am very sorry to say that our European partner Aurelie Pols has decided to leave Analytics Demystified and pursue other goals in her life. While I am very sad to announce this, I have certainly enjoyed working with Aurelie over the past year and on behalf of myself, my family, and our partner John Lovett we wish Aurelie, Rene, and little Luca all the best.

Analytics Strategy, General, Social Media

Guest Post: Kevin Hillstrom

Kevin Hillstrom is one smart dude. President of MineThatData, author of Online Marketing Simulations, and prolific contributor to the Twitter #measure channel. Kevin spends a huge amount of time in Twitter challenging web analysts to think and work harder on behalf of their “clients,” 140 characters at a time.

A few weeks ago I asked Kevin “what five practices learned in the offline data analytics world would you like to see web analytics professionals adopt?” The following contributed blog post has Kevin’s answers which are, unsurprisingly, awesome. Near the end Kevin says “The Web Analyst has the keys to the future of the business, so it is a matter of getting the Web Analyst to figure out how to use keys to unlock the future potential of a business.”

Brilliant. We are the future of business … so what future will we be helping to create?

Kevin Hillstrom, President, MineThatData

In 1998, I became the Circulation Director at Eddie Bauer. Back in those days, Eddie Bauer printed money, generating more than a hundred million dollars of pre-tax profit on an annual basis.

One of the ways that Eddie Bauer generated profit was through the use of discounts and promotions. If a customer failed to purchase over a six-month period of time, Eddie Bauer applied a “20% off your order” offer. The customer had to use a special promotion code in order to receive discounted merchandise.

We analyzed each promotion code, using “A/B” test panels. Customers were randomly selected from the population, and then assigned to one of two test panels. The first test panel received the promotion, the second test panel did not receive the promotion. We subtracted the difference between the promotion segment and the control segment, and ran a profit and loss statement against the difference.
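
To make the mechanics concrete, here is a toy sketch of that test-versus-control P&L math in JavaScript; all of the figures are hypothetical, not Eddie Bauer’s actual numbers:

    // Hypothetical six-month results for two randomly selected panels
    var promoPanel   = { demand: 180000, costOfGoods: 81000, adCost: 9000, discounts: 36000 };
    var controlPanel = { demand: 100000, costOfGoods: 45000, adCost: 9000, discounts: 0 };

    function profit(panel) {
      return panel.demand - panel.costOfGoods - panel.adCost - panel.discounts;
    }

    // Run the P&L against the difference between the two panels
    var incrementalProfit = profit(promoPanel) - profit(controlPanel);
    console.log('Incremental profit from the promotion: $' + incrementalProfit); // $8000 here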

In almost all cases, the segment receiving the promotion generated more profit than the control segment. In other words, it became a “best practice” to offer customers promotions and incentives at Eddie Bauer. Over the course of a five year period of time, the marketing calendar became saturated with promotions. In fact, it became hard to find an open window where we could add promotions!

Being a huge fan of “A/B” testing, I decided to try something different. I asked my circulation team to choose two customer groups at random from our housefile. One group would receive promotions for the next six months, if the customer was eligible to receive the promotion. The other group would not receive a single promotion for the next six months. At the end of the six month test period, we would determine which strategy yielded the most profit.

At the end of six months, we observed a surprising outcome. The test group that received no promotions spent the exact same amount of money that the group receiving all promotions spent. After calculating the profitability of each test group, it was obvious that Eddie Bauer was making a significant mistake. It appeared that we would lose, at most, five percent of total annual sales, if we backed off of our promotional strategy. Eddie Bauer would be significantly more profitable by minimizing the existing promotional strategy.

In 1999, we backed off of almost all of our housefile promotions. At the end of 1999, the website/catalog division enjoyed the most profitable year in the history of the business.

This experience shaped all of my subsequent analytical work.

Just because we have the tools to measure our activities in real-time doesn’t mean we are truly optimizing business results. In the Eddie Bauer example, we had the analytical tools to measure every single promotion we offered the customer, and we used existing best practices and “A/B” testing strategies. All of it, however, was wrong, costing us $26,000,000 of profit on an annual basis. Simply put, we were measuring “conversion rate”. What actually happened was that we “shifted conversions” out of non-promotional windows, into promotional windows! Had we measured non-promotional windows, we would have noticed that demand decreased.

So, by measuring customer behavior across a six month period of time, we made a significant change to business strategy, one that dramatically increased annual profit.

What does this have to do with Web Analytics?

The overwhelming majority of Web Analytics activity is focused on improving “conversion rate”. Our software tools are calibrated for easy analysis of events. Did a visitor do what we wanted the visitor to do? Did a promotion work? Did a search visitor from a long-tail keyword buy merchandise when they visited the website? All of these questions are easily answered by the Web Analytics expert, the expert simply analyzes an event to determine if the event yielded a favorable outcome.

Offline analytics experts (often called “Business Intelligence” professionals or “SAS Programmers” if they use SAS software to analyze data) frequently analyze business problems from a different perspective. They use whatever data is available, incomplete or comprehensive, to determine if the individual actions taken by a business over time cause a customer to become more loyal.

With that in mind, here are five offline practices I wish online analytics experts would adopt.

Practice #1 = Extend the Conversion Window: Instead of analyzing whether a customer converted within a single visit or session, it makes sense to extend the conversion window and learn whether the customer converted across a period of time. For instance, when I ran Database Marketing at Nordstrom, we learned that our best customers had a 5% conversion rate, when measured on the basis of individual visits, but our best customers nearly achieved a 100% conversion rate when combining website visits and store visits during a month. By extending the conversion window, we realized that we didn’t have website problems, instead, we had loyal customers who used our website as a tool in a multi-channel process.
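
A toy sketch of what extending the window looks like in practice (the data shapes and numbers are invented for illustration, not Nordstrom’s actual figures):

    // Each record is one visit by a customer during the month, via web or store
    var visits = [
      { customer: 'A', channel: 'web',   converted: false },
      { customer: 'A', channel: 'store', converted: true  },
      { customer: 'B', channel: 'web',   converted: false },
      { customer: 'B', channel: 'web',   converted: true  }
    ];

    // Visit-level conversion rate (the traditional web analytics view)
    var visitRate = visits.filter(function (v) { return v.converted; }).length / visits.length;

    // Customer-level conversion rate across all channels for the month
    var didConvert = {};
    visits.forEach(function (v) {
      didConvert[v.customer] = didConvert[v.customer] || v.converted;
    });
    var customers = Object.keys(didConvert);
    var customerRate = customers.filter(function (c) { return didConvert[c]; }).length / customers.length;

    console.log('Visit-level: ' + visitRate);              // 0.5
    console.log('Customer-month level: ' + customerRate);  // 1.0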

Practice #2 = Measure Long-Term Value: Offline analytics practitioners want to know if a series of actions results in long-term profit. In other words, individual conversions are relatively meaningless if, over the course of a year, individual conversions do not yield incremental profit. This is essentially the “Eddie Bauer” example I mentioned at the start of this paper: we learned that individual conversions (customers purchasing via a promo code) yielded increased profit during the promotional period, but generated a loss when measured across a six month timeframe. A generation of Web Analytics experts were trained, largely because of software limitations, to analyze short-term business results, and have not developed the discipline to do what is right for a business across a six month or one year timeframe. Fortunately, Web Analytics practitioners are exceptionally bright, and are easily able to adapt to longer conversion windows.

Practice #3 = Comfort with Incomplete Data: I recently analyzed data for a retailer that was able to tie 70% of store transactions to a name/address. During my presentation, an Executive mentioned that my results must be inaccurate, because I was leaving 30% of the transactions out of my analysis. When I asked the Executive if it would be better to make decisions on incomplete data, or to simply not make any decisions at all until all data is complete and accurate, the Executive acknowledged that inferences from incomplete data are better than inaction caused by data uncertainty. Offline analysts have been dealing with incomplete multi-channel data for decades, and have become good at communicating the benefits and limitations of incomplete data to business leaders. The same opportunity exists for Web Analytics practitioners. Don’t hide from incomplete data! Instead, make confident decisions based on the data that is available, simply communicating what one can and cannot infer from incomplete data.

Practice #4 = Demonstrate What Happens to a Business Five Years From Now Based on Today’s Actions: Believe it or not, this is how I make a living. I use conditional probabilities to show what happens if customers evolve a certain way. Pretend a business had 100 customers in 2009, and 44 of the 100 customers purchase again during 2010. This business must find 56 new customers in 2010 to replace the customers lost during 2010. I can demonstrate what the business will look like in 2015, based on how well the business can retain existing customers or acquire new customers. This type of analysis is the exact opposite of “conversion rate analysis”, because we are looking at the long-term retention/acquisition dynamics that impact every single business. I find that CEOs and CFOs love this type of analysis because, for the first time, they have a window into the future: they actually get to see where the business is heading if things remain as they are today. Better yet, the CEO/CFO can go through “scenario planning” to identify ways to mitigate problems or to capitalize on favorable business trends. The Web Analytics practitioner has the data to do this type of analysis; it is simply a matter of tagging customers or shaping queries in a way that allows the analyst to make inferences that impact long-term customer value.
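
Here is a toy sketch of that projection, using the retention and acquisition numbers from the example above (the recurrence is deliberately simplistic; a real model would use conditional probabilities by customer segment):

    function projectCustomers(startingCustomers, retentionRate, newPerYear, years) {
      var customers = startingCustomers;
      for (var i = 0; i < years; i++) {
        customers = customers * retentionRate + newPerYear;
      }
      return customers;
    }

    // Status quo: 44% retention and 56 new customers per year keeps the business flat
    console.log(projectCustomers(100, 0.44, 56, 5));  // 100

    // Scenario planning: improving retention to 50% grows the base toward 112
    console.log(projectCustomers(100, 0.50, 56, 5));  // roughly 111.6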

Practice #5 = Communicate Better: This probably applies to all analysts, not just Web Analytics experts. Executives are frequently called “HiPPOs” by the Web Analytics community, a term that refers to “Highest Paid Person’s Opinion”. The term can be used in a negative manner, suggesting that the Executive is choosing to not make decisions based on data but rather on opinion or gut feel or instinct or internal politics. I was a member of the Executive team at Nordstrom for more than six years, and I can honestly say that I made far more decisions based on opinion than I made based on sound data and analytics … and I am an analyst by trade!! Too often, the analytics community tells an incomplete story. Once, I witnessed an analytically minded individual who made a compelling argument, demonstrating that e-mail marketing had a better return on investment than catalog marketing. This analyst used the argument to suggest that the company shut down the catalog marketing division. On the surface, the argument made sense. Upon digging into the data a bit more, we learned that 75% of all e-mail addresses were acquired when a catalog shopper was placing an online order, so if we discontinued catalog marketing, we would cut off the source of future e-mail addresses. This is a case where the analyst failed to communicate in an appropriate manner, causing the Executive to not heed the advice of the analyst. Too often, analysts fail to put data and customer findings into a larger context. Total company profit, long-term customer profitability, total company staffing strategies and politics, multi-channel customer dynamics, and Executive goals and objectives all need to be taken into account by the analyst when communicating a data-driven story. When this is done well, the analyst becomes a surrogate member of the Executive team. When this is not done well, the analyst sometimes perceives the Executive to be a “HiPPO”.

These are the five areas into which I’d like to see Web Analytics experts evolve. The Web Analyst has the keys to the future of the business, so it is a matter of getting the Web Analyst to figure out how to use those keys to unlock the future potential of a business. Based on what I have witnessed during the past forty months of multi-channel consulting, I am very confident that Web Analytics practitioners can combine offline techniques with online analytics. The combination of offline techniques and online analytics yields a highly-valued analyst that Executives depend upon to make good business decisions!

Conferences/Community

X Change 2010 Conversation Topics Announced!

I’m excited to announce that most of the 2010 X Change huddle topics and leaders have now been announced on our web site. If you’ve heard about the X Change and have been wondering what we will be talking about, please go have a look at the 2010 topics! We are more or less talking about everything … mobile, social, tagging, analysis, big data, testing, … you name it and we will be talking about it in Monterey September 21st and 22nd.

Seriously. We have Kim Weller from ESPN talking about Digital Convergence, Kelly Olin from Nike talking about Measuring Global Brands, Dylan Lewis from Intuit talking about Testing, Lynn Lanphier from Best Buy talking about Analytics for Retailers, and 16 more amazing minds talking about the pressing topics of our day.

Are you ready to join us yet?

Just in case you’re not, please take a look at some of the amazing practitioners we have leading this year’s conversations. I consider it an honor to be co-producing a conference with so many brilliant web analytics practice leaders coming to join us and make the event happen. Folks like Shari Cleary from MTV Networks, Blandon Casanave from NBC Universal, Bob Page from eBay, and Adam Greco (yes, THAT Adam Greco) from Salesforce.com!

I know, amazing!

We will be adding a few more topics and conversation leaders this week so bookmark those pages and keep checking in. And by all means, if you have any questions about whether X Change is right for you, what the event is like, and what you can expect to take back to your boss after the conference, please don’t hesitate to contact me or one of my partners.

Don’t forget to read about this year’s exciting keynote with our “three VPs” as well. Shari, Joe Megibow from Expedia.com, and Steve Bernstein from Paypal will be talking about the career path from analyst to Vice President and the types of challenges they face heading analytics organizations as part of their companies’ senior leadership teams.

We are waaaay ahead on registrations this year compared to previous years and so a sell-out is more or less assured at this point. Don’t get left out — register right away and ensure your seat at the table at X Change 2010!

Adobe Analytics, General

X+ Page Visits

[I apologize in advance for such a horrible blog post title, but I couldn’t think of a succinct way to describe what I intend to cover. Maybe one of you out there will have a better suggestion after reading the post!]

If your website is like many I have seen, you get a fair number of daily visits and unique visitors, but it may be the case that a large number of your visitors don’t go beyond the first few pages of your site. When I see this, I get very frustrated when I think about all I have done to get people to my site and to optimize it for my designated conversion goals. But as web analysts, we need to put our emotions to the side and get down to the numbers. Therefore, one of the things I like to do is quantify how big a problem my website has with visitors who view only a small number of pages. In this post I will show you how to do that so you can begin to take action on the issue.

The Setup
Before I get too deep into this topic, I’d like to set up the scenario since I think this will help it make more sense. Let’s say that the main purpose of your website is to get visitors to view and complete lead generation forms. Let’s also say that on your website you see that your most significant drop-off takes place after the third page of each visit. In this situation, you might have lots of Visits and relatively few Form Completes, so that your Conversion Funnel looks like this:

As you can see in this funnel, there is a pretty significant gap between Visits and Form Views. While that presents a huge optimization opportunity, I like to break massive efforts like this into smaller chunks that I can work towards (or, as Avinash points out, Micro-Conversions). Since we noted earlier that a large portion of visits exit after three pages, wouldn’t it be nice if we could bridge the gap between our Visits metric and our Form Views metric in the funnel above? Having a middle ground between Visits and Form Views might get our team to think about ways to turn more Visits into Visits of four pages or more which, depending upon your site, might be a step in the right direction. In many sites I have worked with, there is a direct correlation between visitors viewing more pages and higher form conversion rates.

X+ Visits Explained
Now that we have set up the situation, it becomes a bit easier to understand what I mean by “X+ Visits”: I am really saying that you can set a new Success Event metric which represents how many Visits your website gets in which the visitor viewed “X” or more pages. What “X” represents is up to you and should be based upon your own data. In this example, we will call it “4+ Page Visits”, meaning the number of Visits in which Visitors viewed four or more pages.

The implementation of this is very easy for any good JavaScript developer since all that is involved is setting a Success Event as soon as each Visitor hits the fourth page of the session. Once you have done this, you can update the conversion funnel shown above to look like this:
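For reference, here is a minimal sketch of what that tagging might look like. It assumes the standard SiteCatalyst “s” object is already loaded on the page; the cookie name and the event slot (event10) are illustrative choices rather than part of any standard implementation, and a session-scoped cookie only approximates SiteCatalyst’s visit definition:

    // Minimal sketch: count pages viewed this session in a cookie and fire
    // a Success Event on the fourth page view. The cookie name and event10
    // are illustrative; a session cookie only approximates a "Visit".
    function trackFourPlusPageVisit() {
      var name = 'x_page_ct';
      var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=(\\d+)'));
      var count = (match ? parseInt(match[1], 10) : 0) + 1;
      document.cookie = name + '=' + count + '; path=/'; // session-scoped cookie
      if (count === 4) {
        // Fire exactly once per visit, on the fourth page view
        s.events = s.events ? s.events + ',event10' : 'event10';
      }
    }
    trackFourPlusPageVisit();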

While this may not seem like much of a difference, here are some cool things you can do once you have this implemented:

  • Create a Calculated Metric to divide 4+ Page Visits by Total Visits to see what % make it to four pages and trend this over time to see if you are getting better or worse

  • Use the filter feature of the conversion funnel to see your funnel by Visit Number or Traffic Source (i.e. SEO) to see how each impacts the mix of Visits and Visits of four or more pages

  • Create a calculated metric for the inverse (in this case three pages & fewer) by subtracting 4+ Page Visits from Visits. I also like to pass both to Excel using the ExcelClient to create a stacked graph like this to show progress

Final Thoughts
There you have it. If you find that you consistently have significant website drop-off after a few pages, hopefully this new metric will help you better dissect what is happening so you can “Micro Conversion” your way to more Macro Conversions!

Conferences/Community

Guest Post: Jason Thompson, Analysis Exchange Mentor

(This is a guest post from Jason Thompson, one of the great Analysis Exchange mentors who have been working to help us create an entirely new way to train web analysts while also providing free analytics to nonprofit organizations around the world. Jason blogs at http://emptymind.org and can be found banging around Twitter @usujason.

We are offering a complimentary pass to this year’s X Change conference in Monterey, California to one mentor and one student who distinguish themselves in the program.)

There is a concept in Zen Buddhism called Shoshin, meaning “beginner’s mind”. This concept refers to being open and eager, or, as Shunryu Suzuki puts it, “In the beginner’s mind there are many possibilities, in the expert’s mind there are few.”

When I first stepped foot inside The Analysis Exchange, I did so as a mentor, or, in my mind, “the expert.” Sure, I had a warm, fuzzy feeling deep down inside about giving back to the community, sharing freely of my knowledge, and showing my altruistic side, but really I was there to teach; after all, I was the expert.

For those of you who may not be familiar with The Analysis Exchange, let me take a step back. The Analysis Exchange is a unique community of non-profit organizations, web analytics beginners, and industry experts, each willing to give of their time in order to reap their own rewards. For organizations, they gain access to free resources that help analyze data, train future analysts, and establish measurement road maps. The students, well, they get to attend school for free and learn on the job while they are mentored by the industry expert — not to mention it’s a great bullet-point on their resume. The mentors have the opportunity to share their skills, help shape the future of the industry, and yes, get a nice stroke to their ego.

It was not long into my first project that I was reminded of why Shoshin is so important. I was greeted by a student and an organization who were open to any possibility and, best of all, were eager and excited about what web analytics had in store for them. Their child-like exuberance rekindled a flame inside me that had slowly faded away as the years of segmenting data passed by.

The team quickly bonded, and in three weeks we delivered an executive presentation highlighting low-hanging fruit that the organization could quickly change to realize huge results. Needless to say, this made our project manager look like a rockstar. It didn’t take much: a barebones Google Analytics implementation and a student full of bright ideas.

As extra credit, we delivered an implementation guide that the organization could use to beef up their data collection and an analytics road map to help successfully guide them down a path of measurement maturity.

I came to The Analysis Exchange as the expert but by the end, I had become the student. Those with the beginner’s mind had much to teach and I am grateful for the gift of a rekindled passion that they gave me.

(I am humbled by Jason’s description of his experience. Will you join Jason and make a difference in the world by mentoring a web analytics student and helping a great organization?)

Analytics Strategy, General

Are you looking for experienced web analysts?

Anyone who has read my blog for long knows that I am passionate about two things in web analytics: process and people. Process is the glue that holds all the hard work we do as analysts together and allows our effort to translate into tangible business value. But without a doubt it is the people who are absolutely critical to any business’s ability to compete and succeed on web analytics.

Unfortunately, people, especially really good ones, are incredibly hard to find. So much so that my partners and I have invested heavily in creating an entirely new way for novice and veteran analytics practitioners alike to gain valuable “hands-on” experience using data to answer business questions, The Analysis Exchange.

While the Analysis Exchange has exceeded every single short-term milestone we have established for the effort, it has long been clear to my partners and me that training alone is not enough to satisfy the immediate needs of businesses working to take advantage of their existing investment in web analytics. Companies need analytical talent now: not a year from now, not in six months, right now.

Why the urgency? Myriad reasons. The money has been spent on technology, the clock is ticking, the promises have been made, offline revenues are in decline, and the company’s digital channels are the hope for the future and the difference between profitability and loss.

The web analytics promise is real — companies that have become adept at generating analytically-driven insights and then translating those insights into sound business decisions have staked a clear competitive advantage. The giants of our industry — brilliant people like Joe Megibow, Dylan Lewis, Shari Cleary, and Lynn Lanphier <plug>all of whom are coming to the X Change conference in September, are you?</plug> — have not only determined the value of people but have also figured out how to convince management of that value.

Have you? Most companies have not.

Most companies persist in their belief that web and digital analytics is something that they can do “part time” and still have the successes that Intuit, Expedia, MTV, Best Buy, and others gain by hiring brilliant people, giving them clear direction, and recognizing the value of the analytical output they produce. Despite being well-intentioned, far too many managers still believe that software alone will provide insights and make recommendations.

But I digress.

Because we at Analytics Demystified believe in people and process so strongly, and because we are pretty confident in our consulting as it relates to process, we have decided to put our money where our mouths are and start helping companies fill their open positions for “web analyst, senior.” Today we are extremely proud to announce our first-of-its-kind partnership with the web analytics community’s leading recruiting firm, IQ Workforce.

Working directly with IQ Workforce CEO Corry Prohens and his team, Analytics Demystified has crafted a “one-two” punch to help speed the process of finding, vetting, and hiring the kind of deep talent and teams required to take complete advantage of any investment in digital measurement technology. The Demystified partners and IQ Workforce will help you determine exactly which roles you need to fill, what strengths the ideal candidate will have, and how hired resources will fit into the organization in a way that both creates business value and provides a satisfying experience for the analyst (which has a surprisingly positive impact on retention!)

In essence, Analytics Demystified, with our 30+ years of experience in web analytics, will sit on your hiring panel and help you find and hire the critical difference between “web analytics as a cost center” and “web analytics as a profit center.”

Did we mention we will do it for a fixed price and in a way that allows most companies to circumvent HR’s aversion to “outside help?”

If you’re looking for an analytics guru for your organization, give us a call. We are more than happy to explain how this partnership creates a dramatic advantage for most companies, and would love to talk with you about our business and our partners at IQ Workforce. In the meantime please have a look at our press release on the announcement and more details about the offering:

Thanks to Corry and his team for making this idea a reality. On behalf of IQ Workforce and the Demystified Partners we look forward to helping you with your staffing needs.

Adobe Analytics, General

Product Pathing

In my last post, I discussed how you can see how much money you are leaving on the table when it comes to the online shopping cart. While I still have the shopping cart on my mind, I thought I would follow this up with a concept I call Product Pathing. Product Pathing answers one of the questions I get from time to time: How can I see the order in which website visitors are looking at my products or product categories? The following will provide details on why you might want to do this and how to implement it.

Why Product Pathing?
So why would you want to implement Product Pathing? Here are a few reasons:

  1. Understanding how visitors jump between products or product categories, which shows you how visitors navigate your product catalog
  2. Seeing what products are viewed concurrently which helps you understand what cross-sell/up-sell opportunities might exist
  3. If one of your website goals is to get visitors to view multiple products, you can measure how you are doing against that goal

There may be more reasons, but the preceding items should help you build a case for implementing this, especially since it is not difficult to do.

Implementing Product Pathing
So the standard way to see the answers to the questions above is to use page name-based Pathing reports. You might find the page name of a particular product and then look at Pathing reports to see what visitors did after viewing the product. However, I find that this approach does not work because there are so many pages on the website that it is impossible to sift through them all and isolate just product pages. Therefore, I am going to propose the following alternative solution:

  1. On all Product View Pages, in addition to setting a Product View Success Event and the Products Variable, pass the Product Name (or ID if that is all you have) to a new “Product” Traffic Variable (sProp). Be sure that you pass nothing but the Product Name (or ID) to this sProp.
  2. After that is done, enable Pathing on this sProp (a rough sketch of the tagging follows below)
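As a rough sketch only (assuming the standard SiteCatalyst “s” object; the event number and sProp slot below are illustrative choices, not prescriptions), the tagging on a Product View page might look like this:

    // Sketch of the Product View tagging described above. event2 and prop11
    // are illustrative slots; use whatever is free in your report suite.
    function trackProductView(productName) {
      s.events = 'event2';            // Product View Success Event
      s.products = ';' + productName; // Products Variable
      s.prop11 = productName;         // new "Product" sProp: the Product Name
                                      // and nothing else, to keep Pathing clean
    }
    trackProductView('Product X');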

Believe it or not, that is all you have to do! By passing only the Product Name (or ID) to this new sProp, you will have a clean, new sProp that allows you to see Pathing reports on only Products like this:

Moreover, keep in mind that you have access to all Pathing reports so you get the bonus benefits of seeing the following as well:

  • How often visitors looked at Product X and then didn’t look at any other Products (Exit % – 42.32% in this case)
  • All paths containing Product X (Full Paths Report)
  • What Products visitors see (if any) between Product X and Product Z (Pathfinder Report)
  • How often visitors saw Product X and then Product Y (Fallout Report)
  • Which Products were viewed first the most often (Entries) or last the most often (Exits)

A Few Other Cool Uses of Product Pathing
In addition to this, there are a few other cool things you can do:

  • Instead of passing Product Names (or IDs), you can pass in Product Categories to see the same data at a higher level
  • Instead of passing Product Name values at the Product View Success Event, you can set an additional sProp in which you pass Product Names when the Cart Add Event is set to see the order in which visitors add products to the shopping cart

Analytics Strategy

Does your data quality still suck?

Years ago Google’s Analytics Evangelist Avinash Kaushik told everyone “data quality sucks, get over it” which at the time was quite the funny and controversial thing to say.  Among other things Mr. Kaushik encouraged his readers to “resist the urge to dig deep” to understand data-related problems, to “assume a level of comfort with the data” and to focus more on trends and less on absolutes.

At the time this advice seemed good. Any number of companies were in the midst of switching vendors back in 2006 (a trend that has noticeably declined), and so guidance not to stress out over the differences observed between old system “A” and new system “B” was good, as was his encouragement to spend more time focusing on data quality in key areas (checkout, carts, etc.)

Unfortunately times have changed.

Since 2006 we have seen a slow but steady increase in the prominence that digitally collected data has within businesses of all sizes. Now in 2010, more senior managers, Vice Presidents, and CEOs are incorporating both qualitative and quantitative data collected from web, mobile, and social sites than ever before. Among our clients we have seen a profound shift from “nice to have” to “critical” when it comes to data flowing through Omniture, Coremetrics, Unica, Google Analytics and other systems, and slowly web analytics is becoming an embedded component of business decision making.

While this shift has far-reaching implications, lately at Analytics Demystified we have been looking more closely at how we can help our clients not “get over” the “suckiness” of data quality and actually do something about it. We are doing this for one simple reason: senior leadership doesn’t want a glib response to data quality issues; they want as high a level of accuracy as possible and concrete answers for why that accuracy isn’t forthcoming.

Don’t believe me? The next time your boss asks about the quality of the numbers you produce look them squarely in the eye and repeat Mr. Kaushik’s words, “Well Bob the data quality sucks and so you should just get over it, okay?”

When you’re done you can call my friend Corry Prohens to help you find a new job.

The alternative is, of course, to actually pay attention to your data’s quality and work diligently to incrementally improve data collection processes. Rather than be lazy about the very foundation of all of your valuable work (and the high-quality analysis you’re working to drive into the business), you can do a few simple things designed to make your data “less sucky” and thusly more valuable.

And what are those things? Thanks to our friends at ObservePoint we have authored a short white paper on this exact subject! Titled “When More is Not Better: Page Tags” and subtitled “The Dramatic Proliferation of Script-based Tagging and the Resulting Need for a Chief Data Officer” (okay, not my best title, I admit it) the paper outlines the business processes and technologies required to develop a little more trust and faith in your digitally collected data.

The paper is a free download from ObservePoint but you will need to trade some information with them. I can assure you Rob Seolas and his team are fine folks and given that they have a tendency to send out sweet USB devices to prospects there are worse things than having someone from ObservePoint call.

Get your copy of “When More is Not Better: Page Tags” at the ObservePoint web site today!

If you’ve read the paper I welcome your comments. While we recognize that few companies are going to appoint a Chief Data Officer to manage their digital data quality, we hope that our readers understand that the point is not the job title but rather the associated work. Our thesis is that as the push towards digital continues, those companies that have (and can communicate) a high level of trust in their data will gain a competitive advantage, and in a world where the competition is always only a click away, who doesn’t want every advantage they can create?

Social Media

Hubspot: 2010 Facebook Page Marketing Guide

Hubspot released a new ebook a couple of weeks ago, compiled and edited by the Who’s Blogging What folk, with yours truly contributing the Facebook measurement chapter.

It’s behind a registration page, but it’s a good 30-page read. Topics covered include:

  • Creating a Facebook Page
  • Examples of Effective Pages
  • Six Ways to Get Found on Facebook
  • Getting People to “Like” Your Facebook Page
  • Developing Content/Inbound Marketing for Facebook
  • Leveraging Facebook for Ecommerce
  • Facebook’s Potential to Make Sales Become Viral
  • Analyzing Facebook Traffic

It was a fun little project to contribute to! Check it out!

Adobe Analytics, General

Money Left on the Table

(Estimated Time to Read this Post = 3.5 Minutes)

Imagine that you are in a retail store and you grab a bunch of items, bring them up to the counter and, just as you are about to pay, decide to push a few of the items off to the side and not include them as part of your purchase. While this may not happen too often in real life, it happens quite often in eCommerce. If you run a retail website, these discarded items can add up quickly! In this post, I am going to show you how to quantify how much money you are leaving on the table. For those not involved in a Retail site, I will also do my best to show how this concept can be applied to non-Retail sites.

The Standard Cart Process
So before we get to the more advanced stuff, let’s make sure we are all on the same page when it comes to the eCommerce shopping cart process. Normally, here’s how it works:

  1. Visitors view products on your website and you capture this with a Product View Success Event and store the products viewed in the Products Variable.
  2. At some point, visitors add items to the shopping cart and you set the Cart Add Success Event and the Products Variable with the product ID or name(s).
  3. Hopefully, visitors get to the Checkout Page and you set the Checkout Success Event and the Products Variable with the Product ID or name(s).
  4. Finally, the order is completed and you set the Purchase Success Event, which sets the Orders, Units, and Revenue Success Events for each Product purchased (steps 2 and 4 are sketched below).
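For reference, steps 2 and 4 might be tagged roughly as follows. This is a sketch only, assuming the standard SiteCatalyst “s” object; the product IDs and values are made up:

    // Sketch of Cart Add (step 2) and Purchase (step 4) tagging, called on
    // the cart page and the order confirmation page, respectively.
    function trackCartAdd(productId) {
      s.events = 'scAdd';           // Cart Add Success Event
      s.products = ';' + productId; // Products Variable
    }
    function trackPurchase(productId, units, revenue) {
      s.events = 'purchase';        // sets Orders, Units and Revenue
      // Products syntax: category;product;quantity;price
      s.products = ';' + productId + ';' + units + ';' + revenue;
    }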

Hopefully this is straightforward and if you sell online you have successfully implemented these steps on your site. If so, you are ready to take things to the next level and do some stuff that is not traditionally done as part of standard eCommerce implementations.

How Much $$$ Left on the Table?
As the post name implies, in this scenario we would like to see how much $$$ we are losing online by website visitors leaving items in their Cart. If you think back to the initial scenario above, this is equivalent to the Retail store adding up how much they could have made that day if no one had left stuff on the counter when they were checking out. In addition to seeing how much $$$ is being missed out on, the store owner would probably want to know what products are being left to see if there are any patterns he/she could identify. For example, it may be the case that items over $100 are left more often than products under $100, etc…

Well, the good news is that if you are doing business online, this is much easier, and you can see a lot more data on the items being abandoned and those who abandon them. So here’s how you do it:

  • When a website visitor adds one or more products to the shopping cart, in addition to setting the Cart Add Success Event (scAdd), you should set a currency Incrementer Event with the dollar amount associated with the items added (see the sketch after this list). As a refresher, an Incrementer Event allows you to pass in a numeric/currency value to a Success Event instead of using it as a counter. By passing in the amount associated with the items added to the Cart, you will have a new metric which represents the total potential revenue you could have made had no one left anything in the cart. I call this new metric $$$ Added to Cart.
  • Once this is done, you can compare this “$$$ Added to Cart” metric with your Revenue metric, either in a conversion funnel report or in a normal Conversion Variable (eVar) report by creating a Calculated Metric dividing the two metrics to see what % of $$$ Added to Cart turns into Revenue.
  • If you want to be even more particular, you can set another incrementer event with the $$$ that the visitor has in the Cart at the time of Checkout. However, if you find that you don’t have much loss between Cart Add and Checkout or between Checkout and Purchase, this may prove to be unnecessary.
  • Finally, since you are setting the Products variable with the Cart Add event already, when you compare these two metrics, you can easily break it down by Product (or any other eVar variables you have set previously).
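Here is a hedged sketch of the first bullet above. It assumes the standard SiteCatalyst “s” object, and the event number (event10) is illustrative; whichever event you use must be configured as a currency/numeric event in the Admin Console:

    // Sketch of the "$$$ Added to Cart" incrementer. event10 is illustrative
    // and must be set up as a currency event in the Admin Console.
    function trackCartAddWithValue(productId, units, unitPrice) {
      var amount = (units * unitPrice).toFixed(2);
      s.events = 'scAdd,event10';
      // Products syntax: category;product;quantity;price;events -- the
      // incrementer value rides in the product string instead of counting +1
      s.products = ';' + productId + ';;;event10=' + amount;
    }
    trackCartAddWithValue('ABC123', 2, 24.99);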

Beyond Retail
As promised, I wanted to touch upon a few ways you could use this same concept if you manage a non-Retail website. Here are a few that come to mind:

  1. On a Financial Services site, pass in the total loan amount a person is requesting and compare that to how much they are eventually loaned.
  2. On a Media site, pass in the total amount of advertising your site could have earned if all ads were clicked.
  3. On an Auto site, pass in the total value of cars visitors configure to see your max potential.
  4. On a Lead Generation site, pass in a value for every visitor who starts completing a lead form.
  5. On a Travel site, pass in the total value of trips planned online and compare it to the amount actually booked.
  6. On a Manufacturing site, pass in the total Bill of Materials value the visitor has added.

As you can see, the concept of seeing what your high-end potential is and comparing it to actual performance can be applied to almost any website and gives you another data point for comparison. I like using this metric better than Visits or Unique Visitors since it is not realistic to expect that you are going to convert every person who comes to your site. However, once a visitor takes some more deliberate actions, they are qualifying themselves, and therefore capturing their potential revenue gives you a high but realistic goal to strive for and a KPI that you can use to see how you are doing over time.

Final Thoughts
So there you have it. Just a quick, easy way to add some more data to your all-important shopping cart process. In general, I feel like Incrementer success events are under-utilized by SiteCatalyst users so hopefully this example helps to get your mind working in new and inventive ways to use them…

Analysis, Analytics Strategy, Social Media

Integrated View of Visitors = Multiple Data Sources

I attended the Foresee Results user summit last month, and John Lovett of Analytics Demystified was the keynote speaker. It’s a credit to my general lack of organization that I wasn’t aware he was going to be speaking, much less keynoting!

John showed this diagram when discussing the importance of recognizing your capabilities:

The diagram starts to get at the never-ending quest to obtain a “360 degree customer view.” A persistent misperception among marketers when it comes to web analytics is that behavioral data alone can provide a comprehensive view of the customer. It really can’t — force your customers to behave in convoluted ways and then only focus on behavioral data, and you can draw some crazily erroneous conclusions (“Our customers appear to visit our web site and then call us multiple times to resolve a single issue. They must like to have a lot of interactions with us!”).

Combining multiple data sources — behavioral and attitudinal — is important. As it happened, Larry Freed, the Foresee Results CEO, had a diagram that came at the same idea:

This diagram was titled “Analytics Maturity.” It’s true — slapping Google Analytics on your web site (behavioral data) is cheap and easy. It takes more effort to actually capture voice-of-the-customer (attitudinal) data; even if it’s with a “free” tool like iPerceptions 4Q, there is still more effort required to ensure that the data being captured is valid and to analyze any of the powerful open-ended feedback that such surveys provide. Integrating behavioral and attitudinal data from two sources is tricky enough, not to mention integrating that data with your e-mail, CRM, marketing automation, and ERP systems and third-party data sources that provide demographic data!

It’s a fun and challenging world we live in as analysts, isn’t it?

(On the completely off-topic front: I did snag 45 minutes one afternoon to walk around the University of Michigan campus a bit, as the conference was hosted at the Ross School of Business; a handful of pictures from that moseying are posted over on Flickr.)

Analytics Strategy

Be-Still My Analytical Heart

Okay…I’ve been quiet about the Coremetrics acquisition by IBM for long enough now. While the dust still won’t settle until sometime in Q3’10, when this deal passes FTC scrutiny, I’m compelled to weigh in and offer my $.02 USD, mainly because there’s been some good dialog in the blogosphere from people I respect like Eric, Joe Stanhope, Akin, and, more recently, Brian Clifton.

I’ll take a slightly different approach and use the acquisition to talk about the state of the web analytics marketplace. For starters, let me just say that this acquisition was inevitable. So too will Webtrends be acquired by some player looking to incorporate metrics into their overarching set of technology capabilities. And as I blogged earlier this spring, yet another even bigger fish will eat the existing big fish and we’ll utter oooh’s and ahhh’s as the analytics technology market evolves into a vital organ for all businesses with a heartbeat. While not immune to arrhythmia, this course of events shouldn’t really take anyone by surprise. I’ve been saying this for a while now and even penned “Web Analytics is Destined to Become an Integrated Service” back in May 2009 when I wrote the Forrester US Web Analytics Forecast 2008-2014 (subscription required). I’ve been advocating web analytics as a function within the marketing organization, which seems to be a logical orientation. However, it’s interesting that the consumption of analytical technologies has come from a smattering of different perspectives.

Here’s how the post-acquisition landscape looks:

Creatively Speaking…

Adobe’s acquisition of Omniture undoubtedly took many by surprise (myself included – although you’re never allowed to admit surprise as an analyst). The promise Adobe made to investors was that they would incorporate the market-leading web analytics technology into the creative life-cycle by enabling measurement at the point of content creation. Perhaps that’s not exactly how they positioned it, but that was my impression and they’re now executing on that promise. Say what you want about acquisitions and the slow-moving integration process, but Creative Suite 5 debuted in April, just six short months after the deal closed, with measurement hooks from FlashPro and Dreamweaver into both SiteCatalyst and Test & Target. They’ve also accomplished this remarkable feat using a visual interface that gives content editors and non-power users the ability to begin measuring their digital assets. This utilization of analytics places measurement at the operational level, yet by and large it’s still within the marketing group.

The Marketer’s Toolbox…

Enter Unica with their rebranded Marketing Innovation product suite where NetInsight (formerly Sane Solutions) web analytics sits at the core. While both Omniture and Coremetrics made pre-acquisition strides to amass a truly effective online marketing suite, they were merely playing second fiddle to Unica Campaign, Interact and Marketing Platform solutions. Unica is widely acclaimed as a leading Campaign Management tool and sits proudly in the marketing departments across many an enterprise business. They’ve worked web analytics into the DNA of their overall marketing perspective and use it to power the automation and decisioning that many organizations strive for with lust and admiration. Their utilization of analytics really does empower analytics as a lynchpin for integrated marketing.

Shoppers World…

With speculation still swirling about the how’s and why’s of IBM’s intended use of Coremetrics, it’s tough to ignore Coremetrics’ strength in the retail vertical. While Coremetrics has an impressive client base outside of retail, including publishers and financial institutions among others, they’ve clearly got some good mojo going with their triple-A retail clients. Just thinking of how Big Blue will assimilate the nimble teams of relentless Coremetrics marketers in San Mateo and Texas makes me slightly nervous. Not because I expect any loss of focus by the Coremetrics team on their dedication to client support or on the delivery of the leading analytical capabilities they offer; rather, where will this newly acquired asset live within the IBM estate? The way I see it, two possible scenarios can play out here:

1. First is the scenario that Akin speculates upon, whereby Coremetrics is folded into IBM’s Websphere group and serves to illuminate the value of customer interactions within website platforms across IBM’s customer base. This would greatly benefit Websphere customers, although it would narrowly define a finite application of a technology that is so much bigger than just online commerce.

Or:

2. The scenario that Eric envisions (and one that I believe would benefit our industry exponentially) is the one where IBM becomes the “business analytics” juggernaut in the enterprise. If this were to occur, IBM would need to integrate its SPSS and Cognos acquisitions to get really crafty about delivering extremely high-value digital insights.

These are two very different outcomes and both speculative, but I’m rooting for the latter simply because it has the potential to push analytics so much further along. My sources tell me that some long-time IBM’ers feel this way too. One confidant with access to IBM brass even shared with me that internally the acquisition will be deemed a failure by some at IBM if Coremetrics isn’t integrated with SPSS and Cognos. That’s great news, because wholesale failure of business analytics isn’t an option.

Switzerland…

So here we have Webtrends as the only standalone web analytics player remaining from the set of truly original US-based technologies. They’re doing a good job of playing the part of Switzerland as they not-so-quietly establish a platform of Open Analytics whereby data flows in and out of the interface, fueling other operations around the business. While this is not the same as an integrated approach, Webtrends is taking a strong stance on have-it-your-way analytics. Their open APIs and REST URLs make it easy to leverage their data collection and pump data to any application within the enterprise. Thus, they too offer an integrated approach, yet do so by maintaining a position that supports rather than delivers the adjacent marketing functions.

The Low End Theory…

Any post about the state of the analytics marketplace would be remiss if Google Analytics wasn’t included in the conversation. I include the Big Googley in the Low End Theory – not because they’re trailing – but because they’re sneaky smart. Just in case you haven’t been watching, since Google acquired Urchin Software, GA has been quietly amassing millions of installations across businesses large and small adding to the democratization of web analytics. I’d argue that they’re not doing this in a concerted enterprise-wide way, but they are probably gaining the most ground across the enterprise by sheer adoption and hands-on utilization. What this means is that pockets of users are deploying Google Analytics for very focused use of the data and the organization is becoming more accustomed to seeing GA data and using it to make key decisions in their day-to-day operations.

Many other analytics programs are delivering similar value to business users, yet in an extremely isolated manner with tools like KissMetrics, Twitalyzer, Visible Measures and Radian6 just to name a few. This is truly the low end theory because the data is rarely seen by anyone outside the marketing group, but it’s driving key activity around specific marketing functions without the larger business really taking note. Think grassroots baby – under the radar – with potential super smartie effectiveness.

Can Marketing Come from the Heart?

By now you should be asking yourself: so where’s this all going? Regardless of how each of the companies I described above fits into a company’s overall business, I think that we can all agree that analytics is about understanding business performance. Here is where Eric’s vision of the Coming Revolution in Web Analytics fits into the story, and the quietly powerful behemoth that’s already penetrated the enterprise garden sits in wait down in Cary, North Carolina. Whether it’s SAS, another player, or an amalgamation of services from multiple players – analytics needs to be at the heart of the organization. Here’s where my analogy pays off…because if this is to happen, then data becomes the lifeblood of the enterprise and analytics allows companies to relate to their customers and offer more tuned-in and relevant products and services. Marketing should control this blood flow but use it to power the brain and the working limbs of the organization. While this may start to look like Business Intelligence, I believe it’s different because it requires real-time information, automated decisioning and ultimately creativity. These are qualities that I have yet to see from a BI tool. But maybe I’m naive.

Before this diatribe gets any longer and you, dear reader, need resuscitation, I’ll call it quits. But I’ll offer fair warning that this is just the beginning of my thoughts on the matter and there’s more to follow. I’d also love to hear what you think.

Adobe Analytics, General

Traffic Correlation Hack

(Estimated Time to Read this Post = 3.5 Minutes)

From time to time, I will see people talking about some of the limitations of SiteCatalyst Traffic Data Correlations on Twitter and blogs. Below is the most common request I see:

For those of you not familiar with Correlations, they are a way to break one Traffic Variable (sProp) down by another sProp, assuming that both are set on the same page (image request). Unfortunately, in SiteCatalyst, the only metric you can see for Correlations is Page Views. In the example below, I can use a Correlation to see what page a visitor was on when searching for a specific phrase in the internal search box:

While this is handy, what if I wanted to see in how many Visits people searched for this phrase from each page or, better yet, how many Daily, Weekly, or Monthly Unique Visitors did this? Currently, the only way to get at this information is to use DataWarehouse or Omniture Discover. However, the following section will show you a handy little hack to get this information directly from SiteCatalyst…

The Hack
So in this scenario, we will say that we want to see how many Weekly Unique Visitors searched for the phrase above from each page on our website (as in the example above). Here is the trick to doing this:

  1. Just as in a Correlation, you must have both data points you want to correlate available on the same page. In this example, it is Previous Page Name (from the GetPreviousValue JavaScript Plug-in) and the Internal Search phrase
  2. Once you have the two data points available, create a new Traffic Variable (sProp) and concatenate the two values using a separator (see the sketch after this list). In this example, if the user searched for the above phrase from the Japan Customers page, the value would be “キーワード検索:SFDC:jp:customers”
  3. After you have passed the concatenated value to the sProp, contact your Omniture Account Manager and tell him/her that you would like to enable Visits, Daily Unique Visitors, Weekly Unique Visitors, etc… on that sProp
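As a hedged sketch (the getPreviousValue plug-in signature varies by s_code version, and the sProp slots below, prop5 for the search phrase and prop15 for the combination, are assumptions, as is the “:” separator):

    // Sketch of step 2: concatenate the internal search phrase with the
    // previous page name into one new sProp. getPreviousValue's signature
    // varies by s_code version; prop5 and prop15 are assumed slots.
    var prevPage = s.getPreviousValue(s.pageName, 'gpv_pn', '');
    if (s.prop5 && prevPage) {
      s.prop15 = s.prop5 + ':' + prevPage; // e.g. "pricing:SFDC:jp:customers"
    }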

When all that is done, using the example above, you will have a report that looks like this:

The confusing part of this hack is that you won’t actually use a Correlation report anymore. You will no longer open one report and break it down by another report; instead you will simply open the new sProp report and add the metrics you want to see. In the report above, I have added Page Views, Visits and Weekly Unique Visitors and searched on the specific Internal Search phrase in which I am interested.

However, as you can imagine, you could have a lot of unique values in this report. One caveat is that you need to make sure you don’t exceed the SiteCatalyst variable limit which is 500,000 unique values per month. In this example, you would want to make sure that the combination of Page Name and Search Term does not exceed 500,000 per month, but for most sites this shouldn’t be a problem.

Another thing to keep in mind is that the above scenario is just one example. This “hack” is by no means limited to Internal Search Terms and Pages. Here are some other examples of what you can do with this “hack:”

  • Unique Visitors/Visits who saw a specific page by visit number (i.e. Home Page:Visit 1, Home Page: Visit 2, Demo Page: Visit 1, etc…)
  • Unique Visitors/Visits who searched for a term by visit number (i.e. Pricing: Visit 1, Pricing: Visit 2, Demo: Visit1, etc…)
  • Unique Visitors/Visits who saw a specific page by country (i.e. Home Page:US, Home Page:UK, Demo Page:US, etc…)
  • Unique Visitors/Visits viewing a specific product page by search engine (i.e. Product X:Google, Product Y: Google, Product X:Yahoo, etc…)
  • Unique Visitors/Visits viewing a specific product page by search keyword (i.e. Product X:walmart, Product Y:walmart, Product X: walmart.com, etc…)

These are just a few examples and my advice is to look at whatever you are currently correlating (in Admin Console) and determine the items for which you would like to see Visits and Unique Visitors.

Advanced Users
For advanced users out there, I wanted to call out a few more things you can do with this concept:

  • If you don’t have a lot of unique combinations, you can add multiple correlations to the same sProp and use the search box to find the item combinations you need. For example, you may use the Internal Search & Page example shown above, but also store Page Name & Visit Number combinations in the same sProp. As long as all of your data is underneath the 500,000 unique value monthly limit, you are ok. Alternatively, you can use multiple sProps, assuming you have enough variables remaining in your contract that can have Visits and Unique Visitors enabled.
  • With this alternative approach, you can also view Trending Reports for each of your combinations if you enable Pathing. This means that you can trend Visits or Unique Visitors for any combination (i.e. Weekly Unique Visitors who view Product Page X on Visit Number 1 over the last 90 days). This temporarily solves the following Idea Exchange item (Allow Trended Versions of Correlation Reports)

Final Thoughts
So there you have it. Just a quick “hack” that allows you to get a bit more information out of SiteCatalyst. In the future, perhaps Omniture will allow you to see Visits and Unique Visitors right from the normal Correlation interface (please vote for this here: Provide Visits and Unique Visitors in Correlation Reports), but in the meantime, hopefully this will help bridge the gap…

Analytics Strategy

Thoughts on IBM's acquisition of Coremetrics …

By now you’ve heard that IBM announced today their intent to purchase Coremetrics. While I was caught off guard by the news — it was all over Twitter before I’d had a single drop of espresso — I cannot say I was particularly surprised. Coremetrics has not been particularly coy about their intent to do what is best for their customers and company, and IBM is on a tear for acquisitions and investment in analytics, with over $10 billion spent on 14 (now 16) acquisitions since 2005, including SPSS, Cognos, and now Coremetrics.

In fact, according to CMS Wire, IBM is now #2 behind privately held SAS for analytics market-share (14.5% versus SAS’s 33%). Couple this with the consulting services organization IBM established just over a year ago and the picture becomes incredibly clear: IBM has the potential to become a digital analytics juggernaut in the Enterprise.

You know what? I’m excited about this.

I’m excited for three reasons, none of which are particularly altruistic of me considering that I am a strategic business consultant who has spent years working to elevate the visibility of digital analytics within the Enterprise. From my point of view, my partners and I (and our peers) will only be more successful as traditional business pays more attention to the value of digitally collected data as an input to business intelligence.

But I digress.

I’m excited by IBM’s acquisition of Coremetrics for these three reasons:

  1. This acquisition validates my “Coming Revolution in Web Analytics” thesis. In 2009, with the support of SAS, we postulated that we were on the cusp of a grand revolution in web analytics, one that had profound implications on both the practice and the practitioner. In the same way SAS continues to refine their web and customer experience analytics offerings, IBM buying Coremetrics pushes us further along towards what I believe is an inevitable future where digital analytics is powered as much (or more) by statistical and predictive models as “pretty” spreadsheets and iPhone viewer apps.
  2. This acquisition validates my “Two Sets of Tools” thesis. Back in February of this year I wrote a post about the need for two sets of tools to do digital analytics professionally. I wrote this post in response to a dramatic increase in the number of companies telling us their business users were frustrated with the complexity of the most widely deployed “Enterprise” web analytics tools. While Coremetrics 2010 is certainly not perfectly suited for business users of all shapes and sizes, I do like what they’ve done with the recent release (view source on the main site!) More importantly, Coremetrics plus SPSS is essentially the bifurcated solution I describe as appropriate for deployment within large companies.
  3. This acquisition has tremendous potential to elevate the visibility of “digital analytics” among business leaders across the globe. One of our vendor clients commented today “don’t you think the need for web analytics has been validated already?” to which I could only respond “no.” Everywhere we go — conferences, clients, prospects, social events, you name it — we continue to see a half-hearted expression of commitment by senior leadership for web and digital analytics. Don’t get me wrong — they are 100% committed to connected efforts! Leadership LOVES mobile apps, social media, Flash and Flex and Silverlight and the iPad and QR codes and … well, you get the idea. But in my experience, when it comes to investing in the measurement of those efforts to validate their value to the business, well, let’s just say interest seems to wane. My hope is that with IBM backing digital analytics and pushing Coremetrics as part of their portfolio, this will change much more quickly than it would with, say, Adobe pushing the same agenda (sorry, sorry, sorry!)

My last point is clearly the most important. For web analytics to truly “grow up” and mature into the valuable contributor to the entire business we all know (or at least suspect) it can be, it has to become a greater priority for senior leadership. And despite the fact that the people I respect the most — guys like Jim Sterne, Gary Angel, and Josh Manion — continue to do the real evangelical work, pounding their fists and respectfully raising their voices in an attempt to get management to pay attention, I suspect that having John Squire and Joe Davis injected into the analytics sales process at IBM has far greater potential to change minds and focus within corner offices.

Sorry to put that on ya, Squire. Let me know if I can help.

Since I was slow to get this post up, two other well-written and thoughtful pieces about the acquisition have been published that I would encourage you to read:

  • Akin Arikan (Unica)’s “Farewell to Web Analytics” post, which I think is a little hopeful on Akin’s part but should be expected as IBM is a large Unica customer and so this deal creates some risk for Akin and his team;
  • Joe Stanhope (Forrester)’s take on the deal. Joe was lucky enough to have dinner with John Squire last night and has a lot of detail about IBM’s opportunity.

Again I am really excited for Coremetrics management and staff. Personally I think that IBM made a great decision, one that will create benefits for everyone truly committed to digital analytics.

As always I welcome your thoughts.

Adobe Analytics, General, Reporting

Site Wide Bounce Rate

In the past, I have written about Bounce Rates, Traffic Source Bounce Rates and Segment Bounce Rates. Hopefully this will be my last post related to Bounce Rates, but I recently found a “hack” to calculate and trend a Site Wide Bounce Rate in SiteCatalyst, so I thought I would share it. I define Site Wide Bounce Rate as the total number of Single Access Visits divided by the total number of website Visits. Unfortunately, for some reason, this metric is very difficult to wrestle down in Omniture SiteCatalyst because you cannot view Pathing metrics (i.e. Entries, Single Access) in Calculated Metrics unless you are within a Traffic (sProp) report that has Pathing enabled.
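(For illustration, with made-up numbers: a site with 8,000 Single Access Visits out of 32,000 total Visits would have a Site Wide Bounce Rate of 8,000 / 32,000 = 25%.)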

To date, the way I have reported on Site Wide Bounce Rate was by pulling Visits and Single Access data into Excel using the SiteCatalyst ExcelClient. Once there, I could divide the two and if I wanted to see it by day (or week or month), all I needed to do was to pull both metrics by day. It was straightforward, but I could not add this to my SiteCatalyst Dashboards.

The Hack
So let’s say that you want to report a daily/weekly/monthly trend of your Site Wide Bounce Rate and add it to one of your executive dashboards. Here are the steps:

  • First you need to create the required calculated metric. In this case you want to divide Total Single Access by Total Visits (or Total Entries, which is the same thing). I would recommend making this a Global Metric so all of your users have access to it going forward:

  • Once this metric is created, open your Pages report, click the Add Metrics link and add the new Site Wide Bounce Rate metric to your list of metrics. It will be under the Calculated Metrics area. Place this new Site Wide Bounce Rate metric so it is the first metric, then add your regular Bounce Rate metric, and finally add the Page Views metric and click the small triangle to sort by Page Views. When you are done, it should look like this:

  • When you click OK, you will be able to see a report that shows your most popular pages, the Bounce Rate for each page and the overall Site Wide Bounce Rate. This report is handy for seeing how each page is doing in relation to the Site Wide Bounce Rate.

  • However, our original objective was to see the trend of the Site Wide Bounce Rate and add it to a dashboard, so let’s get back on track. To do this, all you have to do is click the “Trended” link shown in the report above. As is always the case, trending will show you the left-most metric trended over your chosen date range (which is why it was important to put Site Wide Bounce Rate in the first metric slot!). After clicking it, you will see a report that looks like this:

So the resulting graph is your Site Wide Bounce Rate, and you can now add this to any SiteCatalyst Dashboard. However, as you recall, I mentioned this is a “hack,” so if you look closely you will see a bunch of pages in the data table for this report. What is strange is that the values for each row are exactly the same. This is the place where you can see how much of a hack this is. This data is pretty much useless, so I recommend just adding the graph to your dashboards and ignoring the data table. Perhaps in the future Omniture will let us add this type of Calculated Metric to the “My Calc Metrics” area so we don’t have to take such a convoluted path to add this trend graph to a dashboard!

Final Thoughts
So there you have it. A quick hack in case you ever need to calculate Site Wide Bounce Rate for your HiPPOs! Enjoy!

Analysis

From Data to Action — The Many Flavors of Latency

I was flipping through the slides from a workshop that Teradata put on at The Ohio State University several months ago, and one of the diagrams jumped out and resonated with me. As I did some digging, it turns out this diagram has been floating around since at least 2004, if not longer. It was created by Dr. Richard Hackathorn of Bolder Technology Inc. (BTI).

There are a slew of lousy recreations of the diagram (the original diagram wasn’t so hot, either). Rather than recreating it myself, I just snagged one of the cleaner ones, which came from a 4-year-old TDWI article:

The point of the diagram, as well as of most of the derivative works that reference it, is that the value of information has a direct relationship to the speed with which you can react to it. And, there are three distinct things that have to happen between the business event that triggers the information and action actually being taken.

I don’t know if there is any real math or science behind the shape of the curve. As diagrammed, this says that you’ve already lost most of your value by the time you get to the “decision latency” point in the process. I don’t know that that is necessarily true in most cases. The diagram supports the assertions by all of the various BI/data tool vendors that data needs to be available in near real-time (and, of course, that’s something that all of the vendors claim they are better at than their competition).

But, are data latency and analysis latency really the big value drivers for marketers? In some cases, the data latency is a structural issue — if you conduct a campaign where the people exposed to it are likely to take anywhere from 1 to 30 days to convert, you really do need to wait 30 days to see how the campaign played out. Analysis latency is real…but it really can be broken into two pieces: 1) the time to do the analysis and get it packaged for delivery, and 2) the time to schedule/coordinate the information delivery. And, then, certainly the decision latency is real.

In short, the “action time” components totally make sense, and it’s good to understand them. The shape of the curve, though, doesn’t necessarily stand up to scrutiny when looked at through a marketer’s lens.

Conferences/Community, General

Update from The Analysis Exchange …

I have been so busy with clients, presentations, the launch of Twitalyzer version 3.0, and trying to enjoy the onset of summer that I have been a very bad blogger. I have missed opportunities to follow up on Steve Jobs’ mixed messages about analytics in the iOS platform, to talk about some really amazing Web Analytics Wednesday events that have been happening, … heck, I’ve even missed the chance to weigh in on a really interesting (albeit one-sided) flame war between Quantivo and Google.

Oh were there only 38 or perhaps 42 hours in every day.

Still, as busy as I have been, I have been amazed at some of the success folks are having in the Analysis Exchange, so I wanted to drop a note and share some of what is going on:

  • First, and perhaps coolest, is this article in Internet Retailer about how the Public Broadcasting Service (PBS) is using Analysis Exchange to gain insights into their traffic data;
  • Second, we have announced that our first-year goals are to leverage the Analysis Exchange community to produce free analysis for 1,000 nonprofits and non-governmental organizations, to train 500 “student” web analysts, and to create opportunities for participation for 150 “mentor” analytics practitioners;
  • Thirdly, we have also announced that as an incentive to participate, we will be awarding a complimentary pass to this year’s X Change conference in Monterey, California to one (1) student and one (1) mentor who distinguish themselves as participants in the Analysis Exchange effort;
  • Finally, we are just pleased as punch to get so much great feedback from participants, both via email and on Twitter. Folks really seem to be enjoying themselves which is awesome!

To keep track of Analysis Exchange I have created a pretty elaborate dashboard. I’ll spare you all the details, but in an effort to be transparent in this work, here is the top-line summary that I watch change and improve every day:

“Of our 529 members, 46% have completed a profile. Based on the number of active and staffed projects, our member participation rate is currently 7%. Of our 54 organizational members, 52% have created projects and 13% have completed projects. Of our 21 active projects, 62% are fully staffed and 100% of those have set a starting date for the work.”

We have work to do, but it is great work and we hope you will join us and participate!

Adobe Analytics

How to Prove Your Testing Results

(Estimated Time to Read this Post = 4 Minutes)

If you are in the Web Analytics space, besides tracking what people do on your website, hopefully you are actively doing testing and content targeting to try and improve your conversion. If you are an Omniture customer, you might be using their Test&Target product or you may be using Google’s Website Optimizer. If you are just getting into the testing area, you may simply be using an eVar to see how your tests are performing. Regardless of what tool you are using, there is a common question that arises in the testing/targeting area. Here is the scenario:

  1. You come up with a great hypothesis you want to test
  2. You run a test and see awesome results (say a 10% uplift in conversion)
  3. You broadcast it to your company only to hear the inevitable “well that was just a test…how do you know we’ll see the same result in real life?”

For a web analyst, this can be infuriating, and it can be compounded by the fact that you often cannot simply run with the winning recipe and show the results in your testing tool because:

  • You may be running multiple tests and things can get confusing
  • You may want to apply what you have learned from the test to many places on your website which may or may not have the required “MBoxes”

In reality, it may take time for you to take your awesome test and let it out “into the wild,” and when you do so, how can you prove that the uplift you saw in your test will actually occur over the next year on the website? The following will tell you exactly how you can do this and hopefully put the naysayers in their place!

How To Prove Your Test Results
So now that I have framed the situation, let’s learn how to do it. Our objective is to prove the long-term results of a test we did using our chosen testing/targeting tool. In this example, let’s imagine that your website has twenty forms on it and you have just done a test showing that if you reduce the number of fields on a form, you can see a 15% uplift in Form Completion Rates. This test was conducted using Test&Target for three weeks with a high level of statistical confidence (95%+). Now you want to go ahead and take five of the twenty forms and remove the same fields you did in the test for the next three months and see what happens. One way to do this would be to add lots of “MBoxes” and use Test&Target to deploy the winner in hopes of seeing the same lift results, but in this example, let’s assume that your conversion team has closed the books on this test, moved on to other tests, and has told you that you now need to work with the web team to reduce the fields on your five forms.

So what do you do? How will you know if these five forms will really see a 15% uplift over the next three months? All you need to do is the following:

  1. Create a new Testing eVar (not the T&T eVar)
  2. On each of the five forms you modify on your website, pass the name of the test that it was based on into this new eVar (see the sketch after this list). This may be the name of the winning T&T recipe or you can use any descriptive name you’d like. In this case, we’ll pass in the value “Remove Form Fields Test”
  3. Set the eVar to “Most Recent Value” and expire “Never” in the Admin Console
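As a rough illustration of step 2, the tagging might look something like the following. This is a minimal sketch, not Adobe’s documented approach: eVar10 is a stand-in for whichever Testing eVar you created, and the little s object here just stands in for the SiteCatalyst code already on the page.

```typescript
// Minimal sketch; "s" stands in for the SiteCatalyst object on the page
// and eVar10 is a placeholder for your new Testing eVar.
const s: { eVar10?: string } = {};

// On each of the five modified forms, credit the page to the test the
// change was based on before the page-view beacon fires.
s.eVar10 = "Remove Form Fields Test";
```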

That’s it. Now when you open this new Testing eVar report, you can see how these five new forms are doing with respect to Form Completion Rate (assuming you have the right Success Events set – in this case Form Views and Form Completes). When you look in this new eVar report, all forms that were not modified based upon a testing initiative will fall into the “None” row so you can easily compare those forms that are based upon testing with those that are not:

In the preceding example, we can see that the “Remove Form Fields Test” seems to have about a 17% uplift in Form Completion Rate after it was fully deployed, so we are doing even better than the 15% expected! What’s better is that if you repeat this process every time you make changes to things on your website based upon testing, you can see how each is doing:

And, if you look at them all together, you can show your boss at the end of the year how much uplift you have been responsible for overall! In this example, if we look at all of the tests we have implemented, we are seeing a cumulative uplift of 16.2% over forms that are not based upon any testing. This is a great way to show the value of your conversion efforts and justify more headcount, get promoted, get more budget, etc… In fact, you can show your boss that if all of the “Form Views” on your website were, in this case, seeing optimized forms, you could produce 5,800 Form Completes instead of the 5,000 you are currently getting at the lower Form Completion Rate.
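The arithmetic behind that projection is simple enough to sanity-check yourself, using only the figures quoted above:

```typescript
// 5,000 Form Completes today; if the 16.2% cumulative uplift applied to
// every form view, completes would rise to roughly the 5,800 cited above.
const currentCompletes = 5000;
const cumulativeUplift = 0.162;
console.log(Math.round(currentCompletes * (1 + cumulativeUplift))); // 5810
```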

The only downside of this solution is that it might actually show you that something you expected to produce an uplift in reality didn’t. For example, in the preceding screen shot, the “Form Headline Bold” change doesn’t seem to be pulling its weight (losing against the control) and may need to be revisited. However, even though this is disappointing, it is great information to have, since it might prompt you to do some further testing in Test&Target and abandon the losers.

Finally, if you want to get a little more advanced, you could also apply SAINT Classifications to this new Testing eVar and group your tests into types (e.g., “Field-Related Tests” or “Color-Related Tests”) so you can calculate the uplift of each type and see which ones you may want to focus on going forward.
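To make the uplift-by-type idea concrete, here is a hedged sketch of the calculation with entirely invented numbers; the “None” row plays the role of the unoptimized control, as described above:

```typescript
// Uplift by test type = (type's completion rate / control rate) - 1.
// All figures below are made up for illustration.
interface TypeStats { views: number; completes: number; }

const control: TypeStats = { views: 100000, completes: 5000 }; // the "None" row
const byType: Record<string, TypeStats> = {
  "Field-Related Tests": { views: 20000, completes: 1160 },
  "Color-Related Tests": { views: 15000, completes: 780 },
};

const rate = (t: TypeStats) => t.completes / t.views;
for (const [name, stats] of Object.entries(byType)) {
  const uplift = (rate(stats) / rate(control) - 1) * 100;
  console.log(`${name}: ${uplift.toFixed(1)}% uplift`);
}
// Field-Related Tests: 16.0% uplift
// Color-Related Tests: 4.0% uplift
```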

Final Thoughts
So there you have it. As a rule of thumb, I would build into your conversion testing process a step for passing the name of the test a change was based upon into a Testing eVar, so that you can look at how your tests ultimately perform. While this will add one small step to your overall process, I think that in the long run you will be happy that you have this variable to show how your team is doing…

Analytics Strategy, Reporting, Social Media

Monish Datta Learns All about Facebook Measurement

Columbus Web Analytics Wednesday was last week — sponsored by Omniture, an Adobe company, and the topic wound up being “Facebook Measurement” (deck at the end of this post).

For some reason, Monish Datta cropped up — prominently — in half of the pictures I took while floating around the room. In my never-ending quest to dominate SEO for searches for Monish, this was well-timed, as I’m falling in the rankings on that front. You’d think I’d be able to get some sort of cross-link from http://www.monishdatta.com/, but maybe that’s not to be.

Columbus Web Analytics Wednesday -- May 2010

We had another great turnout at the event, AND we had a first for a Columbus WAW: a door prize. Omniture provided a Flip video camera and a copy of Adobe Premiere Elements 8 to one lucky winner. WAW co-organizer Dave Culbertson presented the prize to the lucky winner, Matt King of Quest Software:

Columbus Web Analytics Wednesday -- May 2010

Due to an unavoidable last-minute schedule change, I wound up pinch-hitting as the speaker and talked about Facebook measurement. It’s something I’ve spent a good chunk of time exploring and thinking about over the past six months, and it was a topic I was slated to speak on the following night in Toronto at an Omniture user group, so it wound up being a nice dry run in front of a live but friendly crowd.

I made some subsequent updates to the deck (improvements!), but below is substantially the material I presented:

In June, Columbus Web Analytics Wednesday is actually going to happen in Cincinnati — we’re planning a road trip down and back for the event. We’re hoping for a good showing!

Adobe Analytics

Data Extracts

(Estimated Time to Read this Post = 2.5 Minutes)

From time to time, I hear people talking about Data Extracts in SiteCatalyst (especially on Twitter). In this post, I thought I’d review them in case you are unsure of what they are and how they are used.

What Are Data Extracts?
The good news is that if you know how to use the SiteCatalyst ExcelClient, then you are essentially an expert in Data Extracts; you just may not know it! Behind the scenes, Data Extracts are really just a different way to access the engine that powers the ExcelClient data blocks with which you are already familiar. The main difference between the ExcelClient and Data Extracts is that Extracts are normally used to have a report e-mailed on a regular basis without having to go into Excel (especially good for Mac users!). Any report for which you can create a data block in the ExcelClient can also be accessed using the Data Extract icon in the SiteCatalyst toolbar:

If you cannot see the icon shown above in a SiteCatalyst report, that means you cannot create a Data Extract for that report. Once you click the icon, you will see a new screen that looks like this:

If you have used the ExcelClient, this should look familiar, and you use this screen to choose how much data and which metrics you’d like to see. Whatever settings you choose on this screen are what you will be stuck with (except for the date range, which I believe is floating). Finally, in the example below, notice that Data Extracts (and the ExcelClient for that matter!) have access to the same Classifications, Correlations and Subrelations that are available when using the normal SiteCatalyst user interface (click the green magnifying glass icon). In this example, I am showing a correlation between Day of Week and Hour of Day:

Once you have configured your Data Extract, the last screen will ask you if you want to have it sent via e-mail, added as a bookmark, or both. In a minute I will share with you why it is advantageous to store Data Extracts as bookmarks, but I recommend keeping any Data Extracts you create in an “Extracts” folder in case you ever need them again.

That’s it. You’re done. If you have e-mailed yourself the report, it will arrive shortly thereafter. However, I have to explain something that often perplexes folks. If you chose to add a bookmark, inevitably at some point you will open that bookmark from your bookmark toolbar and be faced with a screen that looks like this:

If you are like me, the first time you see this you might be a bit confused. I expected to see the actual report, but that will never happen. This screen is only to be used to modify your Data Extract and to open it so you can re-send it to yourself or others. At first, I decided that this devalued my decision to store my Data Extract as a bookmark, but don’t despair, because SiteCatalyst makes up for this by allowing you to do something really cool with bookmarked Data Extracts. If you open the ExcelClient and choose to insert a new Data Block, you can create a new one or you can choose from any Data Extracts you have created (or shared ones) as shown here:

Notice that the new Data Extract we just created appears here. All we have to do is click the “Insert” link and it will add the data block to our Excel spreadsheet and run the query:

This saves you from having to re-create it in the ExcelClient, and you can still edit it to suit your needs after it has been inserted. The only bummer is that you cannot tie the Data Extract data block to cells in Excel, so if you are ever looking for dynamic data blocks, start them in Excel, not as Data Extracts.

So that is pretty much all you need to know about Data Extracts. One final note – it is my understanding that Data Extracts do not work with the new Omniture ReportBuilder, but perhaps something similar will be rolled out in the future.

Adobe Analytics, Technical/Implementation

CRM Integration #3 – Passing CRM Meta-Data to Web Analytics

(Estimated Time to Read this Post = 2.5 Minutes)

In my last few posts I have been delving into Web Analytics & CRM (Customer Relationship Management) integration. In my first post I described how you can pass Web Analytics Data to your CRM system to help your sales people. In my last post, I described how you could pass CRM data like Leads, Opportunities and Revenue into your Web Analytics tool. In this post, I will round out the trilogy by describing how you can use CRM data as Web Analytics meta-data to enhance your Web Analytics reporting.

My Golf Handicap Story
Since most people don’t often like talking about meta-data, I will begin by sharing an easier-to-understand story which first taught me how interesting integrating CRM and Web Analytics data could be. Back when I managed the website for the CME, we had a situation in which we were trying to sell tickets for a major golf tournament. Unfortunately, the event was nearing and we still had lots of tickets to sell. At the time, I recalled that, for registered website users, we had golf handicap as one of our CRM fields in our Salesforce.com system (our customers were traders and spent a lot of time golfing!). I had recently worked on capturing each customer’s website ID in SiteCatalyst and also placing it in our CRM system. Suddenly, the light bulb went on in my head…why not upload golf handicap as a SAINT Classification of the website ID I had in an sProp and an eVar in SiteCatalyst? I created a SAINT Classification table that passed in the raw handicap and also grouped it into buckets like this:
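The table itself was just a SAINT file; a loose sketch of its shape follows, with keys, column names, and bucket labels all invented for illustration:

```typescript
// Illustrative SAINT classification file: each website ID gets a raw
// handicap and a handicap bucket. Tab-delimited, one row per key.
const header = ["Key", "Golf Handicap", "Handicap Group"];
const rows = [
  ["user-1001", "4", "0-9"],
  ["user-1002", "13", "10-19"],
  ["user-1003", "27", "20+"],
];
console.log([header, ...rows].map(r => r.join("\t")).join("\n"));
```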

Whereas previously I could see what pages each website ID had viewed on the website, I could now expand that to see the same data for this new golf handicap Classification of that variable. The result was a report like this, in which I could see the most popular pages for website visitors by golf handicap:

From there, all that was left to do was to target some ads on those pages and voilà, the tickets were soon gone!

For me, this was more experimental than anything else, but it was the catalyst (no pun intended!) which helped me see the power of integrating CRM and Web Analytics. Of course, back then there were no APIs to help pass data between systems, but nowadays this is much easier (e.g., Genesis integrations). With this in mind, let’s take a look at a few more examples of how you can take advantage of this concept.

Examples of Passing CRM Meta-Data to Web Analytics
Now that you get the general idea, I’ll walk you through some other examples of enriching your Web Analytics data by bringing in CRM meta-data. Let’s assume that you have done the steps outlined in my last post and have made a connection between your Web Analytics visitors and your known CRM prospects/customers. Using the primary key described in my last post, you can export whatever CRM fields you care about from your CRM system and import them into your SiteCatalyst implementation as SAINT Classifications. Here, you can see that I have decided to export Industry, # of Employees, Lifetime Value and a Lifetime Value grouping (to make my reports more readable) from my CRM system and import them using the following SAINT file:
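A hedged approximation of such a file is below; the Lead Gen ID keys and every value are invented for illustration:

```typescript
// Illustrative SAINT file keyed on the Lead Gen ID, carrying the four
// CRM fields mentioned above as classification columns.
const header = ["Key", "Industry", "# of Employees", "Lifetime Value", "Lifetime Value Group"];
const rows = [
  ["lead-98311", "Financial Services", "5000", "125000", "$100K+"],
  ["lead-98312", "Healthcare", "250", "8500", "$1K-$10K"],
];
console.log([header, ...rows].map(r => r.join("\t")).join("\n"));
```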

Now that I have done this, I can open my Lead Gen ID report in SiteCatalyst and look at any of these CRM fields as Classifications. Here is a view of some of my Success Events by Industry:

Here is the same data viewed by # of Employees:

Here is the same data viewed by Lifetime Value:

The same concept can apply if you are using other Web Analytics tools. Here is an example of viewing reports in Google Analytics by Job Title (in this case filtering for CIOs):

Final Thoughts
As you can see, once you have made the connection between your Web Analytics and CRM system, there are lots of creative things you can do with respect to augmenting your traditional web analyses. I know a lot of people also do this in tools like Quantivo or Omniture Insight, but I hope this was helpful to see some of the ways to do this if you only have access to SiteCatalyst.

Analytics Strategy

The Impending Acquisition of Adobe

This is purely speculation. I have no inside knowledge into the possibility I present here other than hypothetical conversations with peers. Sean Power brought up the topic over dinner recently, which caused me to start thinking seriously about the realities of Microsoft buying Adobe. He blogged about it way back when Adobe acquired Omniture. At that time, I was at Forrester, and when we got wind of the deal, we had an all-hands meeting to make sense of the awkward acquisition. Upon arriving at consensus, I quickly penned a missive about why the acquisition of the leading web analytics and optimization firm made sense for a creative software firm like Adobe. Yet, like most others, we had to squint at the deal to see any logic in it at all. Now it’s starting to become clear why Adobe shelled out $1.8B to add some attraction to its offering for a much larger suitor.

Adobe controls big chunks of the digital customer experience. Specifically, they play a major role in content creation through the CS5 suite of products. While web developers aren’t necessarily building their global digital offerings in Dreamweaver, surely they are using elements of Creative Suite to do just that. Further, any author who wants to control a document’s integrity will lock it down by saving it in PDF format. And now, through the acquisition of Omniture, they gained the ability to measure and optimize consumer utilization of those assets as well as the web sites and marketing efforts of leading brands across the globe. We’re just starting to see the fruits of this curious marriage between the two firms in the announcements of tracking capabilities within the CS5 release. Yet, these tracking methods are not meant for the traditional users of Omniture’s set of highly robust analysis capabilities; they are designed for content creators and developers to gain insights about the digital assets they’re producing. I like to think of this as tricking people into using web analytics by not actually telling them that they’re using data to make day-to-day business decisions. Brilliant, actually.

This introduction of tracking capabilities within CS5 falls precisely in line with what my partner Eric Peterson describes as the bifurcation of the web analytics marketplace. Analytics at the low end offer information that is helpful (dare I say critical) in making decisions about business activity. At the top end are trained web analysts who crunch the data to tease out the insights and offer recommendations based on a holistic representation of data from numerous disparate sources. With Omniture Insight providing the analysis horsepower at the top of this scenario and CS5 empowering the bottom, all of a sudden Adobe becomes an invaluable resource for enterprises that deliver services in online, offline, B2C, B2B or B2B2C environments. Now, let’s introduce Microsoft into this mix.

MSFT has labored [successfully] to own the consumer desktop with its operating system, indispensable productivity tools (MS Office), and, not-so-universally, rich media with Silverlight. Not to mention that they’re still working diligently to capture consumers with Bing, MSN and a slew of other services pointed at end users. All this traction across MSFT properties gives them a lofty vantage point from which to monitor consumer behavior across digital channels. Adding a stack of ubiquitous software for content creation and some world-class measurement capabilities may be quite attractive to the Redmond rotund. They’d immediately challenge Apple on a new level of customer intelligence and empower their enterprise customers with a whopping new set of capabilities. Despite the new consumer view to be gained from this possible acquisition, the real benefits are a nicely wrapped enterprise solution complete with: MS servers, a .NET framework, SharePoint, Dynamics CRM, Dynamics ERP, and a kitchen sink of bells, whistles and anything else you might want. The opportunity to deliver, measure and manage the customer experience at a really deep and integrated level seems like an appealing bet to me.

I won’t drone on about how or when this impending acquisition will occur, mostly ‘cause I have no idea. But I will hedge by saying that other suitors may actually line up before MSFT comes calling. Google, for instance, could parlay this into a nice entrance to the packaged software market and gain the ability to create, deliver and measure the largest advertising network on the globe. For that matter, Apple may be strategically sparring with Adobe’s crystal palace in a deliberate attempt to soften their value. Swooping in for an acquisition after some fierce battling on the street wouldn’t be completely unheard of…now would it?

Adobe Analytics, Analytics Strategy

Thoughts on the WAA Certification Exam

I’ve been pondering this blog post for a couple of weeks now since I took the WAA Certification Exam along with eight others in the inaugural proctored exam at eMetrics in San Jose.

To be totally honest, I probably didn’t need to take this test. For starters, I’m not a traditional web analyst who’s down in the trenches doing the hard work of analysis, reporting and translating the massive amounts of data we’re all so fond of collecting into insights and recommendations. While these web analysts have something to prove to their organizations about the value of their jobs and the expertise they possess – frankly, I have nothing to prove.

Additionally, I work for a well-established consultancy with a great brand reputation, and I’m not planning on looking for a new job anytime soon. Our clients are most likely going to work with us regardless of our certification status. Yet, I wanted to take this test because I do advise my clients on what they should be doing with web analytics from a strategic perspective. I speak frequently about analytics and how to interpret and deliver data in the most effective ways. So my vantage point cannot be devoid of the practical knowledge that dictates what’s possible in a realistic world.

Thus, I took the test in part to illustrate to myself that I not only talk the talk, but am willing to put my practical skills to the test. And yes…I passed, so you’ll be seeing the CWA (Certified Web Analyst) designation show up on my credentials.

Further, many of you voted recently to elect me to the Web Analytics Association Board of Directors, and I thank you for that. I took the WAA Certification Exam so that I could lead by example and educate others about what I genuinely believe to be a valuable test of digital measurement knowledge. I encouraged all of my fellow board members to take the test as well, and several have done so and more are sure to follow.

But because I went through the experience of taking this exam, I am uniquely qualified to share experiences that stretch way beyond the speculation of any detractors who criticize it. Thus, I give you the Good…the Bad…and the Ugly of the WAA Certification Exam.

The Good…

This exam is a true test of analytical knowledge that requires both business acumen and a deep understanding of applied web analytics. Like all things analytics – it’s not easy. In fact, it’s downright hard. The guidance offered by the WAA regarding a recommended 3 years of practical experience is sound advice. And even then, this exam will require web analysts to dig deep into their skill set to come up with not just acceptable answers, but the best answer. Out of the initial nine exam-takers, seven passed the test, which is good. Yet, the minimum passing grade for the exam is 60%, and the mean score for our inaugural group was 61.7% (maybe I should have saved that for the ugly). The high score among all test takers thus far was 70%. While this may open questions about whether or not this test is too hard, to me it shows that there is plenty of runway for analysts to showcase their superstar skills with high scores. And if it were easy, such that everyone could pass, then what validation of knowledge would that really be?

As my fellow WAA Board member Vicky Brock Tweeted: “As an employer I’d hire folk who ace this, as it tests analytical skills not recall”. Vicky also shared thoughts on her experience. Much like Vicky, I believe this exam is a good test of knowledge that requires prospective certified analysts to know their stuff, which in turn demonstrates that the credential holds distinction.

The format is a familiar multiple-choice answer system with four possible answers. Like most diligent test takers, I relied on the process of eliminating the ones that I knew were incorrect and then sorting through the remaining choices. This typically left me with two answer choices that could work, but knowing that one was better than the other, I was largely going on instinct to make the right choice. There is also a word-problem section that offered business scenarios and data sets, leaving you to solve problems within the context of a specific business. These questions were the real gems of the exam and guaranteed to make your head spin. I love these types of questions, but perhaps I’m a glutton for punishment.

The big elephant in the room is the price. Without question, taking this exam is a financial commitment. I shelled out the bucks from my own pocket to do it because I believe in the value of certification. We as an industry are gaining momentum so quickly that analytics and data-driven cultures are all the rage today. The use of data is permeating organizations from the tactical to the strategic and ending up on the boardroom table, and in some cases, in financial analyst reports that end up on Wall Street. Yet, despite these significant gains, we have had no designation to acknowledge that our Web Analysts are qualified for the job. This certification exam is that designation – one that will identify the truly proficient practitioners. In my opinion, this exam is worth every penny, and I strongly believe that as more and more professionals acquire the CWA accreditation, it will become the gold standard by which job candidates, consultants and trusted advisors are selected. When we reach this critical mass, those who aren’t Certified Web Analysts will be questioned with just cause…So why aren’t you certified?

The Bad…

I’ll be the first to admit that there are still some kinks in the system, so it’s not perfect. Yet, nothing is, so I’m willing to offer some leniency. For me, just downloading the application to sign up for the test was a chore. I offered feedback, so hopefully a fix is in the works now [there is], but when I registered, the editable PDF application only worked if you had Acrobat on your machine, which I don’t. So after filling out the entire form, I couldn’t save it. I ended up printing out the pages and then scanning them back in to submit my application. Now, that’s more than I’d expect from your average exam taker, but I was on a mission. Also, be prepared to dig out your resume, because the application requires listing all of your previous employers, their addresses, manager names and phone numbers. I was toggling between the application and my LinkedIn profile just to complete the darn thing.

**UPDATE** There is now a web based form that serves as the application, so no more downloading the PDF.

Next, it was very challenging for me to prepare for this exam. I did utilize the documents offered by the WAA, including the Knowledge Required for Certification and the practice questions. The practice questions were actually great. They helped me to decide whether I was going to take the test and did closely resemble the actual questions on the test. I just wished there were more of them. The Knowledge Required document also contained a great deal of useful information, but after poring through the 37 pages of material, I was still left feeling unprepared. The document mirrors the UBC course material, so it is thorough in describing what will be offered in terms of knowledge, but the meat of the work isn’t included in this document. It was all menu and no entree. So essentially, the document tells you what you will be tested on, but doesn’t teach any of the concepts. While they clearly state that “Taking these four courses is not required to sit for the certification test,” those who do take them will be much better prepared than I was. I know that these courses are incredibly valuable and students rave about their success, but most professionals like myself don’t have the time to endure them – despite their value.

The Ugly…

So, I already ranted about the preparation materials and the costs above, but the Ugly for me was determining whether I would actually re-take this test if I failed. The feedback that I received from the WAA did contain results for the four sections that were included in the test (Analytical Business Culture, Case Studies, Marketing Campaigns, and Site Optimization) and my scores for each section. Yet, this was the extent of the feedback on my performance. It was up to me to decipher which questions may have been within each of the four categories and where I needed to focus my efforts to better prepare for a re-test. To the credit of the Association, most standardized tests are scored this way and offer similar amounts of feedback – but most tests of this magnitude also have test preparation courses that teach the skills of taking the test and offer extensive feedback on the skills necessary to score well on the exam. Thus, it was ugly for me because I can sincerely admit that I wouldn’t have paid to retake this test, since I do not know how I would have prepared for a second exam.

The bright spot in this potentially ugly situation is that the WAA Board is committed to endorsing organizations that choose to develop WAA Certification Exam training programs. Since this test is still very new, these programs have yet to emerge, but the opportunity is out there. I want the WAA Certification Program to succeed for the WAA and for our industry. If test-takers are better prepared to take the test through the help of a training program, then that’s a win-win. This type of prep course would have offered me the confidence I needed to take the test again had I failed…and it could do the same for those of you taking the exam for the first time. Stay tuned for more news on this front as it develops.

The Summary…

This post is already running long and I’ve said a lot. The bottom line for me is that this exam is a strong indication of the digital measurement skills that an individual brings to his or her organization. Passing the WAA Certification Exam means that an individual is an expert in the field of web analytics. It’s an accomplishment that anyone in our industry should be proud of, and one that should receive accolades on top of accolades.

But that’s enough of my rant…What do you think?

I look forward to starting a long-term dialog on this topic, so please comment, email me or otherwise shout your opinions from the rooftops.

Analytics Strategy, Conferences/Community, General

The Analysis Exchange is OPEN TO EVERYONE

Back in December of last year, Aurelie, John, and I announced an idea we believe has the potential to change the web analytics industry forever: The Analysis Exchange. Briefly, the Analysis Exchange is a totally new approach to web analytics training — one that depends less on what you read and more on what you do.

The Analysis Exchange lets experienced web analysts demonstrate their passion for their work and gives beginners valuable “hands on” experience with data and real business problems. What’s more, the output from Analysis Exchange projects directly benefits some of the most amazing organizations around the globe — nonprofits and non-governmental groups who work not for money but for the betterment of humanity, our planet, and all creatures great and small.

You can read more about the origination of this effort in our blog posts and a very nice write-up by our friend Jim Sterne, founder of the Web Analytics Association:

Since December we have been hard at work building out a web site and perfecting the business process that would be required to accomplish our core goals. What are those goals, you ask? Very, very simple … between now and June 1, 2011 we want to:

  • Provide FREE analysis to 1,000 nonprofit organizations
  • Provide FREE training and certification to 500 web analytics students
  • Provide FREE certification and support to 150 web analytics experts

1,000/500/150 are the numbers that we will be living by, but we know we’re not living there alone. We know this because the initial response to The Analysis Exchange has been tremendous! In addition to the great stuff we learned in our first testing round, we have had excellent feedback from nonprofits, mentors, and students alike.

I love what Amy Sample, Director of Web Analytics at PBS Interactive had to say:

“What I love about the Analysis Exchange is the learning is reciprocal.  Not only is the student learning about analytics and giving back to the organization, but the organization is learning from the student as well.  Many of our local PBS stations have little experience with Web Analytics.  Through the Exchange, the stations are able to learn how to tackle analytics problems along with the student and how to make a lasting impact to their own organization.”

Cindy Olnick from the Los Angeles Conservancy had similar enthusiasm for her project:

“Joy’s a terrific mentor from what I can tell, and she and Danielle are great at translating all the numbers into information I can use. They’ve given me a report and will set up some new parameters in Google Analytics targeted to my goal of increasing membership.”

Todd Bullivant, one of our students said:

“It was a great way to end the week! Thanks again to everyone for the opportunity. I learned a lot about analytics that I can use in my own organization as well as future projects. I hope to work on many more of these in the future! I also just heard that my company is planning to spotlight me in the next internal newsletter due to this project, so increased visibility!”

Susie Hall, Director of Outreach and Enrollment at Acton School of Business said:

“The Analysis Exchange project was very enlightening for me as well.  We found out some valuable information, and I’m excited to use this new-found knowledge to help shape our outreach efforts. This project could not have come at a better time, we are in the middle of changing pretty much all of our processes, so moving forward armed with such powerful information is invaluable. Andrew and Candace were lovely to work with, and I am very happy with the whole experience.”

One of our super-motivated students, Andrew Hall said:

“During the course of the project, I worked with almost every functionality of Analytics other than custom variables, got to understand how Adwords campaigns work, and learned the benefits of taking data from Analytics into an analysis software like Tableau to gain and communicate insights.  Most importantly, I confirmed that I really enjoy doing this!  I am waiting to hear back from a couple of jobs, but in the mean time I’ve decided the knowledge I now possess would be beneficial a lot of organizations.  I feel confident enough to start approaching businesses and nonprofits in my community to get consulting work.”

By now I’m sure you get the picture, and our mentors have been having a great time as well. So much so that Joy Billings from Digitaria gave a positively glowing review of her work at the San Jose Emetrics, John Lovett’s student had her work go all the way to the CMO’s office at The Holocaust Museum, and Victor Acquah from Blue Analytics said:

“Just got off the presentation! Todd did such a great job with the analysis and presentation that it is hard to tell he hasn’t been in analytics for too long. Totally impressive output.”

We at Analytics Demystified have felt totally blessed to be part of the projects that have gone on thus far … but now is the time to take it to the next level: Starting with the publication of this blog post, The Analysis Exchange is open to all students, all mentors, and all qualifying organizations around the world.

If you haven’t already, please create an Analysis Exchange profile and join us in our effort to change web analytics forever. If you need more information first, we have lots and lots of content, including:

If you’re already in the Analysis Exchange and you want to help, please reach out to nonprofits you know and ask them to create a project so you can work with them. If you’re on Twitter, please use our short link (http://bit.ly/analysis-exchange) to help spread the news. If you’re a member of Web Analytics Wednesday, please consider mentioning the effort at your next meeting.

Finally, I want to offer a special “thank you” to Aurelie, John, Jim, Holly Ross, Beth Kanter, Sean Power, and each and every one of the mentors, students, and organizations who have helped us over the last five months. You are all amazing for contributing your time and energy to help make this effort run as smoothly as possible. Thank you!

Analysis, Analytics Strategy, Reporting

Answering the "Why doesn't the data match?" Question

Anyone who has been working with web analytics for more than a week or two has inevitably asked or been asked to explain why two different numbers that “should” match don’t:

  • Banner ad clickthroughs reported by the ad server don’t match the clickthroughs reported by the web analytics tool
  • Visits reported by one web analytics tool don’t match visits reported by another web analytics tool running in parallel
  • Site registrations reported by the web analytics tool don’t match the number of registrations reported in the CRM system
  • Ecommerce revenue reported by the web analytics tool doesn’t match that reported from the enterprise data warehouse

In most cases, the “don’t match” means +/- 10% (or maybe +/- 15%). And, seasoned analysts have been rattling off all the reasons the numbers don’t match for years. Industry guru Brian Clifton has written (and kept current) the most comprehensive of white papers on the subject. It’s 19 pages of goodness, and Clifton notes:

If you are an agency with clients asking the same accuracy questions, or an in-house marketer/analyst struggling to reconcile data sources, this accuracy whitepaper will help you move forward. Feel free to distribute to clients/stakeholders.

It can be frustrating and depressing, though, to watch the eyes of the person who insisted on the “match” explanation glaze over as we try to explain the various nuances of capturing data from the internet. After a lengthy and patient explanation, there is a pause, and then the question: “Uh-huh. But…which number is right?” I mentally flip a coin and then respond either, “Both of them” or “Neither of them” depending on how the coin lands in my head. Clifton’s paper should be required reading for any web analyst. It’s important to understand where the data is coming from and why it’s not simple and perfect. But, that level of detail is more than most marketers can (or want to) digest.

After trying to educate clients on the under-the-hood details…I almost always wind up at a point where I’m asked the “Well, which number is right?” question. That leads to a two-point explanation:

  • The differences aren’t really material
  • What matters in many, many cases is more the trend and change over time of the measure — not its perfect accuracy (as Webtrends has said for years: “The trends are more important than the actual numbers. Heck, we put ‘trend’ in our company name!”)

This discussion, too, can have frustrating results.

I’ve been trying a different tactic entirely of late in these situations. I can’t say it’s been a slam dunk, but it’s had some results. The approach is to list out a handful of familiar situations where we get discrepant measures and are not bothered by it at all, and then use those to map back to the data that is being focused on.

Here’s my list of examples:

  • Compare your watch to your computer clock to the time on your cell phone. Do they match? The pertinent quote, most often attributed to Mark Twain, is as follows: “A man with one watch knows what time it is; a man with two watches is never quite sure.” Even going to the NIST Official U.S. Time Clock will yield results that differ from your satellite-synched cell phone. These are two (or more) measures of the time that seldom match up, and yet we’re comfortable with a 5-10 minute discrepancy.

Photo courtesy of alexkerhead

  • Your bathroom scale. You know you can weigh yourself as you get out of the shower first thing in the morning, but, by the time you get dressed, get to the doctor’s office, and step on the scale there, you will have “gained” 5-10 lbs. Your clothes are now on, you’ve eaten breakfast, and it’s a totally different scale, so you accept the difference. You don’t worry about how much of the difference comes from each of the contributing factors you identify. As long as you haven’t had a 20-lb swing since your last visit to the doctor, it’s immaterial.

Photo courtesy of dno1967

  • For accountants…”revenue.” If the person with whom you’re speaking has a finance or accounting background, there’s a good chance they’ve been asked to provide a revenue number at some point and had to drill down into the details: bookings or billings? GAAP-recognized revenue? And, within revenue, there are scads of nuances that can alter the numbers slightly…but almost always in non-material ways.

Photo courtesy of alancleaver_2000

  • Voting (recounts). In close elections, it’s common to have a recount. If the recount re-affirms the winner from the original count, then the result is accepted and everyone moves on. There isn’t a grand hullabaloo about why the recount numbers differed slightly from the original count. In really close races, where several recounts occur, the numbers always come back differently. And no one knows which one is “right.” But, once there is a convergence as to the results, that is what gets accepted.

Photo courtesy of joebeone

That’s my list. I’m always looking for other analogies, though. Do you have examples that you use to explain why there’s more value in picking a number and interpreting it than in obsessing over reconciling disparate numbers?

Adobe Analytics

CRM Integration #2 – Passing CRM Data to Web Analytics

(Estimated Time to Read this Post = 5 Minutes)

In my last post, I explained a bit about CRM and how you could improve CRM by passing Web Analytics data into your CRM system. In this post, I am going to cover the reverse angle – passing CRM data into Web Analytics. Since most of you reading this are web analysts, I think you will find this post more relevant, but I think it is important to understand both sides.

Why Pass CRM Data into Web Analytics?
As I mentioned in my last post, we web analysts get lots of great information about website visitors, but for many companies (especially B2B), the richest data resides in the CRM (Customer Relationship Management) system. If you want to be relevant in your organization, it is always best to be as close as possible to the $$$ and that often means playing nicely with CRM systems. Don’t get me wrong, showing your CMO that you can lift form completion rates by 200% through optimization is awesome, but if you can show him the revenue impact of it right there in your Web Analytics tool, you will be a rock star! Additionally, I will show that if you don’t have actual revenue-generating events on your site (eCommerce and Media sites have this easy!), then not doing this could actually result in Web Analytics data causing incorrect business decisions…

Passing Post-Website Data from CRM to Web Analytics
OK. So there are many different ways to merge CRM and Web Analytics data, including passing data from both into a massive Marketing data warehouse (or Omniture Insight), but just for the purposes of this post, I am going to assume that you are a SiteCatalyst person and want to get something done relatively quickly. In this scenario, we’ll assume the following:

  • You want to see which of your website visitors completing lead forms on the site evolve into Leads, Opportunities and Revenue
  • Your CMO has charged you with capturing all of the different marketing channels and asked for your opinion on where the company should invest to get the most Revenue
  • You are tracking the various sources of traffic you receive and using SAINT Classifications to roll each up into a high-level marketing channel (SEO, SEM, E-mail, etc…)

Given all of this, you might have a SiteCatalyst report that looks like this:

As a web analyst, at this point, it looks like we might want to invest more in our E-mail program since that seems to be converting the best. Without CRM integration, that would probably be as far as we could go. But let’s now dig a little deeper. As I mentioned in the last post, when website visitors complete a form, we have a brief moment in time when we can connect our website data with our CRM data. Most CRM tools allow you to capture leads and set a unique ID for each form completion. At the same time, Omniture SiteCatalyst has a really cool feature (that many don’t use enough!) called Transaction ID. I highly recommend you read my full post on Transaction ID, but at a high level, it allows you to set an ID to a special SiteCatalyst variable and then, days or weeks later, upload [normally offline] metrics into SiteCatalyst. The magic of Transaction ID is that when you upload these metrics later, they are tied to the eVar values (sorry – no sProps or Participation) that were present at the time the Transaction ID was set. That means that if a website visitor had a City eVar value of Chicago, a Traffic Source eVar value of Paid Search and a Visit Number eVar value of 3, then any offline metrics you import will also be tied to Chicago, Paid Search and Visit Number 3 in the respective eVar reports. This means that if you set the CRM ID associated with a website form completion, you now have a primary key (think Rosetta Stone!) that can connect your Web Analytics data to your CRM data!
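To make that moment of connection concrete, here is a minimal page-code sketch. It is illustrative only: the function name and eVar number are placeholders of my own, and the little s object stands in for the SiteCatalyst code already on the page.

```typescript
// Hypothetical sketch: tie the CRM lead ID to SiteCatalyst at the moment
// the lead form is submitted. transactionID is the special variable
// described above; eVar20 is an invented example.
const s: { transactionID?: string; eVar20?: string } = {};

function onLeadFormSubmit(crmLeadId: string): void {
  // Metrics uploaded weeks later against this ID will be credited to
  // whatever eVar values are in place right now.
  s.transactionID = crmLeadId;
  s.eVar20 = crmLeadId; // optional: also expose the ID in its own report
}
```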

So what does this mean to you? Following our preceding example, let’s assume that you have made this connection and later imported all of the new leads your CRM system has seen, along with the status (e.g., Qualified) of each, into SiteCatalyst (these new metrics would be Incrementor Events). This gives you a new metric named “Qualified Leads” that you can now see in SiteCatalyst reports, and since you used Transaction ID, these imported CRM metrics are correctly attributed to all eVar reports in your implementation. The result is that you can now open a report similar to the one we saw above, but now it has “Qualified Leads” instead of Form Completes and a new Calculated Metric that divides these Qualified Leads by Visits:

The icons above the report show where each data point comes from, and as you can see, the last column is truly magical in that it is combining data from two disparate systems (cool, huh?)! Once we have this, we can see that even though E-mail looked to be the best channel a few minutes ago, it now appears that SEM is where we want to spend our money. It turns out that E-mail generates form completions at the highest rate, but perhaps those form completions are all junk!

However, I like to go as far downstream as possible, and nothing is better than cold, hard cash! Applying the same principles, we can import Qualified Opportunities and Potential Pipeline, but the CRM metric that trumps them all is Revenue. By uploading Revenue via Transaction ID, we can see how much $$ we got from each Lead Form completed on the website and tie it to any eVar value we have – in this case, marketing channel/traffic source. The following report shows the result of this:

Again, we see that some data is coming from SiteCatalyst and some is coming from our CRM system. Our new Revenue/Visit Calculated Metric can be used to see that, in the end, it is really SEO that provides the most Revenue/Visit, and maybe we should consider additional investment there. Please keep in mind that these examples are simply meant to illustrate the concept and show the value in adding CRM metrics to your Web Analytics tool. Finally, don’t forget that Transaction ID data is available in Omniture Discover so you can slice and dice this data even further there!
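Mechanically, the uploads behind reports like these are just delimited files keyed by Transaction ID. The sketch below is a loose approximation under my own assumptions; the exact column headers come from the Data Sources template you download, and the IDs, dates, and amounts here are invented.

```typescript
// Hypothetical Transaction ID data source upload: one row per lead,
// keyed by the same ID that was set in the page code at form submit.
const rows = [
  ["Date", "transactionID", "Qualified Leads", "Revenue"],
  ["05/20/2010", "CRM-000123", "1", "15000"],
  ["05/21/2010", "CRM-000124", "1", "0"],
];
console.log(rows.map(r => r.join("\t")).join("\n"));
```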

Targeting Based Upon CRM Data
Another really cool integration between CRM and Web Analytics is in the area of Test&Target. For those not familiar with Test&Target, it is an Omniture tool that lets you test and dynamically target content to website visitors based upon what you know about them. It is commonly used to optimize your website success metrics. However, this can be extended by importing CRM data so that your targeting is based upon both online and offline data.

Let’s walk through an example. Imagine that a website visitor named Bill has been to your website a few times, looked at a few of your products and completed a lead form. Next, Bill spoke to your sales representative and is at “Stage 3” of the sales process (the discovery phase). Over the next few weeks, meetings take place and Bill comes to the website occasionally (your sales team would know when, and exactly what he is doing, if you read my last post!). But now let’s say that Bill is in sales “Stage 9,” which is the final stage before the deal is won or lost. We know what products he wants, we know he is close to making a decision, we know how big his company is, etc… If we knew all of this, what would we want to show him the next time he arrives at our website? Here are a few things I would show to Bill on my home page when he (and only he) arrives on it:

  1. Case studies related to his industry
  2. ROI calculator for the product Bill is interested in
  3. Links to community content to show Bill that he would be well taken care of if he were to be a customer
  4. A time-sensitive offer (“Buy in the next 24 hours and get XX% off”) – You could even address him as “Bill” but that might freak him out!
  5. etc…

The point is that if you can get the rich customer data related to Bill and multiply this across all of your prospects, each one could see more personalized content that helps move them further down the sales funnel. You can even track how often they see these “recipes” and track the success of your intelligent targeting. If you are interested in this type of CRM-based targeting, I suggest that you contact @brianthawkins, who is a Test&Target Jedi-master…

Final Thoughts
Hopefully this sparks some ideas about ways in which you can enrich your Web Analytics data by adding CRM data to the mix. In the next post I will cover ways in which you can import CRM meta-data into your Web Analytics tool to augment your current web analyses.

Adobe Analytics, Analytics Strategy, Conferences/Community, General

Excited to Announce X Change 2010 Keynote!

Now that Emetrics West is behind us, and what an Emetrics it was this year, Analytics Demystified and Semphonic officially start to ramp up our efforts to get the best of the best of you to join us for three days in Monterey September 20, 21, and 22. While I am excited about the entire event, I am particularly excited about our keynote offering this year, titled “A Conversation with Management.”

Because the X Change draws so many expert practitioners, managers, and directors of web analytics, my general feeling has always been that we should be programming for “lifers” in the field, looking for opportunities to help participants expand their career horizons. Our “Conversation with Management” keynote is a conversation with three of the most successful web analytics professionals I personally know:

  • Shari Cleary, Vice President of Digital Research at MTV Networks
  • Joe Megibow, Vice President of Global Analytics at Expedia.com
  • Steve Bernstein, Vice President of Analytics at Myspace

I have personally known Shari, Joe, and Steve for years and have had the great honor of watching each progress up the management chain, taking an increasing amount of responsibility with each step. Now all three of our keynote participants represent web analytics at the highest levels within each of their organizations, an incredible feat when you consider the footprint MTV, Expedia, and Myspace have on the Internet.

During our keynote I will be leading the panel to explore common “lifer” challenges including staffing, vendor management, the balance between reporting and analysis, their relationship with senior-most management, and the importance of business process to each of their jobs. My goal will be to get each to share details regarding their own career path in hopes those insights will help X Change attendees accelerate their own goals.

You can learn more about the 2010 X Change on our micro-site for the conference:

  • The 2010 X Change conference schedule
  • Hotel information for the Monterey Plaza Resort and Spa
  • Details about Analytics Demystified’s Think Tank offerings
  • Our X Change Frequently Asked Questions document
  • Registration for the 2010 X Change in Monterey, CA September 20, 21, and 22

If you have questions about the conference please don’t hesitate to give any of the Analytics Demystified partners a call or email. Remember that the conference is limited to the first 100 people who register and registration has already started.

See you at the X Change!

Adobe Analytics, Technical/Implementation

CRM Integration #1 – Passing Web Analytics Data to CRM

(Estimated Time to Read this Post = 5 Minutes)

One of the areas of Web Analytics that I am passionate about is the integration of Web Analytics and CRM. In the next three blog posts, I am going to share why I think this topic is important and some ideas on how to do it.

Why Integrate Web Analytics and CRM?
For those who are not experts on CRM, it stands for Customer Relationship Management and it generally involves using a tool to store all information you have about your prospects/customers. This normally includes all contacts with customers while they were prospects, all customer service touches, what products they use and how much they pay for each. However, the main thing to understand is that CRM systems contain pretty much all data about prospects/customers that takes place after you know who they are. But before your customers fill out a form or call you, guess where many of them are going? That’s right, your company’s website (and, more and more, to social media sites!). Guess who knows the most about what prospects do before your company knows they are interested in you? Your Web Analytics platform!

Last week, I presented on this topic at the eMetrics conference, where I posited that the combination of Web Analytics and CRM is akin to the joining of chocolate and peanut butter in that they are both great, but even better together! Oftentimes, as web analysts we know a great deal about what happens on the website, but unless your website sells something or sells advertising, the true success event ($$$) often takes place off the website (especially for B2B sites). Additionally, for all the great information we have about website visitors, most of it is anonymous – we don’t really know who they are, so we can’t easily connect their website behavior to other interactions. What if we could take all of that anonymous website behavior and somehow connect it with the known prospect/customer behavior stored in our CRM system? Imagine if every time a prospect filled out a lead form on your website, the salesperson to whom the lead is routed could see what that person had viewed on the website, what products they had looked at, etc… That could lead to a much more meaningful conversation and help get things off on the right foot. In this first post on the topic, I will cover ways in which you can improve your CRM system by passing it meaningful data from your web analytics tool.

Passing Pages Viewed
The first area I would like to cover is the concept mentioned above, in which we pass data about pages viewed from your Web Analytics tool into your CRM tool. So let’s say that you have a website visitor who navigates a bunch of pages on your website and then fills out a lead form. At that moment, you have the opportunity to create a connection between that user’s website (cookie) ID (Omniture calls this a Visitor ID) and the ID used to record that lead form in your CRM system. While it would take too long to go into all of the details on how to do this (Hint: read my old Transaction ID post!), at a high level, you can use the APIs of both tools to tie these IDs together. Once you have made this connection, you can pass data bi-directionally between the two systems. In this case, we are going to create a custom object in our CRM system that represents website traffic and import what pages this particular prospect viewed on the website. While this may sound hard, if you look closely, you will notice that the following screen shot is something I did between Omniture and Salesforce.com back in 2005, so it can’t be that hard, right?
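As a very rough sketch of that hand-off, the snippet below shows the shape of the record you might push to the CRM side at form-submit time. Every name in it is hypothetical: both systems expose their own APIs, and the endpoint, fields, and visitor-ID retrieval will differ in any real implementation.

```typescript
// Hypothetical join record: the CRM lead ID and the web analytics
// visitor (cookie) ID, captured together when the form submits.
interface LeadVisitorLink {
  crmLeadId: string;
  webVisitorId: string;
}

// Invented endpoint, for illustration only: store the link in the CRM so
// page view data can later be imported against the right prospect.
async function linkLeadToVisitor(link: LeadVisitorLink): Promise<void> {
  await fetch("https://crm.example.com/api/leads/visitor-link", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(link),
  });
}
```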

In this case, your sales team would know that this person is probably interested in Weather products so they might want to prepare accordingly for their first phone call or face-to-face meeting.

Passing Website Scores
In one of his post-Summit blog posts, Ben Gaines talked about a topic called Visitor Scoring (I prefer the name Website Scoring to avoid the whole Engagement debate!). Basically, this involves storing a unique website score for each website visitor so you can see how active they have been on the website. For example, you can set this up so if a visitor views a Product page they get 5 “points,” but if they view a product demo video, they get 8 “points,” and so on. I tend to use Participation metrics or segments in Discover to determine which pages should be rated higher than others. If you have implemented this, one of the cool ways you can use it is to identify the current website score of a website visitor who completes a web form and pass it to your CRM system. Let’s say that your sales team receives hundreds or thousands of new leads each day. One way they can determine which ones they should call first might be to see how active each has been on the website. If one prospect comes through with a website score of “10” and another with “54,” which one would you call? While this isn’t meant to replace a full-blown lead management system, it is another data point that can be passed from Web Analytics to CRM.
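One way to accumulate such a score in SiteCatalyst is with a counter eVar, which increments when you pass it a “+n” value. The sketch below is only illustrative: the eVar number, page names, and point values are assumptions, and the real page-to-points mapping would come out of your Participation analysis.

```typescript
// Hypothetical scoring sketch using a counter eVar (eVar30 is invented);
// "s" stands in for the SiteCatalyst object on the page.
const s: { eVar30?: string } = {};

const pagePoints: Record<string, number> = {
  "products:overview": 5,   // product page view
  "products:demo-video": 8, // demo video view
};

// Call on each page view; the counter eVar accumulates per visitor.
function scorePage(pageName: string): void {
  const points = pagePoints[pageName];
  if (points) {
    s.eVar30 = `+${points}`;
  }
}
```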

    Lead Nurturing
    Unfortunately, there are most likely way too many visitors for your sales team to talk to, and not all of them are truly qualified. Therefore, one of the key strengths of CRM tools is that they can nurture or re-market to prospects via e-mail and other platforms. For example, it would be common for a company to use its CRM tool to automatically schedule an e-mail to go to all prospects who are interested in Product X and have more than 500 employees. However, what is often missing from these types of nurturing programs are the deep insights that can come from your Web Analytics tool. Building upon the preceding scenario, if we have a connection between a particular prospect and their website cookie ID, then as they come back to the site and click on more things, we should be pushing that information into our CRM tool and having it decide which re-marketing information the prospect receives. For example, if the prospect above started clicking on items related to another CME product (say Eurodollars), the salesperson may have no plans to look at this person’s record in the next week, so they would never know that. But by automating the data exchange between the Web Analytics tool and the CRM tool, specific product flags could be triggered that would result in the prospect being intelligently nurtured with little human intervention. If you are interested in Lead Nurturing, you can also look at tools like Eloqua, which partner with CRM tools to provide this type of functionality.
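
    To make the idea concrete, here is a rough server-side sketch of how web activity could be turned into nurture flags; the interface, field names, threshold and product names are all assumptions for illustration:

        // Hypothetical nightly job: convert recent web activity into CRM
        // product-interest flags for automated nurturing.
        interface WebActivity {
          crmLeadId: string;
          productPagesViewed: string[]; // e.g., ["eurodollars", "weather", "eurodollars"]
        }

        function productFlags(activity: WebActivity, minViews: number = 2): string[] {
          const counts = new Map<string, number>();
          for (const product of activity.productPagesViewed) {
            counts.set(product, (counts.get(product) ?? 0) + 1);
          }
          // Flag any product the prospect viewed at least minViews times.
          return [...counts.entries()]
            .filter(([, views]) => views >= minViews)
            .map(([product]) => product);
        }

    The CRM would then pick up these flags and route the prospect into the appropriate e-mail track without anyone having to open the record.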

    Passing Key Website Metrics to CRM
    The last concept I will cover in this post is the passing of key website metrics to your CRM system. Most sales organizations use conversion funnels that are not unlike what we are used to in Web Analytics. However, their funnels normally begin with new Leads and progress through different sales stages until business is won or lost. The one flaw in this model is that it doesn’t account for the true potential of selling opportunities that exist. A true salesperson would say that anyone who visits their company’s website is an opportunity for a sale, so the way I look at it, they should include metrics like Unique Visitors and people who view a Demo or see a Lead Form as part of their sales funnel. I also think that getting sales to think of their funnel in a larger context helps bridge the gap between Sales and Marketing and opens the door for increased cooperation.

    Therefore, one of the ways I do this is to take the traditional sales funnel and add some of our Web Analytics KPIs to it, like this:

    Final Thoughts
    That covers most of the topics related to passing Web Analytics data into CRM. In the next post, I will cover the flip side and show how you can pass CRM data into your Web Analytics tool.

    Adobe Analytics, Technical/Implementation

    Omniture Usage Stats

    (Estimated Time to Read this Post = 1 Minute)

    Every once in a while, I will get questions about how often people are accessing Omniture SiteCatalyst. In case this happens to you as well, I thought I’d write a (really) quick post with instructions on how to do this. Currently, there isn’t that much you can report on (I have requested more here) so this will be one of my shortest posts ever!

    Usage Reportlet on Dashboard
    The only way I know of to get usage metrics is by adding a usage reportlet to a SiteCatalyst Dashboard. To do this, open a new or existing Dashboard and add a “Usage” element. Through this element you can get information on Users, Reports Viewed, Suites, etc… as shown here:

    Unfortunately, you don’t get much detail and are basically stuck with one metric, “Views” (which I assume is similar to Page Views). Once there, you can fill out the rest of the dashboard items. In the example below, I am adding a report that will show me the Top 500 users.

    Once the reportlet is on the Dashboard, I tend to export it to Excel and do some charts/graphs there. Here is an example of how you can trend how often people are engaging with your SiteCatalyst reports:

    Well, that’s it. Short and sweet…

    Analytics Strategy, Conferences/Community, General

    Are you coming to eMetrics?

    Well folks, it’s that time of year again. The winds are dying down and the flowers have all started to bloom, so it must be time to make our annual pilgrimage to San Jose to bask in the glory of Jim Sterne and the eMetrics Marketing Optimization Summit! As usual I will be there and have the honor of sharing a keynote slot with my long-time friend and uber-optimizer Bryan Eisenberg!

    • eMetrics Keynote: Wednesday at 1:00 PM in the Grand Ballroom

    Partner John Lovett will also be there, basking in his own glory on the heels of his Web Analytics Association victory … and taking the WAA’s new Certification test. I haven’t really had much time to think about the Certification yet but will be interested to hear what John and others taking the test have to say.

    I also have the rare honor of presenting with Brett Crosby, Group Product Manager for Google Analytics and one of the nicest guys in the entire industry, hands down. Oddly, he and I are presenting IMMEDIATELY AFTER his “What’s new from Google Analytics” pitch on Tuesday … but to compensate we’re gonna try something new and have a very loose “conversation” about web analytics that is more similar to an X Change mini-huddle than a traditional presentation.

    • Talking Analytics: Tuesday at 2:00 PM at The Conversion Conference (co-located w/eMetrics)

    Finally I will be sharing the stage at Web Analytics Wednesday with Adam Laughlin from the nonprofit Save the Children. We will be talking about our respective community education efforts — his “Web Analytics Without Borders” WAA initiative and our own Analysis Exchange. I will be making a few exciting announcements about The Analysis Exchange next Wednesday so if you cannot attend Web Analytics Wednesday please watch my blog or follow me on Twitter.

    • Web Analytics Wednesday: Wednesday at 6:00 PM at the Fairmont in San Jose

    That schedule again:

    • Tuesday, 2:00 PM at The Conversion Conference with Brett Crosby (Google)
    • Wednesday, 1:00 PM at eMetrics with Bryan Eisenberg (eMetrics Keynote)
    • Wednesday, 6:00 PM at Web Analytics Wednesday (The Fairmont Hotel, Market Street Foyer)

    Thanks to Coremetrics and SAS for their generous support of Web Analytics Wednesday at eMetrics, by the way. Great companies like these are what keep WAW events around the world free and open to everyone!

    See you in San Jose!

    Adobe Analytics

    Classification Alerts

    Recently, Omniture released its latest version of SiteCatalyst (version 14.7), which added a bunch of new features. Below are a few links to other blog posts that describe some of these new features:

    However, one of my favorite new features was the addition of Alerts on Classifications. In the past, you could set Alerts on Success Events and on eVar and sProp values, but not on Classifications of those eVars and sProps. While this new feature doesn’t sound all that impressive, it can be quite powerful. In this post I will demonstrate how it can be used.

    Example – Traffic Driver Type Classification
    For the purposes of this post, let’s imagine that we have an eVar or Campaign variable that captures individual Traffic Drivers (Sources). These sources of traffic might be SEO keywords, Paid Search keywords, e-mail links or clicks from Social Media sites. However, in this case, we don’t want to be notified every time there is an increase/decrease for a specific Traffic Driver; rather, we would like to know if there has been a significant change to one of the higher-level types (SEO, SEM, Social Media). Before this new version, doing that would have been basically impossible, but it is easy now if you have used SAINT to classify your individual Traffic Drivers.
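
    For reference, the SAINT upload behind such a classification is just a tab-delimited file keyed on the raw variable value; a hypothetical excerpt (all values invented for illustration) might look like this:

        Key                           Traffic Driver Type
        google:web analytics tool     SEO
        ppc:site analytics software   SEM
        email:march newsletter        E-mail
        twitter:product launch        Social Media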

    Let’s say that your boss wants to be alerted if there is a significant change in Social Media referrals to the website from one week to the next. To be alerted about this, you would simply open up the Classification version of the Traffic Driver report (we’ll call it Traffic Driver Type) and click the Alert icon. Doing this brings up the screen below, which is basically the same as the Alert screen you may have used previously. Once here, you can give your Alert a name, choose the time frame, pick your metric and then select the specific item you want to be alerted about (in this case I have selected Social Media). Next, you set the type of Alert – in this case I have chosen a percent change of over 15% – and then tell SiteCatalyst how you would like to be notified (e-mail or mobile device). That’s it!

    Other Uses
    There are an infinite number of ways you can use this powerful feature. Here are just a few that I can think of off the top of my head:

    • Classify internal search terms into buckets and be alerted when a specific type of keyword hits a threshold or changes significantly
    • Classify countries or cities into regions and be alerted when a metric related to a specific one changes
    • Classify search engine keywords into Branded/Non-Branded and be alerted when each changes significantly
    • Classify videos viewed on your website and be notified when a video type changes significantly
    • If you are using the Omniture Twitter Integration, all of the Twitter data points are based upon Classifications so you can set an Alert when a particular person Tweets or if a specific Twitter keyword experiences a spike

    These are just a few examples, and I am sure you will find many uses unique to your business that can prove beneficial.

    Now, if only Omniture could combine this new feature with this future idea, then we will really be in business! In the meantime, if you think of some really interesting ones, feel free to leave a comment here…

    Social Media

    New Research on Social Marketing Analytics

    I’ve been awaiting this day with eager anticipation for some time now because we are finally releasing our paper on Social Marketing Analytics. Several months ago Eric Peterson and I started talking with Jeremiah and Altimeter Group about the issues facing social marketers. Despite the red hot flames trailing anything with the word social in it, the outlook for effectively measuring the effects of social marketing initiatives in a meaningful way was somewhat grim.

    A lack of standardization, reckless experimentation, and unanswered calls for accountability were plaguing businesses that were working in earnest to embark on their social marketing activities. After some intense discussions and some creative thinking, we decided to collaborate on a research project that would leverage social media strategy from Altimeter Group and digital measurement rigor from Analytics Demystified. The result is a framework for measuring social media that we’re happy to share with you today. If you’re into this stuff, please drop us a note or give us a call to get involved in the conversation.

    Introducing the Social Media Measurement Framework

    There’s no denying that social media is the hottest sensation sweeping the globe today. Yet, marketers must see past the shiny object that is social media and start applying a pragmatic approach to measuring their efforts in the social space. We developed a framework that starts with a strategy that requires solid business objectives. From there, specific measures of success – that we call Key Performance Indicators – provide a standardized method for quantifying performance.


    Mapping Business Objectives to KPIs

    Our report identifies four social business objectives: Foster Dialog, Promote Advocacy, Facilitate Support and Spur Innovation. While there may be others that apply to your business, we view these as a solid foundation for beginning the measurement process. In the report we align KPIs to these social business objectives and offer real formulas for calculating success.

    The Social Business Objectives and Associated KPIs are:

    • Foster Dialog: Share of Voice, Audience Engagement, Conversation Reach
    • Promote Advocacy: Active Advocates, Advocate Influence, Advocate Impact
    • Facilitate Support: Resolution Rate, Resolution Time, Satisfaction Score
    • Spur Innovation: Topic Trends, Sentiment Ratio, Idea Impact

    We encourage you to download the full report here to get the complete context and actual formulas for these KPIs.

    Note: This report was a collaborative effort by Analytics Demystified and Altimeter Group and as such there are two versions of this report. The content is identical, yet we each published under our own letterhead.

    This is Open Research…

    We made a conscious decision not to accept sponsors for this research and to produce it entirely at our own expense so that we could offer the industry a genuine launching pad for social media measurement. However, this research would not have been possible without numerous contributions from social media and measurement visionaries. We thank them in the report, but it’s worth mentioning that these contributors helped illuminate the big picture of the challenges and opportunities associated with measuring social. We’re publishing this work under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License and encourage practitioners, vendors and consultants to adopt our framework and use it in measuring social media.

    We also want to be realistic about this body of work and acknowledge that it does not answer all questions regarding the measurement of social activities. Our hope is that it offers a solid jumping-off point for getting started and that each of you in the community will modify it and make contributions to improve this method of measurement. We can assure you that we’ll be listening to your feedback and will continue to update this work based on it.

    Want to Contribute or Learn More?

    Jeremiah and I will be conducting a webcast on June 3rd to reveal the gritty details behind the strategy and framework. Join us by registering here: Attend the no-cost webinar on Social Media Measurement.

    We encourage you to embed the Slideshare link into your own sites and I’ll link to others that extend the conversation here as well.

    For more white papers from Analytics Demystified, click here. Or to instantly download a PDF of this Social Marketing Analytics report click this GET IT NOW link.

    Related Links:

    Jeremiah Owyang, my co-author, on the Web-Strategist blog

    The Altimeter Group blog posting

    Social market analytics: the dark side? Posted by Dennis Howlett

    Shel Holtz also recognizes “The haphazard means by which we are monitoring and measuring social media…”

    cjlambert’s posterous gives us a “like” but remains skeptical on the concept that media can be measured comprehensively

    Geoffroi Garon takes the report to his French speaking audience.

    Marshall Sponder includes us in his Social Analytics Web Journal write-up. Marshall is also working to implement our KPIs with one of his clients. Here’s Part 1 of his multi-part series.

    Kenneth Yeung delivers a fantastic synopsis of our 25-page report with one concise post on his blog, The Digital Letter, with a follow-up post here.

    Research Live offered a brief write-up of the research.

    @Scobleizer tweeted us! Woot!

    Chelsea Nakano references our research in her ambitious post titled Everything You Need to Know About Social Media Marketing.

    Lisa Barone of Outspoken Media gives us a shout out. Lisa was also an influencer that we interviewed for the research.

    Christopher Berry delivers great questions and thoughts on the research in his Eyes on Analytics blog.

    Analytics Strategy, General

    My Interview with Adobe Chief Privacy Officer

    Those of you paying close attention to issues regarding consumer privacy on the Internet are probably at least a little familiar by now with Flash Local Shared Objects (also called Flash “Cookies” by some). I wrote a white paper on Flash objects’ use in web analytics on behalf of BPA Worldwide back in February and had to update the blog post I wrote when I noticed that Adobe had wisely written a letter to the Federal Trade Commission regarding the use of Flash to reset browser cookies.

    After writing that update I got in contact with Adobe’s Chief Privacy Officer, MeMe Rasmussen, who politely agreed to answer a few questions that I had about their letter and Adobe’s position on the use of Flash as a back-up strategy for cookies. Given that Scout Analytics is now reporting that Flash “Cookies” are increasingly being deleted by privacy-concerned Internet users, I figured it was a good time to publish my questions and MeMe’s responses.

    The following are my questions (in bold) and Mrs. Rasmussen’s responses verbatim.

    Flash Local Shared Objects (LSOs) have been around for a long time, and I have been aware of their use as a “backup” for browser cookies for reset and other calculations for a few years. What made you write your letter to the FTC now? Was there a specific event or occurrence?

    The topic of respawning browser cookies using Flash local storage was publicized after research conducted by UC Berkeley on the subject was published in August 2009.  The topic was also raised at the FTC’s First Privacy Roundtable in December, so when the FTC announced that its Second Roundtable would focus on Technology and Privacy, we felt it was the appropriate opportunity for Adobe to describe the problem and state our position on the practice.

    While I believe the position you outlined in your letter to the FTC is the correct one, you have put many of your customers in an uncomfortable position by condemning a practice that they have been using for quite some time — essentially issuing negative guidance where none had been previously issued (to my knowledge). What has the response to this been, if I may ask?

    We have not received any comments or concerns from customers about our Comment Letter to the FTC.  Adobe’s position specifically condemns the practice of using Flash local storage to back up browser cookies for the purpose of restoring them after they have been deleted by the user without the user’s knowledge and express consent.  We believe companies should follow responsible privacy practices for their products and services, regardless of the technologies they choose to use.

    On page 8 of your response to the FTC you discuss Adobe’s commitment to research the extent of this (mis)use of Flash LSOs.  Given the extent to which LSOs are being used perhaps “not as designed” and the sheer popularity of Flash on the web this seems quite a task.  Can you describe how you have started going about this effort?

    We are currently in the process of defining the research project and are working with a well-respected consumer advocacy group and university professor.  At this time, the specific details of the project have not yet been finalized.

    Within the web analytics community many have commented that your position on Flash LSOs may impact some of what Mr. Narayen and Mr. James have said about the integration of Omniture and Adobe products like Flash. Specifically, some of the commentary suggests a tight integration of Omniture’s tracking and Flash. Does your position on LSOs as a tracking device change the guidance the company has issued to mutual customers?

    No, the position we outlined in the FTC Comment condemning the misuse of local storage was specific to the practice of restoring browser cookies without user knowledge and express consent. We believe that there are opportunities to provide value to our customers by combining Omniture solutions with Flash technology while honoring consumers’ privacy expectations.

    One of the suggestions I made in the white paper with BPA Worldwide that you cited was to use Flash LSO as a back-up tracking mechanism but NOT to use it to re-spawn cookies.  From a measurement perspective there are a handful of good reasons to do this … does Adobe have a position on that strategy that you can outline?

    The point we made in our FTC Comment was that we considered the practice of using Flash local storage to respawn HTML cookies without user consent or knowledge to be an inappropriate privacy practice. In your white paper, you identified some uses of Flash local storage whereby browser cookies are reset but the user is given clear notice and an opportunity to consent. We believe that technology should be used responsibly and in ways that are consistent with user expectations. The example you presented in your white paper was an example of a Web site that, by giving notice and control to the user, implemented our technology in what appeared to be a responsible manner.

    (Thanks again to MeMe and the team at Adobe for getting these responses back to me! As always I welcome your comments and questions.)

    Adobe Analytics, General

    Advanced Comparison Reports

    In my last post, I covered Comparison Reports and showed the basics on how to use them. In this post, I will cover a few advanced ways that I use these reports related to Pathing reports.

    Date-Based Pathing Comparisons
    For many of the analyses I perform, I like to see how website visitors are navigating my site via Pathing reports. However, these reports are usually static – for a specified time period. Therefore, what I like to do is to view a few of the standard SiteCatalyst Pathing reports in a way that I can see how Pathing is changing day to day, week to week or month to month. While SiteCatalyst doesn’t let you use comparison reports in all Pathing reports, there are a few key ones that do allow you to compare date ranges – Next/Previous Page Report and Full Paths Report. The following are some examples of how you can take advantage of this.

    Next/Previous Page Report
    Let’s say that you have a key page like your home page and you want to see what pages visitors are going to from it in March vs. February. To do this, simply open the Next Page report (under Pathing) and select the Home Page as the page of focus. Once there, you can use the calendar as shown in my last post to select two different time periods (February & March in this case) and see the report:

    When looking at this type of report, I tend to change the graph so I am looking at percentages instead of the raw numbers since that is what I care about the most. Also, you can apply Normalization to this report by selecting it in the report settings (to learn about Normalization see my previous post). Finally, the same principles apply if you want to see the above report as a Previous Page report to see how people are getting to a specific page differently between two different time periods.

    Full Paths Report
    In this next example, let’s say that we don’t have a specific page in mind, but rather, want to see how all of our paths are changing between two different time periods. To do this, you can use the Full Paths report (found under Pathing). This report shows all of your paths and can be quite massive, but I tend to use it to see just my top few site paths. Using the date comparison feature, you can see how the paths of the same site differ between two distinct time periods. To do this, simply open the Full Paths report and use the calendar tool to select your two date ranges and you will see a report that looks like this:

    As you can see, this report allows you to look at your top paths and see if there are any significant changes you should know about. This is handy if you have a special promotion on a page and want to see how it is changing your Pathing behavior. Finally, there are some cool advanced things you can do in the Full Paths report like limiting paths to a specific number of pages and specifying an entry page, but you can explore that as needed.

    Site-Based Pathing Comparisons [If you don’t have multiple report suites or ASI slots you can skip this section!]
    As I mentioned in my previous post, the other type of comparison report you can create is one based upon report suites/ASI slots. These comparisons allow you to see how one data set is doing compared to another. A perfect example of this is if you are part of a multinational company and have basically the same website for different geographies. Other examples might be a company that has multiple divisions/products that are similar enough that they use the same type of website template. In both of these cases, you have the same page names, just different locales or products, and might want to see how one is doing vs. another. In this example, we will assume that a website is basically the same except for the site geography and that we want to see how people are navigating from the home page of two different geographies. If you are clever, you will quickly realize that this might be problematic since each website might have different page names including the site locale. This is why in my Page Naming Best Practices post I recommended that you set a custom sProp without the site locale (and have Pathing enabled). Here is a quick excerpt from that post:

    One wrinkle that can emerge are cases where you have multiple geographic websites. For many companies, this results in a similar version of the website, but translated into different languages. If you have this situation, I recommend tweaking the above page names to include a site locale indicator. For example, each page in the UK site should have “uk:” in the page name and so on. When this is done, your page names might look like this:

    [Advanced User Alert – If you have multiple site locales, I also recommend passing the page name without the site locale to a different sProp (with Pathing enabled) so you can see how a page does across all site locales (i.e. Participation). I also like to pass the site locale by itself to a separate sProp so in a global report suite I can create correlations between sProps and other variables (i.e. Internal Search Terms).]
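
    As a rough sketch of that tagging, assuming the standard SiteCatalyst s object and purely hypothetical sProp numbers (the post does not prescribe any of these):

        // Hypothetical SiteCatalyst page-code sketch; prop numbers are invented.
        declare const s: any; // the standard SiteCatalyst tracking object

        s.pageName = "uk:products:widgets:overview";     // locale-prefixed page name
        s.prop10 = s.pageName.replace(/^[a-z]{2}:/, ""); // page name without the locale (Pathing enabled)
        s.prop11 = s.pageName.split(":")[0];             // the site locale on its own, for correlations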

    If you have done this, then you can simply open the next page report for this custom sProp (that has no site locale) and choose the “Compare to Site” option and select the sites that you want to compare. In this example, I am looking to see what visitors from Spain and Italy do when they are on the home page. In this case, I am normalizing the data so I can get a better view of the next page differences between the two sites even if they have different amounts of traffic. As mentioned previously, you can do the same thing for Previous Page and Full Path reports…

    Using Comparisons With Other Custom sProps
    Lastly, if you have Pathing enabled for other custom sProps, you can take advantage of this functionality there as well. Let’s say you have Pathing enabled on the internal search terms people look for on your website. You can compare these between websites or for two different time periods. Below is an example of looking at the same internal search terms for two different time periods.

    Final Thoughts
    These two posts should cover pretty much all you need to know about SiteCatalyst comparison reports. If you are using them in any other creative ways, please leave a comment here. Thanks!

    Adobe Analytics, General, Reporting

    Comparison Reports

    Oftentimes, both when I used to work with clients and now internally, I am surprised to see how many SiteCatalyst users don’t take advantage of Comparison Reports within the SiteCatalyst interface. In this post I will review these reports so you can decide if they will help you in your daily analysis.

    Comparing Dates
    Hopefully most of you are familiar with this type of Comparison Report. This report type allows you to look at the same report for two different date ranges. To do this, simply open up an sProp or eVar report and click the calendar icon and choose Compare Dates when you see the calendar. In the example shown here, I am going to compare February 2010 with March 2010:

    For this example, I have chosen the Browser report, using Visitors as the metric. After selecting the above dates, my report will look like this:

    As you can see, SiteCatalyst adds a “Change” column where it displays the difference between the two date ranges. This can be handy to spot major differences between the two date ranges. In this case we can see that “Microsoft Internet Explorer 8” had a big increase and that “Mozilla Firefox 3.5” had a decrease (probably due to version 3.6!). You can compare any date ranges you want from one day to one year vs. another year.

    However, when you compare ranges that have different numbers of days, your results can be skewed. For example, in the report above, March had three more days than February, so that may account for why the differences between the two are so stark. If this ever becomes an issue, you can take advantage of a little-known feature of Comparison Reports – Normalization. In the report settings, there is a link that allows you to normalize the data. When you normalize the data, SiteCatalyst makes the totals at the bottom of each report match and increases/decreases the values of one column to adjust for the different number of days. I am not 100% sure what specific formula or algorithm is used to do this, but given how often you will use it, I would go ahead and trust it. Below is an example of the same report with Normalization enabled:

    If you look closely, you will see that the March 2010 column was normalized when we clicked the “Yes” link shown in the red box above. By doing this, SiteCatalyst has reduced the numbers in the March 2010 column to assume the same number of Visitors as there were in February. If you want to normalize such that February is increased to match March, you simply have to reverse the date ranges so that when you select your dates, March is the first column and February is the second column (the second column is always the one that gets adjusted). As you can see, the “Change” column is now dramatically different! In this version, “Microsoft Internet Explorer 8” no longer looks like it has changed much. I find that using this feature allows me to get a more realistic view of date range differences.
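
    For what it’s worth, the behavior described above is consistent with a simple ratio adjustment: scale every value in the second column by the ratio of the two column totals. To be clear, this formula is my assumption, not anything Omniture documents:

        // Assumed normalization (not confirmed by Omniture): scale column B
        // so that its total matches column A's total.
        function normalize(colA: number[], colB: number[]): number[] {
          const totalA = colA.reduce((sum, v) => sum + v, 0);
          const totalB = colB.reduce((sum, v) => sum + v, 0);
          if (totalB === 0) return colB.slice(); // avoid dividing by zero
          return colB.map(v => (v * totalA) / totalB);
        }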

    Finally, you may notice a tiny yellow box in the preceding report image (it says “6,847”). This is a secret that not many people know about. When you normalize data, Omniture artificially reduces or increases the values in the normalized column. But if you want to see what the real value is (if not normalized), you can hover your mouse over any value and you will see a pop-up with the real number! If you look at the first version of the report (the one before we normalized), you will see the same “6,847” number in the first row of the report… Pretty cool, huh?

    Comparing Suites
    This second type of Comparison Report is the one that fewer people are aware of or have used. In this type of comparison, instead of comparing date ranges you compare different report suites. Obviously, this only makes sense if you have more than one report suite, but it also works with ASI slots so don’t assume this isn’t relevant to you if you have just one report suite. Much of the mechanics of this are similar to the steps outlined above. You simply open one report (in this case we will continue to use the Browser report) and then choose the “Compare to Site” link and choose a second report suite or ASI slot. In this case, I am showing an example of the Browser report for two different geographic locations. Since most report suites have different totals, I tend to use Normalization more in these types of comparison reports.

    Final Thoughts
    This covers the basics of Comparison Reports. Hopefully you can use this to start creating these reports and adding them as scheduled reports or even to Dashboards. In my next post, I will take this a step further and demonstrate an advanced technique of using Comparison Reports…

    Analytics Strategy

    An Open Letter to Steve Jobs

    Update (April 22, 2010): This article at Venturebeat suggests that iPhone application measurement vendors are hearing good news from Apple regarding their ability to measure in-app data. The article, however, is devoid of any kind of details whatsoever, and so the Tweets saying “Apple NOT banning analytics from OS 4.0” appear to be somewhat optimistic in my opinion. I’d love to hear from Google, Omniture, Webtrends, Coremetrics, Unica, etc. and see if they are having the same “you have nothing to worry about” conversations with Apple. Obviously, if Venturebeat is correct this is great news, but if I were developing an iPhone application I’d want a little more than rumor to contradict the language in Section 3.3.9.

    Wait and see I guess …

    Dear Mr. Jobs,

    As a very loyal Apple customer and user of your products I want to thank you for all that you’ve done for computing in general. Your attention to detail and your vision have resulted in many of the most useful and usable products I own, too many to list honestly. While I was able to hold off for three days before purchasing the original iPhone (now on my third since I upgrade with every release) I pre-ordered my iPad and absolutely love it.

    Thanks for that.

    Unfortunately, as a long-time member of the digital measurement industry I am in the uncomfortable position of having to ask you to reconsider what will undoubtedly be viewed by many Apple customers, developers, and end-users as an egregious mistake. I am talking about Section 3.3.9 in your updated iPhone Developer Agreement, in which you apparently ban all third-party in-app measurement. While I respect Apple’s commitment to privacy, for those not familiar with Section 3.3.9 I would encourage you to read the following articles:

    The summary statement is that your updated Developer Agreement, if my read is accurate, strips all of your Development partners of their ability to measure application usage with an eye towards improving the overall quality of their product.  Just as a reminder these Developer partners include Best Buy, Expedia, The New York Times, The Wall Street Journal, Netflix, and some 150,000+ other companies working to deliver great experiences on your iPhone, iPod Touch, and iPad devices.

    While I certainly understand the over-arching desire to have quality control in all things Apple, which the mobile family of applications essentially becomes by proxy, banning the ability to measure application use is likely to be met with some resistance among your larger Development partners. Many of these companies are known to me as a consultant and have active programs in place to use solutions like Adobe’s Omniture, Webtrends, Coremetrics, Unica, Google Analytics, and Yahoo Web Analytics to determine which application functionality is working and which needs to be addressed in future updates.

    Given that Apple is a long-time Adobe/Omniture customer I rather suspect that this third-party tracking is embedded in many of your own applications.  Perhaps that’s not the case, but given the general utility of these applications I would be pretty surprised if your own developers aren’t in violation of the new Developer Agreement somewhere in the pre-installed application stack.

    If not, well, shame on your developers for not embedding application tracking in complex applications like Pages and Keynote on the iPad. While I certainly do love the freedom I have to write on the iPad, I suspect if you were using Adobe/Omniture to track Pages you’d see me continually tapping the page in landscape mode trying to get a menu to come up so I can make a bulleted list …

    But I digress.

    Since many of your best Development partners are companies well-known for their general prowess for digital analytics — companies like Best Buy, Expedia, Cisco, Netflix, Disney, ABC, ESPN, and many, many more — you may want to give a little more thought to Section 3.3.9. If this section remains you are essentially blocking all of these companies (and all mobile developers in your App Store) from gaining valuable insight into how their applications can be more useful, more delightful, and frankly, more like Apple.

    Hopefully this was just a huge oversight on someone else’s part within Apple, especially since, as far as anyone knows, you don’t have technology available to replace the data that would be lost if (or when) Developers comply with this requirement. It may seem a touch geeky, but business owners are increasingly relying on this data to justify the expense and commitment it takes to participate in the App Store (and the mobile revolution in general).

    Being such a fan of your work I’d like to offer a solution, just in case you’re open to the idea.

    Apple has an opportunity to do something that, well, nobody else really has or does in terms of digital measurement. Because of how the App Store works, Apple could create a set of terms and conditions for application tracking that would simultaneously provide guidance to your Developer community and create an unprecedented level of transparency for technology end-users everywhere (or at least those using Apple products).

    Imagine a tier of requirements and resulting notifications to the end-user based on the type of data the application wanted to pass.  Just like geo-location requires explicit one-time opt-in today, tracking of individually identifiable (e.g., device-level or personal) data could require the same type of opt-in.  For more basic tracking (e.g., completely anonymous interaction data) you could simply allow that without opt-in to foster the growth and development of the application development community.

    The most important thing is you would have an opportunity to craft a set of mobile tracking requirements that could be extended and applied across the entire mobile universe. In the same way Apple has changed our relationship with “pocket computing” forever, your company could essentially resolve a problem that in some ways is an accident waiting to happen, and do so in a way that creates opportunities rather than creating tension with the very group that is making your products so successful today.

    If this is in any way interesting to you I’d love to discuss it more. My contact information is on my web site.

    Measurement is not as sexy as the iPad or iPhone, but at the end of the day it is just about as important. With every new technology comes the need to understand its use and justify related expenses. Your Development partners are intensely drawn to the iPhone opportunity to be sure, and it’s great that you’re making people like Loren Brichter and others rich thanks to their efforts.

    But not everyone will be as savvy as Apple or as fortunate as Atebits; most companies work to use the limited data they do have to understand user behavior in an effort to make incremental improvements to their applications. Section 3.3.9 seems to prevent this data from getting into your partners’ hands, preventing the very thing I suspect you’re working to promote: the highest quality applications possible delivered via amazing devices.

    Hopefully I and others are simply reading Section 3.3.9 the wrong way. I would be honored if you or someone from Apple would provide guidance on this point and I’m happy to help communicate that guidance in whatever way I am able.

    Sincerely,

    Eric T. Peterson
    CEO, Founder, and Senior Partner
    Analytics Demystified, Inc.

    Analytics Strategy, Reporting, Social Media

    Digital Measurement and the Frustration Gap

    Earlier this week, I attended the Digital Media Measurement and Pricing Summit put on by The Strategy Institute and walked away with some real clarity about some realities of online marketing measurement. The conference, which was relatively small (fewer than 100 attendees), had a top-notch line-up, with presenters and panelists representing senior leadership at first-rate agencies such as Crispin Porter + Bogusky and Razorfish, major digital-based consumer services such as Facebook and TiVo, major audience measurement services such as comScore and Nielsen, and major brands such as Alberto Culver and Unilever. Of course, having a couple of vocal and engaged attendees from Resource Interactive really helped make the conference a success as well!

    I’ll be writing a series of posts with my key takeaways from the conference, as there were a number of distinct themes and some very specific “ahas” that are interrelated but would make for an unduly long post for me to write up all at once, much less for you to read!

    The Frustration Gap

    One recurring theme both during the panel sessions and my discussions with other attendees is what I’m going to call The Digital Measurement Frustration Gap. Being at an agency, and especially being at an agency with a lot of consumer packaged goods (CPG) clients, I’m constantly being asked to demonstrate the “ROI of digital” or to “quantify the impact of social media.” We do a lot of measurement, and we do it well, and it drives both the efficient and effective use of our clients’ resources…but it’s seldom what is in the mind’s eye of our clients or our internal client services team when they ask us to “show the ROI.” It falls short.

    This post is about what I think is going on (with some gross oversimplification), an observation that was actively confirmed by both panelists and attendees.

    Online Marketing Is Highly Measurable

    When the internet arrived, one of the highly touted benefits to marketers was that it was a medium that is so much more measurable than traditional media such as TV, print, and radio. That’s true. Even the earliest web analytics tools provided much more accurate information about visitors to web sites – how many people came, where they came from, what pages they visited, and so on – than television, print, or radio could offer. On a “measurability” spectrum ranging from “not measurable at all” to “perfectly measurable” (and lumping all offline channels together while also lumping all online channels together for the sake of simplicity), offline versus online marketing looks something like this:

    Online marketing is wildly more measurable than offline marketing. With marketers viewing the world through their lens of experience – all grounded in the history of offline marketing – the promise of improved measurability is exciting. They know and understand the limitations of measuring the impact of offline marketing. There have been decades of research and methodology development to make measurement of offline marketing as good as it possibly can be, which has led to marketing mix modeling (MMM), the acceptance of GRPs and circulation as a good way to measure reach, and so on. These are still relatively blunt instruments, and they require accepting assumptions of scale: using massive investments in certain campaigns and media and then assessing the revenue lift allows the development of models that work on a much smaller scale.

    The High Bar of Expectation

    Online (correctly) promised more. Much more. The problem is that “much more” actually wound up setting an expectation of “close to perfect”:

    This isn’t a realistic expectation. While online marketing is much more measurable, it’s still marketing – it’s the art and science of influencing the behavior of human beings, who are messy, messy machines. While the adage that it requires, on average, seven exposures to a brand or product before a consumer actually makes a purchase decision may or may not be accurate, it is certainly true that it is rare for a single exposure to a single message in a single marketing tactic to move a significant number of consumers from complete unawareness to purchase.

    So, while online marketing is much more measurable than offline marketing, it really shines at measurement of the individual tactic (including tracking of a single consumer across multiple interactions with that tactic, such as a web site). Tracking all of the interactions a consumer has with a brand – both online and offline – that influence their decision to purchase remains very, very difficult. Technically, it’s not really all that complex to do this…if we just go to an Orwellian world where every person’s action is closely tracked and monitored across channels and where that data is provided directly to marketers.

    We, as consumers, are not comfortable with that idea (with good reason!). We’re willing to let you remember our login information and even to drop cookies on our computers (in some cases) because we can see that that makes for a better experience the next time we come to your site. But, we shy away from being tracked – and tracked across channels – just so marketers are better equipped to know which of our buttons to push to most effectively influence our behavior. The internet is more measurable…but it’s also a medium where consumers expect a decent level of anonymity and control.

    The Frustration Gap

    So, compare the expectation of online measurement to the reality, and it’s clear why marketers are frustrated:

    Marketers are used to offline measurement capabilities, and they understand the technical mechanics of how consumers take in offline content, so they expect what they get, for the most part.

    Online, though, there is a lot more complexity as to what bits and bytes get pushed where and when, and how they can be linked together, as well as how they can be linked to offline activity, to truly measure the impact of digital marketing tactics. And, the emergence and evolution of social media has added a slew of new “interactions with or about the brand” that consumers can have in places that are significantly less measurable than traffic to their web sites.

    Consumer packaged goods companies struggle mightily with this gap. Brad Smallwood, from Facebook, showed two charts that every digital creative agency and digital media agency gnashes their teeth over on a daily basis:

    • A chart that shows the dramatic growth in the amount of time that consumers are spending online rather than offline
    • A chart that shows how digital marketing remains a relatively small part of marketing’s budget

    Why, oh why, are brands willing to spend millions of dollars on TV advertising (in a world where a substantial and increasing number of consumers are watching TV through a time-shifting medium such as DVR or TiVo) without batting an eye, but struggle to justify spending a couple hundred thousand dollars on an online campaign? “Prove to us that we’re going to get a higher return if we spend dollars online than if we spend them on this TV ad,” they say. There’s a comfort level with the status quo – TV advertising “works” both because it’s been in use for half a century and because it’s been “proven” to work through MMM and anecdotes.

    So, the frustration gap cuts two ways: traditional marketers are frustrated that online marketing has not delivered the nirvana of perfect ROI calculation, while digital marketers are frustrated that traditional marketers are willing to pour millions of dollars into a medium that everyone agrees is less measurable, while holding online marketing to an impossible standard before loosening the purse strings.

    My prediction: the measurement of online will get better at the same time that traditional marketers lower their expectations, which will slowly close the frustration gap. The gap won’t be closed in 2010, and it won’t even close much in 2011 – it’s going to be a multi-year evolution, and, during those years, the capabilities of online and the ways consumers interact with brands and each other will continue to evolve. That evolution will introduce whole new channels that are “more measurable” than what we have today, but that still are not perfectly measurable. We’ll have a whole new frustration gap!

    Analytics Strategy, General

    iPad, Mobile Analytics, and Web Analytics 3.0

    If you follow me on Twitter (@erictpeterson) you are likely already annoyingly aware that I rushed right out last week and bought Apple’s new iPad. I got the device for a few reasons but fundamentally it was because I’m a technology geek–always have been really–and despite knowing the iPad will only get better over time I was happy to shell out $500 to see what the future of computing and all media would look like.

    Yeah, I see the iPad as the future of computing and all media. Bold, sure, but hear me out … and I promise I’ll make this relevant to web analytics, eventually.

    I believe that all that the “average user” of any technology really wants is a simple solution to whatever problem they may have at the time. At a high level people look towards their operating system to simplify access to the multitude of applications and documents they use; at a lower level we want our applications to simplify whatever process we’re undertaking.

    Proof points for my belief are everywhere, ranging from the adoption of speed dial on phones (which simplifies calling your friends and family) to power seats in cars (which simplify getting comfortable when you switch drivers), and even into web analytics, where a substantial growth driver behind Google Analytics has been the profound simplicity with which important tasks such as custom report creation and segmentation are accomplished.

    The iPad, and to some extent the iPhone and its clones, absolutely crushes simplicity in a way that is simultaneously brilliant and powerful. Want to read a book? Touch the iBooks application, touch the book you want, and start reading. Want to send an email? Touch the Mail app, touch the new icon, and start writing. Want to play a game or send an SMS or Tweet something? It all works exactly the same way … tap, swipe, smile.

    Sure, the iPad is a little heavier than is optimal, and yeah, it shows fingerprints and costs a lot of money and isn’t open source and … blah, blah, blah, blah. The complainers are gonna complain no matter what–you’re Apple or you’re not in this world, I guess. But the complainers, I think, fail to grasp the opportunity the iPad creates:

    • The iPad takes mobile computing to an entirely new level. With iPad you have a 1.5 lb device that will let you read, write, watch, and generally stay connected from just about anywhere for up to 10 hours between charges. What computer or phone does that? None that I know of, and so iPad gives us a simple answer to “I need to work but I’m away from the office.”
    • The iPad enforces usability of applications, and this is a very good thing. The complainers complain that Apple asserts too much control over app design via their App Store acceptance processes. Apparently these folks haven’t used enough crappy software in their lifetimes and are hungry for more. Apple’s model and their application design toolkit gives us a simple answer to “I wish this software was easier to use.”
    • The iPad changes media consumption forever. Despite the Flash issue, one I suspect will become a non-issue very quickly thanks to the adoption of HTML5, the iPad is the most amazing media consumption device ever created. It is a portable, high-definition TV, it is a near-complete movie library, it provides access to hundreds of thousands of books, and it allows you to surf the Internet in a way that can only be described as “delightful”. By definition the iPad gives us a simple answer to “I wish I had a way to keep my books, my movies, my newspapers, my TV shows, … all of my media, in a single place that could be accessed anytime from anywhere.”
    • The iPad changes education forever. I’m making a bet that by the time my first grade daughter hits middle school a significant number of children will carry iPads to school, not expensive, heavy, and immediately out-dated textbooks. Think about this for a second: interactive textbooks that can be updated as easily as a web site, think about young people’s media consumption model today, and think for just a second about why Apple would be motivated to provide “significant educational discounts” for the device. The iPad in schools gives us a simple answer to “How can we provide a common platform for learning that any student or teacher can immediately master and reflects our rapidly changing world?”

    Think that last piece isn’t important? Have a look at the image at the right, sent to me by @VABeachKevin (thanks man!) where he has already translated all three of my books into the ePub format and placed them on his iBooks bookshelf! This collection gives any web analyst with the iPad instant access to hundreds of pages of web analytics insight, anywhere, anytime. How cool is that?

    (And heck, these aren’t even Jim, Avinash, or Bryan’s books … I bet Kevin’s converting those as we speak!)

    I suspect you cannot appreciate this until you have one in your hands, but the iPad has removed, or soon will remove, the necessity to purchase printed books, newspapers, and magazines. More importantly, it gives the holder the ability to work efficiently from nearly any location around the world–all you need is a Wifi connection today, and later this month that will be augmented with a 3G option.

    Yeah, I’m an Apple fanboy, and yeah, I’m lucky to be able to drop $500 on technology without giving it much thought, but wait and see … I bet the adoption curve on the iPad will very much mirror that of the iPhone, which is essentially ubiquitous these days. And just wait until someone develops a full-featured web analytics data viewer that takes advantage of all the pinching, swiping, dragging, and zooming UI capabilities of the iPad – that will simply be awesome! Imagine:

    • Scrolling along through time by simply swiping left or right
    • Zooming in on data by tapping or dragging across several dates
    • Adding metrics and dimensions by dragging them onto the existing graph or table
    • Changing from graph to table by simply rotating the device

    Total “Minority Report” for web analytics … and I bet we see this within nine months’ time. In fact, if you’re an Apple developer looking for an awesome project … call me! I’d love to help guide a team developing next-generation web analytics interfaces on tablet computers.

    Why This Matters to Web Analytics Professionals

    I said I would try and make this relevant to web analytics practitioners, so here I go. The iPad matters to measurement folks for exactly the reason I outlined back in September 2007, when I first wrote about mobile’s impact on digital measurement. Web Analytics 3.0, a term I coined at the time and one I still use, is essentially the addition of a completely new dimension for analysis: user location.

    In a digitally ubiquitous world–again one I described in 2007 that has more or less come to pass (although the prediction was kind of like predicting gridlock in Washington or rain in Oregon in April)–where a visitor is accessing information from becomes increasingly important and adds potentially significant context to any analysis we conduct. Location coupled with the device they’re using will likely have a profound impact on their likelihood to transact or otherwise use your site.

    For example, a visitor accessing your site from home will likely have different needs and goals than one in their car, in an airport, in a coffeeshop, or in one of your competitors’ stores. In a world where an increasing number of visits are “out of home/out of office” visits conducted using mobile devices, our collective approach towards analysis needs to change, perhaps dramatically.

    To be fair, this is not something you need to solve and resolve today. While our ability to discern and differentiate mobile visits is getting better all the time, our overall analytical capabilities for mobile, including the ability to tie mobile, fixed web, and offline visitors together, are still unfortunately complicated. On top of that, while applications are increasingly able to pass over geographic information, most web browsers are not, and so our ability to gather large quantities of this data is still limited …

    … at least for the time being.

    For now I stand by what I said back in 2007–digital ubiquity and location-awareness change everything. Back then the devices and platforms were just an idea; now we have the iPhone and its clones, the iPad is about to usher in a new era of mobile computing, Google and Apple are both behind mobile advertising, and the full scope of our analytical challenges is just beginning to emerge. If you’re struggling with how to measure your mobile investment and thinking about how that strategy needs to evolve, please consider giving us a call.

    What do you think? Do you have an iPad or do you refuse to purchase one? Why or why not? Have you already started to struggle measuring mobile devices or do you have it all worked out? Is this all as exciting to you as it is to me? As always I welcome your thoughts and comments.

    Analytics Strategy

    Opening Day Baseball Brings Out New Reporting

    Baseball fans across the nation were smiling this weekend with opening day games around the league. Those of you who know me recognize that I’m a raging Red Sox fan, but last night’s 8pm start time against the Yankees was just too late for me to catch the entire game. So, upon checking the scores this morning, I got to see this cool new interactive game summary on Redsox.com.


    The spark lines show Tweet volume, with mouseovers that offer details on each individual tweet. And the highlights are actual video clips that fire off a new window right from the summary page. Way to go, MLB.com, for delivering a simple yet innovative mix of professional and consumer-generated content.

    Oh, yeah … the Sox beat the Yankees 9 – 7 in the opener, if you’re wondering.
    GO SOX!!

    Analytics Strategy

    Vote Lovett for the WAA Board of Directors


    Being a change agent for web analytics requires taking calculated risks, standing up for what you believe, and working diligently to make our industry stronger. I left my job at Forrester Research in part to become a change agent for web analytics, and my bid for a seat on the WAA board of directors is the next big step in my journey. But this quest I cannot fulfill alone: I need your vote. I’ve never run for elected office before, so to illustrate my conviction I’ve borrowed words from John F. Kennedy’s 1960 presidential nomination speech and added a few of my own …

    “With a deep sense of duty and high resolve, I accept your nomination.” I’ve stated several times before that there is no industry better than web analytics. Our colleagues within web analytics – the practitioners, the vendors, the leaders and gurus – are by and large friendly, approachable, and always willing to lend a hand. It makes working within analytics gratifying and fun. My hope is to elevate these positive attributes of our industry by aligning under the professional organization that we can call our own.

    “The times are too grave, the challenge too urgent, and the stakes too high…”
    Despite my optimism, we are facing turbulent times as an industry. We need to strengthen our association at a global scale to ensure that we speak with a common voice in all countries and in all languages to distinguish the Web Analytics Association as the undisputed resource for education, standards, research, and advocacy.

    “…if we open a quarrel between the present and the past, we shall be in danger of losing the future.” It has recently come to my attention, based on feedback from some members, that not everyone receives value from the WAA. If elected to the Web Analytics Association board, I will dedicate my term to proving the value of our association to members at every level, from student to vendor to advanced consultant. If we cannot recognize our own value, how then can we expect outsiders to accept our mission with the credibility and respect it deserves?

    “I believe the times demand new invention, innovation, imagination, decision.” Those of you who know me recognize that I am not one to dwell on the mistakes of our past. Instead, I look to the future to determine how we can improve our situation and our position within the industry. These are exciting times for web analytics, but times that will relegate us to obscurity if we fail to demonstrate our vision through genuine contributions. We must think differently about how measurement technologies can be applied to today’s challenges and illustrate how the WAA is defining these efforts by taking a decisive leadership stand.

    “This is a platform on which I can run with enthusiasm and conviction.”
    It is this industry, its people and our collective challenges that I want to champion as a representative of the Web Analytics Association. I’ve stated my intentions on the WAA web site, but will reiterate the most important. It’s time to stop making excuses and start delivering value to the members of the WAA. A vote for me will get you a dedicated evangelist who is willing to shoulder the burden of hard work and diligence that’s required to orchestrate change in this industry.

    I welcome your thoughts and comments about how to improve our industry, and I guarantee an open mind throughout my tenure on the Web Analytics Association board of directors if elected. Thanks for reading. Now Get Out And Vote!!

    Analytics Strategy

    Columbus WAW Recap: Don't "Antisappoint" Visitors

    We had a fantastic Web Analytics Wednesday last week in Columbus, sponsored by (Adobe) Omniture, with just under 50 attendees! Darren “DJ” Johnson was the presenter, and he spoke about web site optimization (kicking off with a riff on how “optimization” is an over-used word!). I, unfortunately, forgot my “good” camera, which means my photojournalism duties were poorly, poorly performed (DJ is neither 8′ tall, nor was he ignoring his entire audience):

    Columbus Web Analytics Wednesday -- March 2010

    One of the anecdotes that stuck with me was DJ explaining a personal experience of clicking through on a banner ad (“I NEVER click on banner ads!” he exclaimed) and then having the landing page experience totally under-deliver on the promise of the ad. He used the term “antisappointment” (or “anticappointment?”) to describe the experience. It’s a handy word that works better orally than written down, but I’ll be shocked if I don’t start using it!

    I’ve been spending more and more time thinking about and working on optimization strategies of late, and DJ’s presentation really brought it all together. This post isn’t going to be a lengthy explanation of optimization and testing … because I’m really not qualified to expound on the subject (yet). But I will jot down a few takeaways from DJ’s presentation that hit home the most with me:

    • Testing (and targeting) doesn’t typically deliver dramatic step function improvements, so don’t expect it to — it delivers incremental improvements over time that can add up to significant gains
    • (Because of the above) Testing isn’t a project; it’s a process — it’s not enough to plan out a test, run it, and evaluate the results; rather, it’s important to develop the organizational capabilities to always be testing
    • “Testing” without “targeting” is going to deliver limited results — while initial tests may be on “all visitors to the site,” it’s important to start segmenting traffic and testing different content at the segment level as quickly as possible

    Good stuff.

    In other news, I’ve got a few additional bullet points:

    • Our next Web Analytics Wednesday is tentatively slated to be a happy hour only (unsponsored or with a limited sponsor) on a Tuesday. If you don’t already get e-mail reminders and you’d like to, just drop me a note and I’ll add you to our list (tim at this domain)
    • The Ohio Interactive Awards are fast approaching! This event, started up by Teambuilder Search, huber+co. interactive, and 247Interactive, is shaping up to be a great one on April 29th at the Arena Grand Movie Theater (Resource Interactive is sponsoring the event happy hour)
    • The TechLife Columbus meetup.com group continues to grow and thrive, with over 1,500 members now — it’s free, and it’s a great way to find meetups and people who are involved in high tech and digital in central Ohio

    It’s been a lot of fun to watch social media get put to use in central Ohio and make it so easy to find interesting people with shared interests. I’ve certainly gotten to know some great people over the past couple of years with a relatively low investment of my time and energy, and I’m a better person for it!

    Conferences/Community

    WAA Elections: Accountability, Inclusion, and Value

    Those of you in the Web Analytics Association are likely aware that the voting for the 2010 – 2012 Directorships starts sometime today. My understanding is that ballots will arrive in the mail and each active member will be able to vote for up to five nominees.

    While I have never seriously considered running for the WAA board, I do pay special attention to the board’s make-up, primarily because the Board of Directors is the shepherd of the Web Analytics Forum at Yahoo Groups that I founded in 2004 and “donated” to the Association back in 2006, and because I have a standing partnership with the Association around the Web Analytics Wednesday social network that I founded with Board member June Dershewitz back in 2005.

    This year there are some really amazing people running for election, and there are a lot of them. Because the field is crowded, I wanted to take a few minutes to call your attention to three in particular who all have the type of long-standing investment in the web analytics community, passion, and experience required to succeed on the Web Analytics Association Board of Directors. If elected, I am confident that John, June, and Steve will push the WAA to provide more value, show more leadership, and create more opportunities for the larger web analytics community across the globe.

    • John Lovett: Obviously I plan to vote for my business partner John Lovett, despite the fact that I tried repeatedly to talk him out of running! John is a long-time member of the web analytics community and has an incredible depth and breadth of knowledge about the industry. More than anything, John gets my vote because he has demonstrated time and time again the ability to build consensus around difficult ideas despite the presence of hot-headed contributors (yes, I do mean me). This, combined with his bold statement “My efforts will be focused to stop making excuses and start delivering value …”, resonates clearly with me, as it should with any member who has ever struggled to justify the cost of membership or time spent volunteering. I believe that a vote for John Lovett is a vote for accountability on the WAA Board.
    • June Dershewitz: June has been a friend for an awfully long time and is someone I have grown to trust, respect, and listen to (which I am not particularly good at on some matters). June and I worked out the partnership between the WAA and Analytics Demystified’s Web Analytics Wednesday social network to provide the WAA access to our global network of web analytics practitioners, all of whom are potential members of the WAA, and despite small-minded opposition from within the current board, June continues to champion this relationship, as it clearly delivers value to the WAA member rolls. I believe that a vote for June Dershewitz is a vote to make the WAA more inclusive, not more exclusive.
    • Steve Jackson: Steve, as anyone who has ever met him knows, is delightful to work with and a brilliant analytics practitioner and consultant. More importantly, Steve is one of a very small number of web analytics bloggers willing to address difficult and uncomfortable subjects head-on: honestly, objectively, and with a passion that is sometimes rare out there. Hell, anyone willing to run for the WAA Board after publicly stating “I don’t think that the current membership fee is worth the money …” deserves to be elected just so he can put his money where his mouth is. I believe a vote for Steve Jackson is a vote to change the way the WAA provides value to its members around the globe.

    Obviously there are an awful lot of other good people running including Jodi, Dennis, Jim, Eric, Lee, Alex, Sean, and more I’m surely forgetting right now! Hopefully those of you in the WAA will spend some time this week taking a look at all of the nominees and thinking about your own relationship with the Association and how these “elected officials” influence what the Web Analytics Association will ultimately become for you.

    Best of luck to everyone running and DON’T FORGET TO VOTE!

    Adobe Analytics, Analytics Strategy, General, Reporting

    Integrating Voice of Customer

    In the Web Analytics space, we spend a lot of time recording and analyzing what people do on our website in order to improve revenues and/or user experience. While this implicit data capture is wonderful, you should be supplementing it with data that you collect directly from your website visitors. Voice of Customer (VOC) is the term often used for this, and it simply means asking your customers to tell you why your website is good or bad. There are two main ways that I have seen people capture Voice of Customer:

    1. Page-Based Comments – Provide a way for website visitors to comment on pages of your site. This is traditionally used as a mechanism to get direct feedback about a page design, broken links or problems people are having with a specific page. Unfortunately, most of this feedback will be negative so you need to have “thick skin” when analyzing this data!
    2. Website Satisfaction – Provide a way for visitors to rate their overall satisfaction with your website experience (vs. specific pages). This is normally done by presenting visitors with an exit survey that asks standard questions, which can tell you how your website is doing and compare your site against your peers’ sites.

    There are numerous vendors in each of these spaces and the goal of this post is not to compare them, but rather discuss how you can integrate Voice of Customer data into your Omniture SiteCatalyst implementation. In this post, I am going to focus on the first of the aforementioned items (Page-Based Comments) and specifically talk about one vendor (OpinionLab) that I happen to have the most direct experience with (their headquarters was a mile from my home!). The same principles that I will discuss here can be applied to all Voice of Customer vendors so don’t get hung up on the specific vendor for the purposes of this post.

    Why Integrate Voice of Customer into SiteCatalyst
    So given that you can see Voice of Customer data from within your chosen VOC tool, why should you endeavor to integrate Voice of Customer and your web analytics solution? I find that integrating the two has the following benefits:

    1. You can more easily share Voice of Customer data with people without forcing them to learn [yet] another tool. People are busy, and you are lucky if they end up mastering SiteCatalyst, let alone learning how to use OpinionLab, ForeSee Results, etc …
    2. Many Voice of Customer tools charge by the user so if you can port their data into SiteCatalyst, you can expose it to an almost unlimited number of users.
    3. You can use Omniture SiteCatalyst’s date and search filters to tailor what Voice of Customer data each employee receives.
    4. You can divide Voice of Customer metrics by other website traffic/success metrics to create new, interesting KPIs.
    5. You can use Omniture SiteCatalyst Alerts to monitor issues on your site.
    6. You can use Omniture Discover to drill deep into Voice of Customer issues.

    I hope to demonstrate many of these benefits in the following sections.

    How to Integrate Voice of Customer into SiteCatalyst
    So how exactly do you integrate Voice of Customer data into SiteCatalyst? For most VOC vendors, the easiest way to do this is by using Omniture Genesis. These Genesis integrations are already pre-wired and make implementation a snap (though there are cases where you may want to do a custom integration or tweak the Genesis integration). You can talk to your Omniture account manager or account executive to learn more about Genesis.

    Regardless of how you decide to do the implementation, here is what I recommend that you implement (a minimal code sketch follows the list):

    1. Set three custom Success Events for Positive Page Ratings, Negative Page Ratings and Neutral Page Ratings. These Success Events should be set on the “Thank You” page after the visitor has provided a rating.
    2. Pass the free-form text/comment that website visitors enter into an sProp or eVar. If they do not leave a comment, pass in something like “NO COMMENT” so you can make sure you are capturing all comments. If you are going to capture the comments in an sProp, I recommend you use a Hierarchy variable, since those have longer character limits than normal sProps, which can only capture 100 characters.
    3. Pass the actual page rating (usually a number from 1 to 5) into an sProp. I also recommend a SAINT Classification of this variable such that you classify 1 & 2 as Negative, 3 as Neutral, and 4 & 5 as Positive. This classification should take less than 5 minutes to create …
    4. Use the PreviousValue plug-in to pass the previous page name to an sProp.
    5. Create a 2-item Traffic Data Correlation between the Previous Page (step #4) and Page Rating (step #3). This allows you to see what page the user was on when they submitted each rating.
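    Here is a minimal sketch of what the rating “Thank You” page code might look like after the five steps above. The event and variable numbers are hypothetical placeholders, and voc_rating/voc_comment stand in for whatever values your VOC vendor exposes on the page:

    // Hedged sketch only -- map these hypothetical slots to your report suite
    // Step 1: set the appropriate Success Event based on the rating
    s.events = (voc_rating >= 4) ? "event10"   // Positive Page Rating
             : (voc_rating <= 2) ? "event11"   // Negative Page Rating
             : "event12";                      // Neutral Page Rating
    // Step 2: capture the free-form comment, or "NO COMMENT" if none was left
    s.prop10 = voc_comment ? voc_comment : "NO COMMENT";
    // Step 3: capture the raw 1-5 rating (classified later via SAINT)
    s.prop11 = "" + voc_rating;
    // Step 4: capture the previous page name via the PreviousValue plug-in
    s.prop12 = s.getPreviousValue(s.pageName, "s_ppn");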

    All in all, this is not too bad. A few Success Events and a few custom variables and you are good to go. The rest of this post will demonstrate some of the cool reports you can create after the above implementation steps are completed.

    Share Ratings
    As I mentioned previously, you [hopefully] have users that have become familiar with the SiteCatalyst interface. This means that they have Dashboards already created to which you can add a few extra reportlets. In this first example, let’s imagine that you want to graphically represent how your site is doing by day with respect to Positive, Negative, and Neutral ratings. To do this, all you have to do is open the Classification version of the Page Rating report (can be an sProp or eVar – your call) and switch to the trended view. You should have only three valid values, and I like to use a stacked graph type showing percentages to see how I am doing each day, as shown here:

    This graph allows me to get a quick sense of how my site is doing over time and can easily be added to any Dashboard.

    You can also mix your newly created Voice of Customer Success Events with other SiteCatalyst metrics. For example, while you could look at a graph/trend of Positive or Negative Comments by opening the respective Success Events, a better way to gauge success is to divide these new metrics by Visits to see if you are doing better or worse on a relative basis. The following graph shows a Calculated Metric for Negative Comments per Visit so we can adjust for traffic spikes:
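    If you are building that calculated metric yourself, the definition is just a simple division; using the hypothetical event numbering from the implementation sketch above, it would look something like:

    Negative Comments per Visit = [event11 (Negative Page Ratings)] / [Visits]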

    Find Problem Pages
    Another benefit of the integration is that you can isolate ratings for specific pages. The first way to do this is to see which pages your visitors tend to rate positively or negatively. To do this, you can open the Rating variable report (or a Classification of it, as shown below) and break it down by the Previous Page variable to see the pages that most often had negative ratings:

    This will then result in a report that looks like this:

    Alternatively, if you want to see the spread of ratings for a specific page, all you need to do is find that page in the Previous Page report and break it down by the Rating variable (or its Classification) as shown here:

    Share Comments
    As noted above, if you capture the actual comments that people leave in a variable, you will have a SiteCatalyst report that captures the first 256 characters of the comments visitors enter. This report duplicates scheduled reports from your Voice of Customer vendor in that it allows you to share all of the comments people are leaving with your co-workers. However, by doing this through SiteCatalyst, you gain some additional functionality that some VOC vendors don’t provide:

    1. You can create a Traffic Data Correlation between the Comments variable and the Previous Page variable so you can breakdown comments for a specific page. Therefore, if you have users that “own” specific pages on the website, you can schedule daily/weekly reports that contain comments only for those pages so they don’t have to waste time reading all of the comments left by visitors.
    2. You can use the Search filter functionality of SiteCatalyst to scan through all of the visitor comments looking for specific keywords or phrases that your co-workers may be interested in. In the example below, the user is looking for comments that mention the words “slow” or “latent” to be notified of cases where the visitor perceived a page load speed issue:

    Set Alerts
    Another cool thing you can do with this integration is set automated Alerts in SiteCatalyst so you can be notified when you see a spike in Negative Comments on your site. This allows you to react quickly to broken links or other issues before they affect too many visitors (and helps avoid #FAIL posts on Twitter!). Here is an example of setting this up:

    Review Problem Visits using Omniture Discover
    Finally, if you have access to Omniture Discover, after you have implemented the items above, you can use Discover to do some amazing things. First, you can use the unlimited breakdown functionality to zero in on any data attribute of a user that is complaining about your site. For example, if you had visitors complaining about not being able to see videos on your site, you might want to see their version of Flash, Browser, OS, etc… or even isolate when the problem took place as shown here:

    Additionally, you can use Discover to isolate specific comments and watch the exact visit that led to each comment. This is done through a little-known feature of Discover called the “Virtual Focus Group.” This feature allows you to review sessions on your site and see the exact pages people viewed and some general data about their visit (e.g., browser, geolocation, etc.). While not as comprehensive as tools like ClickTale, it is good enough for some basic analysis. Here is how to do this:

    1. Open Discover and find the comment you care about in the custom sProp or eVar report
    2. Right-click on the row and create a Visit segment where that comment exists
    3. Save the segment in a segment folder
    4. Open the Virtual Focus Group (under Pathing in Discover)
    5. Add your new segment to the report by dragging it to the segment area
    6. Click “New Visit” in the Virtual Focus Group
    7. Click on the “Play” button to watch the visit

    Now you can watch how the user entered your site, what pages they went to, and exactly what they did prior to hitting the Voice of Customer “Thank You” page.

    Final Thoughts
    So there you have it, a quick review of some cool things you can do if you want to integrate your chosen Voice of Customer tool and Omniture SiteCatalyst. If you are interested in this topic, I have written a white paper with OpinionLab that goes into more depth about Voice of Customer Integration (click here to download it). If you have done other cool things, please let me know…

    Analytics Strategy, General

    Why Google is really offering an opt-out …

    When I first saw the news of Google’s opt-out browser plug-in spread around Twitter I thought “hmm, I wondered when we’d see this” and moved on, since opt-out is more or less a non-issue — basically because, in the grand scheme of things, nobody really opts out. For all the hand-wringing and navel-gazing people do on the subject of privacy online, I have never, ever seen any data indicating that web users actively opt out of tracking in significant numbers.

    Never.

    If you have such data, bring it on, as I’d love to see it. But in my experience the only people truly and actively interested in browser- or URL-based opt-out for tracking are privacy wonks, extreme bit-heads, and some Europeans. The privacy wonks and bit-heads are who they are and are unlikely to ever change; the Europeans have privacy concerns for other reasons, but I will defer to Aurelie to try and make heads or tails of what those reasons are.

    Still, it has been interesting to see some bright folks like Forrester’s Joe Stanhope offer explanations about why Google might be doing this and what the ramifications might be. And it has been less interesting to see some of the fear-mongering and hyperbole offered by Marketing Pilgrim’s Andy Beal in his post “Why your web traffic is going to nosedive thanks to Google,” although I found that Econsultancy balanced things out with their straightforward and tactful post “Will opt-out threaten Google Analytics?”

    What Andy, Patricio, and to some extent Joe apparently didn’t notice is that Google Analytics is about to make a big, big push into Federal Government web sites, and this browser-based opt-out is just a check-box requirement to satisfy the needs of said privacy wonks, who for better or worse have the Administration’s ear (or some body part, you choose!).

    Yep, the browser opt-out isn’t actually for anyone … except for perhaps the Electronic Freedom (sic) Foundation and their ilk. Google is somewhat brilliantly checking a box now so that when the Office of Management and Budget (OMB) releases all-new Federal guidelines for browser cookie usage later this year, any Federal site operator who wants to can immediately dump their existing solution and go directly to Google Analytics.

    You do remember that Google Analytics comes at the amazing, deficit-reducing price of ABSOLUTELY FREE. Even a Republican can get his or her arms around that price tag, huh?

    You betcha.

    “Hey wait,” you say, “what about the fact that Federal web sites will probably never get permission to track visitors over multiple sessions?” Good point, except did you know you can override the Google Analytics _setVisitorCookieTimeout() and _setCampaignCookieTimeout() settings and set their values to zero (“0”), which effectively converts all Google Analytics tracking cookies to session-only cookies?

    Yep.
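    A minimal sketch of what that looks like using the standard ga.js async snippet (the account ID is a placeholder; the timeouts are in milliseconds, so zero means session-only):

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);  // placeholder account ID
    // Zero-millisecond timeouts convert the visitor and campaign cookies
    // into session-only cookies that expire when the browser closes
    _gaq.push(['_setVisitorCookieTimeout', 0]);
    _gaq.push(['_setCampaignCookieTimeout', 0]);
    _gaq.push(['_trackPageview']);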

    Not to mention that the little birds who sing songs in only hushed tones suggest that OMB is about to take a much more reasonable stance on visitor tracking anyway. This is not a done deal, but the situation that most Federal site managers work under today — one where many sites are more or less forced to use out-of-date log file analyzers and most are hamstrung in their ability to analyze multi-session behavior — seems to fly directly in the face of President Obama’s efforts to make government more transparent and effective.

    I said as much just after he was elected, and then I said it again when I pointed out that Barack Obama should not fear browser cookies! Federal managers need modern, easy-to-use tools to improve the overall quality of government web sites.

    Now, I could be wrong about all of this — I am human, and like Joe Stanhope I have not heard word-one from Google about the opt-out app — but I am pretty good at connecting dots and these are big, obvious dots:

    1. Google loves data
    2. Feds have tons of data
    3. Feds have requirements necessitating privacy controls
    4. Google builds privacy controls
    5. Google gets Feds data

    This is actually pretty brilliant of Google if you think about it. Assuming you’re with me in my belief that Google Analytics isn’t about AdWords or Analytics or anything other than Google’s desire to have all the world’s data, then you’ll surely see that providing Federal web site operators a web analytics solution that simultaneously solves a multitude of analysis problems AND saves money is, well, pretty freaking brilliant.

    Don’t take my word for it. Here’s a list of sites in the .gov domain that people are tracking using our free, browser-agnostic web analytics solution discovery tool. We have about 100 sites total, the majority of which don’t appear to have any kind of tracking code at all, and of these:

    • 12% are using Google Analytics exclusively already
    • Another 3% are using Google Analytics with Omniture (1%) or Webtrends (2%)
    • 6% are using Omniture (one of which, GSA.gov, in tandem with Webtrends)
    • 15% are using Webtrends (including GSA.gov in tandem with Omniture)
    • 63% appear to have no hosted analytics of any kind

    If I’m right, the evidence will be obvious as more of these “no hosted analytics” sites begin to carry Google Analytics tags. Sites like Census.gov, the EPA, FCC, FEMA, HUD, and even the FTC might all start to take advantage of Google’s largesse (and willingness to provide a browser-based opt-out, don’t forget that!).

    What do you think?

    As always I welcome your thoughts, observations, reactions, and even anti-tracking-pro-privacy rants. If you are a Federal site manager with insight to share but unable to voice your position publicly, then out of respect I am happy to have you post anonymously as long as you provide a valid email address that I will confirm and then convert to “anon@anonymous.gov” to protect your identity.

    Analytics Strategy, Social Media

    Facebook Analytics: Part II – Vendor Solutions

    Earlier this week we described the Facebook Analytics Ecosystem and some of the ways in which businesses can go about measuring components of the social networking empire. Today, we reveal two key pieces of additional information that will help organizations: a. Understand the benefits of measuring Facebook (a necessary element in forming social marketing business objectives) and b. Identify vendors that offer measurement solutions for Facebook.

    DISCLOSURE: Analytics Demystified works with many web analytics vendors including some of those discussed in this post. We rarely disclose our clients publicly but for the sake of transparency wanted the reader to know that we do have a financially beneficial relationship with three of these vendors and a mutually beneficial relationship with all four.

    Three Business Benefits of Measuring Facebook

    To demystify the ways in which businesses can measure, understand and capitalize on the growing Facebook phenomenon, we identified three pillars of Facebook measurement. These three pillars identify the “what”, the “who” and the “cha-ching” of marketing within Facebook.

    1. Observe Interactions: What are people on Facebook doing? This essential component of Facebook measurement includes the ability to track anonymous user information such as visits, friends, comments, likes, and exposure across pages, custom tabs, and applications. It’s not as easy as it sounds.

    2. Understand Demographics: Who are all these people on Facebook? In addition to knowing what they do, it’s important to know who Facebook users are to segment the massive population on attributes such as user ID, gender, age, education level, work experience, marital status or geographic information. Facebook is very protective of user privacy and many limitations apply here as well.

    3. Impact Conversions: How can businesses cash in on Facebook marketing? This includes the ability to associate impressions and exposure within Facebook back to conversion events on external sites, as well as the ability to target advertising within Facebook based on observations and demographic data while maintaining the ability to track viewthroughs and conversions offsite. This too requires extensive development and fancy footwork, as we’ll explain shortly.

    Vendor Capabilities For Measuring Facebook

    As mentioned in Part I of our series on Facebook Analytics, several of the major web analytics vendors are vying for position to deliver Facebook measurement capabilities. While we have not yet reached Facebook directly for comment, we concluded that no single vendor is likely to gain a long-term competitive advantage over the rest of the market for measuring Facebook. The partnership established between Omniture and Facebook does provide some short-term gains, because Omniture is able to leverage a direct relationship with Facebook developers to fully utilize data provided by existing APIs; still, all of the vendors interviewed for this research informed us that they were actively engaged in talks with Facebook. Further, Analytics Demystified strongly believes that it is not in Facebook’s best interest to lock into exclusive vendor agreements or partnerships because of the risk of alienating the significant portion of its business population that uses disparate tools.

    Here’s what we know:

    Facebook Insights

    Facebook Insights offers aggregate views of behavioral and demographic data across a number of areas within the ecosystem. The Wall Insights show behavioral and demographic info on unique Fan interactions and all Fan visits. While the offering is great for the price, one thing we heard repeatedly is that the data is sampled and slow in coming, sometimes delayed up to three days.

    To Facebook’s credit, they appear to be constantly working to improve the quality of the insights they provide. Mashable broke unofficial news again this morning by reporting that Facebook is offering more analytics detail to page admins through weekly email alerts. These reports reveal new fan counts, page views, comments, and likes over the week.

    Most useful for: Companies unwilling to invest in “pro tools” for Facebook measurement. Facebook Insights does offer value, so if you’ve got no other measurement prospects, then what they’re offering is better than no data at all.

    Coremetrics

    Coremetrics’ Facebook announcement described their ability to determine Facebook’s influence on site visits and conversions, which melds nicely with their impression attribution tool. This slick capability allows Coremetrics to pass vendor, category, placement, and item data collected from within Facebook back to the Coremetrics Analytics interface. They do this using image tags and claim that caching happens infrequently, yet they circumvent these occurrences with cache busters.
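    For readers unfamiliar with the technique, a cache buster is simply a throwaway random parameter that makes each image request unique so the browser cannot serve it from cache. A generic illustration (not Coremetrics’ actual tag format, which we have not seen):

    // Generic cache-busting image request -- not any vendor's actual tag
    var img = new Image();
    img.src = "http://data.example.com/imp.gif?placement=custom_tab"
            + "&cb=" + Math.floor(Math.random() * 1e10);  // unique per request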

    Server-side rendering of image tags makes it possible to segment and report on attributed data. This allows Coremetrics’ users to see how interaction with specific tabs led to web site engagement and conversions. While technically possible for them, Coremetrics hasn’t focused on reporting user interactions within Facebook in their interface. Instead, they’ve homed in on the ability to understand how the social networking site acts as a feeder channel to their customers’ primary web properties.

    Most useful for: Companies heavily focused on Facebook as an advertising channel. Coremetrics is a good choice for clients that are not heavily invested in Facebook, but are more interested in understanding how it complements their other online acquisition marketing efforts.

    Omniture

    Omniture’s view on measuring Facebook is simple: understand your audience > target them with advertising > optimize the message. They enable this by focusing on the custom tabs, apps, and ads within Facebook. Their most recent announcements touted their partnership with Facebook to enable ad creation and demographic targeting directly within their SearchCenter Plus solution. This works through an Omniture Genesis integration that also enables even more granular behavioral and demographic data collection. We previewed each of these solutions in working demos, and both are scheduled for general release later this year (with the Genesis offering likely tied to the rumored announcements at Facebook f8). Today, clients are targeting product ads using limited profile information, specifically gender.

    Omniture has also developed a “Facebook JavaScript” (FBJS) measurement library that allows them to track behavioral data natively within Facebook, and despite competitors’ claims, Omniture pointed out that they’ve been doing this since May of 2009. They also deploy output and image tags, which occurs less frequently, and for permissioned applications Omniture is collecting a bevy of demographic data that will appear within Discover for slice-and-dice analysis. They’ve also created default segments within Discover showing pathing reports for: visitors acquired from Facebook (conversion); visitors from Facebook (impressions); and known Facebook users (user association).

    Most useful for: Companies that have not yet fully determined what their approach towards Facebook will be. Given the breadth of their capabilities, Omniture is a good choice for companies looking to better understand how users interact with the platform and the demographic make-up of their audience in Facebook, and with the SearchCenter Plus release, Omniture has the potential to dramatically improve customers’ ability to purchase laser-targeted advertising on the platform.

    Unica

    Unica has been noticeably quiet during the Facebook Analytics Wars but we’re not shy so we called ‘em out and asked them to weigh in on their capabilities. It turns out that they too have been measuring Facebook for some time using both dynamic and static image tags. They’re collecting strictly according to Facebook’s published rules but include unique user IDs and other attributes such as friend counts from visitors to their custom tabs. They also get app data for average viewing time, views, visits and visitors – all passed to Unica’s NetInsight interface.

    Unica is taking a very conservative approach to the demographic data and, like some others, is waiting for a ruling from Facebook before developing capabilities in that area. While that capability waits in the wings, Unica’s longer-term vision may include integration with their new search technology and conversion pathing visualizations.

    Most useful for: Companies using Unica’s other Affinium products and companies needing in-house analytical capabilities. Unica customers looking to develop or advertise within Facebook should explore the vast customization possibilities with Unica directly.

    Webtrends

    Webtrends is aggressively working to deliver social analytics solutions to their customers and is also walking the Facebook talk. Their own corporate Facebook fan pages are the most developed of all the vendors interviewed, and they use these pages to test concepts and showcase capabilities. Webtrends also takes a conservative approach to data collection and privacy by adhering strictly to the letter of Facebook law, thus collecting and displaying fewer demographic attributes.

    In their own words, Webtrends has been “throwing the book” at the Facebook API to obtain as much data as the published documentation allows, which includes: views, visits, bounce rates, and time on site for Facebook shares, ads, apps, and custom tabs. Webtrends disputes the long-term feasibility and accuracy of image tags and cache-busting techniques within Facebook, and they’ve responded by developing a proprietary solution that uses a data call to pass parameters from the data collection API. This method captures all the typical data as well as Flash, pop-ups, and other custom fields, with the potential to do a whole lot more if the data collection restrictions ease. But before you go snooping for details, Webtrends informed us they have filed a patent for this new method of data collection.

    Most useful for: Companies that want to gain deep visibility into interactions within the Facebook ecosystem. Webtrends has the potential to be very useful for social media marketers who are actively developing and tracking social media behavior in Facebook.

    Questions to ask your vendor…

    While some aspects of these Facebook measurement solutions have been around for a while, they are still very much nascent. Nearly all of the capabilities described above – as best we can tell – are deployed via customized consulting engagements with each vendor and likely will be for the foreseeable future. Keep this in mind as you think about pricing, development resources and timing.

    Also, because Facebook is changing its rules and these solutions are largely custom consulting jobs, please don’t even think about buying anything before you see it in action. Have the vendor demonstrate the functionality you’re looking for using live customer data. While mock data and in-house examples are fine for some purposes, ask to see real-world data or decide whether you want to be the test subject.

    Additionally, here are a few questions that we recommend you pose to vendors when seeking out a Facebook Analytics solution:

    1. How long have you had an active measurement solution in place for Facebook?
    2. How many active customers do you have using your Facebook measurement capabilities?
    3. Can we speak with two or three of your customers actively using your Facebook measurement capabilities?
    4. Do you adhere to Facebook’s published data collection, storage and privacy regulations?
    5. Are you using your solution to measure your own Facebook efforts? Can we see your data?
    6. Do you have documented PHP and FBJS libraries that we are able to deploy on our own?
    7. How long, on average, do your Facebook measurement deployments take start to finish?
    8. Do I need to be a customer to purchase your Facebook measurement solution?
    9. Which Facebook profile data can you import into your application? Can we see it in your application?
    10. Which of your solutions are required to leverage your Facebook measurement solution?

    As always I welcome your comments, thoughts, and opinions about this exciting aspect of digital measurement. And if you think we got something wrong, please do let us know!

    Conferences/Community

    Analysis Exchange ALPHA Nearing Completion

    You know how sometimes in life a plan looks good on paper but when you put it into action things don’t work out? That happens to me sometimes … but I gotta say that is not happening with The Analysis Exchange! Our Alpha testers are starting to complete their projects and I am so excited I wanted to share the feedback I got this morning from our mentor + student + organization trio.

    First, from the mentor:

    “My student presented her initial findings today to the organizational partner and our project lead was THRILLED. She loved the initial findings and asked if we could present the final deliverable to her CMO! I couldn’t have asked for a better outcome of the first alpha project. The student discovered some very compelling findings and she’s getting access to the c-level to showcase her analysis skills and discoveries. It will likely guide the way that they develop for their mobile users.

    Congratulations! I’m so looking forward to helping you build a high impact final presentation and hearing you deliver.”

    The student responded with this:

    “I can’t tell you what a great feeling it is to have someone genuinely get excited about my findings and recommendations, especially on my very first project since finishing the UBC program. The reassurance that what I learned in school is transferable to the real world is invaluable.

    My mentor was a great mentor and teacher, taking the extra time to educate me on Google Analytics, what data to focus on and where to find it. I am excited about the presentation we will present to our client. This Analysis Exchange experience has been extremely rewarding and educational. I feel that students will get the experience they need to enter the analytics world and be an effective and relevant contributor. I am looking forward to my next project!”

    (You can perhaps imagine that I personally am floating on cloud nine at this point!)  The organizational lead wrote in and said:

    “I am just so excited about the things our student discovered. I can’t wait to share it with my team here, and see if there’s anything, no matter how small, we can do to improve the site. Doing this kind of analysis was something I just never had the time for, and I knew that it was important to find out more about our mobile audiences. The student is going to prepare a brief presentation that she is going to share with the team here, hopefully next week. I’m really looking forward to seeing all the data!

    Thank you for giving us the opportunity to participate in this project. It has been a very valuable use of time for me, and hopefully for the mentor and student as well.”

    Will someone pinch me please!! We have two more Alpha projects due to complete this week and, assuming I’m able to get their feedback coded into the site, we should be ready to launch into a more widespread BETA effort starting right around April 1st.

    Thanks to everyone who is signing up and helping to spread the word about The Analysis Exchange. Keep up the great work!

    Adobe Analytics

    Quick Tip: Discover Time Saver!!

    Thanks to Laura MacTaggart, I just learned an awesome Discover time-saver I felt obligated to share.

    Have you ever needed to pull a bunch of different successive reports in Omniture Discover? Maybe you want to look at Pages, then Visit Number, then Campaigns, but for each you want to see the same metric columns. If you are like me, you open a new report and then have to re-add all of the metrics that you want to see. If you are looking at a lot of reports this is very tedious and can drive you crazy. However, I just learned of a “hidden” Discover feature that can avoid all of this (maybe you all knew about this, but I certainly didn’t!). Here is what you do:

    1. Open the first report you want to analyze
    2. Add the metrics you want to see
    3. When you are ready to see the same metrics for a different report, simply right-mouse click on the name of the report (above the rows of values as shown below – in this report you would right-mouse click on “Pages (Traffic)”)
    4. Select the “Change Report” option and select the new report that you want to see!

    Presto! You are now looking at a different variable report using the same metrics without having to re-add all of your metrics…Coolness!

    Adobe Analytics

    Advanced Percent of Page Viewed

    [Warning: Some elements of this post are meant for advanced SiteCatalyst users so read at your own risk!]

    Last week, Ben Gaines (@OmnitureCare) wrote a great blog post about the getPercentPageViewed JavaScript plug-in, which he had demonstrated in his Summit presentation. This plug-in is very fun, and I have enjoyed using it. While this topic is fresh in people’s minds, I wanted to throw out some additional/advanced uses of this plug-in in case they are relevant to those out there using it or thinking about implementing it.

    Beyond % of Page Viewed
    When I first started using the aforementioned plug-in, my goal was, like most people’s, to see the total % of each website page that my visitors viewed. I found the Browser Height report to be too limiting (it shows pixels, not % of page viewed), and this plug-in helped provide some additional insight. However, after using the plug-in and correlating % of Page Viewed to pages, I realized that there were a few more things that could be done with this concept. The next question that popped into my head was “I wonder what % of the page, for each website page, can be attributed to scrolling?” After all, the % Page Viewed plug-in really only shows you how much of the page visitors saw in total, not how much of the page they initially saw vs. saw because they scrolled. This line of thinking drove me to ask what else could be learned from this plug-in, such as:

    1. Total % of Page Viewed by Page Name
    2. % of the page that users tend to scroll by Page Name
    3. Initial % of Page visitors tend to see before they start scrolling (to get around the pixel limitation of the Browser Height report)

    As I mentioned earlier, Ben’s post covered item #1 above, so in this post, I will show you how to solve items #2 and #3.

    How Much Are Users Scrolling on each Page?
    To see how much visitors are scrolling on each page, I asked my developer if he could calculate, via the plug-in, the actual % of the page that users scroll. Through his supreme awesomeness, he told me that he could. That meant that we would have the total % of the page that was viewed and the % representing scrolling (both available on the next page, as described by Ben). From there, we decided that we would concatenate the two values into an sProp. This means that the sProp Ben described would be slightly different such that instead of having raw number values (i.e. 100, 95, 56, etc …), it would have two values concatenated together, as shown below: [TOTAL % OF PAGE VIEWED]|[TOTAL % VISITOR SCROLLED]

    (Before you get scared, bear with me as I will show you how to deal with these strange looking values in the next few paragraphs!)

    Once you have these concatenated values in the sProp, it is time to classify them using SAINT Classifications. To do this, you create the following classifications of this sProp:

    • Total % Page Viewed. This is the first of the two values and this classification replicates what Ben blogged about.
    • Scrolling %. This is the second value and represents the % of the page the visitor scrolled to see.

    When you are done with this, you can see the report Ben showed in his post, but can now see the following additional report:

    Through this clever classification, you can see how often people on your website tend to scroll. In this case, it looks like 73.2% of visitors don’t reach for that scroll bar! However, as Ben stated, all of this data is more valuable when viewed by Page Name. Since this Scrolling % report is really a classification of an sProp that is correlated to the Previous Page Name sProp, any classifications of it will also be correlated to Previous Page Name (if you understand that in one reading you are a true SiteCatalyst ninja – if not, re-read it a few times). This means that you can break the above report down by Page Name, or in other words, you can look at any page on your website and determine how often visitors are scrolling. To do this, simply open your Previous Page Name report, find the page you care about, and break it down by the Scrolling % (classification of the Percent of Page Viewed sProp). In the following example, we can see how much visitors scroll when looking at the Home Page:

    In this case it looks like about half of the time visitors are scrolling on the Home Page. Finally, if you want, you can bucket these Scrolling percentages into more meaningful groupings by adding additional SAINT classification columns as Ben described.

    What % of the Page Do Visitors See Upon Page Load?
    So the other thing I wanted to see was the percentage of the page that visitors see before scrolling. I don’t like looking at Browser Height pixels, but would rather simplify things and see the exact % of the page my visitors are seeing – period! Unfortunately, there is not an easy way to do this in SiteCatalyst. However, when I was looking at the above items in this plug-in, I realized that the answer to this question was a side-benefit of doing the SAINT Classification shown above. Think about it…we have the Total % of the Page Viewed and the Total % that visitors scrolled, both concatenated in an sProp as shown above. If you subtract these two values, you are left with the percent of the page that visitors saw before scrolling (in other words, upon page load)! For example, if the value in the sProp is “58|10” (see first row of example above), then we know that the visitor saw a total of 58% of the page and they scrolled 10% of it, so they must have seen 48% of it initially (good thing web analysts like math!).
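    To make the classification concrete, here are a couple of hypothetical rows showing how the .TAB file carries all three classifications alongside each concatenated key (columns are tab-delimited in the actual file):

    Key       Total % Page Viewed   Scrolling %   Initial % of Page Viewed
    58|10     58                    10            48
    100|45    100                   45            55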

    Therefore, when you are classifying the sProp shown above, you can add a new classification of the Percent of Page Viewed sProp named “Initial % of Page Viewed” and populate it by simply subtracting the two values (no new data to collect!). When you do this, you end up with a new report that shows you the % of the page visitors tend to initially see, like this:

    Here we can see that, in general, about 60% of visitors are essentially seeing the entire page without having to scroll. Again, we can group these values into more meaningful buckets using SAINT, but the real power is seeing this Initial Page Viewed % classification by a specific Page Name. Again, using the Home Page as an example, the following report shows how much of the Home Page most visitors see before scrolling:

    What’choo Talkn ‘Bout, Willis?
    OK … I know this sounds complicated, but really all you need to do is slightly modify the code (see below) that Matt Thomas (of Omniture Consulting) created, and that Ben alluded to in his post, to add one concatenated value (% Scrolled). The majority of the hard work is in building the SAINT classification file to get all of these cool new reports. Well, the good news there is that I have already created the file, which you can use as a starting point for these extra reports. All you have to do is download it by clicking here (save the .TAB file to your hard drive). Simply save this file and add the values to your own SAINT template after you have created the Classifications mentioned in this post.

    So there you have it…a few additional ideas for you to ponder while you have % of Page Viewed on the brain… If you have other ideas or questions, please leave a comment here…Thanks!

    FOR OMNITURE GEEKS ONLY!
    Here is the “enhanced” JavaScript code that modifies the great code that Matt Thomas (from Omniture Consulting) created and is in the Omniture KnowledgeBase. This is not currently supported by Omniture so use at your own risk!

    /*
    * Plugin: getPercentPageViewed v1.x
    * This code has been modified from the original version distributed
    * by Omniture and will not be supported by Omniture in any way
    */
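    // getPercentPageViewed: read on the next page view; returns the stored
    // "viewed|scrolled" value and resets the s_ppv cookie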
    s.getPercentPageViewed=new Function("",""
    +"var s=this;if(typeof(s.linkType)=='undefined'||s.linkType=='e'){var"
    +" v=s.c_r('s_ppv');s.c_w('s_ppv',0);return v;}");
    s.getPPVCalc=new Function("",""
    +"var dh=Math.max(Math.max(s.d.body.scrollHeight,s.d.documentElement."
    +"scrollHeight),Math.max(s.d.body.offsetHeight,s.d.documentElement.of"
    +"fsetHeight),Math.max(s.d.body.clientHeight,s.d.documentElement.clie"
    +"ntHeight)),vph=s.d.clientHeight||Math.min(s.d.documentElement.clien"
    +"tHeight,s.d.body.clientHeight),st=s.wd.pageYOffset||(s.wd.document."
    +"documentElement.scrollTop||s.wd.document.body.scrollTop),vh=st+vph,"
    +"pv=Math.round(vh/dh*100),cv=s.c_r('s_ppv'),cpi=cv.indexOf('|'),cpv="
    +"'',ps='';if(cpi!=-1){cpv=cv.substring(0,cpi);ps=parseInt(cv.substri"
    +"ng(cpi+1));}else{cpv=ps=0;}if(pv<=100){if(pv>parseInt(cpv)){ps=pv-M"
    +"ath.round(vph/dh*100);s.c_w('s_ppv',pv+'|'+ps);}}else{s.c_w('s_ppv'"
    +",'');}");
    s.getPPVSetup=new Function("",""
    +"var s=this;if(s.wd.addEventListener){s.wd.addEventListener('load',s"
    +".getPPVCalc,false);s.wd.addEventListener('scroll',s.getPPVCalc,fals"
    +"e);s.wd.addEventListener('resize',s.getPPVCalc,false);}else if(s.wd"
    +".attachEvent){s.wd.attachEvent('onload',s.getPPVCalc);s.wd.attachEv"
    +"ent('onscroll',s.getPPVCalc);s.wd.attachEvent('onresize',s.getPPVCa"
    +"lc);}");
    s.getPPVSetup();



    Social Media

    Facebook Analytics: Part I – The Measurable Ecosystem

    2010 is shaping up to be the year of social media measurement, and March is the month for measuring Facebook. While most of the major analytics vendors have been working on their Facebook measurement capabilities for some time, Webtrends, Coremetrics, and Omniture all recently released significant advancements in their respective abilities to measure and analyze activity within the social networking juggernaut. These announcements created a frenzy of curiosity and confusion around what’s possible and what each vendor can deliver, so we were compelled to investigate. However, our inquiries exposed a world of complexity in terms of what’s measurable according to the emerging Facebook rules and exactly how organizations would benefit from measuring behavior within the walled social networking ecosystem.

    In this first part of our two-part series on Facebook Analytics, we will dissect the Facebook ecosystem of pages, tabs, applications, advertisements, and Facebook Connect functionality to reveal the do’s and don’ts of tracking visitor activity. While it may seem straightforward, some areas of the ecosystem are off limits to traditional tracking, while other areas can be measured with a high degree of detail. But in all cases, third-party measurement solutions must play by the Facebook rules, which we’ll begin to describe here. In Part II of this series, we’ll lay out a framework for how businesses can derive value from measuring their efforts within Facebook, and we’ll take a deep dive into the specific capabilities of vendors that offer solutions for measuring Facebook today.

    The Facebook Ecosystem

    The Facebook ecosystem is composed of many parts, some of which can be customized and some of which cannot. This section offers a brief description of each component within the ecosystem.

    Facebook Page & Tabs

    Facebook “Pages” form the skeleton of each company’s presence on Facebook. Within the pages are a series of “tabs,” some default (i.e., mandatory) and others open to customization. The default tabs are the Wall and Info tabs, and additional standard tabs may include Photos, Discussion, Videos, Events, Boxes, etc. Beyond the standard tabs, custom tabs within Facebook are plentiful. Yet none of the tabs within Facebook can be measured using traditional JavaScript web analytics tags. This presents huge measurement challenges, despite the fact that tabs offer massive opportunity for businesses to create compelling user experiences within Facebook. Mashable did a nice write-up last summer of Killer Facebook Fan Pages, which will give you a good idea of some of the customization possibilities.

    Facebook Applications

    Applications on Facebook can be developed using a variety of languages, including PHP, JavaScript, Ruby, and Python, and Facebook even provides Client Libraries for its API. Applications must be hosted outside of Facebook, and they can be stand-alone apps or embedded within custom tabs. Because apps can be developed using standard code, tracking with traditional web analytics methods is possible. It’s important to note that all applications require permission to track data about users (more on this in the next section). More than 500,000 applications are available on Facebook today, so clearly they’re popular. Developers can learn more in The Anatomy of an App.
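
    Since the app pages are hosted on your own servers, they can carry a standard page-view tag. Here’s a minimal sketch assuming a SiteCatalyst implementation; the pageName and channel values are hypothetical naming conventions, not anything Facebook-specific:

    /* In the externally hosted app page, after including your s_code.js file */
    s.pageName = 'facebook:app:my promo app'; // hypothetical naming convention
    s.channel = 'facebook apps';              // hypothetical channel grouping
    var s_code = s.t();                       // send the standard image request
    if (s_code) document.write(s_code);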

    Facebook Advertisements

    Facebook ads appear in the right-hand column of Facebook pages and can link to external web pages or to destinations within Facebook such as tabs, applications, events, or groups. Ads can be tracked using Facebook Insights or, when the ad links out to an external site, with traditional web analytics tags via campaign ID codes. Ads follow a template format, with some restrictions on size, text, and images. Ads can be targeted according to nine filters, including age, gender, and keywords, to name just a few, and they can be purchased on an impression or per-click basis, giving businesses options.
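
    For example, if an ad’s destination URL carries a campaign ID in a query parameter, the commonly used getQueryParam plugin can populate the campaigns variable on the landing page. A minimal sketch, assuming that plugin is installed (“cid” is a hypothetical parameter name):

    /* In s_doPlugins: capture the campaign ID from a landing URL such as
       http://www.example.com/landing?cid=fb_spring_promo */
    if (!s.campaign) {
        s.campaign = s.getQueryParam('cid');
    }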

    Facebook Share

    Facebook Share options are surfacing across the web at an astounding rate. Much in the same way that you can share content through social bookmarking sites or microblog formats, Facebook Share will populate a link on a user’s Wall page. Adding the Share link requires only one line of code and can drive traffic back to your site. Facebook even makes it simple by offering multiple Share icons to choose from.
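
    That one line points at Facebook’s sharer.php endpoint. A rough sketch of a scripted version (the popup dimensions are just a common convention):

    /* Open the Facebook sharer for the current page in a popup window */
    function shareOnFacebook() {
        var u = encodeURIComponent(location.href);
        var t = encodeURIComponent(document.title);
        window.open('http://www.facebook.com/sharer.php?u=' + u + '&t=' + t,
                    'fb_share', 'width=626,height=436');
    }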


    Facebook Connect

    Facebook Connect enables businesses and individuals to extend Facebook capabilities, including identity and connections, to the web at large (i.e., outside the Facebook ecosystem). In other words, Facebook Connect makes sharing content, conversations, images, and social comments possible both inside and outside the walls of Facebook. Some aspects of Facebook Connect are measurable when delivered outside the Facebook ecosystem, yet internal connections likely require custom solutions. Facebook Connect works through a set of APIs that, quite frankly, have the potential to make Facebook the epicenter of the digital universe. Below is an example of Facebook Connect in action, and more examples are available here. I recommend checking out JCPenney’s “Beware the Doghouse” campaign, which leverages Facebook Connect, for a good laugh and a taste of how Connect can pull content, images, and video from Facebook to create a rich multimedia experience.

    [Screenshot: Facebook Connect in action]
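
    For the curious, here is a rough sketch of what a Connect integration looked like at the time, based on the v0.4 JavaScript client; treat the exact URL and file names as assumptions from memory rather than a definitive reference:

    /* 1. Include the Connect library on your (non-Facebook) page:
       <script src="http://static.ak.connect.facebook.net/js/api_lib/v0.4/FeatureLoader.js.php"></script>
       2. Initialize it with your application's API key and a cross-domain
       receiver file hosted on your own site: */
    FB.init('YOUR_API_KEY', 'xd_receiver.htm');
    /* 3. XFBML tags such as <fb:login-button> can then render Facebook
       identity and sharing features directly on your pages */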

    Why is measuring the Facebook ecosystem so difficult?

    Regardless of whether you agree with Facebook’s ideology, the company has made a conscious decision not to build its empire using standard web development practices within its own ecosystem. Unlike standard web pages, which are rendered using HTML, Facebook requires that organizations use its own markup language, FBML (Facebook Markup Language), to build custom tabs and enable personalized experiences. Further, Facebook does not allow JavaScript to run on any page or tab on load, instead offering its own solution, FBJS (Facebook JavaScript). There’s a developer wiki maintained by Facebook that provides great detail on the Facebook platform, located here, and the bloggers at PHP, Web and IT Stuff in the UK did a great write-up on the topic of custom tabs as well.

    This ain’t your ordinary JavaScript

    Because Facebook utilizes its own markup language to “empower developers with functionality” and “protect user privacy,” you need to use FBJS if you want to include JavaScript in your custom tabs or applications. This makes tracking with traditional web analytics JavaScript tags impossible. However, some web analytics vendors have developed methods to track visitor information within standard tabs, which we will reveal in Part II of this series. Facebook does offer its own analytics tool, called Insights, for tracking the default Wall page. It provides reports on exposure, fans, actions, and behavior, and offers demographic information about visitors to Wall pages and ads. Note that while Insights provides both click-through rates (CTRs) and engagement rates (ETRs), this is sampled data that offers estimates of actual behavior. Data can be exported from Insights to Excel (.xls) or CSV files. Facebook’s development roadmap indicates that more data will be made available through Insights in early 2010, and the developer notes also indicate that an API will be available for accessing the data collected within the Insights tool.

    The clock is ticking and tracking permission is opt-in

    To complicate matters, at this time Facebook does not permit the storage of user data acquired from Facebook for more than 24 hours, although rumors are brewing that this may change. Exceptions to the 24-hour storage rule are documented on the Facebook developer site, but they are far from crystal clear. Data that may be stored in perpetuity includes User ID, Photo Album ID, email address, primary network ID, and several other attributes noted here. This means that despite all the ways you can get data out of Facebook Insights or through third-party methods, the platform policies may prohibit long-term storage of that data (if you choose to follow those rules). However, Facebook has opened the floodgates to external measurement solutions for applications and advertisements if, and this is a big IF, users grant permission to track and store data about them. This authorization is requested using a standard message shown in the screenshot below.

    [Screenshot: the standard Facebook permission request dialog]

    For users who are comfortable with tracking and aware that this happens on nearly every web site out there, it’s really no big deal. But I’m willing to guess that the abandon rate on most permission requests is astronomical. If you’ve got data on Facebook app abandon rates, I’d love to know.

    Next steps…

    Now that we’ve painted the big picture of the Facebook ecosystem and hinted at what’s possible in terms of measurement, it’s time to explore vendors that can actually measure all these moving parts. We’ll save the juicy details for Part II of this post, but leave you with some food for thought…

    Measuring Facebook is no easy task. Despite the fact that over 400 million users access the site regularly, visibility into their actions, behavior, and demographics is carefully guarded. Each of the vendors we interviewed was highly sensitive to Facebook’s rules and the privacy of its citizens.

    I’d love to hear your thoughts on the ecosystem, and whether you think I missed anything, which is entirely possible given the complexity of Facebook. I welcome your comments, and I hope you’ll visit again soon to learn how a small handful of major web analytics vendors are cracking the Facebook measurement ecosystem.

    Adobe Analytics, General

    Twitter Integration Enhancement Ideas

    While I was at Omniture Summit last week, I couldn’t believe that it had already been a year since I started talking about integrating Twitter data into Omniture SiteCatalyst! Since I haven’t seen many updates about this integration come from Omniture, I thought I would share a few enhancements I have made over the past year, in case any of them are useful to those out there using the integration…

    Competitor Twitter Share
    When I first envisioned importing Twitter data into SiteCatalyst, my primary focus was tracking how often my brand was mentioned and importing the brand-related tweets. This allowed me to monitor my brand usage and filter tweet reports to send the right tweets to the right people based upon search phrases. However, the more I thought about it, the more I realized that this integration could be used to keep tabs on competitors as well. Instead of setting just one “Brand Mentions” Success Event, you can expand the scope of what is tracked to also grab tweets mentioning your competitors and set a second Success Event named “Competitor Tweets.” This second Success Event allows you to trend your competitors and track them on the same SiteCatalyst dashboard you use to track your own brand.
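
    The mechanics of getting tweets into SiteCatalyst are covered in the original integration posts; purely as an illustrative sketch (the keyword lists and event numbers here are hypothetical), the classification step might look like this:

    /* Hypothetical pre-processing step before tweet counts are uploaded */
    var brandTerms = ['mybrand'];                // your brand keywords
    var competitorTerms = ['rivalco', 'acmeco']; // competitor keywords

    function classifyTweet(text) {
        var t = text.toLowerCase();
        function matches(terms) {
            for (var i = 0; i < terms.length; i++) {
                if (t.indexOf(terms[i]) != -1) { return true; }
            }
            return false;
        }
        if (matches(brandTerms)) { return 'event1'; }      // "Brand Mentions"
        if (matches(competitorTerms)) { return 'event2'; } // "Competitor Tweets"
        return null; // not relevant to either bucket
    }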

    This led me to another cool idea: why not track overall “Competitor Tweet Share,” quantifying the percentage of tweets your brand gets in relation to those of your competitors? This would allow you to trend your “share of Twitter” within your narrow competitive niche. To do this, create a Calculated Metric as follows:
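
    The metric definition screenshot doesn’t survive here, but assuming the two Success Events described above, the definition would be along these lines (this shows your brand’s share; flip the numerator for the competitor view):

    Share of Twitter = [Brand Mentions] / ([Brand Mentions] + [Competitor Tweets])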

    The result is a trend graph that allows you to see when spikes occur and whether local events or press releases move the needle.

    You can also set Alerts based upon this Calculated Metric to be notified when you are spiking or tanking in relation to your competitors!

    General Tweets
    The next concept I thought about was “general tweets” related to a business. For example, if you are Coca-Cola, you might want to keep tabs on tweets mentioning “soda” or “soft drink.” However, you wouldn’t want these counted as “Brand Tweets” or “Competitor Tweets,” so instead you can set a third Success Event called “Twitter General Mentions” and specify a list of keywords that should trigger it. This allows you to see whether a list of “general” keywords related to your business is rising or falling, and thereby gauge the overall level of interest in your category over time.

    #Fail
    Lastly, I decided that the #Fail hashtag was too good to pass up. If your brand is mentioned in the same tweet as the #Fail hashtag, you probably want your social media team (if you have one!) to be alerted at once! To do this, all you have to do is create a scheduled report with #Fail in the search box and schedule it to run hourly. Unfortunately, SiteCatalyst delivers hourly reports whether there is data or not (to stop this, please vote for this idea), so you may need to have your social media folks create an Outlook rule to filter out the alerts that say “No Data” in the subject.

    In addition, you can perform the same exercise for your “Competitor Tweets” since your social media team may want to be notified when your competitors have a #Fail hashtag in tweets mentioning their brand name!

    So there you have it…a few minor updates or enhancements to the Twitter – SiteCatalyst integration. If you have other ideas, please leave a comment here…Thanks!