General

#eMetrics Reflection: Privacy Is Getting More Tangible

I’m chunking up my reflections on last month’s eMetrics conference in San Francisco into several posts. I had a list of eight possible topics, and this is the fourth and (probably) final one that I’ll actually get to.

I’ve attended the “privacy” session at a number of recent eMetrics, and the San Francisco one represented a big step forward in terms of specificity. “Privacy” seems to be a powerful word in the #measure industry — it’s a single word that seems to magically turn many people and companies into ostriches! It’s not that we want to avoid the topic, but there is so much complexity and uncertainty that putting our heads in the sand and kicking the can down the road (everyone loves a good mixed metaphor, right?) seems to be the default course of action.

In the session sardonically titled “Attend this Session or Pay €1 Million,” René Dechamps Otamendi of Mind Your Privacy covered European privacy regulations and Joanne McNabb of the California Department of Justice covered California and US privacy regulations.

When Pop Culture Picks It Up…

I was a West Wing fan, but had no memory of this clip that René shared:

When you’ve got mainstream network television referencing a topic, it’s a topic that is at least on the periphery of the mainstream.

“Fundamental Right” vs. “Business/Consumer Negotiation”

René pointed out that many Americans miss the point when it comes to the European privacy regulations — in typical America-centric fashion, we ignore history. We see privacy as a topic that is up for debate: how do we protect consumers with minimal regulation so that businesses can capitalize on as much personal data as possible?

In Europe…there was the Holocaust. René described how, in The Netherlands prior to WWII, the government maintained detailed and accurate records on every citizen. When the Nazis invaded, this data made it very easy for them to identify and persecute Jews. Of the 140,000 Jews who lived in The Netherlands prior to 1940, only 30,000 survived the war, and historians point to the availability of this data as one of the main reasons. Yikes! For many Europeans, this sort of history is both deeply embedded and strongly linked to the topic of personal and online privacy.

Thinking of privacy as an undisputed, fundamental right is somewhat eye-opening.

It Doesn’t Matter Where Your Company Is Based

This isn’t exactly news, but it seems to be one of the excuses marketers use for burying their heads in the sand: “We’re based in Ohio — not California or Europe. So, how much do we have to worry about privacy regulations there?”

The answer comes down to where your customers are. Neither the European Directive nor the California regulations care where a company is based; they’re focused on where the consumers interacting with those companies are. Pull up the visitor geography reports in your web analytics platform and look at where your traffic is coming from — anywhere with a non-negligible percentage of traffic is likely somewhere whose privacy regulations you need to understand.
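
As a quick way to run that check, here’s a minimal pandas sketch, assuming a country-level export from your web analytics tool (the column names and the 1% threshold are my own arbitrary choices):

```python
import pandas as pd

# Hypothetical export of a visitor-geography report: one row
# per country with a visit count (numbers are made up)
geo = pd.DataFrame({
    "country": ["United States", "Germany", "France", "Brazil"],
    "visits":  [84000, 6200, 3100, 450],
})

# Flag any country with a non-negligible share of traffic
geo["share"] = geo["visits"] / geo["visits"].sum()
print(geo[geo["share"] > 0.01])  # 1% cutoff is arbitrary
```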

Why California instead of “the U.S.?”

Joanne pointed out that California is clearly in the forefront when it comes to developing, implementing, and enforcing privacy regulations in the U.S. The California Online Privacy Protection Act (CalOPPA) has been in effect since 2004 (although it was not widely understood for the first few years). That’s closing in on a decade!

To me, this sounded a lot like fuel economy standards in the auto industry — California is a large enough market that businesses can’t afford to ignore the state’s residents. At the same time, other states and the federal government are watching California to see what they figure out (the U.S. has a long — and checkered — history of using the states as laboratories for testing ideas). There is a very good chance that what works for California will be a basis for other states and for federal regulations.

Is California the Same As Europe?

Yes and no. They’re the same in that they have a similar orientation towards “individuals’ rights.” They’re the same in that they are increasingly starting to enforce their regulations (with very real fines levied on companies).

They’re different…in that the U.S. and Europe are different — both culturally and structurally.

They follow developments in each other’s worlds, but they’re not actively marching towards a single, unified regulation.

So, Where Should Companies Start?

Step 1: Check your privacy policy. Really. Read it. Read it for your country-specific sites (simply translating your U.S. privacy policy into German doesn’t work!). If you give it a really close read, are you even complying with what you say you are?

Step 2: Learn some details. For Europe, reach out to René at the email address in the image below. He’s got a document that explains the ins and outs of EU privacy regulations (if the number “27” doesn’t mean anything to you, you haven’t learned enough):

René’s email

For California, one resource is the California Attorney General’s site for online privacy. Unfortunately, it is a bureaucratically built site, so be ready for some heavy document-wading.

Step 3: Educate your company. This one is no small task: when the presenters were asked who should be included in that discussion, it seemed like the shorter answer would have been who not to include. The web team, marketing, legal, and IT are a good start. The best hook is “We could be fined 1,000,000 euros…”

In Short: It’s Still Messy, but Things Are Getting Clearer

The heading says it all. “We” all need to take our heads out of the sand and get smarter on this. If a regulatory agency comes calling, the worst response is, “Tell me who you are again?” The best (but not currently possible) response is, “We’re totally compliant.” A good response is, “We’re working on it, here’s what we’ve done, and here’s our roadmap to do more.”

Presentation

#eMetrics Reflection: Data Visualization (Still!) Matters

I’m chunking up my reflections on last week’s eMetrics conference in San Francisco into several posts. I’ve got a list of eight possible topics, but I seriously doubt I’ll manage to cover all of them.

On Tuesday, I attended Ian Lurie’s presentation: “Data That Persuades: How to Prove Your Point.” This session was a “fist pumper” for me, as Ian is as frustrated by crappy data visualization as I am (he led off the presentation by showing a mouth guard, sharing that he wears one at night because he grinds his teeth, and then noting that seeing data poorly presented was a big source of the stress driving that grinding!).

One of the ways Ian illustrated the importance of putting care into the way data gets presented was with this image:

Read, React, Respond

I think it’s fair to say this is a representation of the three types of memory:

  • The “lizard brain” represents iconic memory — the “visual sensory register.” It’s where preattentive cognitive processing occurs. If we don’t put something forth that is clear and instantaneously perceptible, then the information won’t get past the lizard brain.
  • The “ape brain” represents short-term memory — where conscious thought and basic processing occurs. The initial, “Do I care?” question gets asked and answered.
  • The “human brain” represents longer-term memory — where we actually need to digest the information and develop and implement a response.

Ian also spent a lot of time on Tufte’s data-ink ratio — imploring the audience to be heavily reductionist in the visualization of data by removing extraneous words, lines, tick marks, etc. so that “the data” really comes through.
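
To make the data-ink idea concrete, here’s a minimal matplotlib sketch (the numbers are made up) of the kind of chart junk removal Ian was advocating: kill the borders and tick marks, and let the line itself do the talking.

```python
import matplotlib.pyplot as plt

# Made-up monthly visit counts, purely for illustration
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
visits = [1200, 1350, 1280, 1500, 1620, 1580]

fig, ax = plt.subplots()
ax.plot(months, visits, color="steelblue")

# Strip the non-data ink: chart borders and tick marks
for side in ["top", "right", "left"]:
    ax.spines[side].set_visible(False)
ax.tick_params(left=False, bottom=False)

ax.set_title("Monthly Visits")
plt.show()
```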

Otherwise, the recipients of the data will be like screaming goats:

Screaming Goat

Analysis, Analytics Strategy

#eMetrics Reflection: Self-Service Analysis in 2 Minutes or Less

I’m chunking up my reflections on last week’s eMetrics conference in San Francisco into several posts. I’ve got a list of eight possible topics, but I seriously doubt I’ll manage to cover all of them.

The closing keynote at eMetrics was Matt Wilson and Andrew Janis talking about how they’ve been evolving the role of digital (including social) analytics at General Mills.

Almost as a throwaway aside, Matt noted that one of the ways he has gone about increasing the use of their web analytics platform by internal users is with video:

  1. He keeps a running list of common use cases (types of data requests)
  2. He periodically makes 2-minute (or less) videos of how to complete these use cases

Specifically:

  • He uses Snagit Pro to do a video capture of his screen while he records a voiceover
  • If a video lasts more than 120 seconds, he scraps it and starts over

Outside of basic screen caps with annotations, the “video with a voiceover” is my favorite use of Snagit. When I need to “show several people what is happening,” it’s a lot more efficient than trying to find a time for everyone to jump into GoToMeeting or a Google Hangout. I just record my screen with my voiceover, push the resulting video to YouTube (in a non-public way — usually “anyone with the link” mode), and shoot off an email.

I’ve never tried this with analytics demos — as a way to efficiently build a catalog of accessible tutorials — but I suspect I’m going to start!

Analysis, Analytics Strategy

#eMetrics Reflection: Visits / Visitors / Cohorts / Lifetime Value

I’m chunking up my reflections on last week’s eMetrics conference in San Francisco into several posts. I’ve got a list of eight possible topics, but I seriously doubt I’ll manage to cover all of them.

One of the first sessions I attended at last week’s eMetrics was Jim Novo’s session titled “The Evolution of an Attribution Resolution.” We’ll (maybe) get to the “attribution” piece in a separate post (because Jim turned on a light bulb for me there), but, for now, we’ll set that aside and focus on a sub-theme of his talk.

Later at the conference, Jennifer Veesenmeyer from Merkle hooked me up with a teaser copy of an upcoming book that she co-authored with others at Merkle called It Only Looks Like Magic: The Power of Big Data and Customer-Centric Digital Analytics. (It wasn’t like I got some sort of super-special hookup. They had a table set up in the exhibit hall and were handing copies out to anyone who was interested. But I still made Jennifer sign my copy!) Due to timing and (lack of) internet availability on one of the legs of my trip, I managed to read the book before landing back in Columbus.

A Long-Coming Shift Is About to Hit

We’ve been talking about being “customer-centric” for years. It seems like eons, really. But, almost always, when I’ve heard marketers bandy about the phrase, they mean, “We need to stop thinking about ‘our campaigns’ and ‘our site’ and ‘our content’ and, instead, start focusing on the customer’s needs, interests, and experiences.” That’s all well and good. Lots of marketers still struggle to actually do this, but it’s a good start.

What I took away from Jim’s points, the book, and a number of experiences with clients over the past couple of years is this:

Customer-centricity can be made much more tangible…and much more tactically applicable when it comes to effective and business-impacting analytics.

This post covers a lot of concepts that, I think, are all different sides of the same coin.

Visitors Trump Visits

Cross-session tracking matters. A visitor who did nothing of apparent importance on their first visit to the site may do nothing of apparent importance across multiple visits over multiple weeks or months. But…that doesn’t mean what they do and when they do it isn’t leading to something of high value to the company.

A caveat (which Jim defended) to that:

Visitors Trump Visits

Does this mean visits are dead? No. Really, unless you’re prepared to answer every new analytics question with, “I’ll have an answer in 3-6 months once I see how visitors play out,” you still need to look at intra-session results.

When I asked Jim about this, his response totally made sense. Paraphrasing heavily: “Answering a question with a visit-driven response is fine. But, if there’s a chance that things may play out differently from a visitor view, make sure you check back in later and see if your analysis still holds over the longer term.”
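
To make the two views concrete, here’s a minimal pandas sketch, using a made-up session log, that answers the same question both ways: per visit and rolled up per visitor.

```python
import pandas as pd

# A made-up session log: one row per visit
sessions = pd.DataFrame({
    "visitor_id": ["a", "a", "b", "c", "c", "c"],
    "visit":      [1, 2, 1, 1, 2, 3],
    "revenue":    [0, 120, 0, 0, 0, 300],
})

# Visit view: most individual sessions look worthless
print("Conversion rate per visit:", (sessions["revenue"] > 0).mean())

# Visitor view: roll the sessions up to the person
visitors = sessions.groupby("visitor_id")["revenue"].sum()
print("Share of visitors who eventually convert:", (visitors > 0).mean())
```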

Cohort Analysis

Cohort analysis is nothing more than a visitor-based segment. Now, a crap-ton of marketers have been smoking the Lean Startup Hookah Pipe, and, in the feel-good haze that filled the room, have gotten pretty enamored with the concept. Many analysts, myself included, have asked, “Isn’t that just a cross-session segment?” But “cross-session segment” isn’t nearly as fun to say.

Cohort Analysis Tweet

Here’s the deal with cohort analysis:

  • It is nothing more than an analysis based around segments that span multiple sessions
  • It’s a visitor-based concept
  • It’s something that we should be doing more (because it’s more customer-centric!)
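
Since a cohort is “nothing more” than a visitor-based segment, the mechanics are pretty simple. Here’s a minimal pandas sketch, assuming a made-up visit log, that buckets visitors by the month of their first visit and then counts who shows up again in later months:

```python
import pandas as pd

# A made-up visit log: visitor_id plus the month of each visit
visits = pd.DataFrame({
    "visitor_id":  ["a", "a", "b", "b", "b", "c"],
    "visit_month": ["2013-01", "2013-02", "2013-01",
                    "2013-03", "2013-04", "2013-02"],
})

# The cohort is a visitor-scoped attribute: the month of the first visit
cohort = visits.groupby("visitor_id")["visit_month"].min().rename("cohort")
visits = visits.join(cohort, on="visitor_id")

# Count how many visitors from each cohort show up in each later month
retention = (visits.groupby(["cohort", "visit_month"])["visitor_id"]
                   .nunique()
                   .unstack(fill_value=0))
print(retention)
```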

The problem? Mainstream web analytics tools capture visitors cross-session, and they report cross-session “unique visitors,” but only in aggregate. You can dig into Adobe Discover to get cross-session detail, or, I imagine, into Adobe Insight, but that is unsatisfactory. Google has been hinting that this is a fundamental pivot they’re making — to get more foundationally visitor-based in their interface. But, Jim asked the same question many analysts are asking:

Visitor Value Prediction

Having started using and recommending visitor-scope custom variables more and more often, I’m starting to salivate at the prospect of “visitor” criteria coming to GA segments!

Surely, You’ve Heard of “Customer Lifetime Value?”

“Customer Lifetime Value” is another topic that gets tossed around with reckless abandon. Successful retailers, actually, have tackled the data challenges behind this for years. Both Jim and the Merkle book brought the concept back to the forefront of my brain.

It’s part and parcel of everything else in this post: getting beyond, “What value did you (the customer) deliver to me today?” to “What value have you (or will you) deliver to me over the entire duration of our relationship?” (with an eye to the time value of money, so that we’re not just “hoping for a payoff wayyyy down the road” and congratulating ourselves on a win every time we get an eyeball).
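
Here’s the arithmetic in its simplest form: a sketch of a discounted lifetime value calculation (the 10% discount rate and the per-year margins are made up), where a dollar of margin in year five counts for less than a dollar earned today:

```python
# A minimal discounted-LTV sketch; the discount rate and the
# per-year margins are made-up numbers, purely for illustration
def lifetime_value(annual_margins, discount_rate=0.10):
    return sum(margin / (1 + discount_rate) ** year
               for year, margin in enumerate(annual_margins))

# $50 of margin this year, more in later years as the relationship grows
print(round(lifetime_value([50, 60, 75, 75, 40]), 2))  # 250.2, versus a raw 300
```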

Digital data is actually becoming more “lifetime-capable:”

  • Web traffic — web analytics platforms are evolving to be more visitor-based than visit-based, enabling cross-session tracking and analysis
  • Social media — we may not know much about a user (see the next section), but, on Twitter, we can watch a username’s activity over time, and even the most locked down Facebook account still exposes a Facebook ID (and, I think, a name)…which also allows tracking (available/public) behavior over time
  • Mobile — mobile devices have a fixed ID. There are privacy concerns (and regulations) with using this to actually track a user over time, but the data is there. So, with appropriate permissions, the trick is just handling the handoff when a user replaces their device

Intriguing, no?

And…Finally…Customer Data Integration

Another “something old is new again” topic is customer data integration — the “customer” angle of the world of Master Data Management. In the Merkle book, the authors pointed out that the elusive “master key” that is the Achilles’ heel of many customer data integration efforts is getting both easier and more complicated to work around.

One obvious-once-I-read-it concept was that there are fundamentally two different classes of “user IDs:”

  • A strong identifier is “specifically identifiable to a customer and is easily available for matching within the marketing database.”
  • A weak identifier is “critical in linking online activity to the same user, although they cannot be used to directly identify the user.”

Cookie IDs are a great example of a weak identifier. As is a Twitter username. And a Facebook user ID.

The idea here is that a sophisticated map of IDs — strong identifiers augmented with a slew of weak identifiers — starts to get us to a much richer view of “the customer.” It holds the promise of enabling us to be more customer-centric. As an example:

  • An email or marketing automation system has a strong identifier for each user
  • Those platforms can attach a subscriber ID to every link back to the site in the emails they send
  • That subscriber ID can be picked up by the web analytics platform (as a weak identifier) and linked to the visitor ID (cookie-based — also a weak identifier)
  • Now, you have the ability to link the email database to on-site visitor behavior
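
To make that flow tangible, here’s a minimal pandas sketch (all IDs are made up) in which the subscriber ID acts as the bridge between the email database’s strong identifier and the web analytics platform’s weak, cookie-based visitor ID:

```python
import pandas as pd

# Email platform: strong identifier (email address) mapped to the
# subscriber ID it stamps on every link (all IDs here are made up)
email_db = pd.DataFrame({
    "email":         ["pat@example.com", "lee@example.com"],
    "subscriber_id": ["s-101", "s-202"],
})

# Web analytics: subscriber ID picked up from the click-through URL,
# tied to the cookie-based visitor ID (both weak identifiers)
web_visits = pd.DataFrame({
    "subscriber_id": ["s-101", "s-101", "s-202"],
    "visitor_id":    ["cookie-9", "cookie-9", "cookie-4"],
    "page_views":    [3, 7, 2],
})

# The subscriber ID is the bridge that links the two systems
linked = email_db.merge(web_visits, on="subscriber_id")
print(linked.groupby("email")["page_views"].sum())
```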

This example is not a new concept by any means. But, in my experience, each of the platforms involved in a scenario like this has preferred to set its own strong and weak identifiers. What I took away from the Merkle book is that we’re getting a lot closer to being able to have those identifiers flow between systems.

Again…privacy concerns cannot be ignored. They have to be faced head on, and permission has to be granted where permission would be expected.

Lotta’ Buzzwords…All the Same Thing?

Nothing in this post is really “new.” None of it is even “new to me.” What I hadn’t done was connect the dots: these concepts are all largely the same thing.

That, I think, is exciting!

Analysis, Reporting, Social Media

Analysts as Community Managers' Best Friends

I had a great time in Boston last week at eMetrics. The unintentional theme, according to my own general perception and the group messaging backchannel that I was on, was that tag management SOLVES ALL!!!

My session…had nothing to do with tag management, but it seemed worth sharing nonetheless: “The Community Manager’s Best Friend: You.” The premise of the presentation was twofold:

  • Community managers’ plates are overly full as it is, without them needing to spend extensive time digging into data and tools
  • Analysts have a slew of talents that are complementary to community managers’, and they can apply those talents to make for a fantastic partnership

Due to an unfortunate mishap with the power plug on my mixing board while I was out of town a few months ago, my audio recording options are a bit limited, so the audio quality in the 50-minute video (slides with voiceover) below isn’t great. But, it’s passable (put on some music in the background, and the “from the bottom of a deep well” audio effect in the recording won’t bug you too much):

I’ve also posted the slides on Slideshare, so you can quickly flip through them that way as well, if you’d rather:

As always, I’d love any and all feedback! With luck, I’ll reprise the session at future conferences, and a reprise without refinement would be a damn shame!

Social Media

eMetrics Day 1 — Let's Look at the Tweets!

Update: I misstated @johnlovett’s follower count in the initial post. This was a fatigue-driven user error on my end — not bad data coming from either tool employed in this analysis, and it has been corrected!

Picking up on Michele Hinojosa’s quick analysis of tweets from the first day of the Omniture Summit, I thought I’d take a quick crack at Day 1 of eMetrics. I used TweetReach and a “tracker” (query) I set up a couple of weeks ago for that.

Now, I was a bit short-sighted, in that I set up the tracker on Eastern time. But, we still cover the main bulk of the tweets by selecting March 14th for the analysis range, so I’m not going to lose any sleep over it. The high-level summary:

Let’s take a look at some of the more interesting tweets, as identified using a few different criteria.

Just looking at raw exposure of the tweets, @SocialMedia2Day really dominated. Now, @SocialMedia2Day has over 59,000 followers, which means every tweet gets recorded as that many impressions even before anyone retweets (and retweets add the retweeters’ followers on top of that). According to Twitalyzer, @SocialMedia2Day has an effective reach of 175,226, which puts the account in the 98.2nd percentile. The top 3 tweets, just based on raw exposure:

Notice that the top tweet had 10 retweets — 10 people in @socialmedia2day’s network thought it worth repeating. And, it’s a pretty good point content-wise.

@comScore also has a high follower count — more than 24,000, and an effective reach from Twitalyzer of 46,474 (94.3rd percentile). So, after all of the @socialmedia2day tweets comes a list of all of the @comScore tweets. Jumping beyond those as anomalies, of sorts, we get the top tweets by “individual” contributors:

John’s Code of Ethics tweet was retweeted 9 times and garnered almost 30,000 impressions. Nice! We care about acting responsibly! John’s tweet generated its exposure through retweets, as he has around 2,500 people following him…which is a lot of people, but only 1/4 of Ken Burbary’s count. Ken has 10,000 people following him (and he’s following 10,000 people), so his tweets generate ~10,000 impressions just from him tweeting them.
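
For what it’s worth, the exposure arithmetic is simple enough to sketch. This assumes (my assumption, not necessarily TweetReach’s exact formula) that a tweet counts as one impression per author follower, and each retweet adds the retweeter’s followers:

```python
# Back-of-the-envelope exposure math; this is my assumption of the
# approach, not necessarily TweetReach's exact formula
def estimated_exposure(author_followers, retweeter_followers):
    # The author's followers see the tweet; each retweet then
    # exposes it to that retweeter's followers
    return author_followers + sum(retweeter_followers)

# ~2,500 followers plus 9 retweets (retweeter counts are made up)
print(estimated_exposure(2500, [4000, 1200, 8000, 500, 3000,
                                2500, 6000, 1100, 900]))  # 29700
```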

So, looking at raw retweet volume is an indication of how naturally interesting and worth repeating a user’s followers (and any followers who retweeted) found the tweet to be. The top retweeted tweet was retweeted 11 times:

Again…a pretty sharp observation.

Shifting around to the top contributors, TweetReach again provides a list based on the exposure generated by each user. The top 35:

We covered that @SocialMedia2Day, @comScore, and @kenburbary have very high follower counts, so let’s take a look at the next two. First, @michelehinojosa, who has just under 1,000 followers and an effective reach in Twitalyzer of 18,852 (89.7th percentile), and who sent 127 eMetrics tweets over the course of the day (tweet detail sorted by highest to lowest exposure):

Note the top two tweets were retweeted multiple times…and they’re worth sharing!

And, finally, yours truly — a bit under 1,200 followers, and a Twitalyzer effective reach of ~3,000 (although it jumped up to north of 89,900 starting on March 9th, which is twice what @comScore’s effective reach is, and they have 20X the followers; I need to ping the Twitalyzer folk to help me understand how that happened). My top 5 highest exposure eMetrics tweets for the day:

The second tweet — which was just a humorous observation — was interpreted as a “reply” to @jimsterne…but it showed up as the second-highest exposure tweet. That’s not exactly high-value content — more of a chuckle for those in the room who were watching the #emetrics stream. And, interestingly, I got a direct message from a follower midway through the day that they were unfollowing me as I was clogging their stream. I’m somewhat sensitive to that, but, with tweets being, essentially, public note-taking for me at conferences (and the enticing opportunity to then analyze and summarize those tweets after the conference, so it’s actually shared public note-taking), I suppose I’m okay with that.

Overall, this (very quick) analysis seems to reveal that the most engaging (egad! scary word!) tweets were ones that stated, succinctly and eloquently, truths about our profession. I also would’ve liked to generate a word cloud of all of the tweets (appropriately cleansed)…but that’s simply not as quick and easy as I wish it was!

What do you think?

Analytics Strategy, Social Media

eMetrics Washington, D.C. 2010 — Fun with Twitter

I took a run at the #emetrics tweets to see if anything interesting turned up. Rather than jump into Nielsen Buzzmetrics, which was an option, I just took the raw tweets from the event and did some basic slicing and dicing of them.

[Update: I’ve uploaded the raw data — cleaned up a bit and with some date/time parsing work included — in case you’d like to take another run at analyzing the data set. It’s linked to here as an Excel 2007 file]

The Basics of the Analysis

I constrained the analysis to tweets that occurred between October 4, 2010, and October 6, 2010, which were the core days of the conference. While tweets occurred both before and after this date range, these were the days that most attendees were on-site and attending sessions.

To capture the tweets, I set up a Twapper Keeper archive for all tweets that included the #emetrics hashtag. I also, certainly, could have simply set up an RSS feed and used Outlook to capture the tweets, which is what I do for some of our clients, but I thought this was a good way to give Twapper Keeper a try.

The basic stats: 1,041 tweets from 218 different users (not all of these users were in attendance, as this analysis included all retweets, as well as messages to attendees from people who were not there but were attending in spirit).

Twapper Keeper

Twapper Keeper is free, and it’s useful. The timestamps were inconsistently formatted and/or missing in the case of some of the tweets. I don’t know if that’s a Twapper Keeper issue, a Twitter API issue, or some combination. The tool does have a nice export function that got the data into a comma-delimited format, which is really the main thing I was looking for!

Twitter Tools Used

Personally, I’ve pretty much settled on HootSuite — both the web site and the Droid app — for both following Twitter streams and for tweeting. I was curious as to what the folks tweeting about eMetrics used as a tool. Here’s how it shook out:

So, HootSuite and TweetDeck really dominated.

Most Active Users

On average, each user who tweeted about eMetrics tweeted 4.8 times on the topic. But, this is a little misleading — there were a handful of very prolific users and a pretty long tail when you look at the distribution.

June Li and Michele Hinojosa were the most active users tweeting at the conference by far, accounting for 23% of all tweets between the two of them directly (and another 11% through replies and retweets to their tweets, which isn’t reflected in the chart below — tweet often, tweet with relevancy, and your reach expands!):

Tweet Volume by Hour

So, what sessions were hot (…among people tweeting)? The following is a breakdown of tweets by hour for each day of the conference:

Interestingly, the biggest spike (11:00 AM on Monday) was not during a keynote. Rather, it was during a set of breakout sessions. From looking at the tweets themselves, these were primarily from the Social Media Metrics Framework Faceoff session that featured John Lovett of Web Analytics Demystified and Seth Duncan of Context Analytics. Of course, given the nature of the session, it makes sense that the most prolific users of Twitter attending the conference would be attending that session and sharing the information with others on Twitter!

The 2:00 peak on Monday occurred during the Vendor Line-Up session, which was a rapid-fire and entertaining overview of many of the exhibiting vendors (an Elvis impersonator and a CEO donning a colonial-era wig are going to generate some buzz).

There was quite a fall-off after the first day in overall tweets. Tweeting fatigue? Less compelling content? I don’t know.
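
If you’re curious how this sort of hour-by-hour breakdown comes together, here’s a minimal pandas sketch (the file and column names are hypothetical; any tweet archive with a timestamp column would do):

```python
import pandas as pd

# Hypothetical archive: one row per tweet with a created_at timestamp
tweets = pd.read_csv("emetrics_tweets.csv", parse_dates=["created_at"])

# Count tweets per day and hour to surface the "hot" sessions
by_hour = tweets.groupby([tweets["created_at"].dt.date,
                          tweets["created_at"].dt.hour]).size()
print(by_hour)
```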

Tweet Content

A real challenge for listening to social media is trying to pick up hot topics from unstructured 140-character data. I continue to believe that word clouds hold promise there…although I can’t really justify why a word frequency bar chart wouldn’t do the job just as well.

Below is a word cloud created using Wordle from all 1,041 tweets used in this analysis. The process: I took all of the tweets, dropped them into MS Word, and then did a handful of search-and-replaces to remove the following words/characters:

  • #emetrics
  • data
  • measure
  • RT

These were words that would come through with a very strong signal and dominate potentially more interesting information. Note: I did not include the username for the person who tweeted. So, occurrences of @usernames were replies and retweets only.
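
And, for the record, the word-frequency tally behind either a word cloud or the bar chart I mused about above is only a few lines of Python (the file name is hypothetical; it assumes one tweet per line):

```python
import re
from collections import Counter

# Hypothetical export: one tweet per line in a plain-text file
with open("emetrics_tweets.txt") as f:
    words = re.findall(r"[@#\w']+", f.read().lower())

# Same cleansing as the Word search-and-replace step above
noise = {"#emetrics", "data", "measure", "rt"}
counts = Counter(w for w in words if w not in noise)
print(counts.most_common(25))
```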

Here’s the word cloud:

What jumped out at me was the high occurrence of usernames in this cloud. This appears to be a combination of the volume of tweets from that user (opening up opportunities for replies and retweets) and the “web analytics celebrity” of the user. The Expedia keynote clearly drove some interest, but no vendors generated sufficient buzz to really drive a discussion volume sufficient to bubble up here.

As I promised in my initial write-up from eMetrics, I wasn’t necessarily expecting this analysis to yield great insight. But, it did drive me to some action — I’ve added a few people to the list of people I follow!

Analytics Strategy

Gilligan's eMetrics Recap — Washington, D.C. 2010

I attended the eMetrics Marketing Optimization Summit earlier this week in D.C., and this post is my attempt to hash out my highlights from the experience. Of all the conferences I’ve attended (I’m not a major conference attendee, but I’m starting to realize that, by sheer dint of advancing age, I’m starting to rack up “experience” in all sorts of areas by happenstance alone), this was one that I walked away from without having picked up on any sort of unintended conference theme. Normally, any industry conference is abuzz about something, and that simply didn’t seem to be the case with this one.

(In case you missed it, the paragraph above was a warning that this post will not have a unifying thread! Let’s plunge ahead nonetheless!)

Voice of the Customer

It’s good to see VOC vendors aggressively engaging the “traditional web analytics” audience. Without making any direct effort, I repeatedly tripped over Foresee Results, iPerceptions, OpinionLabs, and CRM Metrix in keynotes, sessions, the exhibit hall, and over meals.

My takeaway? It’s a confusing space. Check back in 12-18 months and maybe I’ll be able to pull off a post that provides a useful comparison of their approaches. If I had my ‘druthers, we’d pull off some sort of bracketed Lincoln-Douglas style debate at a future eMetrics where these vendors were forced to engage each other directly, and the audience would get to vote on who gets to advance – not necessarily judging which tool is “better” (I’m pretty sure each tool is best-in-class for some subset of situations…although I know at least one of the vendors above who would vigorously tell me this is not the case), but declaring a winner of each matchup so that we would get a series of one-on-one debates between different vendors that would be informative for the audience.

Cool Technology

I generally struggle to make my way around an exhibit hall, so I didn’t come anywhere close to covering all of the vendors. This wasn’t helped by the fact that I talked to a couple of exhibitors early on that were spectacularly unappealing. That wasn’t exactly a great motivator for continuing the process. There were, however, several tools that intrigued me:

  • Ensighten – if you’re reading this blog, then chances are you read “real” blogs, too, and you likely caught that Eric Peterson recently wrote a paper on Tag Management Systems (sponsored by Ensighten). It’s worth a read. Ensighten was originally developed in-house at Stratigent and then spun off as a separate business with Josh Manion at the helm. Their corny (but highly effective) schtick at the conference was that they were starting a “tagolution” (a tagging revolution). That gave them high visibility…but I think they’ve got the goods to back it up. Put simply, you deploy the Ensighten javascript on your site instead of all of the other tags you need (web analytics, media tracking, VOC tools, etc.). When the page loads, that javascript makes a call to Ensighten, which returns all of the tags that need to be executed. Basically, you get to manage your tags without touching the content on your site directly. And, according to Josh, page performance actually improves in most cases (he had a good explanation as to why — counter-intuitive as it seems). Very cool stuff. Currently, they’re targeting major brands, and the price point reflects this – “six figures” was the response when I asked about cost for deploying the solution on a handful of domains. Ouch.
  • DialogCentral – this is actually an app/service from OpinionLabs, and I have no idea what kind of traction it will get. But, as I stood chatting with the OpinionLabs CIO, I pulled out my Droid and had a complete DialogCentral experience in under a minute. The concept? Location-based services as a replacement for “tell us what you think” postcards at physical establishments. You fire up their app (iPhone) or their mobile site (dialogcentral.com will redirect to the mobile site if you visit it with a mobile device). DialogCentral then pulls up your location and nearby establishments (think Foursquare, Gowalla, Brightkite-type functionality to this point), and then lets you type in feedback for the establishment. That feedback then gets sent to the establishment, regardless of whether the venue is a DialogCentral customer. Obviously, their hope is that companies will sign on as customers and actually promote the feedback mechanism in-store, at which point the feedback pipeline gets much smoother. It’s an intriguing idea — a twist-o’-the-old on all of the different “publicly comment on this establishment” aspects of existing services.
  • Clicktale – these guys have been around for a while, and I was vaguely familiar with them, but got an in-depth demo. They use client-side code (which, presumably, could be managed through Ensighten — I’m just sayin’…) to record intra-page mouse movements and clicks. They then use that data to enable “replays” of the activity as well as to generate page-level heatmaps of activity and mouse placement. Their claim (substantiated by research) is that mouse movements are a pretty tight proxy for eye movement, so you get a much lower cost, more broadly collected set of (virtual) eye-tracking data. And, the tool has all sorts of triggering and filtering capabilities to enable homing in on subsets of activity. Pretty cool stuff (there’s a quick sketch of the heatmap idea after this list).
  • ShufflePoint – this wasn’t an exhibiting vendor, but, rather, the main gist of one of the last sessions of the conference. The tool is a poor man’s virtual-data-mart enabler. Basically, it’s an interface to a variety of tool APIs (Google Analytics, Google Adwords, Constant Contact, etc. – Facebook and Twitter are apparently in the pipeline) that allows you to build queries and then embed those queries in Excel. I’ve played around with the Google Analytics API enough to get it hooked into Excel and pulling data…and know that I’m not a programmer. Josh Katinger of Accession Media was the presenter, and he struck me as being super-pragmatic, obsessive about efficiency, and pretty much bullshit-free (I found out after I got to the airport that a good friend of mine from Austin, Kristin Farwell, actually goes wayyy back with Josh, and she confirmed that this was an accurate read). We’ll be giving ShufflePoint a look!
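
On the Clicktale point above: the aggregation side of a mouse-movement heatmap is straightforward to sketch. Here’s a minimal example with simulated coordinates (the actual in-page capture is the hard part, and it isn’t shown):

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated mouse positions in page pixels; real data would come
# from client-side capture, which is the hard part
rng = np.random.default_rng(42)
x = rng.normal(512, 150, 5000)
y = rng.normal(300, 100, 5000)

# Bin the positions into a grid and render the counts as a heatmap
heat, _, _ = np.histogram2d(x, y, bins=50)
plt.imshow(heat.T, origin="lower", cmap="hot")
plt.title("Mouse-activity heatmap (simulated)")
plt.show()
```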

Social Media Measurement

I was expecting to hear a lot more on social media measurement at the conference…but it really wasn’t covered in-depth. Jim Sterne kicked off with a keynote on the subject (he did recently publish a book on the topic, which is now sitting on my nightstand awaiting a read). And, there was a small panel early on the first day where John Lovett got to discuss the framework he developed with Jeremiah Owyang (which is fantastic) this past spring. But, other than that, there really wasn’t much on the subject.

MMM and Cross-Channel Analytics

Steve Tobias from Marketing Management Analytics conducted a session that focused on the challenges of marketing mix modeling (MMM) in a digital world. I felt pretty smart as he listed multiple reasons why MMM struggles to effectively incorporate digital and social media, because many of his points mirrored what I’ve put together on the exact same subject (to be clear, he didn’t get his content from me!). It was good to get validation on that front from a true expert on the subject.

Where things got interesting, though, was when Steve talked about how his company is dealing with these challenges by supplementing their MMM work (their core strength) with “cross-channel analytics.” By “cross-channel analytics,” he meant panel-based measurement. Again, I felt kinda’ smart (and, really, it’s all about me and my feelings, isn’t it?), as I keep thinking (and I’ve got this in some internal presentations, too), that panel-based measurement is going to be key in truly getting a handle on cross-channel/cross-device consumer interactions and their impact.

The People

One of the main reasons to go to a conference like eMetrics is the people — catching up with people you know, meeting people you’ve only “known” digitally, and meeting people you didn’t know at all.

For me, it was great to again get to chat with Hemen Patel from CRM Metrix, John Lovett from Analytics Demystified, Corry Prohens from IQ Workforce, and the whole Foresee Results gang (Eric F., Eric H., Chris, Maggie,…and more). And, it wound up being a really special treat to see Michelle Rutan, who I take credit for putting on the web analytics career path way back when we worked at National Instruments together…and she was presenting (as an amusing aside, I credit Michelle’s husband, Ryan — although they weren’t even dating at the time — as being pretty key to helping me understand the mechanics of page tagging; he’s credited by name in one of the most popular posts on this blog)!

I actually got to meet Stéphane Hamel in person, which was a huge treat (I saw a lot of other web analytics celebrities, but never wound up in any sort of conversation with them — maybe next time), as well as Jennifer Day, who I’ve swapped tweets with for a while.

Digital Analytics folk are good peeps. That’s all there is to it.

Twitter (and Twapper Keeper) Means More to Come!

I actually managed to have the presence of mind to set up a Twapper Keeper archive for #emetrics shortly before the conference started, and I’m hoping to have a little fun with that in the next week or two. We’ll see if any insights (I’m not promising actionable insights, as I’ve decided that term is wildly overused) emerge. I picked up a few new people to follow just based on the thoroughness and on-pointed-ness of their tweets — check out @michelehinojosa (who also is blogging her eMetrics takeaways) if you’re looking to expand the list of people you follow.

It was a good conference!