
Measuring the Super Bowl Ads through a Social Media Lens

Resource Interactive evaluated the Super Bowl ads this year from a digital and social media perspective — how well did the ads integrate with digital channels (web sites, social media, mobile, and overall user experience) before and during the game? I got tapped to pull some hard data. It was an interesting experience!

A Different Kind of Measurement

This was a different kind of measurement from what I normally do. I definitely figured out a few things that we’ll be able to apply to client work in the future. But, while this exercise seemed, on the surface, like just a slight one-off from the performance measurement we already do day in and day out, it actually had some pretty hefty differences:

  • Presumption of Common Objectives — we used a uniform set of criteria to measure the ads, which, by definition, means that we had to assume the ads were all, basically, trying to reach the same consumers and deliver the same results. Or, to be more accurate, we used a uniform set of criteria and then made some assumptions about each brand to inform how an ad and its digital integration was judged. That’s a little backwards from how a marketer would normally measure a campaign’s performance.
  • Over 30 Brands — the sheer volume of brands that advertise at the Super Bowl introduces a wrinkle. From Teleflora to PepsiMax to Kia to Groupon, the full list was longer than any single brand would normally watch as its “major competitors.”
  • Real-Time Assessment — we determined that we wanted to have our evaluation completed no later than first thing Monday morning. The reality of marketing, though, is that, even with social media’s immediacy and real-time-ness, successful campaigns actually play out over time. In this case, we had to make a judgment within a few hours of the end of the game itself.
  • No Iterations — I certainly could (and did) do some test data pulls, but I really had no idea what the data was going to look like when The Game actually hit. So, we chose a host of metrics, and I laid out my scorecard with no idea as to how it would turn out once data was plugged in. Normally, I would want to have some time to iterate and adjust exactly what data was included and how it was presented (certainly starting with a well-thought-out plan of what was being included and why, but knowing that I would likely find some not-useful pieces and some additions that were warranted).

It was a challenge, for sure!

The Approach

While the data I provided — the most objective and quantitative of the whole exercise — was not core to the overall scoring…the approach we took was pretty robust (I had little to do with developing the approach — this is me applauding the work of some of my co-workers).

Simply put, we broke the “digital” aspects of the experience into several different buckets, assigned a point person to each of those buckets, and then had that person and his/her team develop a set of heuristics against which they would evaluate each brand that was advertising. That made the process reasonably objective, and it acknowledged that we are far, far, far from having a way to directly and immediately quantify the impact of any campaign. Rather, we recognized that digital is what we do.  Ad Age putting us at No. 4 on their Agency A-List was just further validation of what I already knew — we have some damn talented folk at RI, and their experience-based judgments hold sway.

For my part, I worked with Hayes Davis at TweetReach, Eric Peterson at Twitalyzer, and my mouse and keyboard in Microsoft Excel to set up seven basic measures of a brand’s results on Twitter and on Facebook. Each measure had either two or three breakdowns, for a total of 17 specific measures. For each measure, I grouped each brand into one of three buckets: top performers (green), bottom performers (red), and all others (no color). My hope was that I would have a tight scorecard that would support the core teams’ scoring — perhaps causing a second look at a brand or two, but largely lining up with the experts’ assessment. And, that’s largely how things wound up playing out.

The Metrics

The metrics I included on my scorecard came from three different angles with three different intents:

  • Brand mentions on Twitter — these were measures related to the overall reach of the “buzz” generated for each brand during the game; we worked with TweetReach to build out a series of trackers that reported — overall and in 5-minute increments — the number of tweets, overall exposure, and unique contributors.
  • Brand Twitter handle — these were measures of whether the brand’s Twitter account saw a change in its effective reach and overall impact, as measured by Twitalyzer; Eric showed me how to set up a page that showed the scores for all of the brands we were tracking, which was nifty for sharing.
  • Facebook page growth — this was a simple measure of the growth in fans of the brand’s Facebook page.

The first set of measures was captured during the game, and we normalized those numbers using the total number of seconds of advertising each brand ran. The latter two sets of measures we assessed against a pre-game baseline, using Monday, 1/31/2011, as the baseline date. Immediately following the game, there was a lot of manual data refreshing — of Facebook pages and of Twitalyzer — followed by a lot of data entry.
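To make those two adjustments concrete, here is a minimal Python sketch of the arithmetic: tweets per 30 seconds of ad airtime, and percent growth versus the pre-game baseline. The brand names and every number below are hypothetical placeholders (the actual scorecard lived in Excel):

```python
# Hypothetical during-game tweet counts and total ad airtime per brand.
game_tweets = {"BrandA": 18500, "BrandB": 4200, "BrandC": 9100}
ad_seconds = {"BrandA": 60, "BrandB": 30, "BrandC": 90}

# Normalize buzz by airtime so a brand that ran one :30 spot is
# comparable to a brand that ran three.
tweets_per_30s = {
    brand: round(count / (ad_seconds[brand] / 30), 1)
    for brand, count in game_tweets.items()
}

# Hypothetical Facebook fan counts: the Monday-before baseline vs.
# the post-game refresh.
baseline_fans = {"BrandA": 120000, "BrandB": 45000, "BrandC": 300000}
postgame_fans = {"BrandA": 126000, "BrandB": 47700, "BrandC": 301500}

# Page growth as percent change from the pre-game baseline.
fan_growth_pct = {
    brand: round(100 * (postgame_fans[brand] - baseline_fans[brand]) / baseline_fans[brand], 1)
    for brand in baseline_fans
}

print(tweets_per_30s)  # {'BrandA': 9250.0, 'BrandB': 4200.0, 'BrandC': 3033.3}
print(fan_growth_pct)  # {'BrandA': 5.0, 'BrandB': 6.0, 'BrandC': 0.5}
```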

As it turned out, many of the brands came up short when it came to integrating with their social media presence, which made for largely unimpressive results in the latter two categories above. Sure, BMW drove big growth in fans of their page, but they did so by forcing visitors to like the page to get to the content, which is almost like putting a registration form on the home page of a web site before granting access to any content.

The Results

In the end, I had a “Christmas Tree” one-pager: for each metric, the top 25% of the brands were highlighted in green and the bottom 25% were highlighted in red. I’m not generally a fan of these sorts of scorecards as an operational tool, but, to get a visual cue as to which brands generally performed well as opposed to those that generally performed poorly, it worked. It also “worked” in that there were no hands-down, across-the-board winners.
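For the curious, the quartile coloring rule is simple enough to sketch in a few lines of Python. This is a hypothetical reimplementation of the logic, not the Excel conditional formatting actually used:

```python
def bucket_brands(scores):
    """Map each brand to 'green' (top 25%), 'red' (bottom 25%), or '' (the rest)."""
    ranked = sorted(scores, key=scores.get)  # brands, ascending by score
    cutoff = max(1, len(ranked) // 4)        # roughly 25% of the brands
    bottom, top = set(ranked[:cutoff]), set(ranked[-cutoff:])
    return {
        brand: "green" if brand in top else "red" if brand in bottom else ""
        for brand in scores
    }

# Hypothetical scores for one metric; the real list had 30+ brands.
scores = {"BrandA": 9250, "BrandB": 4200, "BrandC": 3033, "BrandD": 6100}
print(bucket_brands(scores))
# {'BrandA': 'green', 'BrandB': '', 'BrandC': 'red', 'BrandD': ''}
```

Applied across all 17 measures, that rule is what produces the red-and-green "Christmas Tree" effect on the one-pager.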

What Else?

In addition to the overall scoring, we captured the raw TweetReach data and have started to look at it broken down into 5-minute increments to see which specific spots drove more (or less) social media conversation.
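The bucketing mechanics behind that breakdown are straightforward; here is a small, hypothetical Python sketch (the real data came from the TweetReach trackers, not a hard-coded list):

```python
from collections import Counter
from datetime import datetime

# Hypothetical tweet timestamps for one brand.
tweets = [
    "2011-02-06 18:31:12",
    "2011-02-06 18:33:47",
    "2011-02-06 18:36:05",
]

def five_minute_bucket(ts):
    """Round a timestamp string down to the start of its 5-minute window."""
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    return dt.replace(minute=dt.minute - dt.minute % 5, second=0)

# Count tweets per 5-minute window, then line the spikes up against
# the broadcast schedule of specific ad spots.
counts = Counter(five_minute_bucket(t) for t in tweets)
for window_start, n in sorted(counts.items()):
    print(window_start.strftime("%H:%M"), n)  # 18:30 2 / 18:35 1
```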

THAT analysis, though, is for another time!


Digital Measurement and the Frustration Gap

Earlier this week, I attended the Digital Media Measurement and Pricing Summit put on by The Strategy Institute and walked away with some real clarity about some realities of online marketing measurement. The conference, which was relatively small (fewer than 100 attendees), had a top-notch line-up, with presenters and panelists representing senior leadership at first-rate agencies such as Crispin Porter + Bogusky and Razorfish, major digital-based consumer services such as Facebook and TiVo, major audience measurement services such as comScore and Nielsen, and major brands such as Alberto Culver and Unilever. Of course, having a couple of vocal and engaged attendees from Resource Interactive really helped make the conference a success as well!

I’ll be writing a series of posts with my key takeaways from the conference, as there were a number of distinct themes and some very specific “ahas” that are interrelated but would make for an unduly long post for me to write up all at once, much less for you to read!

The Frustration Gap

One recurring theme both during the panel sessions and my discussions with other attendees is what I’m going to call The Digital Measurement Frustration Gap. Being at an agency, and especially being at an agency with a lot of consumer packaged goods (CPG) clients, I’m constantly being asked to demonstrate the “ROI of digital” or to “quantify the impact of social media.” We do a lot of measurement, and we do it well, and it drives both the efficient and effective use of our clients’ resources…but it’s seldom what is in the mind’s eye of our clients or our internal client services team when they ask us to “show the ROI.” It falls short.

This post is about what I think is going on (with some gross oversimplification), an observation that both panelists and attendees actively confirmed.

Online Marketing Is Highly Measurable

When the internet arrived, one of the highly touted benefits to marketers was that it was a medium that is so much more measurable than traditional media such as TV, print, and radio. That’s true. Even the earliest web analytics tools provided much more accurate information about visitors to web sites – how many people came, where they came from, what pages they visited, and so on – than television, print, or radio could offer. On a “measurability” spectrum ranging from “not measurable at all” to “perfectly measurable” (and lumping all offline channels together while also lumping all online channels together for the sake of simplicity), offline versus online marketing looks something like this: offline clusters down toward the “not measurable at all” end, while online sits much further along, though still well short of “perfectly measurable.”

Online marketing is wildly more measurable than offline marketing. With marketers viewing the world through their lens of experience – all grounded in the history of offline marketing – the promise of improved measurability is exciting. They know and understand the limitations of measuring the impact of offline marketing. There have been decades of research and methodology development to make measurement of offline marketing as good as it possibly can be, which has led to marketing mix modeling (MMM), the acceptance of GRPs and circulation as a good way to measure reach, and so on. These are still relatively blunt instruments, and they require accepting assumptions of scale: massive investments in certain campaigns and media, with the resulting revenue lift assessed, enable the development of models that can then be applied at a much smaller scale.

The High Bar of Expectation

Online (correctly) promised more. Much more. The problem is that “much more” actually wound up setting an expectation of “close to perfect”: an expectation sitting almost at the “perfectly measurable” end of the spectrum.

This isn’t a realistic expectation. While online marketing is much more measurable, it’s still marketing – it’s the art and science of influencing the behavior of human beings, who are messy, messy machines. While the adage that it requires, on average, seven exposures to a brand or product before a consumer actually makes a purchase decision may or may not be accurate, it is certainly true that it is rare for a single exposure to a single message in a single marketing tactic to move a significant number of consumers from complete unawareness to purchase.

So, while online marketing is much more measurable than offline marketing, it really shines at measurement of the individual tactic (including tracking of a single consumer across multiple interactions with that tactic, such as a web site). Tracking all of the interactions a consumer has with a brand – both online and offline – that influence their decision to purchase remains very, very difficult. Technically, it’s not really all that complex to do this…if we just go to an Orwellian world where every person’s action is closely tracked and monitored across channels and where that data is provided directly to marketers.

We, as consumers, are not comfortable with that idea (with good reason!). We’re willing to let you remember our login information and even to drop cookies on our computers (in some cases) because we can see that that makes for a better experience the next time we come to your site. But, we shy away from being tracked – and tracked across channels – just so marketers are better equipped to know which of our buttons to push to most effectively influence our behavior. The internet is more measurable…but it’s also a medium where consumers expect a decent level of anonymity and control.

The Frustration Gap

So, compare the expectation of online measurement to the reality, and it’s clear why marketers are frustrated: the expectation sits well above what the reality delivers.

Marketers are used to offline measurement capabilities, and they understand the technical mechanics of how consumers take in offline content, so they expect what they get, for the most part.

Online, though, there is a lot more complexity as to what bits and bytes get pushed where and when, and how they can be linked together, as well as how they can be linked to offline activity, to truly measure the impact of digital marketing tactics. And, the emergence and evolution of social media has added a slew of new “interactions with or about the brand” that consumers can have in places that are significantly less measurable than traffic to their web sites.

Consumer packaged goods brands struggle mightily with this gap. Brad Smallwood, from Facebook, showed two charts that every digital creative agency and digital media agency gnashes its teeth over on a daily basis:

  • A chart that shows the dramatic growth in the amount of time that consumers are spending online rather than offline
  • A chart that shows how digital marketing remains a relatively small part of marketing’s budget

Why, oh why, are brands willing to spend millions of dollars on TV advertising (in a world where a substantial and increasing number of consumers are watching TV through a time-shifting medium such as a DVR or TiVo) without batting an eye, yet struggle to justify spending a couple hundred thousand dollars on an online campaign? “Prove to us that we’re going to get a higher return if we spend dollars online than if we spend them on this TV ad,” they say. There’s a comfort level with the status quo – TV advertising “works” both because it’s been in use for half a century and because it’s been “proven” to work through MMM and anecdotes.

So, the frustration gap cuts two ways: traditional marketers are frustrated that online marketing has not delivered the nirvana of perfect ROI calculation, while digital marketers are frustrated that traditional marketers are willing to pour millions of dollars into a medium that everyone agrees is less measurable, while holding online marketing to an impossible standard before loosening the purse strings.

My prediction: the measurement of online will get better at the same time that traditional marketers lower their expectations, which will slowly close the frustration gap. The gap won’t be closed in 2010, and it won’t even close much in 2011 – it’s going to be a multi-year evolution, and, during those years, the capabilities of online and the ways consumers interact with brands and each other will continue to evolve. That evolution will introduce whole new channels that are “more measurable” than what we have today, but that still are not perfectly measurable. We’ll have a whole new frustration gap!


Am I Ever Behind on Posting…

August was a little crazy for me:

  • I changed jobs — left Nationwide to become Director, Measurement and Analytics at Resource Interactive — which is 1000% the “right” move, but made for a hectic/stressful month
  • Back-to-school time, which was more than just getting our kids ready — my wife ran our two sons’ elementary school’s entire supply sale…and my “I’ll show you a few tricks in Excel to help you stay organized” offer morphed into a full-blown custom ERP system built in MS Access; August was the month when all the supplies arrived (think almost 10,000 no. 2 pencils…) and had to be divvied up; I did no divvying, but there were a number of late-breaking report requests; at last count, the database had over 20 tables (it’s almost a fully normalized database), over 40 queries, 12 forms, and 20+ reports; AND…it’s now been extended to also handle the production of the school’s student directory; gotta love MS Access!
  • Company, company, company — two visits from friends in Texas, two visits from my parents, a visit from my in-laws, and my mother-in-law moved in for six weeks to convalesce from surgery…all in a 3-week period in August

I’ve got one more good customer data management post in me that needs to get written, at which point I expect to be shifting over to more web analytics-y, social media measurement-y posts going forward.

And…as I played around with Drupal for a couple of projects over the past couple of months, I realized that the theme that I settled on after weeks of experimentation on this blog…is one that was built for WordPress to mimic one of the Drupal default themes! How embarrassing!

Please be patient! My life will settle back down soon (I hope). In the meantime, if you’re going to be in Columbus in the middle of September, consider stopping by this month’s Web Analytics Wednesday on September 16th!