Analytics Strategy

Reflections from the Google Analytics Partner Summit

Having recently become a Google Analytics Certified Partner, we got to participate in our first Partner Summit out in Mountain View, California, last week. It was unfortunate that the conference conflicted with Semphonic’s XChange conference (there really aren’t that many digital analytics conferences, are there? Maybe I should publish a proposed, non-conflicting master schedule for 2013?), but I’m looking forward to reading the reflections that huddlers who were down in San Diego post to the blogosphere in the coming weeks!

On to my shareable takeaways from the Google Analytics summit…

CRAZY Coolness Is on the Way

<sigh> This is the stuff where I can’t provide any real detail. But, essentially, the first two hours of the summit were one live demo after another of very nifty enhancements to the platform, some of which are coming in the next few weeks, and some of which won’t be out until 2012. Some of the enhancements fall in the “well…the SiteCatalyst sales folk won’t be able to use that as a Google Analytics shortcoming when they’re a-bashing it” category, and some fall in the “where on earth did they come up with that — no one else is even talking about doing that” category.

Very cool stuff, and with a continuing emphasis on ease of implementation, ease of management, and a clean and usable UI. Clearly, when v5 rolled out and Google emphasized that the release was more about positioning the under-the-hood mechanics for more, better, and faster improvements in the future, they meant it. Agility and a constant stream of worthwhile enhancements are the order of the day.

I Don’t Know My Googlers

Two presenters — both spoke a couple of times, either formally or when called upon from the stage — really stood out. Maybe I’ve just been living in an oblivious world, but I wasn’t familiar with either one:

  • Phil Mui, Group Product Manager — Phil is apparently a regular favorite at the summit, and he got to run through a lot of the upcoming features; he’s a very engaging speaker, both excited about the platform and in tune (for the most part) with how and where users will be able to put the upcoming enhancements to good use
  • Sagnik Nandy, Engineering Lead, Google Analytics Backend and Infrastructure — it was a pleasure to listen to Sagnik walk through all manner of detail about how the platform works and what’s coming in the future; the backend is in good hands!

Both of these guys (all of the Googlers, actually) are genuine and excited about the platform. Avinash Kaushik’s passion and thoughtfulness (and healthy impatience with the industry) is alive and well…and entertaining as all get out!

Google Analytics Competitive Advantage

I owe Justin Cutroni for this one, but it was one of the more memorable epiphanies for me. As we chatted about GA relative to the other major web analytics players, he pointed out a fundamental difference (which I’m expanding/elaborating on here):

  • Adobe/Omniture, Webtrends, and IBM (Coremetrics and Unica) are all largely fighting on the same playing field — striving to develop products that have a better feature set at a better price than their competition. This is pretty basic stuff, but it requires pretty careful P&L management — R&D investment that, ultimately, pays a sufficient return through product revenue
  • Google is playing a different game — Google Analytics is geared towards driving revenue from Google’s other products (Google AdWords, the Google Display Network, etc.). That actually makes for a very different model: much less of a need to manage R&D investment against direct Google Analytics income (obviously), as well as a totally different marketing and selling model.

There is a certain inherent degree of commoditization of the web analytics space. With a relatively small number of players, R&D teams are focused as much on closing feature gaps that their competitors offer as they are on developing new and differentiating features. In a sense, Google is more focused on “making the web better” — raising the water level in the ocean — while the paid players are geared solely towards making their boats bigger and faster.

I fervently hope that Adobe, Webtrends, and IBM are able to remain relevant over the long term. Competition is good. But, for structural reasons, it may well be a steep uphill battle.

Silly Me — I Thought Tag Management Was a 2-Player Field

Several of the exhibitors at the conference offer some flavor of tag management. The conference was geared towards Google Analytics, so their focus was on GA, but all of them clearly had the “any tag, any JavaScript” capability that Ensighten touts (TagMan is the other player I was aware of, but, due to crossed signals, I haven’t yet seen a demo of their product).

The most impressive of these tools that I saw was Satellite from Search Discovery, which Evan LaPointe presented during Wednesday night’s blitz “app integration” session, and which he showed me in more depth on Thursday morning. In his Wednesday night presentation, Evan made a pretty forceful point that, if we’re talking about “tag management,” we’re already admitting defeat. Rather, we should be thinking about data management — the data we need to support analyses — rather than about “the tag.”

Subtle semantic framing? Perhaps. But, it falls along the same lines of the “web analytics tools are fundamentally broken” post I wrote last month that set off a vigorous discussion, and which wound up being timed such that Evan’s post about web analytics douchiness had a nice tie-in.

In short, Satellite is impressive for its rich feature set and polished UI. Equally, if not more, exciting is the mindset behind what the platform is trying to do — get analysts and marketers thinking about the data and information they need rather than the tags that will get it for them.

In Short, Not a Bad Couple of Days!

The nature of any conference is that there will be sessions and conversations that are either not informative or not relevant to the attendee. That’s just the way things go. If I walk away with a small handful of new ideas, a couple of newly established or deepened personal relationships with peers, and validation of some of my own recent thinking, I count the conference a success. The Partner Summit delivered against those criteria — there were a few sessions I could have lived without, at least one session that wildly under-delivered on its potential, and some looseness with the Day 2 schedule that made it difficult to bounce between tracks effectively. But, overall, it was a #winning event.


Analysis, Analytics Strategy, Reporting

In Defense of "Web Reporting"

Avinash’s last post attempted to describe The Difference Between Web Reporting and Web Analysis. While I have some quibbles with the core content of the post — the difference between reporting and analysis — I take real issue with the general tone that “reporting = non-value-add data puking.”

I’ve always felt that “web analytics” is a poor label for what most of us who spend a significant amount of our time with web behavioral data do day in and day out. I see three different types of information-providing:

  • Reporting — recurring delivery of the same set of metrics as a critical tool for performance monitoring and performance management
  • Analysis —  hypothesis-driven ad hoc assessment geared towards answering a business question or solving a business problem (testing and optimization falls into this bucket as well)
  • Analytics — the development and application of predictive models in the support of forecasting and planning

My dander gets raised when anyone claims or implies that our goal should be to spend all of our time and effort in only one of these areas.

Reporting <> (Necessarily) Data Puking

I’ll be the first person to decry reporting squirrel-age. I expect to go to my grave in a world where there is still all too much pulling and puking of reams of data. But (or, really, BUT, as this is a biggie), a wise and extremely good-looking man once wrote:

If you don’t have a useful performance measurement report, you have stacked the deck against yourself when it comes to delivering useful analyses.

It bears repeating, and it bears repeating that dashboards are one of the most effective means of reporting. Dashboards done well (and none of the web analytics vendors do dashboards well enough for their tools to serve as the dashboarding tool) meet a handful of dos and don’ts:

  • They DO provide an at-a-glance view of the status and trending of key indicators of performance (the so-called “Oh, shit!” metrics)
  • They DO provide that information in the context of overarching business objectives
  • They DO provide some minimal level of contextual data/information as warranted
  • They DON’T exceed a single page (single eyescan) of information
  • They DON’T require the person looking at them to “think” in order to interpret them (no mental math required, no difficult assessment of the areas of circles)
  • They DON’T try to provide “insight” with every updated instance of the dashboard

The last item in this list uses the “i” word (“insight”) and can launch a heated debate. But, it’s true: if you’re looking for your daily, weekly, monthly, or real-time-on-demand dashboard to deliver deep and meaningful insights every time someone looks at it, then either:

  • You’re not clear on the purpose of a dashboard, OR
  • You count “everything is working as expected” as a deep insight

Below is a perfectly fine (I’ll pick one nit after the picture) dashboard example. It’s for a microsite whose primary purpose is to drive registrations to an annual user conference for a major manufacturer. It is produced weekly, and it is produced in Excel, using data from SiteCatalyst, Twitalyzer, and Facebook. Is this a case of, as Avinash put it, us being paid “an extra $15 an hour to dump the data into Excel and add a color to the table header?” Well, maybe. But, by using a clunky SiteCatalyst dashboard and a quick glance at Twitalyzer and Facebook, the weekly effort to compile this is: 15 minutes. Is it worth $3.75 per week to get this? The client has said, “Absolutely!”

I said I would pick one nit, and I will. The example above does not do a good job of really calling out the key performance indicators (KPIs). It does, however, focus on the information that matters — how much traffic is coming to the site, how many registrations for the event are occurring, and what the fallout looks like in the registration process. Okay…one more nit — there is no segmentation of the traffic going on here. I’ll accept a slap on the wrist from Avinash or Gary Angel for that — at a minimum, segmenting by new vs. returning visitors would make sense, but that data wasn’t available from the tools and implementation at hand.
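For what it’s worth, the mechanics behind a dashboard like this don’t need to be any fancier than the 15-minute Excel exercise described above. Here is a minimal, hypothetical sketch of the same “at-a-glance status against target” idea in Python; the metric names, numbers, and targets are all invented for illustration and are not the client’s actual data:

```python
import pandas as pd

# Hypothetical weekly KPI snapshot -- in practice these numbers would come from
# whatever exports the tools provide (e.g., a CSV pulled from each platform).
kpis = pd.DataFrame({
    "metric": ["Visits", "Registration Starts", "Registrations Completed"],
    "this_week": [4200, 310, 190],
    "last_week": [3900, 280, 175],
    "weekly_target": [4000, 300, 200],
})

# At-a-glance status: no mental math required to spot the "Oh, shit!" metric.
kpis["wow_change"] = (kpis["this_week"] - kpis["last_week"]) / kpis["last_week"]
kpis["vs_target"] = kpis["this_week"] - kpis["weekly_target"]
kpis["status"] = kpis["vs_target"].apply(lambda d: "OK" if d >= 0 else "BEHIND")

# Registration-process fallout: what share of starts actually complete.
starts = kpis.loc[kpis["metric"] == "Registration Starts", "this_week"].iloc[0]
completes = kpis.loc[kpis["metric"] == "Registrations Completed", "this_week"].iloc[0]
fallout_rate = 1 - completes / starts

print(kpis[["metric", "this_week", "wow_change", "vs_target", "status"]])
print(f"Registration fallout this week: {fallout_rate:.0%}")
```

The tooling isn’t the point (Excel does this just fine); the point is that the whole thing stays within a single eyescan of status, trend, and registration fallout.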

An Aside About On-Dashboard Text

I find myself engaged in regular debates as to whether our dashboards should include descriptive text. The “for” argument goes much like Avinash’s implication that “no text” = “limited value.” The main beef I have with baking a text block into any sort of standardized report or dashboard design is that it assumes there is roughly the same amount to say each time the report is delivered. That isn’t my experience. In some cases, there may be quite a few key callouts for a given report…and the text area isn’t large enough to fit them all in. In other cases, in a performance monitoring context, there might not be much to say at all, other than, “All systems are functioning fine.” Invariably, when the latter occurs, in an attempt to fill the space, the analyst is forced to simply describe the information already effectively presented graphically. This doesn’t add value.

If a text-based description is warranted, it can be included as companion material. <forinstance> “Below is this week’s dashboard. If you take a look at it, you will, as I did, say, ‘Oh, shit! we have a problem!’ I am looking into the [apparent calamitous drop] in [KPI] and will provide an update within the next few hours. If you have any hypotheses as to what might be the root cause of [apparent calamitous drop], please let me know” </forinstance> This does two things:

  1. Enables the report to be delivered on a consistent schedule
  2. Engages the recipients in any potential trouble spots the (well-formed) dashboard highlights, and leverages their expertise in understanding the root cause

Which…gets us to…

Analysis

Analysis, by [my] definition, cannot be something that is scheduled/recurring/repeating. Analysis is hypothesis-driven:

  • The dashboard showed an unexpected change in KPIs. “Oh, shit!” occurred, and some root cause work is in order
  • A business question is asked: “How can we drive more Y?” Hypotheses ensue

If you are repeating the same analysis…you’re doing something wrong. By its very nature, analysis is ad hoc and varied from one analysis to another.

When it comes to the delivery of analysis results, the medium and format can vary. But, I try to stick with two key concepts — both of which are violated multiple times over in every example included in Avinash’s post:

  • The principles of effective data visualization (maximize the data-pixel ratio, minimize the use of a rainbow palette, use the best visualization to support the information you’re trying to convey, ensure “the point” really pops, avoid pie charts at all costs, …) still need to be applied
  • Guy Kawasaki’s 10-20-30 rule is widely referenced for a reason — violate it if needed, but do so with an extreme bias against doing so (aka, slideuments are evil)

While I am extremely wordy on this blog, and my emails sometimes tend in a similar direction, my analyses are not. When it comes to presenting analyses, analysts are well-served to learn from the likes of Garr Reynolds and Nancy Duarte about how to communicate effectively. It’s sooooo easy to get so caught up in our own brilliant writing that we believe every word we write is being consumed with equal care (you’re on your third reading of this brilliant blog post, are you not? No doubt trying to figure out which paragraph most deserves to be immortalized as a tattoo on your forearm, right? You’re not? What?!!!). “Dumb it down” sounds like an insult to the audience, but it isn’t. Whittle, hone, remove, repeat. We’re not talking hours and hours of iterations. We’re talking about simplifying the message and breaking it up into bite-sized, consumable, repeatable (to others) chunks of actionable information.

Analysis Isn’t Reporting

Analysis and reporting are unquestionably two very different things, but I don’t know that I agree with assertions that analysis requires an entirely different skillset from reporting. Meaningful reporting requires a different mindset and skillset from data puking, for sure. And, while reporting and analysis are different, you can’t be successful with the latter without being successful with the former.

Effective reporting requires a laser focus on business needs and business context, and the ability to crisply and effectively determine how to measure and monitor progress towards business objectives. In and of itself, that requires some creativity — there are seldom available metrics that are perfectly and directly aligned with a business objective.

Effective analysis requires creativity as well — developing reasonable hypotheses and approaches for testing them.

Both reporting and analysis require business knowledge, a clear understanding of the objectives for the site/project/campaign/initiative, a better-than-solid understanding of the underlying data being used (and its myriad caveats), and effective presentation of information. These skills make up the core of a good analyst…who will do some reporting and some analysis.

What About Analytics?

I’m a fan of analytics…but see it as pretty far along the data maturity continuum. It’s easy to pooh-pooh reporting by pointing out that it is “all about looking backwards” or “looking at where you’ve been.” But, hey, those who don’t learn from the past are condemned to repeat it, no? And, “How did that work?” or “How is that working?” are totally normal, human, helpful questions. For instance, say we did a project that, from the client’s perspective, was a fantastic success…but that, when it came to what it cost us to deliver, was abysmal. Without an appropriate look backwards, we very well might do another project the same way — good for the client, perhaps, but not for us.

In general, I avoid using the term “analytics” in my day-to-day communication. The reason is pretty simple — it’s not something I do in my daily job, and I don’t want to put on airs by applying a fancy word to good, solid reporting and analysis. At a WAW once, I actually heard someone say that they did predictive modeling. When pressed (not by me), it turned out that, to this person, that meant, “putting a trendline on historical data.” That’s not exactly congruent with my use of the term analytics.

Your Thoughts?

Is this a fair breakdown of the work? I scanned through the comments on Avinash’s post as of this writing, and I’m feeling as though I am a bit more contrarian than I would have expected.

Reporting

Measurement Strategies: Balancing Outcomes and Outputs

I’m finding myself in a lot of conversations where I’m explaining the difference between “outputs” and “outcomes.” It’s a distinction that can go a long way when it comes to laying out a measurement strategy. It’s also a distinction that can seem incredibly academic and incredibly boring (to the unenlightened, anyway!).

Outputs are simply things that happened as the result of some sort of tactic. For instance, the number of impressions for a banner ad campaign is an output of the campaign. Even the number of clickthroughs is an output — in and of itself, there is no business value of a clickthrough, but it is something that is a direct result of the campaign.

An outcome is direct business impact. “Revenue” is a classic outcome measure (as is ROI, but this post isn’t going to reiterate my views on that topic), but outcomes don’t have to be directly tied to financial results. Growing brand awareness is an outcome measure, as is growing your database of marketable contacts. Increasing the number of people who are talking about your brand in a positive manner in the blogosphere is an outcome. Visits to your web site are an outcome, although if you wanted to argue with me that they are really just an aggregated output measure — the sum of the outputs of all of the tactics that drive traffic to your site — I wouldn’t put up much of a fight.

Why Does the Distinction Matter?

The distinction between outputs and outcomes matters for two reasons:

  • At the end of the day, what really matters to a business are outcomes — if you’re only measuring outputs, then you are doing yourself a disservice
  • Measuring outputs and outcomes can help you determine whether your best opportunities for improvement lie with adjusting your strategy or with improving your tactics

Your CEO, CFO, CMO, COO, and even C-3PO (kidding!) — the people whose tushes are most visibly on the line when it comes to overall company performance — care that their Marketing department is delivering results (outcomes) and is doing so efficiently through the effective execution of tactics (outputs).

Campaign Success vs. Brand Success

Avinash Kaushik wrote a post a couple of weeks ago about the myriad ways to measure the results of a “brand campaign.” Avinash’s main point is that “this is a brand campaign, so it can’t be measured” is a cop-out. If you read the post through an “outcomes vs. outputs” lens, you’ll see that measuring “brand” tends to be more outcome-weighted than output-weighted. And (I didn’t realize this until I went back to look at the post as I was writing this one), the entire structure of the post is based on the outcomes you want for your brand — attracting new prospects, sharing your business value proposition more broadly, impressing people about your greatness, driving offline action, etc.

Avinash’s post focuses on “brand campaigns.” I would argue that all campaigns are brand campaigns — while they may have short-term, tactical goals, they’re ultimately intended to strengthen your overall brand in some fashion. You have a strategy for your brand, and that strategy is put into action through a variety of tactics — direct marketing campaigns, your web site, a Facebook page, press releases, search engine marketing, banner ads, TV advertising, and the like. Many tactics are in play at once, and they all act on your brand in varying degrees:

Tactics vs. Brand

And, of course, you also have happenstance working on your brand — a super-celebrity makes a passing comment about how much he/she  likes your product (or, on the other hand, a celebrity who endorses your product checks into rehab), you have to issue a product recall, the economy goes in the tank, or any of these happen to one of your competitors. You get the idea. The picture above doesn’t illustrate the true messiness of managing your brand and all of the other arrows that are acting on it.

Oh, and did I mention that those arrows are actually fuzzy and squiggly? It’s a messy and fickle world we marketers live in! But, here’s where outcomes and outputs actually come in handy:

  1. In a perfect world, you would measure only outcomes for your tactics…which would mostly mean you would actually measure at some point after the arrows enter the brand box above, but…
  2. You don’t live in a perfect world, so, instead, you find the places where you can measure the brand outcomes of your tactics, but, more often than not, you measure the outputs of your tactics (measuring closer to the left side of the arrows above), which means…
  3. You actually measure a mix of outcomes and outputs, which is okay!

Tactics are what’s going on on the front lines. Their outputs tend to be easily measurable. For instance, you send an e-mail to 25,000 people in your database. You can measure how many people never received it (output — bouncebacks), how many people opened it (output), how many people clicked through on it (output), and how many people ultimately made a purchase (outcome). Except the outcome…is probably something you wildly undercount, because it can be darn tough to actually track all of the people for whom the e-mail played some role in influencing their ultimate decision to buy from your company. The outputs can also be measured very soon after the tactic is executed (open rate is a highly noisy metric, I realize, but it is still useful, especially if you measure it over time for all of your outbound e-mail marketing), whereas outcomes often take a while to play out.
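To put some numbers on that e-mail example, here is a minimal sketch of the output metrics versus the one outcome metric. Every figure below is invented purely for illustration:

```python
# Hypothetical e-mail send to 25,000 contacts -- all figures invented for illustration.
sent = 25_000
bounced = 1_250      # output: never received it
opened = 5_500       # output
clicked = 900        # output
purchased = 40       # outcome (and almost certainly undercounted, per the caveat above)

delivered = sent - bounced

# Output metrics: available almost immediately after the send
print(f"Delivery rate:     {delivered / sent:.1%}")
print(f"Open rate:         {opened / delivered:.1%}")   # noisy, but useful as a trend over time
print(f"Clickthrough rate: {clicked / delivered:.1%}")

# Outcome metric: takes longer to play out, and is much harder to attribute
print(f"Purchase rate:     {purchased / delivered:.2%}")
```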

At the same time, if you ignored measuring the tactics and, instead, focused solely on measuring your brand, you would find that you were measuring almost exclusively outcomes (see Avinash’s post and think of typical corporate KPIs like revenue, profitability, customer satisfaction, etc.)…but you would also find that your measurements have limited actionability, because they reflect a complex amalgamation of tactics.

So, What’s the Point?

Measure your brand. Measure each of your tactics. Accept that measurement of the tactics is heavily output-biased and measurable on a short cycle, while measurement of your brand is heavily outcome-biased and is a much messier and more sluggish beast to affect.

Watch what happens:

  • If your brand is performing poorly (outcomes), but your tactics are all performing great (outputs), then reconsider your strategy — you chose tactics that are not effective
  • If your brand is performing poorly (outcomes) and your tactics are performing poorly (outputs), then scrutinize your execution
  • If your brand is performing well…cut out early and play some golf! Really, though, if your tactics are performing poorly, then you may still want to scrutinize your strategy, as you’re succeeding in spite of yourself!

The key is that tactics are short-term, and driving improvement in how they are executed — through process improvements, innovative execution, or just sheer opportunism — is an entirely different exercise (operating on a different — shorter — time horizon) than your strategy for your brand. Measure them both!

Reporting

Put-in-Play Percentage: A "Great Metric" for Youth Baseball?

My posts have gotten pretty sporadic (…again, sadly), and I’ll once again play the “lotta’ stuff goin’ on” card. Fortunately, it’s mostly fun stuff, but it does mean I’ve got a couple of posts written in my head that haven’t yet gotten digitized and up on the interweb. This post is one of them.

As I wrote about in my last post, I’ve recently rolled out the first version of a youth baseball scoring system that includes both a scoresheet for at-the-game scoring and a companion spreadsheet that will automatically generate a number of individual and team statistics using the data from the scoresheets. The whole system came about because I’ve been scoring for my 10-year-old’s baseball team, and I was looking for a way to efficiently generate basic baseball statistics for the players and the team over the course of the season.

The Birth of a New Baseball Statistic

After sending the coach updated stats after a couple of games mid-season, he posed this question:

Do we have any offensive stats on putting the ball in play? I’m curious to know which, if any, of the kids are connecting with the ball better than their hit stats would suggest.  That way I can work with them on power hitting.

How could I resist? I mulled the question over for a bit and then came up with a statistic I dubbed the “Put-in-Play Percentage,” or PIP. The formula is pretty simple:

Put-In-Play Percentage Formula
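(The formula image isn’t reproduced here. Based on how the stat is described later in this post, where walks, hit-by-pitches, and sacrifices are excluded just as batting average does, and the batter gets credit for anything other than a strikeout, it presumably works out to:)

\[
\mathrm{PIP} = \frac{\text{at bats} - \text{strikeouts}}{\text{at bats}}
\]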

Now, of all the sports that track player stats, baseball is at the top of the list: sabermetrics is a term coined solely to describe the practice of studying baseball statistics, Moneyball was a best-selling book, and Major League Baseball itself is fundamentally evolving to increase teams’ focus on statistics (including some pretty crazy ones — I’ve written about that before). So, how on earth could I be coming up with a new metric (and a simple one at that) that could have any value?

The answer: because this metric is specifically geared towards youth baseball.

More on that in a bit.

Blog Reader Timesaver Quiz

Question: In baseball, if a batter hits the ball, it gets fielded by the second baseman, and he throws the ball to first base and gets the batter out, did the batter get a hit?

If you answered, “Of course not!” then skip to the next section in this post. Otherwise, read on.

One of the quirks of baseball — and there are many adults as well as 10-year-olds on my son’s team who don’t understand this — is that a hit is only a hit if:

  1. The player actually reaches first base safely, and
  2. His reaching first base safely wasn’t due to a player on the other team screwing up (an error)

“Batting average” — one of the most sacred baseball statistics — is, basically, seeing what percentage of the time the player gets a hit (there’s more to it than that — if the player is walked, gets hit by a pitch, or sacrifices, the play doesn’t factor into the batting average equation…but this isn’t a post to define the ins and outs of batting average).

PIP vs. Batting Average

Batting average is a useful statistic, even with young players. But, as my son’s coach’s question alluded to, at this age, there are fundamentally two types of batters when it comes to a low batting average:

  • Players who struggle to make the split-second decision as to whether a ball is hittable or not — they strike out a lot because they pretty much just guess at when to swing
  • Players who pick good pitches to swing at…but who still lack some of the fundamental mechanics and timing of a good baseball swing — they’ll strike out some, but they’ll also hit a lot of soft grounders just because they don’t make good contact

(Side note: I’m actually one of the rare breed of people who fall into BOTH categories. That’s why I sit behind home plate and score the game…)

What the coach was looking for was some objective evidence to try to differentiate between these two types of players so that he could work with them differently. Just from observation, he knew a handful of players that fell heavily into one category or the other, but the question was whether I could provide quantitative evidence to confirm his observations and help him identify other players on the team who were more on the cusp.

And, that’s what the metric does. Excluding walks, hit by pitches, and sacrifices (just as a batting average calculation does), this statistic credits a player for, basically, not striking out.
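As a concrete illustration of how PIP separates the two types of low-average hitters described above, here is a tiny sketch with entirely made-up players and numbers (the at-bat counts already exclude walks, hit-by-pitches, and sacrifices):

```python
# Hypothetical season tallies for three players; invented numbers purely for illustration.
# "at_bats" already excludes walks, hit-by-pitches, and sacrifices,
# just as a batting average calculation does.
players = [
    # (name, at_bats, hits, strikeouts)
    ("Guesses at pitches", 20, 3, 14),      # rarely makes contact
    ("Good eye, soft contact", 20, 3, 3),   # puts the ball in play, but weakly
    ("Solid hitter", 20, 9, 2),
]

print(f"{'Player':<24}{'AVG':>8}{'PIP':>8}")
for name, at_bats, hits, strikeouts in players:
    avg = hits / at_bats                    # traditional batting average
    pip = (at_bats - strikeouts) / at_bats  # credit for putting the ball in play
    print(f"{name:<24}{avg:>8.3f}{pip:>8.3f}")
```

The first two players have identical (and low) batting averages but wildly different PIPs, which is exactly the distinction the coach was after.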

But Is It a Great Metric?

Due to one of those “lotta’ things goin’ on” projects I referenced at the beginning of this post, I had an occasion to revisit one of my favorite Avinash Kaushik posts last week, in which he listed four attributes of a great metric. How does PIP stand up to them? Let’s see!

For each attribute (with my summary of what it means), how does PIP do?

  • Uncomplex — the metric needs to be easily understandable: what it is and how it works. PIP works pretty well here. While it requires some basic understanding of baseball statistics — and that PIP is a derivation of batting average (as is on-base percentage, for that matter) — it is simply calculated and easy to explain.
  • Relevant — the metric needs to be tailored to the specific strategy and objectives it is serving. This is actually why PIP isn’t a major league baseball stat: the coach’s primary objective in youth baseball is (or should be) to teach the players the fundamentals of the game (and to enjoy the game); at the professional level, the coach’s primary objective is to win as many games as possible. PIP is geared towards youth player skill development.
  • Timely — metrics need to be provided in a timely fashion so decision-makers can make timely decisions. PIP is simple to calculate and can be updated immediately after a game. It takes me ~10 minutes to enter the data from my scorecard into my spreadsheet and generate updated statistics to send to the coach.
  • “Instantly Useful” — the metric must be able to be quickly understood so that insights can be found as soon as it is looked at. PIP met this criterion — because it met the three criteria above, the coach was able to put the information to use at the very next practice.

I’d call it a good metric on that front!

But…Did It Really Work?

As it turned out, over the course of the next two games after I first provided the coach with PIP data, 9 of the 11 players improved their season batting average. Clearly, PIP can’t entirely claim credit for that. The two teams we played were on the weaker end of the spectrum, and balls just seemed to drop a little better for us. But, I like to think it helped!

Analysis, Reporting

What is "Analysis?"

Stephen Few had a recent post, Can Computers Analyze Data?, that started: “Since ‘business analytics’ has come into vogue, like all newly popular technologies, everyone is talking about it but few are defining what it is.” Few’s post was largely a riff off of an article by Merv Adrian on the BeyeNETWORK: Today’s ‘Analytic Applications’ — Misnamed and Mistargeted. Few takes issue (rightly so) with Adrian’s implied definition of the terms “analysis” and “analytics.” Adrian outlines some fair criticisms of BI tool vendors, but Few’s beef with his definitions is justified.

Few defines data analysis as “what we do to make sense of data.” I actually think that is a bit too broad, but I agree with him that analysis, by definition, requires human beings.

With data “coming into vogue,” it’s hard to walk through a Marketing department without hearing references to “data mining” and “analytics.” Given the marketing departments I tend to walk through, and given what I know of their overall data maturity, this is often analogous to someone filling the ice cube trays in their freezer with water and speaking about it in terms of the third law of thermodynamics.

I’ve got a 3-year-old daughter, and it’s through her that I’ve discovered the Fancy Nancy series of books, in which the main character likes to be elegant and sophisticated well beyond her single-digit age. She regularly uses a word and then qualifies it as “that’s a fancy way to say…” a simpler word. For instance, she notes that “perplexed” is a fancy word for “mixed up.”

“Analytics” is a Fancy Nancy word. “Web analytics” is a wild misnomer. Most web analysts will tell you there’s a lot of work to do with just basic web site measurement. And, that work is seldom what I would consider “analytics.” As cliché as it is, you can think about data usage as a pyramid, with metrics forming the foundation and analysis (and analytics) being built on top of them.

Metrics Analysis Pyramid

There are two main types of data usage:

  • Metrics / Reporting — this is the foundation of using data effectively; it’s the way you assess whether you are meeting your objectives and achieving meaningful outcomes. Key Performance Indicators (KPIs) live squarely in the world of metrics (KPIs are a fancy way to say “meaningful metrics”). Avinash Kaushik defines KPIs brilliantly: “Measures that help you understand how you are doing against your objectives.” Metrics are backward-looking. They answer the question: “Did I achieve what I set out to do?” They are assessed against targets that were set long before the latest report was pulled. Without metrics, analysis is meaningless.
  • Analysis — analysis is all about hypothesis testing. The key with analysis is that you must have a clear objective, you must have clearly articulated hypotheses, and, unless you are simply looking to throw time and money away, you must validate that the analysis will lead to different future actions based on different possible outcomes. Analysis tends to be backward looking as well — asking questions, “Why did that happen?”…but with the expectation that, once you understand why something happened, you will take different future actions using the knowledge.

So, what about “analytics?” I asked that question of the manager of a very successful business intelligence department some years back. Her take has always resonated with me: “analytics” are forward-looking and are explicitly intended to be predictive. So, in my pyramid view, analytics is at the top of the structure — it’s “advanced analysis,” in many ways. While analysis may be performed by anyone with a spreadsheet, and hypotheses can be tested using basic charts and graphs, analytics gets into a more rigorous statistical world: more complex analysis that requires more sophisticated techniques, often using larger data sets and looking for results that are much more subtle. AND, using those results, in many cases, to build a predictive model that is truly forward-looking.

The key is that the foundation of your business (whether it’s the entire company, or just your department, or even just your own individual role) is your vision. From your vision comes your strategy. From your strategy come your objectives and your tactics. If you’re looking to use data, the best place to start is with those objectives — how can you measure whether you are meeting them, and, with the measures you settle on, what is the threshold whereby you would consider that you achieved your objective? Attempting to do any analysis (much less analytics!) before really nailing down a solid foundation of objectives-oriented metrics is like trying to build a pyramid from the top down. It won’t work.

Analysis, Presentation, Reporting

The "Action Dashboard" — Avinash Mounts My Favorite Soapbox

Avinash Kaushik has a great post today titled The “Action Dashboard” (An Alternative to Crappy Dashboards). As usual, Avinash is spot-on with his observations about how to make data truly useful. He provides a pretty interesting 4-quadrant dashboard framework (as a transitional step to an even more powerful dashboard). I’ve gotten red in the face more times than I care to count when it comes to trying to get some of the concepts he presents across. It’s a slow process that requires quite a bit of patience. For a more complete take on my thoughts, check out my post over on the Bulldog Solutions blog.

And, yes, I’m posting here and pointing to another post that I wrote on a completely different blog. We’ve recently re-launched the Bulldog Solutions blog — new platform, and, we hope, a more focused purpose and strategy. What I haven’t fully worked out yet is how to determine when to post here and when to post there…and when to post here AND there (like this post).

It may be that we find out that we’re not quite as ready to be as transparent as we ought to be over on the corporate blog, in which case this blog may get some posts that are more “my fringe opinion” than will fly on the corporate blog. I don’t know. We’ll see. I know I’m not the first person to face the challenge of contributing to multiple blogs (I’ve also got my wife’s and my personal blog…but that one’s pretty easy to carve off).

Analytics Strategy, Reporting

ROI — the Holy Grail of Marketing (and Roughly as Attainable)

The topic of “Marketing ROI” has crossed my inbox and feed reader on several different fronts over the past few weeks. I don’t know if the subject actually has peaks and valleys, or if it’s just that my biorhythms periodically hit a point where the subject seems to bubble up in my consciousness.

The good news is that the recent material I’ve seen has had a good solid theme of, “Don’t focus too much on truly calculating ROI.” The bad news is that that message has been in response — directly or indirectly — to someone who is trying to do just that.

One really in-depth post came from — no surprise — My Hero Avinash Kaushik. He did a lengthy post, including five embedded videos, each 4-9 minutes long: Standard Metrics #5: Conversion / ROI Attribution. What the post does is walk through a series of scenarios where a Marketer might be trying to calculate the ROI for their search engine marketing (SEM) spend. He starts with the “ideal” scenario: a visitor does a search, clicks on a sponsored link, comes to the site, moves through, and makes a purchase. In that case, calculating/attributing ROI is very simple. But, that’s just a setup for the other scenarios…which are wayyyyyy closer to reality. The challenge is that, as Marketers, we all too often ignore our own typical behavior and common sense so that we can assume that most of our potential customers behave in an overly simplistic way. When was the last time you did a search, clicked on a sponsored link, and then, during that visit, made a purchase?
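To make the attribution problem concrete, here is a small sketch with numbers I have invented for illustration (they are mine, not Avinash’s):

```python
# Invented numbers purely to illustrate the attribution problem -- not from Avinash's post.
sem_spend = 1_000.00       # what we paid for the sponsored clicks
orders = 20                # orders where the *last* click was a paid search click
margin_per_order = 75.00

# The "ideal scenario" math: every one of those orders is credited entirely to SEM.
last_click_roi = (orders * margin_per_order - sem_spend) / sem_spend
print(f"Last-click ROI: {last_click_roi:.0%}")  # 50% -- looks precise and healthy

# The realistic scenario: many of those buyers also saw an e-mail, an organic listing,
# a banner, or a friend's recommendation along the way. If SEM only deserves, say,
# half of the credit, the "same" calculation tells a very different story.
assumed_sem_share = 0.5
adjusted_roi = (orders * margin_per_order * assumed_sem_share - sem_spend) / sem_spend
print(f"ROI with an (arbitrary) 50% credit share: {adjusted_roi:.0%}")  # -25%
```

The point isn’t which of those two numbers is right. The point is that the answer swings from healthy to underwater based entirely on an attribution assumption that any “formal ROI tracking system” quietly has to make.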

Unfortunately, very, very, very few Marketing executives would ever actually spend the 45 minutes it would take to truly consume all of Avinash’s post.  And, honestly, that’s not really “the solution.” The smart Marketing executive will find the Avinashes of the world and will hire them and trust them. Avinash (and John Marshall) really make the case that “time on site” is a more useful metric for assessing the effectiveness of your SEM spend — ROI just brings in too many variables and too much complexity.

In short: Don’t treat ROI as the Holy Grail and try to tie every one of your marketing tactics to “revenue generated.” For one thing, you will head down so many rat holes that you’ll start drooling whenever someone says, “cheese.” For another thing, you will find yourself facing decisions that seem right based on your ROI calculation…but that you just know are wrong.

Another place where this topic came up was in a thread titled ROI Models – High Level Thinking on the webanalytics Yahoo! group. I responded, but others chimed in as well. Some of those responses, in my mind, are still a bit too accepting of the premise that “I need to calculate a hard ROI.” But, other responses go more to a “back up and don’t look at ROI as the be-all/end-all.”

And, finally, ROI crossed my inbox last week by way of a CMO Council press release from back in January. I saw this when it came out, but a colleague forwarded it along last week, which prompted me to re-read it. The press release emphasized how much marketers are focusing on accountability when it comes to their marketing investments. One data point that jumped out was “34 percent [of marketers] said they were planning to introduce a formal ROI tracking system.” This is an alarming statistic. Marketers absolutely should be focusing on accountability — finding ways that they can measure and analyze the results of their efforts. But, if they truly are framing this as the need for “a formal ROI tracking system,” then that means 34 percent of marketers are going to be largely chasing their tails rather than driving business value.