Reporting

Measurement Strategies: Balancing Outcomes and Outputs

I’m finding myself in a lot of conversations where I’m explaining the difference between “outputs” and “outcomes.” It’s a distinction that can go a long way when it comes to laying out a measurement strategy. It’s also a distinction that can seem incredibly academic and incredibly boring…to the unenlightened!

Outputs are simply things that happened as the result of some sort of tactic. For instance, the number of impressions for a banner ad campaign is an output of the campaign. Even the number of clickthroughs is an output — in and of itself, there is no business value of a clickthrough, but it is something that is a direct result of the campaign.

An outcome is direct business impact. “Revenue” is a classic outcome measure (as is ROI, but this post isn’t going to reiterate my views on that topic), but outcomes don’t have to be directly tied to financial results. Growing brand awareness is an outcome measure, as is growing your database of marketable contacts. Increasing the number of people who are talking about your brand in a positive manner in the blogosphere is an outcome. Visits to your web site is an outcome, although if you wanted to argue with me that it is really just an aggregated output measure — the sum of outputs of all of the tactics that drive traffic to your site — I wouldn’t put up much of a fight.

Why Does the Distinction Matter?

The distinction between outputs and outcomes matters for two reasons:

  • At the end of the day, what really matters to a business are outcomes — if you’re only measuring outputs, then you are doing yourself a disservice
  • Measuring outputs and outcomes can help you determine whether your best opportunities for improvement lie with adjusting your strategy or with improving your tactics

Your CEO, CFO, CMO, COO, and even C-3PO (kidding!) — the people whose tushes are most visibly on the line when it comes to overall company performance — care that their Marketing department is delivering results (outcomes) and is doing so efficiently through the effective execution of tactics (outputs).

Campaign Success vs. Brand Success

Avinash Kaushik wrote a post a couple of weeks ago about the myriad ways to measure the results of a “brand campaign.” Avinash’s main point is that “this is a brand campaign, so it can’t be measured” is a cop-out. If you read the post through an “outcomes vs. outputs” lens, you’ll see that measuring “brand” tends to be more outcome-weighted than output-weighted. And (I didn’t realize this until I went back to look at the post as I was writing this one), the entire structure of the post is based on the outcomes you want for your brand — attracting new prospects, sharing your business value proposition more broadly, impressing people about your greatness, driving offline action, etc.

Avinash’s post focuses on “brand campaigns.” I would argue that all campaigns are brand campaigns — while they may have short-term, tactical goals, they’re ultimately intended to strengthen your overall brand in some fashion. You have a strategy for your brand, and that strategy is put into action through a variety of tactics — direct marketing campaigns, your web site, a Facebook page, press releases, search engine marketing, banner ads, TV advertising, and the like. Many tactics are in play at once, and they all act on your brand in varying degrees:

Tactics vs. Brand

And, of course, you also have happenstance working on your brand — a super-celebrity makes a passing comment about how much he/she likes your product (or, on the other hand, a celebrity who endorses your product checks into rehab), you have to issue a product recall, the economy goes in the tank, or any of these happen to one of your competitors. You get the idea. The picture above doesn’t illustrate the true messiness of managing your brand and all of the other arrows that are acting on it.

Oh, and did I mention that those arrows are actually fuzzy and squiggly? It’s a messy and fickle world we marketers live in! But, here’s where outcomes and outputs actually come in handy:

  1. In a perfect world, you would measure only outcomes for your tactics…which would mostly mean you would actually measure at some point after the arrows enter the brand box above, but…
  2. You don’t live in a perfect world, so, instead, you find the places where you can measure the brand outcomes of your tactics, but, more often than not, you measure the outputs of your tactics (measuring closer to the left side of the arrows above), which means…
  3. You actually measure a mix of outcomes and outputs, which is okay!

Tactics are what’s going on on the front lines. Their outputs tend to be easily measurable. For instance, you send an e-mail to 25,000 people in your database. You can measure how many people never received it (output — bouncebacks), how many people opened it (output), how many people clicked through on it (output), and how many people ultimately made a purchase (outcome). Except the outcome…is probably something you wildly undercount, because it can be darn tough to actually track all of the people for whom the e-mail played some role in influencing their ultimate decision to buy from your company. The outputs can also be measured very soon after the tactic is executed (open rate is a highly noisy metric, I realize, but it is still useful, especially if you measure it over time for all of your outbound e-mail marketing), whereas outcomes often take a while to play out.
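To make that concrete, here's a minimal sketch of the output math for a hypothetical send (all of the counts below are invented for illustration):

```python
# Hypothetical e-mail campaign counts -- illustrative only
sent = 25_000
bounced = 1_200     # never received it (output)
opened = 5_500      # opened it (output)
clicked = 900       # clicked through (output)
purchased = 40      # trackable purchases (outcome -- almost certainly undercounted)

delivered = sent - bounced
bounce_rate = bounced / sent
open_rate = opened / delivered          # opens rated against delivered, not sent
clickthrough_rate = clicked / delivered
conversion_rate = purchased / clicked   # the outcome, and only the trackable slice of it

print(f"Bounce rate:       {bounce_rate:.1%}")
print(f"Open rate:         {open_rate:.1%}")
print(f"Clickthrough rate: {clickthrough_rate:.1%}")
print(f"Conversion rate:   {conversion_rate:.1%}")
```

Every line except the last is an output you can read the morning after the send; the conversion number dribbles in over weeks and still misses every purchase the e-mail merely influenced.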

At the same time, if you ignored measuring the tactics and, instead, focused solely on measuring your brand, you would find that you were measuring almost exclusively outcomes (see Avinash’s post and think of typical corporate KPIs like revenue, profitability, customer satisfaction, etc.)…but you would also find that your measurements have limited actionability, because they reflect a complex amalgamation of tactics.

So, What’s the Point?

Measure your brand. Measure each of your tactics. Accept that measurement of the tactics is heavily output-biased and measurable on a short cycle, while measurement of your brand is heavily outcome-biased and is a much messier and sluggish beast to affect.

Watch what happens:

  • If your brand is performing poorly (outcomes), but your tactics are all performing great (outputs), then reconsider your strategy — you chose tactics that are not effective
  • If your brand is performing poorly (outcomes) and your tactics are performing poorly (outputs), then scrutinize your execution
  • If your brand is performing well…cut out early and play some golf! Really, though, if your tactics are performing poorly, then you may still want to scrutinize your strategy, as you’re succeeding in spite of yourself!
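Those three scenarios boil down to a tiny decision table. As a sketch, with boolean good/poor inputs standing in for whatever threshold assessments you'd actually make against your own metrics:

```python
def diagnose(brand_outcomes_good: bool, tactic_outputs_good: bool) -> str:
    """Map brand performance (outcomes) and tactic performance (outputs)
    to a next step. The labels are illustrative; a real assessment would
    be metric-by-metric, not a single boolean per side."""
    if not brand_outcomes_good and tactic_outputs_good:
        return "Reconsider your strategy -- the tactics you chose are not effective"
    if not brand_outcomes_good and not tactic_outputs_good:
        return "Scrutinize your execution"
    if brand_outcomes_good and not tactic_outputs_good:
        return "Scrutinize your strategy anyway -- you're succeeding in spite of yourself"
    return "Keep it up (and maybe cut out early for some golf)"

print(diagnose(brand_outcomes_good=False, tactic_outputs_good=True))
```

The value of writing it out this way is seeing that neither measurement alone tells you where to act; it's the combination that points at strategy versus execution.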

The key is that tactics are short-term, and driving improvement in how they are executed — through process improvements, innovative execution, or just sheer opportunism — is an entirely different exercise (operating on a different — shorter — time horizon) than your strategy for your brand. Measure them both!

Reporting, Social Media

A Great Starting Point for Social Media ROI

Yesterday, I wrote about my beef with the popular cliché that “ROI for social media is Return on Influence.” This latest take was prompted by Connie Bensen’s ROI of a Community Manager post that has some great thoughts when it comes to measuring the value of social media.

As I put in my last post, quantifying the results of your social media investment is a worthwhile endeavor. Mapping those results to business value can be tricky, but it’s important to make the effort. As I implied yesterday, a Darwinian Take on Business says that the key decisionmakers are probably pretty sharp about the business they’re helping to run. They’re probably not sitting back and making every decision based on a simplistic ROI calculation. Talk to them about the business when you’re talking about social media.

Connie’s post offers a pretty great starting point for this exercise. And, at the risk of exhibiting excruciatingly poor form blogging-wise, I’m just going to repeat it here. This is Connie’s list of the ways that investing in social media can provide value to the company. The investment can:

  • Humanize the company by providing a voice
  • Nurture the community & encourage growth
  • Communicate directly with the customers
  • Connect customers to appropriate internal departments
  • Ensure that messaging will connect
  • Build brand awareness through word of mouth
  • Lower market research costs
  • Add more points in the purchase cycle
  • Provide support to customers that have fallen thru the cracks
  • More satisfied customers because they’ve been involved with product development
  • Shorten length of product development cycle
  • Build public relations for brand with influentials in the industry
  • Identify strengths & weaknesses of competitors
  • Collaborate & partner with related organizations
  • Provide industry trends to the executive level

Which of these resonate the most with you as something that your company values highly or that your company is struggling to do effectively? How do you know that? Are there anecdotes that are widely circulated? Are there metrics that get shared regularly to either illustrate how important the area is to the company…or how much of an uphill battle the company is facing?

Start there. Don’t jump from what you come up with on that front to “…and here’s what we’re going to measure.” Start there and then develop a social media strategy (read more of Connie’s blog…and Jeremiah Owyang’s…and Chris Brogan’s…and others for tips on that). From that strategy, you can then develop your measures — the way you’re going to assess the value of your social media efforts.

Photo courtesy of cambodia4kidsorg

Reporting, Social Media

Social Media ROI: Stop the Insanity!

I’ve taken a run at this before…but my assertion that the emperor has no clothes didn’t stick. Either that, or the dozens of people who read this blog simply agree with me in principle, but don’t really think it’s worth the effort to raise a stink.

Regardless, I’m not quite ready to let it go. And I do think this is important. Connie Bensen’s recent post (cross-posted on the Marketing 2.0 blog) on the subject had me cheering…and crying…at the same time!

Maybe it’s because I’ve had the good fortune to know and work with some incredibly sharp CFO-types in my day. Most notably, for my entire eight years at National Instruments, the CFO (not necessarily his official title the whole time, but that was his role) was Alex Davern — a diminutively statured, prematurely white-haired Irishman who arguably knows the company’s business and market as well or better than anyone else in the company. He is a numbers guy by training…who gets that numbers are a tool, a darn important tool, but not the be-all end-all.

I had to sit down with — or stand up in front of — Alex on several occasions and push initiatives that had a hefty price tag for which I was a champion or at least a key stakeholder — a web content management system, a web analytics tool, and a customer data integration initiative. I never had to pitch a social media initiative to Alex, and I don’t know exactly how I would have done it. But, I seriously doubt that I would have pitched that “ROI is Return on Influence when it comes to social media.” I can feel the pain in my legs as I write this, just imagining myself being taken down at the knees by his Irish brogue.

Here’s the deal. Let’s back up to ROI as return on investment. Return. On. Investment. It’s a formula:

ROI = Return / Investment

Both numbers have the same unit of measure — let’s go with US dollars — so that the end result is a straight-up ratio. Measured as a percentage. This is a bit of an oversimplification, and there are scads of ways to actually calculate ROI. A pretty common one is to use “net income” as the Return, and “book value of assets” as the Investment. With me so far? You acquired the assets along the way, and they have some worth (let’s not go down the path of that you might have spent more…or less…to acquire them than their “book value”). The return is how much money they made for you.
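In those terms, the arithmetic itself is trivial; everything hard lives in how you arrive at the two inputs. A sketch, with made-up figures:

```python
def roi(return_dollars: float, investment_dollars: float) -> float:
    """Plain-vanilla ROI: both inputs in the same unit (dollars here),
    so the result is a straight-up ratio, reportable as a percentage."""
    return return_dollars / investment_dollars

# Made-up figures: $150K net income against $1.2M book value of assets
print(f"ROI: {roi(150_000, 1_200_000):.1%}")  # prints "ROI: 12.5%"
```

The one-liner is the point: the formula only works because both sides are in like units, which is exactly what "Return on Influence" gives up.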

Now, let’s look at ROI as “Return on Influence” (I’ll skip “Return on Interaction” here — I can get plenty verbose without a repetitive example):

ROI = Return / Influence

Hmmm… The construct starts to break down on several fronts. First off, you’re going to have a hard time measuring both of these in like units. That’s sorta’ the point of all of the debate on ROI — “influence” is hard to quantify. But, that’s not actually the main beef I have on this front. At the end of the day, your return is still “what value did we garner from our social media efforts?” Maybe that isn’t measured in direct monetary terms. But, really, is this whole discussion about mapping the level of Influence to some Return, or, rather, is it about assessing the Influence that you garner from some Investment? A more appropriate (conceptual) formula would be:

IOI = Influence / Investment

But, IOI, as pleasantly symmetrical as it is, really doesn’t get us very far, does it? So, let’s go back to Alex as a proxy for the Finance-oriented decision-makers in your company. You have two options when making your case for social media investment:

  • The Cutesy Option — waltz in with an opening that, frankly, is a bit patronizing: “What you have to understand about ROI when it comes to social media is that ROI is really Return on Influence rather than Return on Investment”
  • The Value Option — know your business (chances are the Finance person does); know your company’s strategy; know the challenges your company is facing; frame your pitch in those terms

Obviously, I’m a proponent of the second. I don’t really have a problem with starting the discussion with, “Trying to do an ROI calculation on a social media investment is, at best, extremely difficult and, at worst, not possible. But, there is real value to the business, and that’s what I’m going to talk about with you. And, I’ll talk about how we can quantify that value and the results we think we can achieve.”

Connie’s post has a great list to work from for that case. But…more on that in my next post.

Oh, yeah. The picture at the beginning of this post. And the title. Susan Powter, people! Stop the insanity!!!

Reporting

So, You Think Measuring Marketing Performance Is Hard?

Not a week goes by that I don’t see, hear, read, or preach on the topic of measuring marketing results. From equating Marketing ROI to The Holy Grail, to sticking my tongue in my cheek to the point of meanness when it comes to a “simple” process for establishing corporate metrics, to mulling over Marketing ROI vs. Marketing Accountability, there really is no end to the real-world examples that warrant commentary. The reason? Because it’s hard to figure out how to measure marketing’s impact in a meaningful way. It can be done, and it needs to be done, but it requires having a very clearly defined strategy and objectives to do it well, and, even then, the measurement is not as perfect and precise as we would like it to be.

So…it’s hard. I agree.

Try being a non-profit.

I do some volunteer work with the United Way of Central Ohio. Specifically, I sit on the Meeting Emergency and Short Term Basic Needs Impact Council, as well as the Emergency Food, Shelter, and Financial Assistance Results Committee that reports into that impact council, as well as the Emergency Food, Shelter, and Financial Assistance Performance Measures Ad Hoc Committee, which reports into the results committee. Yeah. A mouthful, to say the least. But, it’s the ad hoc committee that has been doing the most tangible work of late and, lookie there!, it’s a committee geared towards performance measurement. Some of the work of that committee inspired an Outputs vs. Outcomes post earlier this year. I find a lot of parallels between measurement in the non-profit world and measurement in the Marketing world.

One difference is that, while Marketers (broad generalization alert!) typically view measurement as a necessary evil — they do want to be data-driven, and they understand the conceptual value of doing measurement…but it’s simply not baked into their DNA to truly want to do it — nonprofits increasingly view measurement as a necessity. (At least) two reasons for this:

  • In the nonprofit world, resources are pretty much infinitely scarce — no agency has a real surplus of the services they supply; if they actually get to a point where they’ve got one area reasonably well covered…they expand their offering to meet other needs of their clients
  • Donors want to know that their investment is making a difference — on the surface, this may seem similar to investors in a publicly held company; but, investors look at revenue, profitability and growth — financial measures — much more than they scrutinize “Marketing” results (although the “average tenure of a CMO is 27 months” is a stat that gets bandied around quite a bit, so there is some flow down the chain of command to Marketing for accountability); donors to nonprofits are scrutinizing “results” that need to be tied to the agency’s efforts (their investment) and meaningful in an oftentimes relatively soft context

As more and more nonprofits are being driven to collaborate to gain efficiency, more of them are working with foundations or some sort of umbrella organizing/coordinating entity. The Community Shelter Board in Columbus is a really good example of this. It’s an organization that, on its own, does not provide any direct services…but most of the homeless shelters in the area receive funding and some level of direction from the organization. And they do some pretty nice quarterly indicator reports — using plain ol’ Excel. They do it right by: 1) choosing metrics that matter and balance each other, 2) setting targets for those metrics and assessing each metric against its target, and 3) providing a contextual analysis of the results for each set of metrics.  Two thumbs up there.
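That metric/target/assessment pattern doesn't require anything fancier than the Excel they're already using. As a minimal sketch (the metric names and numbers here are invented, not the Community Shelter Board's actual indicators):

```python
# Each entry: (metric, actual, target, higher_is_better)
# A balanced set: a volume metric, an efficiency metric, and an outcome metric
indicators = [
    ("Households served",           1_150, 1_000, True),
    ("Avg. shelter stay (days)",       42,    35, False),
    ("Successful housing exits",      310,   400, True),
]

for name, actual, target, higher_is_better in indicators:
    on_track = actual >= target if higher_is_better else actual <= target
    status = "on target" if on_track else "below target"
    print(f"{name}: {actual} vs. target {target} -- {status}")
```

The point isn't the tooling; it's the discipline of pairing every metric with an explicit target and an assessment against it, and then wrapping the whole thing in contextual analysis.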

Right now, the United Way of Central Ohio is trying to do something similar — narrowing its focus, establishing clear strategies in each area, and then homing in on meaningful performance measures for each strategy. It’s a fairly grueling exercise, but well worth undertaking. We constantly find ourselves battling the tendency to broaden the scope of a strategy — it’s hard to find any nonprofit that isn’t doing good work, but trying to support “everything that is good” means not really moving any of the needles in a meaningful way.

One similarity I’ve seen between the non-profit world and Marketing in the for-profit world has to do with capturing data. I touched on this in my post on being data-oriented vs. process-oriented. When trying to establish good, meaningful metrics, it can be very tempting to envision ways the data you want would be captured through a minor process change: “When the inside sales representative answers the phone, we will have him/her ask the caller where they heard about the company and get that recorded in the system so we’ll be able to tie the caller back to specific (or at least general) Marketing activity” or “In order to verify that our agency referral program is working, we’ll call the client we referred 1-2 weeks after the referral to find out if the referral was appropriate and got them the services they needed.”

This is dangerous territory. The reason? In both cases, you’re inserting overhead into a process, and that overhead is not inherently and immediately valuable to the person using the process. Sure, it’s valuable in that you can sit back and assess the data later and determine what is/is not working about the process and use that information to come back and make improvements…but that’s an awfully abstract concept to the person who is answering the telephone day in and day out (in both of the above examples). I’ll take an imperfect proxy metric that adds zero overhead to the process that generates it any day over a more perfect metric that requires adding “jus’ a li’l” complexity to the process. And, you know what? My metric will be more accurate!

Photo by batega

Analytics Strategy, Reporting, Social Media

Measuring ROI Around Web 2.0…and More Webinars (geesh!)

Awareness (the company) has a Measuring ROI Around Web 2.0 webinar this Thursday, May 22, at 2:00 PM EDT. That’s heavy on the buzzwords, but it sounds like it might have some interesting information. And, I found out about it thanks to a mention on Twitter from Connie Bensen, who will be leaving her new kayak behind and heading to London and Paris for some R&R, so will be missing the live event herself.

Unfortunately, it partially conflicts with Kalido’s What’s Behind Your BI? webinar, which starts at 2:30 PM EDT, and it conflicts with Fusing Field Marketing and Sales, which Hoover’s and Bulldog Solutions are putting on at 2:00 PM EDT on Thursday as well.

It looks like I’ll be doing some on-demand catch-up after the fact.

Reporting, Social Media

Death to "Marketing ROI is Return on Influence"…Please!!!

I realized that my Data Posts from Non-Data Blogs Yahoo! pipe wasn’t working correctly, and when I fixed it, a recent post from Debbie Weil at BlogWrite for CEOs popped up: More on the ROI of Social Media: Return on Influence. Ordinarily, I’m a big fan of Weil’s thoughts, but this one had me wondering if I ought to try to track down some blood pressure medication. Weil by no means invented the phrase (and does not claim to have), “When it comes to social media, ROI really means ‘return on influence,'” but, sadly, she has jumped right on that misguided bandwagon.

Maybe it’s that I was raised in a house where one parent was an engineer and the other was an English major. Maybe it’s because I’ve got a contrarian bent — a slight one (I like “alternative” music but not “experimental” music). For whatever reason, “ROI is return on influence” has stuck in my craw from the first time I heard it. And it still makes me twitch whenever I stumble across a post where someone waxes eloquent about the genius of the phrase.

Weil has a couple of “short answers” for why return on influence makes sense. Her first is that it makes sense “because the return is soft. The benefits of incorporating social media strategies into your marketing are real (and can no longer be ignored) but they’re not normally measured in dollars.” I have no argument with any part of that assertion after the word “because.” Weil points out that the return is soft. So, why isn’t the “return” being replaced in this platitude? “Influence from (social media) investment” I get. And that is something that you should try to measure.

Are you still with me? No one who has picked up this phrase has stopped to think that it doesn’t make sense! If you develop influence in your market, then you will get a return, which may or may not be soft. But, are you trying to measure the return on that influence, or are you trying to measure the influence that you garnered by engaging in social media?

Marketers really are freaked out by the increasing focus on Marketing ROI. That focus is driven by CEOs and CFOs. In my experience, CFOs are pretty sharp people. They get that Marketing is important. What they want is accountability, efficiency, and effectiveness from Marketing. They want to know that the chunk of the company’s budget that is being invested in Marketing is being well-used. Unfortunately, they communicate that imperative in financial terms: “What’s the ROI?” They’re Finance people, folks! What would you expect?

Marketers, rather than getting to the heart of delivering business value — driving improvements in efficiency and effectiveness, and demonstrating results — have instead gone nutso with, “I have to show ROI!” Return on Influence is a headless-chicken response to this belief. And, almost comically, it has resulted in a classic marketing response: “Let’s spin and message it! Let’s talk about how, for Marketing in the social media world, ROI really stands for ‘Return On Influence.'”

Oh, man oh man, what I would pay to sit in the room when a Fortune 1000 CMO proudly rolls out that explanation to the CFO. It completely, utterly, totally, and ridiculously misses the point.

Accountability and continuous improvement, people: the executives in your company are not stupid (if you think they are, then they either are, or they aren’t but you think they are: in either case, find a new company). Understand what you are trying to accomplish with your social media strategy. Is it to build your brand? Is it to engage with your most avid customers? Is it to position your company as being full of cutting-edge thought leaders? Articulate that. Measure whether you are making headway with your efforts.

Am I right?

Analytics Strategy, Reporting

ROI — the Holy Grail of Marketing (and Roughly as Attainable)

The topic of “Marketing ROI” has crossed my inbox and feed reader on several different fronts over the past few weeks. I don’t know if the subject actually has peaks and valleys, or if it’s just that my biorhythms periodically hit a point where the subject seems to bubble up in my consciousness.

The good news is that the recent material I’ve seen has had a good solid theme of, “Don’t focus too much on truly calculating ROI.” The bad news is that that message has been in response — directly or indirectly — to someone who is trying to do just that.

One really in-depth post came from — no surprise — My Hero Avinash Kaushik. He did a lengthy post, including five embedded videos, each 4-9 minutes long: Standard Metrics #5: Conversion / ROI Attribution. What the post does is walk through a series of scenarios where a Marketer might be trying to calculate the ROI for their search engine marketing (SEM) spend. He starts with the “ideal” scenario: a visitor does a search, clicks on a sponsored link, comes to the site, moves through and makes a purchase. In that case, calculating/attributing ROI is very simple. But, that’s just a setup for the other scenarios…which are wayyyyyy closer to reality. The challenge is that, as Marketers, we all too often ignore our own typical behavior and common sense so that we can assume that most of our potential customers behave in an overly simplistic way. When was the last time you did a search, clicked on a sponsored link, and then, during that visit, made a purchase?
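To see why those more realistic scenarios matter, here's a toy comparison of two attribution approaches for a single, invented purchase path. Notice how much SEM credit depends entirely on the attribution rule you pick:

```python
# One customer's path to a $100 purchase -- touchpoints invented for illustration
touchpoints = ["organic search", "paid search (SEM)", "e-mail", "direct visit"]
revenue = 100.0

# Last-click attribution: the final touch gets all the credit
last_click = {tp: 0.0 for tp in touchpoints}
last_click[touchpoints[-1]] = revenue

# Linear (equal-weight) attribution: every touch gets an even share
linear = {tp: revenue / len(touchpoints) for tp in touchpoints}

print("SEM credit, last-click:", last_click["paid search (SEM)"])
print("SEM credit, linear:    ", linear["paid search (SEM)"])
```

Neither rule is "right"; the point is that the SEM line on an ROI report can swing from $0 to a quarter of the revenue based purely on an attribution assumption.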

Unfortunately, very, very, very few Marketing executives would ever actually spend the 45 minutes it would take to truly consume all of Avinash’s post.  And, honestly, that’s not really “the solution.” The smart Marketing executive will find the Avinashes of the world and will hire them and trust them. Avinash (and John Marshall) really make the case that “time on site” is a more useful metric for assessing the effectiveness of your SEM spend — ROI just brings in too many variables and too much complexity.

In short: Don’t treat ROI as the Holy Grail and try to tie every one of your marketing tactics to “revenue generated.” For one thing, you will head down so many rat holes that you’ll start drooling whenever someone says, “cheese.” For another thing, you will find yourself facing decisions that seem right based on your ROI calculation…but that you just know are wrong.

Another place where this topic came up was in a thread titled ROI Models – High Level Thinking on the webanalytics Yahoo! group. I responded, but others chimed in as well. Some of those responses, in my mind, are still a bit too accepting of the premise that “I need to calculate a hard ROI.” But, other responses go more to a “back up and don’t look at ROI as the be-all/end-all.”

And, finally, ROI crossed my inbox last week by way of a CMO Council press release from back in January. I saw this when it came out, but a colleague forwarded it along last week, which prompted me to re-read it. The press release emphasized how much marketers are focusing on accountability when it comes to their marketing investments. One data point that jumped out was “34 percent [of marketers] said they were planning to introduce a formal ROI tracking system.” This is an alarming statistic. Marketers absolutely should be focusing on accountability — finding ways that they can measure and analyze the results of their efforts. But, if they truly are framing this as the need for “a formal ROI tracking system,” then that means 34 percent of marketers are going to be largely chasing their tails rather than driving business value.

Analysis, Presentation

Sometimes, the Data DOES Paint a Clear Picture

I’ll admit right up front that this is the least value-add post on this blog to date. Part of me sincerely hopes that it holds that distinction indefinitely. But, I know me better than that, so no promises.

We all have them. Those moments where someone says something — in person, in an e-mail, in an instant message — that triggers a completely random, but oddly inspired, response.

What happened: One of my pet peeves is the cliché, “If you can’t measure it, don’t do it.” It sounds good, but I challenge any company to fully apply this overly simplistic maxim and survive. I’m all for having a bias towards measurement, but I get nervous when people speak in absolutes like this.

Earlier this week, I fired off an internal e-mail proposing an extremely low-cost initiative that seemed like a good idea to me. It really wasn’t an initiative where it made sense to try to quantify the benefits, though. I made a comment to that effect in the e-mail — that, despite it not being practical to measure the results, I still thought it was a good idea. (I was having one of the 15-20 snarky moments I have throughout any given day.) Two of the five people on the distribution list immediately responded with demands for an ROI estimate.

FLASH!

10 minutes later, and I’d fashioned the following chart in Excel and responded to the group with my analysis:

The Bird

Everyone had a good chuckle.

Here’s the spreadsheet file itself. It’s as clean as clean can be, so feel free to snag it and put it to your own use. If you put it to use with entertaining results, I’d appreciate a quick comment with the tale. Or, if you make modifications to enhance the end result, I’d love to get a copy.

Enjoy.

Reporting

Outputs vs. Outcomes

I’ve been involved with United Way for the past seven or eight years in Austin and, now, in Columbus. One of the attractions of spending my volunteer energy with United Way is that they are very accountability-focused. That means that, in their agency funding cycle, they require agencies that are requesting funding to specify measures and targets for the specific programs they describe in their funding requests.

For the last few months, I’ve been getting involved with the United Way of Central Ohio (side note: if you’ve thought about doing volunteer work and just can’t figure out how to get started, it’s insanely easy; one phone call to any nonprofit organization that piques your interest, and you WILL have the opportunity to get involved). I’m on a couple of standing committees that are focused on emergency food, shelter, and financial assistance. And, I’m on an ad hoc committee focused on developing performance measures for that overall “impact area.”

One common distinction I learned when working on agency funding committees with two different United Ways is the distinction between an “outcome” and an “output.” An output is something like “provided 1,000 families in a housing crisis with one-time emergency financial assistance.” An outcome is more like “reduced the number of families who became homeless due to a financial crisis by 15% over the previous reporting period.” Does the distinction make sense? The output is what the nonprofit agency did, whereas the outcome is why they did it — what result they were really trying to achieve at the end of the day.

In the business world — specifically, in marketing — examples of outputs would be “deployed 20 new pages,” “conducted 3 webinars,” “published 2 white papers.” And, really, some highly tactical measures such as “achieved an open rate of 54%,” “achieved a clickthrough rate of 12%,” and even “drove 450 registrations” are all much more outputs than outcomes.

The marketing outcome that is wildly in vogue right now is ROI — how much revenue did all of this marketing activity drive? In this sense, Marketing in the for-profit world is paralleling the nonprofit world (it’s becoming a cliché in the nonprofit arena that nonprofits need to be “run more like for-profit businesses”) — both are starting to accept as gospel that measuring outputs is bad, and the only measures that matter are outcome-based.

This, I fear, is another case of a perfectly valid concept being oversimplified to the point that it is presented as an absolute rule. And it really shouldn’t be. Here’s the problem with throwing out all output measures: the larger the organization and the more complex the business, the more factors there are that influence the ultimate outcome!

Take the case of a brilliantly executed Marketing campaign — just accept that it was perfect in all possible ways. BUT, during that same measurement period, the Sales organization was in total upheaval: senior leadership turnover, processes in flux, and a grossly understaffed inside sales organization. Marketing — in an effort to be outcome-based — assesses their efforts solely based on the conversion to revenue of the leads they generated and nurtured. The results were abysmal. The CMO loses his job. The CEO steps in temporarily and demands that, whatever Marketing did for the last six months…they need to do the opposite…

This example is only slightly dramatized. The same potential folly exists for nonprofits. If an agency is focused on addressing short-term food and shelter crises, their outputs may actually be the best thing for them to measure — are they managing their resources to meet the demands for assistance that they get every day of the year? If they start focusing on longer-term, root causes of the crises, in order to get to the true outcome of food/housing crisis prevention and food/housing stability, then there will be a gap in short-term services. Better, in my book, to allow (and encourage) a focus on outputs when it makes sense. Still with a bias to outcomes, but not to the black-and-white exclusion of outputs.

I like the “outputs vs. outcomes” distinction. It’s a distinction that Marketers could benefit from making. I don’t like blanket beliefs that one is good and one is bad, or one is right and one is wrong. The world, folks, is just too complicated for that.