Reporting

So, You Think Measuring Marketing Performance Is Hard?

Not a week goes by that I don’t see, hear, read, or preach on the topic of measuring marketing results. From equating Marketing ROI to The Holy Grail, to sticking my tongue in my cheek to the point of meanness when it comes to a “simple” process for establishing corporate metrics, to mulling over Marketing ROI vs. Marketing Accountability, there really is no end to the real-world examples that warrant commentary. The reason? Because it’s hard to figure out how to measure marketing’s impact in a meaningful way. It can be done, and it needs to be done, but it requires having a very clearly defined strategy and objectives to do it well, and, even then, the measurement is not as perfect and precise as we would like it to be.

So…it’s hard. I agree.

Try being a non-profit.

I do some volunteer work with the United Way of Central Ohio. Specifically, I sit on the Meeting Emergency and Short Term Basic Needs Impact Council, as well as the Emergency Food, Shelter, and Financial Assistance Results Committee that reports into that impact council, as well as the Emergency Food, Shelter, and Financial Assistance Performance Measures Ad Hoc Committee, which reports into the results committee. Yeah. A mouthful, to say the least. But it's the ad hoc committee that has been doing the most tangible work of late and (lookie there!) it's a committee geared towards performance measurement. Some of the work of that committee inspired an Outputs vs. Outcomes post earlier this year. I find a lot of parallels between measurement in the non-profit world and measurement in the Marketing world.

One difference is that, while Marketers (broad generalization alert!) typically view measurement as a necessary evil — they do want to be data-driven, and they understand the conceptual value of doing measurement…but it’s simply not baked into their DNA to truly want to do it — nonprofits increasingly view measurement as a necessity. (At least) two reasons for this:

  • In the nonprofit world, resources are pretty much infinitely scarce — no agency has a real surplus of the services they supply; if they actually get to a point where they’ve got one area reasonably well covered…they expand their offering to meet other needs of their clients
  • Donors want to know that their investment is making a difference — on the surface, this may seem similar to investors in a publicly held company; but investors look at revenue, profitability, and growth — financial measures — much more than they scrutinize "Marketing" results (although the "average tenure of a CMO is 27 months" is a stat that gets bandied around quite a bit, so there is some flow down the chain of command to Marketing for accountability); donors to nonprofits are scrutinizing "results" that need to be tied to the agency's efforts (their investment) and meaningful in an oftentimes relatively soft context

As more and more nonprofits are being driven to collaborate to gain efficiency, more of them are working with foundations or some sort of umbrella organizing/coordinating entity. The Community Shelter Board in Columbus is a really good example of this. It's an organization that, on its own, does not provide any direct services…but most of the homeless shelters in the area receive funding and some level of direction from the organization. And they do some pretty nice quarterly indicator reports — using plain ol' Excel. They do it right by: 1) choosing metrics that matter and balance each other, 2) setting targets for those metrics and assessing each metric against its target, and 3) providing a contextual analysis of the results for each set of metrics. Two thumbs up there.
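The core of that approach — a set of metrics, each with a target and a direction — is simple enough to sketch in a few lines of Python. This is purely illustrative; the metric names, numbers, and targets below are invented, not taken from any actual Community Shelter Board report:

```python
def assess(metrics):
    """Judge each metric against its target, respecting which
    direction counts as 'good' (more served is good; longer
    shelter stays are not). Returns (name, actual, target, status) rows."""
    rows = []
    for name, actual, target, higher_is_better in metrics:
        on_track = actual >= target if higher_is_better else actual <= target
        rows.append((name, actual, target,
                     "on target" if on_track else "off target"))
    return rows

# Hypothetical quarterly indicators, chosen to balance each other:
# volume served, length of stay, and quality of exits.
report = assess([
    ("Households served",            1240, 1200, True),   # more is better
    ("Average shelter stay (days)",    54,   60, False),  # fewer is better
    ("Exits to stable housing (%)",    38,   45, True),   # more is better
])

for name, actual, target, status in report:
    print(f"{name:30} {actual:>5} vs. target {target:>5}: {status}")
```

The third step — contextual analysis — is the part no spreadsheet or script can automate, which is probably why so few organizations bother with it.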

Right now, the United Way of Central Ohio is trying to do something similar — narrowing its focus, establishing clear strategies in each area, and then homing in on meaningful performance measures for each strategy. It's a fairly grueling exercise, but well worth undertaking. We constantly find ourselves battling the tendency to broaden the scope of a strategy — it's hard to find any nonprofit that isn't doing good work, but trying to support "everything that is good" means not really moving any of the needles in a meaningful way.

One similarity I've seen between the non-profit world and Marketing in the for-profit world has to do with capturing data. I touched on this in my post on being data-oriented vs. process-oriented. When trying to establish good, meaningful metrics, it can be very tempting to envision ways the data you want would be captured through a minor process change: "When the inside sales representative answers the phone, we will have him/her ask the caller where they heard about the company and get that recorded in the system so we'll be able to tie the caller back to specific (or at least general) Marketing activity" or "In order to verify that our agency referral program is working, we'll call the client we referred 1-2 weeks after the referral to find out if the referral was appropriate and got them the services they needed." This is dangerous territory. The reason? In both cases, you're inserting overhead into a process, and that overhead is not inherently and immediately valuable to the person using the process. Sure, it's valuable in that you can sit back and assess the data later and determine what is/is not working about the process and use that information to come back and make improvements…but that's an awfully abstract concept to the person who is answering the telephone day in and day out (in both of the above examples). I'll take an imperfect proxy metric that adds zero overhead to the process that generates it any day over a more perfect metric that requires adding "jus' a li'l" complexity to the process. And, you know what? My metric will be more accurate!



Outputs vs. Outcomes

I've been involved with United Way for the past seven or eight years in Austin and, now, in Columbus. One of the attractions of spending my volunteer energy with United Way is that they are very accountability-focused. That means that, in their agency funding cycle, they require agencies that are requesting funding to specify measures and targets for the specific programs they describe in their funding requests.

For the last few months, I've been getting involved with the United Way of Central Ohio (side note: if you've thought about doing volunteer work and just can't figure out how to get started, it's insanely easy; one phone call to any nonprofit organization that piques your interest, and you WILL have the opportunity to get involved). I'm on a couple of standing committees that are focused on emergency food, shelter, and financial assistance. And, I'm on an ad hoc committee focused on developing performance measures for that overall "impact area."

One common distinction I learned while working on agency funding committees with two different United Ways is the one between an "outcome" and an "output." An output is something like "provided 1,000 families in a housing crisis with one-time emergency financial assistance." An outcome is more like "reduced the number of families who became homeless due to a financial crisis by 15% over the previous reporting period." Does the distinction make sense? The output is what the nonprofit agency did, whereas the outcome is why they did it — what result they were really trying to achieve at the end of the day.

In the business world — specifically, in marketing — examples of outputs would be “deployed 20 new pages,” “conducted 3 webinars,” “published 2 white papers.” And, really, some highly tactical measures such as “achieved an open rate of 54%,” “achieved a clickthrough rate of 12%,” and even “drove 450 registrations” are all much more outputs than outcomes.
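The line between the two metric types can be made concrete with a toy calculation (all numbers below are hypothetical, chosen to match the rates mentioned above). The output-style measures describe what marketing did; the outcome-style measure ties the activity to money:

```python
# Hypothetical campaign numbers, for illustration only.
emails_sent        = 10_000
emails_opened      = 5_400
clicks             = 1_200
registrations      = 450
attributed_revenue = 90_000.0   # revenue tied back to the campaign
campaign_cost      = 60_000.0

# Output-style measures: what Marketing *did*, however tactically.
open_rate    = emails_opened / emails_sent   # 54% open rate
clickthrough = clicks / emails_sent          # 12% clickthrough rate

# Outcome-style measure: what the activity *achieved* for the business.
roi = (attributed_revenue - campaign_cost) / campaign_cost
```

Note that every input to the output metrics lives entirely inside Marketing's own systems, while the outcome metric depends on revenue attribution — which is exactly where factors outside Marketing's control creep in.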

The marketing outcome that is wildly in vogue right now is ROI — how much revenue did all of this marketing activity drive? In this sense, Marketing in the for-profit world is paralleling the nonprofit world (it's becoming a cliché in the nonprofit arena that nonprofits need to be "run more like for-profit businesses") — both are starting to accept as gospel that measuring outputs is bad, and the only measures that matter are outcome-based.

This, I fear, is another case of a perfectly valid concept being oversimplified to the point that it is presented as an absolute rule. And it really shouldn’t be. Here’s the problem with throwing out all output measures: the larger the organization and the more complex the business, the more factors there are that influence the ultimate outcome!

Take the case of a brilliantly executed Marketing campaign — just accept that it was perfect in all possible ways. BUT, during that same measurement period, the Sales organization was in total upheaval: senior leadership turnover, processes in flux, and a grossly understaffed inside sales organization. Marketing — in an effort to be outcome-based — assesses their efforts solely based on the conversion to revenue of the leads they generated and nurtured. The results were abysmal. The CMO loses his job. The CEO steps in temporarily and demands that, whatever Marketing did for the last six months…they need to do the opposite…

This example is only slightly dramatized. The same potential folly exists for nonprofits. If an agency is focused on addressing short-term food and shelter crises, their outputs may actually be the best thing for them to measure — are they managing their resources to meet the demands for assistance that they get every day of the year? If they start focusing on the longer-term root causes of the crises, in order to get to the true outcome of food/housing crisis prevention and food/housing stability, then there will be a gap in short-term services. Better, in my book, to allow (and encourage) a focus on outputs when it makes sense. Still with a bias to outcomes, but not to the black-and-white exclusion of outputs.

I like the “outputs vs. outcomes” distinction. It’s a distinction that Marketers could benefit from making. I don’t like blanket beliefs that one is good and one is bad, or one is right and one is wrong. The world, folks, is just too complicated for that.