Reporting

How Marketing is Like Homelessness

I’ve officially succumbed to the Blog an Intriguing Title Syndrome (BITS). My payback for that, I suppose, is that I’ve blown the SEO power of my <h2> tags, my <title> tag, and keywords in the URL such that almost certainly no one who would actually be interested in this post will find it via Google or Bing. So it goes.

But, the title isn’t a pure gimmick. It’s the outgrowth of one of those, “I bet I’m the only person sitting in this church hall with 50+ other people at 3:30 AM who is having this thought,” moments. We all have those moments occasionally, right? Right?!

Gilligan, Are You Drunk? WHAT Are You Talking About?

The basic thought: Marketing is like homelessness, in that they face similar challenges when it comes to measurement.

Earlier this week, I participated in an annual homelessness count in downtown Columbus coordinated by the Community Shelter Board (CSB), which is an organization that drives coordination, collaboration, and consistency across the various homeless shelters in the area. It’s been nationally recognized as a model for how communities can efficiently and effectively meet the basic needs of the homeless. As it turns out, they’re also an organization that does a great job of measurement (which, I now realize, I’ve discussed before).

One of the questions that CSB tries to answer for the community is, “Are we reducing the number of people who are homeless over time?” It turns out that that is a pretty tough question to answer. CSB can certainly track how many beds are filled each night in the various shelters they work with, but those shelters tend to pretty much run at capacity and find creative ways to adjust their capacity as needed so they seldom turn people away. And, the weather affects how many people seek shelter on any given night. So, it’s messy to measure the true change in overall homelessness. That’s sort of like measuring marketing.

Whopper of a Disclaimer: I’m going to spend the rest of this post comparing measuring homelessness to measuring marketing. I’m pretty passionate about both, but the latter pays the bills, while the former actually has a degree of Noble Purpose attached to it. I am in no way comparing the marketing profession to the group of underpaid and overworked people whose careers are dedicated to reducing homelessness. There is simply no contest there.

Marketing and Homelessness, Huh?

The way that we measure marketing at the highest level is often by measuring revenue, profitability, brand awareness, brand affinity, etc. These are all messy to measure in one way or another, and some of them are expensive to measure, too! It turns out, measuring homelessness is the same way.

So, there I was at 3:15 AM in a church hall waiting for all of the volunteers to arrive. Each team in the hall was made up of 5-6 people, and each team was assigned a different area of the city to physically walk around counting the homeless in that area. It’s not that it takes 5-6 people to do the counting, but there is safety in numbers. I was pretty much just one of “the numbers.” My team’s leader was Dave S., whom I’ve known for several years, and whose team I explicitly asked to be assigned to. I mean, if your pre-count pep talk includes a flashlight under the chin, how can you not be inspired?

Scary Leader Dave

The Outcome Alone Isn’t All That Helpful

So, we headed out and did our counting. For our group, our total homeless count was: zero. Does that mean that we’re solving homelessness in Columbus? Of course not. That might be the case (we certainly hope it is), but we were only providing one input to an overall count that included the other teams, a shelter census, and self-reported “homeless-but-not-somewhere-you-could-count-me” (i.e., a car) data. And, there was a lot of construction under way in our area, which doesn’t make for conducive overnight outdoor stays, so we were not all that surprised with what we found (two of the members of our team had covered the same area last year, and they did count a handful of people).

Marketing Analog: It’s messy to measure overall marketing outcomes, and it’s almost always impossible to draw a meaningful conclusion from a single data set. In the world of digital and social media, we don’t want to go crazy and try to assess an unorganized and overwhelming sea of data, but we do want to deliberately plan and measure using different tools and sources as appropriate to get as clear a picture as possible of a messy world.

Regardless of how the final tally turns out on the homeless count, having a solid annual measure of the key outcome we’re hoping to change is just the frame around a rather intricate and involved picture. Homelessness, like marketing results, is impacted by myriad underlying factors. The most commonly recognized causes of homelessness are:

  • The economy — when a local economy is down, there is less prosperity, and the “barely keeping our heads above water” populace become the “drowning” populace
  • The availability of jobs with a living wage — related to the economy, but includes issues such as job skills and quality of the local public education system
  • Mental illness — without access to mental health services and medications, it can be impossible for many people to maintain a stable life
  • Drug and alcohol abuse — often, this goes hand in hand with mental illness, but, even when it doesn’t, once an addiction has set in, wheels can rapidly fall off the steady income wagon
  • Personal catastrophe — a health crisis of the individual or a family member often wrecks limited savings and can draw a person away from his/her job, which triggers a spiral that, ultimately, ends on the streets

It’s a daunting challenge to address all of these, and it’s even more daunting to try to disentangle which of these issues are interrelated and to what degree — both for an individual and at a macro level.

Marketing Analog: Trying to tease apart how economic factors, cultural trends, competitor activity, TV advertising, print advertising, radio advertising, web site content, SEO and SEM, Facebook, and Twitter all interact with each other to affect marketing outcomes is daunting and messy. The fact is, we need to effectively use multiple channels, and we need to identify cross-channel effects and measure those as best as we can. But, it’s not easy, and it’s definitely not perfect.

I’m starting to feel a little silly making this comparison, but, having volunteered on a number of “basic needs” (the lower levels of Maslow’s Hierarchy) committees over the years, it’s been interesting to watch how often the question gets asked: “What’s the single root cause that we can address to have the biggest impact?” The answer? There isn’t one. It’s got to be a multi-faceted approach. And, it’s also a fact that no one person or group can address all of the facets at once.

As it happens, one of the other volunteers on my count team was Matt K., who is the United Way of Central Ohio staff member now responsible for the main United Way committee on which I’ve been a member for the past few years. As we walked along the Scioto River early Tuesday morning, we chatted about how hard it was to identify a clean set of “leading indicators” as to whether we were making progress in our assigned community impact area: emergency food, shelter, and financial assistance. I told Matt that he and I were living in similar worlds — we both are supporting people with expectations and desires for easy, accurate, and accessible measures of something that is very complicated and messy!

Planning Is Important

One final point: our assigned area was a mile or two away from the church where we started…and no one had a car that would easily fit six people. Luckily, it was relatively warm (mid-30s), and the back of my truck was relatively dry, so four of us piled into the front, while Matt K. and Joe M. (also of United Way) climbed into the back of my truck:

Matt and Joe Ready to Ride to the Count Area

Now, had I known we would need to transport six people ahead of time, I easily could have swapped vehicles with my wife for the day and brought along a minivan rather than a small truck. But, we didn’t coordinate that up front. We didn’t fully plan for our measurement.

Marketing Analog: Planning does matter. It doesn’t mean that, without planning, you can’t gather some data, but you may not get the data you want, and it may be a little more painful (or at least chilly) to get the data you need.

I guess, as an analyst, I see data challenges all around me. I also have lower expectations for the quality and completeness of data, I’m more comfortable with wildly imperfect proxy measures, and I expect gathering meaningful data to be a messy process.

Whether counting the homeless or counting web site conversions, though, it’s definitely a whole lot more pleasant to do it with fun and interesting people. I’ve been pretty fortunate on that front!


So, You Think Measuring Marketing Performance Is Hard?

Not a week goes by that I don’t see, hear, read, or preach on the topic of measuring marketing results. From equating Marketing ROI to The Holy Grail, to sticking my tongue in my cheek to the point of meanness when it comes to a “simple” process for establishing corporate metrics, to mulling over Marketing ROI vs. Marketing Accountability, there really is no end to the real-world examples that warrant commentary. The reason? Because it’s hard to figure out how to measure marketing’s impact in a meaningful way. It can be done, and it needs to be done, but it requires having a very clearly defined strategy and objectives to do it well, and, even then, the measurement is not as perfect and precise as we would like it to be.

So…it’s hard. I agree.

Try being a non-profit.

I do some volunteer work with the United Way of Central Ohio. Specifically, I sit on the Meeting Emergency and Short Term Basic Needs Impact Council, as well as the Emergency Food, Shelter, and Financial Assistance Results Committee that reports into that impact council, as well as the Emergency Food, Shelter, and Financial Assistance Performance Measures Ad Hoc Committee, which reports into the results committee. Yeah. A mouthful, to say the least. But, it’s the ad hoc committee that has been doing the most tangible work of late and, lookie there!, it’s a committee geared towards performance measurement. Some of the work of that committee inspired an Outputs vs. Outcomes post earlier this year. I find a lot of parallels between measurement in the non-profit world and measurement in the Marketing world.

One difference is that, while Marketers (broad generalization alert!) typically view measurement as a necessary evil — they do want to be data-driven, and they understand the conceptual value of doing measurement…but it’s simply not baked into their DNA to truly want to do it — nonprofits increasingly view measurement as a necessity. (At least) two reasons for this:

  • In the nonprofit world, resources are pretty much infinitely scarce — no agency has a real surplus of the services they supply; if they actually get to a point where they’ve got one area reasonably well covered…they expand their offering to meet other needs of their clients
  • Donors want to know that their investment is making a difference — on the surface, this may seem similar to investors in a publicly held company; but, investors look at revenue, profitability and growth — financial measures — much more than they scrutinize “Marketing” results (although the “average tenure of a CMO is 27 months” is a stat that gets bandied around quite a bit, so there is some flow down the chain of command to Marketing for accountability); donors to nonprofits are scrutinizing “results” that need to be tied to the agency’s efforts (their investment) and meaningful in an oftentimes relatively soft context

As more and more nonprofits are being driven to collaborate to gain efficiency, more of them are working with foundations or some sort of umbrella organizing/coordinating entity. The Community Shelter Board in Columbus is a really good example of this. It’s an organization that, on its own, does not provide any direct services…but most of the homeless shelters in the area receive funding and some level of direction from the organization. And they do some pretty nice quarterly indicator reports — using plain ol’ Excel. They do it right by: 1) choosing metrics that matter and balance each other, 2) setting targets for those metrics and assessing each metric against its target, and 3) providing a contextual analysis of the results for each set of metrics.  Two thumbs up there.
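The three practices above — balanced metrics, targets for each metric, and context around the results — don’t require anything fancier than Excel, but they can be sketched in a few lines of code, too. Here’s a minimal illustration in Python; every metric name, value, and target below is invented for the example, not taken from any actual CSB report:

```python
# Illustrative target-vs-actual indicator report. All metric names,
# values, and targets are made up for the sketch.

def assess(actual, target, higher_is_better=True):
    """Return 'on target' or 'off target' for one metric."""
    met = actual >= target if higher_is_better else actual <= target
    return "on target" if met else "off target"

# 1) Choose metrics that matter AND balance each other: a volume
#    metric alone can be gamed, so pair it with an outcome metric.
metrics = [
    # (name, actual, target, higher_is_better)
    ("Shelter bed-nights provided", 41_200, 40_000, True),
    ("Avg. length of shelter stay (days)", 54, 60, False),
    ("Exits to stable housing (%)", 38.5, 45.0, True),
]

# 2) Assess each metric against its target.
report = {name: assess(a, t, hib) for name, a, t, hib in metrics}

# 3) The contextual analysis has to be written in prose by a human;
#    here we just flag which metrics need that commentary most.
needs_commentary = [n for n, status in report.items() if status == "off target"]

for name, status in report.items():
    print(f"{name}: {status}")
```

The point of the pairing in step 1 is that a shelter could look great on bed-nights while people cycle through endlessly; the length-of-stay and exits-to-housing metrics keep the volume number honest.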

Right now, the United Way of Central Ohio is trying to do something similar — narrowing its focus, establishing clear strategies in each area, and then homing in on meaningful performance measures for each strategy. It’s a fairly grueling exercise, but well worth undertaking. We constantly find ourselves battling the tendency to broaden the scope of a strategy — it’s hard to find any nonprofit that isn’t doing good work, but trying to support “everything that is good” means not really moving any of the needles in a meaningful way.

One similarity I’ve seen between the non-profit world and Marketing in the for-profit world has to do with capturing data. I touched on this in my post on being data-oriented vs. process-oriented. When trying to establish good, meaningful metrics, it can be very tempting to envision ways the data you want would be captured through a minor process change: “When the inside sales representative answers the phone, we will have him/her ask the caller where they heard about the company and get that recorded in the system so we’ll be able to tie the caller back to specific (or at least general) Marketing activity” or “In order to verify that our agency referral program is working, we’ll call the client we referred 1-2 weeks after the referral to find out if the referral was appropriate and got them the services they needed.” This is dangerous territory. The reason? In both cases, you’re inserting overhead in a process that is not inherently and immediately valuable to the person using the process. Sure, it’s valuable in that you can sit back and assess the data later and determine what is/is not working about the process and use that information to come back and make improvements…but that’s an awfully abstract concept to the person who is answering the telephone day in and day out (in both of the above examples). I’ll take an imperfect proxy metric that adds zero overhead to the process that generates it any day over a more perfect metric that requires adding “jus’ a li’l” complexity to the process. And, you know what? My metric will be more accurate!

Photo by batega