Self-Quantification: Implications for marketing & privacy
At the intersection of fitness, analytics and social media, a new trend of “self-quantification” is emerging. Devices and applications like Jawbone UP, Fitbit, Nike Fuel Band, Runkeeper and even Foursquare are making it possible for individuals to collect tremendous detail about their lives: every step, every venue visited, every bite, every snooze. What was once niche, reserved for professional athletes or the medically monitored, has become mainstream, creating a wealth of incredibly personal data.
In my previous post, I discussed what this kind of tracking could teach us about the practice of analytics. Now, I’d like to consider what it means for marketing, targeting and the privacy debate.
Implications for marketing, personalisation & privacy
I have argued for some time that for consumers to become comfortable with this new data-centric world, they need to see the benefits of data use.
There are two sides to this:
1. Where a business uses consumers’ data, it needs to provide the consumer a benefit in exchange. A great way to do this is to share that data back with the consumer.
Some notable examples:
- Recommendations: “People who looked at Product X also looked at Product Y”, as seen on sites like Amazon.com.
- Valuation and forecasts: Websites like True Car, Kelley Blue Book and Zillow crunch data from thousands of transactions and return it to consumers, helping them understand how the pricing they are looking at compares to the broader market.
- Credit scores: Companies like Credit Karma offer a wealth of data back to consumers to understand their credit and help them make better financial decisions.
- Ratings and Reviews: Companies like CNet inform customers via their editorial reviews, and a wealth of sites like Amazon and Newegg provide user ratings to help inform buying decisions.
2. Outside of business data, consumers’ own collection and use of data helps increase the public understanding of data. The more comfortable individuals get with data in general, the easier it is to explain data use by organisations. The coming generations will be as fluent with data as millennials today are with social media and technology.
This type of data is a huge opportunity for marketers. Consider the potential for brands like Nike or Asics to deliver truly right-time marketing: “Congratulations on running 350 miles in the last quarter! Did you know that running shoes should be replaced every 300-400 miles? Use this coupon code for 10% off a new pair.” Or consider McDonald’s using food intake data to learn that 1) the consumer hasn’t yet eaten lunch (and it’s 30 minutes past their usual lunchtime), 2) the consumer has been following a healthy diet and 3) the consumer is driving past a McDonald’s, and promoting new healthy items from its menu. These are examples of truly personalised marketing: the right offer, at the right time, to the right person.
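To make the running-shoe scenario concrete, here is a minimal sketch of that kind of right-time trigger. Everything in it is a hypothetical assumption for illustration: the field names, the coupon code and the 300-mile threshold are mine, not any vendor’s actual API or business rule.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical snapshot of a runner's tracked activity.
# Field names are illustrative, not drawn from any real fitness API.
@dataclass
class RunnerProfile:
    miles_this_quarter: float
    miles_on_current_shoes: float
    opted_in_to_offers: bool

SHOE_WEAR_LIMIT = 300.0  # lower bound of the commonly cited 300-400 mile range

def shoe_replacement_offer(profile: RunnerProfile) -> Optional[str]:
    """Return a right-time offer only for opted-in runners with worn shoes."""
    if not profile.opted_in_to_offers:
        return None  # consent first: no opt-in, no targeting
    if profile.miles_on_current_shoes < SHOE_WEAR_LIMIT:
        return None  # shoes aren't worn out yet, so the offer isn't relevant
    return (
        f"Congratulations on running {profile.miles_this_quarter:.0f} miles "
        "this quarter! Running shoes should be replaced every 300-400 miles. "
        "Use code NEWPAIR10 for 10% off a new pair."
    )
```

Note that the opt-in gate comes before any targeting logic: the message only fires for consumers who have explicitly agreed to receive offers, which previews the consent points below.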
However, this uses incredibly personal data and raises deeper privacy concerns than simple online ad targeting does. Even if a marketer could do all of that today, the truth is, it would probably be construed as “creepy” or, worse, a disturbing invasion of privacy. After all, we’re not even comfortable sharing our weight with the DMV. Can you imagine triggering a Weight Watchers ad in response to your latest Withings weigh-in?!
So how must marketers tread with respect to this “self-quantification” data and privacy?
1. We need to provide value. This might sound obvious – of course marketers need to provide value. However, I would argue that when consumers are trusting us with what amounts to every detail of their lives, we must deliver something that is of more value to the consumer than it is to us. This all comes down to the idea of marketing as a service: it should be so helpful you’d pay for it.
2. There has to be consent. This technology is too new, and there are too many concerns about abuse, for this to be anything but opt-in. (The idea of health insurance companies rejecting consumers based on lifestyle is a typical argument used here.) If marketers provide for (and respect) opt-in and -out, and truly deliver the right messaging, they’ll earn the right to broaden their reach.
3. It requires crystal-clear transparency. Personalisation and targeting today are already considered “creepy.” Use of this incredibly personal data requires absolute transparency to the end user. For example, when being shown an offer, consumers should know exactly what they did to trigger it, and be able to give feedback on the targeted message.
This already exists in small forms. For example, the UP interface already gives you daily “Did you know?”s with fun facts about your data vs the UP community. Users can like or flag tips, to give feedback on whether they are helpful. There has to be this kind of functionality, or users’ only response to targeting will be to revoke access via privacy settings.
4. We need to speak English. No legalese privacy policies and no burying what we’re really doing on page 47 of a document we know no one will ever read. Consumers will be outraged that we didn’t tell them the truth about what we were doing, and we’ll never regain that trust.
5. We have to get it right. And by that, I mean right from the consumer’s perspective. There will be no second chances with this kind of data. That requires careful planning: really mapping out what data we need, how we’ll get consent, how we’ll explain what we’re doing, and ensuring the technology works flawlessly. Part of the planning process has to be dotting every i, crossing every t and truly vetting the plan for this data use. If marketers screw this up, we will never get that permission again.
This includes getting actual consumer feedback. A small beta release with significant qualitative feedback can help us discover whether what we’re doing is helpful or creepy.
6. Don’t get greedy. If marketers properly plan this out, we should be 100% clear on exactly what data we need, and not get greedy and over collect. Collecting information we don’t need will hurt opt-in. This may involve, for example, clearly explaining to consumers what data we collect for their use, and what we use for targeting.
7. Give users complete control. This includes control over what, of their data, is shared with the company, what is shared with other users, what is shared anonymously, and what is used for targeting. There has to be an exhaustive (but user-friendly) level of control to truly show respect for informed, controlled opt-in. This includes the ability to give feedback on the actual marketing. Without the ability to continually tell a business what’s creepy and what’s not, we end up in a binary system of either “consenting” or “not”, rather than an ongoing conversation between consumer and business about what is acceptable.
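A minimal sketch of what that granular, per-use control could look like in practice. The data categories, class names and “helpful vs creepy” reaction strings here are my own assumptions for illustration, not an existing product’s API:

```python
from dataclasses import dataclass, field

@dataclass
class SharingPrefs:
    """Per-category consent: each use of the data is a separate toggle."""
    with_company: bool = False
    with_other_users: bool = False
    anonymised_aggregate: bool = False
    for_targeting: bool = False

@dataclass
class ConsentLedger:
    # e.g. "sleep" -> SharingPrefs; categories never set default to all-off
    prefs: dict = field(default_factory=dict)
    # per-message feedback: the ongoing conversation, not a binary opt-in/out
    feedback: list = field(default_factory=list)

    def set_prefs(self, category: str, prefs: SharingPrefs) -> None:
        self.prefs[category] = prefs

    def may_target_with(self, category: str) -> bool:
        """Default deny: data the user never opted in is never used."""
        return self.prefs.get(category, SharingPrefs()).for_targeting

    def flag_message(self, message_id: str, reaction: str) -> None:
        """Record 'helpful' or 'creepy' per message, short of revoking access."""
        self.feedback.append((message_id, reaction))
```

The design choice that matters is the default-deny lookup: any data category the consumer has not explicitly configured is treated as off-limits for targeting, while the feedback log gives them a voice in between the extremes of consenting and revoking.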
Think about the user reaction every time Facebook changes its privacy policy or controls. People feel incredible ownership over Facebook (it’s “their” social world!) even though logically we know Facebook is a business and does what suits its goals. The tools of the future are even more personal: we’re talking about tracking every minute of sleep, and tracking our precise location. This data is the quantification of who we are.
With opportunity comes responsibility
This technology is an amazing opportunity for marketers and consumers, if done well. However, marketers historically have a habit of “do first, ask permission later.” To be successful, we need to embark on this with consumers’ interests and concerns put first, or we’ll blow it before we even truly begin.