General

#eMetrics Reflection: Privacy Is Getting More Tangible

I’m chunking up my reflections on last month’s eMetrics conference in San Francisco into several posts. I had a list of eight possible topics, and this is the fourth and (probably) final one that I’ll actually get to.

I’ve attended the “privacy” session at a number of recent eMetrics conferences, and the San Francisco one represented a big step forward in terms of specificity. “Privacy” is a powerful word in the #measure industry — a single word that seems to magically turn many people and companies into ostriches! It’s not that we want to avoid the topic, but there is so much complexity and uncertainty that putting our heads in the sand and kicking the can down the road (everyone loves a good mixed metaphor, right?) seems to be the default course of action.

In the session sardonically titled “Attend this Session or Pay €1 Million,” René Dechamps Otamendi of Mind Your Privacy covered European privacy regulations and Joanne McNabb of the California Department of Justice covered California and US privacy regulations.

When Pop Culture Picks It Up…

I was a West Wing fan, but had no memory of this clip that René shared:

When you’ve got mainstream network television referencing a topic, it’s a topic that is at least on the periphery of the mainstream.

“Fundamental Right” vs. “Business/Consumer Negotiation”

René pointed out that many Americans miss the point when it comes to the European privacy regulations — in typical America-centric fashion, we ignore history. We see privacy as a topic that is up for debate: how do we protect consumers with minimal regulation so that businesses can capitalize on as much personal data as possible?

In Europe…there was the Holocaust. René described how, in The Netherlands prior to WWII, the government maintained detailed and accurate records on every citizen. When the Nazis invaded, this data made it very easy for them to identify and persecute Jews. Of the 140,000 Jews who lived in The Netherlands prior to 1940, only 30,000 survived the war, and historians point to the availability of this data as one of the main reasons for the low survival rate. Yikes! For many Europeans, this sort of history is both deeply embedded and strongly linked to the topic of personal and online privacy.

Thinking of privacy as an undisputed, fundamental right is somewhat eye-opening.

It Doesn’t Matter Where Your Company Is Based

This isn’t exactly news, but it seems to be one of the excuses marketers use for burying their heads in the sand: “We’re based in Ohio — not California or Europe. So, how much do we have to worry about privacy regulations there?”

The answer comes down to where your customers are. Neither the European Directive nor the California regulations care where a company is based; they focus on where the consumers interacting with those companies are. Pull up the visitor geography reports in your web analytics platform and look at where your traffic is coming from — anywhere that accounts for a non-minuscule percentage of traffic is likely somewhere whose privacy regulations you need to understand.
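As a back-of-the-envelope version of that check, here is a minimal Python sketch. The visit counts and the 1% threshold are made up for illustration; nothing here comes from a real analytics export:

```python
# Sketch: flag regions whose share of site traffic is large enough
# to warrant a privacy-regulation review. Visit counts and the 1%
# threshold are invented illustrations, not real guidance.

visits_by_region = {
    "United States": 48200,
    "Germany": 3100,
    "United Kingdom": 2700,
    "California (US)": 9400,  # many analytics tools break out states, too
    "Brazil": 140,
}

THRESHOLD = 0.01  # flag anything above 1% of total traffic (assumption)

total = sum(visits_by_region.values())
flagged = {
    region: round(visits / total, 3)
    for region, visits in visits_by_region.items()
    if visits / total >= THRESHOLD
}

for region, share in sorted(flagged.items(), key=lambda kv: -kv[1]):
    print(f"{region}: {share:.1%} of traffic; review local privacy rules")
```

The exact threshold is a judgment call; the point is that the flagged list, not where your headquarters sits, tells you which jurisdictions' rules you need to understand.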

Why California instead of “the U.S.?”

Joanne pointed out that California is clearly in the forefront when it comes to developing, implementing, and enforcing privacy regulations in the U.S. The California Online Privacy Protection Act (CalOPPA) has been in effect since 2004 (although it was not widely understood for the first few years). That’s closing in on a decade!

To me, this sounded a lot like fuel economy standards in the auto industry — California is a large enough market that businesses can’t afford to ignore the state’s residents. At the same time, other states and the federal government are watching California to see what it figures out (the U.S. has a long — and checkered — history of using the states as laboratories for testing ideas). There is a very good chance that what works for California will be a basis for other states and for federal regulations.

Is California the Same As Europe?

Yes and no. They’re the same in that they have a similar orientation towards “individuals’ rights.” They’re the same in that they are increasingly starting to enforce their regulations (with very real fines levied on companies).

They’re different…in that the U.S. and Europe are different — both culturally and structurally.

They follow developments in each other’s worlds, but they’re not actively marching towards a single, unified regulation.

So, Where Should Companies Start?

Step 1: Check your privacy policy. Really. Read it. Read it for your country-specific sites (simply translating your U.S. privacy policy into German doesn’t work!). If you give it a really close read, are you even complying with what you say you are?

Step 2: Learn some details. For Europe, reach out to René at the email address in the image below. He’s got a document that explains the ins and outs of EU privacy regulations (if the number “27” doesn’t mean anything to you, you haven’t learned enough):

[Image: René’s email address (“euprivacy27dpas”)]

For California, one resource is the California Attorney General’s site for online privacy. Unfortunately, it is a bureaucratically built site, so be ready for some heavy document-wading.

Step 3: Educate your company. This one is no small task: when the question of who to include in that discussion came up, it seemed a simpler answer would have come from asking who not to include. The web team, marketing, legal, and IT are a good start. The best hook is, “We could be fined 1,000,000 euros…”

In Short: It’s Still Messy, but Things Are Getting Clearer

The heading says it all. “We” all need to take our heads out of the sand and get smarter on this. If a regulatory agency comes calling, the worst response is, “Tell me who you are again?” The best (but not currently possible) response is, “We’re totally compliant.” A good response is, “We’re working on it, here’s what we’ve done, and here’s our roadmap to do more.”

Analytics Strategy, Social Media

Privacy: It's a 2.5-Dimensional Issue

I’m keeping the voting open for another week or so on my “choose a new profile picture” poll, so if you haven’t voted yet, please click over and do so. There’s a charitable donation (by me!) involved!

“Privacy” is a hot topic in the world of marketing analytics, driven primarily by shifting consumer (and, in turn, regulatory) sentiment on the subject. That shifting sentiment, I think, is largely being driven by the increasing integration of social media into our lives and our online behavior.

The WAA stepped up and put together a Code of Ethics a few months ago, and privacy is going to be a recurring topic at eMetrics and other conferences for the foreseeable future. Following the San Francisco eMetrics conference, Stéphane Hamel put together three scenarios and asked the #measure community to vote as to the ethics and allowability of each situation. He then revealed the results and added his own thoughts. Towards the end of that second post, Stéphane noted that he was disappointed by the lack of interest in the exercise, given the generally accepted importance of the topic.

Emer Kirrane responded in the comments:

It’s interesting that there seems to be a correlation between legality and ethics in the minds of your respondents. To me, the Code of Ethics is there as a flag against practices that are deemed unethical by the community, rather than deemed unethical by law.

Stéphane’s concern and Emer’s response have been bouncing around in my brain for several weeks. My conclusion: “ethics vs. legality” is going to continue to give us fits.

I realize this isn’t the first time that “ethics” and “the law” haven’t perfectly aligned (they almost never do, even though, from a purist’s point of view, alignment is the goal), but bear with me — it’s worth using that lens to explore the issue and outline the challenges we’re going to have to deal with. These are two very different dimensions of the privacy debate, and one of them is in flux on several fronts.

Why 2.5 Dimensions?

Obviously, there is a legal/regulatory dimension, and there is an ethical dimension. But, really, the legal/regulatory dimension is heavily driven and influenced by consumer perceptions and fears. I actually wrote some thoughts on that a couple of years ago. With high-profile Facebook snafus and high-profile media outlets reporting on cookies and cross-site tracking, politicians have found an issue that their constituents care about (or can be prodded to care about). So, in a sense, the legal/regulatory dimension has some added “oomph” of consumer concerns behind it; I’m calling that “consumer perspective” another half a dimension.

It’s possible that “consumer perception” should be a third dimension in and of itself. But, oh boy, that would make for some hairy sketching in the remainder of this post. I’m pretty sure I’m not just punting, though — the will of the consumer when it comes to something like privacy does generally get manifested through some form of government regulation.

Start with the Basics

Two dimensions: legal and ethical. We can look at them like this:

Various practices raise privacy questions. In theory, we can plot each of them on this (conceptual) grid — there are more than shown here, but I’m just laying out the basic idea of the framework:

In Theory, We’d Have Harmonious Dimensions

If life were simple, we would have perfect clarity for each dimension, and perfect alignment between dimensions:

Notice the shaded quadrants at top left and bottom right — there would be no practices that were ethical but not legal, nor would there be any practices that were legal but unethical.

Alas! Privacy is Rife with Gray Areas!

Reality is more like this — gray areas rather than hard lines along both dimensions:

Ugh. Things get messy. There are more activities that are questionable — they may or may not be legal and/or they may or may not be ethical! Argh!

But Wait! There’s More!

Ever since the web went mainstream, it’s been a more global medium than anything that came before. And, we’ve all run into cases and concerns that our standard web analytics implementation runs afoul of the law in some country somewhere. This grid illustrates that wrinkle, too — the legal/regulatory gray areas live in different places depending on the country (only the U.S. and the E.U. are shown here — it’s an illustrative diagram, people! Not a comprehensive one!):

And the big blue arrow shows where pressure is being applied (back to that half-dimension of consumer fears mentioned at the beginning of this post). It’s a little counterintuitive that the arrow is pointing upward, isn’t it? How could it be that things are trending towards “allowed?” They’re not. Rather, the “interpretation zone” is moving upward — practices that used to be “clearly allowed” aren’t inherently changing what they are, but those practices are moving from “in the clear” towards the gray area.
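To make the two-axis framework (and the upward-moving gray zone) concrete, here is a toy Python sketch. The scores, band edges, and example practices are all invented for illustration; there is no real scoring system behind any of this:

```python
# Toy model of the two-axis framework: each privacy-related practice
# gets a made-up 0-1 score on a "legal" axis and an "ethical" axis,
# and a gray "interpretation zone" sits in the middle of each axis.

def band(score, gray_low=0.4, gray_high=0.6):
    """Map a 0-1 score to 'no', 'gray', or 'yes' given a gray zone."""
    if score < gray_low:
        return "no"
    if score > gray_high:
        return "yes"
    return "gray"

# Hypothetical practices with invented (legal, ethical) scores.
practices = {
    "first-party analytics cookie": (0.85, 0.75),
    "cross-site behavioral tracking": (0.55, 0.30),
    "selling PII without consent": (0.10, 0.05),
}

for name, (legal, ethical) in practices.items():
    print(f"{name}: legal={band(legal)}, ethical={band(ethical)}")

# The "big blue arrow": regulatory pressure pushes the top edge of the
# gray zone upward, so a practice whose score hasn't changed drifts
# out of "clearly allowed" and into the interpretation zone.
assert band(0.85) == "yes"
assert band(0.85, gray_high=0.9) == "gray"
```

Raising `gray_high` plays the role of the big blue arrow: the practice itself is unchanged, but it is no longer clearly in the “yes” region.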

Helpful?

This was definitely one of those situations where the rough picture in my mind seemed simple and clear — it was only as I put pen to paper to sketch it out that it turned out to be tricky. Shortly after I finished writing this post (but, obviously, before I published it…as I’m adding this comment at the end), Jason Thompson made a really good case as to what is (misguidedly) driving the legal dimension out of alignment with the ethical perspective. That reminded me that I keep meaning to go back and re-read the last chapter (chapter 9?) of Jim Sterne’s Social Media Metrics book, as I recall that it was an intriguing non-sequitur that considered turning the entire “tracking” model on its head. Food for thought for another post, that.

What do you think? Is this an effective representation of the shifting privacy landscape we’re dealing with? What does it miss?

Analytics Strategy, Social Media

Data Portability vs. Privacy

There is a lot of buzz of late regarding Robert Scoble getting knocked off of Facebook as he was testing out Plaxo and, in the process, scraping data from Facebook. The debate that has primarily raged has been around who “owns” our data when we load it into a social media site. I’m pretty sure that the Terms of Use we all blithely accept spell that out fairly clearly. I’m also pretty sure that legalese is largely irrelevant when it comes to the court of public opinion, as Facebook seems to continually rediscover!

Debbie Weil had an interesting take on the situation in her post: The controversial issue of “data portability” (or what we used to call “privacy”). She makes the point that, “With so many of us living so much of our lives online we are trusting both that our ‘data’ won’t be misused and that it won’t disappear.” We don’t often enough recognize that data portability and privacy, if not directly in conflict, apply pressure in two different directions.

Chris Brogan, Jeremiah Owyang, and many, many others have touched on the subject. In Brogan’s case, and in many of the comments on Twitter, the emphasis is on the nuisance factor of having to re-enter the same information in multiple places. Generally, there is some nod to “privacy” — “it needs to be secure, private, with configurable access permissions” — but that gets thrown in almost as an afterthought. On the other hand, it only takes one or two examples of some form of identity theft to give people pause about making their data truly portable. As a matter of fact, an ongoing discussion in the world of web analytics is, “How much detail can we — and should we — track and keep on visitors to our sites?” And, when governments get involved, the emphasis is virtually always on ensuring privacy rather than on improving efficiency (in the U.S., HIPAA and CAN-SPAM come to mind immediately).

This is a truly thorny issue, and it comes down to trying to accurately manage personal preferences across multiple interrelated, interconnected systems. On one end of the spectrum, the privacy-paranoid person resists sharing any true information whatsoever, and aggressively tells sites not to share his information in any way — even with him! This poor soul is almost definitely going to give himself high blood pressure, and the shorter life he is going to live is going to be inefficiently lived as he continually puts up barriers that he has to repeatedly climb over. On the other extreme is the person who will openly share even his bank account details because he doesn’t believe it will ever bite him in the ass (we can label this archetype Jeremy Clarkson).

The reality is that 99% of us live somewhere in between these two extremes. Most of us believe that where we have placed ourselves on this spectrum is the obviously logical place to be. And most of us are uncomfortable shifting even slightly from our current position towards either end of that spectrum.

The person who has a finite number of cell phone minutes each month on her plan may fiercely guard that number while freely sharing her home number. Another person may have unlimited minutes and no issues with screening her cell phone calls as they arrive, so may prefer that number as her primary, most public contact channel.
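That per-channel flexibility is exactly what makes a “solution” hard to build. As a rough illustration in Python (the class name, channels, and visibility levels are all my own invention, not any real identity system’s API):

```python
# Rough sketch of per-channel privacy preferences: each contact
# channel gets a visibility level, and a site checks that level
# before exposing the data. All names here are invented for
# illustration, not drawn from any real identity system.

VISIBILITY = ["private", "contacts", "public"]  # least to most open


class PrivacyPrefs:
    def __init__(self):
        self.prefs = {}  # channel name -> visibility level

    def share(self, channel, level):
        if level not in VISIBILITY:
            raise ValueError(f"unknown visibility level: {level}")
        self.prefs[channel] = level

    def visible_to(self, channel, audience):
        """Can the given audience ('contacts' or 'public') see the channel?"""
        # Unset channels default to the most restrictive level.
        level = self.prefs.get(channel, "private")
        return VISIBILITY.index(level) >= VISIBILITY.index(audience)


# The limited-minutes example from above: guard the cell number,
# share the home number freely.
prefs = PrivacyPrefs()
prefs.share("cell", "contacts")
prefs.share("home", "public")
```

Even this toy version hints at the management burden: every new channel, site, or audience multiplies the settings a person has to keep straight.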

This means any “solution” will have to be highly configurable. Which, sadly, means that it may be cumbersome to manage. And may struggle to get adopted. I’ll continue to keep my fingers crossed that OpenID, The Todeka Project, or some other approach can allow us to personalize our point on the privacy/portability spectrum.