Adobe Analytics, General, Google Analytics, Technical/Implementation

Fork in the Road: The Big Questions Organizations are Trying to Answer

In a normal year, we’d be long past the point in the calendar where I would have written a blog post on all of the exciting things I had seen at Adobe Summit. Unfortunately, nothing about this spring has been normal – other than Summit being in person again this year (yay!) – because I was unable to attend. Instead, it was my wife and three of my kids who headed to Las Vegas the last week in March; they saw Taylor Swift in concert instead of Run DMC, and I stayed home with the one who had other plans.

And boy, does it sound like I missed a lot. I knew something was up when Adobe announced a new product analytics-based solution to jump into what has already been a pretty competitive battle. Then, another one of our partners, Brian Hawkins, started posting excitedly on Slack that historically Google-dominant vendors were gushing about the power of Analysis Workspace and Customer Journey Analytics (CJA). Needless to say, it felt a bit like three years of pent-up remote conference angst went from a simmer to a boil this year, and I missed all the action. But, in reading up on everyone else’s takes from the event, it sure seems to track with a lot of what we’ve been seeing with our own clients over the past several months as well.

Will digital analytics or product analytics win out?

Product analytics tools have been slowly growing in popularity for years; we’ve seen lots of our clients implement tools like Heap, Mixpanel, or Amplitude on their websites and mobile apps. But it has always been in addition to, not as a replacement for, traditional digital analytics tools. 2022 was the year when it looked like that might change, for two main reasons:

  • Amplitude started adding traditional features, like marketing channel analysis, that had previously been sorely lacking from the product analytics space;
  • Google gave a swift nudge to its massive user base, saying that, like it or not, it will be sunsetting Universal Analytics, and GA4 will be the next generation of Google Analytics.

These two events have gotten a lot of our clients thinking about what the future of analytics looks like for them. For companies using Google Analytics, does moving to GA4 mean that they have to adopt a more product analytics/event-driven approach? Is GA4 the right tool for that switch?

And for Adobe customers, what does all this mean for them? Adobe is currently offering Customer Journey Analytics as a separate product entirely, and many customers are already pretty satisfied with what they have. Do they need to pay for a second tool? Or can they ditch Analytics and switch to CJA without a ton of pain? The most interesting thing to me about CJA is that it offers a bunch of enhancements over Adobe Analytics – no limits on variables, uniques, retroactivity, cross-channel stitching – and yet many companies have not yet decided that the effort necessary to switch is worth it.

Will companies opt for a simple or more customizable model for their analytics platform?

Both GA4 and Amplitude are on the simpler side of tools to implement; you track some events on your website, and you associate some data with those events. And the data model is quite similar between the two (I’m sure this is an overstatement they would both object to, but in terms of the data they accept, it’s true enough). On the other hand, for CJA, you really need to define the data model up front – even if you leverage one of the standard data models Adobe offers. And whichever model you define will be quite different from the one used by Omniture SiteCatalyst / Adobe Analytics for the better part of the last 20 years – though it probably makes far more intuitive sense to a developer, engineer, or data scientist.
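To make that contrast concrete, here’s a minimal sketch in JavaScript. The first call is the kind of event-plus-parameters payload GA4 (via gtag) and Amplitude accept; the second is a record that has to conform to a schema you’ve defined up front before CJA will take it. The parameter names, schema fields, and IDs below are illustrative only, not taken from any particular implementation.

    // Event-based model: name an event and attach some parameters.
    // (gtag syntax; assumes the GA4 gtag.js snippet is already on the page,
    // and the parameter names here are just examples.)
    gtag('event', 'add_to_cart', {
        currency: 'USD',
        value: 49.99,
        item_id: 'SKU-123'
    });

    // Schema-based model: CJA ingests data that conforms to a schema you define
    // up front (typically an XDM schema). The shape below is illustrative only.
    var cjaRecord = {
        timestamp: new Date().toISOString(),
        identityMap: { customerId: [{ id: 'abc-123' }] },
        web: { webPageDetails: { name: 'product detail' } },
        commerce: { productListAdds: { value: 1 } },
        productListItems: [{ SKU: 'SKU-123', priceTotal: 49.99 }]
    };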

Will some companies’ answer to the “GA or Adobe” question be “both”?

One of the more surprising things I heard coming out of Summit was the number of companies considering using both GA4 and CJA to meet their reporting needs. Google has a large number of loyal customers – Universal Analytics is deployed on a huge share of websites worldwide, and most analysts are familiar with the UI. But GA4 is quite different, and the UI is admittedly still playing catch-up to the data collection process itself.

At this point, a lot of heavy GA4 analysis needs to be done either in Looker Studio or BigQuery, which requires SQL (and some data engineering skills) that many analysts are not yet comfortable with. But as I mentioned above, the GA4 data model is relatively simple, and the process of extracting data from BigQuery and moving it somewhere else is straightforward enough that many companies are looking for ways to keep using GA4 to collect the data, but then use it somewhere else.
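To give a sense of what that BigQuery work looks like, here’s a minimal sketch using the official Node.js BigQuery client against the standard GA4 export tables (events_*, with event_name and user_pseudo_id columns). The project and dataset names are hypothetical, and you’d adjust the date range and metric to your own questions.

    // A minimal sketch: count daily purchasers from a GA4 BigQuery export.
    // "my-project" and "analytics_123456789" are hypothetical names.
    const { BigQuery } = require('@google-cloud/bigquery');

    async function dailyPurchases() {
        const bigquery = new BigQuery();
        const query = `
            SELECT event_date,
                   COUNT(DISTINCT user_pseudo_id) AS purchasers,
                   COUNT(*) AS purchase_events
            FROM \`my-project.analytics_123456789.events_*\`
            WHERE event_name = 'purchase'
              AND _TABLE_SUFFIX BETWEEN '20230101' AND '20230131'
            GROUP BY event_date
            ORDER BY event_date`;
        const [rows] = await bigquery.query({ query });
        rows.forEach((r) => console.log(r.event_date, r.purchasers, r.purchase_events));
    }

    dailyPurchases().catch(console.error);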

To me, this is the most fascinating takeaway from this year’s Adobe Summit – sometimes it can seem as if Adobe and Google pretend that the other doesn’t exist. But all of a sudden, Adobe is actually playing up how CJA can help to close some of the gaps companies are experiencing with GA4.

Let’s say you’re a company that has used Universal Analytics for many years. Your primary source of paid traffic is Google Ads, and you love the integration between the two products. You recently deployed GA4 and started collecting data in anticipation of UA getting cut off later this year. Your analysts are comfortable with the old reporting interface, but they’ve discovered that the new interface for GA4 doesn’t yet allow for the same data manipulations that they’ve been accustomed to. You like the Looker Studio dashboards they’ve built, and you’re also open to getting them some SQL/BigQuery training – but you feel like something should exist between those two extremes. And you’re pretty sure GA4’s interface will eventually catch up to the rest of the product – but you’re not sure you can afford to wait for that to happen.

At this point, you notice that CJA is standing in the corner, waving both hands and trying to capture your attention. Unlike Adobe Analytics, CJA is an open platform – meaning, if you can define a schema for your data, you can send it to CJA and use Analysis Workspace to analyze it. This is great news, because Analysis Workspace is probably the strongest reporting tool out there. So you can keep your Google data if you like it – keep it in Google, leverage all those integrations between Google products – but also send that same data to Adobe and really dig in and find the insights you want.

I had anticipated putting together some screenshots showing how easy this all is – but Adobe already did that for me. Rather than copy their work, I’ll just tell you where to find it:

  • If you want to find out how to pull historical GA4 data into CJA, this is the article for you. It will give you a great overview on the process.
  • If you want to know how to send all the data you’re already sending to GA4 to CJA as well, this is the article you want. There’s already a Launch extension that will do just that.

Now maybe you’re starting to put all of this together, but you’re still stuck on one (or all) of these:

“This sounds great but I don’t know if we have the right expertise on our team to pull it off.”

“This is awesome. But I don’t have CJA, and I use GTM, not Launch.”

“What’s a schema?”

Well, that’s where we come in. We can walk you through the process and get you where you want to be. And we can help you do it whether you use Launch or GTM or Tealium or some other tag management system. The tools tend to be less important to your success than the people and the plans behind them. So if you’re trying to figure out what all this industry change means for your company, or whether the tools you have are the right ones moving forward, we’re easy to find and we’d love to help you out.

Photo credits: Thumbnail photo is licensed under CC BY-NC 2.0

Technical/Implementation

Ways to Minimize the Impact of ITP 2.1 on your Analytics Practice

Demystified Partner Tim Patten also contributed to this blog post.

Earlier this week, I shared our team’s thoughts about Apple’s Intelligent Tracking Prevention (ITP), specifically version 2.1 and its impact on digital analytics. These changes have been exhaustively covered, and we likely didn’t add anything new on the topic. But we thought those comments were important to lead into what I think is the tactical discussion that every company needs to have to ensure it deals with ITP 2.1 (or 2.2, or any future version) in a way that is appropriate for its business.

Admittedly, we haven’t done a ton of research on the impact of ITP to digital marketing in general – how it impacts paid search, or display, or social media marketing tools. I mostly work with clients on deploying analytics tools, generally either Adobe or Google Analytics – so that’s where most of our thoughts have been since Apple announced ITP 2.1, and most of what follows will focus on those two vendors. But since the impact of these changes is not limited to traditional “web analytics,” we’ll also share some thoughts on a few more all-encompassing potential solutions.

Adobe Analytics (and other Adobe tools)

Omniture’s earliest implementations used a third-party cookie, set when analytics requests were sent to its servers (first using the 2o7.net domain, followed by omtrdc.net). The negative aspects of third-party cookies led Omniture to introduce a new approach. For nearly a decade, the best practices recommendation for implementing Omniture’s (and then Adobe’s) analytics tool was to work with them to make it appear as if your analytics data were being sent to your own servers. This was done by creating a CNAME record that pointed a subdomain of your site (like metrics.example.com) at Adobe’s data collection servers, and then specifying that subdomain as your tracking server. The first request by a new visitor to your site would be sent to that subdomain, and the response would include a Set-Cookie header that made your analytics visitor ID a first-party cookie. All subsequent analytics requests would be sent to that subdomain as well – without the header, since the cookie was already set.

About five years ago, Adobe decided that was a bit of a cumbersome approach, and as part of its new Experience Cloud ID Service, began setting a first-party cookie using JavaScript. While you could still work with Adobe to use the CNAME approach, it became less critical – and even when using a CNAME, the cookie was still set exclusively with JavaScript. This was a brilliant approach – right up until ITP 2.1 was announced, and all of a sudden, a very high percentage of Safari website visitors now had a visitor ID cookie that would be deleted in 7 days, with nothing they could do about it.

As of May 15, Adobe now has a workaround in place for its customers that had been leveraging the Experience Cloud ID. Customers that already had a CNAME were immediately ready to use this solution, but the rest are required to introduce a CNAME to take advantage. In addition to the CNAME, you must update to the latest version of the Experience Cloud Visitor ID service (version 4.3). In most cases, this can be done through your tag management system – though not all TMS tools have offered this update yet.

It’s important to note how this workaround behaves – it sets an additional cookie called “s_ecid” in the user’s browser using your CNAME tracking server. It does not reissue the older AMCV cookie that was previously used for visitor identification; instead, it uses the s_ecid cookie as a fallback in case the AMCV cookie has expired. The total number of cookies is frequently a concern for IT teams, so make sure you know this if you opt for this approach. You can read more about this implementation in Adobe’s help documentation.

The last important thing to be aware of is that this fix is only for the Experience Cloud visitor ID. It does not address Adobe Target’s mbox cookie, or any other cookies used by other products in the Adobe Marketing Cloud that were previously set with JavaScript. So it solves the biggest problem introduced by ITP 2.1 – but not all of them.

Google Analytics

Ummm…how do I put this nicely?

To this point, Google has offered very little in the way of a recommendation, solution, or anything else when it comes to ITP 2.1. Google’s stance has been that if you feel it’s a problem, you should figure out how to solve it yourself.

This may be somewhat unfortunate – but it makes sense when you think that not only is Google Chrome a competitor to Safari, but other tools in Google’s marketing suite have been heavily impacted each time ITP has introduced new restrictions – and Google didn’t do anything then, either. So this is not a new or unexpected development.

All of this leads to an interesting question: what do I do if I don’t use Adobe Analytics? Or if I use it without a CNAME? Or if I care about other vendors besides Adobe? Luckily there are a few options out there.

Roll Your Own

If you’re looking for a workaround to preserve your cookies, you could always build your own homegrown solution. Simo Ahava discussed several potential ideas here – many of which have real shortcomings. In my opinion, the most viable of these are a series of similar approaches that involve routing some traffic on each page through a type of server-side “gateway” that would “clean” all your cookies for you by re-issuing them with the Set-Cookie header. This approach works regardless of how many domains and subdomains your site encompasses, which makes it fairly robust.

It’s not without its challenges, however. The main challenge is that it requires at least some amount of development work and some long-term maintenance of whatever server-side tools you use – a server-side script, a custom CNAME, etc. You’ll encounter another challenge if your site is a single-page application or does any virtual page view tracking – because some vendors will continue to update their cookies with JavaScript as the user interacts with the page, and each of those updates puts the cookie right back under ITP’s expiration cap. So your homegrown solution has to make sure that it continuously cleans the cookies for as long as the page is open in the browser. Another item that you will need to manage on your own is handling a user’s opt-out settings across all of the different cookies that you manage through this new “gateway.”
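To illustrate the “gateway” idea (and not as a production-ready implementation), here’s a minimal Node/Express sketch. It assumes you’ve pointed a CNAME’d subdomain of your site at the service so the cookies it re-issues are first-party; the endpoint path, domain, and cookie names are all placeholders.

    // Minimal sketch of a cookie "refresh" endpoint, assuming it runs behind a
    // CNAME'd subdomain of your site (e.g. refresh.example.com) so the cookies
    // it sets are first-party. Endpoint and cookie names are hypothetical.
    const express = require('express');
    const cookieParser = require('cookie-parser');

    const app = express();
    app.use(cookieParser());

    const COOKIES_TO_PRESERVE = ['_ga', 's_ecid']; // whichever IDs you care about

    app.get('/refresh', (req, res) => {
        COOKIES_TO_PRESERVE.forEach((name) => {
            const value = req.cookies[name];
            if (value) {
                // Re-issue the cookie via the Set-Cookie header so Safari treats
                // it as a server-set cookie rather than a JavaScript-set one.
                res.cookie(name, value, {
                    maxAge: 1000 * 60 * 60 * 24 * 365 * 2, // roughly two years
                    domain: '.example.com',
                    secure: true,
                    sameSite: 'lax'
                });
            }
        });
        res.status(204).end();
    });

    app.listen(3000);

The page itself would then request that endpoint (with credentials included) on each load – and, for the single-page app scenario described above, after each subsequent cookie update as well.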

Third-Party Tools

If building your own custom solution to solve the problems introduced by ITP 2.1 sounds tedious (at best) or a nightmare (at worst), as luck would have it, you have one last option to consider. There are a handful of companies that have decided to tackle the problem for you. The one I have the most experience with is called Accutics, and you may have seen them at Adobe Summit or heard them on a recent episode of the Digital Analytics Power Hour.

The team at Accutics saw an opportunity in Apple’s ITP 2.1 announcements, and built what they call a “cookie saver” solution that can deal with all of your JavaScript-based cookies. It’s an approach very similar to the solution Adobe deployed a few weeks ago – with the added benefit that it will work for as many cookies as you need it to. They’ve also built their tool to deal with the single-page app considerations I mentioned in the previous section, as they continuously monitor the cookies you tell them you want to preserve to ensure they stay clean (though they do this just like Adobe, so you might notice a few additional cookies show up in your browser as a result). Once you’ve gotten a CNAME set up, the Accutics solution can be quickly deployed through any tag management system in a matter of minutes, so the solution is relatively painless compared to the impact of ITP 2.1.

Conclusion

While Apple’s release of ITP 2.1 may feel a bit like someone tossed a smoke bomb into the entryway of your local grocery store, the good news is that you have options to deal with it. Some of these options are more cumbersome than others – but you don’t have to feel helpless. You can analyze your own data to determine the impact of ITP on your business, then weigh the potential solutions out there to identify the right approach as you move forward. ITP won’t be the last – or most problematic – innovation in user privacy that poses a challenge for digital analytics. Luckily, there are workarounds available to you – you just need to decide which solution will allow you to best balance your customers’ privacy with your organization’s measurement goals.

Technical/Implementation

A Less Technical Guide to Apple’s ITP 2.1 Changes

Demystified Partner Tim Patten also contributed to this blog post.

There are likely very few analysts and developers who have not yet heard that Apple recently introduced some major changes into its Safari web browser. A recent version of Apple’s Intelligent Tracking Prevention (ITP 2.1) has the potential to fundamentally change the way business is done and analyzed online, and this has a lot of marketers quite worried about the future of digital analytics. You may have noticed that we at Demystified have been pretty quiet about the whole thing – as our own Adam Greco has frequently reminded us over the past few weeks. This isn’t because Kevin, Tim, and I don’t have some strong opinions about the whole thing, or some real concerns about what it means for our industry. Rather, it’s based on two key reasons:

  • Apple has released plenty of technical details about ITP 2.1 – what problems Apple sees and is trying to solve, what the most recent versions of Safari do to solve these problems, and what other restrictions may lie ahead. What’s more, the Measure Slack community has fostered robust discussion on what ITP 2.1 means to marketers, and we wholeheartedly endorse all the discussion taking place there.
  • ITP 2.1 is a very new change, and a very large shift – and we’ve all seen that the leading edge of a technological shift sometimes ends up being a bit ahead of its time. While discussing the potential implications of ITP 2.1 with clients and peers, we have been taking a bit of a “wait and see” approach to the whole thing. We’ve wanted to see not just what other browsers will do (follow suit like Firefox? hold steady like Chrome?), but what the vendors impacted by these changes – and that our clients care most about – will decide to do about them.

Now that the dust has settled a bit, and we’ve moved beyond ITP 2.1 to even more restrictions with ITP 2.2 (which lowers the limit from seven days to one if the URL contains query parameters meant to pass IDs from one domain to another), we feel like we’re on a little bit firmer footing and prepared to discuss some of the details with our clients. As Tim and I talked about what we wanted to write, we landed on the idea that most of the developers we talk to have a pretty good understanding of what Apple’s trying to do here – but analysts and marketers are still somewhat in the dark. So we’re hoping to present a bit of a “too long, didn’t read” summary of ITP 2.1. A few days from now, we’ll share a few thoughts on what we think ITP 2.1 means for most of the companies we work with – those that use Adobe or Google Analytics and are wondering what it means for the data those vendors deliver. If you feel like you’re still in the dark about cookies in general, you might want to review a series of posts I wrote a few years ago about why they are important in digital marketing. Alternatively, if you find yourself more interested in the very technical details of ITP, Simo Ahava has a great post that really drills into how it works.

What is the main problem Apple is trying to solve with ITP?

Apple has decided to take a much more proactive stance on protecting consumer privacy than other companies like Facebook or Google. ITP is its plan for these efforts. Early versions of ITP released through its Safari web browser revolved primarily around limiting the spread of third-party cookies, which are generally agreed upon to be intrusive. Basically, Safari limited the amount of time a third-party cookie could be stored unless the user interacted with the site that set the cookie and it was obvious he or she had an interest in the site.

Advertisers countered this effort pretty easily by coming up with ways to pass IDs between domains through the query string, grabbing values from third-party cookies and rewriting them as first-party cookies, and so forth. So Apple has now tightened controls even further with ITP 2.1 – though the end goal of protecting privacy remains the same.

What is different about ITP 2.1?

The latest versions of ITP take these efforts forward multiple levels. Where earlier versions of ITP focused mainly on third-party cookies, 2.1 takes direct aim at first-party cookies. But not all first-party cookies – just those that are set and manipulated with JavaScript (using the document.cookie browser object). Most cookies that contribute to a user’s website experience are set on the server as part of a page load – for example, if a site sets a cookie containing my user ID when I log in, to keep me logged in on subsequent pages. This works because the server’s response to the browser includes a special header (Set-Cookie) instructing the browser to store a value. Most advertising cookies, on the other hand, are set with code in the vendors’ JavaScript tags, and JavaScript cannot specify that header. Apple has made the giant leap of assuming that any cookie set with JavaScript using document.cookie is non-essential and potentially a contributor to cross-site tracking, the elimination of which is the key goal of ITP. Any cookie set in this way will be discarded by Safari after a maximum of 7 days – unless the user returns to the site before the 7 days pass, which resets the timer for, at most, another 7 days.
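In code terms, the distinction ITP 2.1 draws looks something like this (the cookie names are just examples):

    // A cookie written by JavaScript with document.cookie – this is the kind
    // Safari's ITP 2.1 caps at a 7-day lifetime, regardless of the expiry you ask for.
    document.cookie = 'visitor_id=abc123; max-age=' + 60 * 60 * 24 * 730 + '; path=/';

    // By contrast, a server-set cookie arrives as an HTTP response header
    // (shown here as a comment, since JavaScript can't set response headers):
    //   Set-Cookie: session_id=xyz789; Max-Age=63072000; Path=/; Secure; HttpOnly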

What does this mean for my analytics data?

The side effect of this decision is that website analytics tracking is potentially placed on the same footing as online advertising. Google Analytics sets its unique client ID cookie in this way – as does Adobe Analytics for many implementations. While it may be difficult for a non-developer to understand the details of ITP 2.1, it’s far easier to understand the impact on data quality when user identification is reset so frequently.

If you think this seems a bit heavy-handed on Apple’s part, you’re not alone. But, unfortunately, there’s not a lot that we as analysts and developers can do about it. And Apple’s goal is actually noble – online privacy and data quality should be a priority for each of us. Apple’s progressively tighter restrictions are largely a result of so many vendors seeking out workarounds to preserve the status quo rather than coming up with new, more privacy-focused ways of doing business online.

Before you decide that ITP 2.1 is the end of analytics or your career as you know it, there are some things to think about that might help you talk yourself off the ledge. You can put your data to the test to see how big of a deal ITP is for you:

  • How much of your traffic comes from mobile devices? Apple is one of the most common manufacturers of mobile devices, so if you have a lot of mobile traffic, you should be more concerned about ITP.
  • How much of your traffic comes from WebKit browsers (Safari being by far the largest)? Safari still has a pretty small share of desktop web traffic – but makes up a much larger share of mobile traffic because it is the default browser on iOS devices. While other browsers like Firefox have shown signs they might follow Apple’s lead, there still isn’t a critical mass of other browsers giving the indication they intend to implement the same restrictions.
  • Does your website require authentication to use? If the answer is yes, all of the major analytics providers offer means to use your own unique identifier rather than the default ones they set via JavaScript-based cookies (see the short sketch after this list).
  • Does your website have a high frequency of return visits? If your user base returns to the site very frequently within a 7-day window, the impact to you may be relatively low (though Apple also appears to be experimenting with a window as low as 1 day).
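Here’s the sketch promised above, for the authenticated-site case. The variables are standard ones from each vendor, but whether they fully replace the JavaScript-set cookie or merely supplement it differs by vendor and implementation, so treat this as a starting point rather than a recipe.

    // Hypothetical ID from your own authentication system.
    var hashedCustomerId = 'HASHED-CUSTOMER-ID';

    // Adobe Analytics: when s.visitorID is set, it takes precedence over the
    // cookie-based visitor ID, so the cookie cap no longer splits visitors.
    s.visitorID = hashedCustomerId;

    // Google Analytics: the User-ID feature ties hits to your own identifier,
    // though GA still sets its client ID cookie alongside it.
    gtag('config', 'GA_MEASUREMENT_ID', { user_id: hashedCustomerId });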

After reading all of these questions, you may still be convinced ITP 2.1 is a big deal for your organization – and you’re probably right. Unique visitor counts will likely be inflated, and attribution analytics will be heavily impacted if the window is capped at 7 days – and these are just the most obvious effects of the changes. There are several different paths you can take from here – some will reduce or eliminate your problems, while others amount to ignoring the problem and hoping it goes away. We’ll follow up later this week to describe these options – and specifically how they relate to Adobe and Google Analytics, since they are the tools most of our clients rely on to run their businesses.

Tag Management, Technical/Implementation

Single-Page Apps: Dream or Nightmare?

A few months ago, I was discussing a new project with a prospective client, and they described what they needed like this: “We have a brand new website and need to re-implement Adobe Analytics. So far we have no data layer, and we have no developer resources in place for this project. Can you help us re-implement Adobe Analytics?” I generally avoid projects just like this – not because I can’t write server-side application code in several languages, but because even if I am going to write code for a project like that, I still need a sharp developer or two to bounce ideas off of, and ask questions to find out where certain files are located, what standards they want me to follow, and other things like that. In an effort to do due diligence, I asked them to follow up with their IT team on a few basics. Which platform was their site built on? Which programming languages would be required?

When they followed up by saying that the site was built on Websphere using ReactJS, I was sure this project was doomed to failure – every recent client I had worked with that was using either of these technologies struggled mightily, and here was a client using both! In addition, while I understand the premise behind using ReactJS and can generally work my way through a ReactJS application, having to do all the heavy lifting myself was a terrifying thought. In an effort to do due diligence, I agreed to discuss this project with some members of their IT team.

On that call, I quickly realized that there had been a disconnect in how the marketing folks on the project had communicated what the IT folks wanted me to know. I learned that a data layer already existed on the site – and it already contained pretty much everything identified in the solution design that needed to be tracked. We still had to identify a way to track a few events on the website (like cart adds), but I felt good enough about the project to take it on.

This project, and a handful of others over the past year, have challenged some strong opinions I’ve held on single page applications (SPAs for short). Here are just a few of those:

  • SPAs have just as many user experience challenges as the page-based applications they are designed to replace.
  • SPAs present a major measurement challenge for traditional analytics tools like Adobe or Google Analytics.
  • Most companies move to an SPA-based website because they look and sound cool – they’re just the latest “shiny object” that executives decide they have to have.

While I still hold each of these opinions to some degree, the past few months have given me a much more open mind about single-page applications and frameworks like React or Angular. Measurement of SPAs is definitely a challenge – but it’s not an insurmountable one. If your company is thinking about moving to a single-page application, you need to understand that – just like the site itself is going to be fundamentally different than what you’re used to – the way you measure it will be as well. I’d like to offer a few things you’ll want to strongly consider as you decide how to track your new SPA.

A New Data Architecture

In many ways, SPAs are much better equipped to support a data layer than the old, Frankenstein-ish website you’re moving away from. Many companies I know have such old websites that they pre-date their adoption of a tag management system. Think about that – a tool you probably purchased at least six years ago still isn’t as old as your website itself! So when you implemented your TMS, you probably bolted on your data layer at the same time, grabbing data wherever you could find it.

Migrating to an SPA – even for companies that do this one page at a time – requires a company to fundamentally rethink its approach to data. It’s no longer available in the same ways – which is a good thing. Rather than building the data layer one template at a time like in the past, an SPA typically accesses the data it needs to build a page through a series of APIs that are exposed by back-end development teams. For example, data related to the authenticated user is probably retrieved as the page loads from a service connected to your CRM; data relevant to the contents of a customer’s shopping cart may be accessed through an API integrated with your e-commerce platform; and the content for your pages is probably accessed through an integration with your website CMS. But unlike when you implemented your data layer the first time – when your website already had all that data the way it needed it and in the right locations on the page – your development team has to rethink and rebuild all of that data architecture. You both need the data this time around – which should make collaboration much easier and help you avoid claims that they just can’t get you the data you need.

Timing Challenges for Data Availability

As part of this new approach to data, SPAs typically also introduce a shift in the way they make this data accessible to the browser. The services and APIs I mentioned in the previous section are almost always asynchronous – which introduces a new challenge for measurement teams implementing tracking on SPAs.

On a traditional website, the page is generated on the server, and as this happens, data is pulled into the page from appropriate systems. That data is already part of the page when it is returned to the browser. On an SPA, the browser gets an almost “empty” page with a bunch of instructions on where to get the relevant data for the page; then, as the user navigates, rather than reloading a new page, it just gets a smaller set of instructions for how to update the relevant parts of the page to simulate the effect of navigation.

These “instructions” are the API calls I mentioned earlier – the browser is pulling in user data from one service, cart data from another, and product/content data from yet another. As data is made available, it is inserted into the page in the appropriate spot. This can have a positive impact on user experience, because less-relevant data can be added as it comes back, rather than holding up the loading of the entire page. But let’s just say it presents quite a challenge to analytics developers. This is because most tag management systems were built and implemented under the assumption that you’d want to immediately track every page as it loads, and that every new page would actually be a new page. SPAs don’t work like that – if you track an SPA on the page load, or even the DOM ready event, you’re probably going to track it before a significant amount of data is available. So you have to wait to track the initial page load until all the data is ready – and then you have to track subsequent view updates of the SPA as if a new page had actually loaded.
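One common pattern – sketched here with hypothetical API endpoints, data layer shape, and event name – is to wait until all of the view’s data calls have resolved, populate the data layer, and only then announce to the TMS that the view is ready to be tracked:

    // Hypothetical endpoints, object-style data layer, and event name.
    async function renderProductView(productId) {
        const [user, cart, product] = await Promise.all([
            fetch('/api/user').then((r) => r.json()),
            fetch('/api/cart').then((r) => r.json()),
            fetch('/api/products/' + productId).then((r) => r.json())
        ]);

        // ...render the view with the returned data...

        // Populate the data layer only once everything has arrived, then tell
        // the TMS (via a DOM event it can listen for) that this view is trackable.
        window.digitalData = Object.assign(window.digitalData || {}, { user, cart, product });
        document.dispatchEvent(new CustomEvent('viewReady', { detail: { view: 'product' } }));
    }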

You may have experienced this problem before with a traditional website – many companies experiment with the idea of an SPA by trying it out on a smaller part of their website, like user authentication or checkout. Or you’ve maybe seen it with certain third-party tools like your recommendation engine – which, while not really an SPA, have similar timing issues because they feed content onto the page asynchronously. The good news is that most companies that go all-in on SPAs do so all at once, rather than trying to migrate single sections over a longer period of time. They undertake a larger replatforming effort, which probably makes it easier to solve for most of these issues.

Figuring out this timing is one of the most important hurdles you’ll need to coordinate as you implement tracking on an SPA – and it’s different for every site. But the good news is that – as long as you’re using one of the major tag management systems, or planning to migrate from Adobe DTM to Launch as part of your project – the timing is the hard part. Every major TMS has a solution to this problem built right in that allows you to fire any tag on any event that occurs on the page. So your web developers just need to notify your TMS (and your analytics code) when the page is truly “ready.” (Again, if you’re still using Adobe DTM, I can’t emphasize strongly enough that you should switch to Launch if you’re building an SPA. DTM has a few notable “features” that pose major problems for SPAs.)
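With Launch, for example, that notification can be as small as a direct call from the application code once the data layer is populated; a direct call rule then uses it as its trigger. (The identifier below is made up – use whatever naming convention your team agrees on.)

    // In the SPA, once the view is ready (see the earlier sketch):
    if (window._satellite) {
        // Fires any Launch rule configured with a Direct Call event for this identifier.
        _satellite.track('view-ready', { view: 'product' });
    }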

A New Way of Tracking Events

Another major shift between traditional websites and SPAs is in how on-page events are most commonly tracked. It’s likely that when you first implemented a tag management system, you used a combination of CSS selectors and custom JavaScript you deployed in the TMS, along with events you had your web developers tag that would “trigger” the TMS to do something. Because early sales teams for the major TMS companies used a pitch along the lines of “Do everything without IT!” many companies tried to implement as much tracking as they could using hacks and one-offs in the TMS. The net effect may have been to move all your ugly, one-off tracking JavaScript from your website code into your TMS – without making the actual tracking any cleaner or more elegant.

The good news is that SPAs will force you to clean up your act – because many of the traditional ways of tracking fall down. Because an SPA is constantly updating the DOM without loading a new page, you can’t just add a bunch of event listeners that bind when the page loads (or on DOM ready). You’d need to turn off all your listeners on each page refresh and turn on a bunch of new ones, which can be tedious and prone to error. Another option that will likely not work in every case is to target very broad events (like a body “click”) and then within those handlers just see which element first triggered the event. This approach could also potentially have a negative impact on the user’s experience.
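For reference, the “broad listener” approach just described is usually implemented with event delegation – one listener bound to the document that inspects whatever was clicked (the data attribute here is hypothetical):

    // A single delegated listener survives SPA view changes because it's bound
    // to the document, not to elements that get torn down and re-rendered.
    document.addEventListener('click', function (e) {
        // Element.closest() walks up from the clicked node looking for a match.
        var addToCart = e.target.closest('[data-track="add-to-cart"]');
        if (addToCart) {
            // Hand off to your TMS or analytics code from here.
            console.log('add to cart clicked', addToCart.dataset.sku);
        }
    });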

Instead, many teams developing an SPA also develop a new model for listening and responding to events that, just like the data layer, can be leveraged by analytics teams as well.

The company I mentioned at the beginning of this post had an entire catalog of events they already needed to listen for to make the SPA work – for example, they needed to listen for each cart add event so that they could send data about that item to their e-commerce system. The e-commerce system would then respond with an updated version of all the data known about a future order. So they built an API for this – and then, the analytics team was able to use it as well. Without any additional development, we were able to track nearly every key interaction on the website. This was all because they had taken the time to think about how events and interactions should work on the website, and they built something that was extensible beyond its core purpose.
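In practice, that kind of event service often looks like a simple publish/subscribe layer that the analytics team can hook into alongside everyone else. This is a hypothetical sketch, not the client’s actual implementation:

    // Hypothetical event bus exposed by the SPA team; names are illustrative.
    var eventBus = {
        handlers: {},
        on: function (name, fn) {
            (this.handlers[name] = this.handlers[name] || []).push(fn);
        },
        emit: function (name, payload) {
            (this.handlers[name] || []).forEach(function (fn) { fn(payload); });
        }
    };

    // The application emits this wherever an item is added to the cart:
    //   eventBus.emit('cart:add', { sku: 'SKU-123', price: 49.99 });

    // The analytics team subscribes without any additional development work:
    eventBus.on('cart:add', function (item) {
        // Push to your data layer, fire a direct call rule, etc.
        _satellite.track('cart-add', item);
    });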

This is the kind of thing that a company would almost never do with an old website – it’s a large effort to build this type of event service, and it has to be done inside an old, messy codebase. But when you build an SPA, you have to do it anyway – so you might as well add a little bit more work up front to save you a ton of time later on. Developers figure these kinds of things out as they go – they learn tricks that will save time in the future. SPAs can offer a chance to put some of these tricks into action.

Conclusion

There are many other important things to consider when building a single-page application, and it’s a major undertaking that can take longer than a company plans for. But while I still feel that it’s more difficult to implement analytics on an SPA than any other type of web-based application, it doesn’t have to be the nightmare that many companies encounter. Just remember to make sure your development team is building all this new functionality in a way that everyone can benefit from:

  • While they’re making sure all the data necessary for each view (page) of the website is available, make sure they provide hooks so that other teams (like analytics) can access that data.
  • Consider the impact on your website of all of that data showing up at different times.
  • Develop an event model that makes it easy to track key interactions on the site without relying on fragile CSS selectors and DOM hacks.

A few weeks ago at our ACCELERATE conference, I led a roundtable for the more technically minded attendees. The #1 challenge companies were dealing with when it came to analytics implementation was SPAs. But the key is to take advantage of all the opportunities an SPA can offer – you have to realize it gives you the chance to fix all the things that have broken and been patched together over the years. Your SPA developers are going to spend a lot of time getting the core functionality right – and they can do it in a way that can make your job easier, too, if you get out in front of them and push them to think in innovative ways. If you do, you might find yourself wondering why some folks complain so much about tracking single-page apps. But if you don’t, you’ll be right there complaining with everyone else. If you’re working with SPAs, I’d love to hear from you about how you’re solving the challenges they present – or where you’re stuck and need a little help.

Photo Credit: www.gotcredit.com

Industry Analysis, Tag Management, Technical/Implementation

Stop Thinking About Tags, and Start Thinking About Data

Nearly three weeks ago, I attended Tealium’s Digital Velocity conference in San Francisco. I’ve attended this event every year since 2014, and I’ve spent enough time using its Universal Data Hub (the name of the combined UI for AudienceStream, EventStream, and DataAccess, if you get a little confused by the way these products have been marketed – which I do), and attended enough conferences, to know that Tealium considers these products to be a big part of its future and a major part of its product roadmap. But given that the majority of my clients are still heavily focused on tag management and getting the basics under control, I’ve spent far more time in Tealium iQ than any of its other products. So I was a little surprised as I left the conference on the last day by the force with which my key takeaway struck me: tag management as we knew it is dead.

Back in 2016, I wrote about how much the tag management space had changed since Adobe bought Satellite in 2013. It’s been a while since tag management was the sole focus of any of the companies that offer tag management systems. But what struck me at Digital Velocity was that the most successful digital marketing organizations – while considering tag management a prerequisite for their efforts – don’t really use their tools to manage tags at all. I reflected on my own clients, and found that the most successful ones have realized that they’re not managing tags at all – they’re managing data. And that’s why Tealium is in such an advantageous position relative to any of the other companies still selling tag management systems while Google and Adobe give it away for free.

This idea has been kicking around in my head for a while now, and maybe I’m stubborn, but I just couldn’t bring myself to admit it was true. Maybe it’s because I still have clients using Ensighten and Signal – in spite of the fact that neither company seems to have committed many resources to its tag management product lately (both seem much more heavily invested in identity and privacy these days). Or maybe it’s because I still think of myself as the “tag management guy” at Demystified, and haven’t been able to quite come to grips with how much things have changed. But my experience at Digital Velocity was really the final wake-up call.

What finally dawned on me at Digital Velocity is that Tealium, like many of its early competitors, really doesn’t think of itself as a tag management company anymore, either. They’ve done a much better job of disguising that, though – because they continue to invest heavily in TiQ, and have even added some really great features lately (I’m looking at you, New JavaScript Code Extension). And maybe they haven’t really had to disguise it, either, because of a single decision they made very early on in their history: the decision to emphasize a data layer and tightly couple it with all the core features of their product. In my opinion, that’s the decision by any of the early tag management vendors that had the most impact on the industry as a whole.

Most tag management vendors initially offered nothing more than code repositories outside of a company’s regular IT processes. They eventually layered on some minimal integration with a company’s “data layer” – but really without ever defining what a data layer was or why it was important. They just allowed you to go in and define data elements, write some code that instructed the TMS on how to access that data, and then – in limited cases – gave you the option of pushing some of that data to your different vendor tags.

On the other hand, Tealium told its customers up front that a good data layer was required to be successful with TiQ. They also clearly defined best practices around how that data layer should be structured if you wanted to tap into the power of their tool. And then they started building hundreds of different integrations (i.e. tags) that took advantage of that data layer. If they had stopped there, they would have been able to offer customers a pretty useful tool that made it easier to deploy and manage JavaScript tags. And that would have made Tealium a pretty similar company to all of its early competitors. Fortunately, they realized they had built something far more powerful than that – the backbone of a potentially very powerful customer data platform (or, as someone referred to Tealium’s tag management tool at DV, a “gateway drug” to its other products).

The most interesting thing that I saw during those two days was that there are actual companies for which tag management is only a subset of what they are doing through Tealium. In previous years, Tealium’s own product team has showcased AudienceStream and EventStream. But this year, they had actual customers showing off real-world examples of the way that they have leveraged these products to do some pretty amazing things. Tealium’s customers are doing much more real-time email marketing than you can do through traditional integrations with email service providers. They’re leveraging data collected on a customer’s website to feed integrations with tools like Slack and Twilio to meet customers’ needs in real-time. They’re addressing legitimate concerns about the impact all these JavaScript tags have on page-load performance by doing more flexible server-side tagging than is possible through most tools. And they’re able to perform real-time personalization across multiple domains and devices. That’s some really powerful stuff – and way more fun to talk about than “tags.” It’s also the kind of thing every company can start thinking about now, even if it’s something you have to ramp up to first.

In conclusion, Tealium isn’t the only company moving in this direction. I know Adobe, Google, and Salesforce all have marketing tools that offer a ton of value to their customers. Segment offers the ability to do server-side integrations with many different marketing tools. But I’ve been doing tag management (either through actual products or my own code) for nearly 10 years, and I’ve been telling customers how important it is to have a solid data layer for almost as long – at Salesforce, we had a data layer before anyone actually called it that, and it was so robust that we used it to power everything we did. So to have the final confirmation that tag management is the past and that customer data is the future was a pretty cool experience for me. It’s exciting to see what Adobe Launch is doing with its extension community and the integration with the newest Adobe mobile SDKs. And there are all kinds of similar opportunities for other vendors in the space. So my advice to marketers is this: if you’re still thinking in terms of tags, or if you still think of all your third-party vendors as “silos,” make the shift to thinking about data and how to use it to drive your digital marketing efforts.

Photo Credit: Jonathan Poh (Flickr)

Adobe Analytics, Tag Management, Technical/Implementation, Testing and Optimization

Adobe Target + Analytics = Better Together

Last week I wrote about an Adobe Launch extension I built to familiarize myself with the extension development process. This extension can be used to integrate Adobe Analytics and Target in the same way that used to be possible prior to the A4T integration. For the first several years after Omniture acquired Offermatica (and Adobe acquired Omniture), the integration between the two products was rather simple but quite powerful. By using a built-in list variable called s.tnt (which did not count against the three list variables per report suite available to all Adobe customers), Target would pass a list of all activities and experiences in which a visitor was a participant. This enabled reporting in Analytics that would show the performance of each activity, and allow for deep-dive analysis using all the reports available in Analytics (Target offers a powerful but limited number of reports). When Target Standard was released, this integration became more difficult to utilize, because if you choose to use Analytics for Target (A4T) reporting, the plugins required to make it work are invalidated. Luckily, there is a way around it, and I’d like to describe it today.

Changes in Analytics

To re-create the old s.tnt integration, you’ll need to use one of your three list variables. Choose the one you want, as well as the delimiter and the expiration (the s.tnt expiration was two weeks).

Changes in Target

The changes you need to make in Target are nearly as simple. Log into Target, go to “Setup” in the top menu and then click “Response Tokens” in the left menu. You’ll see a list of tokens, or data elements that exist within Target, that can be exposed on the page. Make sure that activity.id, experience.id, activity.name, and experience.name are all toggled on in the “Status” column. That’s it!

Changes in Your TMS

What we did in Analytics and Target made an integration possible – we now have a list variable ready to store Target experience data, and Target will now expose that data on every mbox call. Now, we need to connect the two tools and get data from Target to Analytics.

Because Target is synchronous, the first block of code we need to execute must also run synchronously – this might cause problems for you if you’re using Signal or GTM, as there aren’t any great options for synchronous loading with those tools. But you could do this in any of the following ways:

  • Use the “All Pages – Blocking (Synchronous)” condition in Ensighten
  • Put the code into the utag.sync.js template in Tealium
  • Use a “Top of Page” (DTM) or “Library Loaded” rule (Launch)

The code we need to add synchronously attaches an event listener that will respond any time Target returns an mbox response. The response tokens are inside this response, so we listen for the mbox response and then write that data somewhere it can be accessed by other tags. Here’s the code:

    if (window.adobe && adobe.target) {
        document.addEventListener(adobe.target.event.REQUEST_SUCCEEDED, function(e) {
            if (e.detail.responseTokens) {
                var tokens = e.detail.responseTokens;
                // keep a running list of the activities already captured on this page
                window.targetExperiences = window.targetExperiences || [];
                for (var i = 0; i < tokens.length; i++) {
                    var inList = false;
                    for (var j = 0; j < targetExperiences.length; j++) {
                        if (targetExperiences[j].activityId == tokens[i]['activity.id']) {
                            inList = true;
                            break;
                        }
                    }

                    if (!inList) {
                        targetExperiences.push({
                            activityId: tokens[i]['activity.id'],
                            activityName: tokens[i]['activity.name'],
                            experienceId: tokens[i]['experience.id'],
                            experienceName: tokens[i]['experience.name']
                        });
                    }
                }
            }

            if (window.targetLoaded) {
                // TODO: respond with an event tracking call
            } else {
                // TODO: respond with a page tracking call
            }

            // flag the first mbox response so subsequent responses (and the
            // failsafe below) know the page tracking call has already fired
            window.targetLoaded = true;
        });
    }

    // set failsafe in case Target doesn't load
    setTimeout(function() {
        if (!window.targetLoaded) {
            window.targetLoaded = true;
            // TODO: respond with a page tracking call
        }
    }, 5000);

So what does this code do? It starts by adding an event listener that waits for Target to send out an mbox request and get a response back. Because of what we did earlier, that response will now carry at least a few tokens. If any of those tokens indicate the visitor has been placed within an activity, it checks to make sure we haven’t already tracked that activity on the current page (to avoid inflating instances). It then adds activity and experience IDs and names to a global array called “targetExperiences,” though you could push them to your data layer or anywhere else you want. We also set a flag called “targetLoaded” to true, which allows us to fire either a page tracking call or an event tracking call and avoid inflating page view counts on the page. We also have a failsafe in place, so that if for some reason Target does not load, we can initiate some error handling and avoid delaying tracking.

You’ll notice the word “TODO” in that code snippet a few times, because what you do with this event is really up to you. This is the point where things get a little tricky. Target is synchronous, but the events it registers are not. So there is no guarantee that this event will be triggered before the DOM ready event, when your TMS likely starts firing most tags. So you have to decide how you want to handle the event. Here are some options:

  • My code above is written in a way that allows you to track a pageview on the very first mbox load, and a custom link/event tracking call on all subsequent mbox updates. You could do this with a utag.view and utag.link call (Tealium), or trigger a Bootstrapper event with Ensighten, or a direct call rule with DTM. If you do this, you’ll need to make sure you configure the TMS to not fire the Adobe server call on DOM ready (if you’re using DTM, this is a huge pain; luckily, it’s much easier with Launch), or you’ll double-count every page.
  • You could just configure the TMS to fire a custom link call every time, which will probably increase your server calls dramatically. It may also make it difficult to analyze experiences that begin on page load.

What my Launch extension does is fire one direct call rule on the first mbox call, and a different direct call rule for all subsequent mbox calls. You can then configure the Adobe Analytics tag to fire an s.t() call (pageview) for that initial direct call rule, and an s.tl() call for all others. If you’re doing this with Tealium, make sure to configure your implementation to wait for your utag.view() call rather than allowing the automatic one to track on DOM ready. This is the closest behavior to how the original Target-Analytics integration worked.
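To tie that back to the code snippet above, here is one hypothetical way to fill in the TODOs with Launch direct call rules and then map the collected experiences into the list variable chosen earlier. The rule identifiers, the choice of list1, the delimiter, and the activityId:experienceId format are all illustrative:

    // Inside the REQUEST_SUCCEEDED listener (and the failsafe) above:
    //   first mbox response       -> _satellite.track('target-page-view');
    //   subsequent mbox responses -> _satellite.track('target-mbox-update');

    // In the Adobe Analytics rule those direct calls trigger, build the list
    // variable from the experiences collected on the page (this assumes the
    // AppMeasurement "s" object is in scope in the rule's custom code):
    s.list1 = (window.targetExperiences || []).map(function (exp) {
        return exp.activityId + ':' + exp.experienceId;
    }).join(',');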

I’d also recommend not limiting yourself to using response tokens in just this one way. You’ll notice that there are tokens available for geographic data (based on an IP lookup) and many other things. One interesting use case is that geographic data could be extremely useful in achieving GDPR compliance. While the old integration was simple and straightforward, and this new approach is a little more cumbersome, it’s far more powerful and gives you many more options. I’d love to hear what new ways you find to take advantage of response tokens in Adobe Target!

Photo Credit: M Liao (Flickr)

Adobe Analytics, Tag Management, Technical/Implementation

My First Crack at Adobe Launch Extension Development

Over the past few months, I’ve been spending more and more time in Adobe Launch. So far, I’m liking what I see – though I’m hoping the publish process gets ironed out a bit in coming months. But that’s not the focus of this post; rather, I wanted to describe my experience working with extensions in Launch. I recently authored my first extension – which offers a few very useful ways to integrate Adobe Target with other tools and extensions in Launch. You can find out more about it here, or ping me with any questions if you decide to add the extension to your Launch configuration. Next week I’ll try to write more about how you might do something similar using any of the other major tag management systems. But for now, I’m more interested in how extension development works, and I’d like to share some of the things I learned along the way.

Extension Development is New (and Evolving) Territory for Adobe

The idea that Adobe has so freely opened up its platform to allow developers to share their own code across Adobe’s vast network of customers is admittedly new to me. After all, I can remember the days when Omniture/Adobe didn’t even want to open up its platform to a single customer, much less all of them. Remember the days of usage tokens for its APIs? Or having to pay for a consulting engagement just to get the code to use an advanced plugin like Channel Manager? So the idea that Adobe has opened things up to the point where I can write my own code within Launch, programmatically send it to Adobe, and have it then available for any Adobe customer to use – that’s pretty amazing. And for being so new, the process is actually pretty smooth.

What Works Well

Adobe has put together a pretty solid documentation section for extension developers. All the major topics are covered, and the Getting Started guide should help you get through the tricky parts of your first extension like authentication, access tokens, and uploading your extension package to the integration environment. One thing to note is that just about everything you define in your extension is a “type” of that thing, not the actual thing. For example, my extension exposes data from Adobe Target for use by other extensions. But I didn’t immediately realize that my data element definitions didn’t actually define new data elements for use in Launch; they only created new “types” of data elements in the UI that can then be used to create data elements. The same is true for custom events and actions. That makes sense now, but it took some getting used to.

During the time I spent developing my extension, I also found the Launch product team is working continuously to improve the process for us. When I started, the documentation offered a somewhat clunky process to retrieve an access token, zip my extension, and use a Postman collection to upload it. By the time I was finished, Adobe had released a Node package (npm) to basically do all the hard work. I also found the Launch product team to be incredibly helpful – they responded almost immediately to my questions on their Slack group. They definitely seem eager to build out a community as quickly as possible.

I also found the integration environment to be very helpful in testing out my extension. It’s almost identical to the production environment of Launch; the main difference is that it’s full of extensions in development by people just like me. So you can see what others are working on, and you can get immediate feedback on whether your extension works the way it should. There is even a fair amount of error logging available if you break something – though hopefully this will be expanded in the coming months.

What Could Work Better

Once I finished my extension, I noticed that there isn’t a real natural spot to document how your extension should work. I opted to put mine into the main extension view, even though there was no other configuration needed that would require such a view. While I was working on my extension, it was suggested that I put instructions in my Exchange listing, which doesn’t seem like a very natural place for it, either.

I also hope that, over time, Adobe offers an easier way to style your views to match theirs. For example, if your extension needs to know the name of a data element it should populate, you need a form field to collect this input. Making that form look the same as everything else in Launch would be ideal. I pulled this off by scraping the HTML and JavaScript from one of Adobe’s own extensions and re-formatting it. But a “style toolkit” would be a nice addition to keep the user experience consistent.

Lastly, while each of the sections in the Getting Started guide had examples, some of the more advanced topics could use additional exploration. For example, it took me a few tries to decide whether my extension would work better with a custom event type, or with just some custom code that triggered a direct call rule. And figuring out how to integrate with other extensions – how to access other extensions’ objects and code – wasn’t exactly easy; I still have some unanswered questions there, because I eventually found a workaround and didn’t need an answer.

Perhaps the hardest part of the whole process was getting my Exchange listing approved. The Exchange covers a lot of integrations beyond just Adobe Launch, some of which are likely far more complex than my extension. A lot of the required images, screenshots, and details seemed like overkill – so a tiered approach to listings would be great, too.

What I’d Like to See Next

Extension development is still in its infancy, but one thing I hope is on the roadmap is the ability to customize an extension to work the way you need it to. A client I recently migrated used both Facebook and Pinterest, but the existing extensions didn’t work for their tag implementation – there were events and data they needed to capture that the extensions didn’t support. I hope that in a future iteration, I’ll be able to “check out” an extension from the library and download the package, make it work the way I need, and either create my own version of the extension or contribute an update to someone else’s extension that the whole community can benefit from. The inability to customize tag templates has plagued every paid tag management solution for years, except Tealium (which has supported it from the beginning) – in my opinion, it’s what turns tag management from a tool used primarily to deploy custom JavaScript into a powerful digital marketing toolbelt. It’s not something I’d expect so early in the game, but I hope it will be added soon.

In conclusion, my hat goes off to the Launch development team; they’ve come up with a really great way to build a collaborative community that pushes Launch forward. No initial release will ever be perfect, but there’s a lot to work with and a lot of opportunity for all of us in the future to shape the direction Launch takes and have some influence in how it’s adopted. And that’s an exciting place to be.

Photo Credit: Rod Herrea (Flickr)

Tag Management, Technical/Implementation

Helpful Implementation Tip – Rewrite HTML on Page Load with Charles Proxy

There are a variety of methods and tools used for debugging and QA’ing an analytics implementation.  While simply using the developer tools built into your favorite browser will usually suffice for some of the more common QA needs, there are times that a more robust tool is needed.  One such situation is the need to either swap out code on the page, or add code to a page.

To use an example, there are many times when a new microsite will be launching, but due to dev sprints/cycles, the Tag Management System (TMS) and dataLayer that you are going to be working with haven’t been added to the site yet.  However, you may need to get some tags set up in the TMS and ensure they are working while the engineering team works on getting the TMS installed on the site. In these situations, it would be very difficult to ensure that everything in the TMS is working correctly prior to the release.  This is one of many situations where Charles Proxy can be a useful tool to have available.

Charles Proxy is a proxy tool that sits in the middle of your connection to the internet.  This means that it captures and processes every bit of information that is sent between your computer and the internet and therefore can allow you to manipulate that information.  One such manipulation you can perform is to change the body of any response that is received from a web server, i.e. the HTML of a web page.

To go back to my example above, let’s say that I wanted to install Google Tag Manager (GTM) on a webpage.  I would open up Charles, go to the Tools menu, and then go to Rewrite. I would then create a new rule that replaces the “</head>” text with the dataLayer definition, the GTM snippet, and then the “</head>” text (see below for how this is set up).  This will result in the browser (only on your computer and only while Charles is open with the rewrite enabled) receiving an HTML response that contains the GTM snippet.

Example Setup

Step 1: Open the Rewrite tool from the “Tools” menu.

Step 2: Click the add button on the bottom left to add a new Rewrite rule.  Then click on the Add button on the middle right to add a configuration for which domains/pages to apply this rewrite rule to.

Step 3: Click the Add button on the bottom right.  This will allow you to specify what action you want to take on the domains/pages that you specified.  For this example, you will want to choose a “Type” of “Body” and “Where” of “Response” as you are modifying the Body of the response. Under “Match” you are going to put the closing “</head>” tag as this is where you are going to install the TMS.  Then, under “Replace”, you will put the snippet you want to place before the closing head tag (in this example, the TMS and dataLayer tags) followed by the same closing “</head>” tag. When you are done, click “OK” to close the Action window.

Step 4: Click OK on the Rewrite Settings window to save your rewrite rule.  Then refresh the domains/pages in your browser to see whether your new rewrite rule is working as expected.
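For reference, the “Replace” value from Step 3 might look something like the snippet below: a minimal dataLayer definition, the standard GTM container snippet, and then the closing head tag again. The container ID and the dataLayer values are placeholders – swap in your own:

    <script>
      // Define the dataLayer (with whatever values your tagging plan calls for)
      // before the GTM container loads
      window.dataLayer = window.dataLayer || [];
      window.dataLayer.push({ pageType: 'microsite-home' });
    </script>
    <!-- Google Tag Manager -->
    <script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
    new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
    j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
    'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
    })(window,document,'script','dataLayer','GTM-XXXXXXX');</script>
    <!-- End Google Tag Manager -->
    </head>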

Why use Charles instead of standard browser developer tools?

While you could perform this same task using the developer tools in your chosen browser, that would have to be done on each page that you need to QA, each time a page is loaded.  A Charles rewrite, on the other hand, is applied automatically on each page load. It also ensures that the GTM snippet and dataLayer are loaded in the correct place in the DOM and that everything fires in the correct order.  This is essential to ensuring that your QA doesn’t return different results than it would on your production site (or staging site once the GTM snippet is placed).

There are many ways that Charles rewrites can be used.  Here are a few examples of when I utilize rewrites –

  • Changing the container bootstrap/container that is used for staging sites (this is less common, but sometimes needed depending on the situation);
  • Adding/changing core JavaScript that is needed for analytics to function;
  • Modifying HTML to test out specific scenarios with tracking (instead of having to do so in the browser’s developer tools on each page load);
  • Manipulating the dataLayer on each page prior to a staging site update.  This can be useful for testing out a tagging plan prior to sending to a dev team (which helps to ensure less back and forth in QA when something wasn’t quite defined correctly in your requirements).

I hope you have found this information useful.  What are your thoughts? Do you have any other great use cases that I may have missed?  Leave your comments below!

Adobe Analytics, Featured, Tag Management, Technical/Implementation

A Coder’s Paradise: Notes from the Tech Track at Adobe Summit 2018

Last week I attended my 11th Adobe Summit – a number that seems hard to believe. At my first Summit back in 2008, the Great Recession was just starting, but companies were already cutting back on expenses like conferences – just as Omniture moved Summit from the Grand America to the Salt Palace (they moved it back in 2009 for a few more years). Now, the event has outgrown Salt Lake City – with over 13,000 attendees last week converging on Las Vegas for an event with a much larger footprint than just the digital analytics industry.

With the sheer size of the event and the wide variety of products now included in Adobe’s Marketing and Experience Clouds, it can be difficult to find the right sessions – but I managed to attend some great labs, and wanted to share some of what I learned. I’ll get to Adobe Launch, which was again under the spotlight – only this year, it’s actually available for customers to use. But I’m going to start with some of the other things that impressed me throughout the week. There’s a technical bent to all of this – so if you’re looking for takeaways more suited for analysts, I’m sure some of my fellow partners at Demystified (as well as lots of others out there) will have thoughts to share. But I’m a developer at heart, so that’s what I’ll be emphasizing.

Adobe Target Standard

Because Brian Hawkins is such an optimization wizard, I don’t spend as much time with Target as I used to, and this was my first chance to do much with Target Standard besides deploy the at.js library and the global mbox. But I attended a lab that worked through deploying it via Launch, then setting up some targeting on a single-page ReactJS application. My main takeaway is that Target Standard is far better suited to running an optimization program on a single-page application than Classic ever was. I used to have to utilize nested mboxes and all sorts of DOM trickery to delay content from showing until the right moment. But with Launch, you can easily listen for page updates and then trigger mboxes accordingly.

Target Standard and Launch also make it easier to handle a common issue with frameworks like ReactJS, where the data layer is asynchronously populated with data from API calls – so you can run a campaign on initial page load even if it takes some time for all the relevant targeting data to be available.
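As a rough illustration (not the code from the lab), a React app might announce each route change once its data is ready, so that a Launch rule – using the core Custom Event event type, or a direct call rule via _satellite.track – can fire Target at the right moment. The event name and payload below are hypothetical:

    // Dispatch a DOM custom event whenever the SPA finishes rendering a view
    // and the relevant targeting data is available. A Launch rule listening
    // for "spa:viewchange" can then request and apply Target offers.
    function announceViewChange(viewName, targetingData) {
      document.dispatchEvent(new CustomEvent('spa:viewchange', {
        detail: { viewName: viewName, targeting: targetingData }
      }));
    }

    // Example: call this from the router's navigation callback once API data arrives
    announceViewChange('product-detail', { category: 'shoes' });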

Adobe Analytics APIs

The initial version of the Omniture API was perhaps the most challenging API I’ve ever used. It supported SOAP only, and from authentication to query, you had to configure everything absolutely perfectly for it to work. And you had to do it with no API Explorer and virtually no documentation, all while paying very close attention to the number of requests you were making, since you only had 2,000 tokens per month and didn’t want to run out or get charged for more (I’m not aware this ever happened, but the threat at least felt real!).

Adobe adding REST API support a few years later was a career-changing event for me, and there have been several enhancements and improvements since, like adding OAuth authentication support. But what I saw last week was pretty impressive nonetheless. The approach to querying data has changed significantly in the following ways (I’ve included a rough sketch of a request after the list):

  • The next iteration of Adobe’s APIs will offer a much more REST-ful approach to interacting with the platform.
  • Polling for completed reports is no longer required. It will likely take several more requests to get to the most complicated reports, but each individual request will run much faster.
  • Because Analytics Workspace is built on top of a non-public version of the API, you truly will be able to access any report you can find in the UI.
  • The request format for each report has been simplified, with non-essential parameters either removed or at least made optional.
  • The architecture of a report request is fundamentally different in some ways – especially in the way that breakdowns between reports work.
  • The ability to search or filter on reports is far more robust than in earlier versions of the API.
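Here is the rough sketch I promised above – a ranked report request against the newer API. Treat the endpoint path, header names, and field names as my best recollection from the lab rather than gospel; the company ID, report suite ID, API key, and access token are all placeholders:

    // Hedged sketch of a report request to the newer Analytics reporting API
    var request = {
      rsid: 'my-report-suite',
      globalFilters: [
        { type: 'dateRange', dateRange: '2018-03-01T00:00:00.000/2018-04-01T00:00:00.000' }
      ],
      metricContainer: { metrics: [{ id: 'metrics/pageviews' }] },
      dimension: 'variables/page'
    };

    fetch('https://analytics.adobe.io/api/COMPANY_ID/reports', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer ACCESS_TOKEN',
        'x-api-key': 'API_KEY',
        'x-proxy-global-company-id': 'COMPANY_ID',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(request)
    })
      .then(function(response) { return response.json(); })
      .then(function(report) { console.log(report.rows); });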

Launch by Adobe

While Launch has been available for a few months, I’ve found it more challenging than I expected to talk my clients into migrating from DTM to Launch. The “lottery” system made some of my clients wonder if Launch was really ready for prime-time, while the inability to quickly migrate an existing DTM implementation over to Launch has been prohibitive to others. But whatever the case may be, I’ve only started spending a significant amount of time in Launch in the last month or so. For customers who were able to attend labs or demos on Launch at Summit, I suspect that will quickly change – because the feature set is just so much better than with DTM.

How Launch Differs from DTM

My biggest complaint about DTM has always been that it hasn’t matched the rest of the Marketing Cloud in terms of enterprise-class features. From the limited number of integrations available to the rigid staging/production publishing structure, I’ve repeatedly run into issues where it was hard to make DTM work the way I needed for some of my larger clients. Along the way, Adobe has repeatedly said they understood these limitations and were working to address them. And Launch does that – it seems fairly obvious now that the reason DTM lagged in offering features other systems did is that Adobe has been putting far more resources into Launch over the past few years. It opens up the platform in some really unique ways that DTM never has:

  • You can set up as many environments as you want.
  • Minification of JavaScript files is now standard (it’s still hard to believe this wasn’t the case with DTM).
  • Anyone can write extensions to enhance the functionality and features available.
  • The user(s) in charge of Launch administration for your company have much more granular control over what is eventually pushed to your production website.
  • The Launch platform will eventually offer open APIs to allow you to customize your company’s Launch experience in virtually any way you need.

With Great Power Comes Great Responsibility

Launch offers a pretty amazing amount of control, which makes for some major considerations for each company that implements it. For example, the publishing workflow is flexible to the point of being a bit confusing. Because it’s set up almost like a version control system such as Git, any Launch user can set up his or her own development environment and configure it in any number of ways. This means each user then has to choose which version of every single asset to include in a library, promote to staging/production, etc. So you have to be a lot more careful than when you’re publishing with DTM.

I would hope we’ve reached a point in tag management where companies no longer expect a marketer to be able to own tagging and the TMS – that was the sales pitch from the beginning, but the truth is that it has never been that easy. Even Tealium, which (in my opinion) has the most user-friendly interface and the most marketer-friendly features, needs at least one good developer to tap into the full power of the tool. Launch will be no different; as the extension library grows and more integrations are offered, marketers will probably feel more comfortable making changes than they were with DTM – but this will likely be the exception and not the rule.

Just One Complaint

If there is one thing that will slow migration from DTM to Launch, it is the difficulty customers will face in migrating. One of the promises Adobe made about Launch at Summit in 2017 was that you would be able to migrate from DTM to Launch without updating the embed code on your site. This is technically true – you can configure Launch to publish your production environment to an old DTM production publishing target. But this can only be done for production, and not any other environment – which means you can migrate without updating your production embed code, but you will need to update all your non-production embed codes. Alternatively, you can use a tool like DTM Switch or Charles Proxy for your testing – and that will work fine for your initial testing. But most enterprise companies want to accumulate a few weeks of test data for all the traffic on at least one QA site before they are comfortable deploying changes to production.

It’s important to point out that, even if you do choose to migrate by publishing your Launch configuration to your old production DTM publishing target, you still have to migrate everything currently in DTM over to Launch – manually. Adobe has said that later this year they will release a true migration tool that will allow customers to pull rules, data elements, and tags from a DTM property into a new Launch property and migrate them without causing errors. Short of such a tool, some customers will have to invest quite a bit to migrate everything they currently have in DTM over to Launch. Until then, my recommendation is to figure out the best migration approach for your company:

  1. If you have at least one rockstar analytics developer with some bandwidth, and a manageable set of rules and tags in DTM, I’d start playing around with migration in one of your development environments, and put together an actual migration plan.
  2. If you don’t have the resources yet, I’d probably wait for the migration tool to be available later in the year – but still start experimenting with Launch on smaller sites or as more resources become available.

Either way, for some of my clients that have let their DTM implementations get pretty unwieldy, moving from DTM to Launch offers a fresh start and a chance to upgrade to Adobe’s latest technology. No matter which of these two situations you’re in, I’d start thinking now (if you haven’t already) about how you’re going to get your DTM properties migrated to Launch. It is superior to DTM in nearly every way, and it is going to get nearly all of the development resources and roadmap attention from Adobe from here on out. You don’t need to start tomorrow – and if you need to wait for a migration tool, you’ll be fine. But if your long-term plan is to stay with DTM, you’re likely going to limit your ability in the future to tap into additional features, integrations and enhancements Adobe makes across its Marketing and Experience Cloud products.

Conclusion

We’ve come a long way from the first Summits I attended, with only a few labs and very little emphasis on the technology itself. Whether it was new APIs, new product feature announcements, or the hands-on labs, there was a wealth of great information shared at Summit 2018 for developers and implementation-minded folks like me – and hopefully you’re as excited as I am to get your hands on some of these great new products and features.

Photo Credit: Roberto Faccenda (Flickr)

Adobe Analytics, Tag Management, Technical/Implementation

Adobe Data Collection Demystified: Ten Tips in Twenty(ish) Minutes

We are all delighted to announce our first of hopefully many live presentations on the YouTube platform coming up on March 20th at 11 AM Pacific / 2 PM Eastern!  Join Josh West and Kevin Willeitner, Senior Partners at Analytics Demystified and recognized industry leaders on the topic of analytics technology, and learn some practical techniques to help you avoid common pitfalls and improve your Adobe data collection.  Presented live, Josh and Kevin will touch on aspects of the Adobe Analytics collection process from beginning to end with tips that will help your data move through the process more efficiently and give you some know-how to make your job a little easier.

The URL for the presentation is https://www.youtube.com/watch?v=FtJ40TP1y44 and if you’d like a reminder before the event please just let us know.

Again:

Adobe Data Collection Demystified
Tuesday, March 20th at 11 AM Pacific / 2 PM Eastern
https://www.youtube.com/watch?v=FtJ40TP1y44

Also, if you are attending this year’s Adobe Summit in Las Vegas … a bunch of us will be there and would love to meet in person. You can email me directly and I will coordinate with Adam Greco, Brian Hawkins, Josh West, and Kevin Willeitner to make sure we have time to chat.

Adobe Analytics, Featured, General, google analytics, Technical/Implementation

Can Local Storage Save Your Website From Cookies?

I can’t imagine that anyone who read my last blog post set a calendar reminder to check for the follow-up post I had promised to write, but if you’re so fascinated by cookies and local storage that you are wondering why I didn’t write it, here is what happened: Kevin and I were asked to speak at Observepoint’s inaugural Validate conference last week, and we had been scrambling to get ready for that. For anyone interested in data governance, it was a really unique and great event. And if you’re not interested in data governance, but you like outdoor activities like mountain biking, hiking, fly fishing, etc. – part of what made the event unique was some really great networking time outside of a traditional conference setting. So put it on your list of potential conferences to attend next year.

My last blog post was about some of the common pitfalls my clients see that are caused by an over-reliance on cookies. Cookies are critical to the success of any digital analytics implementation – but putting too much information in them can even break a customer’s experience on the site. We talked about why many companies have too many cookies, and how a company’s IT and digital analytics teams can work together to reduce the impact of cookies on a website.

This time around, I’d like to take a look at another technology that is a potential solution to cookie overuse: local storage. Chances are, you’ve at least heard about local storage, but if you’re like a lot of my clients, you might not have a great idea of what it does or why it’s useful. So let’s dive into local storage: what it is, what it can (and can’t) do, and a few great use cases for local storage in digital analytics.

What is Local Storage?

If you’re having trouble falling asleep, there’s more detail than you could ever hope to want in the specifications document on the W3C website. In fact, the W3C makes an important distinction and calls the actual feature “web storage,” and I’ll describe why in a bit. But most people commonly refer to the feature as “local storage,” so that’s how I’ll be referring to it as well.

The general idea behind local storage is this: it is a browser feature designed to store data in name/value pairs on the client. If this sounds a lot like what cookies are for, you’re not wrong – but there are a few key differences we should highlight:

  • Cookies are sent back and forth between client and server on all requests in which they have scope; but local storage exists solely on the client.
  • Cookies allow the developer to manage expiration in just about any way imaginable – by providing an expiration timestamp, the cookie value will be removed from the client once that timestamp is in the past; and if no timestamp is provided, the cookie expires when the session ends or the browser closes. On the other hand, local storage can support only 2 expirations natively – session-based storage (through a DOM object called sessionStorage), and persistent storage (through a DOM object called localStorage). This is why the commonly used name of “local storage” may be a bit misleading. Any more advanced expiration would need to be written by the developer.
  • The scope of cookies is infinitely more flexible: a cookie could have the scope of a single directory on a domain (like http://www.analyticsdemystified.com/blogs), or that domain (www.analyticsdemystified.com), or even all subdomains on a single top-level domain (including both www.analyticsdemystified.com and blog.analyticsdemystified.com). But local storage always has the scope of only the current subdomain. This means that local storage offers no way to pass data from one subdomain (www.analyticsdemystified.com) to another (blog.analyticsdemystified.com).
  • Data stored in either localStorage or sessionStorage is much more easily accessible than in cookies. Most sites load a cookie-parsing library to handle accessing just the name/value pair you need, or to properly decode and encode cookie data that represents an object and must be stored as JSON. But browsers come pre-equipped to make saving and retrieving storage data quick and easy – both objects come with their own setItem and getItem methods specifically for that purpose.
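To illustrate that last point, here is a quick comparison – no parsing library needed on the storage side (the key name is just an example):

    // Storing and retrieving an object with localStorage (sessionStorage works the same way)
    var cart = { items: 3, value: 149.97 };
    localStorage.setItem('cartSummary', JSON.stringify(cart));
    var saved = JSON.parse(localStorage.getItem('cartSummary') || '{}');

    // The cookie equivalent means encoding the value yourself and writing
    // (or loading) your own parsing logic to get it back out
    document.cookie = 'cartSummary=' + encodeURIComponent(JSON.stringify(cart)) + '; path=/';
    function readCookie(name) {
      var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
      return match ? JSON.parse(decodeURIComponent(match[1])) : {};
    }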

If you’re curious what’s in local storage on any given site, you can find out by looking in the same place where your browser shows you what cookies it’s currently using. For example, on the “Application” tab in Chrome, you’ll see both “Local Storage” and “Session Storage,” along with “Cookies.”

What Local Storage Can (and Can’t) Do

Hopefully, the points above help clear up some of the key differences between cookies and local storage. So let’s get into the real-world implications they have for how we can use them in our digital analytics efforts.

First, because local storage exists only on the client, it can be a great candidate for digital analytics. Analytics implementations reference cookies all the time – perhaps to capture a session or user ID, or the list of items in a customer’s shopping cart – and many of these cookies are essential both for server- and client-side parts of the website to function correctly. But the cookies that the implementation sets on its own are of limited value to the server. For example, if you’re storing a campaign ID or the number of pages viewed during a visit in a cookie, it’s highly unlikely the server would ever need that information. So local storage would be a great way to get rid of a few of those cookies. The only caveat here is that some of these cookies are often set inside a bit of JavaScript you got from your analytics vendor (like an Adobe Analytics plugin), and it could be challenging to rewrite all of them in a way that leverages local storage instead of cookies.
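As a small, hypothetical example of the kind of cookie that could move: a campaign ID captured purely for the analytics code could be kept in localStorage instead, so it never rides along on every request (the query parameter and key name here are made up):

    // Capture a campaign ID from the URL and keep it client-side only
    function saveCampaignId() {
      var match = window.location.search.match(/[?&]cid=([^&]+)/);
      if (match) {
        localStorage.setItem('lastCampaignId', decodeURIComponent(match[1]));
      }
    }

    // Read it back later when the analytics beacon is assembled
    function getCampaignId() {
      return localStorage.getItem('lastCampaignId') || '';
    }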

Another common scenario for cookies might be to pass a session or visitor ID from one subdomain to another. For example, if your website is an e-commerce store that displays all its products on www.mystore.com, and then sends the customer to shop.mystore.com to complete the checkout process, you may use cookies to pass the contents of the customer’s shopping cart from one part of the site to another. Unfortunately, local storage won’t help you much here – because, unlike cookies, local storage offers no way to pass data from one subdomain to another. This is perhaps the greatest limitation of local storage that prevents its more frequent use in digital analytics.

Use Cases for Local Storage

The key takeaway on local storage is that there are 2 primary limitations to its usefulness:

  • If the data to be stored is needed both on the client/browser and the server, local storage does not work – because, unlike cookies, local storage data is not sent to the server on each request.
  • If the data to be stored is needed on multiple subdomains, local storage also does not work – because local storage is subdomain-specific. Cookies, on the other hand, are more flexible in scope – they can be written to work across multiple subdomains (or even all subdomains on the same top-level domain).

Given these considerations, what are some valid use cases when local storage makes sense over cookies? Here are a few I came up with (note that all of these assume that neither limitation above is a problem):

  • Your IT team has discovered that your Adobe Analytics implementation relies heavily on several cookies, several of which are quite large. In particular, you are using the crossVisitParticipation plugin to store a list of each visit’s traffic source. You have a high percentage of return visitors, and each visit adds a value to the list, which Adobe’s plugin code then encodes. You could rewrite this plugin to store the list in the localStorage object. If you’re really feeling ambitious, you could override the cookie read/write utilities used by most Adobe plugins to move all cookies used by Adobe (excluding visitor ID cookies of course) into localStorage.
  • You have a session-based cookie on your website that is incremented by 1 on each page load. You then use this cookie in targeting offers based on engagement, as well as invites to chat and to provide feedback on your site. This cookie can very easily be removed, pushing the data into the sessionStorage object instead.
  • You are reaching the limit to the number of Adobe Analytics server calls or Google Analytics hits before you bump up to the next pricing tier, but you have just updated your top navigation menu and need to measure the impact it’s having on conversion. Using your tag management system and sessionStorage, you could “listen” for all navigation clicks, but instead of tracking them immediately, you could save the click information and then read it on the following page (a rough sketch follows this list). In this way, the click data can be batched up with the regular page load tracking that will occur on the following page (if you do this, make sure to delete the stored item after using it, so you can avoid double-tracking on subsequent pages).
  • You have implemented a persistent shopping cart on your site and want to measure the value and contents of a customer’s shopping cart when he or she arrives on your website. Your IT team will not be able to populate this information into your data layer for a few months. However, because they already implemented tracking of each cart addition and removal, you could easily move this data into a localStorage object on each cart interaction to help measure this.
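For the navigation-click scenario above, a minimal sketch of the “save now, track on the next page” pattern might look like this – the storage key and the data layer push are hypothetical, so wire the saved values into whatever your TMS actually expects:

    // On click: stash the navigation click instead of firing a hit immediately
    document.querySelectorAll('#topnav a').forEach(function(link) {
      link.addEventListener('click', function() {
        sessionStorage.setItem('pendingNavClick', JSON.stringify({
          label: link.textContent.trim(),
          href: link.href
        }));
      });
    });

    // On the next page load: fold the saved click into the page view tracking,
    // then remove it so it can't be double-counted on later pages
    var pending = sessionStorage.getItem('pendingNavClick');
    if (pending) {
      var navClick = JSON.parse(pending);
      window.dataLayer = window.dataLayer || [];
      window.dataLayer.push({ event: 'navClick', navLabel: navClick.label, navHref: navClick.href });
      sessionStorage.removeItem('pendingNavClick');
    }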

All too often, IT and analytics teams resort to the “just stick it in a cookie” approach. That way, they justify, we’ll have the data saved if it’s ever needed. Given some of the limitations I talked about in my last post, we should all pay close attention to the number, and especially the size, of cookies on our websites. Not doing so can have a very negative impact on user experience, which in turn can have painful implications for your bottom line. While not perfect for every situation, local storage is a valuable tool that can be used to limit the number of cookies used by your website. Hopefully this post has helped you think of a few ways you might be able to use local storage to streamline your own digital analytics implementation.

Photo Credit: Michael Coghlan (Flickr)

Adobe Analytics, Featured, google analytics, Technical/Implementation

Don’t Let Cookies Eat Your Site!

A few years ago, I wrote a series of posts on how cookies are used in digital analytics. Over the past few weeks, I’ve gotten the same question from several different clients, and I decided it was time to write a follow-up on cookies and their impact on digital analytics. The question is this: What can we do to reduce the number of cookies on our website? This follow-up will be split into 2 separate posts:

  1. Why it’s a problem to have too many cookies on your website, and how an analytics team can be part of the solution.
  2. When local storage is a viable alternative to cookies.

The question I described in the introduction to this post is usually posed to me like this: an analyst has been approached by someone in IT who says, “Hey, we have too many cookies on our website. It’s stopping the site from working for our customers. And we think the most expendable cookies on the site are those being used by the analytics team. When can you have this fixed?” At this point, the client frantically reaches out to me for help. And while there are a few quick suggestions I can usually offer, it usually helps to dig a little deeper and determine whether the problem is really as dire as it seems. The answer is usually no – and, in my experience, analytics tools contribute surprisingly little to cookie overload.

Let’s take a step back and identify why too many cookies is actually a problem. The answer is that most browsers put a cap on the maximum size of the cookies they are willing to pass back and forth on each network request – somewhere around 4KB of data. Notice that the limit has nothing to do with the number of cookies, or even the maximum size of a single cookie – it is the total size of all cookies sent. This can be compounded by the settings in place on a single web server or ISP, which can restrict this limit even further. Individual browsers might also have limits on the total number of cookies allowed (a common maximum is 50) as well as the maximum size of any one cookie (usually that same 4KB).

The way the server or browser responds to this problem varies, but most commonly it just returns a request error and doesn’t send back the actual page. At this point it becomes easy to see the problem – if your website is unusable to your customers because you’re setting too many cookies, that’s a big problem. To help illustrate the point further, I used a Chrome extension called EditThisCookie to find a random cookie on a client’s website, and then added characters to that cookie value until it exceeded the 4KB limit. I then reloaded the page, and what I saw is below. Cookies are passed as a header on the request – so, essentially, this message is saying that the request header for cookies was longer than what the server would allow.
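If you want a rough read on how close your own site is to that limit, the browser console can give you one – keep in mind that document.cookie only exposes first-party cookies that aren’t flagged HttpOnly, so treat this as a lower bound:

    // Approximate size of the first-party cookies sent with requests to this domain/path
    var cookieString = document.cookie;
    console.log('Cookie count:', cookieString ? cookieString.split('; ').length : 0);
    console.log('Approximate size:', cookieString.length, 'bytes (of roughly 4KB most browsers allow)');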

At this point, you might have started a mental catalog of the cookies you know your analytics implementation uses. Here are some common ones:

  • Customer and session IDs
  • Analytics visitor ID
  • Previous page name (this is a big one for Adobe users, but not Google, since GA offers this as a dimension out of the box)
  • Order IDs and other values to prevent double-counting on page reloads (Adobe will only count an order ID once, but GA doesn’t offer this capability out of the box)
  • Traffic source information, sometimes across multiple visits
  • Click data you might store in a cookie to track on the following page, to minimize hits
  • You’ve probably noticed that your analytics tool sets a few other cookies as well – usually just session cookies that don’t do much of anything useful. You can’t eliminate them, but they’re generally small and don’t have much impact on total cookie size.

If your list looks anything like this, you may be wondering why the analytics team gets a bad rap for its use of cookies. And you’d be right – in all the times a client has asked me that question, the analytics implementation has never ended up being the biggest offender in terms of cookie usage on the site. Most websites these days are what I might call “Frankensteins” – it becomes such a difficult undertaking to rebuild or update a website that, over time, IT teams tend to just bolt on new functionality and features without ever removing or cleaning up the old. Ask any developer and they’ll tell you they have more tech debt than they can ever hope to clean up (for the non-developers out there, “tech debt” describes all the garbage left in your website’s code base that you never took the time to clean up; because most developers prefer the challenge of new development to the tediousness of cleaning up old messes, and most marketers would rather have developers add new features anyway, most sites have a lot of tech debt).  If you take a closer look at the cookies on your site, you’ll probably find all sorts of useless data being stored for no good reason. Things like the last 5 URLs a visitor has seen, URL-encoded twice. Or the URL for the customer’s account avatar being stored in 3 different cookies, all with the same name and data – one each for mysite.com, www.mysite.com, and store.mysite.com. Because of employee turnover and changing priorities, a lot of the functionality on a website is owned by different developers on the same team – or even different teams entirely. It’s easy for one team to not realize that the data it needs already exists in a cookie owned by another team – so a developer just adds a new cookie without any thought of the future problem they’ve just added to.

You may be tempted to push back on your IT team and say something like, “Come talk to me when you solve your own problems.” And you may be justified in thinking this – most of the time, if IT tells the analytics team to solve its cookie problem, it’s a little like getting pulled over for drunk driving and complaining that the officer should have pulled over another driver for speeding instead while failing your sobriety test. But remember 2 things (besides the exaggeration of my analogy – driving while impaired is obviously worse than overusing cookies on your website):

  1. A lot of that tech debt exists because marketing teams are loath to prioritize fixing bugs when they could be prioritizing new functionality.
  2. It really doesn’t matter whose fault it is – if your customers can’t navigate your site because you are using too many cookies, or your network is constantly weighed down by the back-and-forth of unnecessary cookies being exchanged, there will be an impact to your bottom line.

Everyone needs to share a bit of the blame and a bit of the responsibility in fixing the problem. But it is important to help your IT team understand that analytics is often just the tip of the iceberg when it comes to cookies. It might seem like getting rid of cookies Adobe or Google sets will solve all your problems, but there are likely all kinds of cleanup opportunities lurking right below the surface.

I’d like to finish up this post by offering 3 suggestions that every company should follow to keep its use of cookies under control:

Maintain a cookie inventory

Auditing the use of cookies frequently is something every organization should do – at least annually. When I was at salesforce.com, we had a Google spreadsheet that cataloged our use of cookies across our many websites. We were constantly adding and removing the cookies on that spreadsheet, and following up with the cookie owners to identify what they did and whether they were necessary.

One thing to note when compiling a cookie inventory is that your browser will report a lot of cookies that you actually have no control over. Below is a screenshot from our website. You can see cookies not only from analyticsdemystified.com, but also linkedin.com, google.com, doubleclick.net, and many other domains. Cookies with a different domain than that of your website are third-party, and do not count against the limits we’ve been talking about here (to simplify this example, I removed most of the cookies that our site uses, leaving just one per unique domain). If your site is anything like ours, you can tell why people hate third-party cookies so much – they outnumber regular cookies and the value they offer is much harder to justify. But you should be concerned primarily with first-party cookies on your site.
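To help seed that inventory, a quick console snippet like the one below lists the first-party cookies JavaScript can see, sorted by size – a decent starting point to paste into a spreadsheet before chasing down owners:

    // List first-party, non-HttpOnly cookies with approximate sizes, largest first
    document.cookie.split('; ').filter(Boolean).map(function(pair) {
      return { name: pair.split('=')[0], bytes: pair.length };
    }).sort(function(a, b) {
      return b.bytes - a.bytes;
    }).forEach(function(cookie) {
      console.log(cookie.name + '\t' + cookie.bytes + ' bytes');
    });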

Periodically dedicate time to cookie cleanup

With a well-documented inventory of your site’s use of cookies in place, make sure to invest time each year in getting rid of cookies you no longer need, rather than letting them take up permanent residence on your site. Consider the following actions you might take:

  • If you find that Adobe has productized a feature that you used to use a plugin for, get rid of it (a great example is Marketing Channels, which has essentially removed the need for the old Channel Manager plugin).
  • If you’re using a plugin that uses cookies poorly (by over-encoding values, etc.), invest the time to rewrite it to better suit your needs.
  • If you find the same data actually lives in 2 cookies, get the appropriate teams to work together and consolidate.

Determine whether local storage is a viable alternative

This is the real topic I wanted to discuss – whether local storage can solve the problem of cookie overload, and why (or why not). Local storage is a specification developed by the W3C that all modern browsers have now implemented. In this case, “all” really does mean “all” – and “modern” can be interpreted as loosely as you want, since IE8 died last year and even it offered local storage. Browsers with support for local storage offer developers the ability to store data that is required by your website or web application in a special location, without the size and space limitations imposed by cookies. But this data is only available in the browser – it is not sent back to the server. That means it’s a natural consideration for analytics purposes, since most analytics tools are focused on tracking what goes on in the browser.

However, local storage has limitations of its own, and its strengths and weaknesses really deserve their own post – so I’ll be tackling it in more detail next week. I’ll be identifying specific use cases that local storage is ideal for – and others where it falls short.

Photo Credit: Karsten Thoms

Adobe Analytics, Tag Management, Technical/Implementation

Star of the Show: Adobe Announces Launch at Summit 2017

If you attended the Adobe Summit last week and are anything like me, a second year in Las Vegas did nothing to cure the longing I felt last year for more of a focus on digital analytics rather than experience (I still really missed the ski day, too). But seeing how tag management seemed to capture everyone’s attention with the announcement of Adobe Launch, I had to write a blog post anyway. I want to focus on 3 things: what Launch is (or will be), what it means for current users of DTM, and what it means for the rest of the tag management space.

Based on what I saw at Summit, Launch may be the new catchy name, but it looks like the new product may finally be worthy of the name given to the old one (Dynamic Tag Management, or DTM). I’ve never really thought there was much dynamic about DTM – if you ask me, the “D” should have stood for “Developer,” because you can’t really manage any tags with DTM unless you have a pretty sharp developer. I’ve used DTM for years, and it has been a perfectly adequate tool for what I needed. But I’ve always thought more about what it didn’t do than what it did: it didn’t build on the innovative UI of its Satellite forerunner (the DTM interface was a notable step backwards from Satellite); it didn’t make it easier to deploy any tags that weren’t sold by Adobe (especially after Google released enhanced e-commerce), and it didn’t lead to the type of industry innovation I hoped it would when Adobe acquired Satellite in 2013 (if anything, the fact that the biggest name in the industry was giving it away for free really stifled innovation at some – but not all – of its paid competitors). I always felt it was odd that Adobe, as the leading provider of enterprise-class digital analytics, offered a tag management system that seemed so unsuited to the enterprise. I know this assessment sounds harsh – but I wouldn’t write it here if I hadn’t heard similar descriptions of DTM from Adobe’s own product managers while they were showing off Launch last week. They knew they could do tag management better – and it looks like they just might have done it.

How Will Launch Be Different?

How about, “In every way except that they both allow you to deploy third-party tags to your website.” Everything else seems different – and in a good way. Here are the highlights:

  • Launch is 100% API driven: Unlike most software tools, which get built first with the API added later, Adobe decided what they wanted Launch to do, then built the API, and then built the UI on top of that. So if you don’t like the UI, you can write your own. If you don’t like the workflow, you can write your own. You can customize it any way you want, or write your own scripts to make commonly repeated tasks much faster. That’s a really slick idea.
  • Launch will have a community behind it: Adobe envisions a world where vendors write their own tag integrations (called “extensions”) that customers can then plug into their own Launch implementations. Even if vendors don’t jump at the chance to write their own extensions, I can at least see a world where agencies and implementation specialists do it for them, eager to templatize the work they do every day. I’ve already got a list of extensions I can’t wait to write!
  • Launch will let you “extend” anything: Most tag management solutions offer integrations but not the ability to customize them. If the pre-built integration doesn’t work for you, you get to write your own. That often means taking something simple – like which products a customer purchased from you – and rewriting the same code dozens of times to spit it out in each vendor’s preferred format. But Launch will give you the ability to have sharable extensions that do this for you. If you’ve used Tealium, it means something similar to the e-commerce extension will be possible, which is probably my favorite usability/extensibility feature any TMS offers today.
  • Launch will fix DTM’s environment and workflow limitations: Among my clients, one of the most common complaints about DTM is that you get 2 environments – staging and production. If your IT process includes more, well, that’s too bad. But Launch will allow you to create unlimited environments, just like Ensighten and Tealium do today. And it will have improved workflow built in – so that multiple users can work concurrently, with great care built into the tool to make sure they don’t step on each other’s toes and cause problems.

What Does Launch Mean for DTM Customers?

If you’re a current DTM customer, your first thought about Launch is probably, “Wow, this is great! I can’t wait to use it!” Your second thought is more likely to be, “Wait. I’ve already implemented DTM, and now it’s totally changed. It will be a huge pain to switch now.”

The good news is that, so far, Adobe is saying that they don’t anticipate that companies will need to make any major changes when switching from DTM to Launch (you may need to update the base tag on each page if you plan to take advantage of the new environments feature). They are also working on a migration process that will account for custom JavaScript code you have already written. It may make for a bit of initial pain in migrating custom scripts over, but it should be a pretty smooth process that won’t leave you with a ton of JavaScript errors when you do it. Adobe has also communicated for over a year which parts of the core DTM library will continue to work in the future, and which will not. So you can get ready for Launch by making sure all your custom JavaScript is in compliance with what will be supported in the future. And the benefits over the current DTM product are so obvious that it should be well worth a little bit of up-front pain for all the advantages you’ll get from switching (though if you decide you want to stick with DTM, Adobe plans to continue supporting it).

So if you have decided that Launch beats DTM and you want to switch, the next question is, “When?” And the answer to that is…”Soon.” Adobe hasn’t provided an official launch date, and product managers said repeatedly that they won’t release Launch until it’s world-class. That should actually be welcome news – because making this change will be challenging enough without having to worry about whether Adobe is going to get it right the first time.

What Does Launch Mean for Tag Management?

I think this is really the key question – how will Launch impact the tag management space? Because, while Adobe has impressively used DTM as a deployment and activation tool on an awful lot of its customers’ websites, I still have just as many clients that are happily using Ensighten, GTM, Signal, or Tealium. And I hope they continue to do so – because competition is good for everyone. There is no doubt that Ensighten’s initial product launch pushed its competitors to move faster than they had planned, and that Tealium’s friendly UI has pushed everyone to provide a better user experience (for a while, GTM’s template library even looked suspiciously like Tealium’s). Launch is adding some features that have already existed in other tools, but Adobe is also pushing some creative ideas that will hopefully push the market in new directions.

What I hope does not happen, though, is what happened when Adobe acquired Satellite in 2013 and started giving it away for free. A few of the tools in the space are still remarkably similar in actual features in 2017 to what they were in 2013. The easy availability of Adobe DTM seemed to depress innovation – and if your tag management system hasn’t done much in the past few years but redo its UI and add support for a few new vendors, you know what I mean (and if you do, you’ve probably already started looking at other tools anyway). I fear that Launch is going to strain those vendors even more, and it wouldn’t surprise me at all if Launch spurs a new round of acquisitions. But my sincere hope is that the tools that have continued to innovate – that have risen to the challenge of competing with a free product and developed complementary products, innovative new features, and expanded their ecosystems of partners and integrations – will use Launch as motivation to come up with new ways of fulfilling the promise of tag management.

Last week’s announcement is definitely exciting for the tag management space. While Launch is still a few months away, we’ve already started talking at Analytics Demystified about which extensions our clients using DTM would benefit from – and how we can use extensions to get involved in the community that will surely emerge around Launch. If you’re thinking about migrating from DTM to Launch and would like some help planning for it, please reach out – we’d love to help you through the process!

Photo Credit: NASA Goddard Space Flight Center

Adobe Analytics, Featured, Technical/Implementation

Engagement Scoring + Adobe Analytics Derived Metrics

Recently, I was listening to an episode of the Digital Analytics Power Hour that discussed analytics for sites that have no clear conversion goals. In this podcast, the guys brought up one of the most loaded topics in digital analytics – engagement scoring. Though it goes by many different names – Visitor Engagement, Visitor Scoring, Engagement Scoring – the general idea is that you can apply a weighted score to website/app visits by determining what you want your visitors to do and assigning a point value to each action. The goal is to see a trend over time of how your website/app is performing with these weights applied and/or assign these scores to visitors to see how score impacts your KPI’s (similar to Marketing Automation tools). I have always been interested in this topic, so I thought I’d delve into it a bit while it was fresh in my mind. And if you stick around until the end of this post, I will even show how you can do visitor scoring without doing any tagging at all using Adobe Analytics Derived Metrics!

Why Use Visitor Scoring?

If you have a website that is focused on selling things or lead generation, it is pretty easy to determine what your KPI’s should be. But if you don’t, driving engagement could actually be your main KPI. I would argue that even if you do have commerce or lead generation, engagement scoring can still be important and complement your other KPI’s. My rationale is simple. When you build a website/app, there are things you want people to do. If you are a B2B site, you want them to find your products, look at them, maybe watch videos about them, download PDF’s about them and fill out a lead form to talk to someone. Each of these actions is likely already tracked in your analytics tool, but what if you believe that some of these actions are more important than others? Is viewing a product detail page as valuable as watching a five minute product video? If you had two visitors and each did both of these actions, which would you prefer? Which do you think is more likely to be a qualified lead? Now mix in ALL of the actions you deem to be important and you can begin to see how all visitors are not created equal. And since all of these actions are taking place on the website/app, why would you NOT want to quantify and track this, regardless of what type of site you manage?

In my experience, most people do not undertake engagement scoring for one of the following reasons:

  • They don’t believe in the concept
  • They can’t (or don’t have the energy to) come up with the scoring model
  • They don’t know how to do it

In my opinion, these are bad reasons to not at least try visitor scoring. In this post, I’ll try to mitigate some of these. As always, I will show examples in Adobe Analytics (for those who don’t know me, this is why), but you should be able to leverage a lot of this in other tools as well.

The Concept

Since I am by no means the ultimate expert in visitor scoring, I am not in a position to extol all of its benefits. I have seen/heard arguments for it and against it over the years. If you Google the topic, you will find many great resources on the subject, so I encourage you to do that. For the sake of this post, my advice is to try it and see what you think. As I will show, there are some really easy ways to implement this in analytics tools, so there is not a huge risk in giving it a try.

The Model

I will admit right off the bat that there are many people out there much more advanced in statistics than me. I am sure there are folks who can come up with many different visitor scoring models that will make mine look childish, but in the interest of trying to help, I will share a model that I have used with some success. The truth is that whatever model you create is fine, since it is for YOUR organization and not one to be compared to others. There is no universal formula that you will benchmark against. You can make yours as simple or complex as you want.

I like to use a Fibonacci-like approach when I do visitor scoring (while not truly Fibonacci, my goal is to use integers that are somewhat spaced out to draw out the differences between actions, as you will see below). I start by making a list of the actions visitors can take on my website/app and narrow it down to the ones that I truly care about and want to include in my model. Next I sort them from least valuable to most valuable. In this example, let’s assume that my sorted list is as follows:

  1. View Product Page
  2. View at least 50% of Product Video
  3. View Pricing Tab for Product
  4. Complete Lead Generation Form

Next, I will assign “1” point to the least important item on the list (in this case View Product Page). Then I will work with my team to determine how many Product Page Views they feel are equivalent to the next item on the list (in this case a 50% view of a Product Video). When I say equivalent, what I mean is that if we had two website visitors and one viewed at least 50% of a product video and the other just viewed a bunch of product detail pages, at what point would they consider them to be almost equal in terms of scoring? Is it four product page views or only two? Somehow, you need to get consensus on this and pick a number. If your team says that three product page views is about the same as one long product video view, then you would assign “3” points each time a product video view hits at least 50%. Next you would move on to the third item (the Pricing Tab in this example) and follow the same process (how many product video views would you trade for one pricing tab view?). Let’s say when we are done, the list looks like this:

  1. View Product Page (1 Point)
  2. View at least 50% of Product Video (3 Points)
  3. View Pricing Tab for Product (6 Points)
  4. Complete Lead Generation Form (15 Points)

Now you have a model that you can apply to your website/app visitors. Will it be perfect? No, but is it better than treating each action equally? If you believe in your scores, then it should be. For now, I wouldn’t over-think it. You can adjust it later if you want, but I would give it a go under the theory that “these are the main things we want people to do, and we agreed on which were more/less important than the others, so if the overall score rises, then we should be happy and if it declines, we should be concerned.”

How To Implement It

Implementing visitor scoring in Adobe Analytics is relatively painless. Once you have identified your actions and associated scores in the previous step, all you need to do is write some code or do some fancy manipulation of your Tag Management System. For example, if you are already setting success events 13, 14, 15, and 16 for the actions listed above, you can simply pass the designated points to a numeric Success Event as well. This event will aggregate the scores from all visitors into one metric that you later divide by either Visits or Visitors to normalize (for varying amounts of Visits and Visitors to your site/app). This approach is well documented in this great blog post by Ben Gaines from Adobe.
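As a hedged sketch of what that tagging could look like: event20 below is a hypothetical numeric success event acting as the aggregate “Engagement Score,” and event15 stands in for the regular pricing-tab event from the example – substitute whatever events you have actually configured. On a pricing tab view, the code simply adds the 6 weighted points:

    // Pricing tab view: worth 6 points in the model above
    s.linkTrackVars = 'events';
    s.linkTrackEvents = 'event15,event20';
    s.events = 'event15,event20=6';   // the numeric event carries the weighted score
    s.tl(true, 'o', 'Pricing Tab View');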

Here is what a Calculated Metric report might look like when you are done:

Website Engagement

Using Derived Metrics

If you don’t have development resources, or you want to test out this concept before bugging your developers, I have come up with a way for you to try this out without any development. This approach uses the new Derived Metrics concept in Adobe Analytics. Derived Metrics are Calculated Metrics on steroids! You can build much more complex formulas than in the past and apply segments to some or all of your Calculated Metric formula. Using Derived Metrics, you can create a model like the one we discussed above, but without any tagging. Here’s how it might work:

First, we recall that we already have success events for the four key actions we care about:

Success Events for the Four Key Actions


Now we can create our new “Derived” Calculated Metric for Visitor Score. To do this, we create a formula that multiplies each action by its weight score and then sums them (it may take you some time to master the embedding of containers!). In this case, we want to multiply the number of Product Page Views by 1, the number of Video Views by 3, etc. Then we divide the sum by Visits so the entire formula looks like this:

Formula
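If it helps to see the logic outside the Calculated Metric builder, the formula amounts to the following (a minimal JavaScript sketch – the function and parameter names are just illustrative):

	// The same weighted model expressed as a simple calculation.
	function visitorScore(productPageViews, videoViews50, pricingTabViews, leadForms, visits) {
		var totalPoints = (productPageViews * 1) +
				(videoViews50 * 3) +
				(pricingTabViews * 6) +
				(leadForms * 15);
		return totalPoints / visits; // normalized per Visit
	}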


Once you save this formula, you can view it in the Calculated Metrics area to see how your site is performing. The cool part of this approach is that this new Visitor Score Calculated Metric will work historically as long as you have data for the four events (in this case) that are used in the formula. The other cool part is that if you change the formula, it will change it historically as well (which can also be a bad thing, so if you want to lock in your scores historically, use Ben’s approach of setting a new event). This allows you to play with the scores and see the impact of those changes.

But Wait…There’s More!

Here is one other bonus tip. Since you can now apply segments and advanced formulas to Derived Metrics, you can customize your Visitor Score metric even further. Let’s say that your team decides that if the visitor is a return visitor, all of the above scores should be multiplied by 1.5. You can use an advanced formula (in this case an IF statement) and a segment (1st Time Visits) to modify the formula above and make it more complex. In this case, we first check whether the visit is a 1st time visit; if so, we use our normal scores, and if not, we multiply the scores by 1.5. To do this, we add an IF statement and a segment so that, when we are done, the formula might look like this (warning: this is for demo purposes only and I haven’t tested this!):

Advanced Formula
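Conceptually, the IF-based version does something like this (again, just a sketch mirroring the logic described above – it is not the actual Calculated Metric syntax):

	// Sketch of the return-visitor weighting: the "1st Time Visits" segment
	// plays the role of the firstTimeVisit flag in the real formula.
	function visitorScoreWithLoyalty(firstTimeVisit, baseScore) {
		return firstTimeVisit ? baseScore : baseScore * 1.5;
	}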

If you had more patience than I do, you could probably figure out a way to multiply the Visit Number by the static numbers to exponentially give credit if you so desired. The advanced formulas in the Derived Metric builder allow you to do almost anything you can do in Microsoft Excel, so the sky is pretty much the limit when it comes to making your Visitor Score Metric as complex as you want. Tim Elleston shows some much cooler engagement metric formulas in his post here: http://www.digitalbalance.com.au/our-blog/how-to-use-derived-metrics/

Final Thoughts

So there you have it. Some thoughts on why you may want to try visitor scoring, a few tips on how to create scores and some information on how to implement visitor scoring via tags or derived metrics. If you have any thoughts or comments, let me know at @adamgreco.

Featured, Tag Management, Technical/Implementation

A Developer’s Perspective on Features Every Tag Management System Should Offer

Almost 2 years ago I wrote a blog post on some of the major questions companies face when choosing a tag management system. I very carefully crafted a post that talked about the main trends in the industry and some of the strengths of the major products in the industry. I wasn’t prepared for the response that post received, though in hindsight I should have been: I heard from nearly all the major TMS companies, and each seemed to feel a lot more strongly about any perceived weaknesses I mentioned than about any of the strengths. But that post taught me an important lesson about the weight that a Demystified opinion can carry throughout the analytics community, and about the competitive nature of the tag management space.

Since then, I have chosen my words very carefully when mentioning any of the 5 leading tag management systems. I always preface my comments by saying that we have clients using all of them, and doing so quite successfully. I even refer to the companies by listing them in alphabetical order, and then explain the reason for the order I have chosen – lest anyone think it’s an unofficial ranking of my fondness for any of them (in this regard, DTM benefited a lot more from its acquisition by Adobe than Signal did by rebranding!).

However, seeing as I lead Demystified’s tag management practice, it’s probably time to dangle a toe back into treacherous waters. I’d like to provide a list of what I call “essential features” that any tag management system should offer. In some cases, the feature is offered by all of them, and in others, by only one or two – but I will leave you to research that, rather than pointing it out for you. A few caveats before I get started:

  • You’ll find no mention at all of delivery network (100% client-side versus client/server hybrid). I find that both approaches offer such a dramatic improvement in page performance over traditional tagging that I have little interest in picking nits one way or the other.
  • I feel similarly about the synchronous/asynchronous argument as well. There are compelling reasons for both, but you can deploy any system either way (though it may go against the vendor’s best practices). Just remember to make it clear to each vendor you talk to if you plan on deploying synchronous tags (like an A/B testing tag) through their system, and find out whether such tags are supported, and any special considerations for implementing them.

Creating a list like this is a bit tricky because some of the tools are free. While it’s obviously much more palatable to forego a particular feature when you’re not paying for the tool, there are some features that are important enough to me that I’d have to have them whether the tool was free or not. Without further ado, here is my list of essential tag management features:

1. Support for versioning and rollbacks. This is the absolute most important feature any tag management system has to support. Any time you’re dealing with remotely hosted JavaScript, you must be able to remove a rogue vendor’s code at a moment’s notice. Don’t assume that an annoying error or warning icon in the corner of the browser is your worst-case scenario; a broken user experience or a cross-site scripting error could carry a huge cost to your business. The closer the system comes to the version control employed by IT teams, the better – the ability to stage changes, save them for a later release while still deploying new tags that are ready and tested, or revert to a previous version in production without completely losing those changes in your development environment is incredibly valuable. I’ve seen people lose their jobs because the wrong content ended up on a website at the wrong time, and a company just couldn’t remove it fast enough.

2. Customizable integrations with your most common tags. If you’re new to tag management, you probably don’t realize what goes on under the hood when you implement a tag in the TMS. If the TMS offers a supported tag integration with a particular vendor (often called a template, tool, or app), that integration generates a block of JavaScript that represents what the “average” company uses to implement the vendor’s tag. Most of the time, that’s all you need – you just specify which pages the tag goes on, where to find the data the tag needs, and you’re done. But the ability to customize that code block when you need to is important – because every once in a while, you’re bound to want to work with a vendor in a slightly different way than everyone else does – and the alternative to customizing the integration offered by the TMS is to write your own completely. Isn’t that one of the main problems you were trying to solve in the first place?

3. The ability to handle frequent, repetitive tasks without a developer. The original promise of tag management was that you could add third-party tags to your site without a developer. The past few years have proven the fallacy of that idea – but it sure is nice to let your marketers make basic changes to tags. If you decide you want to capture the page title in an Adobe eVar, or that you need to pass the product name to Adwords or DFA, those are simple changes you shouldn’t have to send to a developer. It should be easy to get data you already have (and have already configured in your TMS) to other vendors that want it.

4. The ability to send the same data in a slightly different format with little effort. If you’ve spent even the slightest time looking at what data you’re actually sending to your tag vendors, you’ve seen some common threads. They all want the same things: which products a customer purchased, how much they paid, the unique ID generated for a web lead in your CRM, and so on. But they likely want this data in a different format: one vendor may want a list of products delimited by a comma, and another may want them delimited by a pipe. A good TMS has integrations that don’t require you to customize the format of all this common data – it will do it for you (see the short sketch after this list).

5. Support for events, interactions, and data that changes throughout a page’s lifecycle. We’re moving from the world of multi-page to single-page web apps. While your TMS likely offers you a way to track these interactions, look for one whose API most closely matches the model used by your IT team – whether it’s JavaScript you’ll have them add when interactions occur, or the ability to “listen” for events they’re already capturing. And make sure that as you configure data elements in the system, those elements can be updated throughout the page’s lifecycle – it’s incredibly annoying to have to hack around something as common as an unknown user authenticating on your site via Ajax while your TMS still treats them as anonymous, because that’s what they were when the page first loaded. Your development team will be more supportive of tag management if they feel the tool supports them – rather than the other way around.

6. Consideration for tag optimization and caching. Besides decoupling your IT release process from your digital marketing and tagging effort, it’s possible that the greatest potential benefit in migrating to a tag management system is the improvement it provides to your website’s performance. But the TMS should allow you the flexibility to fine-tune that performance benefit by loading only the code and logic required for the tags on that page, rather than loading the code and logic that could be required for all tags used across your site. Even if all that logic is cached, it still needs to be run on page after page after page. In other words, there’s no reason for your homepage to load code that doesn’t actually need to run until it’s time to fire order confirmation tags. I also love it when a system allows you to cache code and reuse it when you need the same basic tag throughout your site. If you load a Doubleclick tag on 50 pages on your site, and the only difference is the ‘type’ or ‘cat’ parameter, there’s no reason for the TMS to reload an uncached version of that logic on all 50 pages – load it once and have it run again and again from the browser cache. If the TMS allows you to manage those subtle differences in a single place rather than in 50 different tags, this also offers a huge benefit to the folks managing your tags, who can now support a single tag implementation instead of 50. Even small optimization features can make the end-users of your TMS and your website very happy.
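As a tiny illustration of point 4 above (the product names are made up, and a good TMS integration would handle this reformatting for you), the kind of transformation involved is as simple as:

	// Hypothetical example: the same product list, reformatted for two
	// vendors with different delimiter requirements.
	var products = ['Road Bike', 'Helmet', 'Water Bottle'];
	var commaDelimited = products.join(','); // for a vendor expecting commas
	var pipeDelimited = products.join('|');  // for a vendor expecting pipes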

So if you’re new to tag management, hopefully this list helps you choose the tool that will be the best fit for your organization. And if you adopted tag management earlier, hopefully it helps you make sure you’ve got the right system in place – and the right processes to manage it. I’ve tried to come up with a list of features that will appeal to both developer and marketer end users, because both play an important part in a company’s digital marketing efforts. And in the end, that’s what tag management is really about – all these tags serve no useful purpose on your website if they’re not allowing you to run your online business more effectively.

Photo Credit: Bill Dickinson (Flickr)

Adobe Analytics, Featured, google analytics, Technical/Implementation

The Hard Truth About Measuring Page Load Time

Page load performance should be every company’s #1 priority with regard to its website – if your website is slow, it will affect all the KPIs that outrank it. Several years ago, I worked on a project at salesforce.com to improve page load time, starting with the homepage and all the lead capture forms you could reach from the homepage. Over the course of several months, we refactored our server-side code to run and respond faster, but my primary responsibility was to optimize the front-end JavaScript on our pages. This was in the early days of tag management, and we weren’t ready to invest in such a solution – so I began sifting through templates, compiling lists of all the 3rd-party tags that had been ignored for years, talking to marketers to find out which of those tags they still needed, and then breaking them down to their nitty-gritty details to consolidate them and move them into a single JavaScript library that would do everything we needed from a single place, but do it much faster. In essence, it was a non-productized, “mini” tag management system.

Within 24 hours of pushing the entire project live, we realized it had been a massive success. The improvement was so noticeable that we could tell without having all the data to back it up – but the data eventually told us the exact same story. Our monitoring tool was telling us our homepage was loading nearly 50% faster than before, and even just looking in Adobe at our form completion rate (leads were our lifeblood), we could see a dramatic improvement. Our data proved everything we had told people – a faster website couldn’t help but get us more leads. We hadn’t added tags – we had removed them. We hadn’t engaged more vendors to help us generate traffic – we were working with exactly the same vendors as before. And in spite of some of the marketing folks being initially hesitant about taking on a project that didn’t seem to have a ton of business value, we probably did more to benefit the business than any single project during the 3 1/2 years I worked there.

Not every project will yield such dramatic results – our page load performance was poor enough that we had left ourselves a lot of low-hanging fruit. But the point is that every company should care about how their website performs. At some point, almost every client I work with asks me some variation of the following question: “How can I measure page load time with my analytics tool?” My response to this question – following a cringe – is almost always, “You really can’t – you should be using another tool for that type of analysis.” Before you stop reading because yet another tool is out of the question, note that later on in this post I’ll discuss how your analytics tool can help you with some of the basics. But I think it’s important to at least acknowledge that the basics are really all those tools are capable of.

Even after several years of hearing this question – and several enhancements both to browser technology and the analytics tools themselves – I still believe that additional tools are required for robust page load time measurement. Any company that relies on its website as a major source of revenue, leads, or even just brand awareness has to invest in the very best technologies to help that website be as efficient as possible. That means an investment not just in analytics and optimization tools, but performance and monitoring tools as well. At salesforce.com, we used Gomez – but there are plenty of other good services that can be used on a small or large scale. Gomez and Keynote both simulate traffic to your site using any of several different test criteria, like your users’ location, browser, and connection speed. Other tools like SOASTA actually involve real user testing along some of the same dimensions. All of these tools are much more robust than the general insight you might glean from your web analytics tool – they provide waterfall breakdowns and allow you to isolate where your problems come from, not just that they exist. You may find that your page load troubles only occur at certain times of the day or in certain parts of the world, or that they are happening in a particular leg of the journey. Maybe it’s a specific third-party tag or a JavaScript error that you can easily fix. In any case, these are the types of problems your web analytics tool will struggle to help you solve. The data provided by these additional tools is just much more actionable and helpful in identifying and solving problems.

The biggest problem I’ve found in getting companies to adopt these types of tools is often more administrative than anything. Should marketing or IT manage the tool? Typically, IT is better positioned to make use of the data and act on it to make improvements, but marketing may have a larger budget. In a lot of ways, the struggles are similar to those many of my clients encounter when selecting and implementing a tag management system. So you might find that you can take the learnings you gleaned from similar “battles” to make it easier this time. Better yet, you might even find that one team within your company already has a license you can use, or that you can team up to share the cost. However, if your company isn’t quite ready yet to leverage a dedicated tool, or you’re sorting through red tape and business processes that are slowing things down, let’s discuss some things you can do to get some basic reporting on page load time using the tools you’re already familiar with.

Anything you do within your analytics tool will likely be based on the browser’s built-in “timing” object. I’m ashamed to admit that up until recently I didn’t even realize this existed – but most browsers provide a built-in object that provides timestamps of the key milestone events of just about every part of a page’s lifecycle. The object is simply called “performance.timing” and can be accessed from any browser’s console. Here are some of the useful milestones you can choose from:

  • redirectStart and redirectEnd: If your site uses a lot of redirects, it could definitely be useful to include that in your page load time calculation. I’ve only seen these values populated in rare cases – but they’re worth considering.
  • fetchStart: This marks the time when the browser first starts the process of loading the next page.
  • requestStart: This marks the time when the browser requests the next page, either from a remote server or from its local cache.
  • responseEnd: This marks the time when the browser downloads the last byte of the page, but before the page is actually loaded into the DOM for the user.
  • domLoading: This marks the time when the browser starts loading the page into the DOM.
  • domInteractive: This marks the time when enough of the page has loaded for the user to begin interacting with it.
  • domContentLoadedEventStart and domContentLoadedEventEnd: These mark the time when the browser fires its “DOMContentLoaded” event – that is, when the HTML has been parsed into the DOM. If you’re familiar with jQuery, this is basically the same as jQuery’s “ready” event (“ready” does a bit more, but it’s close enough).
  • domComplete: This marks the time when all images, iframes, and other resources are loaded into the DOM.
  • loadEventStart and loadEventEnd: These mean that the window’s “onload” event has started (and completed), and indicate that the page is finally, officially loaded.

JavaScript timing object

There are many other timestamps available as part of the “performance” object – these are only the ones that you’re most likely to be interested in. But you can see how it’s important to know which of these timestamps correspond to the different reports you may have in your analytics tool, because they mean different things. If your page load time is measured by the “loadEventEnd” event, the data probably says your site loads at least a few hundred milliseconds slower than it actually appears to your users.
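To see how much the choice of milestone matters, you can compare a couple of definitions directly in the console (after the page has finished loading):

	// Compare two possible definitions of "page load time" for the current page.
	var t = performance.timing;
	console.log('fetchStart to domInteractive: ' + (t.domInteractive - t.fetchStart) + ' ms');
	console.log('fetchStart to loadEventEnd:   ' + (t.loadEventEnd - t.fetchStart) + ' ms');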

The major limitation to using JavaScript timing is exactly what you’d expect: cross-browser compatibility. While IE8 is (finally!) a dying browser, it has not historically been the only one to lack support – mobile Safari has been a laggard as well. However, as of late 2015, iOS now supports this feature. Since concern for page load time is even more important for mobile web traffic, and since iOS is still the leader in mobile traffic for most websites, this closes what has historically been a pretty big gap. When you do encounter a browser that lacks timing support, the only way to fill the gap accurately is to have your development team write its own timestamp as soon as the server starts building the page. Then you can create a second timestamp when your tags fire, subtract the difference, and get pretty close to what you’re looking for. This gets a bit tricky, though, if the server timezone is different than the browser timezone – you’ll need to make sure that both timestamps are always in the same timezone.
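A minimal sketch of that fallback might look like the following – the variable name and server-side template syntax are hypothetical, and the first block would be rendered by your server as early in the page as possible:

	<!-- Rendered by the server as soon as it starts building the page.
	     The template syntax and variable name here are hypothetical. -->
	<script>
		window.serverStartTime = <%= serverStartEpochMillis %>; // epoch milliseconds, set server-side
	</script>

	<!-- Later, when your tags fire: -->
	<script>
		var fallbackLoadTime = new Date().getTime() - window.serverStartTime;
	</script>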

This built-in timing functionality is actually the foundation of both Adobe Analytics’ getLoadTime plugin and Google Analytics’ Site Speed reports. Both have been available for years, and I’ve been suspicious of them since I first saw them. The data they provide is generally sound, but there are a few things to be aware of if you’re going to use them – beyond just the lack of browser support I described earlier.

Adobe’s getLoadTime Plugin

Adobe calculates the start time using the most accurate start time available: either the browser’s “requestStart” time or a timestamp they ask you to add to the top of the page for older browsers. This fallback timestamp is unfortunately not very accurate – it doesn’t indicate server time, it’s just the time when the browser got to that point in loading the page. That’s likely to be at least a second or two later than when the whole process started, and is going to make your page load time look artificially fast. The end time is when the tag loads – not when the DOM is ready or the page is ready for user interaction.

When the visitor’s browser is a modern one supporting built-in performance timing, the data provided by Adobe is a single number (in milliseconds) representing how long the page took to “load.” That number can be classified into high-level groups, and it can be correlated to your Pages report to see which pages load fastest (or slowest). Or you can put that number into a custom event that can be used in calculated metrics to measure the average time a given page takes to load.

Adobe Analytics page load time report

Google’s Site Speed Reports

Google’s reports, on the other hand, don’t have any suspect handling of older browsers – the documentation specifically states that the reports only work for browsers that support the native performance timing object. However, Google’s reports are averages based on a sampling pool of only 1% of your visitors (which can be increased), and you can see how a single visitor making it into that small sample from a far-flung part of the world could have a dramatic impact on the data Google reports back to you. Google’s reports do have the bonus of taking into account many other timing metrics the browser collects besides just the very generic interpretation of load time that Adobe’s plugin offers.

Google Analytics page load time report

As you can see, neither tool is without its flaws – and neither is very flexible in giving you control over which time metrics their data is based on. If you’re using Adobe’s plugin, you might have some misgivings about their method of calculation – and if you’re using Google’s standard reports, that sampling has likely led you to cast a suspicious eye on those reports when you’ve used them in the past. So what do you do if you need more than that? The only real answer is to take matters into your own hands. But don’t worry – the actual code is relatively simple and can be implemented with minimal development effort, and it can be done right in your tag management system of choice. Below is a quick little code snippet you can use as a jumping-off point to capture the page load time on each page of your website using built-in JavaScript timing.

	function getPageLoadTime() {
		if (typeof performance !== 'undefined' && typeof performance.timing === 'object') {
			var timing = performance.timing;

			// use the first start milestone the browser populated (redirectStart is often 0)
			var startTime = timing.redirectStart ||
					timing.fetchStart ||
					timing.requestStart;

			// use the first of these end milestones the browser populated
			var endTime = timing.domContentLoadedEventEnd ||
					timing.domInteractive ||
					timing.domComplete ||
					timing.loadEventEnd;

			if (startTime && endTime && (startTime < endTime)) {
				return (endTime - startTime);
			}
		}

		return 'data not available';
	}

You don’t have to use this code exactly as I’ve written it – but hopefully it shows you that you have a lot of options to do some quick page load time analysis, and you can come up with a formula that works best for your own site. You (or your developers) can build on this code pretty quickly if you want to focus on different timing events or add in some basic support for browsers that don’t support this cool functionality. And it’s flexible enough to allow you to decide whether you’ll use dimensions/variables or metrics/events to collect this data (I’d recommend both).
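For example, in Adobe Analytics you might drop the value into both a conversion variable and a numeric success event via your tag management system, before the page view call fires (a rough sketch – the variable and event numbers here are hypothetical):

	// Hypothetical example: capture page load time as both a dimension and a metric.
	var loadTime = getPageLoadTime();
	if (loadTime !== 'data not available') {
		s.eVar50 = String(loadTime); // dimension: raw milliseconds (classify into buckets later)
		s.events = (s.events ? s.events + ',' : '') + 'event50=' + loadTime; // numeric event for averages
	}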

In conclusion, there are some amazing things you can do with modern browsers’ built-in JavaScript timing functionality, and you should do all you can to take advantage of what it offers – but always keep in mind that there are limitations to this approach. Even though additional tools that offer dedicated monitoring services carry an additional cost, they are equipped to encompass the entire page request lifespan and can provide much more actionable data. Analytics tools allow you to scratch the surface and identify that problems exist with your page load time – but they will always have a difficult time identifying what those problems are and how to solve them. The benefit of such tools can often be felt across many different groups within your organization – and sometimes the extra cost can be shared the same way. Page load time is an important part of any company’s digital measurement strategy – and it should involve multiple tools and collaboration within your organization.

Photo Credit: cod_gabriel (Flickr)

Technical/Implementation

Slack Demystified

Those of you who follow my blog have come to know that when I learn a product (like Adobe SiteCatalyst), I really get to know it and evangelize it. Back in the 90’s I learned the Lotus Notes enterprise collaboration software and soon became one of the most proficient Lotus Notes developers in the world, building most of Arthur Andersen’s global internal Lotus Notes apps. In the 2000’s, I came across Omniture SiteCatalyst, and after a while had published hundreds of blog posts on Omniture’s (Adobe’s) website and my own and eventually a book! One of my favorite pastimes is finding creative ways to apply a technology to solve everyday problems or to make life easier.

That being said, this post has to do with my new favorite technology – Slack. Admittedly, this post has very little to do with web analytics or Adobe Analytics, so if that is what you are interested in, you can stop reading now. But I suggest that you continue reading, as it may give you a heads-up on one of the most interesting technologies I have seen in a while, and maybe you will get as addicted to it as I am…

What is Slack?

If you have not yet heard of Slack – you will soon. It is one of the hottest technologies out there right now (started almost by accident), and it has the potential to change the way business gets done. Slack is a tool that allows teams to collaborate around pre-defined topics (channels) and private groups. It also provides direct messaging between team members and integrations with other technologies. I think of it as a team message board, instant messaging, a file repository and private group discussions all in one place. That sounds deceptively simple (like its interface), but it is extremely powerful. Most people work with a finite number of folks on a daily basis. Those interactions take place in face-to-face meetings, e-mails, file sharing on Dropbox, phone calls and, often, instant message conversations. Unfortunately, this means that you have to constantly jump between your phone, your e-mail client, your IM client, your Dropbox account, etc. Sometimes you may feel like you spend a good chunk of your day just looking for stuff instead of doing real work! The beauty of Slack is that you can push almost all of these interactions and content into one centralized tool, and that tool can be accessed from a webpage, a [great] mobile app or a desktop app (I use the Mac client). In addition, the integrations Slack provides with other tools like Dropbox, WordPress, Twitter, ZenDesk, etc. allow you to push even more things into the Slack interface so you have even fewer places to go to find stuff.

At our consultancy, we have seen a massive adoption of Slack and our use of e-mail has decreased by at least 75%. If you have kids like mine, who never bother to open an e-mail, but live for text messages, you can imagine that this trend will only continue as the younger generation enters the workforce. The business world moves too fast these days and I think the millennials will flock to tools like Slack in the future. So…in this post, I am going to do what I always do – share cool ways to use technology and share what I have done with it. Please bear with me as I put web analytics on hold for one post!

Channels

The first way our firm uses Slack is by taking advantage of the “channel” feature. Channels are like bulletin boards with a pre-defined topic. For example, some people at our firm are interested in Adobe Analytics products, while others are interested in Google Analytics products (or both). By creating a channel for each of these, anyone can post an article, share a file, ask a question or share something they learned in the appropriate channel. Everyone within the team has the choice as to whether they want to “join” the channel. If you join the channel, you can see all of the stuff posted there and set your notifications accordingly (determining whether you want desktop or mobile notifications – more on this later). You can leave a channel at any time and re-join at any time, and there are no limits on the number of channels you can create (as far as I know).

As an example, here you can see some questions posed within our Adobe channel and how easy it was for our team members to get answers that might have otherwise sat buried in e-mail:

Keep in mind that in addition to text replies, users could have inserted images, files, links or videos into the above thread. Also remember that some of these replies could have come from the mobile app while folks are on the road.

Private Groups

If you want to have a private channel, with just a few folks, you can create a Private Group. Private Groups are like group instant message threads, but can also contain files, images, etc. We use Private Groups for client projects in which multiple team members are involved. In the Private Group, any questions or updates related to THAT client are shared with only those team members who are involved in the project (instead of everyone publicly). Just the other day, we had a client encounter a minor emergency, and immediately our team began discussing options on Slack, came to a resolution and implemented some patch code to fix the client issue. In the past, it would have taken us hours to schedule a meeting, review the issue and figure out a solution, but with Slack the entire process was done in under ten minutes and the client was blown away!

Another great use for Private Groups is conference calls. We use a Private Group as a “backchannel” during client calls to chat with each other and make sure we are all on the same page with our responses.

File Sharing

Many of us spend our lives making and editing files – spreadsheets, presentations and so on. To store these files, many companies use Dropbox or something similar. As you would expect, Slack has a tight integration with these tools. Since we use Dropbox, I’ll use that as an example. I have connected my Dropbox account to Slack so when I choose to import a file, I see Dropbox as one of the options:

From there, I find the file I am looking for…

…and then I add it to Slack:

This process only takes a few seconds, but the cool part is that the entire document I have uploaded will be indexed and be searchable from now on:

Another thing that has frustrated me in the past related to file sharing is not knowing when my co-workers are creating great new documents. Unless you are continuously reviewing Dropbox notifications (which are way too numerous), a lot of this activity can slip through the cracks. Luckily, there is another cool feature in Slack that can come to the rescue! This feature is found within the Notifications area. Within this area there is a “Highlight Words” box that allows you to list specific phrases that you want to be alerted about. In this example, I have listed three specific words that I want Slack to notify me about whenever they occur within a document, channel discussion or private group that I have access to see:

As you can see below, my designated words are highlighted and I will see an unread count for any items that match my criteria:

In addition to highlighting keywords, you can also use one of my favorite tools – IFTTT (or Zapier) – to be alerted when a new file has hit your file tool of choice. Hopefully you are already familiar with these great tools that allow you to connect different technologies. But Slack + IFTTT/Zapier = 🙂 in my opinion! Let’s look at one practical example. Imagine that I want to know anytime one of my partners has created a new proposal and added it to our shared Dropbox folder. Since they may not have remembered that they should always include my services in their proposal, I like to gently remind them! To do this, I can have IFTTT/Zapier monitor our “Proposal” Dropbox folder for new files and post a link to new proposals to a Private Group or Public Channel so we are all aware of each other’s proposals. For example, let’s say that I see a new proposal come in from one of my partners for XYZ Company and I know the CIO there. Having visibility into this activity allows me to help, and it takes no extra work for my partner. Here is an example of the Zapier recipe I might use:

This recipe will automatically post any new files in the proposals dropbox folder to the “proposals” channel, which any of my co-workers can follow if they choose:

As you can see, there are tons of ways to share files and be alerted when your co-workers are adding files that might be of interest to you and most of them integrate into Slack automatically.

Slack – Twitter Integration

If you are into Twitter, you probably spend time tweeting, following people or monitoring hashtags. To do this, you may use the Twitter site or app (or the old Tweetdeck app). For me, there are only a few things I really care about when it comes to Twitter:

  • Is someone talking about me or re-tweeting my stuff?
  • Are my business partners tweeting?
  • Is there anything going on in the hashtags I care about (though these are becoming SPAM so I care less about this these days!)?

The good news is that I can now monitor all of this in Slack, again using IFTTT (or Zapier). So let’s see how this integration would be set up. First, let’s get all of my Twitter mentions into Slack. To do this, I would simply create a recipe in IFTTT that connects Twitter to Slack using the following:


In this case, I have decided to post my Twitter mentions to a private channel called “adam-twitter-mentions” that only I see. I could have alternatively posted them to my personal “Slackbot” area (which is like your own personal notepad within Slack), but I didn’t want to clutter that with Twitter mentions (since I have some cool uses for that coming later). Once this rule is active, any time I am mentioned on Twitter, a copy of the Tweet will be automatically imported into my private Slack group and I will see a new “unread” item as seen here:

Next, I want to know if any of my co-workers are tweeting, since I may want to be a good partner and re-tweet their stuff to my personal network. To do this, I create a different IFTTT recipe that looks for their Twitter handles. I am lucky to work with a small group of folks, but you can add as many of your co-workers as you want and also include your company’s Twitter account as well:

This recipe will run every fifteen minutes or so and push tweets from these accounts to a public “tweets-demystified” channel. My co-workers then have the option to subscribe to this channel or not:

Finally, if I want to follow a specific Twitter hashtag, I can create a recipe for that. As an example, if I want to follow the #Measure hashtag (used by the web analytics industry), I can push in all of those tweets into Slack using this recipe:

In this example, I am pushing #Measure tweets to my personal “Slackbot” just for illustrative purposes, but in reality, I would probably create a private group or channel for this given that a LOT of data will end up here:

As you can see, I now have the things I care most about in Twitter in the same tool that I am using to collaborate with my co-workers and clients and to send instant messages. This helps me by reducing the number of tools I have to interact with, but there are other reasons to do this as well. First, the tweets in Slack can be commented on by my partners, which can lead to fun and interesting discussions. But my favorite reason for doing this is that everything imported into Slack is 100% searchable. In this case, this means that I can search amongst all of my tweets and my co-workers’ tweets from today on, and don’t have to go to different tools to do it. Let’s say I am doing some research on “Visitor Engagement” for a client. I can now go to Slack and search for “Visitor Engagement,” and know that I will find any discussions, files and tweets that mention “Visitor Engagement” within my company (and if I include the hashtag tweets, I can also see if anyone else in the world has written about it!). That is extremely powerful!

Slack – Blog Integration

Another thing I may want to be aware of is when my co-workers release new blog posts. Our firm uses both WordPress and Tumblr, which can both be integrated with Slack. This integration is pretty straightforward in that it simply posts a link to Slack whenever each of us posts something new. To do this, we created a blog channel, and I created an IFTTT rule to push new posts into the channel using this recipe:

This will result in the following in Slack:

Slack – Pocket Integration

While on the subject of sharing blog posts, another one of my favorite Slack integrations uses Pocket to move blogs and articles into Slack. If you are not familiar with Pocket, it is a handy tool that allows you to save web pages that you want to read later and apply tags to them. For example, if I see an article on Twitter that I like and want to read later or share with a co-worker, I can save it to my Pocket list and then retrieve it in the future through the Pocket mobile app or website. But using Pocket with Slack takes this to a new level. In IFTTT, I have created a series of recipes that map Pocket tags to channels in our Slack implementation. For example, if I want to share a blog post I liked with my co-workers, all I need to do is save it to Pocket and tag it with the tag “blog” and within fifteen minutes, a link to it will be posted in the previously shown “industry-news-blogs” channel. Here is what the recipe looks like:

Once again, my partners can comment on it and the article text is fully searchable from now on. In my case, I have set-up several of these recipes, such that if I find a good article about Adobe technology, it will be posted to our “Adobe” channel and likewise for Google.

Slack – Email Integration

Another type of content that I may want to push into Slack is e-mail. While Slack does reduce e-mail usage, e-mail will probably never go away. The Slack pricing page states that more e-mail to Slack functionality is coming soon, but in the meantime, I found another way to use IFTTT to send specific e-mails into Slack. Before I show how to do this, let’s consider why sending e-mails into Slack could be worthwhile. In general, I wouldn’t want to clutter my Slack implementation with ALL of my e-mail, but there are times when an important e-mail comes through that may be useful in the future. Perhaps it is a key project status update or approval from your client or boss that you want to save in case the s#%t hits the fan one day! Another reason might be to take advantage of the full-text searching capabilities of Slack so that future searches will include key e-mail messages.

Regardless of your reason, here is an example of how I push e-mails from my work Gmail account into Slack. First, I create a Gmail label that I will use to tell IFTTT which e-mails should be sent. In my case, I simply made a label named “Slack” (keep in mind it is case-sensitive) using normal Gmail label functionality. Next, I created the following recipe in IFTTT:

Once this is active, all I need to do is apply the label of “Slack” to any e-mail and it will be sent to Slack:

In this case, I am pushing e-mails to my personal “Slackbot” since I don’t plan to do this very often and it is an easy, private place to keep these messages. Of course, I could have just as easily pushed these e-mails into a private group, but for now Slackbot will meet my needs.

Slack – Task Management Integration

If your company uses a task/work management tool like Asana, Wunderlist, etc., you can push new task starts and completions into project channels. This allows all team members to see progress being made and to ask questions about tasks via the reply feature in Slack:

Guest & Restricted Access

If you work in a business where you need to share discussions and files with people outside of your organization, you can use the paid version of Slack to create special accounts that allow you to grant limited Slack access to external users:

We use this feature to add clients to private groups for projects. This gives us a direct line to our clients and an easy way for them to post project questions and files. Instead of sending an e-mail and copying tons of people, clients can post a query to the Slack group and know that one of the team members will get back to them in short order. This feature also helps us get around limitations associated with sending large files over e-mail or the need to send secure messages via Dropbox.

Notifications

Through Slack’s highly customizable notifications area, you can determine how often you want Slack to bug you about activity in each of your channels and groups. For example, you can see below that while I am working during the day, I have notifications turned off on my desktop for many of my channels. This means that my Mac won’t pop up stuff and distract me from my work, but I can still tab over to Slack anytime I want and see how much new activity is there. But if something is posted in the “all-demystified” channel, I will get a mobile alert, since that tends to be more important stuff (per our internal policy). I often get many questions in the “Adobe” channel, so if my name is mentioned there, I will also get alerted on my mobile device:

Summary

As you can see, I have had a lot of fun using Slack at our company and pushing all sorts of content into it so it becomes our primary focal point for communication. Unfortunately, due to client restrictions, I can’t show some of the coolest ways we have used the tool, but my hope is that this post helps you see how a seemingly simple tool can do many powerful things when thought of as a central repository for knowledge for yourself and your company. Since Slack is a young company, I am sure that more features and integrations will be forthcoming, but I highly recommend that you check it out (this link includes a $100 credit in case you ever want the paid version) by finding a group of people at your company who need to collaborate on a regular basis or on a specific project. The best part is that you can start with Slack for free and then graduate to the paid version once you are as addicted as I am!

If you want to stay up to date on the latest Slack features and enhancements, subscribe to this IFTTT recipe…

…and this recipe which shares periodic tips:

Internally, I have created a public channel for both of these items so our team can learn more about Slack.

Finally, if you are a Slack user and have found other super-cool things you can do with it, please share those here…Thanks!

Adobe Analytics, Technical/Implementation

Profile Website Visitors via Campaign Codes and More

One of the things customers ask me about is the ability to profile website visitors. Unfortunately, most visitors to websites are anonymous, so you don’t know if they are young, old, rich, poor, etc. If you are lucky enough to have authentication or a login on your website, you may have some of this information, but for most of my clients the “known” percentage is relatively low. In this post, I’ll share some things you can do to increase your visitor profiling by using advertising campaigns and other tools.

Advertising Campaign Tracking Codes

If you have been using Adobe Analytics (or Google Analytics) for any length of time, you are probably already capturing campaign tracking codes when visitors reach your website. In Adobe Analytics, this is done via the s.campaign variable. While this data is valuable for seeing which campaign codes are driving conversions, it can also be used to profile your visitors if used strategically.

Let’s look at an example. Imagine that your advertising team is looking to reach 18-21 year old males. To do this, they can work with an agency to identify the most likely places to reach this audience, through publishers like Facebook or display advertising targeted at sites geared towards this demographic. If you embed campaign tracking codes in the placements that have a high probability of reaching 18-21 year old males, you can assume that many visits to your website from these campaign codes will be from this demographic. Therefore, you can use SAINT Classifications to classify these codes into a demographic profile. If the following tracking codes all came from this targeted campaign, you might classify them like this:
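For illustration only (these tracking codes are made up), the SAINT classification file might simply add a “Target Demographic” column to each code:

	Key (Tracking Code)      Target Demographic
	fb_spring_m18-21_a       Males 18-21
	fb_spring_m18-21_b       Males 18-21
	display_gaming_m18-21    Males 18-21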

Once you have classified the codes by demographic, you can use segmentation to isolate Visits (and Visitors) who came from these codes. While this may not be a large population, you can segment the data and treat it as a sample size to see how that demographic is performing vs. your general population or other demographics. Keep in mind that you may get some false positives since ad targeting isn’t an exact science, but if your advertising is well targeted, you should have a decent amount of confidence in your segment. In fact, there may be cases in which the sole purpose of spending a small amount on advertising is to test out how a different target demographic uses your website.

Business to Business via Demandbase

If you work for a Business to Business (B2B) company, in addition to using campaign codes to profile visitors, you can also use tools like Demandbase to identify anonymous visitors (companies) to your website. I have used this in the past when I worked for Salesforce.com and in my current role with B2B clients. It is amazing how much information you can gather at the company level, including Company Name, Industry, Size, etc. This information can be embedded into your web analytics implementation so that you can segment on it along with your other eVars and sProps:
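As a rough sketch only (the response object, field names and eVar numbers below are hypothetical, not Demandbase’s actual API), the idea is simply to copy the firmographic fields into your Adobe Analytics variables before the page view is sent:

	// Hypothetical example: populate Adobe Analytics variables from an
	// IP-based company lookup. Field names and variable numbers are illustrative.
	var companyInfo = {
		company_name: 'XYZ Company',
		industry: 'Software',
		employee_range: '1,000 - 5,000'
	};
	s.eVar20 = companyInfo.company_name;
	s.eVar21 = companyInfo.industry;
	s.eVar22 = companyInfo.employee_range;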

This allows you to build segments on this data:

And you can see reports like this:

Here is a brief video I did a few years back on this integration:

Summary

As you can see, whether you are a B2C or B2B company, there are some quick wins you can achieve by adding meta-data to campaign tracking codes and using other technologies to identify anonymous visitors. These short-term solutions can be augmented by more robust tools offered by Adobe, Google and others, but these ideas may be a way to get started and build a case for more advanced visitor profiling. If you have other techniques you have used, feel free to leave a comment here.

Adobe Analytics, Technical/Implementation

New or Old Report Suite When Re-implementing?

In the recent white paper I wrote in partnership with Adobe, I discuss ways to re-energize your web analytics implementation. Often times, this involves re-assessing your business requirements and rolling out a more updated web analytics implementation. However, if you decide to make changes to your implementation in a tool like Adobe Analytics (SiteCatalyst), at some point you will have to make a decision as to whether you should pass new data into the existing report suite or begin fresh with a new report suite. This can be a tough decision, and I thought I would use this blog post to share some things to consider to help you make the best choice for your organization.

Advantages of Using The Existing Report Suite

To begin, let’s look at the benefits of using the same report suite when you re-implement. The main one that comes to mind is the ability to see historical trends of your data. In web analytics, this is important, since seeing a trend of Visits or Orders gives you a better context from which to analyze your data. In SiteCatalyst, you get the added benefit of seeing monthly and yearly trend lines in reports to show you month over month and year over year activity. Obviously, if you decide to start fresh with a new report suite, your users will only see data from the date you re-implement in the SiteCatalyst interface.

Another benefit of continuing with your existing report suite is that you will retain unique visitors for those who have visited your site in the past and have not deleted their cookies. When you begin with a new report suite, all visitors will be new unique visitors, so you will be starting your unique visitor counts over from the day you re-implement. Starting with a new report suite will also result in some recency reports (e.g., Visit Number, Returning Visitors and Customer Loyalty) being negatively impacted. Additionally, using an existing report suite allows you to retain any values currently persisting in Conversion Variables (eVars). Often times you have eVar values that are meant to persist until a KPI takes place or until a specific timeframe occurs. If you create a new report suite, all eVars will start over since they are tied to the SiteCatalyst cookie ID.

Another area to consider is Segmentation. It is common to use a Visitor container within a SiteCatalyst segment to look for visitors who have performed an action at some point in the past. This segment will rely on the cookie ID so if you begin with a new report suite, you will lose visitors in your desired segment. For example, let’s say you have a segment that looks for visitors who have come from an e-mail at some point in the past and ordered in today’s visit. If you create a new report suite, you will lose all data from people who may have come from an e-mail prior to the new report suite being created.

If your end-users have dashboards, bookmarks and alerts set up, using the existing report suite will avoid the need to re-create them in the new report suite for variables that remain unchanged. Depending upon how active your users are, this can have a significant impact, as re-creating these can result in a lot of re-work.

There are many other items to consider, but these are the ones that I have seen come up most often as advantages of keeping the existing report suite when re-implementing.

Advantages of Using A New Report Suite

So now that I have scared you off of using a new report suite when re-implementing, let me take the counter-argument. Despite all of the advantages listed above, there are many cases in which I recommend starting with a brand new report suite. The most obvious is when the current implementation has proven to be grossly incorrect or misaligned. I often encounter situations in which the current implementation hasn’t been updated for years and is no longer related to what is currently on the website (or mobile app). If what you have doesn’t answer the relevant business questions, all of the advantages listed above become obsolete. In this situation, losing historical trends of irrelevant data points, persisted eVar values or report bookmarks isn’t a big deal. You may still lose out on your historical unique visitor counts since that is out-of-the-box functionality, but I don’t think this justifies not starting with a clean slate. If you are not sure whether your current implementation is aligned with your latest business goals, I highly recommend that you perform an implementation audit. This will help you understand how good or bad your implementation is, which is a key component of making the new vs. existing report suite decision.

The next situation is one in which the current implementation is using many of the allotted SiteCatalyst variables, but the new implementation has so much data to collect that it has to re-use the same variables going forward. This gets messy since it is easy to re-name existing variables, but you cannot remove historical data from them. Therefore, if you convert event 1 from “Internal Searches” to “Leads” because you no longer have a search function and are out of success events, you can get into trouble when your end-users view a trend of leads for this month and see that they are a fraction of what they were last year! Your users may not understand that the data they are seeing from last year is “Internal Searches” and not “Leads,” and may sound off alarms indicating that the website is broken and conversion has fallen off a cliff! While you can do your best to annotate SiteCatalyst reports and educate people, the re-use of existing variables is always a risk, whereas using a new report suite does not require the re-use of existing variables and can avoid this confusion. Where possible, I suggest that you use previously unused variables for your new implementation so this historical data issue doesn’t affect you. Obviously, this requires that your existing implementation isn’t using most or all of your available SiteCatalyst variables. Hence, one key factor when deciding whether to use an existing report suite or create a new one is counting how many additional variable slots you will need and determining whether you have enough unused ones to avoid re-using old variables for new data. If you have enough, that may tip the scale toward re-use; if you don’t, it may make you lean towards a new report suite.

When it comes to historical trends, one thing to keep in mind is that even if you choose to create a new report suite, it is still possible to see historical trends for data that the new and old report suites have in common. This can be done by importing data into the new suite using Data Sources. This is most effective when the data you are uploading are success events (numbers) and a bit more difficult for eVar and sProp data. The main benefit of this approach is that it allows your SiteCatalyst users to see the data from within the SiteCatalyst interface. Another option is to use Adobe ReportBuilder. Within Excel, you can build a data block for the data in the old report suite and then another data block for the same data in the new report suite and then merge the two together in a graph using two data ranges. Doing this allows you to create charts and graphs that span the old and the new, but these are only available in Excel and not in the SiteCatalyst interface.

Another justification for starting with a new report suite is that your current suite has data that is untrustworthy. I often talk to companies that say they simply do not trust that the data in SiteCatalyst is correct. As I mention in the white paper, trust is an easy thing to lose and a hard thing to earn back. Your SiteCatalyst reports can be correct nine times out of ten, but people will focus on the one time they were wrong. When this happens too often, it may be time to start with a new report suite and make sure that anything added to this new suite is validated and trusted. This can help you create a new perception and re-build the trust that is so essential to web analytics.

Final Thoughts

As you can see, there are many things to consider when it comes to re-implementation and report suites. The current state of your implementation and its data will be the biggest decision points, but every situation is different. Hopefully this helps provide a framework for making the decision and allows you to weigh the pros and cons of each approach.

Technical/Implementation

Reenergizing Your Web Analytics Program & Implementation

Those of you who have read my blog posts (and book) over the years know that I have lots of opinions when it comes to web analytics, web analytics implementations and especially those using Adobe Analytics. Whenever possible, I try to impart lessons I have learned during my web analytics career so you can improve things at your organization. However, much of what I have written in the past has been product-related, covering features, functions and implementation tips. Obviously, there is much more involved in web analytics success than that.

As some of you may know, the last role I held when I worked at Omniture (prior to the Adobe acquisition) was one in which I was tasked with “saving” accounts that had gone astray. I encountered many accounts that had either a dysfunctional web analytics program or a dysfunctional implementation. One way or another, they were not getting the desired value from their investment in SiteCatalyst. In my time in this role, I came to see many common characteristics of those who were having problems and identified specific ways to address them and get clients back on track. After I left Omniture, I joined Salesforce.com as the head of web analytics. In that role, I encountered similar issues, as the Salesforce.com implementation and program had many of the same problems I had seen while at Omniture. Over the next few years, I had the opportunity to test out my “client-saving” techniques in a real-life setting and had some great success in turning around the web analytics program at Salesforce.com.

While at Analytics Demystified for the past three years, I have continued my mission to help ailing web analytics programs and had the good fortune to work with some great clients. These clients have entrusted me to show them how to bring their web analytics programs back from the abyss or to improve the good things they are already doing. Working with the great partners at Analytics Demystified, I have been able to learn and improve upon things I have done in the past. Last year at the Chicago eMetrics conference, I distilled my lessons learned into a forty-five-minute presentation entitled “Bringing your Web Analytics Program Back from the Dead!” I was a bit worried that no one would actually show up to my session, since attending was an implicit admission that things weren’t going so well. But to my surprise, there was standing room only! Jim Sterne informed me that I had about 95% of all attendees in my breakout session! I was excited to share my experiences and afterwards received a great response from the crowd, as well as a rush of people at the stage with follow-up questions. Apparently, I had hit some sort of nerve with the topic (Note: this summer I will be presenting a follow-up session at Chicago eMetrics on the topic)!

Since then, I have wondered how I could share this information with more folks who may be interested in improving or re-energizing their web analytics programs and/or implementations. I considered writing a book on the topic, but having recently written a book, I knew that this was a massive undertaking and that my busy schedule wouldn’t allow it. Instead, I decided to partner with my old friends at Adobe to create a new white paper on the topic. In this white paper, I have tried to get down to the core tenets of my approach to reenergizing web analytics programs and synthesized it into under twenty pages of content. While most of the concepts in the paper were learned working with Adobe clients, I believe that the principles will apply to any web analytics technology or program. In fact, I believe that the white paper would also apply to non-web analytics programs, as much of it goes back to my time working at Arthur Andersen in the nineties.

Therefore, without any more preamble, I am pleased to announce the immediate availability of this new Adobe-sponsored white paper entitled “Reenergizing Your Web Analytics Program.” I hope that you will take the time to read it and take advantage of some of the lessons and techniques I have learned over the past 10+ years so that you and your organization can improve your program/implementation. Since ours is a young industry, I think it is the responsibility of us “old-timers” to pass on what we have learned so others don’t have to “reinvent the wheel.”

Click here to download white paper

A big thanks goes out to my friends at Adobe for sponsoring this white paper and making it happen. Enjoy!

Analytics Strategy, General, Technical/Implementation

Five Tips to Help Speed Up Adoption of your Analytics Tool

New technologies are easier bought than adopted…

All too often, expensive “simple, click of a button” analytics tools are purchased with the best of intentions, but end up a niche solution used by a select few. If you think about this on a “cost per user” basis, or (better yet) a “cost per decision” basis, suddenly your return on investment doesn’t seem as good as the mass-adopted, enterprise-wide solution you were hoping for.

So what can you do to better disseminate information and encourage use of your analytics investments? Here are five quick tips to help adoption in your organisation.

1. Familiarity breeds content

I am the first to admit that I can be pedantic about data visualization and information presentation. However, where possible (aka, where it will adequately convey the point) I will intentionally use the available visualisations in the analytics “system of record” when sharing information with business users. While I could often generate better custom visuals, seeing charts, tables and visualisations from their analytics tool can help increase users’ comfort level with the system, and ultimately help adoption. When users later log in for themselves, things look “familiar” and they feel more equipped to explore the information in front of them.

2. Coax them in

Just as standard visualisations don’t always float my boat in many analytics tools, I am often underwhelmed by custom reporting and dashboarding capabilities. Yet despite limitations, they do have inherent value: they get users to log in.

So while it can be tempting to exclusively leverage Excel plugins or APIs or connections to Tableau to deliver information outside of the primary reporting tool, don’t overlook the value of building dashboards within your analytics solution. Making it clear that your analytics solution is the home of critical information can help with adoption, by getting users to log in to view results pertinent to them.

3. Measure your measurement

If you want to drive adoption, you need to be measuring adoption! A lot of analytics tools will give administrators visibility into who is using the tool, how recently and how often. Keep an eye on this, and be on the lookout for users who might benefit from a little extra attention and help. For example, users who never log in, yet always ask for basic information from your analytics team.

If your solution doesn’t offer this kind of insight, there are still things you can do to understand usage. Consider sending out a user survey to help you understand what people use and don’t use, and why. Do you have an intranet or other internal network for sharing analytics findings? Even though this won’t reflect tool usage, consider implementing web analytics tracking to understand engagement with analytics content more generally. (If you post all this information via intranet and no one ever views it, it’s likely they don’t log in to your analytics tool either!)

Want to take it a step further? Set an adoption rate goal for your team, and a reward if it’s met. (Perhaps a fun off-site activity, or happy hour or lunch as a team.)

4. Training, training, training

Holding (and repeating!) regular trainings is critical for adoption. Even very basic training can help users feel comfortable logging in to their analytics solution (where perhaps they would have been otherwise tempted to just “ask Analytics.”)

But don’t just make this a one-time thing. Repeat your trainings, and consider recording them for “on-demand” access. After all, new team members join all the time, and existing employees often need a “refresher.”

Don’t be afraid to get creative with your training delivery methods! “Learn in the Loo” signs in bathrooms can be a sneaky way to grab available attention.

5. Pique their interest

While as analysts we absolutely need to be focused on actionable data, sometimes “fun facts” can intrigue business users and get them to engage with your analytics tool. Consider sharing interesting tidbits, including links to more details in your analytics solution. Quick sound bites (“Guess what, we saw a 15% lift in visits driven by this Tumblr post!”) can be shared via internal social networks, intranet, email, or even signs posted around the office.

What are some of your tips for helping grow adoption?

Adobe Analytics, General, Technical/Implementation

Big vs. Little Implementations [SiteCatalyst]

Over the years, I have worked on Adobe SiteCatalyst implementations for the largest of companies and the smallest of companies. In that time, I have learned that you have to have a different mindset for each type of implementation. Implementing both the same way can lead to issues. Big implementations (which can be large due to complexity or traffic volume) are not inherently better or worse, just different. For example, an implementation at a company like Expedia is going to be very different from an implementation at a small retail website. Personally, I find things that excite me about both types. When working with a large website, the volume of traffic can be amazing and your opportunities to improve conversion are enormous. One cool insight that improves conversion by a small percentage can mean millions of dollars! Conversely, when working with a smaller website, you usually have a smaller development team, which means that you can be very agile and implement things almost immediately.

Hence, there are pros and cons with each type of website and these are important things to consider when approaching an implementation or possibly when considering what type of company you want to work for as a web analyst. The following will outline some of the distinctions I have found over the years in case you find them to be helpful.

Implementation Differences

The following are some of the SiteCatalyst areas that I have found to be most impacted by the size of the implementation:

 

Multi-suite Tagging
Most large websites have multiple locations, sites or brands and use multi-suite tagging. When you bring together data from multiple websites into one “global” suite, you have to be sure that all of the variables line up amongst the different child report suites. Failure to do this will result in data collisions that will taint Success Event metrics or combine disparate eVar/sProp values. If you have 10+ report suites, it almost becomes a full-time job to manage these, making sure that renegade developers don’t start populating variables without your knowledge. If you use multi-suite tagging and have a global report suite, my suggestion is to keep every report suite as standardized as possible. This may sound draconian, but it works.

For example, let’s say you have five report suites that are using eVars 1-45 and a few other report suites that require some new eVars. Even if the latter report suites don’t intend to use eVars 1-45 (which I doubt), I would still recommend that you use eVars 46 on for the new eVars for the additional report suites. This will ensure that you don’t encounter data conflicts. Taking this a step further, I would label eVars 1-45 as they are in the initial report suites using the Administration Console. I would also label eVars 46 on with the new variable names in the original set of report suites. At the end of the day, when you highlight all report suites in the Admin Console and choose to see your eVars, you should strive to see no “Multiple” values. That means you have a clean implementation and no variable conflicts. Otherwise, you will encounter what I call “Multiple Madness” (shown here).

If you really have a need for each website to track its own site-specific data points, one best practice is to save the last few Success Events, eVars and sProps for site-specific variables. For example, you may reserve Success Events 95-100 and eVars 70-75 to be different in each report suite. That will provide some flexibility to site owners. You just have to recognize that those Success Events and eVars should be hidden (or disabled) in the global report suite so there is no confusion.

Another exception to the rule might be sites that are dramatically different from the core websites. For example, you may have a mobile app or intranet site that you are tracking with SiteCatalyst. This mobile app or intranet site may be so drastically different from your other sites that you want to have it in its own separate report suite that will never merge with your other report suites. In this case, you can either create a separate Company Login or just keep that one report suite separate from the others and use any variables you want for it. Keep in mind that the Administration Console allows you to create “groups” of report suites, so you can group common ones together and use that group to make sure you don’t have any “multiple” issues. You can also use the Menu Customization feature to hide variables in report suites where they are not applicable.

Even if you don’t currently have a global report suite, I still recommend following the preceding approach. You never know when you might later decide to bring multiple report suites together, and using my approach makes doing so a breeze (simply changing the s_account variable) versus having to re-implement variables and move them to open slots at a later date. The latter will cause you to lose historical trends, force you to modify reports and dashboards, and confuse your end-users.
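As a side note, the reason this later merge is so painless is that multi-suite tagging is controlled entirely by the s_account variable in your page code. A rough sketch (the report suite IDs below are made up) looks like this:

// Before: data is sent only to the site's own report suite (hypothetical report suite ID)
var s_account = "mycompanysitea";
var s = s_gi(s_account);

// After: the same image request also feeds a global suite -- no variable re-mapping needed,
// provided the suites are standardized as described above
var s_account = "mycompanysitea,mycompanyglobal";
var s = s_gi(s_account);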

When you have a smaller implementation, it is common to have just one production report suite. This avoids the preceding multi-suite tagging issues and makes your life a lot easier!

Variable Conservation
As if coordinating variables across multiple report suites isn’t hard enough, this issue is compounded by the fact that multi-suite tagging means that you only have ~110 success events, ~78 eVars and ~78 sProps to use for all sites together vs. being able to use ~250 variables differently for each website. This means that most large implementations inevitably run out of variables (eVars are usually the first type of variable to run out). Therefore, large implementations have to be very aggressive on conserving variables, which can handcuff them at times. As a web analyst, you can often make a case for tracking almost anything, since the more data you have the more analyses you can produce and the more items you can add to your segments. Unfortunately, when dealing with a large implementation, for the reasons cited above, you may need to prioritize which data elements are the most important to track lest you run out of variables. This isn’t necessarily a bad thing as it helps your organization focus on what is really important across the entire business and tracking more isn’t always better.

If you contrast this with a smaller implementation that has no multi-suite tagging and no global report suite, the smaller implementation is free to use all variables for the one site being tracked. This provides ~250 variables to use as you desire. That should be plenty for any smaller site, so variable conservation isn’t as high of a priority. A few times in my SiteCatalyst training classes, I have had both large and small companies sitting next to each other, and have witnessed the big company drooling over the fact that the smaller company was only using 20 of their eVars (wishing they could borrow some)! While it may sound strange, there are many cases in which I would tell a smaller organization to set success events and eVars that I would conversely tell a large organization not to set. For example, if I were working with a small organization that had only one workflow process (e.g. a credit card application) and they wanted to track all six steps with success events, I might say “go for it!” But if that same scenario arose for a large website (e.g. American Express), I would encourage them to only set success events for the key milestone workflow steps in order to conserve success events. This is just one example of why I tend to approach large and small implementations differently.

One final note related to variable conservation. Keep in mind that you can use concatenation combined with SAINT Classifications to conserve variables. For example, instead of storing Time of Day, Day of Week and Weekday/Weekend in three separate eVars, you can concatenate those together into one and apply SAINT Classifications. This will save a few eVars and a similar process can be replicated for things like e-mail attributes, product attributes, etc.
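As a rough sketch (the eVar number, delimiter and value buckets here are arbitrary), the page code would set one concatenated value that SAINT later splits into separate classification columns such as Time of Day, Day of Week and Weekday/Weekend:

// One concatenated eVar instead of three separate ones
var now = new Date();
var days = ["sunday","monday","tuesday","wednesday","thursday","friday","saturday"];
var daypart = (now.getHours() < 12) ? "morning" : (now.getHours() < 17) ? "afternoon" : "evening";
var daytype = (now.getDay() === 0 || now.getDay() === 6) ? "weekend" : "weekday";
s.eVar15 = daypart + "|" + days[now.getDay()] + "|" + daytype;   // e.g. "morning|tuesday|weekday"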

Uniques Issues
If you have a large website, there is an increased chance you will have issues with “uniques.” Most eVar and sProp reports have a limit of 500,000 unique values per month. I have many large clients that try to track onsite search phrases or external search keywords and exceed the unique threshold by the 10th day of the month. This makes some key reports less useful and often results in data being exported via a data feed or DataWarehouse report to back-end tools for more robust analysis. For some large implementations, since the data points can’t be used regularly in the SiteCatalyst user interface due to unique limits, I sometimes have clients pass data to an sProp to conserve eVars, since in DataWarehouse, Discover and Segmentation, having values in an sProp is similar to having them in an eVar.

Smaller implementations normally only hit uniques issues if they are storing session ID’s (e.g. ClickTale, Tealeaf) or customer ID’s.

Large # of Page & Product Names
Many large websites have so many pages on their site (e.g. one page per product with over 100,000 products) that having an individual page name for each page is virtually impossible. In these cases, you often have to take page names up a level and start at the page category level. The same concept can apply to individual product names or ID’s as well.

Smaller implementations rarely have these issues since they tend to have fewer pages and numbers of products.

Page Naming Conventions
Another area where I see those running large implementations make mistakes is related to page naming across multiple websites. If you are managing a smaller implementation, you can name your pages anything you’d like. For example, while I don’t recommend it, if you want to call your website home page, “Home Page,” you will be ok. However, this approach won’t always work with a large implementation. If you have five report suites and one global report suite and you named the home page of each “Home Page,” in the global report suite, you would see data from all five report suites merged into one page name called “Home Page.” While there may be reasons to do this, you will probably also want to have a way to see things like Pathing and Participation for each of the home pages from each site individually in the global report suite. In this post, I show how you can have both (“have your cake and eat it too!”), but this example highlights the complexity that can arise when dealing with larger implementations.
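One simple way to illustrate the idea (the prefixes below are made up, and this is not necessarily the exact technique from the post referenced above) is to prefix each page name with a site identifier in the page code, so the global suite keeps the pages distinct:

// Hypothetical site prefixes keep each home page unique in the global report suite
s.pageName = "sitea:home page";    // set on Site A's home page
s.pageName = "siteb:home page";    // set on Site B's home page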

SAINT Classifications
Large websites can often have a variable with more than a million SAINT classification values. Updating SAINT tables can take days or weeks unless you are methodical about your approach. Smaller sites with lower numbers of SAINT values can often re-upload their entire SAINT file daily or weekly to make sure all values are classified. Large implementations don’t have this luxury. They have to monitor which values are new or unclassified so they can upload only the new or changed items, which keeps SAINT table updates from taking weeks. If you work with a large implementation, keep in mind that you can update SAINT Classifications for multiple report suites with one upload if you use the FTP method vs. browser uploads.

Time to Implement
In general, large implementations tend to move slower than smaller ones. While tag management systems are helping to remedy this, I still find that adding new variables or fixing broken variables takes much longer with large implementations (often due to corporate politics!). This means that you have to be sure that your tagging specifications are right the first time, since getting changes in after a release may be difficult.

Conversely, with smaller websites, you can be much more nimble and update SiteCatalyst tagging on the fly. For example, you may be doing a specific analysis and realize that it would be helpful to have the Zip Code associated with a form. If you work with a smaller site, you may be able to use a SiteCatalyst Processing Rule or call your developer and have them add Zip Code to eVar30 and have data the same day!
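As a sketch of how small such a change can be (the eVar slot and form field ID below are hypothetical):

// Pass the zip code entered on the form to an unused conversion variable
var zipField = document.getElementById("zipCode");   // hypothetical field ID
if (zipField && zipField.value) {
  s.eVar30 = zipField.value;
}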

Globally Shared Metrics, Dashboards, Reports, etc.
When you work with a small implementation, you may have a few calculated metrics, dashboards or reports that you share out to your users. This is a great way to collaborate and enforce some standards or consistency related to your implementation. However, when you have a large implementation, sometimes with 300+ SiteCatalyst users having logins, this type of sharing can easily get out of control. Imagine each SiteCatalyst user sharing five reports or dashboards. The shared area of the interface becomes a mess and you are not sure which reports/dashboards you should be using. Therefore, when you are working with a large implementation, it is common to have to implement some processes in which reports and dashboards are sent to the core web analytics team who can then share them out to others. This allows the SiteCatalyst user community to know which reports/dashboards are “approved” by the organization. You can learn more about centralizing reports and dashboards by reading this blog post.

Final Thoughts

As I mentioned in the beginning of this post, bigger isn’t always better. As the items above show, I often find that bigger implementations lead to more headaches and more limitations. However, keep in mind that with great volume come conversion improvement opportunities that often dwarf those of smaller sites.

One over-arching piece of advice I would give you, regardless of whether you work with a large or small implementation, is to review your implementation every six months (or at least yearly) and determine if you are still using all of your variables. It is better to get rid of what you no longer need periodically than to have to do a massive overhaul one day in the future.

While this post covers just a few of the differences between large and small implementations, they are the ones that I tend to see people mess up the most. If you have other tips for readers, feel free to leave a comment here. Thanks!

Analytics Strategy, Technical/Implementation

Adobe SiteCatalyst – ClickTale Integration

About a year ago, I wrote a blog post discussing ways that you could integrate Adobe SiteCatalyst and Tealeaf. In that post, I talked about some of the cool integration points between the two products. In this post, I’d like to talk about how the same integration would work with ClickTale and share some cool new things that go even beyond what is possible with Tealeaf.

What is ClickTale?

For those unfamiliar with ClickTale, it is an in-page analytics tool that allows you to record website sessions, filter them and play them back. It is often used to see heat maps of pages and to “watch” website visitors, including even their mouse movements. It is pretty cool technology, since oftentimes the best way to get internal stakeholders to understand website issues is to have them watch real users encountering those issues.

In a similar manner to what I described in my previous Tealeaf post (which I suggest you read before continuing with this post!), it is possible to pass a ClickTale ID to SiteCatalyst via an sProp or eVar:
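The actual code comes from the Genesis wizard described later in this post, but conceptually it is just a sketch like the one below; getClickTaleSessionId() is a hypothetical placeholder for however your ClickTale deployment exposes its session/recording ID:

// Sketch only: substitute your real ClickTale session/recording ID lookup here
var ctSessionId = (typeof getClickTaleSessionId === "function") ? getClickTaleSessionId() : "";
if (ctSessionId) {
  s.eVar40 = ctSessionId;   // hypothetical eVar slot for the ClickTale Session ID
  s.prop40 = ctSessionId;   // optional sProp copy
}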

Having this ClickTale ID in SiteCatalyst allows you to use the standard segmentation capabilities of SiteCatalyst to isolate visits or visitors who exhibit specific behaviors in which you are interested. For example, you might be interested in isolating visits where visitors reached checkout, but didn’t purchase:

Once you do this, it is possible to open the preceding ClickTale Session ID eVar and see a list of all of the ClickTale session ID’s that match this segment.

Adobe Genesis Extend (BETA) Integration

But as I noted in my preceding Tealeaf post, one of the frustrations of this type of integration is that once you isolate the session ID’s that you want to watch, you are stuck. You have to copy each one individually and then switch to the other application (i.e. Tealeaf) and then start the process of watching the session. My wishlist item in my previous post was that this process could be simplified so you can simply click and view the session, right from within SiteCatalyst. Believe it or not, doing this is now possible! Thanks to the creation of Genesis Extend (still in Beta), you can add a Genesis Chrome browser extension to your version of Chrome and get the ability to streamline this process for ClickTale (not Tealeaf unfortunately).

To do this, simply search for the Genesis Chrome browser extension and install it. When that is done, you will see a new icon in your Chrome browser which you can click to see the settings:

You will notice that there is a ClickTale box you can check (and also one for Twitter which allows you to see actual Tweets in referrer reports). From here you can enter your ClickTale authorization credentials and you are ready to go.

 

Back in SiteCatalyst, there is a free Genesis “labs” area you can visit to launch the wizard that helps you generate the code you need to capture the ClickTale ID in an eVar of your choice:

After you have completed the wizard and are collecting ClickTale recording ID’s in an eVar, you can open that report in SiteCatalyst and you will see a new link in each row…

…which allows you to click to view the actual recording in ClickTale:

It is also possible to use this new SiteCatalyst eVar to copy a list of ClickTale ID’s and paste them right into ClickTale to create a segment and look at heat maps for just those ID’s.

Final Thoughts

As you can see, this is a cool interface integration that is possible since both SiteCatalyst and ClickTale are “cloud” products. I would expect that you will see more of this in the future in more browsers or even natively as part of SiteCatalyst. If you are a ClickTale customer and use SiteCatalyst, you should definitely try this out!

Adobe Analytics, Reporting, Technical/Implementation

SiteCatalyst Tip: Corporate Logins & Labels

As you use Adobe SiteCatalyst, you will begin creating a vast array of bookmarked reports, dashboards, calculated metrics and so on. The good news is that SiteCatalyst makes it easy for you to publicly share these report bookmarks and dashboards amongst your user base. However, the bad news is that SiteCatalyst makes it easy for you to publicly share these report bookmarks and dashboards amongst your user base! What do I mean by this? It is very easy for your list of shared bookmarks, dashboards, targets and other items to get out of control. Eventually, you may not know which reports you can trust and trust is a huge part of success when it comes to web analytics. Therefore, in this post, I will share some tips on how you can increase trust by putting on your corporate hat…

Using a Corporate Login

One of the easiest ways to make sense of shared SiteCatalyst items at your organization is through the use of what I call a corporate login. I recommend that you create a new SiteCatalyst login that is owned by an administrator and use that login when sharing items that are sanctioned by the company. For example, if I owned SiteCatalyst at Greco, Inc., I might create the following login ID:

Once this new user ID is created, when you have bookmarks, dashboards or targets that are “blessed” by the company, you can create and share them using this ID. For example, here is what users might see when they look at shared bookmarks:

As you can see, in this case there is a shared bookmark by “Adam Greco” and a shared bookmark by “Greco Inc.” While you might assume, based upon his supreme prowess with SiteCatalyst, that Adam Greco’s bookmark is credible, that might not always be the case! Adam may have shared this bookmark a few years ago and it might no longer be valid. But if your administrator shares the second bookmark above while logged in as “Greco Inc.,” it can be used as a way to show users that the “Onsite Search Trend” report is sanctioned at the corporate level.

The same can be done for shared Dashboards:

In this case, Adam and David both have shared dashboards out there, but it is clear that the Key KPI’s dashboard is owned by Greco, Inc. as a whole. You can also apply the same concept to SiteCatalyst Targets:

If you have a large organization, you could even make a case for never letting anyone share bookmarks, dashboards or targets and only having this done via a corporate login. One process I work on with clients is to have end-users suggest reports and dashboards to the web analytics team that they feel would benefit the entire company. If the corporate web analytics team likes the report/dashboard, they can log in with the corporate ID and share it publicly. While this creates a bit of a bottleneck, I have seen that large organizations using SiteCatalyst sometimes require a bit of process to keep chaos from breaking out!

Using a “CORP” Label

Another related technique that I have used is adjusting the naming of SiteCatalyst elements to communicate that an item is sanctioned by corporate. In the examples above, you may have noticed that I added the phrase “(CORP)” to the name of a Dashboard and a Target. While this may seem like a minor thing, when you are looking at many dashboards, bookmarks or targets, seeing an indicator of which items are approved by the core web analytics team can be invaluable. This can be redundant if you are using a corporate login as described above, but it doesn’t hurt to over-communicate.

This concept becomes even more important when it comes to Calculated Metrics. It is not currently possible to manage calculated metrics and the sharing of them in the same manner as you can for bookmarks, dashboards and targets. The sharing of calculated metrics takes place in the Administration Console so there is no way to see which calculated metrics are sanctioned by the company using my corporate login method described above.

To make matters worse, it is possible for end users to create their own calculated metrics and name them anything they want. This can create some real issues. Look at the following screenshot from the Add Metrics window in SiteCatalyst:

In this case, there are two identically named calculated metrics and there is no way to determine which one is the corporate version and which is the version the currently logged-in user created. If both formulas are identical then there should be no issues, but what if they are not? This can also be very confusing to your end users. However, the simple act of adding a more descriptive name to the corporate metric (like “CORP” at the end of the name) can create a view like this:

This makes things much clearer and is an easy workaround for a shortcoming in the SiteCatalyst product.

Final Thoughts

Using a corporate login and corporate labels is not a significant undertaking, but these tips can save you a lot of time and heartache in the long run if used correctly. You will be amazed at how quickly SiteCatalyst implementations can get out of hand and these techniques will hopefully help you control the madness! If you have similar techniques, feel free to leave them as comments here…

Adobe Analytics, Technical/Implementation

SiteCatalyst Variable Naming Tips

One of the parts of Adobe SiteCatalyst implementations that is often overlooked is the actual naming of SiteCatalyst variables in the Administration Console. In this post, I’d like to share some tips that have helped me over the years in hopes that it will make your lives easier. If you are an administrator you can use these tips directly in the Administration Console. If you are an end-user, you can suggest these to your local SiteCatalyst administrator.

Use ALL CAPS For Impending Variables

There are often cases in which you will define SiteCatalyst variables with a name, but not yet have data contained within them. This may be due to an impending code release, or you may have data being passed to the new variable that hasn’t yet been fully QA’d to the point that you are willing to let people use it. Of course, you always have the option to use the menu customization tool to hide new variable reports until they are ready, but sometimes it is fun to let your users know what types of data are planned and coming soon. Another reason to enter names into variable slots ahead of time is to make sure that your co-workers don’t re-use a specific variable slot for a different piece of data, which can mess up your multi-suite tagging architecture.

So now, let’s get to the first tip. For variables that are coming soon, I use the Administration Console to name them in ALL CAPS. This is an easy way to communicate to your users that these variables are coming soon, but not ready to be used. All you have to do is explain to your SiteCatalyst users what the ALL CAPS naming convention means. Below is an example of what this might look like in real life:

 

I have found that this simple trick can prevent many implementation issues. For example, I have seen many cases where SiteCatalyst clients open a variable report and either see no data or faulty data. This diminishes the credibility of your web analytics program and over time can turn people off with respect to using SiteCatalyst. By making sure that reports that are not in ALL CAPS (proper case) are dependable, you can build trust with your users. When you are sure that one of your new variables is ready for prime time, simply go to the Administration Console and rename the variable to remove the ALL CAPS and you will have let your end-users know that you have a new variable/report that they can dig into.

Some of my customers ask me why I wouldn’t simply use the user security feature of SiteCatalyst to only let administrators and testers see these soon-to-be-deployed variables. That is a good question. It is possible to hand-pick which variables each SiteCatalyst user has access to using the Administration area. Unfortunately, you can only limit access to Success Events and Traffic Variables (sProps). For reasons unbeknownst to me, you cannot limit access to Conversion Variables (eVars), which are often the most important variables (I have requested the ability to limit access to eVars in the Idea Exchange if you want to vote for it!). But you can certainly use this approach to limit access to two out of the three variable types if desired. Another approach I have seen used is to move all of these impending ALL CAPS variables to an “Admin” folder using the menu customizer.

Add Variable Identifiers to Variable Names

As you learn more about SiteCatalyst, you will eventually learn the differences between the different variable types (Success Events, eVars and sProps). I have even seen that some power users end up learning the numbers of the specific variables they use for a specific analysis, such as eVar10 or sProp12. While normally, only administrators and developers care about which specific variable numbers are used for each data element, I have found that there are benefits to sharing this information with end-users in a non-obtrusive manner.

For example, let’s say that you want to capture which onsite (internal) search terms are used by website visitors. You would want to capture that in a Conversion Variable (eVar) to see KPI success taking place after that search term is used, but you also might want to capture the phrases in a Traffic Variable (sProp) so you can enable Pathing and see the order in which terms are used. In this case, if you create an eVar and an sProp for “Internal Search Terms” and label them as such, it can be difficult for your SiteCatalyst users to distinguish between the eVar version of the variable and the sProp version of the variable (which is even more difficult if you customize your menus).

 

Therefore, my second variable naming tip is to add an identifier to the end of each variable name so savvy end-users know which variable they are looking at in the interface. As you can see in the screenshot above, I have added a “(v24)” to the “Internal Search Terms” eVar and a “(c6)” to the “Internal Search Terms” sProp, as well as identifiers for all other variables. This identifier doesn’t get in the way of end-users, but it adds some clarity for power users who now know that internal search phrases are contained within eVar24 and sProp6. Being a bit “old school” when it comes to SiteCatalyst, I use the old-fashioned labels from older versions of the JavaScript Debugger as follows:

  • Success Events = (scAdd), (scCheckout), (e1), (e2), (e3), etc…
  • Conversion Variables = (v0) for s.campaign, (v1), (v2), etc…
  • Traffic Variables = (s.channel), (c1), (c2), (c3), etc…

Obviously, you can choose any identifier that you’d like, but these have worked for me since they are short and make sense to those who have used SiteCatalyst for a while. Another side benefit of this approach is that if you ever need to find a report in a hurry and you know its variable number, you can simply enter this identifier in the report search box to access the report without having to figure out where it has been placed in the menu structure. Here is an example of this:

 

Front-Load Success Event Names

When you are naming SiteCatalyst variables, you should do your best to be as succinct as possible, since long variable names can have adverse effects on your menus and report column headings. However, there is one issue related to variable naming that is unique to Success Events that I wanted to highlight. Let’s imagine that you have a multi-step credit card application process and you want to track a few of the steps in different Success Events. In this case, you might use the Administration Console and set up variables as shown here:

 

In this case, the variable name is a bit lengthy, but more importantly, the key differentiator of the variable name occurs at the end of the name. So why does this matter? Well, let’s take a look at how these Success Event names will look when we go to add them to a report in SiteCatalyst:

 

Uh oh! Since the key aspects of these variable names are at the end, they are not visible when it comes to adding metrics to reports. This makes it difficult to know which Success Event is for step 1, 2, 3, etc. You can hover over the variable name to see its full description, but this is much more time consuming. I have asked Adobe repeatedly to make the “Add Metrics” dialog box horizontal instead of vertical, but have not had any success with this (you can vote for this!). In this case, I would suggest you change the names of these Success Events to something like this:

 

Which would then look like this when selecting metrics:

 

Keep in mind that there is no correlation between the length of the variable definition box in the Admin Console and when the Success Event name will get cut off in the Add Metrics dialog box, so don’t be tricked into believing that if it fits in the box you will be ok!

Final Thoughts

These are just a few variable naming tips that I would suggest you consider to make your life a bit easier. If you have other suggestions or ideas, please leave them here as comments so others can benefit from them. Thanks!

Analysis, Featured, Technical/Implementation

The T&T Plugin – Integrate T&T with Google Analytics

When Test&Target was being built back in the day and doing business as Offermatica, it was designed to be an open platform so that its data could be made available to any analytics platform. While the integration with SiteCatalyst has since been productized, a very similar approach can be used to integrate your T&T test data with Google Analytics. Let me explain how here.

The SiteCatalyst integration leverages a feature of Test&Target called a “plug-in.” This plug-in concept allows you to specify code snippets that will be brought to the page under certain conditions. The SiteCatalyst integration is simply a push of a code snippet, or plug-in, to the page that tells SiteCatalyst key T&T info.

Having something like this can be incredibly helpful for all sorts of reasons, such as integrating your optimization program with third-party tools, or delivering code to the page via T&T, which saves you from having IT make changes to the page code on the site.

To push your campaign or test data over to SiteCatalyst, you create a HTML offer in T&T that looks like this:

<script type="text/javascript">
// Append this test's campaign, experience and traffic-type tokens for SiteCatalyst to pick up
if (typeof(s_tnt) == 'undefined') {
  var s_tnt = '';
}
s_tnt += '${campaign.id}:${campaign.recipe.id}:${campaign.recipe.trafficType},';
</script>

This code simply takes the T&T campaign tokens (the ${...} values above), which represent your test and test experience, and passes them to a variable called s_tnt for SiteCatalyst to pick up. There is a back-end classification process where these numerical ID values are translated into the names you gave them in T&T. Passing IDs rather than names helps shorten the call being made to SiteCatalyst, but it is not required unless your SiteCatalyst call already has a relatively high character count.

After you save this HTML offer in your T&T account, you then have to create the “Plug-in”.  You can do so by accessing the configuration area as seen here:

Then we simply configure the plug-in here:

The area surrounded by a red box is where you select the previously created HTML offer with your plug-in code. You also have the option to specify when the code gets fired. Typically you want it to fire only when a visitor becomes a member of a test or when test content (T&T offers) is being displayed; to do so, simply select “Display mbox requests only.” If you want to, you can have your code fire on all mbox requests, as that is sometimes needed. Additionally, you can limit the code firings to a particular mbox or even to certain date periods.

Pretty straightforward. To do this for Google Analytics, you use the code right below to create an HTML offer and configure the plug-in in the exact same manner. Note that we are not passing Campaign or Recipe (Experience) ID’s, but rather profile tokens that represent the exact Campaign name and Experience name specified in your test setup.

<script type="text/javascript">
// Category = 'Test&Target', Action = the T&T campaign (test) name, Label = the experience name
_gaq.push(['_trackEvent', 'Test&Target', '${campaign.name}', '${campaign.recipe.name}']);
</script>

And that is it.  Once that is in place, your T&T test data is being pushed to your Google Analytics account.

Before I show you what it looks like in Google Analytics, it is important to understand a key concept in Google Analytics.

Test&Target is using the Custom Events capability of Google Analytics to populate the data. Each Event has a Category, an Action, and a Label. In this integration, the Google Analytics Event Category is simply “Test&Target,” because that is our categorization of these Events. The Google Analytics Event Action represents the Test&Target test name. And finally, the Event Label in Google Analytics represents the Test&Target test experience. In short: Event Category = Test&Target, Event Action = T&T test name, Event Label = T&T experience name.
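For example, a test named “Homepage Hero Test” with an experience named “Experience B” (both names made up for illustration) would come through as:

<script type="text/javascript">
// Category = Test&Target, Action = the T&T test name, Label = the T&T experience name
_gaq.push(['_trackEvent', 'Test&Target', 'Homepage Hero Test', 'Experience B']);
</script>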

Now that we understand that, let’s see what the integration gets you:

What we have here is a report for a specific Google Analytics Event Category, in this case the Test&Target Category. Most of my clients have many Event Categories, so it’s important to classify Test&Target as its own Category, and this plug-in code does that for you.

This is a very helpful report, as we can get a macro view of the optimization efforts. This report allows you to look at how ALL of your tests impact the success events being tracked in Google Analytics at the SAME time. Instead of looking at just one test, as you might be used to when looking at test results in T&T, here we can see whether Test A was more impactful than Test B, essentially comparing any and all tests against each other. This is great if your organization has many groups running tests, or if you want to see which test types impact a particular metric or combination of metrics.

Typically though, one likes to drill into a specific test and that is available by changing the Primary Dimension to Event Label which, as you know, represents the T&T Test Experience.  Here we are looking at Event Labels (Experiences) for a unique Event Action (Test):

Here we can look at how a unique test and its experiences impacted given success events captured in Google Analytics. Typically, most organizations include their key success events for analysis in T&T, but this integration is helpful if you want to look at success events not included in your T&T account, or if you want to see how your test experiences impacted engagement metrics like time on site, page views, etc.

So there you have it.  A quick and easy way to integrate your T&T account with Google Analytics.  While this can be incredibly helpful and FREE, it is important to also understand that statistical confidence is not communicated here in Google Analytics or any analytics platform that I know of, including SiteCatalyst.  It is important to leverage your testing platform for these calculations or offline calculators of statistical confidence before making any key decisions based on test data.

While it was fun to walk you through how to leverage the T&T plug-in to push data into Google Analytics, please know that you can use the plug-in for a wide array of things. I’ve helped clients leverage the plug-in capability to integrate T&T with MixPanel, CoreMetrics, and Webtrends. You can also use this plug-in capability to integrate with toolsets other than analytics. For example, I have helped clients integrate T&T data into SFDC, ExactTarget, Responsys, Causata, internal CRM databases, Eloqua/Aprimo/Unica, Demdex (now doing business as Audience Manager), and display retargeting toolsets. Any platform that can accept a JavaScript call or pick up a JavaScript variable can make use of this plug-in concept.

I’ve also helped customers over the years leverage the plug-in to publish tags to the site. Years before the current abundance of Tag Management Platforms, there were T&T customers using the plug-in to publish Atlas, DoubleClick, and analytics tags to the site. In fact, if Adobe wanted to, it could turn this plug-in capability into a pretty nice Tag Management Platform, and one that would work much more efficiently with T&T than the current Tag Management tool it has on the market today.

Technical/Implementation

The Unknown and the Known

In the Demand Generation world it is all about the “Known” and “Unknown”.  Before visitors fill out a form they are considered to be “Unknown”.  After supplying their information on a form, they are considered “Known”.  Increasing the percentage of visitors that fill out these forms adds a significant amount of value to organizations.

If you are unfamiliar with Demand Generation tools, they are often used to capture information from prospects, score leads, and send targeted emails to prospects, among other things.

Focusing on optimization techniques will allow you to increase the progression of “Unknowns” to “Knowns” and move toward true personalization, where you know exactly what to show each person on your site. To start, using traffic source, environmental variables, online behaviors and geographic variables to target content will help you discover the most effective content to present to visitors. These types of visitor profile attributes should serve as the foundation of your visitors’ marketing profiles. Then, add to them with offline data and contextual data to determine what content to present to visitors as part of an optimization.

Let me walk you through two examples that have allowed companies to extend the value of their Demand Generation Platform by integrating it with their Optimization Platform.

The architected solution shown here leverages Adobe’s Test&Target and can be applied to Demand Generation tools such as Aprimo, Eloqua, and Unica. This model can be adapted to other platforms and technologies; these are simply the ones that I have helped customers execute and see value with in the past.

This first example highlights ways to increase the percentage of visitors that complete these forms.  Here we are optimizing to the Unknowns.

Optimizing the Unknowns

Unknown Visitors Test&Target Demand Generation

1.  An Unknown contact comes to the website and we want to increase the likelihood of them filling out the form. The areas on the website and in the email shown in light red represent mboxes (short for “marketing box”), the Test&Target code that is placed on the page. The mbox does two key things in this exercise: it allows for injection of content to target this visitor, and it sets a unique visitor ID.

In order to increase the likelihood of visitors filling out the form and becoming Known, we have to be relevant. We can target different product promotions, messaging, or content that relates to the referral messaging to find out what content most effectively leads to increased form completes. Optimization allows us to understand the most effective content for form completion not only for the general population as a whole, but also across segments. For example, we may learn that SEM traffic should be presented with promotional messaging, while visitors on their third visit should be presented with branded messaging, as that increases their likelihood to fill out the form and convert.

2.  In this step a visitor has completed the form. They supplied information such as title, organizational department, company size, industry, and personal information such as email, name and address. All of this information is helpful for the Demand Generation tool to manage this particular lead. There are two additional data points that should be communicated to the Demand Generation tool as well, data points that only the Optimization Platform can provide.

The first is information on what targeted content was presented to this individual. Consider how helpful it would be for the Account Manager or the Sales Person to know whether a visitor was presented with promotional content versus branded content. In cases where the company is presenting differentiated products, having that data tied to the lead is even more valuable.

The second data point to be passed to the Demand Generation tool is the Test&Target unique visitor ID. This ID, when coupled with the Demand Generation unique ID, allows for augmentation of the visitor profile attributes with offline data, something I address in the second example.

This communication of optimization attributes happens programmatically behind the scenes as part of the integration.

3.  At this point, the visitor has made the progression from an Unknown to a Known.  The Optimization Platform provided the ability to determine what content was relevant and would lead to a higher percentage of form completes.  The Demand Generation tool has an increased amount of leads to manage that also have the rich test information provided by the Optimization Platform.

Optimizing the Knowns

Test&Target target on offline profiles

1.  While optimizations are running that target content to different segments of Unknown visitors, we can simultaneously run optimizations for different segments of Known visitors. Clients see incredible value doing this because they continue to be relevant, personalizing the site and email communication even after they have gotten the lead.

In step two of the first example, I pointed out that the optimization platform should communicate the Test&Target ID to the Demand Generation tool. This ID is then coupled with the ID the Demand Generation tool manages. As activity takes place offline, such as phone calls or email communication, the profile of that lead gets richer. Test&Target allows users to augment the online ID it creates with offline information such as sales cycle stage. This is accomplished programmatically when lead data is exported from the Demand Generation tool and then fed into Test&Target’s offline profile API.

2.  With all this rich information made available to the online profile, we now have the ability to target content in the same mboxes that were used in the first example. Using the Analytics Demystified website as an example, if this Known visitor came back to the site after filling out our form and then attending one of our ACCELERATE conferences, we may want to use the real estate in the mbox to promote our next ACCELERATE conference. Anything that is known about this visitor can be used. Another great example of the personalization capabilities here would be targeting content based off of interest expressed offline. Let’s say a visitor came to the Demystified website and submitted their information for us to contact them. During a phone call that we had with them, we found out that they were interested in the SiteCatalyst audits that Adam Greco provides. We notate their interest in our Demand Generation tool. That interest then gets pushed into their T&T profile, and upon subsequent visits to the site we can target content associated with Adam’s offerings that clients love. This personalization, or relevance, can help progress this visitor into further engaging with Adam for his services.

3.  In this step, Test&Target is augmenting the Demand Generation tool’s profile by again communicating additional optimization data points as well as any recent website behavior.  This is very similar to Step two in the first example but in this case the visitor is already Known and may have visited other pages of the site or was presented with specific targeted content that is worth noting for the Sales Person or Account Executive.

So there you have it. If you are using a Demand Generation tool and you are also using an optimization platform that supports the augmentation of the online ID with offline data, I highly recommend integrating the two. There is incredible value in optimizing form completes and in continuing to be relevant to visitors even after they complete a form. Because we are using an optimization tool to accomplish this, all the effort here is easily quantifiable so you can show the value and ROI.

Adobe Analytics, Analytics Strategy, Technical/Implementation

Integrating SiteCatalyst & Tealeaf

In the past, I have written about ways to integrate SiteCatalyst with other tools including Voice of Customer, CRM, etc… In this post, I will discuss how SiteCatalyst can be integrated with Tealeaf and how to implement the integration. This post was inspired and co-written by my friend Ryan Ekins who used to work at Omniture and now works at Tealeaf.

About Tealeaf

For those of you unfamiliar with Tealeaf, it is a software product in the Customer Experience Management space. One key feature that I will highlight in this post is that Tealeaf customers can use its products to record every minute detail that happens on the website and then "replay" sessions at a later time to see how visitors interacted with the site. While this "session replay" feature is just a portion of what you can do in Tealeaf, it is the only feature I will focus on for the purposes of this post. In general, Tealeaf collects all data that is passed between the browser and the web/application servers, so when someone says, "Tealeaf collects everything," that is just about right. While there is some third-party data that may need to be passed over in another way, for the most part you get all communication between browser and server out of the box. Tealeaf clients use the product to improve the user experience, identify fraud or simply learn how visitors use the website. Whereas tools like SiteCatalyst are primarily meant to look at aggregated trends in website data, Tealeaf is built to analyze data at the lowest possible level – the session. However, one of the challenges of having this much data is that finding exactly what you are looking for can feel like looking for a needle in a haystack, especially in earlier versions of Tealeaf (i.e. earlier than 8.x). While the Tealeaf UI has gotten better over the years and is used by both business and technical users, it was not built to replace the need for a web analytics package. It is for this reason that an integration with web analytics packages such as SiteCatalyst makes so much sense.

SiteCatalyst Integration

Since SiteCatalyst is a tool that can be used by many people across an organization, years ago Omniture and Tealeaf decided to partner to create a Genesis integration that leverages the strengths of both products. The philosophy of the integration was as follows:

  • SiteCatalyst is an easy tool to use to segment website visits, but it doesn't have a lot of granular data
  • Tealeaf has tons of granular data, but isn’t built for many end-users to access it and build segments of visits on the fly
  • Establishing a “key” between the SiteCatalyst visit and the Tealeaf session identifier could bridge the gap between the two tools

Based upon this philosophy, the two companies were able to create a Genesis integration that is easy to implement and provides some very exciting benefits. When you sign up for the Tealeaf/SiteCatalyst Genesis integration, a piece of JavaScript is added to your SiteCatalyst code. This JavaScript merely takes the Tealeaf session identifier and places it into an sProp or eVar. That sProp or eVar then becomes the key across both products. Once the Tealeaf session identifier is passed into SiteCatalyst, it acts like any other value. This means that you can associate SiteCatalyst Success Events with Tealeaf ID's, segment on them or even export these ID's. However, if you go back to the original philosophy of the integration, you will recall that the primary objective is to combine SiteCatalyst's segmentation capability with Tealeaf's granular session replay capability. This is where you will find the most value, as demonstrated in the example that follows.
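Before the example, for those curious about the mechanics, here is a minimal sketch of what that piece of JavaScript conceptually does. It assumes the Tealeaf session identifier is available in a first-party cookie (commonly named TLTSID) and that prop10/eVar10 have been reserved for the key; the actual code provided through Genesis may differ.

```javascript
// Hypothetical sketch of the glue code: copy the Tealeaf session identifier into
// SiteCatalyst variables so it becomes the key between the two tools. The cookie
// name (TLTSID) and the variable numbers are assumptions for illustration.
function getCookie(name) {
  var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : '';
}

var tealeafSessionId = getCookie('TLTSID');
if (tealeafSessionId) {
  s.prop10 = tealeafSessionId; // sProp version of the key (s is the SiteCatalyst object)
  s.eVar10 = tealeafSessionId; // eVar version, if your implementation uses one
}
```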

Let’s say that you have an eCommerce website and that you have a high cart abandonment rate. In SiteCatalyst, it is easy to build a segment of website visits where a Cart Checkout Success Event took place, but no Purchase Success Event occurred:

Once you create this segment, you can use SiteCatalyst or Discover to see anything you want, including Visit Number, Paths, Items in the Cart, Browser, etc… What is difficult to see in SiteCatalyst, however, is the actual pages the visitor saw, how those pages looked, where the user entered data, the exact messages they saw, etc… As the old saying goes, "a picture is worth a thousand words," and sometimes simply "seeing" visitors use your site can open your eyes to ways you can improve the experience and make more money! Of course, watching every shopping cart session would be tedious. But by using the SiteCatalyst-Tealeaf integration, once you have built the segment shown above, you can isolate the exact Tealeaf session ID's that match the criteria of the segment, which in this case are visits where a checkout event took place but there was no purchase. To do this, simply apply the segment in SiteCatalyst v15, Discover or DataWarehouse and you can get a list of the exact Tealeaf session ID's that are now stored in an sProp or eVar:

Once you have these Tealeaf ID’s, you can open Tealeaf and view session replays to see if you can find an issue that is common to many visits, such as a data validation error, a type of credit card that is causing issues, etc… Here is a screenshot of what you might see in Tealeaf:

It is easy to see how simply passing a unique Tealeaf session ID to a SiteCatalyst variable can establish a powerful connection between the two tools that can be exploited in many interesting ways. The above example is the primary method of leveraging the integration, but you could also upload meta-data from Tealeaf into SiteCatalyst using SAINT Classifications, and there are many, many more possibilities.

One additional point to keep in mind is that for many clients, the number of unique Tealeaf session ID’s stored in SiteCatalyst will exceed the 500,000 monthly limit. As shown in the screenshot above, 96% of the values exceeded the monthly limit. This means that you may have to rely heavily on DataWarehouse, which can sometimes take a day or two to get data back. It also means that you may want to consider using an sProp instead of an eVar if you have a heavily trafficked site.

The Future

In the future, we'd like to see Adobe and Tealeaf build a deeper integration that allows SiteCatalyst users to simply click on a segment and automatically be taken into Tealeaf, where the same segment would be created and they could begin replaying sessions. This functionality already exists for OpinionLab, Google Analytics and others. It would also be interesting if one day joint customers could use Tealeaf to assist with SiteCatalyst tagging itself. Since Tealeaf has all of the data anyway, why not use it, combined with the SiteCatalyst API's, to populate data in SiteCatalyst instead of using lots of complex JavaScript? Currently, the cost of API tokens makes this prohibitive, but technically there is no reason it cannot be done.

Final Thoughts

So there you have it. If you have both SiteCatalyst and Tealeaf, I recommend that you check out this integration and think about the use cases that might make sense for you. Also keep in mind that similar integrations exist with other vendors that offer "session replay" features, like ClickTale and RobotReplay (now part of Foresee). If you have any detailed questions about the Tealeaf integration, feel free to reach out to @solanalytics.

Adobe Analytics, Technical/Implementation

SiteCatalyst Implementation Pet Peeves – Follow-up [SiteCatalyst]

I recently blogged a list of my top Omniture SiteCatalyst implementation "Pet Peeves." While the response to my post was very positive, one reader agreed with most of what I said but disagreed with a few of my assertions or felt I had made some omissions. First, let me state that I always encourage feedback and comments on my blog posts since that helps everyone in the community learn. In general, the reader was making the point that my post only took into account an implementer's perspective vs. the perspective of the web analyst. Personally, I don't like to divide the world into implementers and analysts, since some of the best implementers I know also have a deep understanding of web analysis and vice-versa. Having been a web analytics practitioner using SiteCatalyst at two different organizations, I feel that I am in a good position to know whether the items I suggest (or discourage) will lead to fruitful analysis. I always try to write my blog from the perspective of the in-house web analyst who has to deal with things that I dealt with in the past, such as adoption, enterprise scalability, training, variable documentation, etc… In fact, I attribute much of my consulting success to the fact that I have been in the shoes of my clients and that they appreciate that my recommendations are based upon actual pains that I have experienced.

Since my original post was a very quick "Top 10" list and didn't provide an enormous amount of detail, and given the interest that it generated, I thought it would be worthwhile to write this follow-up post to address the concerns raised and to elaborate on the rationale behind some of my original assertions. In the process, it will become clear that I don't necessarily agree with the concerns raised about my original post, but I am always cognizant of the fact that every client situation is different and every SiteCatalyst implementer has experiences that color their own implementation preferences. I don't see it as my place to say which techniques are right and which are wrong, but rather to do my best to state what I think is or is not "best practice" and why, based upon what I have seen and experienced over the past ten years, and let my readers decide how to proceed from there…

Tracking Every eVar as an sProp

The first pet peeve I mentioned is finding clients that have duplicated every eVar with a similar sProp. I stated that there are only specific cases in which an sProp should be used, including a need for unique visitor counts, Pathing, Correlations and storing data that exceeds unique limits for access in DataWarehouse. The reader seemed to think I was being hard on the poor sProp and listed a few other cases where they felt duplicating an eVar with an identical sProp, or adding additional sProps, was justified, including:

  1. Using List sProps – The reader suggested that I had made an omission by not mentioning List sProps as another reason to consider using an sProp in an implementation. I maintain that the use of List sProps was covered by my statement that other sProp uses are "few and far between." I don't use List sProps very often because I feel there are better ways to achieve the same goals. As the reader stated, List sProps have severe limitations, and there is a reason they are rarely used (maybe 2% of the implementations I have seen use them). I have found that you can achieve almost any goal you would use List sProps for by re-using the Products variable and its multi-value capabilities instead. By using the Products variable, you can associate list items with KPI's (Success Events) rather than just Traffic metrics. The reader's own example of tracking impressions illustrates the differences perfectly: you can store impressions and clicks of internal ads and calculate a CTR using the Products variable and two Success Events (see the sketch after this list). This also gives you charts for impressions, clicks and the ratio of the two, which can easily be added to SiteCatalyst dashboards. I have found that doing this with a List sProp is difficult, if not impossible, and reporting on it is tedious. For more information on my approach, please check out my blog post on the subject.
  2. Page-Based Containers & Segmentation – Here the reader suggested that the ability to isolate specific pages using a Page View-based container is important to the life of the web analyst. Ben Gaines from Omniture also commented about this on my original post, and I do agree that this can be useful for some advanced segmentation cases. I did not include it in my original list because I find it to be a much more advanced topic than I intended to cover in that quick "Top 10" post. While there may be cases in which you want to set an sProp to filter out specific items using a Page View-based segment container, I find that I often do this using the Page Name sProp, which is already present. I do not see too many cases where a client storing an eVar (let's say Zip Code) will say, "I am going to duplicate it as an sProp for the sole purpose of building a Page View-based container segment to include or exclude page views where a Zip Code of 123456 was seen." Maybe that happens sometimes, but I still think it falls outside the scope of the primary things you should consider when deciding whether to duplicate an eVar, and I think it is a stretch to say that this functionality establishes the line between those who care about implementation and those who care about web analysis.
  3. Correlations – With respect to Correlations, the reader suggested that users correlate as often as they can since cross-tabulation is so essential to the web analyst. This is exactly why I included Correlations in my list! I also mentioned that this justification for using an sProp may go away in SiteCatalyst v15, where all eVars have Full Subrelations. One of the reasons I prefer Subrelations to Correlations is that Correlations only show intersections (Page Views) and do not show any cross-tabulation of KPI's (Success Events). Personally, I would disagree with the reader about over-doing Correlations, since in my experience implementing too many Correlations (especially 5-item or 20-item Correlations) with too many unique values can cost a lot of $$$ and lead to data corruption and latency.
  4. Pathing – In the area of Pathing, I think the reader and I are on the same page about its importance, which is why I have published so many posts related to Pathing, such as KPI (Success Event) Pathing, Product Pathing, Page Type Pathing, etc… Again, I might differ with the reader in that I don't think enabling Pathing on too many sProps is a good idea, since it can cost $$$ and produce report suite latency, which is why I prefer to use Pathing only when it adds value.
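To make the Products variable alternative from point #1 above concrete, here is a rough sketch of how internal ad impressions and clicks might be tracked with the Products variable and two Success Events. The event numbers (event20/event21) and the ad naming convention are hypothetical and would need to be mapped to whatever your own implementation has reserved.

```javascript
// Hypothetical sketch: track internal ad impressions and clicks via the Products
// variable instead of a List sProp. event20/event21 and the ad names are placeholders;
// s is the standard SiteCatalyst object.

// On a page that renders two internal ads (impression counting):
s.events = 'event20';
s.products = ';internal_ad:homepage_hero,;internal_ad:spring_promo';

// On click of a specific ad, fire a custom link call so no extra page view is counted:
s.linkTrackVars = 'events,products';
s.linkTrackEvents = 'event21';
s.events = 'event21';
s.products = ';internal_ad:homepage_hero';
s.tl(true, 'o', 'Internal Ad Click');
```

From there, a calculated metric of event21 divided by event20 gives the CTR by ad name, which is the report the reader was after.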

At the end of the sProp duplication section, the reader stated that there was no downside to duplicating every eVar as an sProp since it has no additional cost. To this, I would reiterate that my post was not advocating abandoning the use of sProps, but rather attempting to help readers determine when they might want to use sProps, so as to avoid over-using them when they will not add value. Even after years of education, I still find that many clients get confused as to whether they should use an eVar or an sProp in various situations, and most people I speak to welcome advice on how to decide if each is necessary.

However, I disagree with the reader’s assertion that duplicating every eVar as an sProp has no costs. Maybe it is due to the fact that I have “been in the trenches,” but in my experience I have seen the following potential negative ramifications:

  • Over-implementing variables and enabling features unnecessarily can cause report suite latency
  • Over-implementing variables can increase page load time, which can negatively impact conversion
  • Over-implementing variables and features can cost additional $$$ as described above (e.g. Pathing, Correlations)
  • When you implement SiteCatalyst on a global scale, you often need to conserve variables for different departments or countries to track their own unique data points. This means that variables (even 75 of them!) are at a premium. Therefore, duplicating variables has, at times, caused issues in which clients run out of usable variables.
  • Most importantly, however, is the impact on adoption. Again, I may be biased due to my in-house experience, but here is a real-life example: Let's say that you have duplicated all eVars as sProps. Now you get a phone call from a new SiteCatalyst user (whom you have begged and pleaded with for six months to get to log in!). The end-user says they are trying to see Form Completions broken down by City. They opened the City report, but were only able to see Page Views or Visits as metrics. Why can't they find the Form Completions metric? Is SiteCatalyst broken? Of course not! The issue is that they have chosen to view the sProp version of the report instead of the eVar version. That makes sense to a SiteCatalyst expert, but I have seen the puzzled look on the faces of people who have no desire to understand the difference between an sProp and an eVar! In fact, if you try to explain it to them, you will win the battle but lose the war. In their minds, you just implemented something that is way too complicated. You've just lost one advocate for your web analytics program – all so that you can track City in an sProp when you may not have needed to in the first place. In my experience, adoption is a huge problem for web analytics and is a valid reason to think twice about whether duplicating an sProp is worthwhile. While I'll admit that duplicating all variables certainly helps "cover your butt," I worry about the people at the client who are left to navigate a bloated, confusing implementation…

Therefore, for the reasons listed above, I remain steadfast in my assertion that there are cases where sProps add value and cases where they just create noise. While there will always be edge cases, I think that the justifications I laid out in my original post are the big ones that the majority of SiteCatalyst clients should think about when deciding if they want to duplicate an eVar as an sProp or use an sProp in general.

As an aside, while we are revisiting my original post, I thought of a few more items I wish I would have included so I will list them here:

  1. One other justification for setting an sProp I should have mentioned is Participation. There are some fun uses of Participation that can improve analysis, and I find that sProp Participation is easier for most people to understand than eVar Participation, so I would add that to my original list.
  2. If you do find a need to duplicate an eVar as an sProp, but it is only for “power users,” keep in mind that you can hide the sProp variable from your novice end-users through the security settings under Groups.
  3. Finally, I see Omniture ultimately moving to a world where there will be only one variable type, so if you want to be part of that world, please vote for my suggestion in the Ideas Exchange here.

VISTA Rules

Another pet peeve I mentioned is that I often find clients who are using VISTA rules too often or as band-aids. The reader stated that VISTA rules are a good alternative to JavaScript tagging since they can speed up page load times. I think this is another situation where my time working at Omniture and in-house managing SiteCatalyst implementations may bias my recommendations. While I agree that page load time is important, most Omniture clients I saw never mentioned using VISTA rules as a way to decrease page load time, but rather as a way to avoid working with IT! Usually, when I find a client that has many VISTA rules, it is because they have a bad relationship with an IT group that doesn't want to do additional tagging, not because they are trying to save page load time. To address the reader's point about page load speed, I would agree that there are cases where using VISTA rules instead of JavaScript can decrease page load time, but I certainly do not think this should be the primary deciding factor. Great strides have been made in tagging, including things like dynamic variable tagging and tag management tools, which have greatly reduced the impact of tagging on page load times. I suggest readers check out Ben Robison's excellent post on VISTA vs. JavaScript, which discusses not only page load speed but also the many other important factors to consider before jumping into VISTA rules.

Another point I'd like to make about VISTA rules is that, in my experience, they have a high likelihood of breaking and leading to periods of bad data. VISTA rules are like Excel macros: they do what you tell them to do, but if something changes, it can easily throw off a VISTA rule and cause incomplete or inaccurate data to be reported in SiteCatalyst. On this point, perhaps I am a bit jaded because I saw so many different VISTA implementations go awry while I was at Omniture. In fact, it is rare that I find clients that have a VISTA rule that has worked for several years without ever having an issue. And if you do encounter an issue, you will have to pay Omniture around $2,000 to update it – every time. Want to make an update to the VISTA rule? $2,000. Want to turn off the VISTA rule or move it to a different report suite? $2,000! Consultants don't have to write these checks, but guess who does – the in-house people do! This is why people are so excited about the new v15 processing rules and emerging tag management vendors. It is this tendency to break, and the risk of bad data, that makes me a bit gun-shy about using VISTA rules simply as a replacement for JavaScript tagging. Moreover, since the reader's overall premise was that one must keep the web analyst in mind during implementation, I would be cautious about being overly reliant on a solution like VISTA that is so prone to causing data issues that could thwart the analyst's ability to do web analysis. I have seen companies that have 20+ VISTA rules, and I promise you that they are not huge fans of VISTA right now (though they should really blame themselves, not the tool)! If you do pursue VISTA rules, my advice is to consider using DB VISTA over VISTA. DB VISTA rules cost a bit more, but they offer more flexibility since you can at least make updates to the data portion of your rules without having to pay Omniture additional $$$.

One additional point to think about when it comes to VISTA rules is the impact they can have on report suite latency. Having too many VISTA rules can slow down your ability to get timely data in SiteCatalyst and I have seen many large organizations have severe (several days) report suite latency due to multiple VISTA rules acting on each server call. This impacts the web analyst’s ability to get the data they need and should be factored into decisions about VISTA rules.

As I stated in my original post, I have nothing against VISTA rules, but I do find the overuse of them to be a potential red flag when I look at a new implementation. I often find that excessive use of VISTA rules is a symptom of bigger problems that merit investigation. Just as I don't advocate duplicating sProps or enabling Pathing when it is not necessary, I don't advocate the use of too many VISTA rules, since they can be great in the short term but bad in the long term. Now that I am a consultant again, it would be easy for me to recommend VISTA rules left and right, but since I like to have long-term relationships with my clients, I don't, because I know what it is like to be the one around later if/when issues arise!

Final Thoughts
I hope this post provides some good food for thought and more in-depth information about some of the items I listed in my original post. If you would like to discuss any of the above topics in more detail, feel free to leave comments here or e-mail me. Thanks!

General, Technical/Implementation

Need A Checkup? The Doctor Is In!

When it comes to your health, most doctors say that having a regular checkup is the easiest way to prevent major illness. By simply going to see your doctor once a year, you can get your vitals evaluated and see if your blood pressure is too high or too low, check your cholesterol, etc… If you happen to be sick at the time of your checkup, you can find out whether it is serious, and if you feel fine, the checkup is a way to confirm that you are in good shape.

However, when it comes to web analytics implementations, it isn’t always easy to know how “healthy” you are. You might wonder the following:

  • Is my organization capturing the right data to ensure it can do the analysis needed to improve conversion rates?
  • Do the configuration settings of our web analytics tool make sense?
  • Are we maximizing the use of our web analytics tool or are we only using 20% of its capabilities?
  • How does our web analytics implementation compare to that of my peers/competitors?

Over the past decade, I have been associated with hundreds of web analytics implementations, and the above questions were ones that often kept my clients awake at night. And, truth be told, based upon my experience, many of them had reason to be worried. More often than not, when I crack open a client’s web analytics implementation, I am shocked by what I see. Here are a few examples of problems I encounter repeatedly:

  • Unusable pathing reports due to inconsistent page naming practices
  • Unusable campaign reports due to inconsistent tracking code naming conventions
  • Web analytics variables/reports defined, but with no data
  • Cookie settings that don't line up with business goals (e.g. cookies using Last Touch when Marketing uses First Touch)
  • Data inconsistencies resulting in reports that are highly suspect or untrustworthy
  • Incomplete meta-data or look-up tables
  • Lack of critical KPI’s and best practices specific to the industry vertical the website serves
  • Lack of appropriate usage of key web analytics tool features that could improve overall analytic success

The remainder of this post will discuss a new service offering Analytics Demystified will be providing to address the preceding concerns. If you are interested in knowing the “health” of your organization’s web analytics implementation, please read on…

Introducing the Web Analytics Operational Audit

So how do you know if you are doing well or poorly? Like anything, the best way to know where you stand is to perform a checkup or audit. In this case, I am referring to an audit that reviews which web analytics tool features you are utilizing and what data your web analytics implementation is currently collecting.

Since there is no official “doctor” when it comes to web analytics, we at Analytics Demystified have created what we believe is the next best thing. Taking advantage of our depth of experience in the web analytics arena, we have created a Web Analytics Operational Audit scorecard that encompasses the best practices we have seen across all company sizes and industry verticals. This scorecard is vendor-agnostic and has over 100 specific items and categories that allow you to see where your current web analytics implementation excels and where it is lacking.

Over the years, I have done this type of scoring informally, but the Operational Audit framework we have created at Demystified takes this to a whole new level. Here is a snapshot of what the scorecard looks like so you can see the format:

Our goal in creating this Operational Audit project is to have a simple, yet powerful way to objectively score any web analytics implementation from a functionality point of view. Knowing where your organization stands with respect to its web analytics implementation is beneficial for the following reasons:

  • If you think you have a robust implementation, but it turns out that you do not, you may be making poor business decisions today based upon faulty data and/or incorrect assumptions
  • What if your implementation is worse than you thought? You can try and hide it, but I have found that in the long run, bad web analytics implementations are eventually found out…usually at the worst time when an executive needs something critical and you have to come back and say “sorry, we don’t have a way to know that…” Wouldn’t you like to know sooner, rather than later, what shape you are in so you can get your web analytics house in order?
  • Maybe you have an awesome web analytics implementation, but your boss doesn't know it! What would it do for your job/career if your boss were told by an independent 3rd party that all of the time and money they invested in your web analytics implementation has paid off? What if your web analytics implementation were in the top 10% of the general web analytics population? Promotion anyone?
  • Your organization doesn’t have unlimited time and budget for web analytics implementation projects. When the stars align and you do get resources or budget, wouldn’t it be great to be armed and ready with the top things you should be doing so you don’t miss these golden opportunities?

These are just a few of the many reasons that auditing your implementation makes sense. One important note: this Operational Audit does not include a technical audit of JavaScript tagging (which can be equally as important!).

Go Forth and Audit!

As I stated earlier, the unfortunate truth is that there is more bad than good out there. People change roles, priorities change, people leave your company, companies merge. There can be any number of reasons contributing to the devolution of web analytics implementations, but regardless of how you got to where you are, if you want to be successful, you need to grab hold of the reins of your current web analytics implementation and take ownership of it.

For example, when I joined Salesforce.com, I could have spent my time blaming our implementation shortcomings on my predecessors, but that wouldn’t help me get to where I needed to go. Instead, I chose to audit our implementation and identify what was worth keeping and what had to go! In the end, our company was better for it, and the audit led to an implementation roadmap for the next year, allowing me to know how long it would take to turn things around and what type of resources I would need.

It is based upon this recent experience that I highly encourage you to consider this Operational Audit service for your organization. Long term, one of my hopes is that I can audit enough companies, across various company sizes and verticals, to create a benchmark of web analytics implementations so I can let you know how your scores compare to others like you. This way, even if most companies score poorly, you can possibly claim to be the best of what is currently out there (can you tell I liked being graded on a curve in high school?). I am also looking forward to re-scoring companies next year so they can see how their implementations have improved year over year.

Intrigued? Interested? Scared?

If you’d like to learn more about having your web analytics implementation audited, please contact me and I’d be happy to answer any questions. Thanks!

 

Adobe Analytics, Technical/Implementation

CRM Integration #3 – Passing CRM Meta-Data to Web Analytics

(Estimated Time to Read this Post = 2.5 Minutes)

In my last few posts I have been delving into Web Analytics & CRM (Customer Relationship Management) integration. In my first post I described how you can pass Web Analytics Data to your CRM system to help your sales people. In my last post, I described how you could pass CRM data like Leads, Opportunities and Revenue into your Web Analytics tool. In this post, I will round out the trilogy by describing how you can use CRM data as Web Analytics meta-data to enhance your Web Analytics reporting.

My Golf Handicap Story
Since most people don't often like talking about meta-data, I will begin by sharing an easier-to-understand story that first taught me how interesting integrating CRM and Web Analytics data could be. Back when I managed the website for the CME, we had a situation in which we were trying to sell tickets for a major golf tournament. Unfortunately, the event was nearing and we still had lots of tickets to sell. At the time, I recalled that, for registered website users, we had golf handicap as one of our CRM fields in our Salesforce.com system (our customers were traders and spent a lot of time golfing!). I had recently worked on capturing each customer's website ID in SiteCatalyst and also placing it in our CRM system. Suddenly, the light bulb went on in my head…why not upload golf handicap as a SAINT Classification of the website ID I had in an sProp and eVar in SiteCatalyst? I created a SAINT Classification table that passed in the raw handicap and also grouped it into buckets like this:
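Conceptually, that upload was just a tab-delimited SAINT file along these lines (the ID's, handicaps and bucket labels below are made up for illustration):

```
Key	Golf Handicap	Golf Handicap Bucket
web_id_1001	4	0-9 (Low)
web_id_1002	13	10-18 (Mid)
web_id_1003	26	19+ (High)
```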

Whereas previously I could see what pages each website ID had viewed on the website, I could now expand that to see the same data for this new golf handicap Classification of that variable. The result was a report like this, in which I could see the most popular pages for website visitors by golf handicap:

From there, all that was left to do was to target some ads on those pages and voilà, the tickets were soon gone!

For me, this was more experimental than anything else, but it was the catalyst (no pun intended!) that helped me see the power of integrating CRM and Web Analytics. Of course, back then there were no API's to help pass data between systems, but nowadays this is much easier (e.g. Genesis integrations). With this in mind, let's take a look at a few more examples of how you can take advantage of this concept.

Examples of Passing CRM Meta-Data to Web Analytics
Now that you get the general idea, I’ll walk you through some other examples of enriching your Web Analytics data by bringing in CRM meta-data. Let’s assume that you have done the steps outlined in my last post and have made a connection between your Web Analytics visitors and your known CRM prospects/customers. Using the primary key described in my last post, you can export whatever CRM fields you care about from your CRM system and import them into your SiteCatalyst implementation as SAINT Classifications. Here, you can see that I have decided to export Industry, # of Employees, Lifetime Value and a Lifetime Value grouping (to make my reports more readable) from my CRM system and import them using the following SAINT file:
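As a rough illustration of the format, a tab-delimited SAINT upload for those fields might look something like this (the keys and values shown are made up):

```
Key	Industry	# of Employees	Lifetime Value	Lifetime Value Group
lead_50123	Financial Services	5000	125000	$100K+
lead_50124	Retail	800	18000	$10K-$50K
lead_50125	Healthcare	12000	62000	$50K-$100K
```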

Now that I have done this, I can open my Lead Gen ID report in SiteCatalyst and look at any of these CRM fields as Classifications. Here is a view of some of my Success Events by Industry:

Here is the same data viewed by # of Employees:

Here is the same data viewed by Lifetime Value:

The same concept can apply if you are using other Web Analytics tools. Here is an example of viewing reports in Google Analytics by Job Title (in this case filtering for CIO’s):

Final Thoughts
As you can see, once you have made the connection between your Web Analytics and CRM systems, there are lots of creative things you can do to augment your traditional web analyses. I know a lot of people also do this in tools like Quantivo or Omniture Insight, but I hope it was helpful to see some of the ways you can do this if you only have access to SiteCatalyst.

Adobe Analytics, Technical/Implementation

CRM Integration #1 – Passing Web Analytics Data to CRM

(Estimated Time to Read this Post = 5 Minutes)

One of the areas of Web Analytics that I am passionate about is the integration of Web Analytics and CRM. In the next three blog posts, I am going to share why I think this topic is important and some ideas on how to do it.

Why Integrate Web Analytics and CRM?
For those who are not experts on CRM, it stands for Customer Relationship Management, and it generally involves using a tool to store all of the information you have about your prospects/customers. This normally includes all contacts with customers while they were prospects, all customer service touches, what products they use and how much they pay for each. The main thing to understand is that CRM systems contain pretty much all data about prospects/customers that is collected after you know who they are. But before your customers fill out a form or call you, guess where many of them are going? That's right, your company's website (and, more and more, to social media sites!). And guess who knows the most about what prospects do before your company knows they are interested in you? Your Web Analytics platform!

Last week, I presented on this topic at the eMetrics conference, where I posited that the combination of Web Analytics and CRM is akin to the joining of chocolate and peanut butter: they are both great, but even better together! Oftentimes, as web analysts we know a great deal about what happens on the website, but unless your website sells something or sells advertising, the true success event ($$$) often takes place off the website (especially for B2B sites). Additionally, for all the great information we have about website visitors, most of it is anonymous – we don't really know who they are, so we can't easily connect their website behavior to other interactions. What if we could take all of that anonymous website behavior and somehow connect it with the known prospect/customer behavior stored in our CRM system? Imagine if every time a prospect filled out a lead form on your website, the sales person who is routed the lead could see what that person had viewed on the website, what products they had looked at, etc… That could lead to a much more meaningful conversation and help get things off on the right foot. In this first post on the topic, I will cover ways in which you can improve your CRM system by passing it meaningful data from your web analytics tool.

Passing Pages Viewed
The first area I would like to cover is the concept mentioned above, in which we pass data about pages viewed from your Web Analytics tool into your CRM tool. Let's say that you have a website visitor who navigates a bunch of pages on your website and then fills out a lead form. At that moment, you have the opportunity to create a connection between that user's website (cookie) ID (Omniture calls this a Visitor ID) and the ID used to record that lead form in your CRM system. While it would take too long to go into all of the details on how to do this (hint: read my old Transaction ID post!), at a high level you can use the API's of both tools to tie these ID's together. Once you have made this connection, you can pass data bi-directionally between the two systems. In this case, we are going to create a custom object in our CRM system that represents website traffic and import the pages this particular prospect viewed on the website. While this may sound hard, if you look closely, you will notice that the following screen shot is something I did between Omniture and Salesforce.com back in 2005, so it can't be that hard, right?
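The exact mechanics vary by implementation, but here is a minimal, hypothetical sketch of the website side of that connection: one shared identifier is sent to SiteCatalyst on every page and copied into a hidden field at lead-form submit so it also lands on the CRM lead record. The ID generation, the eVar number and the form/field names are all assumptions for illustration.

```javascript
// Hypothetical sketch: capture one shared identifier so the web analytics visitor
// and the CRM lead record can later be joined via each tool's APIs. The cookie
// name, eVar number and form/field IDs are placeholders; s is the SiteCatalyst object.
function getOrCreateWebsiteId() {
  var match = document.cookie.match(/(?:^|; )website_id=([^;]*)/);
  if (match) { return match[1]; }
  var id = 'web_' + new Date().getTime() + '_' + Math.floor(Math.random() * 1000000);
  document.cookie = 'website_id=' + id + '; path=/; max-age=' + 60 * 60 * 24 * 365;
  return id;
}

var websiteId = getOrCreateWebsiteId();
s.eVar20 = websiteId; // the same ID is sent to SiteCatalyst on every page

// At lead-form submit, drop the ID into a hidden field so it is stored with the CRM lead
document.getElementById('leadForm').addEventListener('submit', function () {
  document.getElementById('website_id_field').value = websiteId;
});
```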

In this case, your sales team would know that this person is probably interested in Weather products so they might want to prepare accordingly for their first phone call or face-to-face meeting.

Passing Website Scores
In one of his post-Summit blog posts, Ben Gaines talked about a topic called Visitor Scoring (I prefer the name Website Scoring to avoid the whole Engagement debate!). Basically, this involves storing a unique website score for each website visitor so you can see how active they have been on the website. For example, you can set this up so that if a visitor views a Product page they get 5 "points," but if they view a product demo video, they get 8 "points," and so on. I tend to use Participation metrics or segments in Discover to determine which pages should be rated higher than others. If you have implemented this, one of the cool ways you can use it is to identify the current website score of a visitor who completes a web form and pass it to your CRM system. Let's say that your sales team receives hundreds or thousands of new leads each day. One way they can determine which ones to call first might be to see how active each lead has been on the website. If one prospect comes through with a website score of "10" and another with "54," which one would you call? While this isn't meant to replace a full-blown lead management system, it is another data point that can be passed from Web Analytics to CRM.
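As a rough sketch only, here is one way the scoring piece might be wired up on the site. The point values follow the example above; the cookie name, page-type detection and eVar number are assumptions for illustration.

```javascript
// Hypothetical sketch of a simple website score: add points per page type, persist
// the running total in a cookie, and expose it to SiteCatalyst so the current value
// can be passed to CRM when a lead form is completed. Names/values are placeholders.
var PAGE_POINTS = {
  'product page': 5,
  'product demo video': 8
};

function readScore() {
  var match = document.cookie.match(/(?:^|; )site_score=(\d+)/);
  return match ? parseInt(match[1], 10) : 0;
}

function addToScore(pageType) {
  var score = readScore() + (PAGE_POINTS[pageType] || 1); // 1 point for any other page
  document.cookie = 'site_score=' + score + '; path=/; max-age=' + 60 * 60 * 24 * 90;
  return score;
}

// On each page view, update the score and record it in an eVar so the latest value
// is available at the moment the visitor completes a lead form.
s.eVar30 = addToScore('product page'); // pass the actual page type of the current page
```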

Lead Nurturing
Unfortunately, there are most likely way too many visitors for your sales team to talk to, and not all of them are truly qualified. Therefore, one of the key strengths of CRM tools is that they can nurture or re-market to prospects via e-mail and other channels. For example, it would be common for a company to use its CRM tool to automatically schedule an e-mail to go to all prospects who are interested in Product X and have more than 500 employees. However, what is often missing from these types of nurturing programs is the deep insight that can come from your Web Analytics tool. Building upon the preceding scenario, if we have a connection between a particular prospect and their website cookie ID, then as they come back to the site and click on more things, we should be pushing that information into our CRM tool and having it decide which re-marketing information the prospect receives. For example, if the prospect above started clicking on items related to another CME product (say Eurodollars), the sales person may have no plans to look at this person's record in the next week, so they would never know that. But by automating the data exchange between the Web Analytics tool and the CRM tool, specific product flags could be triggered that would result in the prospect being intelligently nurtured with little human intervention. If you are interested in Lead Nurturing, you can also look at tools like Eloqua, which partner with CRM tools to provide this type of functionality.

Passing Key Website Metrics to CRM
The last concept I will cover in this post is passing key website metrics to your CRM system. Most sales organizations use conversion funnels that are not unlike what we are used to in Web Analytics. However, their funnels normally begin with new Leads and progress through different sales stages until business is won or lost. The one flaw in this model is that it doesn't account for the true potential of the selling opportunities that exist. A true salesperson would say that anyone who visits their company's website is an opportunity for a sale, so the way I look at it, they should include metrics like Unique Visitors and people who view a Demo or see a Lead Form as part of their sales funnel. I also think that getting Sales to think of their funnel in a larger context helps bridge the gap between Sales and Marketing and opens the door for increased cooperation.

Therefore, one of the ways I do this is to take the traditional sales funnel and add some of our Web Analytics KPI’s to it like this:

Final Thoughts
So that covers most of the topics related to passing Web Analytics data into CRM. In the next post, I will cover the flip side and show how you can pass CRM data into your Web Analytics tool.

Adobe Analytics, Technical/Implementation

Omniture Usage Stats

(Estimated Time to Read this Post = 1 Minute)

Every once in a while, I get questions about how often people are accessing Omniture SiteCatalyst. In case this happens to you as well, I thought I'd write a (really) quick post with instructions on how to report on usage. Currently, there isn't that much you can report on (I have requested more here), so this will be one of my shortest posts ever!

Usage Reportlet on Dashboard
The only way I know of to get usage metrics is by adding a usage reportlet to a SiteCatalyst Dashboard. To do this, open a new or existing Dashboard and add a “Usage” element. Through this element you can get information on Users, Reports Viewed, Suites, etc… as shown here:

Unfortunately, you don't get much detail and are basically stuck with one metric, "Views" (which I assume is similar to Page Views). Once there, you can fill out the rest of the dashboard items. In the example below, I am adding a report that will show me the Top 500 users.

Once the reportlet is on the Dashboard, I tend to export it to Excel and do some charts/graphs there. Here is an example of how you can trend how often people are engaging with your SiteCatalyst reports:

Well, that’s it. Short and sweet…