Tag Management, Technical/Implementation

Single-Page Apps: Dream or Nightmare?

A few months ago, I was discussing a new project with a prospective client, and they described what they needed like this: “We have a brand new website and need to re-implement Adobe Analytics. So far we have no data layer, and we have no developer resources in place for this project. Can you help us re-implement Adobe Analytics?” I generally avoid projects just like this – not because I can’t write server-side application code in several languages, but because even if I am going to write code for a project like that, I still need a sharp developer or two to bounce ideas off of, and ask questions to find out where certain files are located, what standards they want me to follow, and other things like that. In an effort to do due diligence, I asked them to follow up with their IT team on a few basics. Which platform was their site built on? Which programming languages would be required?

When they followed up by saying that the site was built on Websphere using ReactJS, I was sure this project was doomed to failure – every recent client I had worked with that was using either of these technologies struggled mightily, and here was a client using both! In addition, while I understand the premise behind using ReactJS and can generally work my way through a ReactJS application, having to do all the heavy lifting myself was a terrifying thought. Still, in the spirit of due diligence, I agreed to discuss this project with some members of their IT team.

On that call, I quickly realized that there had been a disconnect in how the marketing folks on the project had communicated what the IT folks wanted me to know. I learned that a data layer already existed on the site – and it already contained pretty much everything identified in the solution design that needed to be tracked. We still had to identify a way to track a few events on the website (like cart adds), but I felt good enough about the project to take it on.

This project, and a handful of others over the past year, have challenged some strong opinions I’ve held on single page applications (SPAs for short). Here are just a few of those:

  • SPAs have just as many user experience challenges as the page-based applications they are designed to replace.
  • SPAs present a major measurement challenge for traditional analytics tools like Adobe or Google Analytics.
  • Most companies move to an SPA-based website because they look and sound cool – they’re just the latest “shiny object” that executives decide they have to have.

While I still hold each of these opinions to some degree, the past few months have given me a much more open mind about single-page applications and frameworks like React or Angular. Measurement of SPAs is definitely a challenge – but it’s not an insurmountable one. If your company is thinking about moving to a single-page application, you need to understand that – just like the site itself is going to be fundamentally different than what you’re used to – the way you measure it will be as well. I’d like to offer a few things you’ll want to strongly consider as you decide how to track your new SPA.

A New Data Architecture

In many ways, SPAs are much better equipped to support a data layer than the old, Frankenstein-ish website you’re moving away from. Many companies I know have such old websites that they pre-date their adoption of a tag management system. Think about that – a tool you probably purchased at least six years ago still isn’t as old as your website itself! So when you implemented your TMS, you probably bolted on your data layer at the same time, grabbing data wherever you could find it.

Migrating to an SPA – even for companies that do this one page at a time – requires a company to fundamentally rethink its approach to data. It’s no longer available in the same ways – which is a good thing. Rather than building the data layer one template at a time like in the past, an SPA typically accesses the data it needs to build a page through a series of APIs that are exposed by back-end development teams. For example, data related to the authenticated user is probably retrieved as the page loads from a service connected to your CRM; data relevant to the contents of a customer’s shopping cart may be accessed through an API integrated with your e-commerce platform; and the content for your pages is probably accessed through an integration with your website CMS. But unlike when you implemented your data layer the first time – when your website already had all that data the way it needed it and in the right locations on the page – your development team has to rethink and rebuild all of that data architecture. You both need the data this time around – which should make collaboration much easier and help you avoid claims that they just can’t get you the data you need.
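For illustration only, a data layer assembled from those services might look something like this – every property name here is hypothetical, not a standard:

```javascript
// A hypothetical shape for an SPA data layer assembled from several back-end
// services; all property names are illustrative, not any particular standard.
var digitalData = {
  user: { id: "12345", loyaltyTier: "gold" },        // from the CRM service
  cart: { itemCount: 2, total: 84.98 },              // from the e-commerce API
  page: { name: "product-detail", template: "pdp" }  // from the website CMS
};
```

The point isn't the exact structure – it's that each slice comes from a different API, so the analytics team and the development team are now consumers of the same data.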

Timing Challenges for Data Availability

As part of this new approach to data, SPAs typically also introduce a shift in the way they make this data accessible to the browser. The services and APIs I mentioned in the previous section are almost always asynchronous – which introduces a new challenge for measurement teams implementing tracking on SPAs.

On a traditional website, the page is generated on the server, and as this happens, data is pulled into the page from appropriate systems. That data is already part of the page when it is returned to the browser. On an SPA, the browser gets an almost “empty” page with a bunch of instructions on where to get the relevant data for the page; then, as the user navigates, rather than reloading a new page, it just gets a smaller set of instructions for how to update the relevant parts of the page to simulate the effect of navigation.

This “set of instructions” is the API calls I mentioned earlier – the browser is pulling in user data from one service, cart data from another, and product/content data from yet another. As data is made available, it is inserted into the page in the appropriate spot. This can have a positive impact on user experience, because less-relevant data can be added as it comes back, rather than holding up the loading of the entire page. But let’s just say it presents quite a challenge to analytics developers. This is because most tag management systems were built and implemented under the assumption that you’d want to immediately track every page as it loads, and that every new page would actually be a new page. SPAs don’t work like that – if you track an SPA on the page load, or even the DOM ready event, you’re probably going to track it before a significant amount of data is available. So you have to wait to track the initial page load until all the data is ready – and then you have to track subsequent page refreshes of the SPA as if a new page had actually loaded.

You may have experienced this problem before with a traditional website – many companies experiment with the idea of an SPA by trying it out on a smaller part of their website, like user authentication or checkout. Or you’ve maybe seen it with certain third-party tools like your recommendation engine – which, while not really an SPA, have similar timing issues because they feed content onto the page asynchronously. The good news is that most companies that go all-in on SPAs do so all at once, rather than trying to migrate single sections over a longer period of time. They undertake a larger replatforming effort, which probably makes it easier to solve for most of these issues.

Figuring out this timing is one of the most important hurdles you’ll need to clear as you implement tracking on an SPA – and it’s different for every site. The good news is that, as long as you’re using one of the major tag management systems – or planning to migrate from Adobe DTM to Launch as part of your project – the timing is the hard part. Every major TMS has a built-in solution that allows you to fire any tag on any event that occurs on the page, so your web developers just need to notify your analytics developers when the page is truly “ready.” (Again, if you’re still using Adobe DTM, I can’t emphasize strongly enough that you should switch to Launch if you’re building an SPA. DTM has a few notable “features” that pose major problems for SPAs.)
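As a sketch of the “wait until the data is ready” pattern – assuming the SPA exposes each data source as a Promise; the source shapes and the `track` callback are assumptions, not any particular TMS API:

```javascript
// A sketch of "hold the page view until all data is ready". Each entry in
// `sources` is a Promise resolving to one slice of the data layer (user, cart,
// content, etc.); `track` is whatever notifies your TMS that the page is ready.
function trackWhenReady(sources, track) {
  return Promise.all(sources).then(function (parts) {
    // Merge the resolved slices into a single data layer object.
    var dataLayer = parts.reduce(function (acc, part) {
      return Object.assign(acc, part);
    }, {});
    track(dataLayer); // e.g. dispatch the event your TMS rule listens for
    return dataLayer;
  });
}
```

In practice, `track` might dispatch a custom DOM event or call a direct call rule – whatever convention your web and analytics developers agree on for “the page is truly ready.”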

A New Way of Tracking Events

Another major shift between traditional websites and SPAs is in how on-page events are most commonly tracked. It’s likely that when you first implemented a tag management system, you used a combination of CSS selectors and custom JavaScript deployed in the TMS, along with events you had your web developers tag that would “trigger” the TMS to do something. Because early sales teams for the major TMS companies used a pitch along the lines of “Do everything without IT!”, many companies tried to implement as much tracking as they could using hacks and one-offs in the TMS. The net effect was often just to move all that ugly, one-off tracking JavaScript from your website code into your TMS – without making the actual tracking any cleaner or more elegant.

The good news is that SPAs will force you to clean up your act – because many of the traditional ways of tracking fall down. Because an SPA is constantly updating the DOM without loading a new page, you can’t just add a bunch of event listeners that bind when the page loads (or on DOM ready). You’d need to turn off all your listeners on each page refresh and turn on a bunch of new ones, which can be tedious and prone to error. Another option that will likely not work in every case is to target very broad events (like a body “click”) and then within those handlers just see which element first triggered the event. This approach could also potentially have a negative impact on the user’s experience.
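To make the broad-delegation option concrete, here is a minimal sketch of a single delegated handler bound once to a stable ancestor; the `matches` predicate and the bare-bones node shape (`{ parentNode }`) are simplifying assumptions:

```javascript
// A simplified sketch of event delegation: one handler walks up from the
// event target looking for a trackable element, so nothing needs to be
// re-bound when the SPA re-renders the view.
function makeDelegatedHandler(matches, onMatch) {
  return function (event) {
    var el = event.target;
    while (el) {
      if (matches(el)) {
        onMatch(el, event);
        return;
      }
      el = el.parentNode; // keep climbing toward the ancestor the handler is bound to
    }
  };
}
```

In a browser you would bind this once – something like `document.body.addEventListener("click", handler)` – with `matches` checking a stable hook such as a data attribute rather than a fragile CSS class.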

Instead, many teams developing an SPA also develop a new model for listening and responding to events that, just like the data layer, can be leveraged by analytics teams as well.

The company I mentioned at the beginning of this post had an entire catalog of events they already needed to listen for to make the SPA work – for example, they needed to listen for each cart add event so that they could send data about that item to their e-commerce system. The e-commerce system would then respond with an updated version of all the data known about a future order. So they built an API for this – and then, the analytics team was able to use it as well. Without any additional development, we were able to track nearly every key interaction on the website. This was all because they had taken the time to think about how events and interactions should work on the website, and they built something extensible beyond just its core purpose.
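A shared event service like that can be sketched in a few lines – this is a generic publish/subscribe bus, not their actual API, and the “cartAdd” event name is an assumption:

```javascript
// A generic publish/subscribe sketch of a shared event service: the SPA emits
// each interaction once, and any number of consumers (e-commerce, analytics)
// subscribe independently without extra development work.
function createEventBus() {
  var handlers = {};
  return {
    on: function (name, fn) {
      (handlers[name] = handlers[name] || []).push(fn);
    },
    emit: function (name, payload) {
      (handlers[name] || []).forEach(function (fn) { fn(payload); });
    }
  };
}
```

The design point is that the team that builds the bus for core functionality pays the cost once; analytics tracking then becomes one more subscriber instead of another round of custom development.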

This is the kind of thing that a company would almost never do with an old website – it’s a large effort to build this type of event service, and it has to be done inside an old, messy codebase. But when you build an SPA, you have to do it anyway – so you might as well add a little bit more work up front to save you a ton of time later on. Developers figure these kinds of things out as they go – they learn tricks that will save time in the future. SPAs can offer a chance to put some of these tricks into action.

Conclusion

There are many other important things to consider when building a single-page application, and it’s a major undertaking that can take longer than a company plans for. But while I still feel that it’s more difficult to implement analytics on an SPA than any other type of web-based application, it doesn’t have to be the nightmare that many companies encounter. Just remember to make sure your development team is building all this new functionality in a way that everyone can benefit from:

  • While they’re making sure all the data necessary for each view (page) of the website is available, make sure they provide hooks so that other teams (like analytics) can access that data.
  • Consider the impact on your website of all of that data showing up at different times.
  • Develop an event model that makes it easy to track key interactions on the site without relying on fragile CSS selectors and DOM hacks.

A few weeks ago at our ACCELERATE conference, I led a roundtable for the more technically minded attendees. The #1 challenge companies were dealing with when it came to analytics implementation was SPAs. But the key is to take advantage of all the opportunities an SPA can offer – you have to realize it gives you the chance to fix all the things that have broken and been patched together over the years. Your SPA developers are going to spend a lot of time getting the core functionality right – and they can do it in a way that can make your job easier, too, if you get out in front of them and push them to think in innovative ways. If you do, you might find yourself wondering why some folks complain so much about tracking single-page apps. But if you don’t, you’ll be right there complaining with everyone else. If you’re working with SPAs, I’d love to hear from you about how you’re solving the challenges they present – or where you’re stuck and need a little help.

Photo Credit: www.gotcredit.com

Industry Analysis, Tag Management, Technical/Implementation

Stop Thinking About Tags, and Start Thinking About Data

Nearly three weeks ago, I attended Tealium’s Digital Velocity conference in San Francisco. I’ve attended this event every year since 2014, and I’ve spent enough time using its Universal Data Hub (the name of the combined UI for AudienceStream, EventStream, and DataAccess, if you get a little confused by the way these products have been marketed – which I do), and attended enough conferences, to know that Tealium considers these products to be a big part of its future and a major part of its product roadmap. But given that the majority of my clients are still heavily focused on tag management and getting the basics under control, I’ve spent far more time in Tealium iQ than any of its other products. So I was a little surprised as I left the conference on the last day by the force with which my key takeaway struck me: tag management as we knew it is dead.

Back in 2016, I wrote about how much the tag management space had changed since Adobe bought Satellite in 2013. It’s been a while since tag management was the sole focus of any of the companies that offer tag management systems. But what struck me at Digital Velocity was that the most successful digital marketing organizations – while considering tag management a prerequisite for their efforts – don’t really use their tools to manage tags at all. I reflected on my own clients, and found that the most successful ones have realized that they’re not managing tags at all – they’re managing data. And that’s why Tealium is in such an advantageous position relative to any of the other companies still selling tag management systems while Google and Adobe give it away for free.

This idea has been kicking around in my head for a while now, and maybe I’m stubborn, but I just couldn’t bring myself to admit it was true. Maybe it’s because I still have clients using Ensighten and Signal – in spite of the fact that neither company seems to have committed many resources to their tag management products lately (they both seem much more heavily invested in identity and privacy these days). Or maybe it’s because I still think of myself as the “tag management guy” at Demystified, and haven’t been able to quite come to grips with how much things have changed. But my experience at Digital Velocity was really the final wake-up call.

What finally dawned on me at Digital Velocity is that Tealium, like many of its early competitors, really doesn’t think of itself as a tag management company anymore, either. They’ve done a much better job of disguising that, though – because they continue to invest heavily in TiQ, and have even added some really great features lately (I’m looking at you, New JavaScript Code Extension). And maybe they haven’t really had to disguise it, either, because of a single decision they made very early on in their history: the decision to emphasize a data layer and tightly couple it with all the core features of their product. In my opinion, that’s the most impactful decision any of the early tag management vendors made on the industry as a whole.

Most tag management vendors initially offered nothing more than code repositories outside of a company’s regular IT processes. They eventually layered on some minimal integration with a company’s “data layer” – but really without ever defining what a data layer was or why it was important. They just allowed you to go in and define data elements, write some code that instructed the TMS on how to access that data, and then – in limited cases – gave you the option of pushing some of that data to your different vendor tags.

On the other hand, Tealium told its customers up front that a good data layer was required to be successful with TiQ. They also clearly defined best practices around how that data layer should be structured if you wanted to tap into the power of their tool. And then they started building hundreds of different integrations (i.e. tags) that took advantage of that data layer. If they had stopped there, they would have been able to offer customers a pretty useful tool that made it easier to deploy and manage JavaScript tags. And that would have made Tealium a pretty similar company to all of its early competitors. Fortunately, they realized they had built something far more powerful than that – the backbone of a potentially very powerful customer data platform (or, as someone referred to Tealium’s tag management tool at DV, a “gateway drug” to its other products).

The most interesting thing that I saw during those 2 days was that there are actual companies for which tag management is only a subset of what they are doing through Tealium. In previous years, Tealium’s own product team has showcased AudienceStream and EventStream. But this year, they had actual customers showing off real-world examples of the way that they have leveraged these products to do some pretty amazing things. Tealium’s customers are doing much more real-time email marketing than you can do through traditional integrations with email service providers. They’re leveraging data collected on a customer’s website to feed integrations with tools like Slack and Twilio to meet customers’ needs in real-time. They’re solving legitimate concerns about the impact all these JavaScript tags have on page-load performance to do more flexible server-side tagging than is possible through most tools. And they’re able to perform real-time personalization across multiple domains and devices. That’s some really powerful stuff – and way more fun to talk about than “tags.” It’s also the kind of thing every company can start thinking about now, even if it’s something you have to ramp up to first.

In conclusion, Tealium isn’t the only company moving in this direction. I know Adobe, Google, and Salesforce all have marketing tools that offer a ton of value to their customers. Segment offers the ability to do server-side integrations with many different marketing tools. But I’ve been doing tag management (either through actual products or my own code) for nearly 10 years, and I’ve been telling customers how important it is to have a solid data layer for almost as long – at Salesforce, we had a data layer before anyone actually called it that, and it was so robust that we used it to power everything we did. So to have the final confirmation that tag management is the past and that customer data is the future was a pretty cool experience for me. It’s exciting to see what Adobe Launch is doing with its extension community and the integration with the newest Adobe mobile SDKs. And there are all kinds of similar opportunities for other vendors in the space. So my advice to marketers is this: if you’re still thinking in terms of tags, or if you still think of all your third-party vendors as “silos,” make the shift to thinking about data and how to use it to drive your digital marketing efforts.

Photo Credit: Jonathan Poh (Flickr)

Adobe Analytics, Tag Management, Technical/Implementation, Testing and Optimization

Adobe Target + Analytics = Better Together

Last week I wrote about an Adobe Launch extension I built to familiarize myself with the extension development process. This extension can be used to integrate Adobe Analytics and Target in the same way that was possible prior to the A4T integration. For the first several years after Omniture acquired Offermatica (and Adobe acquired Omniture), the integration between the two products was rather simple but quite powerful. By using a built-in list variable called s.tnt (which did not count against the three list variables per report suite available to all Adobe customers), Target would pass a list of all activities and experiences in which a visitor was a participant. This enabled reporting in Analytics that would show the performance of each activity, and allow for deep-dive analysis using all the reports available in Analytics (Target offers a powerful but limited set of reports). When Target Standard was released, this integration became more difficult to utilize, because if you choose to use Analytics for Target (A4T) reporting, the plugins required to make it work are invalidated. Luckily, there is a way around this, and I’d like to describe it today.

Changes in Analytics

In order to continue to re-create the old s.tnt integration, you’ll need to use one of your three list variables. Choose the one you want, as well as the delimiter and the expiration (the s.tnt expiration was 2 weeks).

Changes in Target

The changes you need to make in Target are nearly as simple. Log into Target, go to “Setup” in the top menu and then click “Response Tokens” in the left menu. You’ll see a list of tokens, or data elements that exist within Target, that can be exposed on the page. Make sure that activity.id, experience.id, activity.name, and experience.name are all toggled on in the “Status” column. That’s it!

Changes in Your TMS

What we did in Analytics and Target made an integration possible – we now have a list variable ready to store Target experience data, and Target will now expose that data on every mbox call. Now, we need to connect the two tools and get data from Target to Analytics.

Because Target is synchronous, the first block of code we need to execute must also run synchronously – this might cause problems for you if you’re using Signal or GTM, as there aren’t any great options for synchronous loading with those tools. But you could do this in any of the following ways:

  • Use the “All Pages – Blocking (Synchronous)” condition in Ensighten
  • Put the code into the utag.sync.js template in Tealium
  • Use a “Top of Page” (DTM) or “Library Loaded” rule (Launch)

The code we need to add synchronously attaches an event listener that will respond any time Target returns an mbox response. The response tokens are inside this response, so we listen for the mbox response and then write that data somewhere it can be accessed by other tags. Here’s the code:

	if (window.adobe && adobe.target) {
		document.addEventListener(adobe.target.event.REQUEST_SUCCEEDED, function(e) {
			if (e.detail.responseTokens) {
				var tokens = e.detail.responseTokens;
				// keep a running list across mbox responses so we don't
				// re-track the same activity on the current page
				window.targetExperiences = window.targetExperiences || [];
				for (var i = 0; i < tokens.length; i++) {
					var inList = false;
					for (var j = 0; j < targetExperiences.length; j++) {
						if (targetExperiences[j].activityId == tokens[i]['activity.id']) {
							inList = true;
							break;
						}
					}

					if (!inList) {
						targetExperiences.push({
							activityId: tokens[i]['activity.id'],
							activityName: tokens[i]['activity.name'],
							experienceId: tokens[i]['experience.id'],
							experienceName: tokens[i]['experience.name']
						});
					}
				}
			}

			if (window.targetLoaded) {
				// TODO: respond with an event tracking call
			} else {
				// TODO: respond with a page tracking call
			}

			// flag that the first mbox response has been handled, so
			// subsequent responses fire event calls instead of pageviews
			window.targetLoaded = true;
		});
	}

	// set failsafe in case Target doesn't load
	setTimeout(function() {
		if (!window.targetLoaded) {
			window.targetLoaded = true;
			// TODO: respond with a page tracking call
		}
	}, 5000);

So what does this code do? It starts by adding an event listener that waits for Target to send out an mbox request and get a response back. Because of what we did earlier, that response will now carry at least a few tokens. If any of those tokens indicate the visitor has been placed within an activity, it checks to make sure we haven’t already tracked that activity on the current page (to avoid inflating instances). It then adds activity and experience IDs and names to a global object called “targetExperiences,” though you could push it to your data layer or anywhere else you want. We also set a flag called “targetLoaded” to true that allows us to use logic to fire either a page tracking call or an event tracking call, and avoid inflating page view counts on the page. We also have a failsafe in place, so that if for some reason Target does not load, we can initiate some error handling and avoid delaying tracking.
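As a rough sketch of what those TODOs might feed, here’s one way to flatten the targetExperiences array into a delimited string for your list variable. The `s.list1` name and the “:” and “,” delimiters are assumptions – use whatever you configured in the Admin Console:

```javascript
// Hypothetical helper: turn the targetExperiences array built by the listener
// above into a delimited string suitable for an Analytics list variable.
function buildTargetList(experiences, pairDelim, itemDelim) {
  return experiences.map(function (exp) {
    // pair each activity with the experience the visitor saw
    return exp.activityId + pairDelim + exp.experienceId;
  }).join(itemDelim);
}
// e.g. s.list1 = buildTargetList(window.targetExperiences || [], ":", ",");
```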

You’ll notice the word “TODO” in that code snippet a few times, because what you do with this event is really up to you. This is the point where things get a little tricky. Target is synchronous, but the events it registers are not, so there is no guarantee that this event will be triggered before the DOM ready event – which is when your TMS likely starts firing most tags. So you have to decide how you want to handle the event. Here are some options:

  • My code above is written in a way that allows you to track a pageview on the very first mbox load, and a custom link/event tracking call on all subsequent mbox updates. You could do this with a utag.view and utag.link call (Tealium), or trigger a Bootstrapper event with Ensighten, or a direct call rule with DTM. If you do this, you’ll need to make sure you configure the TMS to not fire the Adobe server call on DOM ready (if you’re using DTM, this is a huge pain; luckily, it’s much easier with Launch), or you’ll double-count every page.
  • You could just configure the TMS to call a custom link call every time, which will probably increase your server calls dramatically. It may also make it difficult to analyze experiences that begin on page load.

What my Launch extension does is fire one direct call rule on the first mbox call, and a different call for all subsequent mbox calls. You can then configure the Adobe Analytics tag to fire an s.t() call (pageview) for that initial direct call rule, and an s.tl() call for all others. If you’re doing this with Tealium, make sure to configure your implementation to wait for your utag.view() call rather than allowing the automatic one to track on DOM ready. This is the closest behavior to how the original Target-Analytics integration worked.

I’d also recommend not limiting yourself to using response tokens in just this one way. You’ll notice that there are tokens available for geographic data (based on an IP lookup) and many other things. One interesting use case is that geographic data could be extremely useful in achieving GDPR compliance. While the old integration was simple and straightforward, and this new approach is a little more cumbersome, it’s far more powerful and gives you many more options. I’d love to hear what new ways you find to take advantage of response tokens in Adobe Target!

Photo Credit: M Liao (Flickr)

Adobe Analytics, Tag Management, Technical/Implementation

My First Crack at Adobe Launch Extension Development

Over the past few months, I’ve been spending more and more time in Adobe Launch. So far, I’m liking what I see – though I’m hoping the publish process gets ironed out a bit in the coming months. But that’s not the focus of this post; rather, I wanted to describe my experience working with extensions in Launch. I recently authored my first extension – which offers a few very useful ways to integrate Adobe Target with other tools and extensions in Launch. You can find out more about it here, or ping me with any questions if you decide to add the extension to your Launch configuration. Next week I’ll try to write more about how you might do something similar using any of the other major tag management systems. But for now, I’m more interested in how extension development works, and I’d like to share some of the things I learned along the way.

Extension Development is New (and Evolving) Territory for Adobe

The idea that Adobe has so freely opened up its platform to allow developers to share their own code across Adobe’s vast network of customers is admittedly new to me. After all, I can remember the days when Omniture/Adobe didn’t even want to open up its platform to a single customer, much less all of them. Remember the days of usage tokens for its APIs? Or having to pay for a consulting engagement just to get the code to use an advanced plugin like Channel Manager? So the idea that Adobe has opened things up to the point where I can write my own code within Launch, programmatically send it to Adobe, and have it then available for any Adobe customer to use – that’s pretty amazing. And for being so new, the process is actually pretty smooth.

What Works Well

Adobe has put together a pretty solid documentation section for extension developers. All the major topics are covered, and the Getting Started guide should help you get through the tricky parts of your first extension like authentication, access tokens, and uploading your extension package to the integration environment. One thing to note is that just about everything you define in your extension is a “type” of that thing, not the actual thing. For example, my extension exposes data from Adobe Target for use by other extensions. But I didn’t immediately realize that my data element definitions didn’t actually define new data elements for use in Launch; they only created a new “type” of data element in the UI that can then be used to create a data element. The same is true for custom events and actions. That makes sense now, but it took some getting used to.

During the time I spent developing my extension, I also found the Launch product team is working continuously to improve the process for us. When I started, the documentation offered a somewhat clunky process to retrieve an access token, zip my extension, and use a Postman collection to upload it. By the time I was finished, Adobe had released a Node package (npm) to basically do all the hard work. I also found the Launch product team to be incredibly helpful – they responded almost immediately to my questions on their Slack group. They definitely seem eager to build out a community as quickly as possible.

I also found the integration environment to be very helpful in testing out my extension. It’s almost identical to the production environment of Launch; the main difference is that it’s full of extensions in development by people just like me. So you can see what others are working on, and you can get immediate feedback on whether your extension works the way it should. There is even a fair amount of error logging available if you break something – though hopefully this will be expanded in the coming months.

What Could Work Better

Once I finished my extension, I noticed that there isn’t really a natural spot to document how your extension should work. I opted to put mine into the main extension view, even though no other configuration was needed that would require such a view. While I was working on my extension, it was suggested that I put instructions in my Exchange listing, which doesn’t seem like a very natural place for them, either.

I also hope that, over time, Adobe offers an easier way to style your views to match theirs. For example, if your extension needs to know the name of a data element it should populate, you need a form field to collect this input. Making that form look the same as everything else in Launch would be ideal. I pulled this off by scraping the HTML and JavaScript from one of Adobe’s own extensions and re-formatting it. But a “style toolkit” would be a nice addition to keep the user experience the same.

Lastly, while each of the sections in the Getting Started guide had examples, some of the more advanced topics could use additional exploration. For example, it took me a few tries to decide whether my extension would work better with a custom event type, or with just some custom code that triggered a direct call rule. And figuring out how to integrate with other extensions – how to access other extensions’ objects and code – wasn’t exactly easy; I eventually found a workaround that made the integration unnecessary, so I still have some unanswered questions.

Perhaps the hardest part of the whole process was getting my Exchange listing approved. The Exchange covers a lot of integrations beyond just Adobe Launch, some of which are likely far more complex than mine. A lot of the required images, screenshots, and details seemed like overkill – so a tiered approach to listings would be great, too.

What I’d Like to See Next

Extension development is still in its infancy, but one thing I hope is on the roadmap is the ability to customize an extension to work the way you need it to. A client I recently migrated used both Facebook and Pinterest, but the existing extensions didn’t work for their tag implementation – there were events and data they needed to capture that the extensions didn’t support. I hope that in a future iteration, I’ll be able to “check out” an extension from the library, download the package, make it work the way I need, and either create my own version of the extension or contribute an update to someone else’s extension that the whole community can benefit from. The inability to customize tag templates has plagued every paid tag management solution except Tealium (which has supported it from the beginning) for years – in my opinion, it’s what turns tag management from a tool used primarily to deploy custom JavaScript into a powerful digital marketing toolbelt. It’s not something I’d expect so early in the game, but I hope it will be added soon.

In conclusion, my hat goes off to the Launch development team; they’ve come up with a really great way to build a collaborative community that pushes Launch forward. No initial release will ever be perfect, but there’s a lot to work with and a lot of opportunity for all of us in the future to shape the direction Launch takes and have some influence in how it’s adopted. And that’s an exciting place to be.

Photo Credit: Rod Herrea (Flickr)

Adobe Analytics, Featured, Tag Management, Technical/Implementation

A Coder’s Paradise: Notes from the Tech Track at Adobe Summit 2018

Last week I attended my 11th Adobe Summit – a number that seems hard to believe. At my first Summit back in 2008, the Great Recession was just starting, but companies were already cutting back on expenses like conferences – just as Omniture moved Summit from the Grand America to the Salt Palace (they moved it back in 2009 for a few more years). Now, the event has outgrown Salt Lake City – with over 13,000 attendees last week converging on Las Vegas for an event with a much larger footprint than just the digital analytics industry.

With the sheer size of the event and the wide variety of products now included in Adobe’s Marketing and Experience Clouds, it can be difficult to find the right sessions – but I managed to attend some great labs, and wanted to share some of what I learned. I’ll get to Adobe Launch, which was again under the spotlight – only this year, it’s actually available for customers to use. But I’m going to start with some of the other things that impressed me throughout the week. There’s a technical bent to all of this – so if you’re looking for takeaways more suited for analysts, I’m sure some of my fellow partners at Demystified (as well as lots of others out there) will have thoughts to share. But I’m a developer at heart, so that’s what I’ll be emphasizing.

Adobe Target Standard

Because Brian Hawkins is such an optimization wizard, I don’t spend as much time with Target as I used to, and this was my first chance to do much with Target Standard besides deploy the at.js library and the global mbox. But I attended a lab that worked through deploying it via Launch, then setting up some targeting on a single-page ReactJS application. My main takeaway is that Target Standard is far better suited to running an optimization program on a single-page application than Classic ever was. I used to have to rely on nested mboxes and all sorts of DOM trickery to keep default content hidden until the right moment. But with Launch, you can easily listen for page updates and then trigger mboxes accordingly.

Target Standard and Launch also make it easier to handle a common issue with frameworks like ReactJS, where the data layer is populated asynchronously with data from API calls – you can run a campaign on initial page load even if it takes some time for all the relevant targeting data to become available.
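To make that concrete, here is a minimal sketch of the kind of wait-for-the-data-layer helper this involves. The digitalData shape and the _satellite.track call in the usage comment are assumptions for illustration, not a prescribed Adobe pattern:

```javascript
// Polls an asynchronously populated data layer value and resolves once
// it exists, so a targeting call can wait for API-driven data on an SPA.
function waitForDataLayer(getValue, timeoutMs) {
  return new Promise(function (resolve, reject) {
    var started = Date.now();
    (function poll() {
      var value = getValue();
      if (value !== undefined && value !== null) {
        resolve(value);
      } else if (Date.now() - started > timeoutMs) {
        reject(new Error('data layer value never arrived'));
      } else {
        setTimeout(poll, 50); // check again shortly
      }
    })();
  });
}

// Hypothetical usage on an SPA route change:
// waitForDataLayer(function () { return window.digitalData && window.digitalData.page; }, 3000)
//   .then(function (page) { _satellite.track('spa-view', { page: page }); });
```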

Adobe Analytics APIs

The initial version of the Omniture API was perhaps the most challenging API I’ve ever used. It supported SOAP only, and from authentication to query, you had to configure everything absolutely perfectly for it to work. And you had to do it with no API Explorer and virtually no documentation, all while paying very close attention to the number of requests you were making, since you only had 2,000 tokens per month and didn’t want to run out or get charged for more (I’m not aware this ever happened, but the threat at least felt real!).

Adobe adding REST API support a few years later was a career-changing event for me, and there have been several enhancements and improvements since, like the addition of OAuth authentication. But what I saw last week was pretty impressive nonetheless. The approach to querying data has changed significantly in the following ways:

  • The next iteration of Adobe’s APIs will offer a much more REST-ful approach to interacting with the platform.
  • Polling for completed reports is no longer required. It will likely take several more requests to get to the most complicated reports, but each individual request will run much faster.
  • Because Analytics Workspace is built on top of a non-public version of the API, you truly will be able to access any report you can find in the UI.
  • The request format for each report has been simplified, with non-essential parameters either removed or at least made optional.
  • The architecture of a report request is fundamentally different in some ways – especially in the way that breakdowns between reports work.
  • The ability to search or filter on reports is far more robust than in earlier versions of the API.
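To give a feel for the simplified request format, here is a rough sketch of what a report request might look like under the new approach. The endpoint and field names here are my own guesses for illustration, not the finalized contract – check Adobe’s API documentation once it’s released:

```javascript
// Builds a hypothetical simplified report request body: one dimension,
// a list of metrics, and a date range filter. Field names are assumed.
function buildReportRequest(rsid, dimension, metricIds, dateRange) {
  return {
    rsid: rsid,
    dimension: dimension,
    globalFilters: [{ type: 'dateRange', dateRange: dateRange }],
    metricContainer: {
      metrics: metricIds.map(function (id, index) {
        // Each metric becomes a column in the returned report
        return { id: id, columnId: String(index) };
      })
    }
  };
}

// A single request replaces the old queue-and-poll flow, e.g.:
// fetch('https://analytics.example/api/reports', {
//   method: 'POST',
//   headers: { Authorization: 'Bearer <token>', 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildReportRequest('myrsid', 'variables/page',
//     ['metrics/pageviews', 'metrics/visits'], '2018-03-01/2018-03-31'))
// });
```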

Launch by Adobe

While Launch has been available for a few months, I’ve found it more challenging than I expected to talk my clients into migrating from DTM to Launch. The “lottery” system made some of my clients wonder if Launch was really ready for prime-time, while the inability to quickly migrate an existing DTM implementation over to Launch has been prohibitive to others. But whatever the case may be, I’ve only started spending a significant amount of time in Launch in the last month or so. For customers who were able to attend labs or demos on Launch at Summit, I suspect that will quickly change – because the feature set is just so much better than with DTM.

How Launch Differs from DTM

My biggest complaint about DTM has always been that it hasn’t matched the rest of the Marketing Cloud in terms of enterprise-class features. From the limited number of integrations available, to the rigid staging/production publishing structure, I’ve repeatedly run into issues where it was hard to make DTM work the way I needed for some of my larger clients. Along the way, Adobe has repeatedly said they understood these limitations and were working to address them. And Launch does that – it seems fairly obvious now that the reason DTM lagged behind features other systems offered is that Adobe has been putting far more resources into Launch over the past few years. Launch opens up the platform in some really unique ways that DTM never has:

  • You can set up as many environments as you want.
  • Minification of JavaScript files is now standard (it’s still hard to believe this wasn’t the case with DTM).
  • Anyone can write extensions to enhance the functionality and features available.
  • The user(s) in charge of Launch administration for your company have much more granular control over what is eventually pushed to your production website.
  • The Launch platform will eventually offer open APIs to allow you to customize your company’s Launch experience in virtually any way you need.

With Great Power Comes Great Responsibility

Launch offers a pretty amazing amount of control, which makes for some major considerations for each company that implements it. For example, the publishing workflow is flexible to the point of being a bit confusing. Because it’s set up almost like a version control system such as Git, any Launch user can set up his or her own development environment and configure it in any number of ways. Each user then has to choose which version of every single asset to include in a library, promote to staging/production, etc. So you have to be a lot more careful than when you’re publishing with DTM.

I would hope we’ve reached a point in tag management where companies no longer expect a marketer to be able to own tagging and the TMS – it was the sales pitch made from the beginning, but the truth is that it has never been that easy. Even Tealium, which (in my opinion) has the most user-friendly interface and the most marketer-friendly features, needs at least one good developer to tap into the full power of the tool. Launch will be no different; as the extension library grows and more integrations are offered, marketers will probably feel more comfortable making changes than they were with DTM – but this will likely be the exception and not the rule.

Just One Complaint

If there is one thing that will slow migration from DTM to Launch, it is the difficulty customers will face in migrating. One of the promises Adobe made about Launch at Summit in 2017 was that you would be able to migrate from DTM to Launch without updating the embed code on your site. This is technically true – you can configure Launch to publish your production environment to an old DTM production publishing target. But this can only be done for production, not for any other environment – which means you can migrate without updating your production embed code, but you will need to update all your non-production embed codes. Alternatively, you can use a tool like DTM Switch or Charles Proxy, and that will work fine for your initial testing. But most enterprise companies want to accumulate a few weeks of test data for all the traffic on at least one QA site before they are comfortable deploying changes to production.

It’s important to point out that, even if you do choose to migrate by publishing your Launch configuration to your old DTM production publishing target, you still have to migrate everything currently in DTM over to Launch – manually. Adobe has said that later this year it will release a true migration tool that will allow customers to pull rules, data elements, and tags from a DTM property into a new Launch property without causing errors. Short of such a tool, some customers will have to invest quite a bit to migrate everything they currently have in DTM. Until then, my recommendation is to figure out the best migration approach for your company:

  1. If you have at least one rockstar analytics developer with some bandwidth, and a manageable set of rules and tags in DTM, I’d start playing around with migration in one of your development environments, and put together an actual migration plan.
  2. If you don’t have the resources yet, I’d probably wait for the migration tool to be available later in the year – but still start experimenting with Launch on smaller sites or as more resources become available.

Whichever situation you’re in, I’d start thinking now (if you haven’t already) about how you’re going to get your DTM properties migrated to Launch. For some of my clients that have let their DTM implementations get pretty unwieldy, the move even offers a fresh start and a chance to upgrade to Adobe’s latest technology. Launch is superior to DTM in nearly every way, and it is going to get nearly all of Adobe’s development resources and roadmap attention from here on out. You don’t need to start tomorrow – and if you need to wait for a migration tool, you’ll be fine. But if your long-term plan is to stay with DTM, you’re likely going to limit your ability to tap into additional features, integrations, and enhancements Adobe makes across its Marketing and Experience Cloud products.

Conclusion

We’ve come a long way from the first Summits I attended, with only a few labs and very little emphasis on the technology itself. Whether it was new APIs, new product feature announcements, or the hands-on labs, there was a wealth of great information shared at Summit 2018 for developers and implementation-minded folks like me – and hopefully you’re as excited as I am to get your hands on some of these great new products and features.

Photo Credit: Roberto Faccenda (Flickr)

Adobe Analytics, Tag Management, Technical/Implementation

Star of the Show: Adobe Announces Launch at Summit 2017

If you attended the Adobe Summit last week and are anything like me, a second year in Las Vegas did nothing to cure the longing I felt last year for more of a focus on digital analytics rather than experience (I still really missed the ski day, too). But seeing how tag management seemed to capture everyone’s attention with the announcement of Adobe Launch, I had to write a blog post anyway. I want to focus on 3 things: what Launch is (or will be), what it means for current users of DTM, and what it means for the rest of the tag management space.

Based on what I saw at Summit, Launch may be the new catchy name, but it looks like the new product may finally be worthy of the name given to the old one (Dynamic Tag Management, or DTM). I’ve never really thought there was much dynamic about DTM – if you ask me, the “D” should have stood for “Developer,” because you can’t really manage any tags with DTM unless you have a pretty sharp developer. I’ve used DTM for years, and it has been a perfectly adequate tool for what I needed. But I’ve always thought more about what it didn’t do than what it did: it didn’t build on the innovative UI of its Satellite forerunner (the DTM interface was a notable step backwards from Satellite); it didn’t make it easier to deploy any tags that weren’t sold by Adobe (especially after Google released enhanced e-commerce), and it didn’t lead to the type of industry innovation I hoped it would when Adobe acquired Satellite in 2013 (if anything, the fact that the biggest name in the industry was giving it away for free really stifled innovation at some – but not all – of its paid competitors). I always felt it was odd that Adobe, as the leading provider of enterprise-class digital analytics, offered a tag management system that seemed so unsuited to the enterprise. I know this assessment sounds harsh – but I wouldn’t write it here if I hadn’t heard similar descriptions of DTM from Adobe’s own product managers while they were showing off Launch last week. They knew they could do tag management better – and it looks like they just might have done it.

How Will Launch Be Different?

How about, “In every way except that they both allow you to deploy third-party tags to your website.” Everything else seems different – and in a good way. Here are the highlights:

  • Launch is 100% API driven: Unlike most software tools, which get built, and then the API is added later, Adobe decided what they wanted Launch to do; then they built the API; and then they built the UI on top of that. So if you don’t like the UI, you can write your own. If you don’t like the workflow, you can write your own. You can customize it any way you want, or write your own scripts to make commonly repeated tasks much faster. That’s a really slick idea.
  • Launch will have a community behind it: Adobe envisions a world where vendors write their own tag integrations (called “extensions”) that customers can then plug into their own Launch implementations. Even if vendors don’t jump at the chance to write their own extensions, I can at least see a world where agencies and implementation specialists do it for them, eager to templatize the work they do every day. I’ve already got a list of extensions I can’t wait to write!
  • Launch will let you “extend” anything: Most tag management solutions offer integrations but not the ability to customize them. If the pre-built integration doesn’t work for you, you get to write your own. That often means taking something simple – like which products a customer purchased from you – and rewriting the same code dozens of times to spit it out in each vendor’s preferred format. But Launch will give you the ability to use sharable extensions that do this for you. If you’ve used Tealium, it means something similar to the e-commerce extension will be possible, which is probably my favorite usability/extensibility feature any TMS offers today.
  • Launch will fix DTM’s environment and workflow limitations: Among my clients, one of the most common complaints about DTM is that you get 2 environments – staging and production. If your IT process includes more, well, that’s too bad. But Launch will allow you to create unlimited environments, just like Ensighten and Tealium do today. And it will have improved workflow built in – so that multiple users can work concurrently, with great care built into the tool to make sure they don’t step on each others’ toes and cause problems.
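As a concrete illustration of that “extend anything” point – the pain of rewriting the same purchase data into each vendor’s preferred format – here is a hedged sketch. The data layer shape and output formats are assumptions for illustration, not any vendor’s exact specification:

```javascript
// Without sharable extensions, the same cart data gets hand-rewritten
// into every vendor's format. Two illustrative converters:

// Adobe Analytics-style s.products string: ";name;quantity;total"
// (leading field is the product category, left empty here)
function toAdobeProducts(items) {
  return items.map(function (p) {
    return ';' + p.sku + ';' + p.quantity + ';' + (p.quantity * p.price).toFixed(2);
  }).join(',');
}

// Many other vendors want an array of {id, quantity, price} objects instead
function toGenericEcommerce(items) {
  return items.map(function (p) {
    return { id: p.sku, quantity: p.quantity, price: p.price };
  });
}

// toAdobeProducts([{ sku: 'ABC123', quantity: 2, price: 9.99 }])
//   → ';ABC123;2;19.98'
```

A sharable e-commerce extension would let one mapping like this be written once and reused by everyone.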

What Does Launch Mean for DTM Customers?

If you’re a current DTM customer, your first thought about Launch is probably, “Wow, this is great! I can’t wait to use it!” Your second thought is more likely to be, “Wait. I’ve already implemented DTM, and now it’s totally changed. It will be a huge pain to switch now.”

The good news is that, so far, Adobe is saying that they don’t anticipate that companies will need to make any major changes when switching from DTM to Launch (you may need to update the base tag on each page if you plan to take advantage of the new environments feature). They are also working on a migration process that will account for custom JavaScript code you have already written. It may make for a bit of initial pain in migrating custom scripts over, but it should be a pretty smooth process that won’t leave you with a ton of JavaScript errors when you do it. Adobe has also communicated for over a year which parts of the core DTM library will continue to work in the future, and which will not. So you can get ready for Launch by making sure all your custom JavaScript is in compliance with what will be supported in the future. And the benefits over the current DTM product are so obvious that it should be well worth a little bit of up-front pain for all the advantages you’ll get from switching (though if you decide you want to stick with DTM, Adobe plans to continue supporting it).

So if you have decided that Launch beats DTM and you want to switch, the next question is, “When?” And the answer to that is…”Soon.” Adobe hasn’t provided an official launch date, and product managers said repeatedly that they won’t release Launch until it’s world-class. That should actually be welcome news – because making this change will be challenging enough without having to worry about whether Adobe is going to get it right the first time.

What Does Launch Mean for Tag Management?

I think this is really the key question – how will Launch impact the tag management space? Because, while Adobe has impressively used DTM as a deployment and activation tool on an awful lot of its customers’ websites, I still have just as many clients that are happily using Ensighten, GTM, Signal, or Tealium. And I hope they continue to do so – because competition is good for everyone. There is no doubt that Ensighten’s initial product launch pushed its competitors to move faster than they had planned; and that Tealium’s friendly UI has pushed everyone to provide a better user experience (for a while, GTM’s template library even looked suspiciously like Tealium’s). Launch is adding some features that have already existed in other tools, but Adobe is also pushing some creative ideas that will hopefully push the market in new directions.

What I hope does not happen, though, is what happened when Adobe acquired Satellite in 2013 and started giving it away for free. A few of the tools in the space are still remarkably similar in actual features in 2017 to what they were in 2013. The easy availability of Adobe DTM seemed to depress innovation – and if your tag management system hasn’t done much in the past few years but redo its UI and add support for a few new vendors, you know what I mean (and if you do, you’ve probably already started looking at other tools anyway). I fear that Launch is going to strain those vendors even more, and it wouldn’t surprise me at all if Launch spurs a new round of acquisitions. But my sincere hope is that the tools that have continued to innovate – that have risen to the challenge of competing with a free product and developed complementary products, innovative new features, and expanded their ecosystem of partners and integrations – will use Launch as motivation to come up with new ways of fulfilling the promise of tag management.

Last week’s announcement is definitely exciting for the tag management space. While Launch is still a few months away, we’ve already started talking at Analytics Demystified about which extensions our clients using DTM would benefit from – and how we can use extensions to get involved in the community that will surely emerge around Launch. If you’re thinking about migrating from DTM to Launch and would like some help planning for it, please reach out – we’d love to help you through the process!

Photo Credit: NASA Goddard Space Flight Center

Analytics Strategy

"Tag! You’re It!" One More Analyst’s Tag Management Thoughts

Unless you’re living in a cave (and, you’re clearly not, because you’re spending enough time trolling the interwebtubes to wind up on this blog), you’ve seen, heard, and felt the latest wave of news and excitement about tag management. Google announced Google Tag Manager last month, rumors are swirling that Adobe is going to begin providing their tag manager for free to all Sitecatalyst clients, and, most recently, Eric Peterson wrote a post trying to bring it all together. (Well…that was the most recent post when I started writing this, but Rudi Shumpert actually weighed in with a wish list for tag management systems, and his post is worth a read, too!).

My awareness of tag management dates back just over two years — back to Eric’s initial paper on the subject, which was sponsored by Ensighten; this coincided with Josh Manion’s “Tagolution” marketing stunt at the Washington, D.C. eMetrics in the fall of 2010. Since then:

  • I’ve had multiple discussions and demos from multiple enterprise tag management vendors (and even training from one!).
  • I’ve had one client that used Ensighten.
  • I hosted a recent Web Analytics Wednesday in Columbus that was co-sponsored by BrightTag, who also presented there.
  • I’ve chatted with several local peers who either already have or are in the process of implementing tag management.
  • I’ve taken a crack at rolling out Google Tag Manager on this blog (I failed after an hour of fiddling — I broke several things and couldn’t get Google Analytics working with it).

Along the way, of course, I’ve read posts, seen conference presentations, and chatted with a number of sharp analysts. I’ve also, apparently, derided the underlying need for tag management to Evan LaPointe of Satellite. This was over a year ago, and I don’t remember the conversation, but I am a cynic by nature, so I don’t doubt that I was somewhat skeptical.

My fear then, as it is now, is this:

Once again, we’re treating an emerging class of technology as a panacea. We’re letting vendors frame the conversation, and we’re putting our heads in the sand about some important realities.

Now, all told, I’ve had a lot of conversations and very little direct hands-on use of these platforms. On the one hand, since I love to tinker, that’s a symptom of some of what I’ll cover in this post — tag management requires wayyyy more than “just dropping a single line of Javascript” to actually use. On the other hand, I may just be doing that blogging bloviation thing. You decide!

Tag Management Doesn’t Simplify the Underlying Tools

Example No. 1: In the spring of 2011, I got a demo — via Webex — of one of the leading enterprise tag management systems. The sales engineer who was demoing the product repeatedly confused Sitecatalyst and Google Analytics functionality in his demo. It was apparent that he had little familiarity with any web analytics tool, and questions that we asked to try to understand how the product worked got very vague and unsatisfactory answers. If the sales guy couldn’t clearly show and articulate how to accomplish some of our most common tagging challenges, we wondered, how could we believe him that his solution was, well, a solution?

Example No. 2: When it came to our client who used Ensighten, we were set up such that analysts at my agency developed the Sitecatalyst tagging requirements as part of our design and development work for the client, while an analytics consultancy — also under contract with the client — actually implemented what we specified through the TMS. Time and again, new content got pushed to production with incorrect or nonexistent tagging. And, time and again, we were told by the analytics consultancy that we needed to make adjustments to the site in order for them to be able to get the tags to fire as specified. Certainly, this was not all the fault of the tag management platform, as a tool is only as good as the people and processes that use it. But, the experience highlighted that tag management introduces complexity at the same time that it introduces flexibility.

Example No. 3: I had a discussion with a local analyst who works for a large retailer that is using BrightTag. When I asked how she liked it, she said the tool was fine, but what no one had really thought through was that the “owner of the tool” inherently needed to be fairly well-versed in every tool the TMS was managing. In her case, she was an analyst well-versed in Sitecatalyst. She had BrightTag added to her plate of responsibilities. Overnight, she found herself needing to understand her company’s implementations of ForeSee, Google Analytics, Brightcove, and a whole slew of media tracking technologies. In order to deploy a tag or tracking pixel correctly through the TMS, she actually needed to know what tags to deploy and how to deploy them in their own right.

Example No. 4: In my own experience with rolling out Google Tag Manager, I quickly realized how many different tags and tag customizations I’ve got on this blog. My documentation sucks, I admit, and over half of what is deployed is “for tinkering,” so, in that regard, my experience with this site isn’t a great example. On the other hand, sites that have been built up and evolved over years can’t simply “add tag management.” They have to ferret out where all of their tags are and how they’ve been tweaked and customized. Then, for each tagging technology, they need to get them completely un-deployed, and then redeploy them through the tag management system. That’s not a trivial task.

Putting all of these examples together is concerning, because there is a very real risk that, in an industry that is already facing a serious supply shortage, a significant number of very smart, multi-year-experienced analysts will find themselves spending 100% of their time as tool jockeys managing tags rather than analysts focused on solving business problems.

Tag Management Doesn’t Simplify User Experience

Completely separate from the discussions around tag management is the reality of the continuing evolution and fragmentation of the online consumer experience.

I recently completed a tagging spec for a client whose technology will be rolled out onto the sites of a number of their clients. As I navigated through the experience — widget-ized content augmentation on our client’s clients’ sites — I was reminded anew how non-linear and non-“page”-based our online experiences have become. Developing tags that would enable the performance management and analysis we scoped out in our measurement framework, that would be reasonably deployable and maintainable, and that would deliver data interpretable by the casual business user while also having the underlying breadth and depth needed for the analyst, required many hours of thought and work.

And …the platform has pretty minimal designed integration with social media.

And …the platform does not yet have a robust mobile experience.

In other words, in some respects, this was a pretty simple tagging exercise…and it wasn’t simple!

The truth: Most of our customers and potential customers are now multi-device (phones, tablets, laptops, desktops, TV,…) and multi-channel (Facebook, Twitter, Pinterest, apps, web site,…). Tag management only works where “the tag” can be deployed, and it doesn’t inherently provide cross-device and cross-channel tracking. (For the record, neither does branding the next iteration of your platform as “Universal Analytics”…but that’s a topic for another day.)

Tag Management IS Another Failure Point

I’ve developed a reverse Pavlovian response to the phrases “single line of code” and “no IT involvement” — it’s a reverse response because, rather than drooling, I snarl. Tag management vendors are by no means the only ones who laud their ease of deployment. And, there is truth in what they say — a single line of Javascript that includes a file with a bunch of code in it is a pretty clever way to minimize the technical level of effort to deploy a tool.

But, with the power of tag management comes some level of risk. I can deploy and update my Webtrends tags through a TMS. That means there are risks that:

  • I could misdeploy it and not be capturing data at all.
  • I could deploy it in a way that hangs up the loading of the entire site.
  • I could use my tag management system to implement some UI changes rather than waiting for IT’s deployment cycle…and break the UI for certain browsers.
  • I could implement code that will capture user data that violates my company’s privacy policy.

IT departments, as a whole, are risk averse. They are the ones whose cell phones ring in the middle of the night when the site crashes. They’re the ones who wind up in front of the CEO to explain why the site crashed on Black Friday. They are the ones who wind up in the corporate counsel’s office responding to a request to provide details on exactly what systems did what and when in response to lawsuits.

In other words, every time a TMS vendor proudly delivers their, “No IT involvement!” claim…I feel a little ill for two reasons:

  • The friction between IT and Marketing is real, but it needs to be addressed through communication rather than a technology solution.
  • The statement illustrates that the TMS vendor is not recognizing that site-wide updates of Javascript should be vetted through some sort of rigorous (not necessarily lengthy!) process (yes, many TMSs have workflow capabilities, but the fact that, “Oh, we have workflow and you can have it go through IT before being deployed…if you need to do that” is a response to a question rather than an up-front recognition is concerning).

I have a lot of empathy for the IT staff that gets cut out of TMS vendor discussions until after the contract has been signed and are then told to “just push out this one line of code.” That puts them in a difficult, delicate, and unfair spot!

Yet…Tag Management Is the Future

Do my concerns above mean that I think tag management is misguided or a mistake? Absolutely not! Tag management is an important step forward for the industry, but we can’t ignore the underlying realities. Tag management isn’t easy — no more than web analytics is easy or testing and optimization easy. The technology is a critical part of the whole formula, and I’m excited to see as many players as there are in the space — they’ll be innovating like crazy to survive! But, people (with knowledge) and processes (that cut across many systems) seem like they must be just as important to a successful TMS deployment as the TMS itself.