
Switching from Adobe to Google? What You Should Know (Part 2)

Last week, I went into detail on four key differences between Adobe and Google Analytics. This week, I’ll cover four more. This is far from an exhaustive list – but the purpose of these posts is not to cover all the differences between the two tools. There have been numerous articles over the years that go into great detail on many of these differences. Instead, my purpose here is to identify key things that analysts or organizations should be aware of should they decide to switch from one platform to another (specifically switching from Adobe to Google, which is a question I seem to get from one of my clients on a monthly basis). I’m not trying to talk anyone out of such a change, because I honestly feel like the tool is less important than the quality of the implementation and the team that owns it. But there are important differences between them, and far too often, I see companies decide to change to save money, or because they’re unhappy with their implementation of the tool (and not really with the tool itself).

Topic #5: Pathing

Another important difference between Adobe and Google is in path and flow analysis. Adobe Analytics allows you to enable any traffic variable to use pathing – in theory, up to 75 dimensions, and you can do path and next/previous flow on any of them. What’s more, with Analytics Workspace, you can also do flow analysis on any conversion variable – meaning that you can analyze the flow of just about anything.

Google’s Universal Analytics is far more limited. You can do flow analysis on both Pages and Events, but not any custom dimensions. It’s another case where Google’s simple UI gives it a perception advantage. But if you really understand how path and flow analysis work, Adobe’s ability to path on many more dimensions, and across multiple sessions/visits, can be hugely beneficial. However, this is an area Google has identified for improvement, and GA4 is bringing new capabilities that may help bring GA closer to par.

Topic #6: Traffic Sources/Marketing Channels

Both Adobe and Google Analytics offer robust reporting on how your users find your website, but there are subtle differences between them. Adobe lets you define as many channels as you want, along with the rules used to identify them; there are also pre-built rules you can use if needed. So you can accept Adobe’s built-in way of identifying social media traffic, but also make sure your paid social media links are correctly detected. You can also classify your marketing channel data into as many dimensions as you want.

Google also allows you to define as many channels as you want, but its tool is built around 5 key dimensions: source, medium, campaign, keyword, and content. These dimensions are typically populated using a series of query parameters prefixed with “utm_,” though they can also be populated manually. You can use any dimension to set up a series of channel groupings as well, similar to what Adobe offers.
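For illustration, here’s a minimal sketch of building a UTM-tagged link (the URL and values are made up; the utm_* parameter names are Google’s standard campaign parameters, each feeding one of the five dimensions):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical landing page; each utm_* parameter feeds one of the
# five traffic-source dimensions described above.
base_url = "https://www.example.com/landing"
utm_params = {
    "utm_source": "newsletter",     # -> Source
    "utm_medium": "email",          # -> Medium
    "utm_campaign": "spring_sale",  # -> Campaign
    "utm_term": "running shoes",    # -> Keyword
    "utm_content": "header_link",   # -> Content
}

tagged_url = base_url + "?" + urlencode(utm_params)
print(tagged_url)
```

Any link tagged this way will populate those five dimensions automatically when the visitor lands on the site.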

For paid channels, both tools offer more or less the same features and capabilities; Adobe offers far more flexibility in configuring how non-paid channels should be tracked. For example, Adobe allows you to decide that certain channels should not overwrite a previously identified channel. But Google overwrites any old channel (except direct traffic) as soon as a new channel is identified – and, what’s more, immediately starts a new session when this happens (this is one of the quirkiest parts of GA, in my opinion).
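To make the overwrite difference concrete, here’s a toy model of the two behaviors (this is neither vendor’s actual logic; the channel names and the “do not overwrite” list are invented for illustration):

```python
def google_resolve(current, incoming):
    # GA-style: keep the existing channel only when the new touch is Direct.
    return current if incoming == "Direct" else incoming

def adobe_resolve(current, incoming, keep=("Email",)):
    # Adobe-style: channels you flag as "do not overwrite" persist once set.
    if current in keep:
        return current
    return incoming

# A user arrives via Email, then returns via Natural Search.
print(google_resolve("Email", "Natural Search"))  # Natural Search
print(adobe_resolve("Email", "Natural Search"))   # Email
```

Remember that in Universal Analytics, the overwrite in the first case also starts a brand-new session.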

Both tools allow you to report on first, last, and multi-touch attribution – though again, Adobe tends to offer more customizability, while Google’s reporting is easier to understand and navigate. GA4 offers some real improvements to make attribution reporting even easier. Google Analytics is also so ubiquitous that most agencies are immediately familiar with it and ready to comply with a company’s traffic source reporting standards.

One final note about traffic sources is that Google’s integrations between Analytics and other Google marketing and advertising tools offer real benefits to any company – so much so that I even have clients that don’t want to move away from Adobe Analytics but still purchase GA360 just to leverage the advertising integrations.

Topic #7: Data Import / Classifications

One of the most useful features in Adobe Analytics is Classifications. This feature allows a company to categorize and classify the data captured in a report into additional attributes or metadata. For example, a company might capture the product ID at each step of the purchase process, and then upload a mapping of product IDs to names, categories, and brands. Each of those additional attributes becomes a “free” report in the interface. You don’t need to allocate an additional variable for it, but every attribute becomes its own report. This allows data to be aggregated or viewed in new ways. These classifications are also the only truly retroactive data in the tool – you can upload and overwrite classifications at any time, overwriting the data that was there previously. In addition, Adobe also has a powerful tool allowing you to not just upload your metadata, but also write matching rules (even using regular expressions) and have the classifications applied automatically, updating the classification tables each night.
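As a rough sketch of what rule-based classification looks like, here’s a toy version using regular expressions (the tracking-code format and rule patterns are invented; Adobe’s Rule Builder is configured in the UI, not in code):

```python
import re

# Illustrative stand-in for Adobe's Classification Rule Builder:
# each rule maps a regex on the raw tracking code to metadata columns.
rules = [
    (re.compile(r"^em_(?P<campaign>\w+)$"), {"Channel": "Email"}),
    (re.compile(r"^ps_(?P<campaign>\w+)$"), {"Channel": "Paid Search"}),
]

def classify(code):
    for pattern, attrs in rules:
        m = pattern.match(code)
        if m:
            return {**attrs, "Campaign Name": m.group("campaign")}
    return {"Channel": "Unclassified", "Campaign Name": code}

print(classify("em_spring_sale"))
```

Each classified attribute (“Channel,” “Campaign Name”) then shows up as its own report, without consuming another variable.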

Google Analytics has a similar feature, called Data Import. On the whole, Data Import is less robust than Classifications – for example, every attribute you want to enable as a new report in GA requires allocating one of your custom dimensions. However, Data Import has one important advantage over Classifications – the possibility to process the metadata in two different ways:

  • Query Time Data Import: Using this approach, the metadata you upload gets mapped to the primary dimension (the product ID in my example above) when you run your report. This is identical to how Adobe handles its classification data.
  • Processing Time Data Import: Using this approach, the metadata you upload gets mapped to the primary dimension at the time of data collection. This means that Google gives you the ability to report on your metadata either retroactively or non-retroactively.

This distinction may not be initially obvious, so here’s an example. Let’s say you capture a unique ID for your products in a GA custom dimension, and then you use data import to upload metadata for both brand name and category. The brand name is unlikely to change; a query time data import will work just fine. However, let’s say that you frequently move products between categories to find the one where they sell best. In this case, a query time data import may not be very useful – if you sold a pair of shoes in the “Shoes” category last month but are now selling it under “Basketball,” when you run a report over both months, that pair of shoes will look like it’s part of the Basketball category the entire time. But if you use a processing time data import, each purchase will be correctly attributed to the category in which it was actually sold.
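The shoe example can be sketched in a few lines (a toy simulation, not either tool’s actual pipeline):

```python
# Two purchases of the same product in different months.
hits = [
    {"month": "Jan", "product_id": "SKU1"},
    {"month": "Feb", "product_id": "SKU1"},
]

# Category at the moment each hit was collected (processing time)...
category_when_collected = {"Jan": "Shoes", "Feb": "Basketball"}
# ...versus the single current mapping a query-time join would use.
category_today = {"SKU1": "Basketball"}

processing_time = [category_when_collected[h["month"]] for h in hits]
query_time = [category_today[h["product_id"]] for h in hits]

print(processing_time)  # ['Shoes', 'Basketball'] - history preserved
print(query_time)       # ['Basketball', 'Basketball'] - rewritten
```

Processing time preserves what actually happened; query time retroactively rewrites history with the latest mapping.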

Topic #8: Raw Data Integrations

A few years ago, I was hired by a client to advise them on whether they’d be better off sticking with what had become a very expensive Adobe Analytics integration or moving to Google Analytics 360. I found that, under normal circumstances, they would have been an ideal candidate to move to Google – the base contract would save them money, and their reporting requirements were fairly common and not reliant on Adobe features like merchandising that are difficult to replicate with Google.

What made the difference in my final recommendation to stick with Adobe was that they had a custom integration in place that moved data from Adobe’s raw data feeds into their own massive data warehouse. A team of data scientists relied heavily on integrations that were already built and working successfully, and these integrations would need to be completely rebuilt if they switched to Google. We estimated that the cost of such an effort would likely more than make up the difference in the size of their contracts (it should be noted that the most expensive part of their Adobe contract was Target, and they were not planning on abandoning that tool even if they abandoned Analytics).

This is not to say that Adobe’s data feeds are superior to Google’s BigQuery product; in fact, because BigQuery runs on Google’s ubiquitous cloud platform, it’s more familiar to most database developers and data scientists. The integration between Universal Analytics and BigQuery is built right into the 360 platform, and it’s well structured and easy to work with if you are familiar with SQL. Adobe’s data feeds are large, flat files that require at least cursory knowledge of the Adobe Analytics infrastructure to consume properly (long, comma-delimited lists of obscure event and variable names cause companies all sorts of problems). But this company had already invested in an integration that worked, and it seemed costly and risky to switch.
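As a flavor of what consuming a data feed involves, here’s a simplified sketch (real feeds ship separate column-header and lookup files; the column layout and event IDs below are invented for illustration):

```python
import csv
import io

# Invented lookup mapping event IDs to readable names; in a real feed
# this comes from a separate lookup file delivered alongside the data.
event_lookup = {"1": "purchase", "200": "event1: internal search"}

# One tab-delimited hit row: date, visitor ID, comma-delimited event list.
raw = "20190601\tABC123\t1,200\n"
reader = csv.reader(io.StringIO(raw), delimiter="\t")
for date, visitor_id, event_list in reader:
    events = [event_lookup.get(e, e) for e in event_list.split(",")]
    print(date, visitor_id, events)
```

Every event and variable column has to be decoded this way, which is where teams without Adobe infrastructure knowledge tend to get stuck.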

The key takeaway for this topic is that both Adobe and Google offer solid methods for accessing their raw data and pulling it into your own proprietary databases. A company can be successful integrating with either product – but there is a heavy switching cost for moving from one to the other.

Here’s a summary of the topics covered in this post:

Pathing
  • Google Analytics: Allows pathing and flow analysis only on pages and events, though GA4 will improve on this
  • Adobe: Allows pathing and flow analysis on any dimension available in the tool, including across multiple visits

Traffic Sources/Marketing Channels
  • Google Analytics: Primarily organized around use of “utm” query parameters and basic referring domain rules, though customization is possible; strong integrations between Analytics and other Google marketing products
  • Adobe: Ability to define and customize channels in any way that you want, including for organic channels

Data Import/Classifications
  • Google Analytics: Data can be categorized either at processing time or at query time (query time only available for 360 customers); each attribute/classification requires use of one of your custom dimensions
  • Adobe: Data can only be categorized at query time; unlimited attributes available without use of additional variables

Raw Data Integrations
  • Google Analytics: Strong integration between GA and BigQuery; uses SQL (a skillset possessed by most companies)
  • Adobe: Data feeds are readily available and can be scheduled by anyone with admin access; requires processing of a series of complex flat files

In conclusion, Adobe and Google Analytics are the industry leaders in cloud-based digital analytics tools, and both offer a rich set of features that can allow any company to be successful. But there are important differences between them, and too often, companies that decide to switch tools are unprepared for what lies ahead. I hope these eight points have helped you better understand how the tools differ, and what a major undertaking it is to switch from one to the other. You can be successful, but that will depend more on how you plan, prepare, and execute on your implementation of whichever tool you choose. If you’re considering switching analytics tools – or have already decided to switch but are unsure how to do it successfully – please reach out to us and we’ll help you get through it.

Photo credits: trustypics is licensed under CC BY-NC 2.0


Switching from Adobe to Google? What You Should Know (Part 1)

In the past few months, I’ve had the same conversation with at least 5 different clients. After the most recent occurrence, I decided it was time to write a blog post about it. This conversation has involved a client either having made the decision to migrate from Adobe Analytics to Google Analytics 360 – or deciding to invest in both tools simultaneously. This isn’t a conversation that is new to me – I’ve had it at least a few times a year since I started at Demystified. But this year has struck me particularly because of both the frequency of these conversations and the lack of awareness among some of my clients of what this undertaking actually means to a company as large as those I typically work with. So I wanted to highlight the things I believe anyone considering a shift like this should know before they jump. Before I get into a discussion about the feature set between the tools, I want to note two things that have nothing to do with features and the tools themselves.

  • If you’re making this change because you lack confidence in the data in your current tool, you’re unlikely to feel better after switching. I’ve seen far too many companies that had a broken process for implementing and maintaining analytics tracking hope that switching platforms would magically fix their problems. I have yet to see a company actually experience that magical change. The best way to increase confidence in your data is to audit and fix your implementation, and then to make sure your analysts have adequate training to use whichever tool you’ve implemented. Switching tools will only solve your problem if it is accompanied by those two things.
  • If you’re making this change to save money, do your due diligence to make sure that’s really the case. Google’s pricing is usually much easier to figure out than Adobe’s, but I have seen strange cases where a company pays more for Google 360 than Adobe. You also need to make sure you consider the true cost of switching – how much will it take to start over with a new tool? Have you included the cost of things like rebuilding back-end processes for consuming data feeds, importing data into your internal data warehouse, and recreating integrations with other vendors you work with?

As we take a closer look at actual feature differences between Adobe and Google, I want to start by saying that we have many clients successfully using each tool. I’m a former Adobe employee, and I have more experience with Adobe’s tools than Google’s. But I’ve helped enough companies implement both of these tools to know that a company can succeed or fail with either tool, and a company’s processes, structure, and culture will be far more influential in determining success than which tool you choose. Each has strengths and features that the other does not have. But there are a lot of hidden costs in switching that companies often fail to think about beforehand. So if your company is considering a switch, I want you to know things that might influence that decision; and if your management team has made the decision for you, I want you to know what to expect.

A final caveat before diving in…this series of posts will not focus much on GA4 or the Adobe Experience Platform, which represent the future of each company’s strategy. There are similarities between those two platforms, namely that both allow a company to define its own data schema, as well as more easily incorporate external data sources in the reporting tool (Google’s Analysis tool or Adobe’s Analysis Workspace). I’ll try to call out points where these newer platforms change things, but my own experience has shown me that we’re still a ways out from most companies being ready to fully transition from the old to the new platforms.

Topic #1: Intended Audience

The first area I’d like to consider may be more opinion than fact – but I believe that, while neither company may want to admit it, they have targeted their analytics solutions to different markets. Google Analytics takes a far more democratic approach – it offers a UI that is meant to be relatively easy for even a new analyst to use. While deeper analysis is possible using Data Studio, Advanced Analysis, or BigQuery, the average analyst in GA generally uses the reports that are readily available. They’re fast, easy to run, and offer easily digestible insights.

On the other hand, I frequently tell my clients that Adobe gives its customers enough rope to hang themselves. There tend to be a lot more reports at an analyst’s fingertips in Adobe Analytics, and it’s not always clear what the implications are for mixing different types of dimensions and metrics. That complexity means that you can hop into Analysis Workspace and pretty quickly get into the weeds.

I’ve heard many a complaint from analysts with extensive GA experience who join a company that uses Adobe, usually about how hard it is to find things, how unintuitive the UI is, etc. It’s a valid complaint – and yet, I think Adobe kind of intends for that to be the case. The two tools are different – but they are meant to be that way.

Topic #2: Sampling

Entire books have been written on Google Analytics’ use of sampling, and I don’t want to go into that level of detail here. But sampling tends to be the thing that scares analysts the most when they move from Adobe to Google. For those not familiar with Adobe, this is because Adobe does not sample at all. Whatever report you run will always include 100% of the data collected for that time period (one caveat is that Adobe, like Google, does maintain some cardinality limits on reports, but I consider this to be different from sampling).

The good news is that Google Analytics has dramatically reduced the impact of sampling over the years, to the point where there are many ways to get unsampled data:

  • Any of the default reports in Google’s main navigation menus is unsampled, as long as you don’t add secondary dimensions, metrics, or breakdowns.
  • You always have the option of downloading an unsampled report if you need it.
  • Google 360 customers have the ability to create up to 100 “custom tables” per property. A custom table is a report you build in advance that combines all the dimensions and metrics you know you need. When you run reports using a custom table, you can apply dimensions, metrics, and segments to the report in any way you choose, without fear of sampling. Custom tables can be quite useful, but they must be built ahead of time and cannot be changed after that.
  • You can always get unsampled data from BigQuery, provided that you have analysts that are proficient with SQL.

It’s also important to note that most companies that move from Adobe to Google choose to pay for Google 360, which has much higher sampling thresholds than the free version of Google Analytics. The free version of GA turns on sampling once you exceed 500,000 sessions at the property level for the date range you are using. But GA 360 doesn’t apply sampling until you hit 100,000,000 sessions at the view level, or start pulling intra-day data. So not only is the total number much higher, but you can also structure your views in a way that makes sampling even less of an issue.
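The thresholds above reduce to a simple check (a toy model; in practice sampling also depends on the report type, query complexity, and whether intra-day data is included):

```python
def ga_applies_sampling(sessions, tier="standard"):
    """Toy check of the session thresholds described above.

    Standard GA samples ad hoc queries past 500k sessions (measured at
    the property level); GA 360 past 100M sessions (measured at the
    view level). Real behavior has additional triggers not modeled here.
    """
    threshold = 500_000 if tier == "standard" else 100_000_000
    return sessions > threshold

print(ga_applies_sampling(750_000))              # True
print(ga_applies_sampling(750_000, tier="360"))  # False
```

Because the 360 limit applies per view, splitting traffic across views pushes the effective threshold even further out.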

Topic #3: Events

Perhaps one of the most difficult adjustments for an analyst moving from Adobe to Google – or vice-versa – is event tracking. The confusion stems from the fact that the word “event” means something totally different in each tool:

  • In Adobe, an event usually refers to a variable used by Adobe Analytics to count things. A company gets up to 1000 “success events” that are used to count either the number of times something occurred (like orders) or a currency amount associated with a particular interaction (like revenue). These events become metrics in the reporting interface. The equivalent would be a goal or custom metric in Google Analytics – but Adobe’s events are far more useful throughout the reporting tools than custom metrics. They can also be serialized (counted only once per visit, or counted once for some unique ID).
  • In Google, an event refers to an interaction a user performs on a website or mobile app. These events become a specific report in the reporting interface, with a series of different dimensions containing data about the event. Each event you track has an associated category, action, label, and value. There really is no equivalent in Adobe Analytics – events are like a combination of 3 props and a corresponding success event, all rolled up into one highly useful report (unlike the custom links, file download, and exit links reports). But that report can often become overloaded or cluttered because it’s used to report on just about every non-page view interaction on the site.

If you’ve used both tools, these descriptions probably sound very unsophisticated. But it can often be difficult for an analyst to shift from one tool to the other, because he or she is used to one reporting framework, and the same terminology means something completely different in the other tool. GA4 users will note here that events have changed again from Universal Analytics – even page and screen views are considered to be events in GA4, so there’s even more to get used to when making that switch.

Topic #4: Conversion and E-commerce Reporting

Some of the most substantial differences between Adobe and Google Analytics are in their approach to conversion and e-commerce reporting. There are dozens of excellent blog posts and articles about the differences between props and eVars, or eVars and custom dimensions, and I don’t really want to hash that out again. But for an Adobe user migrating to Google Analytics, it’s important to remember a few key differences:

  • In Adobe Analytics, you can configure an eVar to expire in multiple ways: after each hit, after a visit/session, to never expire, after any success event occurs, or after any number of days. But in Google Analytics, custom dimensions can only expire after hits, sessions, or never (there is also the “product” option, but I’m going to address that separately).
  • In Adobe Analytics, eVars can be first touch or last touch, but in Google Analytics, all custom dimensions are always last touch.
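The attribution difference can be sketched as follows (a toy model of value allocation, not either tool’s implementation):

```python
def attributed_value(touches, allocation="last"):
    # Toy allocation model: which captured value gets credit for a
    # later success event. GA custom dimensions behave like "last";
    # Adobe eVars can be configured either way.
    return touches[0] if allocation == "first" else touches[-1]

# Two values captured in sequence before a conversion.
touches = ["internal search", "cross-sell"]
print(attributed_value(touches, "first"))  # internal search
print(attributed_value(touches, "last"))   # cross-sell
```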

These are notable differences, but it’s generally possible to work around those limitations when migrating to Google Analytics. However, there is a concept in Adobe that has virtually no equivalent in Google – and as luck would have it, it’s also something that even many Adobe users struggle to understand. Merchandising is the idea that an e-commerce company might want to associate different values of a variable with each product the customer views, adds to cart, or purchases. There are 2 different ways that merchandising can be useful:

  • Method #1: Let’s say a customer buys multiple products, and you want to use a variable or dimension to capture each product’s name, category, or some other common product attribute. Both Adobe and Google offer this type of merchandising, though Google requires each attribute to be passed on every hit where the product ID is captured, while Adobe allows an attribute to be captured once and associated with that product ID until you want it to expire.
  • Method #2: Alternatively, what if the value you want to associate with the product isn’t a consistent product attribute? Let’s say that a customer finds her first product via internal search, and her second by clicking on a cross-sell offer on that first product. You want to report on a dimension called “Product Finding Method.” We’re no longer dealing with a value that will be the same for every customer that buys the product; each customer can find the same product in different ways. This type of merchandising is much easier to accomplish with Adobe than with Google. I could write multiple blog posts about how to implement this in Adobe Analytics, so I won’t go into additional detail here. But it’s one of the main things I caution my Adobe clients about when they’re considering switching.
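Conceptually, the “Product Finding Method” scenario looks like this (a toy model of Adobe-style merchandising binding, not actual implementation code):

```python
# Toy model of conversion-syntax merchandising: the finding method is
# bound to each product at the moment it enters the cart, and persists
# with that product through purchase.
cart = []

def add_to_cart(product_id, finding_method):
    cart.append({"product": product_id, "finding_method": finding_method})

add_to_cart("SKU1", "internal search")
add_to_cart("SKU2", "cross-sell")

# At purchase, each product is credited to the method that found *that*
# product, not simply the last value that was set.
report = {item["product"]: item["finding_method"] for item in cart}
print(report)  # {'SKU1': 'internal search', 'SKU2': 'cross-sell'}
```

Without this per-product binding, a last-touch dimension would credit both products to “cross-sell.”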

At this point, I want to highlight Google’s suite of reports called “Enhanced E-commerce.” This is a robust suite of reports on all kinds of highly useful aspects of e-commerce reporting: product impressions and clicks, promotional impressions and clicks, and each step of the purchase process, from seeing a product in a list, to viewing a product detail page, all the way through checkout. It’s built right into the interface in a standardized way, using a standard set of dimensions, which yields a set of reports that will be highly useful to anyone familiar with the Google reporting interface. While you can create all the same types of reporting in Adobe, it’s more customized – you pick which eVars you want to use, choose from multiple options for tracking impressions and clicks, and end up with reporting that is every bit as useful but far less user-friendly than Google’s enhanced e-commerce reporting.

In the first section of this post, I posited that the major difference between these tools is that Adobe focuses on customizability, while Google focuses on standardization. Nowhere is that more apparent than in e-commerce and conversion reporting: Google’s enhanced e-commerce reporting is simple and straightforward, while Adobe requires customization to accomplish a lot of the same things but, by layering on complexity like merchandising, offers more robust reporting in the process.

One last thing I want to call out in this section is that Adobe’s standard e-commerce reporting allows for easy de-duplication of purchases based on a unique order ID. When you pass Adobe the order ID, it checks to make sure that the order hasn’t been counted before; if it has, it does not count the order a second time. Google, on the other hand, also accepts the order ID as a standard dimension for its reporting – but it doesn’t perform this useful de-duplication on its own. If you want it, you have to build out the functionality as part of your implementation work.
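Here’s a minimal sketch of the kind of de-duplication you would have to build yourself on the Google side (illustrative only; a real implementation would typically use a cookie or a server-side check rather than in-memory state):

```python
# Order-ID de-duplication like Adobe performs automatically.
seen_orders = set()

def record_purchase(order_id, revenue, totals):
    if order_id in seen_orders:
        return False  # already counted (e.g. confirmation-page reload)
    seen_orders.add(order_id)
    totals["orders"] += 1
    totals["revenue"] += revenue
    return True

totals = {"orders": 0, "revenue": 0.0}
record_purchase("A1001", 49.99, totals)
record_purchase("A1001", 49.99, totals)  # reload: not re-counted
print(totals)  # {'orders': 1, 'revenue': 49.99}
```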

Here’s a quick recap on what we’ve covered so far:

Sampling
  • Google Analytics: Standard: above 500,000 sessions during the reporting period; 360: above 100,000,000 sessions during the reporting period
  • Adobe: Does not exist

Cardinality
  • Google Analytics: Standard: 50,000 unique values per report per day, or 100,000 unique values for multi-day tables; 360: 1,000,000 unique values per report per day, or 150,000 unique values for multi-day tables
  • Adobe: 500,000 unique values per report per month (can be increased if needed)

Event Tracking
  • Google Analytics: Used to track interactions, using 3 separate dimensions (category, action, label)
  • Adobe: Used to track interactions using a single dimension (i.e. the “Custom Links” report)

Custom Metrics/Success Events
  • Google Analytics: 200 per property; can track whole numbers, decimals, or currency; can only be used in custom reports
  • Adobe: 1,000 per report suite; can track whole numbers, decimals, or currency; can be used in any report; can be serialized

Custom Dimensions/Variables
  • Google Analytics: 200 per property; can be scoped to hit, session, or user; can only be used in custom reports; can only handle last-touch attribution; product scope allows for analysis of product attributes, but nothing like Adobe’s merchandising feature exists
  • Adobe: 250 per report suite; can be scoped to hit, visit, visitor, any number of days, or to expire when any success event occurs; can be used in any report; can handle first-touch or last-touch attribution; merchandising allows for complex analysis of any possible dimension, including product attributes

E-Commerce Reporting
  • Google Analytics: Pre-configured dimensions, metrics, and reports exist for all steps in an e-commerce flow, starting with product impressions and clicks and continuing through purchase
  • Adobe: Pre-configured dimensions and metrics exist for all steps in an e-commerce flow, starting with product views and continuing through purchase; product impressions and clicks can also be tracked using additional success events

This is a good start – but next week, I’ll dive into a few additional topics: pathing, marketing channels, data import/classifications, and raw data integrations. If it feels like there’s a lot to keep track of, it should. Migrating from one analytics tool to another is a big job – and sometimes the people who make a decision like this aren’t totally aware of the burden it will place on their analysts and developers.

Photo credits: trustypics is licensed under CC BY-NC 2.0


Sydney Adobe Analytics “Top Gun” Class!

UPDATE: Sydney “Top Gun” class is now sold out!

For several years, I have wanted to get back to Australia. It is one of my favorite places and I haven’t been in a LONG time. I have never offered my advanced Adobe Analytics “Top Gun” class in Australia, but this year is the year! I am conducting my Adobe Analytics “Top Gun” Class on June 26th in Sydney. This is the day before the Adobe 2019 Sydney Symposium held on June 27-28, so people who have to travel can attend both events as part of the same trip! This will probably be the only time I offer this class in the region, so I encourage you to take advantage of it! Seats are limited, so I suggest you register early!

Here is a link to register for the class: https://www.eventbrite.com/e/analytics-demystified-adobe-analytics-top-gun-training-sydney-2019-tickets-54764631487

For those of you unfamiliar with my Adobe Analytics “Top Gun” class, it is a one-day crash course on how Adobe Analytics works behind the scenes based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate everyday business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects of Adobe Analytics, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Here are some quotes from recent London class attendees:

Again, here is a link to register for the class: https://www.eventbrite.com/e/analytics-demystified-adobe-analytics-top-gun-training-sydney-2019-tickets-54764631487

Please e-mail me if you have any questions.  Thanks!


B2B Conversion Funnels

One of the unique challenges of managing a B2B website is that you often don’t actually sell anything directly. Most B2B websites are there to educate, create awareness and generate sales leads (normally through form completions). Retail sites have a very straightforward conversion funnel: Product Views to Cart Additions to Checkouts to Orders. But B2B sites are not as linear. In fact, there is a ton of research that shows that B2B sales consideration cycles are very long and potential customers only reach out or self-identify towards the end of the process.

So if you work for a B2B organization, how can you see how your website is performing if the conversion funnel isn’t obvious? One thing you can do is to use segmentation to split your visitors into the various stages of the buying process. Some people subscribe to the Awareness – Consideration – Intent – Decision funnel model, but there are many different types of B2B funnel models that you can choose from. Regardless of which model you prefer, you can use digital analytics segmentation to create visitor buckets and see how your visitors progress through the buying process.

To illustrate this, I will use a very basic example using my website. On my website, I write blog posts, which [hopefully] drive visitors to the site to read, which, in turn, gives me an opportunity to describe my consulting services (of course, generating business isn’t my only motivation for writing blog posts, but I do have kids to put through college!). Therefore, if I want to identify which visitors I think are at the “Awareness” stage for my services, I might make a segment that looks like this:

Here I am saying that someone who has been to my website more than once and read more than one of my blog posts is generally “aware” of me. Next, I can create another segment for those that might be a bit more serious about considering me like this:

Here, you can see that I am raising the bar a bit and saying that to be in the “Consideration” bucket, they have to have visited at least 3 times and viewed at least three of my blog posts. Lastly, I will create a third bucket called “Intent” and define it like this:

Here, I am saying that they had to have met the criteria of “Consideration” and viewed at least one of the more detailed pages that describe my consulting services. As I mentioned, this example is super-simplistic, but the general idea is to place visitors into sales funnel buckets based upon what actions they can do on your website that might indicate that they are in one stage or another.
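The bucketing logic behind these three segments can be sketched in code. This is a hypothetical illustration only (the visitor fields and function name are my own invention; the thresholds mirror the example segments above), returning the most advanced stage a visitor qualifies for:

```javascript
// Hypothetical sketch of the Awareness / Consideration / Intent buckets
// described above. The visitor fields (visits, blogPostViews,
// servicePageViews) are illustrative, not real Adobe Analytics fields.
function funnelStage(visitor) {
  var consideration = visitor.visits >= 3 && visitor.blogPostViews >= 3;
  if (consideration && visitor.servicePageViews >= 1) return "Intent";
  if (consideration) return "Consideration";
  if (visitor.visits > 1 && visitor.blogPostViews > 1) return "Awareness";
  return "None";
}

console.log(funnelStage({ visits: 4, blogPostViews: 5, servicePageViews: 1 })); // "Intent"
```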

However, these buckets are not mutually exclusive. Therefore, what you can do is place them into a conversion funnel report in your digital analytics tool, which applies the segments progressively, taking sequence into account. In this case, I am going to use Adobe’s Analysis Workspace fallout visualization to see how my visitors are progressing through the sales process (I am also applying a few segments to narrow down the data, like excluding competitor traffic and content unrelated to me):

Here is what the fallout report looks like when it is completed:

In this report, I have applied each of the preceding three segments to the Visits metric and created a funnel. I also use the Demandbase product (which attempts to tell me what company anonymous visitors work for), so I segmented my funnel for all visitors and for those where a Demandbase Company exists. Doing this, I can see that for companies that I can identify, 55% of visitors make it to the Awareness stage, 27% make it to the Consideration stage, but only 2% make it to the Intent stage. This allows you to see where your website issues might exist. In my case, I am not very focused on using my content to sell my services, and this can be seen in the 25-point drop-off between Consideration and Intent. If I want to see this trended over time, I can simply right-click and see the various stages trended:

In addition, I can view each of these stages in a tabular format by simply right-clicking to create a segment from each touchpoint and adding those segments to a freeform table. Keep in mind that these segments will be different from the Awareness, Consideration and Intent segments shown above because, since they come from the fallout report, they take sequence into account (using sequential segmentation):

Once I have created segments for all funnel steps, I can create a table that looks like this:

This shows me which known companies (via Demandbase) have unique visitors at each stage of the buying process and which companies I might want to reach out to about getting new business. If I want, I can right-click and make a new calculated metric that divides the Intent visitor count by the Awareness visitor count to see who might be the most passionate about working with me:

Summary

So this is one way that you can use the power of segmentation to create B2B sales funnels with your digital analytics data. To read some other posts I have shared related to B2B, you can check out the following, many coming from my time at Salesforce.com:

Adobe Analytics, Featured

New Cohort Analysis Tables – Rolling Calculation

Last week, Adobe released a slew of cool updates to the Cohort Tables in Analysis Workspace. For those of you who suffered through my retention posts of 2017, you will know that this is something I have been looking forward to! In this post, I will share an example of how you can use one of the new updates, a feature called rolling calculation.

A pretty standard use case for cohort tables is looking to see how often a website visitor came to your site, performed an action and then returned to perform another action. The two actions can be the same or different. The most popular example is probably people who ordered something on your site and then came back and ordered again. You are essentially looking for “cohorts” that were the same people doing both actions.

To illustrate this, let’s look at people who come to my blog and read posts. I have a success event for blog post views and I have a segment created that looks for blog posts written by me. I can bring these together to see how often my visitors come back to my blog each week: 

I can also view this by month:

These reports are good at letting me know how many visitors who read a blog post in January of 2018 came back to read a post in February, March, etc… In this case, it looks like my blog posts in July, August & September did better than other months at driving retention.

However, one thing that these reports don’t tell me is whether the same visitors returned every week (or month). Knowing this tells you how loyal your visitors are over time (bearing in mind that cookie deletion will make people look less loyal!). This ability to see the same visitors rolling through all of your cohort reports is what Adobe has added.

Rolling Calculation

To view how often the same people return, you simply have to edit your cohort table and check off the Rolling Calculation box like this:

This will result in a new table that looks like this:

Here you can see that very few people are coming to my blog one week after another. For me, this makes sense, since I don’t always publish new posts weekly. The numbers look similar when viewed by month:

Even though the rolling calculation cohort feature can be a bit humbling, it is a really cool feature that can be used in many different ways. For example, if you are an online retailer, you might want to use the QUARTER granularity option and see what % of visitors purchase from you at least once every quarter. If you manage a financial services site, you might want to see how often the same visitors return each month to check their online bank statements or make payments.
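Conceptually, the rolling calculation can be sketched like this. A toy illustration only, not Adobe's implementation (the data shape and function name are my own assumptions):

```javascript
// Toy sketch of a rolling cohort: of the visitors active in the start week,
// count only those who returned in EVERY subsequent week up to week N.
// (A standard cohort counts anyone active in week N, gaps allowed.)
function rollingCohort(visitorWeeks, startWeek, numWeeks) {
  // visitorWeeks maps a visitor id to the list of weeks they were active
  var cohort = Object.keys(visitorWeeks).filter(function (id) {
    return visitorWeeks[id].indexOf(startWeek) !== -1;
  });
  var counts = [];
  for (var n = 1; n <= numWeeks; n++) {
    // survivors must also have been active in week n
    cohort = cohort.filter(function (id) {
      return visitorWeeks[id].indexOf(startWeek + n) !== -1;
    });
    counts.push(cohort.length);
  }
  return counts;
}

// Visitor "b" skipped week 1, so a rolling cohort drops them for good even
// though they came back in week 2.
var weeks = { a: [0, 1, 2], b: [0, 2], c: [0, 1] };
console.log(rollingCohort(weeks, 0, 2)); // [2, 1]
```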

Segmentation

One last thing to remember is that you still have the ability to right-click on any cohort cell and create a segment. This means that in one click you can build a segment for people who come to your site in one week and return the next week. It is as easy as this:

The resulting segment created will be a bit lengthy (and a bit intimidating!), but you can name it and tweak it as needed:

Summary

Rolling Calculation cohort analysis is a great new feature for Analysis Workspace. Since no additional implementation is required to use this new feature, I suggest you try it out with some of your popular success events…

Adobe Analytics, Featured

Adam Greco Adobe Analytics Blog Index

Over the years, I have tried to consistently share as much as I can about Adobe Analytics. The only downside of this is that my posts can span a wide range of topics. Therefore, as we start a new year, I have decided to compile an index of my blog posts in case you want a handy way to find them by topic. This index won’t include all of my posts or old ones on the Adobe site, since many of them are now outdated due to new advances in Adobe Analytics. Of course, you can always deep-dive into most Adobe Analytics topics by checking out my book.

Running A Successful Adobe Analytics Implementation

Adobe Analytics Implementation & Features

Analysis Workspace

Virtual Report Suites

Sample Analyses by Topic

Marketing Campaigns

Content

Click-through Rates

Visitor Engagement/Scoring

eCommerce

Lead Generation/Forms

Onsite Search

Adobe Analytics Administration

Adobe Analytics Integrations

Adobe Analytics, Featured

My Favorite Analysis Workspace Right-Clicks – Part 2

In my last blog post, I began sharing some of my favorite hidden right-click actions in Analysis Workspace. In this post, I continue where I left off (since that post was getting way too long!). Most of these items are related to the Fallout visualization since I find that it has so many hidden features!

Freeform Table – Change Attribution Model for Breakdowns

Attribution is always a heated topic. Some companies are into First Touch, while others believe in Last Touch. In many cases, you have to agree as an organization which attribution model to use, especially when it comes to marketing campaigns. However, what if you want to use multiple attribution models? For example, let’s say that as an organization, you decide that the over-arching attribution model is Last Touch, meaning that the campaign source occurring closest to the success (Order, Blog Post View, etc.) is the one that gets credit. Here is what this looks like for my blog:

However, what if, at the tracking code level, you want to see attribution differently? For example, what if you decide that once the Last Touch model is applied to the campaign source, you want to see the specific tracking codes leading to Blog Posts allocated by First Touch? Multiple concurrent attribution models are available in Analysis Workspace, but the feature is hidden. Here is how to use it.

First, you want to break down your campaign source into tracking codes by right-clicking and choosing your breakdown:

You can see that the breakdown is showing tracking codes by source and that the attribution model is Last Touch | Visitor (highlighted in red above). However, if you hover your mouse over the attribution description of the breakdown header, you can see an “Edit” link like this:

Clicking this link allows you to change the attribution model for the selected metric for the breakdown rows. In this case, you can view tracking codes within the “linkedin-post” source attributed using First Touch Attribution and, just for fun, you can change the tracking code attribution for Twitter to an entirely different attribution model (both shown highlighted in red below):

So with a few clicks, I have changed my freeform table to view campaign source by Last Touch, but then within that, tracking codes from LinkedIn by First Touch and Twitter by J Curve attribution. Here is what the new table looks like side-by-side with the original table that is all based upon Last Touch:

As you can see, the numbers can change significantly! I suggest you try out this hidden tip whenever you want to see different attribution models at different levels…

Fallout – Trend

The next right-click I want to talk about has to do with the Fallout report. The Fallout report in Analysis Workspace is beyond cool! It lets you add pages, metrics and pretty much anything else you want to it to see where users are dropping off your site or app. You can also apply segments to the Fallout report holistically or just to a specific portion of the Fallout report. In this case, I have created a Fallout report that shows how often visitors come to our home page, eventually view one of my blog posts and then eventually view one of my consulting services pages:

Now, let’s imagine that I want to see how this fallout is trending over time. To do this, right-click anywhere in the fallout report and choose the Trend all touchpoints option as shown here:

Trending all touchpoints produces a new graph that shows fallout trended over time:

Alternatively, you can select the Trend touchpoint option for a specific fallout touchpoint and see one of the trends. Seeing one fallout trend provides the added benefit of being able to see anomaly detection within the graph:

Fallout – Fall-Through & Fall-Out

The Fallout visualization also allows you to view where people go directly after your fallout touchpoints. Fallthrough reporting can help you understand where they are going if they don’t go directly to the next step in your fallout steps. Of course, there are two possibilities here. Some visitors eventually do make it to the remaining steps in your fallout and others do not. Therefore, Analysis Workspace provides right-clicks that show you where people went in both situations. The Fallthrough scenario covers cases where visitors do eventually make it to the next touchpoint and right-clicking and selecting that option looks like this:

In this case, I want to see where people who have completed the first two steps of my fallout go directly after the second step, but only for cases in which they eventually make it to the third step of my fallout. Here is what the resulting report looks like:

As you can see, there were a few cases in which users went directly to the pages I wanted them to go to (shown in red), but now I can see where they deviated and view the latter in descending order.

The other option is to use the fallout (vs. fallthrough) option. Fallout shows you where visitors went next if they did not eventually make it to the next step in your fallout. You can choose this using the following right-click option:

Breakdown fallout by touchpoint produces a report that looks like this:

Another quick tip related to the fallout visualization that some of my clients miss is the option to make fallout steps immediate instead of eventual. At each step of the fallout, you can change the setting shown here:

Changing the setting to Next Hit narrows down the scope of your fallout to only include cases in which visitors went directly from one step to the next. Here is what my fallout report looks like before and after this change:

Fallout – Multiple Segments

Another cool feature of the fallout visualization is that you can add segments to it to see fallout for different segments of visitors. You can add multiple segments to the fallout visualization. Unfortunately, this is another “hidden” feature because you need to know that this is done by dragging over a segment and dropping it on the top part of the visualization as shown here:

This shows a fallout that looks like this:

Now I can see how my general population falls out and also how it is different for first-time visits. To demonstrate adding multiple segments, here is the same visualization with an additional “Europe” segment added:

Going back to what I shared earlier, right-clicking to trend touchpoints with multiple segments added requires you to click precisely on the part that you want to see trended. For example, right-clicking on the Europe Visits step two shows a different trend than clicking on the 1st Time Visits bar:

Therefore, clicking on both of the different segment bars displays two different fallout trends:

So there you have it: two blog posts’ worth of obscure Analysis Workspace features that you can explore. I am sure there are many more, so if you have any good ones, feel free to leave them as a comment here.

Adobe Analytics, Featured

My Favorite Analysis Workspace Right-Clicks – Part 1

If you use Adobe Analytics, Analysis Workspace has become the indispensable tool of choice for reporting and analysis. As I mentioned back in 2016, Analysis Workspace is the future and where Adobe is concentrating all of its energy these days. However, many people miss all of the cool things they can do with it because much of the functionality is hidden in the [in]famous right-click menus. Gurus have learned: “when in doubt, right-click.” In this post, I will share some of my favorite right-click options in Analysis Workspace in case you have not yet discovered them.

Freeform Table – Compare Attribution Models

If you are an avid reader of my blog, you may recall that I recently shared that a lot of attribution in Adobe Analytics is shifting from eVars to Success Events. Therefore, when you are using a freeform table in Analysis Workspace, there may be times when you want to compare different attribution models for a metric you already have in the table. Instead of forcing you to add the metric again and then modify its attribution model, you can now choose a second attribution model right from within the freeform table. To do this, just right-click on the metric header and select the Compare Attribution Model option:

This will bring up a window asking you which comparison attribution model you want to use that looks like this:

Once you select that, Analysis Workspace will create a new column with the secondary attribution model and also automatically create a third column that compares the two:

My only complaint here is that when you do this, it becomes apparent that there is no indication of which attribution model the original column was using. I hope that, in the future, Adobe will start putting attribution model indicators underneath every metric added to a freeform table, since the first metric column above looks a bit confusing and only an administrator would know its allocation, based upon the eVar settings in the Admin Console. Therefore, my bonus trick is to use the Modify Attribution Model right-click option and set it to the correct model:

In this case, the original column was Last Touch at the Visitor level, so modifying this keeps the data as it was, but now shows the attribution label:

This is just a quick “hack” I figured out to make things clearer for my end-users… But, as you can see, all of this functionality is hidden in the right-click of the Freeform table visualization. Obviously, there are other uses for the Modify Attribution Model feature, such as changing your mind about which model you want to use as you progress through your analysis.

Freeform Table – Compare Date Range

Another handy freeform table right-click is the date comparison. This allows you to pick a date range and compare the same metric for the before and after range and also creates a difference column automatically. To do this, just right-click on the metric column of interest and specify your date range:

This is what you will see after you are finished with your selection:

In this case, I am looking at my top blog posts from October 11 – Nov 9 compared to the prior 30 days. This allows me to see how posts are doing in both time periods and see the percent change. In your implementation, you might use this technique to see product changes for Orders and Revenue.

Cohort – Create Segment From Cell

If you have situations on your website or mobile app that require you to see if your audience is coming back over time to perform specific actions, then the Cohort visualization can be convenient. By adding the starting and ending metric to the Cohort visualization, Analysis Workspace will automatically show you how often your audience (“cohorts”) are returning. Here is what my blog Cohort looks like using Blog Post Views as the starting and ending metrics:

While this is interesting, what I like is my next hidden right-click. This is the ability to automatically create a segment from a specific cohort cell. There are many times where you might want to build a segment of people who came to your site, did something and then came back later to do either the same thing or a different thing. Instead of spending a lot of time trying to build a segment for this, you can create a Cohort table and then right-click to create a segment from a cell. For example, let’s imagine that I notice a relatively high return rate the week after September 16th. I can right-click on that cell and use the Create Segment from Cell option:

This will automatically open up the segment builder and pre-populate the segment, which may look like this:

From here you can modify the segment any way you see fit and then save it. Then you can use this segment in any Adobe Analytics report (or even make a Virtual Report Suite from it!). This is a cool, fast way to build cohort segments! Sometimes, I don’t even keep the Cohort table itself. I merely use the Cohort table to make the segment I care about. I am not sure if that is smart or lazy, but either way, it works!

Venn – Create Segment From Cell

As long as we are talking about creating segments from a visualization, I would be remiss if I didn’t mention the Venn visualization. This visualization allows you to add up to three segments and see the overlap between all of them. For example, let’s say that for some crazy reason I need to look at people who view my blog posts, are first-time visitors and are from Europe. I would just drag over all three of these segments and then select the metric I care about (Blog Post Views in this case):

This would produce a Venn diagram that looks like this:

While this is interesting, the really cool part is that I can now right-click on any portion of the Venn diagram to get a segment. For example, if I want a segment for the intersection of all three segments, I just right-click in the region where they all overlap like this:

This will result in a brand new segment builder window that looks like this:

From here, I can modify it, save it and use it any way I’d like in the future.

Venn – Add Additional Metrics

While we are looking at the Venn visualization, I wanted to share another secret tip that I learned from Jen Lasser while we traveled the country performing Adobe Insider Tours. Once you have created a Venn visualization, you can click on the dot next to the visualization name and check the Show Data Source option:

This will expose the underlying data table that is powering the visualization like this:

But the cool part is what comes next. From here, you can add as many metrics as you want to the table by dragging them into the Metrics area. Here is an example of me dragging over the Visits metric and dropping it on top of the Metrics area:

Here is what it looks like after multiple metrics have been added (my implementation is somewhat lame, so I don’t have many metrics!):

But once you have numerous metrics, things get really cool! You can click on any metric, and the Venn visualization associated with the table will dynamically change! Here is a video that shows what this looks like in real life:

This cool technique allows you to see many Venn visualizations for the same segments at once!

Believe it or not, that is only half of my favorite right-clicks in Analysis Workspace! Next week, I will share the other ones, so stay tuned!

Adobe Analytics, Tag Management, Technical/Implementation, Testing and Optimization

Adobe Target + Analytics = Better Together

Last week I wrote about an Adobe Launch extension I built to familiarize myself with the extension development process. This extension can be used to integrate Adobe Analytics and Target in the same way that used to be possible prior to the A4T integration. For the first several years after Omniture acquired Offermatica (and Adobe acquired Omniture), the integration between the two products was rather simple but quite powerful. By using a built-in list variable called s.tnt (which did not count against the three list variables per report suite available to all Adobe customers), Target would pass a list of all activities and experiences in which a visitor was a participant. This enabled reporting in Analytics that would show the performance of each activity, and allow for deep-dive analysis using all the reports available in Analytics (Target offers a powerful but limited set of reports). When Target Standard was released, this integration became more difficult to utilize, because if you choose to use Analytics for Target (A4T) reporting, the plugins required to make it work are invalidated. Luckily, there is a way around it, and I’d like to describe it today.

Changes in Analytics

To re-create the old s.tnt integration, you’ll need to use one of your three list variables. Choose the one you want, as well as the delimiter and the expiration (the s.tnt expiration was 2 weeks).

Changes in Target

The changes you need to make in Target are nearly as simple. Log into Target, go to “Setup” in the top menu and then click “Response Tokens” in the left menu. You’ll see a list of tokens, or data elements that exist within Target, that can be exposed on the page. Make sure that activity.id, experience.id, activity.name, and experience.name are all toggled on in the “Status” column. That’s it!

Changes in Your TMS

What we did in Analytics and Target made an integration possible – we now have a list variable ready to store Target experience data, and Target will now expose that data on every mbox call. Now, we need to connect the two tools and get data from Target to Analytics.

Because Target is synchronous, the first block of code we need to execute must also run synchronously – this might cause problems for you if you’re using Signal or GTM, as there aren’t any great options for synchronous loading with those tools. But you could do this in any of the following ways:

  • Use the “All Pages – Blocking (Synchronous)” condition in Ensighten
  • Put the code into the utag.sync.js template in Tealium
  • Use a “Top of Page” (DTM) or “Library Loaded” rule (Launch)

The code we need to add synchronously attaches an event listener that will respond any time Target returns an mbox response. The response tokens are inside this response, so we listen for the mbox response and then write that code somewhere it can be accessed by other tags. Here’s the code:

	if (window.adobe && adobe.target) {
		document.addEventListener(adobe.target.event.REQUEST_SUCCEEDED, function(e) {
			if (e.detail.responseTokens) {
				var tokens = e.detail.responseTokens;
				// preserve activities already captured on this page so the
				// duplicate check below works across multiple mbox responses
				window.targetExperiences = window.targetExperiences || [];
				for (var i=0; i<tokens.length; i++) {
					var inList = false;
					for (var j=0; j<targetExperiences.length; j++) {
						if (targetExperiences[j].activityId == tokens[i]['activity.id']) {
							inList = true;
							break;
						}
					}
					
					if (!inList) {
						targetExperiences.push({
							activityId: tokens[i]['activity.id'],
							activityName: tokens[i]['activity.name'],
							experienceId: tokens[i]['experience.id'],
							experienceName: tokens[i]['experience.name']
						});
					}
				}
			}
			
			if (window.targetLoaded) {
				// TODO: respond with an event tracking call
			} else {
				// TODO: respond with a page tracking call
			}
			// flag the first response so subsequent responses fire events, not pageviews
			window.targetLoaded = true;
		});
	}
	
	// set failsafe in case Target doesn't load
	setTimeout(function() {
		if (!window.targetLoaded) {
			// prevent a duplicate page tracking call if Target loads late
			window.targetLoaded = true;
			// TODO: respond with a page tracking call
		}
	}, 5000);

So what does this code do? It starts by adding an event listener that waits for Target to send out an mbox request and get a response back. Because of what we did earlier, that response will now carry at least a few tokens. If any of those tokens indicate the visitor has been placed within an activity, it checks to make sure we haven’t already tracked that activity on the current page (to avoid inflating instances). It then adds activity and experience IDs and names to a global object called “targetExperiences,” though you could push it to your data layer or anywhere else you want. We also set a flag called “targetLoaded” to true that allows us to use logic to fire either a page tracking call or an event tracking call, and avoid inflating page view counts on the page. We also have a failsafe in place, so that if for some reason Target does not load, we can initiate some error handling and avoid delaying tracking.

You’ll notice the word “TODO” in that code snippet a few times, because what you do with this event is really up to you. This is the point where things get a little tricky. Target is synchronous, but the events it registers are not. So there is no guarantee that this event will be triggered before the DOM ready event, when your TMS likely starts firing most tags. So you have to decide how you want to handle the event. Here are some options:

  • My code above is written in a way that allows you to track a pageview on the very first mbox load, and a custom link/event tracking call on all subsequent mbox updates. You could do this with a utag.view and utag.link call (Tealium), or trigger a Bootstrapper event with Ensighten, or a direct call rule with DTM. If you do this, you’ll need to make sure you configure the TMS to not fire the Adobe server call on DOM ready (if you’re using DTM, this is a huge pain; luckily, it’s much easier with Launch), or you’ll double-count every page.
  • You could just configure the TMS to call a custom link call every time, which will probably increase your server calls dramatically. It may also make it difficult to analyze experiences that begin on page load.

What my Launch extension does is fire one direct call rule on the first mbox call, and a different call for all subsequent mbox calls. You can then configure the Adobe Analytics tag to fire an s.t() call (pageview) for that initial direct call rule, and an s.tl() call for all others. If you’re doing this with Tealium, make sure to configure your implementation to wait for your utag.view() call rather than allowing the automatic one to track on DOM ready. This is the closest behavior to how the original Target-Analytics integration worked.
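To close the loop on the Analytics side, the captured activities still need to be serialized into the list variable chosen earlier. Here is one minimal sketch of that step (the helper name, the activity:experience format, and the use of list1 are my own assumptions; use whichever list variable and delimiter you configured):

```javascript
// Serialize the captured Target activities into a delimited string suitable
// for an Analytics list variable. The "name:experience" format is just an
// illustrative convention; match the delimiter configured in the Admin Console.
function buildTargetListVar(experiences, delimiter) {
  return (experiences || []).map(function (exp) {
    return exp.activityName + ":" + exp.experienceName;
  }).join(delimiter);
}

// e.g., inside your page or link tracking code (list1 is an assumption):
// s.list1 = buildTargetListVar(window.targetExperiences, ",");
```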

I’d also recommend not limiting yourself to using response tokens in just this one way. You’ll notice that there are tokens available for geographic data (based on an IP lookup) and many other things. One interesting use case is that geographic data could be extremely useful in achieving GDPR compliance. While the old integration was simple and straightforward, and this new approach is a little more cumbersome, it’s far more powerful and gives you many more options. I’d love to hear what new ways you find to take advantage of response tokens in Adobe Target!

Photo Credit: M Liao (Flickr)

Adobe Analytics, Tag Management, Technical/Implementation

My First Crack at Adobe Launch Extension Development

Over the past few months, I’ve been spending more and more time in Adobe Launch. So far, I’m liking what I see – though I’m hoping the publish process gets ironed out a bit in coming months. But that’s not the focus of this post; rather, I wanted to describe my experience working with extensions in Launch. I recently authored my first extension – which offers a few very useful ways to integrate Adobe Target with other tools and extensions in Launch. You can find out more about it here, or ping me with any questions if you decide to add the extension to your Launch configuration. Next week I’ll try and write more about how you might do something similar using any of the other major tag management systems. But for now, I’m more interested in how extension development works, and I’d like to share some of the things I learned along the way.

Extension Development is New (and Evolving) Territory for Adobe

The idea that Adobe has so freely opened up its platform to allow developers to share their own code across Adobe’s vast network of customers is admittedly new to me. After all, I can remember the days when Omniture/Adobe didn’t even want to open up its platform to a single customer, much less all of them. Remember the days of usage tokens for its APIs? Or having to pay for a consulting engagement just to get the code to use an advanced plugin like Channel Manager? So the idea that Adobe has opened things up to the point where I can write my own code within Launch, programmatically send it to Adobe, and have it then available for any Adobe customer to use – that’s pretty amazing. And for being so new, the process is actually pretty smooth.

What Works Well

Adobe has put together a pretty solid documentation section for extension developers. All the major topics are covered, and the Getting Started guide should help you get through the tricky parts of your first extension like authentication, access tokens, and uploading your extension package to the integration environment. One thing to note is that just about everything you define in your extension is a “type” of that thing, not the actual thing. For example, my extension exposes data from Adobe Target for use by other extensions. But I didn’t immediately realize that my data element definitions didn’t actually define new data elements for use in Launch; they only created a new “type” of data element in the UI that can then be used to create a data element. The same is true for custom events and actions. That makes sense now, but it took some getting used to.

During the time I spent developing my extension, I also found that the Launch product team is continuously improving the process for us. When I started, the documentation offered a somewhat clunky process to retrieve an access token, zip my extension, and use a Postman collection to upload it. By the time I was finished, Adobe had released a Node package (npm) that basically does all the hard work. The team was also incredibly helpful – they responded almost immediately to my questions on their Slack group. They definitely seem eager to build out a community as quickly as possible.

I also found the integration environment to be very helpful in testing out my extension. It’s almost identical to the production environment of Launch; the main difference is that it’s full of extensions in development by people just like me. So you can see what others are working on, and you can get immediate feedback on whether your extension works the way it should. There is even a fair amount of error logging available if you break something – though hopefully this will be expanded in the coming months.

What Could Work Better

Once I finished my extension, I noticed that there isn’t a really natural spot to document how your extension should work. I opted to put mine into the main extension view, even though no other configuration required such a view. While I was working on my extension, it was suggested that I put instructions in my Exchange listing, but that doesn’t seem like a very natural place for them, either.

I also hope that, over time, Adobe offers an easier way to style your views to match theirs. For example, if your extension needs to know the name of a data element it should populate, you need a form field to collect this input. Making that form look the same as everything else in Launch would be ideal. I pulled this off by scraping the HTML and JavaScript from one of Adobe’s own extensions and re-formatting it. But a “style toolkit” would be a nice addition to keep the user experience consistent.

Lastly, while each of the sections in the Getting Started guide had examples, some of the more advanced topics could use additional exploration. For example, it took me a few tries to decide whether my extension would work better with a custom event type, or with custom code that triggered a direct call rule. And figuring out how to integrate with other extensions – how to access other extensions’ objects and code – wasn’t exactly easy; I still have some unanswered questions there, since I found a workaround and ended up not needing it.
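For what it’s worth, the custom-code route amounts to firing a direct call rule once the data you need is available. A sketch, where only `_satellite.track` itself is a real Launch API (the rule identifier and payload shape are made up, and the function wrapper just makes the sketch testable anywhere):

```javascript
'use strict';

// Fire a direct call rule instead of defining a custom event type.
// _satellite.track(identifier, detail) triggers any direct call rules
// configured with that identifier; the identifier and payload here are
// illustrative assumptions.
function notifyRules(satellite, payload) {
  if (satellite && typeof satellite.track === 'function') {
    satellite.track('target-response-ready', payload);
    return true;
  }
  return false; // _satellite isn't on the page yet; nothing to do
}
```

On a real page you would call `notifyRules(window._satellite, data)` from your extension code, and any rule using the “target-response-ready” direct call event would then fire.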

Perhaps the hardest part of the whole process was getting my Exchange listing approved. The Exchange covers a lot of integrations beyond just Adobe Launch, some of which are likely far more complex than mine. A lot of the required images, screenshots, and details seemed like overkill – so a tiered approach to listings would be great, too.

What I’d Like to See Next

Extension development is still in its infancy, but one thing I hope is on the roadmap is the ability to customize an extension to work the way you need it to. A client I recently migrated used both Facebook and Pinterest, but those extensions didn’t work for their tag implementation – there were events and data they needed to capture that the extensions didn’t support. I hope that in a future iteration, I’ll be able to “check out” an extension from the library, download the package, make it work the way I need, and either create my own version of the extension or contribute an update to someone else’s that the whole community can benefit from. The inability to customize tag templates has plagued every paid tag management solution except Tealium (which has supported it from the beginning) for years – in my opinion, it’s what turns tag management from a tool used primarily to deploy custom JavaScript into a powerful digital marketing toolbelt. It’s not something I’d expect so early in the game, but I hope it will be added soon.

In conclusion, my hat goes off to the Launch development team; they’ve come up with a really great way to build a collaborative community that pushes Launch forward. No initial release is ever perfect, but there’s a lot to work with, and a lot of opportunity for all of us to shape the direction Launch takes and influence how it’s adopted. And that’s an exciting place to be.

Photo Credit: Rod Herrea (Flickr)

Adobe Analytics, Featured, Tag Management, Technical/Implementation

A Coder’s Paradise: Notes from the Tech Track at Adobe Summit 2018

Last week I attended my 11th Adobe Summit – a number that seems hard to believe. At my first Summit back in 2008, the Great Recession was just starting, but companies were already cutting back on expenses like conferences – just as Omniture moved Summit from the Grand America to the Salt Palace (they moved it back in 2009 for a few more years). Now, the event has outgrown Salt Lake City – with over 13,000 attendees last week converging on Las Vegas for an event with a much larger footprint than just the digital analytics industry.

With the sheer size of the event and the wide variety of products now included in Adobe’s Marketing and Experience Clouds, it can be difficult to find the right sessions – but I managed to attend some great labs, and wanted to share some of what I learned. I’ll get to Adobe Launch, which was again under the spotlight – only this year, it’s actually available for customers to use. But I’m going to start with some of the other things that impressed me throughout the week. There’s a technical bent to all of this – so if you’re looking for takeaways more suited for analysts, I’m sure some of my fellow partners at Demystified (as well as lots of others out there) will have thoughts to share. But I’m a developer at heart, so that’s what I’ll be emphasizing.

Adobe Target Standard

Because Brian Hawkins is such an optimization wizard, I don’t spend as much time with Target as I used to, and this was my first chance to do much with Target Standard besides deploy the at.js library and the global mbox. But I attended a lab that worked through deploying it via Launch, then setting up some targeting on a single-page ReactJS application. My main takeaway is that Target Standard is far better suited to running an optimization program on a single-page application than Classic ever was. I used to have to utilize nested mboxes and all sorts of DOM trickery to delay content from showing until the right moment. But with Launch, you can easily listen for page updates and then trigger mboxes accordingly.

Target Standard and Launch also make it easier to handle a common issue with frameworks like ReactJS, where the data layer is asynchronously populated with data from API calls – so you can run a campaign on initial page load even if it takes some time for all the relevant targeting data to become available.
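As a rough sketch of that pattern, a Launch rule listening for your SPA’s route change could re-fire Target like this. The `getOffer`/`applyOffer` calls are real at.js APIs; the mbox name, the params, and the stub (which stands in for at.js so the sketch runs anywhere) are illustrative:

```javascript
'use strict';

// Re-fire Target on a single-page-app view change. In Launch this would
// live in a rule triggered by your SPA's route-change event. The stub
// below stands in for at.js when no browser is present.
var adobe = (typeof window !== 'undefined' && window.adobe) || {
  target: {
    getOffer: function (opts) {
      opts.success([{ type: 'setContent', content: 'stub-offer' }]);
    },
    applyOffer: function (opts) { this._applied = opts; }
  }
};

function onViewChange(viewName) {
  adobe.target.getOffer({
    mbox: 'spa-view-mbox',            // illustrative mbox name
    params: { view: viewName },       // let activities target the new view
    success: function (offer) {
      adobe.target.applyOffer({ mbox: 'spa-view-mbox', offer: offer });
    },
    error: function () { /* leave default content in place */ }
  });
}
```

Your framework’s router (or a Launch rule watching history changes) would call `onViewChange` with the new view name, even if the data layer finishes populating a moment later.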

Adobe Analytics APIs

The initial version of the Omniture API was perhaps the most challenging API I’ve ever used. It supported SOAP only, and from authentication to query, you had to configure everything absolutely perfectly for it to work. And you had to do it with no API Explorer and virtually no documentation, all while paying very close attention to the number of requests you were making, since you only had 2,000 tokens per month and didn’t want to run out or get charged for more (I’m not aware this ever happened, but the threat at least felt real!).

Adobe adding REST API support a few years later was a career-changing event for me, and there have been several enhancements and improvements since, like the addition of OAuth authentication support. But what I saw last week was pretty impressive nonetheless. The approach to querying data has changed significantly in the following ways:

  • The next iteration of Adobe’s APIs will offer a much more REST-ful approach to interacting with the platform.
  • Polling for completed reports is no longer required. It will likely take several more requests to get to the most complicated reports, but each individual request will run much faster.
  • Because Analytics Workspace is built on top of a non-public version of the API, you truly will be able to access any report you can find in the UI.
  • The request format for each report has been simplified, with non-essential parameters either removed or at least made optional.
  • The architecture of a report request is fundamentally different in some ways – especially in the way that breakdowns between reports work.
  • The ability to search or filter on reports is far more robust than in earlier versions of the API.

Launch by Adobe

While Launch has been available for a few months, I’ve found it more challenging than I expected to talk my clients into migrating from DTM to Launch. The “lottery” system made some of my clients wonder if Launch was really ready for prime-time, while the inability to quickly migrate an existing DTM implementation over to Launch has been prohibitive to others. But whatever the case may be, I’ve only started spending a significant amount of time in Launch in the last month or so. For customers who were able to attend labs or demos on Launch at Summit, I suspect that will quickly change – because the feature set is just so much better than with DTM.

How Launch Differs from DTM

My biggest complaint about DTM has always been that it hasn’t matched the rest of the Marketing Cloud in terms of enterprise-class features. From the limited number of integrations available to the rigid staging/production publishing structure, I’ve repeatedly run into issues where it was hard to make DTM work the way I needed for some of my larger clients. Along the way, Adobe has repeatedly said they understood these limitations and were working to address them. And Launch does address them – it seems fairly obvious now that DTM lagged behind other systems because Adobe has been putting far more resources into Launch over the past few years. Launch opens up the platform in some really unique ways that DTM never did:

  • You can set up as many environments as you want.
  • Minification of JavaScript files is now standard (it’s still hard to believe this wasn’t the case with DTM).
  • Anyone can write extensions to enhance the functionality and features available.
  • The user(s) in charge of Launch administration for your company have much more granular control over what is eventually pushed to your production website.
  • The Launch platform will eventually offer open APIs to allow you to customize your company’s Launch experience in virtually any way you need.

With Great Power Comes Great Responsibility

Launch offers a pretty amazing amount of control, which makes for some major considerations for each company that implements it. For example, the publishing workflow is flexible to the point of being a bit confusing. Because it’s set up almost like a version control system such as Git, any Launch user can set up his or her own development environment and configure it in any number of ways. This means each user then has to choose which version of every single asset to include in a library, promote to staging/production, etc. So you have to be a lot more careful than when you’re publishing with DTM.

I would hope we’ve reached a point in tag management where companies no longer expect a marketer to be able to own tagging and the TMS – that was the sales pitch from the beginning, but the truth is that it has never been that easy. Even Tealium, which (in my opinion) has the most user-friendly interface and the most marketer-friendly features, needs at least one good developer to tap into the full power of the tool. Launch will be no different; as the extension library grows and more integrations are offered, marketers will probably feel more comfortable making changes than they were with DTM – but this will likely be the exception and not the rule.

Just One Complaint

If there is one thing that will slow migration from DTM to Launch, it is the difficulty customers will face in migrating. One of the promises Adobe made about Launch at Summit in 2017 was that you would be able to migrate from DTM to Launch without updating the embed code on your site. This is technically true – you can configure Launch to publish your production environment to an old DTM production publishing target. But this can only be done for production, and not any other environment – which means you can migrate without updating your production embed code, but you will need to update all your non-production embed codes. Alternatively, you can use a tool like DTM Switch or Charles Proxy for your testing – and that will work fine for your initial testing. But most enterprise companies want to accumulate a few weeks of test data for all the traffic on at least one QA site before they are comfortable deploying changes to production.

It’s important to point out that, even if you do choose to migrate by publishing your Launch configuration to your old production DTM publishing target, you still have to migrate everything currently in DTM over to Launch – manually. Adobe has said that later this year they will release a true migration tool that will allow customers to pull rules, data elements, and tags from a DTM property into a new Launch property without causing errors. Short of such a tool, some customers will have to invest quite a bit to migrate everything they currently have in DTM over to Launch. Until then, my recommendation is to figure out the best migration approach for your company:

  1. If you have at least one rockstar analytics developer with some bandwidth, and a manageable set of rules and tags in DTM, I’d start playing around with migration in one of your development environments, and put together an actual migration plan.
  2. If you don’t have the resources yet, I’d probably wait for the migration tool to be available later in the year – but still start experimenting with Launch on smaller sites or as more resources become available.

Either way, for some of my clients that have let their DTM implementations get pretty unwieldy, moving from DTM to Launch offers a fresh start and a chance to upgrade to Adobe’s latest technology. No matter which of these two situations you’re in, I’d start thinking now (if you haven’t already) about how you’re going to get your DTM properties migrated to Launch. It is superior to DTM in nearly every way, and it is going to get nearly all of the development resources and roadmap attention from Adobe from here on out. You don’t need to start tomorrow – and if you need to wait for a migration tool, you’ll be fine. But if your long-term plan is to stay with DTM, you’re likely going to limit your ability in the future to tap into additional features, integrations and enhancements Adobe makes across its Marketing and Experience Cloud products.

Conclusion

We’ve come a long way from the first Summits I attended, with only a few labs and very little emphasis on the technology itself. Whether it was new APIs, new product feature announcements, or the hands-on labs, there was a wealth of great information shared at Summit 2018 for developers and implementation-minded folks like me – and hopefully you’re as excited as I am to get your hands on some of these great new products and features.

Photo Credit: Roberto Faccenda (Flickr)

Adobe Analytics

Adobe’s new Marketing Cloud Visitor ID: How Does it Work?

A few months ago, I wrote a series of posts about cookies – how they are used in web analytics, and how Google and Adobe (historically) identify your web visitors. Those two topics set the stage for a discussion on Adobe’s current best practices approach for visitor identification.

You’ll remember that Adobe has historically used a cookie called “s_vi” to identify visitors to your site. This cookie is set by Adobe’s servers – meaning that by default it is third-party. Many Adobe customers have gone through the somewhat tedious process of allowing Adobe to set that cookie from one of their own subdomains, making it first-party. This is done by having your network operations team update its DNS settings to assign that subdomain to Adobe, and by purchasing (and annually maintaining) an SSL certificate for Adobe to use. If that sounds like a pain to you, you’re not alone. I remember having to go through the process when I worked at salesforce.com – because companies rightly take their websites, networks, and security seriously, what is essentially 5 minutes of actual work took almost 3 months!
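For reference, the DNS side of that setup boils down to a single CNAME record handing a subdomain you own over to Adobe’s collection servers – something like the fragment below. The hostnames are illustrative, not real provisioning values; Adobe provides the actual CNAME target (and you provide the SSL certificate) during provisioning:

```
; smetrics.example.com becomes the first-party collection domain;
; the CNAME target comes from Adobe during provisioning
smetrics.example.com.   IN   CNAME   example.com.ssl.d1.sc.omtrdc.net.
```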

So a few years back, Adobe came up with another alternative I discussed a few months ago – the “s_fid” cookie. This is a fallback visitor ID cookie set purely in JavaScript, used when a browser rejects Adobe’s third-party cookie on a site that hasn’t moved to first-party collection. It was a nice addition, but it wasn’t a very publicized change, and most analysts may not even know it exists. That may be because, at the time it happened, Adobe already had something better in the works.

The next change Adobe introduced – and, though it happened well over a year ago, only now am I starting to see major traction – was built on top of the Demdex product they acquired a few years ago, now known as Adobe Audience Manager (AAM). AAM is the backbone for identifying visitors across Adobe’s new “Marketing Cloud” suite, and the Marketing Cloud Visitor ID service (AMCV) is the new best practice for identifying visitors to your website. Note that you don’t need to be using Audience Manager to take advantage of the Visitor ID service – it is available to all Adobe customers.

The really great thing about this new approach is that it represents something Adobe customers have wanted for years – a single point of visitor identification. The biggest advantage a company gains in switching is a way to finally, truly integrate some of Adobe’s most popular products. Notice that I didn’t say all Adobe products – but things are finally moving in that direction. The idea here is that if you implement the Marketing Cloud Visitor ID service, and then upgrade to the latest code versions for tools like Analytics and Target, they’ll all use the same visitor ID, which makes for a much smoother integration of your data, your visitor segments, and so on. One caveat is that while the AMCV service has been around for almost 2 years, it’s been a slow ramp-up for companies to implement it. It’s a bit more challenging than a simple change to your s_code.js or mbox.js files. And even if you get that far, it’s then an additional challenge to migrate to the latest version of Target that is compatible with AMCV – a few of my clients that have tried it have hit some bumps in the road along the way. The good news is that it’s a major focus of Adobe’s product roadmap, which means those bumps are getting smoothed out pretty quickly.

So, where to begin? Let’s start with the new cookie containing your Marketing Cloud Visitor ID. Unlike Adobe’s “s_vi” cookie, this cookie value is set with JavaScript and will always be first-party to your site. However, unlike Google’s visitor ID cookie, it’s not set exclusively with logic in the tracking JavaScript. When your browser loads that JavaScript, Adobe sends off an additional request to its AAM servers. What comes back is a bit of JavaScript that contains the new ID, which the browser can then use to set its own first-party cookie. But there is an extra request added in (at least on the first page load) that page-load time and performance fanatics will want to be aware of.

The other thing this extra request does is allow Adobe to set an additional third-party cookie with the same value, which it will do if the browser allows. This cookie can then be used if your site spans multiple domains, allowing you to use the same ID on each one of your sites. Adobe’s own documentation says this approach will only work if you’ve set up a first-party cookie subdomain with them (that painful process I discussed earlier). One of the reasons I’ve waited to write this post is that it took a while for a large enough client, with enough different sites, to be ready to try this approach out. After a lot of testing, I can say that it does work – but since it is based on that initial third-party cookie, it’s a bit fragile. It works best for brand-new visitors that have no Adobe cookies on any of your websites. If you test it out, you’re likely to see most visits to your websites work just like you hoped – and a few where you still get a new ID, instead of the one stored in that third-party cookie. There’s a pretty crazy flow chart that covers the whole process here if you’re more of a visual learner.

Adobe has a lot of information available to help you navigate this migration successfully, and I don’t want to re-hash it here. But the basics are as follows:

  1. Request from Adobe that they enable your account for the Marketing Cloud, and send you your new “Org ID.” This uniquely identifies your company and ensures your visitors get identified correctly.
  2. If you’re using (or want to use) first-party cookies via a CNAME, make sure your DNS records point to Adobe’s latest regional data center (RDC) collection servers. You can read about the details here.
  3. If your migration is going to take time (like if you’re not using tag management or can’t update all your different implementations or sites at the same time), work with Adobe to configure a “grace period” for the transition process.
  4. Update your Analytics JavaScript code. You can use either AppMeasurement or H code – as long as you’re using the latest version.
  5. Deploy the new VisitorAPI JavaScript library. This can happen at the same time you deploy your Analytics code if you want.
  6. Test. And then test again. And just to be safe, test one more time – just to make sure the data being sent back to Adobe looks like you expect it to.
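The code side of steps 4 and 5 is small: the documented pattern is simply to hand the Visitor instance to your s object. The Org ID below is a placeholder, and the stubs just let the sketch run outside a browser (on a real page, VisitorAPI.js and AppMeasurement define Visitor and s for you):

```javascript
'use strict';

// Stubs stand in for VisitorAPI.js and AppMeasurement so this sketch runs
// anywhere; on a real page both libraries are loaded first. The Org ID is
// a placeholder value.
var Visitor = (typeof window !== 'undefined' && window.Visitor) || {
  getInstance: function (orgId) { return { marketingCloudOrgId: orgId }; }
};
var s = (typeof window !== 'undefined' && window.s) || {};

// Hand the Visitor instance to AppMeasurement so every Analytics request
// carries the new "mid" parameter.
s.visitor = Visitor.getInstance('0123456789ABCDEF@AdobeOrg');
```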

Once you finish, you’re going to see something like this in the Adobe Debugger:

[Screenshot: the Adobe Debugger showing the Audience Manager request and the new “mid” parameter]

There are two things to notice here. The first is a request to Adobe Audience Manager, which should contain your Marketing Cloud Org ID. The other is a new parameter in the Analytics request called “mid” that contains your new Marketing Cloud Visitor ID. Chances are, you’ll see both of those. Easy, right? Unfortunately, there’s one more thing to test. After helping a dozen or so of my clients through this transition, I’ve seen a few “gotchas” pop up more than once. The Adobe Debugger won’t tell you if everything worked right, so try another tool like Charles Proxy or Firebug, and find the request to “dpm.demdex.net.” The response should look something like this if it worked correctly:

[Screenshot: a successful response from dpm.demdex.net in Charles Proxy]

However, you may see something like this:

[Screenshot: an error response from dpm.demdex.net in Charles Proxy]

If you get the error message “Partner ID is not provisioned in AAM correctly,” stop your testing (hopefully you didn’t test in production!). You’ll need to work with Adobe to make sure your Marketing Cloud Org ID is “provisioned” correctly. I have no idea how ClientCare does this, but I’ve seen this problem happen enough to know that not everyone at Adobe knows how to fix it, and it may take some time. Whereas my first 4-5 clients all hit this problem the first time they tested, lately it’s been a much smoother process.

If you’ve made it this far, I’ve saved one little thing for last – because it has the potential to become a really big thing. One of the less-mentioned features that the new Marketing Cloud Visitor ID service offers you is the ability to set your own unique IDs. Here are a few examples:

  • The unique ID you give to customers in your loyalty program
  • The unique ID assigned by your lead generation system (like Eloqua, Marketo, or Salesforce)
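Declaring these is a one-liner against the Visitor instance. `setCustomerIDs` is the real API call; the key names and values below are entirely up to you, and the stub standing in for VisitorAPI.js is just so the sketch runs outside a browser:

```javascript
'use strict';

// Stub in place of the real Visitor instance from VisitorAPI.js so this
// sketch runs anywhere; on a real page you'd already have one.
var visitor = (typeof window !== 'undefined' && window.visitor) || {
  setCustomerIDs: function (ids) { this.customerIDs = ids; }
};

// Attach your own identifiers to the Marketing Cloud visitor. The key
// names ("crm_id", "loyalty_id") and values are illustrative.
visitor.setCustomerIDs({
  crm_id: '67312378756723456',
  loyalty_id: 'L-443322'
});
```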

You can read about how to implement these changes here, but they’re really simple. Right now, there’s not a ton you can do with this new functionality – Adobe doesn’t even store these IDs in its cookie yet, or do anything to link those IDs to its Marketing Cloud Visitor ID. But there’s a lot of potential for things it might do in the future. For example, very few tools I’ve worked with offer a great solution for visitor stitching – the idea that a visitor should look the same to the tool whether they’re visiting your full site, your mobile site, or using your mobile app. Tealium’s AudienceStream is a notable exception, but it has less reporting capability than Adobe or Google Analytics – and those tools still aren’t totally equipped to retroactively change a visitor’s unique ID. But creating an “ID exchange” is just one of many steps that would make visitor stitching a realistic possibility.

I’ve intentionally left out many technical details on this process. The code isn’t that hard to write, and the planning and coordination with deploying it is really where I’ve seen my clients tripped up. But the new Marketing Cloud Visitor ID service is pretty slick – and a lot of the new product integration Adobe is working on depends on it. So if you’re an Adobe customer and you’re not using it, you should investigate what it will take to migrate. And if you’ve already migrated, hopefully you’ve started taking advantage of some of the new features as well!