
Switching from Adobe to Google? What You Should Know (Part 2)

Last week, I went into detail on four key differences between Adobe and Google Analytics. This week, I’ll cover four more. This is far from an exhaustive list – but the purpose of these posts is not to cover all the differences between the two tools. There have been numerous articles over the years that go into great detail on many of these differences. Instead, my purpose here is to identify key things that analysts or organizations should be aware of should they decide to switch from one platform to another (specifically switching from Adobe to Google, which is a question I seem to get from one of my clients on a monthly basis). I’m not trying to talk anyone out of such a change, because I honestly feel like the tool is less important than the quality of the implementation and the team that owns it. But there are important differences between them, and far too often, I see companies decide to change to save money, or because they’re unhappy with their implementation of the tool (and not really with the tool itself).

Topic #5: Pathing

Another important difference between Adobe and Google is in path and flow analysis. Adobe Analytics allows you to enable pathing on any traffic variable – in theory, up to 75 dimensions – and you can do path and next/previous flow analysis on any of them. What’s more, with Analysis Workspace, you can also do flow analysis on any conversion variable – meaning that you can analyze the flow of just about anything.

Google’s Universal Analytics is far more limited. You can do flow analysis on both Pages and Events, but not any custom dimensions. It’s another case where Google’s simple UI gives it a perception advantage. But if you really understand how path and flow analysis work, Adobe’s ability to path on many more dimensions, and across multiple sessions/visits, can be hugely beneficial. However, this is an area Google has identified for improvement, and GA4 is bringing new capabilities that may help bring GA closer to par.

Topic #6: Traffic Sources/Marketing Channels

Both Adobe and Google Analytics offer robust reporting on how your users find your website, but there are subtle differences between them. Adobe lets you define as many channels as you want, along with the rules used to identify each one. There are also pre-built rules you can use if you need them. So you can accept Adobe’s built-in way of identifying social media traffic, but also make sure your paid social media links are correctly detected. You can also classify your marketing channel data into as many dimensions as you want.

Google also allows you to define as many channels as you want, but its tool is built around 5 key dimensions: source, medium, campaign, keyword, and content. These dimensions are typically populated using a series of query parameters prefixed with “utm_,” though they can also be populated manually. You can use any dimension to set up a series of channel groupings as well, similar to what Adobe offers.
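For example, a campaign-tagged landing page URL might look something like this (the domain and parameter values here are purely illustrative):

https://www.example.com/landing-page?utm_source=facebook&utm_medium=paidsocial&utm_campaign=spring_sale&utm_term=running+shoes&utm_content=carousel_ad

Google maps utm_source to source, utm_medium to medium, utm_campaign to campaign, utm_term to keyword, and utm_content to content.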

For paid channels, both tools offer more or less the same features and capabilities, but Adobe offers far more flexibility in configuring how non-paid channels should be tracked. For example, Adobe allows you to decide that certain channels should not overwrite a previously identified channel. Google, on the other hand, overwrites any old channel (except direct traffic) as soon as a new channel is identified – and, what’s more, immediately starts a new session when this happens (this is one of the quirkiest parts of GA, in my opinion).

Both tools allow you to report on first, last, and multi-touch attribution – though again, Adobe tends to offer more customizability, while Google’s reporting is easier to understand and navigate. GA4 offers some real improvements that make attribution reporting even easier. Google Analytics is also so ubiquitous that most agencies are immediately familiar with – and ready to comply with – a company’s traffic source reporting standards.

One final note about traffic sources is that Google’s integrations between Analytics and other Google marketing and advertising tools offer real benefits to any company – so much so that I even have clients that don’t want to move away from Adobe Analytics but still purchase GA360 just to leverage the advertising integrations.

Topic #7: Data Import / Classifications

One of the most useful features in Adobe Analytics is Classifications. This feature allows a company to categorize and classify the data captured in a report into additional attributes or metadata. For example, a company might capture the product ID at each step of the purchase process, and then upload a mapping of product IDs to names, categories, and brands. Each of those additional attributes becomes a “free” report in the interface – you don’t need to allocate an additional variable for it. This allows data to be aggregated or viewed in new ways. Classifications are also the only truly retroactive data in the tool – you can upload new values at any time, overwriting the data that was there previously. In addition, Adobe has a powerful rule-building tool that lets you not just upload your metadata, but also write matching rules (even using regular expressions) and have the classifications applied automatically, with the classification tables updated each night.

Google Analytics has a similar feature, called Data Import. On the whole, Data Import is less robust than Classifications – for example, every attribute you want to enable as a new report in GA requires allocating one of your custom dimensions. However, Data Import has one important advantage over Classifications – the ability to process the metadata in two different ways:

  • Query Time Data Import: Using this approach, the metadata you upload gets mapped to the primary dimension (the product ID in my example above) when you run your report. This is identical to how Adobe handles its classification data.
  • Processing Time Data Import: Using this approach, the metadata you upload gets mapped to the primary dimension at the time of data collection. This means that Google gives you the ability to report on your metadata either retroactively or non-retroactively.

This distinction may not be initially obvious, so here’s an example. Let’s say you capture a unique ID for your products in a GA custom dimension, and then you use data import to upload metadata for both brand name and category. The brand name is unlikely to change; a query time data import will work just fine. However, let’s say that you frequently move products between categories to find the one where they sell best. In this case, a query time data import may not be very useful – if you sold a pair of shoes in the “Shoes” category last month but are now selling it under “Basketball,” when you run a report over both months, that pair of shoes will look like it’s part of the Basketball category the entire time. But if you use a processing time data import, each purchase will be correctly attributed to the category in which it was actually sold.
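To make that concrete, here is a rough sketch of the kind of mapping file you would upload in either tool. The column headers and values are hypothetical – the exact format depends on your Data Import dimension scheme in GA or your classification setup in Adobe:

product_id,brand,category
SKU123,Acme,Shoes
SKU124,Acme,Basketball
SKU125,Contoso,Running

With a query time import, the current category is applied to all historical rows when the report runs; with a processing time import, each hit keeps whatever category was in the table at the moment the hit was collected.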

Topic #8: Raw Data Integrations

A few years ago, I was hired by a client to advise them on whether they’d be better off sticking with what had become a very expensive Adobe Analytics integration or moving to Google Analytics 360. I found that, under normal circumstances, they would have been an ideal candidate to move to Google – the base contract would save them money, and their reporting requirements were fairly common and not reliant on Adobe features like merchandising that are difficult to replicate with Google.

What made the difference in my final recommendation to stick with Adobe was that they had a custom integration in place that moved data from Adobe’s raw data feeds into their own massive data warehouse. A team of data scientists relied heavily on integrations that were already built and working successfully, and these integrations would need to be completely rebuilt if they switched to Google. We estimated that the cost of such an effort would likely more than make up the difference in the size of their contracts (it should be noted that the most expensive part of their Adobe contract was Target, and they were not planning on abandoning that tool even if they abandoned Analytics).

This is not to say that Adobe’s data feeds are superior to Google’s BigQuery product; in fact, because BigQuery runs on Google’s ubiquitous cloud platform, it’s more familiar to most database developers and data scientists. The integration between Universal Analytics and BigQuery is built right into the 360 platform, and it’s well structured and easy to work with if you are familiar with SQL. Adobe’s data feeds are large, flat, and require at least cursory knowledge of the Adobe Analytics infrastructure to consume properly (long, comma-delimited lists of obscure event and variable names cause companies all sorts of problems). But this company had already invested in an integration that worked, and it seemed costly and risky to switch.

The key takeaway for this topic is that both Adobe and Google offer solid methods for accessing their raw data and pulling it into your own proprietary databases. A company can be successful integrating with either product – but there is a heavy switching cost for moving from one to the other.

Here’s a summary of the topics covered in this post:

Pathing
  • Google Analytics: Allows pathing and flow analysis only on pages and events, though GA4 will improve on this
  • Adobe: Allows pathing and flow analysis on any dimension available in the tool, including across multiple visits

Traffic Sources/Marketing Channels
  • Google Analytics: Primarily organized around use of “utm” query parameters and basic referring domain rules, though customization is possible; strong integrations between Analytics and other Google marketing products
  • Adobe: Ability to define and customize channels in any way that you want, including for organic channels

Data Import/Classifications
  • Google Analytics: Data can be categorized either at processing time or at query time (query time only available for 360 customers); each attribute requires use of one of your custom dimensions
  • Adobe: Data can only be categorized at query time; unlimited attributes available without use of additional variables

Raw Data Integrations
  • Google Analytics: Strong integration between GA and BigQuery; uses SQL, a skill set most companies already have
  • Adobe: Data feeds are readily available and can be scheduled by anyone with admin access; requires processing of a series of complex flat files

In conclusion, Adobe and Google Analytics are the industry leaders in cloud-based digital analytics tools, and both offer a rich set of features that can allow any company to be successful. But there are important differences between them, and too often, companies that decide to switch tools are unprepared for what lies ahead. I hope these eight points have helped you better understand how the tools are different, and what a major undertaking it is to switch from one to the other. You can be successful, but that will depend more on how you plan, prepare, and execute on your implementation of whichever tool you choose. If you’re in a position where you’re considering switching analytics tools – or have already decided to switch but are unsure of how to do it successfully – please reach out to us and we’ll help you get through it.

Photo credits: trustypics is licensed under CC BY-NC 2.0


Switching from Adobe to Google? What You Should Know (Part 1)

In the past few months, I’ve had the same conversation with at least 5 different clients. After the most recent occurrence, I decided it was time to write a blog post about it. In each case, the client had either decided to migrate from Adobe Analytics to Google Analytics 360 or decided to invest in both tools simultaneously. This isn’t a conversation that is new to me – I’ve had it at least a few times a year since I started at Demystified. But this year has struck me because of both the frequency of these conversations and the lack of awareness among some of my clients of what this undertaking actually means to a company as large as those I typically work with. So I wanted to highlight the things I believe anyone considering a shift like this should know before they jump. Before I get into the feature differences between the tools, I want to note two things that have nothing to do with the tools themselves.

  • If you’re making this change because you lack confidence in the data in your current tool, you’re unlikely to feel better after switching. I’ve seen far too many companies that had a broken process for implementing and maintaining analytics tracking hope that switching platforms would magically fix their problems. I have yet to see a company actually experience that magical change. The best way to increase confidence in your data is to audit and fix your implementation, and then to make sure your analysts have adequate training to use whichever tool you’ve implemented. Switching tools will only solve your problem if it is accompanied by those two things.
  • If you’re making this change to save money, do your due diligence to make sure that’s really the case. Google’s pricing is usually much easier to figure out than Adobe’s, but I have seen strange cases where a company pays more for Google 360 than Adobe. You also need to make sure you consider the true cost of switching – how much will it take to start over with a new tool? Have you included the cost of things like rebuilding back-end processes for consuming data feeds, importing data into your internal data warehouse, and recreating integrations with other vendors you work with?

As we take a closer look at actual feature differences between Adobe and Google, I want to start by saying that we have many clients successfully using each tool. I’m a former Adobe employee, and I have more experience with Adobe’s tools than Google’s. But I’ve helped enough companies implement both of these tools to know that a company can succeed or fail with either one, and a company’s processes, structure, and culture will be far more influential in determining success than which tool you choose. Each has strengths and features that the other does not. But there are a lot of hidden costs in switching that companies often fail to think about beforehand. So if your company is considering a switch, I want you to know the things that might influence that decision; and if your management team has made the decision for you, I want you to know what to expect.

A final caveat before diving in…this series of posts will not focus much on GA4 or the Adobe Experience Platform, which represent the future of each company’s strategy. There are similarities between those two platforms, namely that both allow a company to define its own data schema, as well as more easily incorporate external data sources into the reporting tool (Google’s Analysis tool or Adobe’s Analysis Workspace). I’ll try to call out points where these newer platforms change things, but my own experience has shown me that we’re still a ways out from most companies being ready to fully transition from the old platforms to the new.

Topic #1: Intended Audience

The first area I’d like to consider may be more opinion than fact – but I believe that, while neither company may want to admit it, they have targeted their analytics solutions to different markets. Google Analytics takes a far more democratic approach – it offers a UI that is meant to be relatively easy for even a new analyst to use. While deeper analysis is possible using Data Studio, Advanced Analysis, or BigQuery, the average analyst in GA generally uses the reports that are readily available. They’re fast, easy to run, and offer easily digestible insights.

On the other hand, I frequently tell my clients that Adobe gives its customers enough rope to hang themselves. There tend to be a lot more reports at an analyst’s fingertips in Adobe Analytics, and it’s not always clear what the implications are for mixing different types of dimensions and metrics. That complexity means that you can hop into Analysis Workspace and pretty quickly get into the weeds.

I’ve heard many a complaint from analysts with extensive GA experience who join a company that uses Adobe, usually about how hard it is to find things, how unintuitive the UI is, etc. It’s a valid complaint – and yet, I think Adobe intends for that to be the case. The two tools are different – but they are meant to be that way.

Topic #2: Sampling

Entire books have been written on Google Analytics’ use of sampling, and I don’t want to go into that level of detail here. But sampling tends to be the thing that scares analysts the most when they move from Adobe to Google. For those not familiar with Adobe: it does not sample at all. Whatever report you run will always include 100% of the data collected for that time period (one caveat is that Adobe, like Google, does maintain some cardinality limits on reports, but I consider this to be different from sampling).

The good news is that Google Analytics has dramatically reduced the impact of sampling over the years, to the point where there are many ways to get unsampled data:

  • Any of the default reports in Google’s main navigation menus is unsampled, as long as you don’t add secondary dimensions, metrics, or breakdowns.
  • You always have the option of downloading an unsampled report if you need it.
  • Google 360 customers have the ability to create up to 100 “custom tables” per property. A custom table is a report you build in advance that combines all the dimensions and metrics you know you need. When you run reports using a custom table, you can apply dimensions, metrics, and segments to the report in any way you choose, without fear of sampling. They can be quite useful, but they must be built ahead of time and cannot be changed after that.
  • You can always get unsampled data from BigQuery, provided that you have analysts that are proficient with SQL.

It’s also important to note that most companies that move from Adobe to Google choose to pay for Google 360, which has much higher sampling thresholds than the free version of Google Analytics. The free version of GA turns on sampling once you exceed 500,000 sessions at the property level for the date range you are using. But GA 360 doesn’t apply sampling until you hit 100,000,000 sessions at the view level, or start pulling intra-day data. So not only is the total number much higher, but you can also structure your views in a way that makes sampling even less of an issue.

Topic #3: Events

Perhaps one of the most difficult adjustments for an analyst moving from Adobe to Google – or vice-versa – is event tracking. The confusion stems from the fact that the word “event” means something totally different in each tool:

  • In Adobe, an event usually refers to a variable used by Adobe Analytics to count things. A company gets up to 1000 “success events” that are used to count either the number of times something occurred (like orders) or a currency amount associated with a particular interaction (like revenue). These events become metrics in the reporting interface. The equivalent would be a goal or custom metric in Google Analytics – but Adobe’s events are far more useful throughout the reporting tools than custom metrics. They can also be serialized (counted only once per visit, or counted once for some unique ID).
  • In Google, an event refers to an interaction a user performs on a website or mobile app. These events become a specific report in the reporting interface, with a series of different dimensions containing data about the event. Each event you track has an associated category, action, label, and value. There really is no equivalent in Adobe Analytics – events are like a combination of 3 props and a corresponding success event, all rolled up into one highly useful report (unlike the custom links, file download, and exit links reports). But that report can often become overloaded or cluttered because it’s used to report on just about every non-page-view interaction on the site. (A quick code sketch of both approaches follows this list.)
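To make the terminology difference concrete, here is a rough sketch of how a “video play” interaction might be tracked in each tool, assuming a typical AppMeasurement implementation on the Adobe side and analytics.js (Universal Analytics) on the Google side. The event number, eVar number, and values are placeholders – yours will differ:

// Adobe Analytics: increment a custom success event (a metric) via a link-tracking call
s.linkTrackVars = "events,eVar10";
s.linkTrackEvents = "event12";
s.events = "event12";            // e.g., a "Video Starts" success event
s.eVar10 = "Homepage Hero";      // which video, stored in a conversion variable
s.tl(true, "o", "Video Start");

// Google (Universal) Analytics: send an event hit with category/action/label/value
ga("send", "event", "Videos", "play", "Homepage Hero", 10);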

If you’ve used both tools, these descriptions probably sound very unsophisticated. But it can often be difficult for an analyst to shift from one tool to the other, because he or she is used to one reporting framework, and the same terminology means something completely different in the other tool. GA4 users will note here that events have changed again from Universal Analytics – even page and screen views are considered to be events in GA4, so there’s even more to get used to when making that switch.

Topic #4: Conversion and E-commerce Reporting

Some of the most substantial differences between Adobe and Google Analytics are in their approach to conversion and e-commerce reporting. There are dozens of excellent blog posts and articles about the differences between props and eVars, or eVars and custom dimensions, and I don’t really want to hash that out again. But for an Adobe user migrating to Google Analytics, it’s important to remember a few key differences:

  • In Adobe Analytics, you can configure an eVar to expire in multiple ways: after each hit, after a visit/session, to never expire, after any success event occurs, or after any number of days. But in Google Analytics, custom dimensions can only expire after hits, sessions, or never (there is also the “product” option, but I’m going to address that separately).
  • In Adobe Analytics, eVars can be first touch or last touch, but in Google Analytics, all custom dimensions are always last touch.

These are notable differences, but it’s generally possible to work around those limitations when migrating to Google Analytics. However, there is a concept in Adobe that has virtually no equivalent in Google – and as luck would have it, it’s also something that even many Adobe users struggle to understand. Merchandising is the idea that an e-commerce company might want to associate different values of a variable with each product the customer views, adds to cart, or purchases. There are 2 different ways that merchandising can be useful:

  • Method #1: Let’s consider a customer that buys multiple products, and wants to use a variable or dimension to capture the product name, category, or some other common product attribute. Both Adobe and Google offer this type of merchandising, though Google requires each attribute to be passed on each hit where the product ID is captured, while Adobe allows an attribute to be captured once and associated with that product ID until you want it to expire.
  • Method #2: Alternatively, what if the value you want to associate with the product isn’t a consistent product attribute? Let’s say that a customer finds her first product via internal search, and her second by clicking on a cross-sell offer on that first product. You want to report on a dimension called “Product Finding Method.” We’re no longer dealing with a value that will be the same for every customer that buys the product; each customer can find the same product in different ways. This type of merchandising is much easier to accomplish with Adobe than with Google (a rough sketch follows this list). I could write multiple blog posts about how to implement this in Adobe Analytics, so I won’t go into additional detail here. But it’s one of the main things I caution my Adobe clients about when they’re considering switching.
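As a rough illustration of Method #2, here is what capturing a “Product Finding Method” could look like using Adobe’s product-syntax merchandising versus a product-scoped custom dimension in Google’s Enhanced Ecommerce. The eVar number, dimension index, and values are placeholders, and this is only a sketch of one possible approach:

// Adobe Analytics: merchandising eVar bound to each product via the products string
// (product string syntax: category;product;quantity;price;events;merchandising eVars)
s.linkTrackVars = "products,events";
s.linkTrackEvents = "scAdd";
s.products = ";SKU123;;;;eVar5=internal search,;SKU456;;;;eVar5=cross-sell";
s.events = "scAdd";
s.tl(true, "o", "Cart Add");

// Google (Universal) Analytics Enhanced Ecommerce: product-scoped custom dimension,
// which must be re-sent on every hit that references the product
ga("require", "ec");
ga("ec:addProduct", { id: "SKU123", dimension5: "internal search" });
ga("ec:addProduct", { id: "SKU456", dimension5: "cross-sell" });
ga("ec:setAction", "add");
ga("send", "event", "Ecommerce", "Add to Cart");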

At this point, I want to highlight Google’s suite of reports called “Enhanced Ecommerce.” This is a robust suite of reports on all kinds of highly useful aspects of e-commerce reporting: product impressions and clicks, promotional impressions and clicks, and each step of the purchase process, from seeing a product in a list, to viewing a product detail page, all the way through checkout. It’s built right into the interface in a standardized way, using a standard set of dimensions, which yields a set of reports that will be highly useful to anyone familiar with the Google reporting interface. While you can create all the same types of reporting in Adobe, it’s more customized – you pick which eVars you want to use, choose from multiple options for tracking impressions and clicks, and end up with reporting that is every bit as useful but far less user-friendly than Google’s Enhanced Ecommerce reporting.
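For instance, a product impression in a search-results list might be recorded with something like the following (the field values are illustrative):

// Google (Universal) Analytics Enhanced Ecommerce: record a product impression in a list
ga("require", "ec");
ga("ec:addImpression", {
  id: "SKU123",
  name: "Trail Running Shoe",
  list: "Search Results",
  position: 1
});
ga("send", "pageview");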

In the first section of this post, I posited that the major difference between these tools is that Adobe focuses on customizability, while Google focuses on standardization. Nowhere is that more apparent than in e-commerce and conversion reporting: Google’s Enhanced Ecommerce reporting is simple and straightforward. Adobe requires customization to accomplish a lot of the same things, but by layering on complexity like merchandising, it offers more robust reporting in the process.

One last thing I want to call out in this section is that Adobe’s standard e-commerce reporting allows for easy de-duplication of purchases based on a unique order ID. When you pass Adobe the order ID, it checks to make sure that the order hasn’t been counted before; if it has, it does not count the order a second time. Google, on the other hand, also accepts the order ID as a standard dimension for its reporting – but it doesn’t perform this useful de-duplication on its own. If you want it, you have to build out the functionality as part of your implementation work.
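If you do want that de-duplication in Google Analytics, one common workaround is to remember which order IDs have already been sent (in localStorage or a cookie) and skip the purchase hit on repeat page loads. The sketch below is just one way to approach it, not an official GA feature, and the function and storage key names are made up:

// Minimal client-side de-duplication sketch: only send the purchase once per order ID
function sendPurchaseOnce(orderId, revenue) {
  var sentOrders = JSON.parse(localStorage.getItem("sentOrders") || "[]");
  if (sentOrders.indexOf(orderId) !== -1) {
    return; // this order was already tracked on a previous page load
  }
  ga("require", "ec");
  ga("ec:setAction", "purchase", { id: orderId, revenue: revenue });
  ga("send", "event", "Ecommerce", "Purchase", orderId, { nonInteraction: true });
  sentOrders.push(orderId);
  localStorage.setItem("sentOrders", JSON.stringify(sentOrders));
}

sendPurchaseOnce("ORDER-12345", 59.99);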

Here’s a quick recap on what we’ve covered so far:

Sampling
  • Google Analytics: Standard – above 500,000 sessions during the reporting period; 360 – above 100,000,000 sessions during the reporting period
  • Adobe: Does not exist

Cardinality
  • Google Analytics: Standard – 50,000 unique values per report per day, or 100,000 unique values for multi-day tables; 360 – 1,000,000 unique values per report per day, or 150,000 unique values for multi-day tables
  • Adobe: 500,000 unique values per report per month (can be increased if needed)

Event Tracking
  • Google Analytics: Used to track interactions, using 3 separate dimensions (category, action, label)
  • Adobe: Used to track interactions using a single dimension (i.e. the “Custom Links” report)

Custom Metrics/Success Events
  • Google Analytics: 200 per property; can track whole numbers, decimals, or currency; can only be used in custom reports
  • Adobe: 1,000 per report suite; can track whole numbers, decimals, or currency; can be used in any report; can be serialized

Custom Dimensions/Variables
  • Google Analytics: 200 per property; can be scoped to hit, session, or user; can only be used in custom reports; can only handle last-touch attribution; product scope allows for analysis of product attributes, but nothing like Adobe’s merchandising feature exists
  • Adobe: 250 per report suite; can be scoped to hit, visit, visitor, any number of days, or to expire when any success event occurs; can be used in any report; can handle first-touch or last-touch attribution; merchandising allows for complex analysis of any possible dimension, including product attributes

E-Commerce Reporting
  • Google Analytics: Pre-configured dimensions, metrics, and reports exist for all steps in an e-commerce flow, starting with product impressions and clicks and continuing through purchase
  • Adobe: Pre-configured dimensions and metrics exist for all steps in an e-commerce flow, starting with product views and continuing through purchase; product impressions and clicks can also be tracked using additional success events

This is a good start – but next week, I’ll dive into a few additional topics: pathing, marketing channels, data import/classifications, and raw data integrations. If it feels like there’s a lot to keep track of, that’s because there is. Migrating from one analytics tool to another is a big job – and sometimes the people who make a decision like this aren’t totally aware of the burden it will place on their analysts and developers.

Photo credits: trustypics is licensed under CC BY-NC 2.0


How Adobe Target can help in the craziest of times…

It has been a crazy week, but I don’t have to tell any of you that.  Many of you might be new to working from home, or adjusting to homeschooling (the biggest challenge in my house), changes to your business, health issues, family concerns, etc.  We have never seen anything like this before.  Truly historic times.

Since last Saturday, I have been swamped helping some of my clients that leverage Adobe Target make things easier and better for their digital consumers.  Now that I am getting my head above water, I thought I would share some of the many use cases that have come up, in the hopes that some of you might find them helpful as well.

So, in no particular order:

A.  Geo-Targeting – a few of the retail companies that I work with wanted certain messaging sent to specific DMAs and cities related to store closings, adjusted hours, etc.  I even had a few financial institutions that needed certain content displayed to their customers outside the United States.  Geo-targeting is simply an Activity that is targeted to an audience that uses the built-in geo attributes:

Another helpful utility built into the geo attributes for the telcos out there: you can target your own network or a competitor’s network.  😉

B.  Impression Capping – This has been a popular request this week: show COVID-19-related content, but only for 3 or 4 impressions, and then suppress it.  This is done by leveraging Adobe Target profile attributes.  We simply set up a profile script that increments with each Adobe Target server call (or mbox call for us old-timers), like the one below.
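Here’s a minimal sketch of what that kind of profile script could look like. Adobe Target profile scripts are written in a constrained server-side JavaScript, the script name “pageImpressions” is just an example, and you should check the profile script documentation for the exact self-reference syntax your account expects:

/* Profile script (e.g., named "pageImpressions"): increment on every global mbox call */
if (mbox.name == 'target-global-mbox') {
  return (user.get('user.pageImpressions') || 0) + 1;
}

An audience built on “pageImpressions is less than 4” would then limit the content to the first three impressions, as described below.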

 

 

Then create an Audience like this and use it in the Activity.  This Audience essentially represents 1, 2, or 3 page impressions, assuming a global mbox (every page) deployment.  The fourth impression would kick the visitor out of the test and stop any content associated with it from showing.

C.  Recommendations – quite a bit of work here this week helping customers adjust the Criteria used in Recommendations being made across the site.  The first thing we focused on is the recency of data.  Baseball and soccer cleats were hot items up until this week so adjusting the “most viewed” or “top sellers” to a smaller window made a lot of sense. 

To modify this, within the Criteria, simply drag it all the way to the left, and the data window for suggested products will only include data from the last 24 hours.

The next thing we did was raise the inventory considerations given the high volume of some items being sold.  Again, within the Criteria, you can tell Adobe not to include a particular SKU or product in the Recommendations if the inventory of that product falls below a threshold.

D.  Auto-Allocate – this feature is available with all Adobe Target licenses and not limited to those that have Target Premium.  This feature is huge during short-term marketing initiatives (think Cyber Monday, Black Friday, etc.) but was really helpful this week.  By simply changing the default radio button to what I show below within the Targeting step of the Activity setup, Adobe Target will automatically shift traffic to the better-performing experience.

If you have different messages that you need to convey to your visitors and are unsure which one would be best, you can let your consumers tell you.  Be warned though: I have seen this thing kick some serious butt and shift traffic pretty quickly once confidence is detected.

E.  Emergency Backups – This one came as a bit of a surprise to me this week and is something you all should think about.  I’ve been helping companies use technologies like Adobe Target since 2006 at Offermatica, and I’m sure I’ve been the pseudo-backup for people hundreds of times, but this week things got a bit more formal.

This week I was incorporated into a very formal process with one of the large financial firms that I help a lot with test execution and system integrations.  When the situation arose this week, this firm put very formal processes in place in the event someone is unavailable to work or even get on a phone.  Quite impressive, and a testament to the value of optimization and personalization.

The tactical component of this exercise involved making some adjustments to Adobe Target workspaces (NOT to be confused with Analytics workspaces:) and Adobe Target Product Profiles (NOT the Profile attribute:).  

F.  Test Results – these are not normal times, and visitor behavior, traffic volume, and conversions are likely very atypical.  In most of the scenarios I dove into this week, the test results were not helpful, even though this noise is distributed across all test experiences.  I’d spend more time on qualitative data and use that data, coupled with your testing solution, to help the digital consumer.  An experience declared the winner based on this traffic could very well not be the winner once things normalize.  That said, it depends on what the test is – I had a Recommendations test related to design that could be valid despite the odd traffic.  We are just going to test it again later.

I wish all of you and yours well and let us all continue to flatten the curve.

 


Data Studio (Random) Mini-Tip: Fixing “No Data” in Blends

I encountered a (maybe?) very random issue recently, with a nifty solution that I didn’t know about, so I wanted to share a quick tip.

The issue: I have two metrics, in two separate data sources, and I’d like to blend them so I can sum them. Easy… pretty basic use case, right?

The problem is that one of the metrics is currently zero in the original data source (but I expect it to have a value in the future.) So here’s what I’m working with:

So I take these two metrics, and I blend them. (I ensure that Metric 1, the one with a value, is in fact on the left, since Data Studio blends are a left join.)

And now I pull those same two metrics, but from the blend:

Metric 1 (the one with a value) is fine. Metric 2, on the other hand, is zero in my original data source, but “No data” in the blend.

When I try to create a calculation in the blend, the result is “No data”

GAH! I just want to add 121 + 0! This shouldn’t be complicated… 

(Note that I tried two formulas – Metric1+Metric2 as well as SUM(Metric1)+SUM(Metric2) – and neither worked. Basically… the “No data” caused the entire formula to render “No data”.)

Voila… Rick Elliott to the rescue, who pointed me to a helpful community post, in which Nimantha provided this nifty solution.

Did you know about this formula? Because I didn’t:

NARY_MAX(Metric 1, 0) + NARY_MAX(Metric 2, 0)

Basically, it returns the max of two arguments. So in my case, it returns the max of either Metric1 or 0 (or Metric2 or 0.) So in the case where Metric2 is “No data”, it’ll return the zero. Now, when I sum those two, it works!

MAGIC!

This is a pretty random tip, but perhaps it will help someone who is desperately googling “Data Studio blend shows No Data instead of zero”  🙂


Using Data Studio for Google Analytics Alerts

Ever since Data Studio released scheduling, I’ve found the feature very handy for the purpose of alerts and performance monitoring.

Prior to this feature, I mostly used the in-built Alerts feature of Google Analytics, but I find them to be pretty limiting, and lacking a lot of sophistication that would make these alerts truly useful.

Note that for the purposes of this post, I am referring to the Alerts feature of Universal Google Analytics, not the newer “App+Web” Google Analytics. Alerts in App+Web are showing promise, with improvements such as the ability to add alerts for “has anomaly,” or hourly alerts for web data.

Some of the challenges in using Google Analytics alerts include:

You can only set alerts based on a fixed number or percentage. For example, “alert me when sessions increase by +50%.”

The problem here is that if you set this threshold too low, the alerts will go off too often. As soon as that happens, people ignore them, because they’re constantly “false alarms.” However, if you set the threshold too high, you might not catch an important shift. For example, perhaps sessions dropped by -30% because of some major broken tracking, and it was a big deal, but your alert didn’t go off.

So, to set them at a “reasonable” level, you have to do a bunch of analysis to figure out what the normal variation in your data is, before you even set them up.

What would be more helpful? Intelligent alerts, such as “alert me when sessions shift by two standard deviations.” This would allow us to actually use the variation in historical data, to determine whether something is “alertable”!

Creating alerts is unnecessarily duplicative. If you want an alert for sessions increasing or decreasing by 50%, that’s two separate alerts you need to configure, share with the relevant users, and manage on an ongoing basis (if there are any changes).

Only the alert-creator gets any kind of link through to the UI. You can set other users to be email recipients of your alerts, but they’re going to see a simple alert with no link to view more data. On the left, you’ll see what an added recipient of alerts sees. Compare to the right, which the creator of the alerts will see (with a link to the Google Analytics UI.)

The lack of any link to GA for report recipients means either 1) Every user needs to configure their own (c’mon, no one is going to do that) or 2) Only the report creator is ever likely to act on them or investigate further.

The automated alert emails in GA are also not very visual. You get a text alert, basically, that says “your metric is up/down.” There’s nothing to show you (without going into a GA report) whether there’s just a modest decrease, or whether something precipitously dropped off a cliff! For example, there’s a big difference between “sessions are down -50%” because it was Thanksgiving — versus sessions plummeting due to a major issue.

You also only know that your alert threshold was met, not whether it was hugely exceeded. E.g. the same alert will trigger for “down -50%” whether sessions are down exactly that much or down far more. (Unless you’ve set up multiple, scaling alerts. Which… time consuming…!)

So, what have I been doing instead? 

As soon as Data Studio added the ability to schedule emails, I created what I call an “Alerts Dashboard.” In my case, it contains a few top metrics for each of my clients using GA. (If you are client-side, it could, of course, be just those top metrics for your own site.) You’ll want to include, of course, all of your Key Performance Indicators. But if there are other metrics that are particularly prone to breaking on your site, you’d want to include those as well.

Why does this work? Well, because human beings are actually pretty good pattern detectors. As long as we’ve got the right metrics in there, a quick glance at a trended chart (and a little business knowledge) can normally tell us whether we should be panicking, or whether it was “just Thanksgiving.”

Now to be clear: It’s not really an alerts dashboard. It’s not triggering based on certain criteria. It’s just sending to me every day, regardless of what it says.

But, because it is 1) Visual and 2) Shows up in my email, I find I actually do look at it every day (unlike old school GA alerts.)

On top of that, I can also send it to other people and have them see the same visuals I’m seeing, and they can also click through to the report itself.

So what are you waiting for? Set yours up now.


Go From Zero to Analytics Hero using Data Studio

Over the past few years, I’ve had the opportunity to spend a lot of time in Google’s Data Studio product. It has allowed me to build intuitive, easy-to-use reporting, from a wide variety of data sources, that are highly interactive and empower my end-users to easily explore the data themselves… for FREE. (What?!) Needless to say, I’m a fan!

So when I had the chance to partner with the CXL Institute to teach an in-depth course on getting started with Data Studio, I was excited to help others draw the same value from the product that I have.

Perhaps you’re trying to do more with less time… Maybe you’re tearing your hair out with manual analysis work… Perhaps you’re trying to better communicate your data… Or maybe you set yourself a resolution to add a new tool to your analytics “toolbox” for 2020. Whatever your reasons, I hope these resources will get you started!

So without further ado, check out my free 30-minute webinar with the CXL Institute team here, which will give you a 10-step guide to getting started with Data Studio.

And if you’re ready to really dive in, check out the entire hour-long online course here:

 


Creating Time-Lapse Data via Analysis Workspace

Sometimes, seeing how data changes over time can inform you about trends in your data. One way to do this is to use time-lapse. Who hasn’t been mesmerized by a cool video like this:

Credit: RankingTheWorld – https://www.youtube.com/watch?v=8WVoJ6JNLO8

Wouldn’t it be cool if you could do something similar with Adobe Analytics data? Imagine seeing something like the above time-lapse for your products, product categories, campaign channels! That would be amazing! Unfortunately, I doubt this functionality is on the Adobe Analytics roadmap, but in this post, I am going to show you how you can partially create this using Analysis Workspace and add time-lapse to your analytics presentations.

Step 1 – Isolate Data

To illustrate this concept, let’s start with a simple example. Imagine that you have a site that uses some advanced browser features of Google Chrome. It is important for you to understand which version of Chrome your website visitors are using and how quickly they move from one version to the next. You can easily build a freeform table in Analysis Workspace that isolates visits from a bunch of Google Chrome versions like this:

Here you can see that the table goes back a few years and views Visits by various Chrome versions using a cross-tab with values from the Browser dimension.

Step 2 – Add a Chart Visualization

The next step is to add a chart visualization. I have found that there are only three types of visualizations that work for time-lapse: horizontal bar, treemap and donut. I will illustrate all of these, but to start, simply add a horizontal bar visualization and link it to the table created above:

When you first add this chart visualization, it may look a bit strange since it has so much data, but don’t worry, we will fix it in a minute. Once you add it, be sure to use the gear icon to customize it so it has enough rows to encompass the number of items you have added to your table (I normally choose the maximum of 25):

Step 3 – Create Time-Lapse

The final step is to create the time-lapse. To do this, you need some sort of software that will allow you to record your screen. I use a Mac product called GIF Brewery 3, but you can use Snagit, GoTo Meeting, Zoom, etc. Once you have selected how you want to record the time-lapse, you have to learn the trick in Analysis Workspace that allows you to cycle through your data by week. The trick is to click on the cell directly to the right of the first time period (the week of July 2, 2017 in my example) and then use your left arrow to move one cell to the left. This will allow you to select the entire first row, as illustrated here:

Once you have the entire row selected, you can use the down arrow to scroll down one row at a time. Therefore, if you start recording, select the cell to the right of the first time period, select the row and then continue pressing the down arrow, you can stop the recording when you get to the end. Then you just have to clean it up (I cut off a bit at the beginning and end) and save it as a video file. Using GIF Brewery 3, I can turn these recordings into animated GIFs which are easy to embed into Powerpoint, Keynote or Google Slides.

Here is what the time-lapse for the Chrome browser scenario looks like when it is completed:

Another visualization type I mentioned was the treemap. The process is exactly the same: you simply link the treemap to your table and record the same way to produce something like this:

Venn Visualization

As mentioned above, I have found that time-lapse works best with horizontal bar, treemap, and donut visualizations. Another cool one is the Venn visualization, but it has to be handled a bit differently than the previous examples. The following are the steps to do a time-lapse with the Venn visualization.

First, choose the segments and metric you want to add to the Venn visualization. As an example, I am going to look at what portion of all Demystified visits view one of my blog posts, and also how many people have viewed the page about the Adobe Analytics Expert Council (AAEC). I start by adding segments to the Venn visualization:

Next, I am going to expose the data table that is populating the Venn visualization:

Then I use a time dimension to break down the table. In this case, I will use Week:

From here, you can follow the same steps to record weekly time-lapse to produce this:

Sample Use Cases

This concept can be applied to many other data points found in Adobe Analytics. For example, I recently conducted a webinar with Decibel, an experience analytics provider, in which we integrated Decibel experience data into Adobe Analytics to view how many visitors were having good and bad website experiences. We were then able to view experience over time using time-lapse. In the following clip, I have highlighted in the time-lapse when key events took place on the website:

If you want to memorialize the time when your customers officially started ordering more products from their mobile phone than the desktop, you can run this device type time-lapse:

Another use case might be blog readership if you are a B2B company. Oftentimes, blogs are used to educate prospects and drive lead generation. Here is an example in which a company wanted to view how a series of blogs were performing over time. Once again, you simply create a table of the various blogs (in this case I used segments since each blog type had several contributors):

In this case, I will use the donut chart I mentioned earlier (though it is dangerously close to a pie chart, which I have been told is officially uncool!):

Here is the same data in the typical horizontal bar chart:

As a bonus tip, if you want to see a cumulative view of your data in a time-lapse, all you need to do is follow the same process, but with a different metric. You can use the Cumulative formula in the calculated metric builder to sum all previous weeks as you go and then do a time-lapse of the sum. In this blog example, here is the new calculated metric that you would build:

Once you add this to your table, it will look like this:

Then you just follow the same steps to record your time-lapse:

Final Thoughts

These are just a few examples of how this concept can be applied. In your case, you might want to view a time-lapse of your top ten pages, campaign codes, etc. It is really up to you to decide how you want to use it. I have heard rumors that Analysis Workspace will soon allow you to add images to projects, so it would be cool if you could add animated GIFs or videos like this right into your project!

Other things to note: when you use the treemap and donut visualizations, Analysis Workspace may switch the placements and colors when one number increases over the other, so watch out for that. Another general “gotcha” I have found with this approach is that you have to pre-select the items you want in your time-lapse. It would be cool if there were a way to have the Adobe Analytics time-lapses work like the first market-cap one shown above, in which new values can appear and disappear based upon data changes, but I have not yet found a way to do that. If you can find a way, let me know!


Profile Playbook for Adobe Target


This blog post provides a very thorough overview of what Adobe Target’s profile is and how it works.  Additionally, we’ve included 10 profile scripts that you can start using immediately in your Adobe Target account. 

We also want to share a helpful tool that will allow you to see the Adobe Target profile in action.  This Chrome Extension allows Adobe Target users to visualize, edit, and add profile attributes tied to their Adobe Target ID or their 1st-party organizational ID.  Here is a video that shows it in action, and if you want to read about all the free Adobe Target features in the extension, please check out this blog post.

THE PROFILE

The Adobe Target profile is the most valuable component of the Adobe Target platform. Without this profile, Adobe Target would be a relatively simple A/B testing solution.  The profile allows organizations to take their optimization program to levels not normally achievable with typical testing tools.  The profile and its profiling capabilities allow organizations to define attributes for visitors for targeting and segmenting purposes.  These attributes are independent of any tests and essentially create audiences that can be managed automatically.

As a general example, let’s say an organization decided to build an audience of purchasers.  

Within the Adobe Target user interface, users can create profile attributes based on any data that gets passed to Target.  When someone makes a purchase, the URL could contain something like “thank-you.html” or something along those lines.

URLs, among other things, are automatically passed to Adobe Target.  So within Target, under Audiences and then Profile Scripts, a Target user can say “IF URL CONTAINS ‘thank-you’, set the purchaser attribute to TRUE.”

Once saved, anytime a visitor sees a URL that contains ‘thank-you’, they will automatically attain the profile attribute of ‘purchaser’ with a value of ‘true’.  This audience will continue to grow automatically on its own, and if you had a test targeted to purchasers, visitors who purchased would automatically be included in that test.
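A minimal profile script for this purchaser example might look something like the following – it mirrors the ‘myaccount.html’ script shown later in this post, and the attribute name is entirely up to you:

/* Profile script (e.g., named "purchaser"): flag anyone who reaches an order confirmation URL */
if (page.url.indexOf('thank-you') > -1) {
  return 'true';
}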

Audiences like purchasers can be built based on any event, offline or online, when data is communicated to Adobe Target.  The Adobe Target profile is immediate, in that Adobe’s infrastructure updates and evaluates the profile before returning test content.  This allows newly created audiences to be used IMMEDIATELY on that first impression.

The image below outlines what happens when calls are made from your digital properties to the global edge network of Adobe Target.  Here you can see just how important the profile is as it is the first thing that gets called when Adobe receives a network request.  

The profile is much more than this simple example of creating an audience.  The Adobe Target Profile is:

  • The backbone of Adobe Target:  all test or activity participation is stored as visitor profile attributes in Adobe Target.  In the image below, you can see our Analytics Demystified home page and, on the right, the MiaProva Chrome Extension highlighting four tests that I am in on this page and a test that my Visitor ID is associated with in another location.  Tests and test experiences are just attributes of the unique visitor ID.

  • Independent of any single activity or test:  This profile and all attributes associated with it are not limited to any single test or group of tests and can be used interchangeably across any test type in Adobe Target.
  • An OPEN ID for custom audience creation:  The profile and its attributes map directly to the Adobe Target visitor ID, and this ID can be shared, coupled, and joined with other systems and IDs.  Before there was A4T, for example, you could push your Adobe Target visitor ID to an eVar, create audiences in Analytics, and then target a test to the Target IDs that mapped to the data in Analytics.  This ID is set automatically and can easily be shared with other systems internally or externally.
  • Empowerment of 1st, 2nd, and 3rd party data: the profile allows audiences to be created and managed in Adobe Target.  These audiences can be constructed from 1st-party data (an organization’s own data), 2nd-party data (Adobe Analytics/Target, Google Analytics, etc.), or 3rd-party data (Audience Manager, DemandBase, etc.).  The profile allows you to consolidate data sources and use them interchangeably, giving you the ability to test out any strategy without the limitations that data sources typically have.

  • Cross-device test coordination: Adobe Target has a special reserved parameter name called ‘mbox3rdPartyId’ (more on that below), but essentially this is YOUR organization’s visitor ID.  If you pass this ID to Adobe Target, any and all profile attributes are then mapped to that ID, which means tests and audiences built on those attributes can follow the same person across devices wherever that ID is available.
  • Exportable client-side dynamically:  Profile attributes can be used in offers within tests or activities, and they can be used as Response Tokens (more on Response Tokens later).  To the right here is our Chrome Extension, and the boxed area “Adobe Target Geo Metadata” is actually profile attributes, or profile tokens, injected into the Chrome Extension via Target.

Here is what the offer looks like in Target:

<div class="id_target">
  <h2>Adobe Target Geo Metadata</h2>
  <h3>City: ${user.city}<br>
  State: ${user.state}<br>
  Country: ${user.country}<br>
  Zip: ${user.zip}<br>
  DMA: ${user.dma}<br>
  Latitude: ${profile.geolocation.latitude}<br>
  Longitude: ${profile.geolocation.longitude}<br>
  ISP Name: ${user.ispName}<br>
  Connection Speed: ${user.connectionSpeed}<br>
  IP Address: ${user.ipaddress}</h3>
</div><br>
<div class="id_map">
  <iframe allowfullscreen frameborder="0" height="250" src="https://www.google.com/maps/embed/v1/search?key=AIzaSyAxhzWd0cY7k-l4EYkzzzEjwRIdtsNKaIk&q=${user.city},${user.state},${user.country}" style="border:0" width="425"></iframe>
</div>

The ${…} tokens are actually profile attributes that I have in my Adobe Target account.

When you use them in Adobe Target offers, they are called tokens, and these tokens are dynamically replaced by Target with the values of the profile attributes.  You can even see that I am also passing Adobe Target profile attributes to Google’s mapping service to return a map based on what Adobe considers to be my geolocation.

  • How Automated Personalization does its magic:  Automated Personalization is one of Adobe’s Activity types that uses propensity scoring and models to decide what content to present to individuals.  Without passing any data to Adobe Target, Automated Personalization uses what data it does see, by way of the mbox or Adobe Target tags, to learn what content works well with which visitors.  To get more value out of Automated Personalization, an organization typically passes additional data to Adobe Target for the models to use in content decisions.  Any and all data supplied to Sensei or Automated Personalization beyond the data that Adobe Target collects automatically is supplied as profile attributes.  Similarly, the data that you see in the Insights and Segments reports of Automated Personalization comes from profile attributes (image below of an example report).

  • The mechanism by which organizations can make use of their internal models:  Because the Adobe Target profile and its attributes are all mapped to the Adobe Target ID or your organizational ID, you can import any offline scoring that your organization may be doing.  Several organizations are doing this and seeing considerable value.  The profile makes it easy to have that data sitting there, waiting for the digital consumer to be seen again, so Target can respond automatically with the content called for by the model or strategy.

HOW TO CREATE PROFILES

The beautiful part of the Adobe Target profile is that it is created automatically as soon as digital consumers come in contact with Adobe Target.  This is the case no matter how you use Adobe Target (client-side, server-side, SDK, etc.).  When we want to leverage the profile’s ability to define audiences, we are not really creating profiles so much as creating profile attributes that are associated with a profile, which is in turn mapped directly to Adobe Target’s ID or your organization’s ID.

There are three main ways to create profile attributes.  No matter how they are created, they all function exactly the same way within Adobe Target.  The three ways that Adobe Target users can create profile attributes are by way of the mbox (passing parameter values as profile parameters), within the Adobe Target user interface, and programmatically via an API.

Client-Side

This is going to be the most popular and easiest way to get profile attributes into your Adobe Target account.  For those of you that have sound data layers or have rich data in your tag management system, you are going to love this approach. When Adobe Target is implemented, you can configure data to be passed to the call that is made to Adobe when a visitor consumes your content.  This data can be from your data layer, cookies, your tag management, or third-party services that are called.

The image below is from the MiaProva Chrome Extension and highlights the data being passed to Adobe when Adobe Target is called.  The call made by Adobe Target is often referred to as an mbox call (mbox being short for marketing box), and the data passed along with it are called mbox parameters.

If you look at #3 below in the image, that is an mbox parameter – but because its name starts with the “profile.” prefix, it becomes a profile attribute that is immediately associated with the IDs at #1 (your organizational ID) and #2 (Adobe Target’s visitor ID).

The important thing to note is that you are limited to 50 profile attributes per mbox call to Adobe Target.
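To make this concrete, here is a minimal sketch of what the client-side approach can look like if you are using at.js: parameters returned from window.targetPageParams ride along on the global mbox call, and anything prefixed with “profile.” becomes a profile attribute. The attribute names and the digitalData references below are placeholders for whatever actually lives in your data layer.

// Sketch only: "profile."-prefixed parameters become Adobe Target profile attributes.
// The attribute names and data layer paths are hypothetical placeholders.
window.targetPageParams = function () {
  var dl = window.digitalData || {};   // your data layer object, if you have one
  return {
    "profile.memberStatus": (dl.user && dl.user.memberStatus) || "unknown",
    "profile.favoriteCategory": (dl.page && dl.page.category) || "none"
  };
};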

Server-side – within your Adobe Target account

The client-side approach will likely be your go-to method especially if you have investments in data layers and tag management.  That said, there is another great way to create these profile attributes right within your Adobe Target account.

This method is quite popular because it requires no change to your Adobe Target implementation and anyone with Approver rights in your Target account can create them.  I especially appreciate that it allows for processing, similar to Adobe I/O Runtime, to be done server side.

This method can be intimidating, though, because it requires some scripting experience to really take advantage of all of its benefits.  Essentially, you are creating logic based on the data Adobe Target receives, coupled with the values of any other profile attributes.

Here is a good example: let’s say we want an audience of current customers, and we know that only customers see a URL that contains “myaccount.html”.  When Adobe Target makes its call to Adobe, it passes along the URL.  With this server-side approach, we want to say: if the URL contains “myaccount.html”, set a profile attribute of customer equal to true.

Here is what that would look like in Target:

And the script used:

if (page.url.indexOf('myaccount.html') > -1) { return 'true'; }

Developers and people comfortable with scripting love this approach but for those not familiar with scripting, you can see how it can be intimidating.  

After scripts like this are saved, they live in the “Visitor Profile Repository” and are a key component of the “Profile Processing” as seen in the image below.  Your Adobe Target account will process any and all of these scripts and update their values if warranted. This all happens before test content is returned so that you can use that profile and its values immediately on the first impression.  

To access this server-side configuration of Adobe Target profile attributes, simply click on Audiences in the top-navigation and then on Profile Scripts in the left navigation.  

10 Profile Templates:  Below are 10 great profile scripts (details, attribute name, and script for each) that you can use immediately in your Adobe Target account.  Once these scripts are saved, the audiences they create will immediately start to grow.  These scripts are a great starting point and help you realize the potential of this approach.

 

  1. visitnumber: Retains the current visit number of the visitor.

if (user.sessionId != user.getLocal('lastSessionId')) {
  user.setLocal('lastSessionId', user.sessionId);
  return (user.get('visitnumber') | 0) + 1;
}

  2. ip_address: Associates the IP address with the visitor, enabling you to target activities to certain IP addresses.

user.header('x-cluster-client-ip');

  3. purchasefrequency: Increases with each purchase, as defined by impressions of the 'orderConfirmPage' mbox that typically exists on thank-you pages.

if (mbox.name == 'orderConfirmPage') {
  return (user.get('purchasefrequency') | 0) + 1;
}

  4. qa: One of my favorites, as it allows you to QA tests without having to repeat the entry conditions of the tests.  Simply use the letters “qa” as part of your query string and this profile attribute is set to true!  Very popular attribute.

if (page.param('qa')) {
  return 'true';
}

  5. day_of_visit: Day of the week.  Helpful in that it shows how standard JavaScript functions can be incorporated.

if (mbox.name == 'target-global-mbox') {
  var today = new Date().getDay();
  var days = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
  return days[today];
}

  6. amountSpent: Sums up the total revenue per visitor as they make multiple purchases over time.

if (mbox.name == 'orderConfirmPage') {
  return (user.get('amountSpent') || 0) + parseInt(mbox.param('orderTotal'));
}

  7. purchaseunits: Sums up the number of items purchased by a visitor over time.

if (mbox.name == 'orderConfirmPage') {
  var unitsPurchased;
  if (mbox.param('productPurchasedId').length === 0) {
    unitsPurchased = 0;
  } else {
    unitsPurchased = mbox.param('productPurchasedId').split(',').length;
  }
  return unitsPurchased;
} else {
  return '0';
}

  8. myaccount: Simply sets a value of true based on the URL of the page.  You can easily modify this script for any page that is important to you.

if (page.url.indexOf('myaccount') > -1) {
  return 'true';
}

  9. form_complete: A good example of using an mbox name and an mbox parameter to set an attribute.  I used this one for Marketo when a user submits a form; it creates a “known” audience segment.

if ((!user.get('marketo_mbox')) && (mbox.param('form') == 'completed')) {
  return 'true';
}

  10. random_20_group: Enables mutual exclusivity in your Adobe Target account by creating 20 mutually exclusive swim lanes.  Visitors are randomly assigned a group, 1 through 20.  (Note: the 'query' variable in this script is computed but not used further as written.)

if (!user.get('random_20_group')) {
  var ran_number = Math.floor(Math.random() * 99);
  var query = (page.query || '').toLowerCase();
  query = query.indexOf('testgroup=') > -1 ? query.substring(query.indexOf('testgroup=') + 10) : '';
  if (ran_number <= 4) {
    return 'group1';
  } else if (ran_number <= 9) {
    return 'group2';
  } else if (ran_number <= 14) {
    return 'group3';
  } else if (ran_number <= 19) {
    return 'group4';
  } else if (ran_number <= 24) {
    return 'group5';
  } else if (ran_number <= 29) {
    return 'group6';
  } else if (ran_number <= 34) {
    return 'group7';
  } else if (ran_number <= 39) {
    return 'group8';
  } else if (ran_number <= 44) {
    return 'group9';
  } else if (ran_number <= 49) {
    return 'group10';
  } else if (ran_number <= 54) {
    return 'group11';
  } else if (ran_number <= 59) {
    return 'group12';
  } else if (ran_number <= 64) {
    return 'group13';
  } else if (ran_number <= 69) {
    return 'group14';
  } else if (ran_number <= 74) {
    return 'group15';
  } else if (ran_number <= 79) {
    return 'group16';
  } else if (ran_number <= 84) {
    return 'group17';
  } else if (ran_number <= 89) {
    return 'group18';
  } else if (ran_number <= 94) {
    return 'group19';
  } else {
    return 'group20';
  }
}

API

The third approach that we highlight is by way of the API.  Many organizations leverage this approach because the data they want to turn into profile attributes is not available online, so passing it client-side is not an option.  Profile scripts don’t help here either, because they can only work with data that is already part of the calls Adobe Target receives.  Many financial institutions, and organizations that have conversion events happening offline, typically use this approach.

Essentially, how this works is you leverage Adobe’s API to push data (profile attributes) to Adobe based on your visitor ID (mbox3rdPartyId) or by Adobe Target’s ID.  The documentation on this approach can be found here: http://developers.adobetarget.com/api/#updating-profiles
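As a rough sketch (based on the single-profile update endpoint described in the documentation linked above), an update can be as simple as a request with “profile.”-prefixed query parameters keyed to your mbox3rdPartyId.  The client code, ID, and attribute names below are placeholders.

// Minimal sketch of pushing offline attributes to a single profile via the Profile Update API.
// "myclientcode", the CRM ID, and the attribute names are hypothetical placeholders.
const params = new URLSearchParams({
  mbox3rdPartyId: "crm-12345",             // your organization's visitor ID
  "profile.churnRisk": "low",              // offline model output you want available for targeting
  "profile.lifetimeValueBand": "high"
});

fetch("https://myclientcode.tt.omtrdc.net/m2/myclientcode/profile/update?" + params.toString())
  .then(function (res) { return res.text(); })
  .then(function (body) { console.log("Profile update response:", body); });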

mbox3rdPartyId or thirdPartyId

This is one of the easiest things you can do with your Adobe Target account, and yet it is one of the most impactful things you can do for your optimization program.

The mbox3rdPartyId is a special parameter name that is used when you pass YOUR visitor ID to Adobe Target.  

The image to the right is the MiaProva Chrome Extension showing the data communicated to Adobe Target; the highlighted value is the mbox3rdPartyId in action.

Here I am mirroring my ID with the Adobe ID.  This allows me to coordinate tests across devices, so that if a visitor gets Experience B on one device, they will continue to get Experience B on any other device that shares this ID.

Any data that is available offline under this ID can be imported into Adobe Target via the API!  This enables offline modeling and having targeting in place even before the digital consumer arrives on your digital properties.

If your digital property has a visitor ID that it manages, you most definitely want to integrate it with Adobe Target.
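If you are running at.js, one hedged way to do that is to return the ID from targetPageParams so it is sent as mbox3rdPartyId on the global mbox call (combine this with any profile parameters you already return from that hook).  The getLoggedInUserId helper below is a placeholder for however your site exposes its own visitor ID.

// Sketch: pass your own visitor ID to Adobe Target as mbox3rdPartyId.
// getLoggedInUserId() is a hypothetical helper (cookie, data layer, CRM ID, etc.).
window.targetPageParams = function () {
  var params = {};
  var userId = typeof getLoggedInUserId === "function" ? getLoggedInUserId() : null;
  if (userId) {
    params.mbox3rdPartyId = userId;
  }
  return params;
};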

Response Tokens

To allow organizations to easily make profile attributes and their values available to other systems, Adobe Target has Response Tokens.  Within your Adobe Target account, under “Setup” and then “Response Tokens” as seen in the image below, you can toggle individual profile attributes on or off as Response Tokens.

When you turn a toggle on, Adobe Target will return that profile attribute’s value to the page or location the Adobe Target call came from.

This feature is how Adobe Target can integrate with third-party analytics tools such as Google Analytics.  It is also how the MiaProva Chrome Extension works; part of that setup involves turning on the attributes toggled above.

The image immediately below shows what the Adobe Target response looks like where I have a test running.  The first component (in green) is the offer that changes the visitor’s experience as part of the test.  The second component (in blue) is the set of response tokens that have been turned on.  It is a pretty easy way to get your profile attributes into your data layer or into other tools such as ClickTale, internal data lakes, Heap, MiaProva, etc.
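If you want those response tokens in your own data layer without the Chrome Extension, at.js also exposes them through its custom events.  The sketch below assumes an at.js version that surfaces response tokens on the request-succeeded event detail, so check the response tokens documentation for the library version you run; the dataLayer event name is just a placeholder.

// Sketch: push response tokens (enabled profile attributes) to a data layer as Target responds.
// Assumes at.js is already loaded and that e.detail.responseTokens exists in your at.js version.
document.addEventListener(adobe.target.event.REQUEST_SUCCEEDED, function (e) {
  var tokens = (e.detail && e.detail.responseTokens) || [];
  window.dataLayer = window.dataLayer || [];
  tokens.forEach(function (token) {
    window.dataLayer.push({ event: "targetResponseToken", token: token });
  });
});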

Expiration

A very important thing to note: by default, the Adobe Target profile expires after 14 days of inactivity.  You can submit a ticket to Client Care to extend this lifetime; they can extend it to 12 to 18 weeks.  This is a rolling window based on inactivity.  So if a visitor arrives on day 1 and then again on day 85, the visitor ID and its attributes will already be gone if your profile expiration was set at 12 weeks (84 days).

If the visitor is seen at any point before the profile expires, Adobe Target pushes the expiration back by another full expiration period.

 

Adobe Analytics, Featured

B2B Conversion Funnels

One of the unique challenges of managing a B2B website is that you often don’t actually sell anything directly. Most B2B websites are there to educate, create awareness and generate sales leads (normally through form completions). Retail sites have a very straightforward conversion funnel: Product Views to Cart Additions to Checkouts to Orders. But B2B sites are not as linear. In fact, there is a ton of research that shows that B2B sales consideration cycles are very long and potential customers only reach out or self-identify towards the end of the process.

So if you work for a B2B organization, how can you see how your website is performing if the conversion funnel isn’t obvious? One thing you can do is to use segmentation to split your visitors into the various stages of the buying process. Some people subscribe to the Awareness – Consideration – Intent – Decision funnel model, but there are many different types of B2B funnel models that you can choose from. Regardless of which model you prefer, you can use digital analytics segmentation to create visitor buckets and see how your visitors progress through the buying process.

To illustrate this, I will use a very basic example using my website. On my website, I write blog posts, which [hopefully] drive visitors to the site to read, which, in turn, gives me an opportunity to describe my consulting services (of course, generating business isn’t my only motivation for writing blog posts, but I do have kids to put through college!). Therefore, if I want to identify which visitors I think are at the “Awareness” stage for my services, I might make a segment that looks like this:

Here I am saying that someone who has been to my website more than once and read more than one of my blog posts is generally “aware” of me. Next, I can create another segment for those that might be a bit more serious about considering me like this:

Here, you can see that I am raising the bar a bit and saying that to be in the “Consideration” bucket, they have to have visited at least 3 times and viewed at least three of my blog posts. Lastly, I will create a third bucket called “Intent” and define it like this:

Here, I am saying that they had to have met the criteria of “Consideration” and viewed at least one of the more detailed pages that describe my consulting services. As I mentioned, this example is super-simplistic, but the general idea is to place visitors into sales funnel buckets based upon what actions they can do on your website that might indicate that they are in one stage or another.

However, these buckets are not mutually exclusive. Therefore, what you can do is place them into a conversion funnel report in your digital analytics tool. This will apply these segments but do so in a progressive manner taking into account sequence. In this case, I am going to use Adobe’s Analysis Workspace fallout visualization to see how my visitors are progressing through the sales process (and I am also applying a few segments to narrow down the data like excluding competitor traffic and some content unrelated to me):

Here is what the fallout report looks like when it is completed:

In this report, I have applied each of the preceding three segments to the Visits metric and created a funnel. I also use the Demandbase product (which attempts to tell me what company anonymous visitors work for), so I segmented my funnel for all visitors and for those where a Demandbase Company exists. Doing this, I can see that for companies that I can identify, 55% of visitors make it to the Awareness stage, 27% make it to the Consideration stage, but only 2% make it to the Intent stage. This allows you to see where your website issues might exist. In my case, I am not very focused on using my content to sell my services and this can be seen in the 25% drop-off between Consideration and Intent. If I want to see this trended over time, I can simply right-click and see the various stages trended:

In addition, I can view each of these stages in a tabular format by simply right-clicking to create a segment from each touchpoint and adding those segments to a freeform table. Keep in mind that these segments will be different from the Awareness, Consideration, and Intent segments shown above because they take sequence into account, since they come from the fallout report (using sequential segmentation):

Once I have created segments for all funnel steps, I can create a table that looks like this:

This shows me which known companies (via Demandbase) have unique visitors at each stage of the buying process and which companies I might want to reach out to about getting new business. If I want, I can right-click and make a new calculated metric that divides the Intent visitor count by the Awareness visitor count to see who might be the most passionate about working with me:

Summary

So this is one way that you can use the power of segmentation to create B2B sales funnels with your digital analytics data. To read some other posts I have shared related to B2B, you can check out the following, many coming from my time at Salesforce.com:

Adobe Analytics, Featured

New Cohort Analysis Tables – Rolling Calculation

Last week, Adobe released a slew of cool updates to the Cohort Tables in Analysis Workspace. For those of you who suffered through my retention posts of 2017, you will know that this is something I have been looking forward to! In this post, I will share an example of how you can use one of the new updates, a feature called rolling calculation.

A pretty standard use case for cohort tables is looking to see how often a website visitor came to your site, performed an action and then returned to perform another action. The two actions can be the same or different. The most popular example is probably people who ordered something on your site and then came back and ordered again. You are essentially looking for “cohorts” that were the same people doing both actions.

To illustrate this, let’s look at people who come to my blog and read posts. I have a success event for blog post views and I have a segment created that looks for blog posts written by me. I can bring these together to see how often my visitors come back to my blog each week: 

I can also view this by month:

These reports are good at letting me know how many visitors who read a blog post in January of 2018 came back to read a post in February, March, etc… In this case, it looks like my blog posts in July, August & September did better than other months at driving retention.

However, one thing that these reports don’t tell me is whether the same visitors returned every week (or month). Knowing this tells you how loyal your visitors are over time (bearing in mind that cookie deletion will make people look less loyal!). This ability to see the same visitors rolling through all of your cohort reports is what Adobe has added.

Rolling Calculation

To view how often the same people return, you simply have to edit your cohort table and check off the Rolling Calculation box like this:

This will result in a new table that looks like this:

Here you can see that very few people are coming to my blog one week after another. For me, this makes sense, since I don’t always publish new posts weekly. The numbers look similar when viewed by month:

Even though the rolling calculation cohort feature can be a bit humbling, it is a really cool feature that can be used in many different ways. For example, if you are an online retailer, you might want to use the QUARTER granularity option to see what percentage of visitors purchase from you at least once every quarter. If you manage a financial services site, you might want to see how often the same visitors return each month to check their online bank statements or make payments.

Segmentation

One last thing to remember is that you still have the ability to right-click on any cohort cell and create a segment. This means that in one click you can build a segment for people who come to your site in one week and return the next week. It is as easy as this:

The resulting segment will be a bit lengthy (and a bit intimidating!), but you can name it and tweak it as needed:

Summary

Rolling Calculation cohort analysis is a great new feature for Analysis Workspace. Since no additional implementation is required to use this new feature, I suggest you try it out with some of your popular success events…

Analysis, Conferences/Community, Featured, google analytics

That’s So Meta: Tracking Data Studio, in Data Studio


In my eternal desire to track and analyze all.the.things, I’ve recently found it useful to track the usage of my Data Studio reports.

Viewing data about Data Studio, in Data Studio? So meta!

Step 1: Create a property

Create a new Google Analytics property, to house this data. (If you work with multiple clients, sites or business units, where you may want to be able to isolate data, then you may want to consider one property for each client/site/etc. You can always combine them in Data Studio to view all the info together, but it gives you more control over permissions, without messing around with View filters.)

Step 2: Add GA Tracking Code to your Data Studio reports

Data Studio makes this really easy. Under Report Settings, you can add a GA property ID; either a Universal Analytics or a GA4 property will work.

You’ll need to add this to every report, and remember to add it when you create new reports if you’d like them to be included in your tracking.

Step 3: Clean Up Dimension Values

Note: This blog post is based on Universal Analytics, but the same principles apply if you’re using GA4. 

Once you have tracked some data, you’ll notice that the Page dimension in Google Analytics is a gibberish, useless URL. I suppose you could create a CASE formula and rewrite the URLs into the title of the report… Hmmm… Wait, why would you do that when there’s already an easier way?!

You’ll want to use the Page Title for the bulk of your reporting, as it has nice, readable, user-friendly values:

However, you’ll need to do some further transformation of Page Title. This is because reports with one page, versus multiple pages, will look different.

Reports with only one page have a page title of:

Report Name

Reports with more than one page have a page title of:

Report Name > Page Name

If you want to report on popularity at the report level, you need to extract just the report name. Unfortunately, you can’t simply extract “everything before the ‘>’ sign” as the Report Name, since not all Page Titles will contain a “>” (if the report only has one page).

I therefore use a formula to manipulate the Page Title:

REGEXP_EXTRACT(
  (CASE
    WHEN REGEXP_MATCH(Page Title, ".*›.*")
    THEN Page Title
    ELSE CONCAT(Page Title, " ›")
  END),
  '(.*).*›.*')
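If it helps to sanity-check that formula, here is a rough JavaScript equivalent you can run against sample Page Title values. The sample report names are made up, and the “›” separator is assumed to match whatever Data Studio actually emits in your Page Titles.

// Rough JavaScript mirror of the Data Studio formula above, for sanity-checking only.
function extractReportName(pageTitle) {
  // Single-page reports have no separator, so append one before extracting...
  var normalized = pageTitle.indexOf("›") > -1 ? pageTitle : pageTitle + " ›";
  // ...then capture everything before the last separator.
  var match = normalized.match(/(.*)›/);
  return match ? match[1].trim() : pageTitle;
}

console.log(extractReportName("Monthly KPI Report"));            // "Monthly KPI Report"
console.log(extractReportName("Monthly KPI Report › Overview")); // "Monthly KPI Report"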

Step 4: A quick “gotcha”

Please note that, on top of Google Analytics tracking when users actually view your report, Google Analytics will also fire and track a view when:

  1. Someone is loading the report in Edit mode. In the Page dimension, you will see these with /edit in the URL.
  2. If you have a report scheduled to send on a regular cadence via email, the process of rendering the PDF to attach to the email also counts as a load in Google Analytics. In the Page dimension, you will see these loads with /appview in the URL.

This means that if you or your team spend a lot of time in the report editing it, your tracking may be “inflated” as a result of all of those loads.

Similarly, if you schedule a report for email send, it will track in Google Analytics for every send (even if no one actually clicks through and views the report.)

If you want to exclude these from your data, you will want to filter out from your dashboard Pages that contain /edit and /appview.

 

Step 5: Build your report

Here’s an example of one I have created:

Which metrics should I use?

My general recommendation is to use either Users or Pageviews, not Sessions or Unique Pageviews.

Why? Sessions will only count if the report page was the first page viewed (aka, it’s basically “landing page”), and Unique Pageviews will consider two pages in one report “unique”, since they have different URLs and Page Titles. (It’s just confusing to call something “Unique” when there are so many caveats on how “unique” is defined, in this instance.) So, Users will be the best for de-duping, and Pageviews will be the best for a totals count.

What can I use these reports for?

I find it helpful to see which reports people are looking at the most and when they typically look at them (for example, at the end of the month or quarter). Perhaps a lot of ad hoc questions are coming to your team that are already covered in your reports? You can check whether people are even using them and, if not, direct them there before spending a bunch of ad hoc time! Or perhaps it’s time to hold another lunch & learn to introduce people to the various reports available?

You can also include data filters in the report, to filter for a specific report, or other dimensions, such as device type, geolocation, date, etc. Perhaps a certain office location typically views your reports more than another?

Of course, you will not know which users are viewing your reports (since we definitely can’t track PII in Google Analytics) but you can at least understand if they’re being viewed at all!

Adobe Analytics, Featured

Adam Greco Adobe Analytics Blog Index

Over the years, I have tried to consistently share as much as I can about Adobe Analytics. The only downside of this is that my posts can span a wide range of topics. Therefore, as we start a new year, I have decided to compile an index of my blog posts in case you want a handy way to find them by topic. This index won’t include all of my posts or old ones on the Adobe site, since many of them are now outdated due to new advances in Adobe Analytics. Of course, you can always deep-dive into most Adobe Analytics topics by checking out my book.

Running A Successful Adobe Analytics Implementation

Adobe Analytics Implementation & Features

Analysis Workspace

Virtual Report Suites

Sample Analyses by Topic

Marketing Campaigns

Content

Click-through Rates

Visitor Engagement/Scoring

eCommerce

Lead Generation/Forms

Onsite Search

Adobe Analytics Administration

Adobe Analytics Integrations

Adobe Analytics, Featured

2019 London Adobe Analytics “Top Gun” Class

I will be traveling to London in early February, so I am going to try and throw together an Adobe Analytics “Top Gun” class whilst I am there (Feb 5th). As a special bonus, for the first time ever, I am also going to include some of my brand new class “Managing Adobe Analytics Like A Pro!” in the same training!  I promise it will be a packed day! This will likely be the only class I do in Europe this year, so if you have been wanting to attend this class, I suggest you register. Thanks!

Here is the registration link:

https://www.eventbrite.com/e/analytics-demystified-adobe-analytics-top-gun-training-london-2019-tickets-53403058987

Here is some feedback from class attendees:

Adobe Analytics, Featured

A Product Container for Christmas

Dear Adobe,

For Christmas this year I would like a product container in the segment builder. It is something I’ve wanted for years, and if you saw my post on Product Segmentation Gotchas you’ll realize that there are ways people may unintentionally get bad data when using product-level dimensions with any of the existing containers. Because there can be multiple products per hit, segmentation on product attributes can be tough. Really, any bad data is due to a misuse of how the segment builder currently works. However, if we were to add this functionality to the segment builder, we could expand its uses. A product container is also interesting because it isn’t necessarily smaller in scope than a visit or hit; one product could span all of those. So, because of all this, I would love a new container for Christmas.

Don’t get me wrong, I love the segment builder. This would be another one of those little features that adds to the overall amazingness of the product. Or, since containers are a pretty fundamental aspect of the segment builder, maybe it’s much more than just a “little feature”? Hmmm, the more I think about it in those terms, the more I think it would be a feature of epic proportions 🙂

How would this work?

Good question! I have some ideas around that. I imagine a product container working similarly to a product line item in a tabular report. In a table visualization, everything on that row is limited to what is associated with that product. Usually we just use product-specific metrics in those types of reports, but if you were to pull in a non-product-specific metric, it would pull in whatever values were in effect for the hit that the product was on at that time. So really it wouldn’t be too different from how data is generated now; the big change is making it accessible in the segment builder.

Here’s an example of what I mean. Let’s use the first scenario from the  Product Segmentation Gotchas post. We are interested in segmenting for visits that placed an order where product A had a “2_for_1” discount applied. Let’s say that we have a report suite that has only two orders like so:

Order #101 (the one we want)

Product | Unit Price | Units | Net Revenue | Discount Code | Marketing Channel
A | $10 | 2 | $10 | 2_for_1 | PPC
B | $6 | 2 | $12 | none |

Notice that product A has the discount and this visit came from a PPC channel.

Order #102 (the one we don’t want)

Product | Unit Price | Units | Net Revenue | Discount Code | Marketing Channel
A | $10 | 2 | $20 | none | Email
B | $6 | 2 | $6 | 2_for_1 |

Notice that product B has the discount now and this visit came from an Email channel.

Here is the resulting report if we were to not use any segments. You’ll notice that everything lines up great in this view and we know exactly which discount applied to which product.

The Bad

Now let’s get on to our question and try to segment for the visits that had the 2_for_1 discount applied to product A. In the last post I already mentioned that this segment is no good:

If you were to use this to get some high-level summary data it would look like this:

Notice that it doesn’t look any different from the All Visits segment. The reason is that we only have two orders in our dataset and each of them has product A and a 2_for_1 discount somewhere in it. To answer our question, we really need a way to associate the discount specifically with product A.

The Correct Theoretical Segment

Using my theoretical product container, the correct segment would look something like the image below. Here I’m using a visit-level outer container but my inner container is set to the product level (along with a new cute icon, of course). Keep in mind this is fake!

The results of this would be just what we wanted which is “visits where product A had a ‘2_for_1’ discount applied”.

This visit had an order of multiple products so the segment would include more than just product A in the final result. The inner product container would qualify the product and the outer visit container would then qualify the entire visit. This results in the whole order showing up in our list. We are able to answer our question and avoid including the extra order that was unintentionally included with the first segment. 

Even More Specific

Let’s refine this and say that we wanted just the sales from product A in our segment. The correct segment would look like this with my theoretical product scope in the outer container.

And the results would be trimmed down to just the product-specific dimensions and metrics like so:

Notice that this gives us a single row that is just like the line item in the table report! Now you can see that we have great flexibility to get to just what we want when it comes to product-level dimensions.

Summary

Wow, that was amazing! Fake data and mockups are so cooperative! This may be a little boring for this simple example, but when thousands of products are involved the table would be a mess and I’d be pretty grateful for this feature. There are a bunch of other ways this could be useful at different levels (wrapping visit or visitor containers, or working with non-product-specific metrics), but this post is already well past my attention span limits. Hopefully this is enough to explain the idea. I know Christmas is getting pretty close, so I’d be glad to accept it as a belated gift on MLK Day instead. Thanks Adobe!

Sincerely,

Kevin Willeitner

 

PS, for others that might be reading this, if you’d like this feature to be implemented please vote for it here. After some searching I also found that several people have asked for related capability so vote for theirs as well! Those are linked to in the idea post.

Adobe Analytics, Featured

My Favorite Analysis Workspace Right-Clicks – Part 2

In my last blog post, I began sharing some of my favorite hidden right-click actions in Analysis Workspace. In this post, I continue where I left off (since that post was getting way too long!). Most of these items are related to the Fallout visualization since I find that it has so many hidden features!

Freeform Table – Change Attribution Model for Breakdowns

Attribution is always a heated topic. Some companies are into First Touch and others believe in Last Touch. In many cases, you have to agree as an organization on which attribution model to use, especially when it comes to marketing campaigns. However, what if you want to use multiple attribution models? For example, let’s say that, as an organization, you decide that the over-arching attribution model is Last Touch, meaning that the campaign source occurring closest to the success (Order, Blog Post View, etc.) is the one that gets credit. Here is what this looks like for my blog:

However, what if, at the tracking code level, you want to see attribution differently? For example, what if you decide that once the Last Touch model is applied to the campaign source, you want to see the specific tracking codes leading to Blog Posts allocated by First Touch? Multiple allocation models are available in Analysis Workspace, but this feature is hidden. The use of multiple concurrent attribution models is described below.

First, you want to break down your campaign source into tracking codes by right-clicking and choosing your breakdown:

You can see that the breakdown is showing tracking codes by source and that the attribution model is Last Touch | Visitor (highlighted in red above). However, if you hover your mouse over the attribution description of the breakdown header, you can see an “Edit” link like this:

Clicking this link allows you to change the attribution model for the selected metric for the breakdown rows. In this case, you can view tracking codes within the “linkedin-post” source attributed using First Touch Attribution and, just for fun, you can change the tracking code attribution for Twitter to an entirely different attribution model (both shown highlighted in red below):

So with a few clicks, I have changed my freeform table to view campaign source by Last Touch, but then within that, tracking codes from LinkedIn by First Touch and Twitter by J Curve attribution. Here is what the new table looks like side-by-side with the original table that is all based upon Last Touch:

As you can see, the numbers can change significantly! I suggest you try out this hidden tip whenever you want to see different attribution models at different levels…

Fallout – Trend

The next right-click I want to talk about has to do with the Fallout report. The Fallout report in Analysis Workspace is beyond cool! It lets you add pages, metrics and pretty much anything else you want to it to see where users are dropping off your site or app. You can also apply segments to the Fallout report holistically or just to a specific portion of the Fallout report. In this case, I have created a Fallout report that shows how often visitors come to our home page, eventually view one of my blog posts and then eventually view one of my consulting services pages:

Now, let’s imagine that I want to see how this fallout is trending over time. To do this, right-click anywhere in the fallout report and choose the Trend all touchpoints option as shown here:

Trending all touchpoints produces a new graph that shows fallout trended over time:

Alternatively, you can select the Trend touchpoint option for a specific fallout touchpoint and see one of the trends. Seeing one fallout trend provides the added benefit of being able to see anomaly detection within the graph:

Fallout – Fall-Through & Fall-Out

The Fallout visualization also allows you to view where people go directly after your fallout touchpoints. Fallthrough reporting can help you understand where they are going if they don’t go directly to the next step in your fallout steps. Of course, there are two possibilities here. Some visitors eventually do make it to the remaining steps in your fallout and others do not. Therefore, Analysis Workspace provides right-clicks that show you where people went in both situations. The Fallthrough scenario covers cases where visitors do eventually make it to the next touchpoint and right-clicking and selecting that option looks like this:

In this case, I want to see where people who have completed the first two steps of my fallout go directly after the second step, but only for cases in which they eventually make it to the third step of my fallout. Here is what the resulting report looks like:

As you can see, there were a few cases in which users went directly to the pages I wanted them to go to (shown in red), but now I can see where they deviated and view the latter in descending order.

The other option is to use the fallout (vs. fallthrough) option. Fallout shows you where visitors went next if they did not eventually make it to the next step in your fallout. You can choose this using the following right-click option:

Breakdown fallout by touchpoint produces a report that looks like this:

Another quick tip related to the fallout visualization that some of my clients miss is the option to make fallout steps immediate instead of eventual. At each step of the fallout, you can change the setting shown here:

Changing the setting to Next Hit, narrows down the scope of your fallout to only include cases in which visitors went directly from one step to the next. Here is what my fallout report looks like before and after this change:

Fallout – Multiple Segments

Another cool feature of the fallout visualization is that you can add segments to it to see fallout for different segments of visitors. You can add multiple segments to the fallout visualization. Unfortunately, this is another “hidden” feature because you need to know that this is done by dragging over a segment and dropping it on the top part of the visualization as shown here:

This shows a fallout that looks like this:

Now I can see how my general population falls out and also how it is different for first-time visits. To demonstrate adding multiple segments, here is the same visualization with an additional “Europe” segment added:

Going back to what I shared earlier, right-clicking to trend touchpoints with multiple segments added requires you to click precisely on the part that you want to see trended. For example, right-clicking on the Europe Visits step two shows a different trend than clicking on the 1st Time Visits bar:

Therefore, clicking on both of the different segment bars displays two different fallout trends:

So there you have it. Two blog posts worth of obscure Analysis Workspace features that you can explore. I am sure there are many more, so if you have any good ones, feel free to leave them as a comment here.

Adobe Analytics, Featured

Product Segmentation Gotchas

If you have used Adobe Analytics segmentation, you are likely very familiar with the hierarchy of containers. These containers define the scope of the criteria wrapped inside them and are available at the visitor, visit, and hit levels. They let you control exactly what happens at each of those levels, and your analysis can be heavily impacted by which one you use. They are extremely useful and handle most use cases.

When doing really detailed analysis related to products, however, the available containers can come up short. This is because there can be multiple products per visitor, visit, or hit. Scenarios like a product list page or checkout pages, when analyzed at a product level, can be especially problematic. Obviously this has a disproportionate impact on retailers, but other industries may also be affected if they use the products variable to facilitate fancy implementations. Any implementation that needs to collect attributes with a many-to-many relationship may need to leverage the products variable.

Following are a few cases illustrating where this might happen so be on the lookout.

Product Attributes at Time of Order

Let’s say you want to segment for visits that purchased a product with a discount. Or, rather than a discount, it could be a flag indicating the product should be gift wrapped, or some other attribute that you want passed “per product” on the thank-you page. Using the scenario of a discount: if a product-level discount (e.g. a 2-for-1 deal) is involved and that same discount can apply to other products, you won’t quite be able to get the right association between the two dimensions. You may be tempted to create a segment like this:

However, this segment can disappoint you. Imagine that your order includes two products (product A and product B) and product B is the one that has the “2_for_1” discount applied to it (through a product-syntax merchandising eVar). In that case the visit will still qualify for our segment because our criteria are applied at the hit level (note the red arrow). The segment looks for a hit with product A and a code of “2_for_1”, but it doesn’t care beyond that. It will include the correct results (the right discount associated with the right product), but it will also include undesired results, such as the right discount associated with the wrong product when the correct product just happened to be purchased at the same time. In the end you are left with a segment you shouldn’t use.

This example is centered around differing per-product attributes at the time of an order, but really the event doesn’t matter. This could apply any time you have a bunch of products collected at once that may each have different values. If multiple products are involved and your implementation is (correctly) using merchandising eVars with product syntax, then this will be a consideration for you.

Differentiating Test Products

I once had a super-large retailer run a test on a narrow set of a few thousand products. They wanted to know what kind of impact different combinations of alternate images available on the product detail page would have on conversion. This included still images, lifestyle images, 360 views, videos, etc. However, not all products had comparable alternate images available. Because of this they ran the test only across products that did have comparable imagery assets. This resulted in the need to segment very carefully at a product level. Inevitably they came to me with the question “how much revenue was generated by the products that were in the test?” This is a bit tricky because in A/B tests we normally look at visitor-level data for a certain timeframe. If someone in the test made a purchase and the test products were only a fraction of the overall order then the impact of the test could be washed out. So we had to get specific. Unfortunately, through a segment alone we couldn’t get good summary information.

This is rooted in the same reasons as the first example. If you were to only segment for a visitor in the test then your resulting revenue would include all orders for that visitor while in that test. From there you could try to get more specific and segment for the products you are interested in; however,  the closest you’ll get is order-level revenue containing the right products. You’ll still be missing the product-specific revenue for the right products. At least you would be excluding orders placed by test participants that didn’t have the test products at all…but a less-bad segment is still a bad segment 🙂

Changes to Product Attributes

This example involves the fulfillment method of the product. Another client wanted to see how people changed their fulfillment method (ship to home, ship to store, buy online/pickup in store) and was trying to work around a limited implementation. The implementation was set up to answer “what was the fulfillment method changed to?” but what they didn’t have built in was this new question — “of those that start with ship-to-home products in the cart, how often is that then changed to ship to store?” Also important is that each product in the cart could have different fulfillment methods at any given time.

In this case we can segment for visits that start with some product with a ship-to-home method. We can even segment for those that change the fulfillment method. We get stuck, though, when trying to associate the two events together by a specific product. You’re left without historical data and resorting to implementation enhancements.

Other Options

The main point of this post is to emphasize where segmenting on products could go wrong. There are ways to work around the limitations above, though. Here are a few options to consider:

  • In the case of the product test, we could apply a classification to identify which products are in the test. Then you would just have to use a table visualization, add a dimension for your test groups, and break that down by this new classification. This will show you the split of revenue within the test group.
  • Turn to the Adobe Data Feed and do some custom crunching of the numbers in your data warehouse.
  • Enhance your implementation. In the case of the first scenario, where persistence isn’t needed, you could get away with appending the product to the attribute to provide the uniqueness you need. That may, though, give you some issues with the number of permutations it creates. Depending on how far into this you want to get, you could even try some really crazy/fun stuff like rewriting the visitor ID to include the product, which enables some really advanced product-level segmentation. No historical data available, though.
  • Limit your dataset to users that just interacted with or ordered one product to avoid confusion with other products. Blech! Not recommended.

Common Theme

You’ll notice that in all of these examples the common thread is that we are leveraging product-specific attributes (merchandising eVars) and trying to tease specific products out from other products based on those attributes. Given that none of the containers perfectly matches the scope of a product, you may run into problems like those described above. Have you come across other segmenting-at-a-product-level problems? If so, please comment below!

 

Adobe Analytics, Featured

My Favorite Analysis Workspace Right-Clicks – Part 1

If you use Adobe Analytics, Analysis Workspace has become the indispensable tool of choice for reporting and analysis. As I mentioned back in 2016, Analysis Workspace is the future and where Adobe is concentrating all of its energy these days. However, many people miss all of the cool things they can do with Analysis Workspace because much of it is hidden in the [in]famous right-click menus. Analysis Workspace gurus have learned “when in doubt, right-click” while using Analysis Workspace. In this post, I will share some of my favorite right-click options in Analysis Workspace in case you have not yet discovered them.

Freeform Table – Compare Attribution Models

If you are an avid reader of my blog, you may recall that I recently shared that a lot of attribution in Adobe Analytics is shifting from eVars to Success Events. Therefore, when you are using a freeform table in Analysis Workspace, there may be times when you want to compare different attribution models for a metric you already have in the table. Instead of forcing you to add the metric again and then modify its attribution model, you can now choose a second attribution model right from within the freeform table. To do this, just right-click on the metric header and select the Compare Attribution Model option:

This will bring up a window asking you which comparison attribution model you want to use that looks like this:

Once you select that, Analysis Workspace will create a new column with the secondary attribution model and also automatically create a third column that compares the two:

My only complaint here is that when you do this, it becomes apparent that you can’t tell what attribution model was being used for the column you had in the first place. I hope that, in the future, Adobe will start putting attribution model indicators underneath every metric that is added to freeform tables, since the first metric column above looks a bit confusing and only an administrator would know that its allocation is based upon the eVar settings in the admin console. Therefore, my bonus trick is to use the Modify Attribution Model right-click option and set it to the correct model:

In this case, the original column was Last Touch at the Visitor level, so modifying this keeps the data as it was, but now shows the attribution label:

This is just a quick “hack” I figured out to make things clearer for my end-users… But, as you can see, all of this functionality is hidden in the right-click of the Freeform table visualization. Obviously, there are other uses for the Modify Attribution Model feature, such as changing your mind about which model you want to use as you progress through your analysis.

Freeform Table – Compare Date Range

Another handy freeform table right-click is the date comparison. This allows you to pick a date range and compare the same metric for the before and after range and also creates a difference column automatically. To do this, just right-click on the metric column of interest and specify your date range:

This is what you will see after you are finished with your selection:

In this case, I am looking at my top blog posts from October 11 – Nov 9 compared to the prior 30 days. This allows me to see how posts are doing in both time periods and see the percent change. In your implementation, you might use this technique to see product changes for Orders and Revenue.

Cohort – Create Segment From Cell

If you have situations on your website or mobile app that require you to see if your audience is coming back over time to perform specific actions, then the Cohort visualization can be convenient. By adding the starting and ending metric to the Cohort visualization, Analysis Workspace will automatically show you how often your audience (“cohorts”) are returning. Here is what my blog Cohort looks like using Blog Post Views as the starting and ending metrics:

While this is interesting, what I like is my next hidden right-click. This is the ability to automatically create a segment from a specific cohort cell. There are many times where you might want to build a segment of people who came to your site, did something and then came back later to do either the same thing or a different thing. Instead of spending a lot of time trying to build a segment for this, you can create a Cohort table and then right-click to create a segment from a cell. For example, let’s imagine that I notice a relatively high return rate the week after September 16th. I can right-click on that cell and use the Create Segment from Cell option:

This will automatically open up the segment builder and pre-populate the segment, which may look like this:

From here you can modify the segment any way you see fit and then save it. Then you can use this segment in any Adobe Analytics report (or even make a Virtual Report Suite from it!). This is a cool, fast way to build cohort segments! Sometimes, I don’t even keep the Cohort table itself. I merely use the Cohort table to make the segment I care about. I am not sure if that is smart or lazy, but either way, it works!

Venn – Create Segment From Cell

As long as we are talking about creating segments from a visualization, I would be remiss if I didn’t mention the Venn visualization. This visualization allows you to add up to three segments and see the overlap between all of them. For example, let’s say that for some crazy reason I need to look at people who view my blog posts, are first-time visitors and are from Europe. I would just drag over all three of these segments and then select the metric I care about (Blog Post Views in this case):

This would produce a Venn diagram that looks like this:

While this is interesting, the really cool part is that I can now right-click on any portion of the Venn diagram to get a segment. For example, if I want a segment for the intersection of all three segments, I just right-click in the region where they all overlap like this:

This will result in a brand new segment builder window that looks like this:

From here, I can modify it, save it and use it any way I’d like in the future.

Venn – Add Additional Metrics

While we are looking at the Venn visualization, I wanted to share another secret tip that I learned from Jen Lasser while we traveled the country performing Adobe Insider Tours. Once you have created a Venn visualization, you can click on the dot next to the visualization name and check the Show Data Source option:

This will expose the underlying data table that is powering the visualization like this:

But the cool part is what comes next. From here, you can add as many metrics as you want to the table by dragging them into the Metrics area. Here is an example of me dragging over the Visits metric and dropping it on top of the Metrics area:

Here is what it looks like after multiple metrics have been added (my implementation is somewhat lame, so I don’t have many metrics!):

But once you have numerous metrics, things get really cool! You can click on any metric, and the Venn visualization associated with the table will dynamically change! Here is a video that shows what this looks like in real life:

This cool technique allows you to see many Venn visualizations for the same segments at once!

Believe it or not, that is only half of my favorite right-clicks in Analysis Workspace! Next week, I will share the other ones, so stay tuned!

Adobe Analytics, Featured

New Adobe Analytics Class – Managing Adobe Analytics Like A Pro!

While training is only a small portion of what I do in my consulting business, it is something I really enjoy. Training allows you to meet with many people and companies and help them truly understand the concepts involved in a product like Adobe Analytics. Blog posts are great for small snippets of information, but training people face-to-face allows you to go so much deeper.

For years, I have provided general Adobe Analytics end-user training for corporate clients and, more recently, Analysis Workspace training. But my most popular class has always been my Adobe Analytics “Top Gun” Class, in which I delve deep into the Adobe Analytics product and teach people how to really get the most out of their investment in Adobe Analytics. I have done this class for many clients privately and also offer public versions of the class periodically (click here to have me come to your city!).

In 2019, I am launching a brand new class related to Adobe Analytics! I call this class:

Having worked with Adobe Analytics for fifteen years now (yeesh!), I have learned a lot about how to run a successful analytics program, especially those using Adobe Analytics. Therefore, I have attempted to put all of my knowledge and best practices into this new class. Some of the things I cover in the class include:

  • How to run an analytics implementation based upon business requirements
  • What does a fully functioning Solution Design Reference look like and how can you use it to track implementation status
  • Why data quality is so important and what steps can you take to minimize data quality issues
  • What are best practices in organizing/managing your Adobe Analytics implementation (naming conventions, admin settings, etc…)
  • What are the best ways to train users on Adobe Analytics
  • What team structures are available for an analytics team and which is best for your organization
  • How to create the right perception of your analytics team within the organization
  • How to get executives to “buy-in” to your analytics program

These are just some of the topics covered in this class. About 70% of the class applies to those using any analytics tool (i.e. Adobe, GA, etc…), but there are definitely key portions that are geared towards Adobe Analytics users.

I decided to create this class based on feedback from people attending my “Top Gun” Class over the years. Many of the attendees were excited about knowing more about the Adobe Analytics product, but they expressed concerns about running the overall analytics function at their company. I have always done my best to share ideas, lessons, and anecdotes in my conference talks and training classes, but in this new class, I have really formalized my thinking in hopes that class participants can learn from what I have seen work over the past two decades.

ACCELERATE

This new class will be making its debut at the Analytics Demystified ACCELERATE conference this January in California. You can come to this class and others at our two-day training/conference event, all for under $1,000! In addition to this class and others, you also have access to our full day conference with great speakers from Adobe, Google, Nordstrom, Twitch and many others. I assure you that this two-day conference is the best bang for the buck you can get in our industry! Unfortunately, space is limited, so I encourage you to register as soon as possible.

Adobe Analytics, Featured

Using Builders Visibility in Adobe Analytics

Recently, while working on a client implementation, I came across something I hadn’t seen before in Adobe Analytics. For me, that is quite unusual! While in the administration console, I saw a new option under the success event visibility settings called “Builders” as shown here:

A quick check in the documentation showed this:

Therefore, the new Builders setting for success events is meant for cases in which you want to capture data and use it in components (i.e. Calculated Metrics, Segments, etc.), but not necessarily expose it in the interface. While I am not convinced that this functionality is all that useful, in this post I will share some uses I have thought of for the feature.

Using Builders in Calculated Metrics

One example of how you could use the Builders visibility is when you want to create a calculated metric, but don’t necessarily care about one of the elements contained in the calculated metric formula as a standalone metric. To illustrate this, I will reference an old blog post I wrote about calculating the average internal search position clicked. In that post, I suggested that you capture the search result position clicked in a numeric success event, so that it could be divided by the number of search result clicks to calculate the average search position. For example, if a user conducts two searches and clicks on the 4th and 6th results respectively, you would pass the values of 4 and 6 to the numeric success event and divide their sum by the number of search result clicks ((4+6)/2 = 5.0). Once you do that, you will see a report that looks like this:
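As an implementation aside, here is a rough AppMeasurement sketch of what that tagging could look like (event10 for the position value and event11 for the click counter are placeholder slots used purely for illustration, not from the original post):

// Rough sketch: pass the clicked search result position into a numeric success
// event (event10) and count the click itself in a counter event (event11)
function trackSearchResultClick(position) {
  s.linkTrackVars = "events";
  s.linkTrackEvents = "event10,event11";
  s.events = "event10=" + position + ",event11";
  s.tl(true, "o", "Internal Search Result Click");
}
// Example: a user clicks the 4th search result
trackSearchResultClick(4);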

In this situation, the Search Position column is being used to calculate the Average Search Position, but by itself, the Search Position metric is pretty useless. There aren’t many cases in which someone would want to view the Search Position metric by itself. It is simply a means to an end. Therefore, this may be a situation in which you, as the Adobe Analytics administrator, choose to use the Builders functionality to hide this metric from the reporting interface and Analysis Workspace, only exposing it when building calculated metrics and segments. This allows you to remove a bit of the clutter from your implementation and can be done by simply checking the box in the visibility column and using the Builders option as shown here:

As I stated earlier, this feature will not solve world peace, but I guess it can be handy in situations like this.

Using Builders in Segments

In addition to using “Builders” Success Events in calculated metrics, you can also use them when building segments. Continuing the preceding internal search position example, there may be cases in which you want to use the Search Position metric in a segment like the one shown here:

Make Builder Metrics Selectively Visible

One other thing to note with Builders has to do with calculated metrics. If you choose to hide an element from the interface, but one of your advanced users wants to view it, keep in mind that they still can by leveraging calculated metrics. Since the element set to Builders visibility is available in the calculated metrics builder, there is nothing stopping you or your users from creating a calculated metric that is equal to the hidden success event. They can do this by simply dragging over the metric and saving it as a new calculated metric as shown here:

This will be the same as having the success event visible, but by using a calculated metric, your users can decide who in the organization they want to share the resulting metric with.

Adobe Analytics, Featured

Viewing Classifications Only via Virtual Report Suites

I love SAINT Classifications! I evangelize the use of SAINT Classifications anytime I can, especially in my training classes. Too often Adobe customers fail to take full advantage of the power of SAINT Classifications. Adding meta-data to your Adobe Analytics implementation greatly expands the types of analysis you can perform and what data you can use for segmentation. Whether the meta-data is related to campaigns, products or customers, enriching your data via SAINT is really powerful.

However, there are some cases in which, for a variety of reasons, you may choose to put a lot of data into an eVar or sProp with the intention of splitting the data out later using SAINT Classifications. Here are some examples:

  • Companies concatenate a lot of “ugly” campaign data into the Tracking Code eVar which is later split out via SAINT
  • Companies store indecipherable data (like an ID) in an eVar or sProp which only makes sense when you look at the SAINT Classifications
  • Companies have unplanned bad data in the “root” variable that they fix using SAINT Classifications
  • Companies are low on variables, so they concatenate disparate data points into an eVar or sProp to conserve variables

One example of the latter I encountered with a client is shown here:

In this example, the client was low on eVars and instead of wasting many eVars, we concatenated the values and then split out the data using SAINT like this:
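For illustration, the concatenation itself can be as simple as joining the individual values with a delimiter before setting the eVar (the attribute names, values and delimiter below are hypothetical, not the client’s actual implementation):

// Hypothetical sketch: pack several related attributes into one eVar using a
// delimiter so SAINT Classifications can split them back out into separate reports
var productCategory = "tools";        // placeholder values for illustration
var productFamily = "compressors";
var productColor = "blue";
s.eVar28 = [productCategory, productFamily, productColor].join("|");  // "tools|compressors|blue"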

Using this method, the company was able to get all of the reports they wanted, but only had to use one eVar. The downside was that users could open up the actual eVar28 report in Adobe Analytics and see the ugly values shown above (yuck!). Because of this, a few years ago I suggested an idea to Adobe that they let users hide an eVar/sProp in the interface, but continue letting users view the SAINT Classifications of the hidden eVar/sProp. Unfortunately, since SAINT Classification reports were always tied directly to the “root” eVar/sProp on which they are based, this wasn’t possible. However, with the advent of Virtual Report Suites, I am pleased to announce that you can now curate your report suite to provide access to SAINT Classification meta-data reports, while at the same time not providing access to the main variable they are based upon. The following will walk you through how to do this.

Curate Your Classifications

The first step is to create a new Virtual Report Suite off of another report suite. At the last step of the process, you will see the option to curate/customize what implementation elements will go over to the new Virtual Report Suite. In this case, I am going to copy over everything except the Tracking Code and Blog Post Title (eVar5) elements as shown here:

As you can see, I am hiding Blog Post Title [v5], but users still have access to the four SAINT Classifications of eVar5. Once the Virtual Report Suite is saved and active, if you go into Analysis Workspace and look at the dimensions in the left nav, you will see the meta-data reports for eVar5, but not the original eVar5 report:

If you drag over one of the SAINT Classification reports, it works just like you would expect it to:

If you try to break this report down by the “root” variable it is based upon, you can’t because it isn’t there:

Therefore, you have successfully hidden the “root” report, but still provided access to the meta-data reports. Similarly, you can view one of the Campaign Tracking Code SAINT Classification reports (like Source shown below), but not have access to the “root” Tracking Code report:

Summary

If you ever have situations in which you want to hide an eVar/sProp that is the “root” of a SAINT Classification, this technique can prove useful. Many of the reasons you might want to do this are shown in the beginning of this post. In addition, you can combine Virtual Report Suite customization and security settings to show different SAINT Classification elements to different people. For example, you might have a few Classifications that are useful to an executive and others that are meant for more junior analysts. There are lots of interesting use cases where you can apply this cool trick!

Adobe Analytics, Featured

Adjusting Time Zones via Virtual Report Suites

When you are doing analysis for an organization that spans multiple time zones, things can get tricky. Each Adobe Analytics report suite is tied to one specific time zone (which makes sense), but this can lead to frustration for your international counterparts. For example, let’s say that Analytics Demystified went international and had resources in the United Kingdom. If they wanted to see when visitors located in the UK viewed blog posts (assume that is one of our KPIs), here is what they would see in Adobe Analytics:

This report shows a Blog Post Views success event segmented for people located in the UK. While I wish our content was so popular that people were reading blogs from midnight until the early morning hours, I am not sure that is really the case! Obviously, this data is skewed because the time zone of our report suite is on US Pacific time. Therefore, analysts in the UK would have to mentally shift everything eight hours on the fly, which is not ideal and can cause headaches.

So how do you solve this? How do you let the people in the US see data in Pacific time and those in the UK see data in their time zone? Way back in 2011, I wrote a post about shifting time zones using custom time parting variables and SAINT Classifications. This was a major hack and one that I wouldn’t really recommend unless you were desperate (but that was 2011!). Nowadays, using the power of Virtual Report Suites, there is a more elegant solution to the time zone issue (thanks to Trevor Paulsen from Adobe Product Management for the reminder).

Time-Zone Virtual Report Suites

Here are step-by-step instructions on how to solve the time zone paradox. First, you will create a new Virtual Report Suite and assign it a new name and a new time zone:

You can choose whether this Virtual Report Suite has any segments applied and/or contains all of your data or just a subset of your data in the subsequent settings screens.

When you are done, you will have a brand new Virtual Report Suite that has all data shifted to the UK time zone:

Now you are able to view all reports in the UK time zone.  To illustrate this, let’s look at the report above in the regular report suite side by side with the same report in the new Virtual Report Suite:

As you can see, both of these reports are for the same date and have the same UK geo-segmentation segment applied; however, the data has been shifted eight hours. For example, Blog Post Views that previously looked like they were viewed by UK residents at 2:00am now show that they were viewed at 10:00am UK time. This can also be seen by looking at the table view and lining up the rows:

This provides a much more realistic view of the data for your international folks. In theory, you could have a different Virtual Report Suite for all of your major time zones.

So that is all you need to do to show data in different time zones. Just a handy trick if you have a lot of international users.

Featured, General

Analytics Demystified Interview Service Offering

Finding good analytics talent is hard! Whether you are looking for technical or analysis folks, it seems like many candidates are good from afar, but far from good! As someone who has been part of hundreds of analytics implementations/programs, I can tell you that having the right people makes all of the difference. Unfortunately, there are many people in our industry who sound like they know Adobe Analytics (or Google Analytics or Tealium, etc…), but really don’t.

One of the services that we have always provided to our clients at Demystified is the ability to have our folks interview prospective client candidates. For example, if a client of ours is looking for an Adobe Analytics implementation expert, I would conduct a skills assessment interview and let them know how much I think the candidate knows about Adobe Analytics. Since many of my clients don’t know the product as well as I do, they have found this to be extremely helpful.  In fact, I even had one case where a candidate withdrew from contention upon finding out that they would be interviewing with me, basically admitting that they had been trying to “BS” their way to a new job!

Recently, we have had more and more companies ask us for this type of help, so now Analytics Demystified is going to open this service up to any company that wants to take advantage of it. For a fixed fee, our firm will conduct an interview with your job candidates and provide an assessment of their product-based capabilities. While there are many technologies we can assess, so far most of the interest has been around the following tools:

  • Adobe Analytics
  • Google Analytics
  • Adobe Launch/DTM
  • Adobe Target
  • Optimizely
  • Tealium
  • Ensighten
  • Optimize
  • Google Tag Manager

If you are interested in getting our help to make sure you hire the right folks, please send an e-mail to contact@analyticsdemystified.com.

Adobe Analytics, Featured

Setting After The Fact Metrics in Adobe Analytics

As loyal blog readers will know, I am a big fan of identifying business requirements for Adobe Analytics implementations. I think that working with your stakeholders before your implementation (or re-implementation!) to understand what types of questions they want to answer helps you focus your efforts on the most important items and can reduce unnecessary implementation work. However, I am also a realist and acknowledge that there will always be times when you miss stuff. In those cases, you can set a new metric after the fact for the thing you missed, but what about the data from the last few years? It would be ideal if you could create a metric today that would be retroactive such that it shows you data from the past.

This ability to set a metric “after the fact” is very common in other areas of analytics and there are even vendors like Heap, SnowPlow and Mixpanel that allow you to capture virtually everything and then set up metrics/goals afterwards. These tools capture raw data, let you model it as you see fit and change your mind on definitions whenever you want. For example, in Heap you can collect data and then one day decide that something you have been collecting for years should be a KPI and assign it a name. This provides a ton of flexibility. I believe that tools like Heap and SnowPlow are quite a bit different than Adobe Analytics and that each tool has its strengths, but for those who have made a long-term investment in Adobe Analytics, I wanted to share how you can have some of the Heap-like functionality in Adobe Analytics in case you ever need to assign metrics after the fact. This by no means is meant to discount the cool stuff that Heap or SnowPlow are doing, but rather to show how this one cool feature of theirs can be mimicked in Adobe Analytics if needed.

After The Fact Metrics

To illustrate this concept, let’s imagine that I completely forgot to set a success event in Adobe Analytics when visitors hit my main consulting service page. I’d like to have a success event called “Adobe Analytics Service Page Views” when visitors hit this page, but as you can see here, I do not:

To do this, you simply create a new calculated metric that has the following definition:
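In plain terms, the definition is roughly the following (assuming the page is named “Adobe Analytics Services” in the Page dimension – substitute your own page name):

Adobe Analytics Service Page Views =
  Page Views [ hit-level segment: Page equals "Adobe Analytics Services" ]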

This metric allows you to see the count of Adobe Analytics Service Page Views based upon the Page Name (or you could use URL) that is associated with that event and can then be used in any Adobe Analytics report:

So that is how simple it is to retroactively create a metric in Adobe Analytics. Obviously, this becomes more difficult if the metric you want is based on actions beyond just a page loading, but if you are tracking those actions in other variables (or ClickMap), you can follow the same process to create a calculated metric off of those actions.

Transitioning To A New Success Event

But what if you want to use the new success event going forward, but also want all of the historical data? This can be done as well with the following steps:

The first step would be to set the new success event going forward via manual tagging, a processing rule or via tag management. To do this, assign the new success event in the Admin Console:

The next step is to pick a date on which you will start setting this new success event and then start populating it.  If you want it to be a clean break, I recommend starting at midnight on that day.

Next, you want to add the new success event to the preceding calculated metric so that you can have both the historical count and the count going forward:

However, this formula will double-count the event for all dates in which the new success event 12 has been set. Therefore, the last step is to apply two date-based segments to each part of the formula. The first date range contains the historical dates before the new success event was set. The second date range contains the dates after the new success event has been set (you can make the end date some date way into the future). Once both of these segments have been created, you can add them to the corresponding part of the formula so it looks like this:
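Conceptually, the combined metric ends up looking something like this (event12 is the new success event mentioned above; the date-range names and page name are placeholders):

Combined Service Page Views =
    Page Views [ Page equals "Adobe Analytics Services" ] [ date range: before cutover ]
  + event12 [ date range: cutover onward ]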

This combined metric will use the page name for the old timeframe and the new success event for the new timeframe. Eventually, if desired, you can transition to using only the success event instead of this calculated metric when you have enough data in the success event alone.

Summary

To wrap up, this post shows a way that you can create metrics for items that you may have missed in your initial implementation and provides a way to fix your original omission and combine the old and the new. As I stated, this functionality isn’t as robust as what you might get from a Heap, SnowPlow or Mixpanel, but it can be a way to help if you need it in a pinch.

Adobe Analytics, Featured

Shifting Attribution in Adobe Analytics

If you are a veteran Adobe Analytics (or Omniture SiteCatalyst) user, for years the term attribution was defined by whether an eVar was First Touch (Original Value) or Last Touch (Most Recent). eVar attribution was set up in the administration console and each eVar had a setting (and don’t bring up Linear because that is a waste!). If you wanted to see both First and Last Touch campaign code performance, you needed to make two separate eVars that each had different attribution settings. If you wanted to see “Middle Touch” attribution in Adobe Analytics, you were pretty much out of luck unless you used a “hack” JavaScript plug-in called Cross Visit Participation (thanks to Lamont C.).

However, this has changed in recent releases of the Adobe Analytics product. Now you can apply a bunch of pre-set attribution models including J Curve, U Curve, Time Decay, etc… and you can also create your own custom attribution model that assigns some credit to first, some to last and the rest divided among the middle values. These different attribution models can be built into Calculated Metrics or applied on the fly in metric columns in Analysis Workspace (not available for all Adobe Analytics packages). This stuff is really cool! To learn more about this, check out this video by Trevor Paulsen from Adobe.

However, this post is not about the new Adobe Analytics attribution models. Instead, I wanted to take a step back and look at the bigger picture of attribution in Adobe Analytics. This is because I feel that the recently added Attribution IQ functionality is fundamentally changing how I have always thought about where and how Adobe performs attribution. Let me explain. As I mentioned above, for the past decade or more, Adobe Analytics attribution has been tied to eVars. sProps didn’t really even have attribution since their values weren’t persistent and generally didn’t work with Success Events. But what has changed in the past year, is that attribution has shifted to metrics instead of eVars. Today, instead of having a First Touch and Last Touch campaign code eVar, you can have one eVar (or sProp – more on that later) that captures campaign codes and then choose the attribution (First or Last Touch) in whatever metric you care about. For example, if you want to see First Touch Orders vs. Last Touch Orders, instead of breaking down two eVars by each other like this…

…you can use one eVar and create two different Order metric columns with different attribution models to see the differences:

In fact, you could have metric columns for all available attribution models (and even create Calculated Metrics to divide them by each other) as shown here:

In addition, the new attribution models work with sProps as well. Even though sProp values don’t persist, you can use them with Success Events in Analysis Workspace and then apply attribution models to those metrics. This means that the difference between eVars and sProps is narrowing due to the new attribution model functionality.

To prove this, here is an Analysis Workspace table based upon an eVar…

…and here is the same table based upon an sProp:

What Does This Mean?

So, what does this mean for you? I think this changes a few things in significant ways:

  1. Different Paradigm for Attribution – You are going to have to help your Adobe Analytics users understand that attribution (First, Last Touch) is no longer something that is part of the implementation, but rather, something that they are empowered to create. I recommend that you educate your users on how to apply attribution models to metrics and what each model means. You will want to avoid “analysis paralysis” for your users, so you may want to suggest which model you think makes the most sense for each data dimension.
  2. Different Approach to Implementation – The shift in attribution from eVars to metrics means that  you no longer have to use multiple eVars to see different attribution models. Also, the fact that you can see success event attribution for sProps means that you can also use sProps if you are using Analysis Workspace.
  3. sProps Are Not Dead! – I have been on record saying that, outside of Pathing, sProps are just a relic of old Omniture days, but as stated above, the new attribution modeling feature is helping make them useful again! sProps can now be used almost like eVars, which gives you more variables. Plus, they have Pathing that is better than eVars in Flow reports (until the instances bug is fixed!). Eventually, I assume all eVars and sProps will merge and simply be “dimensions,” but for now, you just got about 50 more variables!
  4. Create Popular Metric/Attribution Combinations – I suggest that you identify your most important metrics and create different versions of them for the relevant attribution models and share those out so your users can easily access them.  You may want to use tags as I suggested in this post.

Featured, Testing and Optimization

Adobe Target Chrome Extension


I use many different testing solutions each day as part of my strategic and tactical support of testing programs here at Analytics Demystified.  I am very familiar with how each of these different solutions functions and how to get the most value out of them.  To that end, I had a Chrome Extension built that will allow Adobe Target users to get much more value, with visibility into test interaction, their Adobe Target Profile, and the bidirectional communication taking place.  It has 23 (and counting!) powerful features, all for free.  Check out the video below to see it in action.

 

Video URL: https://youtu.be/XibDjGXPY4E

To learn more details about this Extension and download it from the Chrome Store, click below:
MiaProva Chrome Extension

Adobe Analytics, Featured

Ingersoll Rand Case Study

One of my “soapbox” issues is that too few organizations focus on analytics business requirements and KPI definition. This is why I spend so much time working with clients to help them identify their analytics business requirements. I have found that having requirements enables you to make sure that your analytics solution/implementation is aligned with the true needs of the organization. For this reason, I don’t take on consulting engagements unless the customer agrees to spend time defining their business requirements.

A while back, I had the pleasure of working with Ingersoll Rand to help them transform their legacy Adobe Analytics implementation to a more business requirements driven approach. The following is a quick case study that shares more information on the process and the results:

The Demystified Advantage – Ingersoll Rand – September 2018

 

Adobe Analytics, Featured

Analysis Workspace Drop-downs

Recently, the Adobe Analytics team added a new Analysis Workspace feature called “Drop-downs.” It has always been possible to add Adobe Analytics components like segments, metrics, dimensions and date ranges to the drop zone of Analysis Workspace projects. Adding these components allowed you to create “Hit” segments based upon what was brought over or, in the case of a segment, segment your data accordingly. Now, with the addition of drop-downs, this has been enhanced to allow you to add a set of individual elements to the filter area and then use a drop-down feature to selectively filter data. This functionality is akin to the Microsoft Excel Filter feature that lets you filter rows of a table. In this post, I will share some of the cool things you can do with this new functionality.

Filter on Dimension Values

One easy way to take advantage of this new feature is to drag over a few of your dimension values and see what it is like to filter on each. To do this, you simply find a dimension you care about in the left navigation and then click the right chevron to see its values like this:

Next you can use the control/shift key to pick the values you want (up to 50) and drag them over to the filter bar. Before you drop them, you must hold down the shift key to make it a drop-down:

When this is done, you can see your items in the drop-down like this:

 

Now you can select any item and all of your Workspace visualizations will be filtered. For example, if I select my name in the blog post author dimension, I will see only blog posts I have authored:

Of course, you can add as many dimensions as you’d like, such as Visit Number and/or Country. For example, if I wanted to narrow my data down to my blog posts viewed in the United States and the first visit, I might choose the following filters:

This approach is likely easier for your end-users to understand than building complex segments.

Other Filters

In addition to dimensions, you can create drop-downs for things like Metrics, Time Ranges and Segments. If you want to narrow your data down to cases in which a specific Metric was present, you can drag over the Metrics you care about and filter like this:

Similarly, you can filter on Date Ranges that you have created in your implementation (note that this will override whatever dates you have selected in the calendar portion of the project):

One of the coolest parts of this new feature is that you can also filter on Segments:

This means that instead of having multiple copies of the same Analysis Workspace project for different segments, you can consolidate down to one version and simply use the Segment drop-down to see the data you care about. This is similar to how you might use the report suite drop-down in the old Reports & Analytics interface. This should also help improve the performance times of your Analysis Workspace projects.

Example Use – Solution Design Project

Over the last few weeks, I have been posting about a concept of adding your business requirements and solution design to an Analysis Workspace project. In the final post of the series (I suggest reading all parts in order), I talked about how you could apply segmentation to the solution design project to see different completion percentages based upon attributes like status or priority (shown here):

Around this time, after reading my blog post, one of my old Omniture cohorts tweeted this teaser message:

At the time, I didn’t know what Brandon was referring to, but as usual, he was absolutely correct that the new drop-down feature would help with my proposed solution design project. Instead of having to constantly drag over different dimension/value combinations, the new drop-down feature allows any user to select the ways they want to filter the solution design project and, once they apply the filters, the overall project percentage completion rate (and all other elements) will dynamically change. Let’s see how this works through an example:

As shown in my previous post, I have a project that is 44.44% complete. Now I have added a few dimension filters to the project like this:

Now, if I choose to filter by “High” priority items, the percentage changes to 66.67% and only high priority requirements are shown:

Another cool side benefit of this is that the variable panel of the project now only shows variables that are associated with high priority requirements:

If I want to see how I am doing for all of Kevin’s high priority business requirements, I can simply select both high priority and then select Kevin in the requirement owner filter:

This is just a fun way to see how you can apply this new functionality to old Analysis Workspace projects into which you have invested time.

Future Wishlist Items

While this new feature is super-cool, I have already come up with a list of improvements that I’d like to eventually see:

  • Ability to filter on multiple items in the list instead of just one item at a time
  • Ability to clear the entire filter without having to remove each item individually
  • Ability to click a button to turn currently selected items (across all filters) into a new Adobe Analytics Segment
  • Ability to have drop-down list values generated dynamically based upon search criteria (using the same functionality available when filtering values in a freeform table shown below)

Conferences/Community, Featured

ACCELERATE 2019

Back in 2015, the Analytics Demystified team decided to put on a different type of analytics conference we called ACCELERATE. The idea was that we as partners and a few select other industry folks would share as much information as we could in the shortest amount of time possible. We chose a 10 tips in 20 minutes format to force us and our other presenters to only share the “greatest hits” instead of the typical (often boring) 50 minute presentation with only a few minutes worth of good information. The reception of these events (held in San Francisco, Boston, Chicago, Atlanta and Columbus) was amazing. Other than some folks feeling a bit overwhelmed with the sheer amount of information, people loved the concept. We also coupled this one day event with some detailed training classes that attendees could optionally attend. The best part was that our ACCELERATE conference was dramatically less expensive than other industry conferences.

I am pleased to say that, after a long hiatus, we are bringing back ACCELERATE in January of 2019 in the Bay Area! As someone who attends a LOT of conferences, I still find that there is a bit of a void that we once again hope to fill with an updated version of ACCELERATE. In this iteration, we are going to do some different things in the agenda in addition to our normal 10 tips format. We hope to have a few roundtable discussions where attendees can network and have some face-to-face discussions like what is available at the popular DA Hub conference. We are also bringing in product folks Ben Gaines (Adobe) and Krista Seiden (Google) to talk about the two most popular digital analytics tools. I will even be doing an epic bake-off comparison of Adobe Analytics and Google Analytics with my partner Kevin Willeitner! We may also have some other surprises coming as the event gets closer…

You will be hard-pressed to find a conference at this price that provides as much value in the analytics space. But seats are limited and our past ACCELERATE events all sold out, so I suggest you check out the information now and sign-up before spaces are gone. This is a great way to start your year with a motivating event, at a great location, with great weather and great industry peers! I hope to see you there…

Featured, Testing and Optimization

Adobe Target and Marketo

The Marketo acquisition by Adobe went from rumor to fact earlier today.  This is a really good thing for the Adobe Target community.

I’ve integrated Adobe Target and Marketo together many times over the years and the two solutions complement each other incredibly well.  Independent of this acquisition and of marketing automation in general, I’ve also been saying for years that organizations need to shift their testing programs such that the key focus is on the Knowns and Unknowns if they are to succeed.  Marketo can maybe help those organizations with this vision if it is part of their Adobe stack since Marketo is marketing automation for leads (Unknowns) and customers (Knowns).

The assimilation of Marketo into the Adobe Experience Cloud will definitely deepen the integration between the multiple technologies, but let me lay out here how Target and Marketo work together today to convey the value the two bring together.

Marketo

For those of you in the testing community who are unfamiliar with Marketo or Marketing Automation in general, let me lay out at a very high level some of the things these tools do.

Initially, and maybe most commonly, Marketing Automation starts out in the Lead Management space, which means that when you fill out those forms on websites, the management of that “lead” is then handled by these systems.  At that point, you get emails, deal with salespeople, consume more content, etc…  The management of that process is handled here and, if done well, prospects turn into customers.  Unknowns become Knowns.

Once you are Known, a whole new set of Marketing and Customer Marketing kicks in and that is also typically managed by Marketing Automation technologies like Marketo.

Below is an image taken directly from Marketo’s Solutions website that highlights their offering.

Image from: https://www.marketo.com/solutions/

Adobe Target

Just like Marketo, testing solutions like Adobe Target also focus on different audiences.  The most successful testing programs out there have testing roadmaps and personalization strategies dedicated to getting Unknowns (prospects) to become Knowns (customers).  And when that transition takes place, these newly converted Knowns then fall into tests and personalization initiatives focused on different KPIs than becoming a Known.

Combining the power of testing and the quantification/reporting of consumer experiences (Adobe Target) with the power of marketing automation (Marketo) provides value significantly higher than the value these solutions provide independently.

Target into Marketo

Envision a scenario where you bring testing to Unknowns and use the benefits of testing to find ideal experiences that lead to more form completions.  This is a no-brainer for Marketo customers and works quite well.  At this point, while tests are doing their thing, it is crucial to share this test data with Marketo when end users make the transition from Unknowns to Knowns.  This data will help with the management of leads because we will know what test and test experience influenced their transition to becoming a Known.

Just like Target, Marketo loves data, and the code below is what Target would deliver with tests targeted to Unknowns.  This code delivers to Marketo the test name and also the Adobe Target ID, in the event users of Marketo want to retarget certain Adobe Target visitors.

// Send the Adobe Target activity (campaign) and experience names to Marketo RTP
var customData = {value: '${campaign.name}:${user.recipe.name}'};
rtp('send', 'AdobeTarget', customData);
// Send the visitor's Adobe Target ID (mboxPCId) so Marketo can retarget them later
customData = {value: '${profile.mboxPCId}'};
rtp('send', 'AdobeTarget_ID', customData);

Marketo into Target

Adobe Target manages a rich profile that can be made up of online behaviors, 3rd Party Data, and offline data.  Many Target customers use this profile for strategic initiatives that change and quantify consumer experiences based off of the values of the profile attributes associated with this profile or Adobe Target ID.

In the Marketo world, there are many actions or events that take place as the leads are nurtured and the customers are marketed to.  Organizations differ on how the specific actions or stages of lead or customer management/marketing are defined, but no matter the definitions, those stages/actions/events can be mirrored or shared with Adobe Target.  This effort allows Marketo users to run tests online that are coordinated with their efforts managed offline – hence making those offline efforts more successful.

Push Adobe Target ID into Marketo

Marketo can get this data into Target in one of two ways.  The first method uses the code that I shared above where the Adobe Target ID is shared with Marketo.  Marketo can then generate a report or gather all Adobe Target IDs at a specific stage/event/action and then set up a test targeted to them.  It is literally that easy.

Push Marketo ID into Adobe Target

The second method is a more programmatic approach.  We have the Marketo visitor ID passed to Adobe Target as a special mbox parameter called mbox3rdPartyId.  When Adobe Target sees this value it immediately marries its ID to that ID so that any data shared to Adobe with that ID will be available for any testing efforts.  This process is one that many organizations use with their own internal ID.  At this point, any and all (non-PII) data can be sent to Adobe Target by way of APIs using nothing more than the Marketo ID – all possible because it passed the ID to Adobe Target when the consumer was on the website.
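With at.js, one common way to do this is to return the ID from the global targetPageParams() function before the Target library loads – a minimal sketch, assuming the Marketo lead ID has already been exposed in a JavaScript variable on the page:

// Rough sketch: pass the Marketo lead ID to Adobe Target as mbox3rdPartyId so
// Target can marry its own visitor ID to it (marketoLeadId is an assumed variable)
window.targetPageParams = function() {
  return {
    "mbox3rdPartyId": marketoLeadId
  };
};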

And then the cycle repeats itself with Adobe Target communicating test and experience names again to Marketo but this time for the Knowns – thus making that continued management more effective.

 

Adobe Analytics, Featured

Bonus Tip: Quantifying Content Creation

Last week and this week, I shared some thoughts on how to quantify content velocity in Adobe Analytics. As part of those posts, I showed how to assign a publish date to each piece of content via a SAINT Classification like this:

Once you have this data in Adobe Analytics, you can download your SAINT file and clean it up a bit to see your content by date published in a table like this:

The last three columns split out the Year and the Month, and add a “1” for each post. Adding these three columns allows you to then build a pivot table to see how often content is published by both Month and Year:
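If it helps to visualize, the cleaned-up file might be laid out something like this (the rows below are made-up placeholders, not my actual data):

Blog Post                      Publish Date    Year    Month    Count
my-first-blog-post             08/07/18        2018    8        1
my-second-blog-post            09/04/18        2018    9        1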

Then you can chart these like you would any other pivot table. Here are blog posts by month:

Here are blog posts by year:

As long as you are going to go through the work of documenting the publish date of your key content, you can use this bonus tip to leverage your SAINT Classifications file to do some cool reporting on your content creation.

Adobe Analytics, Featured

Quantifying Content Velocity in Adobe Analytics – Part 2

Last week, I shared how to quantify content velocity in Adobe Analytics. This involved classifying content with the date it was published and looking at subsequent days to see how fast it is viewed. As part of this exercise, the date published was added via the SAINT classification and dates were grouped by Year and Month & Year. At the same time, it is normal to capture the current Date in an eVar (as I described in this old blog post). This Date eVar can also be classified into Year and Year & Month. The classification file might look like this:

Once you have the Month-Year for both Blog Post Launches and Views, you can use the new cross-tab functionality of Analysis Workspace to do some analysis. To do this, you can create a freeform table and add your main content metric (Blog Post Views in my case) and break it down by the Launch Month-Year:

In this case, I am limiting data to 2018 and showing the percentages only. Next, you can add the Blog Post View Month-Year as cross-tab items by dragging over this dimension from the left navigation:

This will insert five Blog Post View Month-Year values across the top like this:

From here, you can add the missing three months, put them in chronological order and then change the column settings like this:

Next, you can change the column percentages so they go by row instead of column by clicking on the row settings gear icon like this:

After all of this, you will have a cross-tab table that looks like this:

Now you have a cross-tab table that allows you to see how blog posts launched in each month are viewed in subsequent months. In this case, you can see that, from January to August, blog posts launched in February had roughly 59% of their views take place in February and the remaining 41% over the next few months.

Of course, the closer you are to the month content was posted, the higher the view percentage will be for the current month and the months that follow. This is due to the fact that, over time, more visitors will end up viewing older content. You can see this above by the fact that 100% of content launched in August was viewed in August (duh!). But in September, August will look more like July does in the table above, because September will take a share of the views of content that was launched in August.

This type of analysis can be used to see how sticky your content is in a way that is similar to the Cohort Analysis visualization. For example, four months after content was launched in March, its view % was 3.5%, whereas, four months after content was released in April, its view % was 5.3%. There are many ways that you can dissect this data and, of course, since this is Analysis Workspace, if you ever want to do a deeper dive on one of the cross-tab table elements, you can simply right-click and build an additional visualization. For example, if I want to see the trend of February content, I can simply right-click on the 59.4% value and add an area visualization like this:

This would produce an additional Analysis Workspace visualization like this:

For a bonus tip related to this concept, click here.

Adobe Analytics, Featured

Quantifying Content Velocity in Adobe Analytics

If publishing content is important to your brand, there may be times when you want to quantify how fast users are viewing your content and how long it takes for excitement to wane. This is especially important for news and other media sites that have content as their main product. In my world, I write a lot of blog posts, so I also am curious about which posts people view and how soon they are viewed. In this post, I will share some techniques for measuring this in Adobe Analytics.

Implementation Setup

The first step to tracking content velocity is to assign a launch date to each piece of content, which is normally the publish date. Using my blog as an example, I have created a SAINT Classification of the Blog Post Title eVar and classified each post with the publish date:

Here is what the SAINT File looks like when completed:
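If you have not built one before, a SAINT file like this is just a tab-delimited export with the eVar value as the key and the classification as an additional column – something along these lines (the rows below are made-up placeholders):

Key                            Publish Date
my-first-blog-post             08/07/18
my-second-blog-post            09/04/18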

The next setup step is to set a date eVar on every website visit. This is as simple as capturing today’s date in an eVar on every hit, which I blogged about back in 2011. Having the current date will allow you to compare the date the post was viewed with the date it was published. Here is an example on my site:
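A minimal sketch of that date capture might look like this (eVar10 is a placeholder slot; any consistent format works as long as it matches the dates in your classification file):

// Rough sketch: stamp every hit with the current date (MM/DD/YY) in an eVar
function getTodayString() {
  var d = new Date();
  var mm = ("0" + (d.getMonth() + 1)).slice(-2);
  var dd = ("0" + d.getDate()).slice(-2);
  var yy = String(d.getFullYear()).slice(-2);
  return mm + "/" + dd + "/" + yy;
}
s.eVar10 = getTodayString();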

Reporting in Analysis Workspace

Once the setup is complete, you can move onto reporting. First, I’ll show how to report on the data in Analysis Workspace. In Workspace, you can create a panel and add the content item you care about (blog post in my example) and then break it down by the launch date and the view date. I recommend setting the date range to begin with the publish date:

In this example, you can see that the blog post launched on 8/7/18 and that 36% of total blog post views since then occurred on the launch date. You can also see how many views took place on each date thereafter. As you would expect, most of the views took place around the launch date and then slowed down in subsequent days. If you want to see how this compares to another piece of content, you can create a new panel and view the same report for another post (making sure to adjust the date range in the new panel to start with the new post’s launch date):

By viewing two posts side by side, I can start to see how usage varies. The unfortunate part is that it is difficult to see which date is “Launch Date,” “Launch Date +1,” “Launch Date +2,” etc… Therefore, Analysis Workspace, in this situation, is good for seeing some ad-hoc data (no pun intended!), but using Adobe ReportBuilder might actually prove to be a more scalable solution.

Reporting in Adobe ReportBuilder

When you want to do some more advanced formulas, sometimes Adobe ReportBuilder is the best way to go. In this case, I want to create a data block that pulls in all of my blog posts and the date each post was published like this:

Once I have a list of the content I care about (blog posts in this example), I want to pull in how many views of the content occurred each date after the publish date. To do this, I have created a set of reporting parameters like this:

The items in green are manually entered by setting them equal to the blog post name and publish date I am interested in from the preceding data block. In this case, I am setting the Start Date equal to the sixth cell in the second column and the Blog Post equal to the cell to the left of that. Once I have done that I create a data block that looks like this:

This will produce the following table of data:

Now I have a daily report of content views beginning with the publish date. Next, I created a second table that references the first one and captures the launch date and the subsequent seven days (you can use more days if you want). This is done by referencing the first eight rows in the preceding table and then creating a sum of all other data to create a table that looks like this:

In this table, I have created a dynamic seven-day distribution and then lumped everything else into the last row. Then I have calculated the percentage and added an incremental percentage formula as well. These extra columns allow me to see the following graphs on content velocity:

The cool part about this process is that it only takes 30 seconds to produce the same reports/graphs for any other piece of content (blog post in my example). All you have to do is alter the items in green and then refresh the data block. Here is the same reporting for a different blog post:

You can see that this post had much more activity early on, whereas the other post started slow and increased later. You could even duplicate each tab in your Excel worksheet so you have one tab for each key content item and then refresh the entire workbook to update the stats for all content at once.

Check out Part 2 of this post here: https://analyticsdemystified.com/featured/quantifying-content-velocity-in-adobe-analytics-part-2/

Featured, google analytics

Google Analytics Segmentation: A “Gotcha!” and a Hack

Google Analytics segments are a commonly used feature for analyzing subsets of your users. However, while they seem fairly simple at the outset, certain use cases may unearth hidden complexity, or downright surprising functionality – as happened to me today! This post will share a gotcha with user-based segments I just encountered, as well as two options for hit-based Google Analytics segmentation. 

First, the gotcha.

One of these things is not like the other

Google Analytics allows you to create two kinds of segments: session-based, and user-based. A session-based segment requires that the behaviour happened within the same session (for example, watched a video and purchased.) A user-based segment requires that one user did those two things, but it does not need to be within the same session.

However, thanks to the help and collective wisdom of Measure Slack, Simo Ahava and Jules Stuifbergen (thank you both!), I stumbled upon a lesser-known fact about Google Analytics segmentation. 

These two segmentation criteria “boxes” do not behave the same:

I know… they look identical, right? (Except for Session vs. User.)

What might the expected behaviour be? The first looks for sessions in which the page abc.html was seen, and the button was clicked in that same session. The second looks for users who did those two things (perhaps in different sessions.) 

When I built a session-based segment and attempted to flip it to user-based, imagine my surprise to find… the session-based segment worked. The user-based segment, with the exact same criteria didn’t work. (Note: It’s logically impossible for sessions to exist in which two things were done, but no users have done those two things…) I will confess that I typically use session-based segmentation far more, as I’m often looking back more than 90 days, so it’s not something I’ve happened upon.

That’s when I found out that if two criteria in a Google Analytics user-based segment are in the same criteria “box”, they have to occur on the same hit. The same functionality and UI works differently depending on if you’re looking at a user- or session-based segment. 

I know.

Note: There is some documentation of this within the segment builder, though not within the main segmentation documentation.

In summary:

If you want to create a User-based segment that looks for two events (or more) occurring for the same user, but not on the same hit? You need to use two separate criteria “boxes”, like this:

So, there you go.

This brings me to the quick hack:

Two Hacks for Hit-Level Segmentation

Once you know about the strange behaviour of User-based segments, you can actually use them to your advantage.

Analysts familiar with Adobe Analytics know that Adobe has three options for segmentation: hit, visit and visitor level. Google Analytics, however, only has session (visit) and user (visitor) level.

Why might you need hit-level segmentation?

Sometimes when doing analysis, we want to be very specific that certain criteria must have taken place on the same hit. For example, the video play on a specific page. 

Since Google Analytics doesn’t have built-in hit-based segmentation, you can use one of two possible hacks:

1. User-segment hack: Use our method above: Create a user-based segment, and put your criteria in the same “box.” Voila! It’s a feature, not a bug! 

2. Sequential segment hack: Another clever method brought to my attention by Charles Farina is to use a sequential segment. Sequential segments evaluate each “step” as a single hit, so this sequential segment is the equivalent of a hit-based segment:  

Need convincing? Here are the two methods, compared. You’ll see the number of users is identical:

(Note that the number of sessions is different since, in the user-based segment, the segment of users who match that criteria might have had other sessions in which the criteria didn’t occur.)

So which hit-level segmentation method should you use? Personally I’d recommend sticking with Charles’ sequential segment methodology, since a major limitation of user-based segments is that they only look back 90 days. However, it may depend on your analysis question as to what’s more appropriate. 

I hope this was helpful! If you have any similar “gotchas” or segmentation hacks you’ve found, please don’t hesitate to share them in the comments. 

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace – Part 4

Last week, I shared how to calculate and incorporate your business requirement completion percentage in Analysis Workspace as part of my series of posts on embedding your business requirements and Solution Design in Analysis Workspace (Part 1, Part 2, Part 3). In this post, I will share a few more aspects of the overall SDR in Workspace solution in case you endeavor to try it out.

Updating Business Requirement Status

Over time, your team will add and complete business requirements. In this solution, adding new business requirements is as simple as uploading a few more rows of data via Data Sources as shown in the “Part 2” blog post. In fact, you can re-use the same Data Sources template and FTP info to do this. When uploading, you have two choices. You can upload only new business requirements or you can re-upload all of your business requirements each time, including the new ones. If you upload only the new ones, you can tie them to the same date you originally used or use the current date. Using the current date allows you to see your requirements grow over time, but you have to be mindful to make sure your project date ranges cover the timeframe for all requirements. What I have done is re-uploaded ALL of my business requirements monthly and changed the Data Sources date to the 1st of each month. Doing this allows me to see how many requirements I had in January, Feb, March, etc., simply by changing the date range of my SDR Analysis Workspace project. The only downside of this approach is that you have to be careful not to include multiple months or you will see the same business requirements multiple times.

Once you have all of your requirements in Adobe Analytics and your Analysis Workspace project, you need to update which requirements are complete and which are not. As business requirements are completed, you will update your business requirement SAINT file to change the completion status of business requirements. For example, let’s say that you re-upload the requirements SAINT file and change two requirements to be marked as “Complete” as shown here in red:

Once the SAINT file has processed (normally 1 day), you would see that 4 out of your 9 business requirements are now complete, which is then reflected in the Status table of the SDR project:

Updating Completion Percentage

In addition, as shown in Part 3 of the post series, the overall business requirement completion percentage would be automatically updated as soon as the two business requirements are flagged as complete. This means that the overall completion percentage would move from 22.22% (2/9) to 44.44% (4/9):

Therefore, any time you add new business requirements, the overall completion percentage would decrease, and any time you complete requirements, the percentage would increase.

Using Advanced Segmentation

For those that are true Adobe Analytics geeks, here is an additional cool tip. As mentioned above, the SAINT file for the business requirements variable has several attributes. These attributes can be used in segments just like anything else in Adobe Analytics. For example, here you see the “Priority” SAINT Classification attribute highlighted:

This means that each business requirement has an associated Priority value, in this case, High, Medium or Low, which can be seen in the left navigation of Analysis Workspace:

Therefore, you can drag over items to create temporary segments using these attributes. Highlighted here, you see “Priority = High” added as a temporary segment to the SDR panel:

Doing this applies the segment to all project data, so only the business requirements that are marked as “High Priority” are included in the dashboard components. After the segment is applied, there are now three business requirements that are marked as high priority, as shown in our SAINT file:

Therefore, since, after the upload described above, two of those three “High Priority” business requirements are complete, the overall implementation completion percentage automatically changes from 44.44% to 66.67% (2 out of 3), as shown here (I temporarily unhid the underlying data table in case you want to see the raw data):

As you can see, the power of segmentation is fully at your disposal to make your Requirements/Solution Design project highly dynamic! That could mean segmenting by requirement owner, variable or any other data points represented within the project! For example, once we apply the “High Priority” segment to the project as shown above, viewing the variable portion of the project displays this:

This now shows all variables associated with “High Priority” business requirements.  This can be useful if you have limited time and/or resources for development.

Another example might be creating a segment for all business requirements that are not complete:

This segment can then be applied to the project as shown here to only see the requirements and variables that are yet to be implemented:

As you can see, there are some fun ways that you can use segmentation to slice and dice your Solution Design! Pretty cool, huh?

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace – Part 3

Over the past two weeks, I have been posting about how to view your business requirements and solution design in Analysis Workspace. First, I showed how this would look in Workspace and then I explained how I created it. In this post, I am going to share how you can extend this concept to calculate the completion percentage of business requirements directly within Analysis Workspace. Completion percentage is important because Adobe Analytics implementations are never truly done. Most organizations are continuously doing development work and/or adding new business requirements. Therefore, one internal KPI that you may want to monitor and share is the completion percentage of all business requirements.

Calculating Requirement Percentage Complete

As shown in the previous posts, you use Data Sources to upload a list of business requirements and each business requirement has one or more Adobe Analytics variables associated to it:

When this is complete, you can see a report like this:

Unfortunately, this report is really showing you how many total variables are being used, not the number of distinct business requirements (Note: You could divide the “1” in event30 by the number of variables, but that can get confusing!). This can be seen by doing a breakdown by the Variable eVar:

Since your task is to see how many business requirements are complete, you can upload a status for each business requirement via a SAINT file like this:

This allows you to create a new calculated metric that counts how many business requirements have a status of complete (based upon the SAINT Classification attribute) like this:

However, this is tricky, because the SAINT Classification that is applied to the Business Requirement metric doesn’t sum the number of completed business requirements, but rather the number of variables associated with completed requirements. This can be seen here:

What is shown here is that there are five total variables associated with completed business requirements out of twenty-five total variables associated with all business requirements. You could divide these two to show that your implementation is 20% complete (5/25), but that is not really accurate. The reality is that two out of nine business requirements are complete, so your actual completion percentage is 22.22% (2/9).

So how do you solve this? Luckily, there are some amazing functions included in Adobe Analytics that can be used to do advanced calculations. In this case, what you want to do is count how many business requirements are complete, not how many variables are complete. To do this, you can use an IF function with a GREATER THAN function to set each row equal to either “1” or “0” based upon its completion status using this formula:

This produces the numbers shown in the highlighted column here:

Next, you want to divide the number of rows that have a value of “1” by the total number of rows (which represents the number of requirements). To do this, you simply divide the preceding metric by the ROW COUNT function, which will produce the numbers shown in the highlighted column here:

Unfortunately, this doesn’t help that much, because what you really want is the sum of the rows (22.22%) versus seeing the percentages in each row. However, you can wrap the previous formula in a COLUMN SUM function to sum all of the individual rows. Here is what the final formula would look like:
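
In rough terms (using Adobe’s function names, and assuming “Completed Requirements” is the classified metric built above), the finished formula works out to something like this:

Column Sum( If( Greater Than( Completed Requirements, 0 ), 1, 0 ) / Row Count )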

This would then produce a table like this:

Now you have the correct requirement percentage completion rate. The last step is to create a new summary number visualization using the column heading in the Requirement Completion % column as shown highlighted here:

To be safe, you should use the “lock” feature to make sure that this summary number will always be tied to the top cell in the column like this:

Before finishing, there are a few clean-up items left to do. You can remove any extraneous columns in the preceding table (which I added just to explain the formula) to speed up the overall project so the final table looks like this:

You can also hide the table completely by unchecking the “Show Data Source” box, which will avoid confusing your users:

Lastly, you can move the completion percentage summary number to the top of the project where it is easily visible to all:

So now you have an easy way to see the overall business requirement completion % right in your Analysis Workspace SDR project!

[Note: The only downside of this overall approach is that the completion status is flagged by a SAINT Classification, which, by definition, is retroactive. This means that the Analysis Workspace project will always show the current completion percentage and will not record the history. If that is important to you, you’d have to import two success events for each business requirement via Data Sources – one for requirements and another for completed requirements – and use formulas similar to the ones described above.]

Click here to see Part 4 for even more cool things related to this concept!

Featured, google analytics

Understanding Marketing Channels in Google Analytics: The Good, The Bad – and a Toy Surprise!

Understanding the effectiveness of marketing efforts is a core use case for Google Analytics. While we may analyze our marketing at the level of an individual site, or ad network, typically we are also looking to understand performance at a higher channel level. (For example, how did my Display ads perform?)

In this post I’ll discuss two ways you can approach this, as well as the gotchas, and even offer a handy little tool you can use for yourself!

Option 1: Channel Groupings in GA

There are two relevant features here:

  1. Default channel groupings
  2. Custom channel groupings

Default Channel Groupings

Default channel groupings are defined rules that apply at the time the data is processed, so they apply from the time you set them up onwards. Note also that the rules in the set execute in order.

The default channel grouping dimension is available throughout Google Analytics, including for use in segments, as a secondary dimension, in custom reports, Data Studio, Advanced Analysis and the API. (Note: It is not included in Big Query.)

Unfortunately, there are some real frustrations associated with this feature:

  1. The default channel groupings that come pre-configured typically aren’t usable as-is. GA provides a set of default rules, but in my experience, they rarely map well to actual marketing efforts. Which leads me to…
  2. You have to customize them. Makes sense – for your data to be useful, it should be customized to your business, right? I always end up editing the default grouping, to take into account the UTM and tracking standards we use. Unfortunately…  
  3. The manual work in customizing them makes kittens cry. Why?
    • You have to manually update them for every.single.view. Default Channel Groupings are a view level asset. So if your company has two views (or worse, twenty!) you need to manually set them up over. and over. again.
    • (“I know! I’ll outsmart GA! I’ll set up the groupings, then copy the view.” Nope, sorry.) Unlike goals, any customizations made to your Default Channel Groupings don’t copy over when you copy a view, even if they were created before you copied it. You start from scratch, with the GA default. So you have to create them. Again.
    • There is no way to create them programmatically. They can’t be edited or otherwise managed via the Management API.
    • Personally, I consider this to be a huge limitation for feature use in an enterprise organization, as it requires an unnecessary level of manual work.
  4. They are not retroactive. This is a common complaint. Honestly, it’s the least of my issues with them. Yes, retroactive would be nice. But I’d take a solve of the issues in #3 any day.

“Okay… I’ll outsmart GA (again)! Let’s not use the default. Let’s just use the custom groupings!” Unfortunately, custom channel groupings aren’t a great substitute either.

Custom Channel Groupings

Custom Channel Groupings are a very similar feature. However, the custom groupings aren’t processed with the data; they’re a rule set applied on top of the data after it’s processed.

The good:

  • Because they are applied on top of data that has already been processed, custom groupings work retroactively on your historical data.

The bad:

  • The custom grouping created is literally only available in one report. You cannot use the dimensions they create in a segment, as a secondary dimension, via the API or in Data Studio. So they have exceptionally limited value. (IMHO they’re only useful for checking a grouping before you set it as the default.)

So, as you may have grasped, the channel groupings features in Google Analytics are necessary… but incredibly cumbersome and manual.

<begging>

Dear GA product team,

For channel groupings to be a useful and more scalable enterprise feature, one of the following things needs to happen:

  1. The Default should be sharable as a configured link, the same way that a segment or a goal works. Create them once, share the link to apply them to other views; or
  2. The Default should be a shared asset throughout the Account (similar to View filters) allowing you to apply the same Default to multiple views; or
  3. The Default should be manageable via the Management API; or
  4. Custom Groupings need to be able to be “promoted” to the default; or
  5. Custom-created channels need to be accessible like any other dimension, for use in segmentation, reports and via the API and Data Studio.

Pretty please? Just one of them would help…

</begging>

So, what are the alternate options?

Option 2: Define Channels within Data Studio, instead of GA

The launch of Data Studio in 2016 created an option that didn’t previously exist: use Data Studio to create your groupings, and don’t bother with the Default Channel Groupings at all.

You can use Data Studio’s CASE formula to recreate all the same rules as you would in the GA UI. For example, something like this:  

CASE
WHEN REGEXP_MATCH (Medium, 'social') OR REGEXP_MATCH (Source, 'facebook|linkedin|youtube|plus|stack.(exc|ov)|twitter|reddit|quora|google.groups|disqus|slideshare|addthis|(^t.co$)|lnk.in') THEN 'Social'
WHEN REGEXP_MATCH (Medium, 'cpc') THEN 'Paid Search'
WHEN REGEXP_MATCH (Medium, 'display|video|cpm|gdn|doubleclick|streamads') THEN 'Display'
WHEN REGEXP_MATCH (Medium, '^organic$') OR REGEXP_MATCH (Source, 'duckduckgo') THEN 'Organic Search'
WHEN REGEXP_MATCH (Medium, '^blog$') THEN 'Blogs'
WHEN REGEXP_MATCH (Medium, 'email|edm|(^em$)') THEN 'Email'
WHEN REGEXP_MATCH (Medium, '^referral$') THEN 'Referral'
WHEN REGEXP_MATCH (Source, '(direct)') THEN 'Direct'
ELSE 'Other'
END

You can then use this newly created “Channel” dimension in Data Studio for your reports (instead of the default.)

Note, however, a few potential downsides:

  • This field is only available in Data Studio (so, it is not accessible for segments, via the API, etc.)
  • Depending on the complexity of your rules, you could bump up against a character limit for CASE formulas in Data Studio (2048 characters.) Don’t laugh… I have one set of incredibly complex channel rules where the CASE statement was 3438 characters… 

Note: If you use BigQuery, you could then use a version of this channel definition in your queries, as well.

And a Toy Surprise!

Let’s say you do choose to use Default Channel Groupings (I do end up using them; I just grumble incessantly during the painful process of setting them up or amending them). You might put a lot of thought into the rules, the order in which they execute, etc. Nonetheless, you’ll still need to check your results after you set them up, to make sure they’re correct.

To do this, I created a little Data Studio report that you are welcome to copy and use for your own purposes. Basically, after you set up your default groupings and collect at least a (full) day’s data, the report allows you to flip through each channel and see which Sources, Mediums and Campaigns are falling into each channel, based on your rules.

mkiss.me/DefaultChannelGroupingCheck
Note: At first it will load with errors, since you don’t have access to my data set. You need to select a data set you have access to, and then the tables will load. 

If you see something that seems miscategorized, you can then edit the rules in the GA admin settings. (Keeping in mind that your edits will only apply moving forward.)

I also recommend you keep documentation of your rules. I use something like this:

I also set up alerts for big increases in the “Other” channel, so that I can catch where the rules might need to be amended. 

Thoughts? Comments?

I hope this is helpful! If there are other ways you do this, I would love to hear about it.

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace – Part 2

Last week, I wrote about a concept of having your business requirements and SDR inside Analysis Workspace. My theory was that putting business requirements and implementation information as close to users as possible could be a good thing. Afterwards, I had some folks ask me how I implemented this, so in this post I will share the steps I took. However, I will warn you that my approach is definitely a “hack” and it would be cool if, in the future, Adobe provided a much better way to do this natively within Adobe Analytics.

Importing Business Requirements (Data Sources)

The first step in the solution I shared is getting business requirements into Adobe Analytics so they can be viewed in Analysis Workspace. To do this, I used Data Sources and two conversion variables – one for the business requirement number and another for the variables associated with each requirement number. While this can be done with any two conversion variables (eVars), I chose to use the Products variable and another eVar because my site wasn’t using the Products variable (since we don’t sell a physical product). You may choose to use any two available eVars. I also used a Success Event because when you use Data Sources, it is best to have a metric to view data in reports (other than occurrences). Here is what my data sources file looked like:

Doing this allowed me to create a one-to-many relationship between Req# (Products) and the variables for each (eVar17). The numbers in event 30 are inconsequential, so I just put a “1” for each. Also note that you need to associate a date with data being uploaded via Data Sources. The cool thing about this is that you can change your requirements when needed by re-uploading the entire file at a later date (keeping in mind that you need to choose your date ranges carefully so you don’t get the same requirement in your report twice!). Another reason I uploaded the requirement number and the variables into conversion variables is that these data points should not change very often, whereas many of the other attributes will change (as I will show next).
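
For illustration, a stripped-down, hypothetical version of the upload rows (the real file also includes the header rows generated by the Data Sources template) might look like this:

Date          Products   Evar 17   Event 30
01/01/2018    Req 001    eVar1     1
01/01/2018    Req 001    event5    1
01/01/2018    Req 002    prop10    1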

Importing Requirement & Variable Meta-Data (SAINT Classifications)

The next step of the process is adding meta-data to the two conversion variables that were imported. Since the Products variable (in my case) contains data related to business requirements, I added SAINT Classifications for any meta-data that I would want to upload for each business requirement. This included attributes like description, owner, priority, status and source.

Note, these attributes are likely to change over time (e.g., status), so using SAINT allows me to update them by simply uploading an updated SAINT file. Here is the SAINT file I started with:
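
As a made-up example, that initial file might contain rows like this:

Key       Description                     Owner   Priority   Status         Source
Req 001   Measure internal search usage   Jane    High       Complete       Marketing
Req 002   Measure blog post engagement    Raj     Medium     Not Complete   Product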

 

The next meta-data upload required is related to variables. In my case, I used eVar17 to capture the variable names and then classified it like this:

As you can see, I used classifications and sub-classifications to document all attributes of variables. These attributes include variable types, descriptions and, if desired, all of the admin console attributes associated with variables. Here is what the SAINT file looks like when completed:

[Note: After doing this and thinking about it for a while, in hindsight, I probably should have uploaded Variable # into eVar17 and made variable name a classification in case I want to change variable names in the future, so you may want to do that if you try to replicate this concept.]

Hence, when you bring together the Data Sources import and the classifications for business requirements and variables, you have all of the data you need to view requirements and associated variables natively in Adobe Analytics and Analysis Workspace as shown here:

Project Curation

Lastly, if you want to minimize confusion for your users in this special SDR project, you can use project curation to limit the items that users will see in the project to those relevant to business requirements and the solution design. Here is how I curated my Analysis Workspace project:

This made it so users only saw these elements by default:

Final Thoughts

This solution has a bit of set-up work, but once you do that, the only ongoing maintenance is uploading new business requirements via Data Sources and updating requirements and variable attributes via SAINT Classifications. Obviously, this was just a quick & dirty thing I was playing around with and, as such, not something for everyone. I know many people are content with keeping this information in spreadsheets, in Jira/Confluence or SharePoint, but I have found that this separation can lead to reduced usage. My hope is that others out there will expand upon this concept and [hopefully] improve it. If you have any additional questions/comments, please leave a comment below.

To see the next post in this series, click here.

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace

Those who know me, know that I have a few complaints about Adobe Analytics implementations when it comes to business requirements and solution designs. You can see some of my gripes around business requirements in the slides from my 2017 Adobe Summit session and you can watch me describe why Adobe Analytics Solution Designs are often problematic in this webinar (free registration required). In general, I find that:

  • Too few organizations have defined analytics business requirements
  • Most Solution Designs are simply lists of variables and not tied to business requirements
  • Oftentimes, Solution Designs are outdated or inaccurate

When I start working with new clients, I am shocked at how few have their Adobe Analytics implementation adequately organized and documented. One reason for this is that requirements documents and solution designs tend to live on a [digital] shelf somewhere, and, as you know, out of sight often means out of mind. For this reason, I have been playing around with something in this area that I wanted to share. To be honest, I am not sure if the concept is the right solution, but my hope is that some of you out there can think about it and help me improve upon it.

Living in Workspace

It has become abundantly clear that the future of Adobe Analytics is Analysis Workspace. If you haven’t already started using Workspace as your default interface for Adobe Analytics, you will be soon. Most people are spending all of their time in Analysis Workspace, since it is so much more flexible and powerful than the older “SiteCatalyst” interface. This got me thinking… “What if there were a way to house all of your Adobe Analytics business requirements and the corresponding Solution Design as a project right within Analysis Workspace?” That would put all of your documentation a few clicks away from you at all times, meaning that there would be no excuse to not know what is in your implementation, which variables answer each business requirement and so on.

Therefore, I created this:

The first Workspace panel is simply a table of contents with hyperlinks to the panels below it. The following will share what is contained within each of the Workspace panels.

The first panel is simply a list of all business requirements in the Adobe Analytics implementation, which for demo purposes is only two:

The second panel shows the same business requirements split out by business priority, in case you want to look at ones that are more important than others:

One of the ways you can help your end-users understand your implementation is to make it clear which Adobe Analytics variables (reports) are associated with each business requirement. Therefore, I thought it would make sense to let users breakdown each business requirement by variable as shown here:

Of course, there will always be occasions where you just want to see a list of all of your Success Events, eVars and sProps, so I created a breakdown by variable type:

Since each business requirement should have a designated owner, the following breakdown allows you to see all business requirements broken down by owner:

Lastly, you may want to track which business requirements have been completed and which are still outstanding. The following breakdown allows you to see requirements by current implementation status:

Maximum Flexibility

As you can see, the preceding Analysis Workspace project, and the panels contained within, provide an easy way to understand your Adobe Analytics implementation. But since you can break anything down by anything else in Analysis Workspace, these are just a few sample reports of the many that could be created. For example, what if one of my users wanted to drill deep into the first business requirement and see what variables it uses, descriptions of those variables and even the detailed settings of those variables (e.g., serialization, expiration, etc.)? All of these components can be incorporated into this solution such that users can simply choose from a list of curated Analysis Workspace items (left panel) and drop them in as desired, as shown here:

Granted, it isn’t as elegant as seeing everything in an Excel spreadsheet, but it is convenient to be able to see all of this detail without having to leave the tool! And maybe one day, it will be possible to see multiple items on the same row in Analysis Workspace, which would allow this solution to look more like a spreadsheet. I also wish there were a way to hyperlink right from the variable (report) name to a new project that opens with that report, but maybe that will be possible in the future.

If you want to see the drill-down capabilities in action, here is a link to a video that shows me doing drill-downs live:

Summary

So what do you think? Is this something that your Adobe Analytics users would benefit from? Do you have ideas on how to improve it? Please leave a comment here…Thanks!

P.S. To learn how I created the preceding Analysis Workspace project, check out Part Two of this post.

Adobe Analytics, Featured

Transaction ID – HR Example

The Transaction ID feature in Adobe Analytics is one of the most underrated in the product. Transaction ID allows you to “close the loop,” so to speak, and import offline metrics related to online activity and apply those metrics to pre-existing dimension values.  This means that you can set a unique ID online and then import offline metrics tied to that unique ID and have the offline metrics associated with all eVar values that were present when the online ID was set. For example, if you want to see how many people who complete a lead form end up becoming customers a few weeks later, you can set a Transaction ID and then later import a “1” into a Success Event for each ID that becomes a customer. This will give “1” to every eVar value that was present when the Transaction ID was set, such as campaign code, visit number, etc…. It is almost like you are tricking Adobe Analytics into thinking that the offline event happened online. In the past, I have described how you could use Transaction ID to import recurring revenue and import product returns, but in this post, I will share another example related to Human Resources and recruiting.

Did They Get Hired?

So let’s imagine that you work for an organization that uses Adobe Analytics and hires a lot of folks. It is always a good thing if you can get more groups to use analytics (to justify the cost), so why not have the HR department leverage the tool as well? On your website, you have job postings and visitors can view jobs and then click to apply. You would want to set a success event for “Job Views” and another for “Job Clicks” and store the Job ID # in an eVar. Then if a user submits a job application, you would capture this with a “Job Applications” Success Event. Thus, you would have a report that looks like this:

Let’s assume that your organization is also using marketing campaigns to find potential employees. These campaign codes would be captured in the Campaigns (Tracking Code) eVar and, of course, you can also see all of these job metrics in this and any other eVar reports:

But what if you wanted to see which of these job applicants were actually hired? Moreover, what if you wanted to see which marketing campaigns led to hires vs. just unqualified applicants? All of this can be done with Transaction ID. As long as you have some sort of back-end system that knows the unique “transaction” ID and knows if a hire took place, you can upload the offline metric and close the loop. Here is what the Transaction ID upload file might look like:
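
As a rough, hypothetical sketch (the ID format, event numbers and column labels are made up – the real columns come from the Data Sources template you configure), the online tag might look like this:

// Online, when the job application is submitted:
s.transactionID = "JOB-000123";   // same unique ID stored in the back-end recruiting system
s.events = "event3";              // hypothetical "Job Applications" success event

Weeks later, the offline rows you upload just need that same ID plus the new metric, roughly:

Date         transactionID   Event 4 (Job Hires)
07/30/2018   JOB-000123      1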

Notice that we are setting a new “Job Hires” Success Event and tying it to the Transaction ID. This will bind the offline metric to the Job # eVar value, the campaign code and any other eVars. Once this has loaded, you can see a report that looks like this:

Additionally, you can then switch to the Campaigns report to see this:

This allows you to then create Calculated Metrics to see which marketing campaigns are most effective at driving new hires.

Are They Superstars?

If you want to get a bit more advanced with Transaction ID, you can extend this concept to import additional metrics related to employee performance. For example, let’s say that each new hire is evaluated after their first six months on the job and that they are rated on a scale of 1 (bad) to 10 (great). In the future, you can import their performance as another numeric Success Event (just be sure to have your Adobe account manager extend Transaction ID beyond the default 90 days):

Which will allow you to see a report like this:

Then you can create a Calculated Metric that divides the rating by the number of hires. This will allow you to see ratings per hire in any eVar report, like the Campaigns report shown here:

Final Thoughts

This is a creative way to apply the concept of Transaction ID, but as you can imagine, there are many other ways to utilize this functionality. Anytime that you want to tie offline metrics to online metrics, you should consider using Transaction ID.

Conferences/Community, Featured

ACCELERATE 2.0 coming in 2019: Save the Date

After a brief hiatus while we examined the ever-changing conference landscape and regrouped here at Analytics Demystified, I am delighted to announce that our much loved ACCELERATE conference will be returning in January 2019.

On January 25th we will be gathering in Los Gatos, California at the beautiful Toll House Hotel to ACCELERATE attendees’ knowledge of digital measurement and optimization via our “Ten Tips in Twenty Minutes” format.  If you haven’t experienced our ground-breaking “Ten Tips” format before … think of it as a small firehose of information, aimed directly at you, in rapid-fire succession all morning long.

What’s more, as part of the evolution of ACCELERATE, the afternoon will feature both a keynote presentation that we think you will love and a session of intimate round-tables led by each of our “Ten Tips” speakers designed to allow participants to dig into each topic more deeply.  I am especially excited about the round-tables since, as an early participant and organizer in the old X Change conference, I have seen first-hand how deep these sessions can go, and how valuable they can be (when done properly!)

Also, as we have done in the past, on Thursday, January 24th, the Partners at Analytics Demystified will be leading half-day training sessions.  Led by Adam Greco, Brian Hawkins, Kevin Willeitner, Michele Kiss, Josh West, Tim Patten, and possibly … yours truly … these training sessions will cover the topics that digital analysts need most to ACCELERATE their own knowledge of Adobe and Google, analytics and optimization in practice, and their own professional careers.

But wait, there is one more thing!

While we have long been known for our commitment to the social aspects of analytics via Web Analytics Wednesday and the “lobby bar” gathering model … at ACCELERATE 2.0 we will be offering wholly social activities for folks who want to hang around and see a little more of Los Gatos.  Want to go mountain biking with Kevin Willeitner?  Or hiking with Tim Patten and Michele Kiss?  Now is your chance!

Watch for more information including our industry-low ticket prices, scheduling information, and details about hotel, training, and activities in the coming weeks … but for now we hope you will save January 24th and January 25th to join us in Los Gatos, California for ACCELERATE 2.0!

Adobe Analytics, Featured

Return Frequency % of Total

Recently, a co-worker ran into an issue in Adobe Analytics related to Return Frequency. The Return Frequency report in Adobe Analytics is not one that I use all that often, but it looks like this:

This report simply shows a distribution of how long it takes people to come back to your website. In this case, my co-worker was looking to show these visit frequencies as a percentage of all visits. To do this, she created a calculated metric that divided visits by the total number of visits like this:

Then she added it to the report as shown here:

At this point, she realized that something wasn’t right. As you can see here, the total number of Visits is 5,531, but when she opened the Visits metric, she saw this:

Then she realized that the Return Frequency report doesn’t show 1st time visits and even though you might expect the % of Total Visits calculated metric to include ALL visits, it doesn’t. This was proven by applying a 1st Time Visits segment to the Visits report like this:

Now we can see that when subtracting the 1st time visits (22,155) from the total visits (27,686), we are left with 5,531, which is the amount shown in the Return Frequency report. Hence, it is not as easy as you’d think to see the % of total visits for each return frequency row.

Solution #1 – Adobe ReportBuilder

The easiest way to solve this problem is to use Adobe ReportBuilder. Using ReportBuilder, you can download two data blocks – one for Return Frequency and one for Visits:

Once you have downloaded these data blocks you can create new columns that divide each row by the correct total number of visits to see your % of total:

In this case, I re-created the original percentages shown in the Return Frequency report, but also added the desired % of Total visits in a column next to it so both could be seen.

Solution #2 – Analysis Workspace & Calculated Metrics

Since Analysis Workspace is what all the cool kids are using these days, I wanted to find a way to get this data there as well. To do this, I created a few new Calculated Metrics that used Visits and Return Frequency. Here is one example:

This Calculated Metric divides Visits where Return Frequency was less than 1 day by all Visits. Here is what it looks like when you view Total visits, the segmented version of Visits and the Calculated Metric in a table in Analysis Workspace:
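
In rough terms, each of these metrics boils down to something like this (the segment name is illustrative):

Visits (segment: Return Frequency = "Less than 1 day") / Visits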

Here you can see that the total visits for June is 27,686, that the less than 1 day visits were 2,276 and that the % of Total Visits is 8.2%. You will see that these figures match exactly what we saw in Adobe ReportBuilder as well (always a good sign!). Here is what it looks like if we add a few more Return Frequencies:

Again, our numbers match what we saw above. In this case, there is a finite number of Return Frequency options, so even though it is a bit of a pain to create a bunch of new Calculated Metrics, once they are created, you won’t have to do them again. I was able to create them quickly by using the SAVE AS feature in the Calculated Metrics builder.

As a bonus, you can also right-click and create an alert for one or more of these new calculated metrics:

Summary

So even though Adobe Analytics can have some quirks from time to time, as shown here, you can usually find multiple ways to get to the data you need if you understand all of the facets of the product. If you know of other or easier ways to do this, please leave a comment here. Thanks!

Adobe Analytics, Featured

Measuring Page Load Time With Success Events

One of the things I have noticed lately is how slowly some websites are loading, especially media-related websites. For example, recently I visited wired.com and couldn’t get anything to work. Then I looked at Ghostery and saw that they had 126 tags on their site and a page load time of almost 20 seconds!

I have seen lots of articles showing that fast loading pages can have huge positive impacts on website conversion, but the proliferation of JavaScript tags may be slowly killing websites! Hopefully some of the new GDPR regulations will force companies to re-examine how many tags are on their sites and whether all of them are still needed. In the meantime, I highly recommend that you use a tool like ObservePoint to understand how many tags are lingering on your site now.

As a web analyst, you may want to measure how long it is taking your pages to load. Doing this isn’t trivial, as can be seen in my partner Josh West’s 2015 blog post. In this post, Josh shows some of the ways you can capture page load time in a dimension in Adobe or Google Analytics, though doing so is not going to be completely exact. Regardless, I suggest you check out his post and consider adding this dimension to your analytics implementation.

One thing that Josh alluded to, but did not go into depth on, is the idea of storing page load time as a metric. This is quite different from capturing the load time in a dimension, so I thought I would touch upon how to do this in Adobe Analytics (the same approach works in Google Analytics). If you want to store page load time as a metric in Adobe Analytics, you would pass the actual load time (in seconds or milliseconds) to a Numeric Success Event. This creates an aggregated page load time metric that increases with every website page view. This new metric can be divided by Page Views, or you can set a separate page load denominator success event (a counter event) if you are not going to track page load time on every page. Here is what you might see if you set the page load time and denominator metrics in the debugger:
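
As a hypothetical sketch (the event numbers are made up – use whatever numeric and counter events are free in your report suite), the tagging might look like this:

// event10 = numeric success event holding the measured load time (in seconds)
// event11 = counter success event used as the denominator
var loadTime = 2.35; // measured however you prefer (e.g. via the Navigation Timing API)
s.events = "event10=" + loadTime + ",event11";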

You would also want to capture the page name in an eVar so you can easily see the page load time metrics by page. This is what the data might look like in a page name (actual page names hidden here):

In this case, there is a calculated metric that is dividing the aggregated page load time by the denominator to see an average page load time for each page. There are also ways that you can use Visit metrics to see the average page load time per visit. Regardless of which version you use, this type of report can help you identify your problem pages so you can see if there are things you can do to improve conversion. I suggest combining this with a Participation report to see which pages impact your conversion the most, but are loading slowly.

Another cool thing you can do with this data is to trend the average page load time for the website overall. Since you already have created the calculated metric shown below, you can simply open this metric by itself (vs. viewing by page name), to see the overall trend of page load speeds for your site and then set some internal targets or goals to strive for in the future.

Featured, google analytics, Reporting

A Scalable Way To Add Annotations of Notable Events To Your Reports in Data Studio

Documenting and sharing important events that affected your business are key to an accurate interpretation of your data.

For example, perhaps your analytics tracking broke for a week last July, or you ran a huge promo in December. Or maybe you doubled paid search spend, or ran a huge A/B test. These events are always top of mind at the time, but memories fade quickly, and turnover happens, so documenting these events is key!

Within Google Analytics itself, there’s an available feature to add “Annotations” to your reports. These annotations show up as little markers on trend charts in all standard reports, and you can expand to read the details of a specific event.

However, there is a major challenge with annotations as they exist today: They essentially live in a silo – they’re not accessible outside the standard GA reports. This means you can’t access these annotations in:

  • Google Analytics flat-table custom reports
  • Google Analytics API data requests
  • Big Query data requests
  • Data Studio reports

While I can’t solve All.The.Things, I do have a handy option to incorporate annotations into Google Data Studio. Here’s a quick example:

Not too long ago, Data Studio added a new feature that essentially “unified” the idea of a date across multiple data sources. (Previously, a date selector would only affect the data source you had created it for.)

One nifty application of this feature is the ability to pull a list of important events from a Google Spreadsheet into your Data Studio report, so that you have a very similar feature to Annotations.

To do this:

Prerequisite: Your report should really include a Date filter for this to work well. You don’t want all annotations (for all time) to show, as it may be overwhelming, depending on the timeframe.

Step 1: Create a spreadsheet that contains all of your GA annotations. (Feel free to add any others, while you’re at it. Perhaps yours haven’t been kept very up to date…! You’re not alone.)

I did this simply by selecting the entire timeframe of my data set and copy-pasting from the Annotations table in GA into a spreadsheet.

You’ll want to include these dimensions in your spreadsheet:

  • Date
  • The contents of the annotation itself
  • Who added it (why not, might as well)

You’ll also want to add a “dummy metric”, which I just created as Count, which is 1 for each row. (Technically, I threw a formula in to put a one in that row as long as there’s a comment.)
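
For example, assuming (hypothetically) that the annotation text lives in column B, a spreadsheet formula along these lines does the trick:

=IF(B2<>"", 1, 0)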

Step 2: Add this as a Data Source in Data Studio

First, “Create New Data Source”

Then select your spreadsheet:

It should happen automatically, but just confirm that the date dimension is correct:

Step 3: Create a data table

Now you create a data table that includes those annotations.

Here are the settings I used:

Data Settings:

  • Dimensions:
    • Date
    • Comment
    • (You could add the user who added it, or a contact person, if you so choose)
  • Metric:
    • Count (just because you need something there)
  • Rows per Page:
    • 5 (to conserve space)
  • Sort:
    • By Date (descending)
  • Default Date Range:
    • Auto (This is important – this is how the table of annotations will update whenever you use the date selector on the report!)

Style settings:

  • Table Body:
    • Wrap text (so they can read the entire annotation, even if it’s long)
  • Table Footer:
    • Show Pagination, and use Compact (so if there are more than 5 annotations during the timeframe the user is looking at, they can scroll through the rest of them)

Apart from that, a lot of the other choices are stylistic…

  • I chose a lot of things based on the data/pixel ratio:
    • I don’t show row numbers (unnecessary information)
    • I don’t show any lines or borders on the table, or fill/background for the heading row
    • I choose a small font, just since the data itself is the primary information I want the user to focus on

I also did a couple of hack-y things, like just covering over the Count column with a grey filled box. So fancy…!

Finally, I put my new “Notable Events” table at the very bottom of the page, and set it to show on all pages (Arrange > Make Report Level.)

You might choose to place it somewhere else, or display it differently, or only show it on some pages.

And that’s it…!

But, there’s more you could do 

This is a really simple example. You can expand it out to make it even more useful. For example, your spreadsheet could include:

  • Brand: Display (or allow filtering) of notable events by Brand, or for a specific Brand plus Global
  • Site area: To filter based on events affecting the home page vs. product pages vs. checkout (etc)
  • Type of Notable Event: For example, A/B test vs. Marketing Campaign vs. Site Issue vs. Analytics Issue vs. Data System Affected (e.g. GA vs. AdWords)
  • Country… 
  • There are a wide range of possible use cases, depending on your business

Your spreadsheet can be collaborative, so that others in the organization can add their own events.

One other cool thing is that it’s very easy to just copy-paste rows in a spreadsheet. So let’s say you had an issue that started June 1 and ended June 7. You could easily add one row for each of those days in June, so that even if a user pulled say, June 6-10, they’d see the annotation noted for June 6 and June 7. That’s more cumbersome in Google Analytics, where you’d have to add an annotation for every day.

Limitations

It is, of course, a bit more leg work to maintain both this set of annotations, AND the default annotations in Google Analytics. (Assuming, of course, that you choose to maintain both, rather than just using this method.) But unless GA exposes the contents of the annotations in a way that we can pull in to Data Studio, the hack-y solution will need to be it!

Solving The.Other.Things

I won’t go into it here, but I mentioned the challenge of the default GA annotations and both API data requests and Big Query. This solution doesn’t have to be limited to Data Studio: you could also use this table in Big Query by connecting the spreadsheet, and you could similarly pull this data into a report based on the GA API (for example, by using the spreadsheet as a data source in Tableau.)

Thoughts? 

It’s a pretty small thing, but at least it’s a way to incorporate comments on the data within Data Studio, in a way that the comments are based on the timeframe the user is actually looking at.

Thoughts? Other cool ideas? Please leave them in the comments!

Adobe Analytics, Featured

Product Ratings/Reviews in Adobe Analytics

Many retailers use product ratings as a way to convince buyers that they should take the next step in conversion, which is usually a cart addition. Showing how often a product has been reviewed and its average product rating helps build product credibility and is something consumers have grown used to from popular sites like amazon.com.

Digital analytics tools like Adobe Analytics can be used to determine whether the product ratings on your site/app are having a positive or negative impact on conversion. In this post, I will share some ways you can track product review information to see its impact on your data.

Impact of Having Product Ratings/Reviews

The first thing you should do with product ratings and reviews is to capture the current avg. rating and # of reviews in a product syntax merchandising eVar when visitors view the product detail page. In order to save eVars, I sometimes concatenate these two values with a separator and then use RegEx and the SAINT Classification RuleBuilder to split them out later. In the preceding screenshot, for example, you might pass 4.7|3 to the eVar and then split those values out later via SAINT. Capturing these values at the time of the product detail page view allows you to lock in what the rating and # of reviews was at the time of the product view. Here is what the rating merchandising eVar might look like once split out:
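
For the splitting step itself, an illustrative regular expression for the Rule Builder might be something like this, where the first capture group holds the average rating (4.7) and the second holds the number of reviews (3):

^([0-9.]+)\|([0-9]+)$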

You can also group these items using SAINT to see how ratings between 4.0 – 4.5 perform vs. 4.5 – 5.0, etc… You can also sort this report by your conversion metrics, but if you do so, I would recommend adding a percentile function so you don’t just see rows that have very few product views or orders. The same type of report can be run for # of reviews as well:

Lastly, if you have products that don’t have ratings/reviews at all, the preceding reports will have a “None” row, which will allow you to see the conversion rate when no ratings/reviews exist – useful information for gauging the overall impact of ratings/reviews on your site.

Average Product Rating Calculated Metric

In addition to capturing the average rating and the # of reviews in an eVar, another thing you can do is capture the same values in numeric success events. As a reminder, a numeric success event is a metric that can be incremented by more than one in each server call. For example, when a visitor views the following product page, the average product rating of 4.67 is passed to numeric success event 50. This means that event 50 is increased for the entire website by 4.67 each time this product is viewed. Since the Products variable is also set, this 4.67 is “bound” (associated) to product H8194. At the same time, we need a denominator to divide this rating by to compute the overall product rating average. In this case, event 51 is set to “1” each time a rating is present (you cannot use the Product Views metric since there may be cases in which no rating is present but there is a product view). Here is what the tagging might look like when it is complete:
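
A stripped-down sketch of that tag might look like this (standard AppMeasurement syntax; the event numbers and product ID follow the example above, but your variable assignments may differ):

// On the product detail page (rating 4.67, product H8194):
s.products = ";H8194";
s.events = "prodView,event50=4.67,event51";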

Below is what the data looks like once it is collected:

You can see Product Views, the accumulated star ratings, the number of times ratings were available and a calculated metric to compute the average rating for each product. Given that we already have the average product rating in an eVar, this may not seem important, but the cool part of this is that now the product rating can be trended over time. Simply add a chart visualization and then select a specific product to see how its rating changes over time:

The other cool part of this is that you can leverage your product classifications to group these numeric ratings by product category:

Using both eVars and success events to capture product ratings/reviews on your site allows you to capture what your visitors saw for each product while on your product detail pages. Having this information can be helpful to see if ratings/reviews are important to your site and to be aware of the impact for each product and/or product category.

Adobe Analytics, Featured

Engagement Scoring Using Approx. Count Distinct

Back in 2015, I wrote a post about using Calculated Metrics to create an Engagement Score. In that post, I mentioned that it was possible to pick a series of success events and multiply them by some sort of weighted number to compute an overall website engagement score. This was an alternative to a different method of tracking visitor engagement via numeric success events set via JavaScript (which was also described in the post). However, given that Adobe has added the cool Approximate Count Distinct function to the analytics product, I recently had an idea about a different way to compute website engagement that I thought I would share.

Adding Depth to Website Engagement

In my previous post, website engagement was computed simply by multiplying chosen success events by a weighted multiplier like this:

This approach is workable but lacks a depth component. For example, the first parameter looks at how many Product Views take place but doesn’t account for how many different products are viewed. There may be a situation in which you want to assign more website engagement to visits that get visitors to view multiple products vs. just one. The same concept could apply to Page Views and Page Names, Video Views and Video Names, etc…

Using the Approximate Count Distinct function, it is now possible to add a depth component to the website engagement formula. To see how this might work, let’s go through an example. Imagine that in a very basic website engagement model, you want to look at Blog Post Views and Internal Searches occurring on your website. You have success events for both Blog Post Views and Internal Searches and you also have eVars that capture the Blog Post Titles and Internal Search Keywords.

To start, you can use the Approximate Count Distinct function to calculate how many unique Blog Post Titles exist (for the chosen date range) using this formula:

Next, you can multiply the number of Blog Post Views by the number of unique Blog Post Titles to come up with a Blog Post Engagement score as shown here:
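
In rough formula terms, those two steps look something like this (using the metric and eVar names from this example):

Unique Blog Posts = Approximate Count Distinct( Blog Post Title )
Blog Post Engagement = Blog Post Views * Approximate Count Distinct( Blog Post Title )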

Note that since the Approximate Count Distinct function is not 100% accurate, the numbers will differ slightly from what you would get with a calculator, but in general, the function will be at least 95% accurate.

You can repeat this process for Internal Search Keywords. First, you compute the Approximate Count of unique Search Keywords like this:

Then you create a new calculated metric that multiplies the number of Internal Searches by the unique number of Keywords. Here is what a report looks like with all six metrics:

Website Engagement Calculation

Now that you have created the building blocks for your simplistic website engagement score, it is time to put them together and add some weighting. Weighting is important, because it is unlikely that your individual elements will have the same importance to your website. In this case, let’s imagine that a Blog Post View is much more important than an Internal Search, so it is assigned a weight score of 90, whereas a score of 10 is assigned to Internal Searches. If you are creating your own engagement score, you may have more elements and can weight them as you see fit.

In the following formula, you can see that I am adding the Blog Post engagement score to the Internal Search engagement score and adding the 90/10 weighting all in one formula. I am also dividing the entire formula by Visits to normalize it, so my engagement score doesn’t rise or fall based upon differing number of Visits over time:
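
Written out, the combined metric is roughly:

( ( Blog Post Views * Approximate Count Distinct( Blog Post Title ) ) * 90
  + ( Internal Searches * Approximate Count Distinct( Internal Search Keyword ) ) * 10 )
/ Visits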

Here you can see a version of the engagement score as a raw number (multiplied by 90 & 10) and then the final one that is divided by Visits:

Finally, you can plot the engagement score in a trended bar chart. In this case, I am trending both the engagement score and visits in the same chart:

In the end, this engagement score calculation isn’t significantly different from the original one, but adding the Approximate Count Distinct function allows you to add some more depth to the overall calculation. If you don’t want to multiply the number of success event instances by ALL of the unique count of values, you could alternatively use an IF function with the GREATER THAN function to cap the number of unique items at a certain amount (i.e. if there are more than 50 unique Blog Post Titles, use 50; otherwise, use the unique count).
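Here is a tiny sketch of that capping logic in JavaScript terms, just to make the idea concrete. The real version would be built with the IF and GREATER THAN functions in the Calculated Metric builder, and both the function name and the cap of 50 below are made up for illustration:

// Hypothetical sketch of capping the distinct count used in the engagement score.
// "uniqueBlogPosts" stands in for Approx. Count Distinct (Blog Post Title); the cap of 50 is arbitrary.
function cappedUniqueCount(uniqueBlogPosts) {
  // If more than 50 unique Blog Post Titles, use 50; otherwise, use the unique count
  return uniqueBlogPosts > 50 ? 50 : uniqueBlogPosts;
}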

The best part of this approach is that it requires no JavaScript tagging (assuming you already have the success events and eVars you need in the calculation). So you can play around with the formula and its weightings with no fear of negatively impacting your implementation and no IT resources! I suggest that you give it a try and see if this type of engagement score can be used as an overall health gauge of how your website is performing over time.

Featured, Testing and Optimization

Adobe Insider Awesomeness and Geo Test deep dive

Adobe Insider and EXBE

The first Adobe Insider with Adobe Target took place on June 1st in Atlanta, Georgia.  I wrote a blog post a couple of weeks back about the multi-city event but after attending the first one, I thought I would share some takeaways.  

The event was very worthwhile and everyone that I talked to was glad to have attended.  The location was an old theatre and Hamilton was even set to run in that building later that evening.  Had I known that my flight back to Chicago that evening would be delayed by four hours, I would have tried to score a ticket.  The Insider Tour is broken down into two tracks: an Analytics one and an Adobe Target/Personalization one.  My guess is that there were about 150 to 180 attendees, which made for a more social and intimate gathering.

The Personalization track got to hang directly with the Target Product Team and hear some presentations on what they are working on, what is set to be released, and they even got to give some feedback as to product direction and focus.

The roundtable discussions went really well with lots of interaction and feedback.  I especially found it interesting to see the company-to-company conversations taking place.  The roundtable that I was at had really advanced users of Adobe Target as well as brand new users, which allowed newbies to get advice and tips directly from other organizations vs. vendors or consultants.

As for what the attendees liked most, they seemed to really enjoy meeting and working directly with the Product Team members, but the biggest and most popular thing of the day was EXBE.  EXBE stands for “Experience Business Excellence”.  You are not alone if that doesn’t roll off the tongue nicely.  Essentially, this all translates to someone (not Adobe and not a consultant) sharing a case study of a test that they ran.  The test could be simple or the test could be very complex; it doesn’t matter.  The presenter would simply share any background, the test design, the setup, and any results that they could share.

Home Depot shared a case study at this year’s event and it was a big hit.  Priyanka, from Home Depot, walked attendees through a test that made a very substantial impact on Home Depot’s business.  Attendees asked a ton of questions about the test and the conversation even turned into a geek out.  Priyanka made really cool use of multiple locations within a single experience.  This capability maps back to using multiple mboxes in the same experience, and some advanced users didn’t even know it was possible.

So, if you are in LOS ANGELES, CHICAGO, NEW YORK, or DALLAS and plan on attending the Insider Tour, I strongly encourage you to submit a test and present it.  Even if the test seems very straightforward or not that exciting, there will be attendees who will benefit substantially.  The presentation could be 5 minutes or 30 minutes, and there is no need to worry if you can’t share actual results.  It is also a great opportunity to present to your peers and in front of a very friendly audience.  You can register here or via the very nerdy non-mboxy CTA below (see if you can figure out what I am doing here) if you are interested.

Sample Test and feedback…

At the event that day, an attendee was telling me that they don’t do anything fancy with their tests; otherwise, they would have submitted something and gotten the experience of presenting to fellow testers.  I explained that I don’t think that matters as long as the test is valuable to you or to your business.  I then described a very simple test that I am running on the Demystified site that some might think is simple but would be a good example of a test to present.

Also, at the event, a few people asked that I write more about test setup and some of the ways I approach test setup within Target.  So, I thought I would walk through the above-mentioned geo-targeted test that I have running on the Demystified website.

 

Test Design and Execution

Hypothesis

Adam and I are joining Adobe on the Adobe Insider Tour in Atlanta, Los Angeles, Chicago, New York, and Dallas.  We hypothesize that geo-targeting a banner to those five cities encouraging attendance will increase clicks on the hero compared to the rotating carousel that is hard-coded into the site.  We also hope that if some of our current or previous customers didn’t know about the Insider event, the test might make them aware of it so they can attend.

Built into Adobe Target is geo-targeting based on reverse IP lookup.  Target uses the same provider that Analytics uses, and users can target based on zip code, city, state, DMA, and country.  I chose DMA so as to get the biggest reach.

The data in this box represents the geo attributes for YOU, based on your IP address.  I am pumping this in via a test on this page.

Default Content – if you are seeing this, you are not getting the test content from Target

Test Design

So as to make sure we have a control group and still get our message out to as many people as possible, we went with a 90/10 split.  Of course, this is not ideal for sample size calculations, etc., but that is a whole other subject.  This post is more about the tactical steps of a geo-targeted test.

Experience A:  10% holdout group to serve as my baseline (all five cities will be represented here)

Experience B:  Atlanta 

Experience C:  Los Angeles

Experience D:  Chicago

Experience E:  New York

Experience F:  Dallas

I also used an Experience Targeted test in the event that someone got into the test and happened to travel to another city that was part of our test.  The Experience Targeted test enables their offer to change to the corresponding test Experience.

The banner would look like this (I live in the Chicago DMA so I am getting this banner:).  When I go to Los Angeles next week, I will get the one for Los Angeles.  If I had used an A/B test, I would continue to get Chicago since that is where I was first assigned.

Profile to make this happen

To get my 10% group, I have to use Target profiles.  There is no way to use % allocation coupled with visitor attributes like DMA, so profiles are the way to go.  I’ve long argued that the most powerful part of the Adobe Target platform is the ability to profile visitors client-side or server-side.  For this use case, we are going to use a server-side profile script to get our 10% control group.  Below is my script and you are welcome to copy it into your account.  Just be sure to name it “random_10_group”.

This script generates a random number and, based on that number, puts visitors into 1 of 10 groups.  Each group or set of groups can be used for targeting.  You can also force yourself into a group by appending the URL parameter testgroup= followed by the number of the group that you want.  For example, http://analyticsdemystified.com/?testgroup=4 would put me in group4 for this profile.  This is helpful when debugging or QA’ing tests that make use of it.

These groups are mutually exclusive as well so if your company wants to incorporate test swimlanes, this script will be helpful.

if (!user.get('random_10_group')) {
   // Random number from 0-99 so each of the 10 groups gets an even 10% share
   var ran_number = Math.floor(Math.random() * 100),
       query = (page.query || '').toLowerCase(),
       forced = '';
   // Allow forcing a group for QA/debugging via a ?testgroup=N query parameter (N = 1-10)
   if (query.indexOf('testgroup=') > -1) {
      forced = query.substring(query.indexOf('testgroup=') + 10);
      if (forced.indexOf('&') > -1) {
         forced = forced.substring(0, forced.indexOf('&'));
      }
   }
   if (forced == '1') {
      return 'group1';
   } else if (forced == '2') {
      return 'group2';
   } else if (forced == '3') {
      return 'group3';
   } else if (forced == '4') {
      return 'group4';
   } else if (forced == '5') {
      return 'group5';
   } else if (forced == '6') {
      return 'group6';
   } else if (forced == '7') {
      return 'group7';
   } else if (forced == '8') {
      return 'group8';
   } else if (forced == '9') {
      return 'group9';
   } else if (forced == '10') {
      return 'group10';
   } else if (ran_number <= 9) {
      return 'group1';
   } else if (ran_number <= 19) {
      return 'group2';
   } else if (ran_number <= 29) {
      return 'group3';
   } else if (ran_number <= 39) {
      return 'group4';
   } else if (ran_number <= 49) {
      return 'group5';
   } else if (ran_number <= 59) {
      return 'group6';
   } else if (ran_number <= 69) {
      return 'group7';
   } else if (ran_number <= 79) {
      return 'group8';
   } else if (ran_number <= 89) {
      return 'group9';
   } else {
      return 'group10';
   }
}

Audiences

Before I go into setting up the test, I am going to create my Audiences.  If you are going to be using more than a couple of Audiences in your test, I recommend you adopt this process.  Creating Audiences during the test setup can interrupt the flow of things and if you have them already created, it takes no time at all to add them as needed.

Here is my first Audience – it is my 10% control group that was made possible by the above profile parameter, and it includes all five cities that I am using for this test.  This will be my first Experience in my Experience Targeted Test, which is very important.  For Experience Targeted Tests, visitors are evaluated for Experiences from top to bottom, so had I put my New York Experience first, visitors that should be in my Control group would have ended up in that Experience.

And here is my New York Audience.  Chicago, Dallas, Atlanta, and Los Angeles are set up the same way.

 

Offer Code

Here is an example of the code I used for my test; it is the code for the offer that will display for users in Los Angeles.  I could have used the VEC to build this test, but our carousel is finicky and would have taken too much time to figure out in the VEC, so I went with a form-based activity.  I am old school and prefer Form over VEC.  I do love the easy click tracking as conversion events in the VEC and wish they would add that to form-based testing.  Users should only use the VEC if they are actually using the visual composer; too often I see users select the VEC only to paste in custom code, which adds overhead and is unnecessary.

 

<!-- I use CSS here to suppress the hero from showing -->
<style id="flickersuppression">
#slider {visibility:hidden !important}
</style>
<script>
// small jQuery helper: waits for a selector to exist in the DOM, then runs the callback
(function($){var c=function(s,f){if($(s)[0]){try{f.apply($(s)[0])}catch(e){setTimeout(function(){c(s,f)},1)}}else{setTimeout(function(){c(s,f)},1)}};if($.isReady){setTimeout("c=function(){}",100)}$.fn.elementOnLoad=function(f){c(this.selector,f)}})(jQuery);
// this next line waits for my test content to show up in the DOM, then changes the experience
jQuery('.rsArrowRight > .rsArrowIcn').elementOnLoad(function(){
$(".rsContainer").replaceWith("<div class=\"rsContent\">\n <a href=\"https://webanalyticsdemystif.tt.omtrdc.net/m2/webanalyticsdemystif/ubox/page?mbox=insider&mboxDefault=http%3A%2F%2Fwww.adobeeventsonline.com%2FInsiderTour%2F2018%2F/\"><img class=\"rsImg rsMainSlideImage\" src=\"http://analyticsdemystified.com/wp-content/uploads/2015/02/header-image-services-training-700x400.jpg\" alt=\"feature-image-1\" style=\"width:100%; height: 620px; margin-left: 0px; margin-top: -192px;\"></a>\n \n \n <div class=\"rsSBlock ui-draggable-handle\" style=\"width: auto; height: 600px; left: 40px; top: 317px;\"><h1><strong>Los Angeles! Analytics Demystified is joining Adobe on the Adobe Insider Tour</strong></h1>\n<p style=\"text-align:left;\"><br><br>Thursday, June 21st – iPic Westwood in Los Angeles, CA. </p>\n</div>\n</div>");
$(".rsContainer > div:eq(0) > div:eq(0) > div:eq(0) > p:eq(0)").css({"color":"#000000"});
$(".rsContainer > div:eq(0) > div:eq(0) > div:eq(0) > h1:eq(0)").css({"color":"#000000"});
// hide the carousel navigation and arrows so only the geo-targeted banner shows
$(".rsNav").css({"display":"none", "visibility":""});
$(".rsArrowLeft > .rsArrowIcn").css({"display":"none", "visibility":""});
$(".rsArrowRight > .rsArrowIcn").css({"display":"none", "visibility":""});
$("#login-trigger > img").removeAttr("src").removeAttr("srcdoc");
$("#login-trigger > img").css({"display":"none", "visibility":""});
$(".rsSBlock > h1").append("<div id=\"hawk_cta\">…</div>");
// this next line removes my flicker suppression that I put in place at the top of this code
jQuery('#flickersuppression').remove();
})
// one of the coolest parts of at.js: making click tracking a lot easier!!!
$('#slider').click(function(event){
adobe.target.trackEvent({'mbox':'hero_click'})
});
</script>

Success Events

The success event for this test is clicking on the hero CTA, which brings you to the Adobe page to register for the Insider event.  This CTA click was tracked via a very cool function that you will grow to love as you adopt at.js.

$('#slider').click(function(event){
adobe.target.trackEvent({'mbox':'hero_click'})
});

To use this, you need to be using at.js and then update the two bold sections above: the CSS selector ('#slider') and the mbox name ('hero_click').  You can get the CSS selector in any browser by right-clicking the element and clicking Inspect; in the HTML panel, right-click the element again and copy the selector.  The mbox name is the mbox that will be called when the area gets clicked.  In the test setup, that looks like this:

Segments

Segment adoption within Target seems to vary quite a bit.  I personally find segments a crucial component and recommend that organizations standardize a set of segments that are key to their business and include them with every test.  With Analytics, much time and effort goes into classifying sources (utm parameters), behaviors, key devices, etc., so the same effort should be applied here.  If you use A4T or integrate with Analytics in other ways, this will help with those efforts for many of your tests.  For this test, I can’t use Analytics because the success event is a temporary CTA that was put in place just for this test and I have no Analytics tracking in place to report on it, so the success event lives in Target.

The main segments that are important here are for my Control group.  If you recall, I am consolidating all five cities into Experience A.  To see how any of these cities do in this Experience, I have to define them as segments when they qualify for the activity.  Target makes this a bit easier now vs. the Classic days, as we can repurpose the Audiences that we used in the Experience Targeting.

Also cool now is the ability to add more than one segment at a time!  Classic had this many years back but the feature was taken away.  Having it now leaves organizations with no excuses for not using key segments in your tests!

An important note: you can apply segments to any and all Adobe Target success events used in the test.  For example, if I wanted to segment out visitors that spent over $200 on a revenue success event (or any event other than test entry), I can do that in the “Applied At” dropdown.  Lots of very cool use cases here, but for what I need here, I am going to select “Campaign Entry” (although Adobe should change this to Activity entry:) and I will see how all the visitors from each of these cities did for my Control.

Geo-Targeting

To wrap things up here, I am going to share one last little nugget of gold.  Adobe Target allows users to pass an IP address to a special URL parameter, and Adobe Target will return the Geo Attributes (City, State, DMA, Country, and Zip) for that IP address.  This is very helpful when debugging.  You can see what it would look like below, but clicking on this link will do you no good.  Sadly, there is a bug with some versions of WordPress that changes the “.” in the URL to an underscore, which breaks the link, but this only applies to our site and some other WordPress installs.

https://analyticsdemystified.com/?mboxOverride.browserIp=161.185.160.93

Happy Testing and hopefully see you at one of the Insider events coming up!

 

Adobe Analytics, Featured

100% Stacked Bar Chart in Analysis Workspace

As is often the case with Analysis Workspace (in Adobe Analytics), you stumble upon new features accidentally. Hopefully, by now you have learned the rule of “when in doubt, right-click” when using Analysis Workspace, but for other new features, I recommend reading Adobe’s release notes and subscribing to the Adobe Analytics YouTube Channel. Recently, the ability to use 100% stacked bar charts was added to Analysis Workspace, so I thought I’d give it a spin.

Normal vs. 100% Stacked Bar Charts

Normally, when you use a stacked bar chart, you are comparing raw numbers. For example, here is a sample stacked bar chart that looks at Blog Post Views by Author:

This type of chart allows you to see overall trends in performance over time. In some respects, you can also get a sense of which elements are going up and down over time, but since the totals rise and fall each week, it can be tricky to pin down the exact percentage changes.

For this reason, Adobe has added a 100% stacked bar visualization. This visualization stretches the elements in your chart to 100% and shifts the graph from raw numbers to percentages (of the items being graphed, not all items necessarily). This allows you to more accurately gauge how each element is changing over time.

To enable this, simply click the gear icon of the visualization and check the 100% stacked box:

Once this is done, your chart will look like this:

In addition, if you hover over one of the elements, it will show you the actual percentage:

The 100% stacked setting can be used in any trended stacked bar visualization. For example, here is a super basic example that shows the breakdown of Blog Post Views by mobile operating system:

For more information on using the 100% stacked bar visualization, here is an Adobe video on this topic: https://www.youtube.com/watch?v=_6hzCR1SCxk&t=1s

Adobe Analytics, Featured

Finding Adobe Analytics Components via Tags

When I am working on a project to audit someone’s Adobe Analytics implementation, one of the things I often notice is a lack of organization that surrounds the implementation. When you use Adobe Analytics, there are a lot of “components” that you can customize for your implementation. These components include Segments, Calculated Metrics, Reports, Dashboards, etc. I have some clients that have hundreds of Segments or Calculated Metrics, to the point that finding the one you are looking for can be like searching for a needle in a haystack! Over time, it is so easy to keep creating more and more Adobe Analytics components instead of re-using the ones that already exist. When new, duplicative components are created, things can get very chaotic because:

  • Different users could use different components in reports/dashboards
  • Fixes to a component may only be applied in some places if there are duplicative components floating around out there
  • Multiple components with the same name or definition can confuse novice users

For these reasons, I am a big fan of keeping your Adobe Analytics components under control, which takes some work, but pays dividends in the long run.  A few years ago, I wrote a post about how you can use a “Corporate Login” to help manage key Adobe Analytics components. I still endorse that concept, but today, I will share another technique I have started using to organize components in case you find it helpful.

Searching For Components Doesn’t Work

One reason that components proliferate is because finding the components you are looking for is not foolproof in Adobe Analytics. For example, let’s say that I just implemented some code to track Net Promoter Score in Adobe Analytics. Now, I want to create a Net Promoter Score Calculated Metric so I can trend NPS by day, week or month. To do this, I might go to the Calculated Metrics component screen where I would see all of the Calculated Metrics that exist:

If I have a lot of Calculated Metrics, it could take me a long time to see if this exists, so I might search for the Calculated Metric I want like this:

 

Unfortunately, my search came up empty, so I would likely go ahead and create a new Net Promoter Score Calculated Metric. What I didn’t know is that one already exists; it was just named “NPS Score” instead of “Net Promoter Score.” And since people are not generally good about using standard naming conventions, this scenario can happen often. So how do we fix this? How do we avoid the creation of duplicative components?

Search By Variable

To solve this problem, I have a few ideas. In general, the way I think about components like Calculated Metrics or Segments is that they are made up of other Adobe Analytics elements, specifically variables. Therefore, if I want to see if a Net Promoter Score Calculated Metric already exists, a good place to start would be to look for all Calculated Metrics that use one of the variables that is used to track Net Promoter Score in my implementation. In this case, success event #20 (called NPS Submissions [e20]) is set when any Net Promoter Score survey occurs. Therefore, if I could filter all Calculated Metrics to see only those that utilize success event #20, I would be able to find all Calculated Metrics that relate to Net Promoter Score. Unfortunately, Adobe Analytics only allows you to filter by the following items:

It would be great if Adobe had a way that you could filter on variables (Success Events, eVars, sProps), but that doesn’t exist today. The next best thing would be the ability to have Adobe Analytics find Calculated Metrics (or other components) by variable when you type the variable name in the search box. For example, it would be great if I could enter this in the search box:

But, alas, this doesn’t work either (though it could one day if you vote for my idea in the Adobe Idea Exchange!).

Tagging to the Rescue!

Since there is no good way today to search for components by variable, I have created a workaround that you can use leveraging the tagging feature of Adobe Analytics. What I have started doing is adding a tag for every variable that is used in a Calculated Metric (or Segment). For example, if I am creating a “Net Promoter Score” Calculated Metric that uses success event #20 and success event #21, in addition to any other tags I might want to use, I can tag the Calculated Metric with these variable names as shown here:

Once I do this, I will begin to see variable names appear in the tag list like this:

Next, if I am looking for a specific Calculated Metric, I can simply check one of the variables that I know would be part of the formula…

…and Adobe Analytics will filter the entire list of Calculated Metrics to only show me those that have that variable tag:

This is what I wish Adobe Analytics would do out-of-the-box, but using the tagging feature, you can take matters into your own hands. The only downside is that you need to go through all of your existing components and add these tags, but I would argue that you should be doing that anyway as part of a general clean-up effort and then simply ask people to do this for all new components thereafter.

The same concept can be applied to other Adobe Analytics components that use variables and allow tags. For example, here is a Segment that I have created and tagged based upon variables it contains:

This allows me to filter Segments in the same way:

Therefore, if you want to keep your Adobe Analytics implementation components organized and make them easy for your end-users to find, you can try out this work-around using component tags and maybe even vote for my idea to make this something that isn’t needed in the future. Thanks!

Featured, Testing and Optimization

Adobe Personalization Insider

To my fellow optimizers in or near Atlanta, Los Angeles, Chicago, New York, and Dallas:

I am very excited to share that I am heading your way and hope to see you.  I have the privilege of joining Adobe this year for the Adobe Insider Tour which is now much bigger than ever and has a lot of great stuff for optimizers like you and me.   

If you haven’t heard of it, the Adobe Insider Tour is a free half-day event that Adobe puts together so attendees can network and collaborate with their industry peers.  And it’s an opportunity for all participating experts to keep it real through interactive breakout sessions, some even workshop-style.  Adobe will share some recent product innovations and even some sneaks to what’s coming next.

The Insider Tour has three tracks: the Analytics Insider, the Personalization Insider and, for New York, there will also be an Audience Manager Insider.  If you leverage Adobe to support your testing and personalization efforts, your analysis, or your management of audiences, the interactive breakouts will be perfect for you.  My colleague Adam Greco will be there as well for the Analytics Insider.

Personalization Insider

I am going to be part of the Personalization Insider as I am all things testing, and if you are part of a testing team or want to learn more about testing, the breakout sessions and workshop will be perfect for you.

In true optimization form, get ready to discuss, ideate, hypothesize and share best practices around the following:

  • Automation and machine learning
  • Optimization/Personalization beyond the browser (apps, connected cars, kiosks, etc.)
  • Program ramp and maturity
  • Experience optimization in practice

Experience Business Excellence Awards

There is also something really cool and new this year that is part of the Insider Tour.  Adobe is bringing the Experience Business Excellence (EXBE) Awards to each city.  The EXBE Awards Program was a huge hit at Adobe Summit, as it allows organizations to submit Adobe Target experiences that kicked some serious butt and compete for awards and a free pass to Summit.  I was part of this last year at Summit, where two of my clients won with some awesome examples of using testing to add value to their business and digital consumers.  If you have any interesting use cases or inspirational tests, you should submit them for consideration.

Learn More and Register

If you come early to the event, there will be a “GENIUS BAR” where you can geek out with experts about any questions you might have.  Please come at me with any challenges you might have with test scaling, execution or anything for that matter.  I will be giving a free copy of my book on Adobe Target to the person who brings me the most interesting use case during “GENIUS BAR” hours.

I really hope to see you there, and the events are also being held at some cool venues.

Here are the dates for each city:  

  • Atlanta, GA – June 1st
  • Los Angeles, CA – June 21st
  • Chicago, IL – September 11th
  • New York, NY – September 13th
  • Dallas, TX – September 27th

Click the button below to formally register (required)

(I did something nerdy and fun with this CTA – if anyone figures out exactly what I did here or what it is called, add a comment and let me know:)

Adobe Analytics, Featured

Adobe Insider Tour!

I am excited to announce that my partner Brian Hawkins and I will be joining the Adobe Insider Tour that is hitting several US cities over the next few months! These 100% free events held by Adobe are great opportunities to learn more about Adobe’s Marketing Cloud products (Adobe Analytics, Adobe Target, Adobe Audience Manager). The half-day sessions will provide product-specific tips & tricks, show future product features being worked on and provide practical education on how to maximize your use of Adobe products.

The Adobe Insider Tour will be held in the following cities and locations:

Atlanta – Friday, June 1
Fox Theatre
660 Peachtree St NE
Atlanta, GA 30308

Los Angeles – Thursday, June 21
iPic Westwood
10840 Wilshire Blvd
Los Angeles, CA 90024

Chicago – Tuesday, September 11
Davis Theater
4614 N Lincoln Ave
Chicago, IL 60625

New York – Thursday, September 13
iPic Theaters at Fulton Market
11 Fulton St
New York, NY 10038

Dallas – Thursday, September 27
Alamo Drafthouse
1005 S Lamar St
Dallas, TX 75215

Adobe Analytics Implementation Improv

As many of my blog readers know, I pride myself on pushing Adobe Analytics to the limit! I love to look at websites and “riff” on what could be implemented to increase analytics capabilities. On the Adobe Insider Tour, I am going to try and take this to the next level with what we are calling Adobe Analytics Implementation Improv. At the beginning of the day, we will pick a few companies in the audience and I will review the site and share some cool, advanced things that I think they should implement in Adobe Analytics. These suggestions will be based upon the hundreds of Adobe Analytics implementations I have done in the past, but this time it will be done live, with no preparation and no rehearsal! But in the process, you will get to see how you can quickly add some real-world, practical new things to your implementation when you get back to the office!

Adobe Analytics “Ask Me Anything” Session

After the “Improv” session, I will have an “Ask Me Anything” session where I will do my best to answer any questions you may have related to Adobe Analytics. This is your chance to get some free consulting and pick my brain about any Adobe Analytics topic. I will also be available prior to the event at Adobe’s “Genius Bar” providing 1:1 help.

Adobe Analytics Idol

As many of you may know, for the past few years, Adobe has hosted an Adobe Analytics Idol contest. This is an opportunity for you to share something cool that you are doing with Adobe Analytics or some cool tip or trick that has helped you. Over the years this has become very popular and now Adobe is even offering a free pass to the next Adobe Summit for the winner! So if you want to be a candidate for the Adobe Analytics Idol, you can now submit your name and tip and present at your local event. If you are a bit hesitant to submit a tip, this year, Adobe is adding a cool new aspect to the Adobe Analytics Idol. If you have a general idea, but need some help, you can reach out, and either I or one of the amazing Adobe Analytics product managers will help you formulate your idea and bring it to fruition. So even if you are a bit nervous to be an “Idol” you can get help and increase your chances of winning!

There will also be time at these events for more questions and casual networking, so I encourage you to register now and hope to see you at one of these events!

Adobe Analytics, Featured

Elsevier Case Study

I have been in consulting for a large portion of my professional life, starting right out of school at Arthur Andersen (back when it existed!). Therefore, I have been part of countless consulting engagements over the past twenty-five years. During this time, there are a few projects that stand out. Those that seemed daunting at first, but in the end turned out to make a real difference. Those large, super-difficult projects are the ones that tend to stick with you.

A few years ago, I came across one of these large projects at a company called Elsevier. Elsevier is a massive organization, with thousands of employees and key locations all across Europe and North America. But what differentiates Elsevier the most is how disparate a lot of their business units can be – from geology to chemistry, etc. When I stumbled upon Elsevier, they were struggling to figure out how to have a unified approach to implementing Adobe Analytics worldwide in a way that helped them see some key top-line metrics while at the same time offering each business unit its own flexibility where needed. This is something I see a lot of large organizations struggle with when it comes to Adobe Analytics. Since over my career I have worked with some of the largest Adobe Analytics implementations in the world, I was excited to apply what I have learned to tackle this super-complex project. I was also fortunate to have Josh West, one of the best Adobe Analytics implementation folks in the world, as my partner; he was able to work with me and Elsevier to turn our vision into a reality.

While the project took some time and had many bumps along the way, Elsevier heeded our advice and ended up with an Adobe Analytics program that transformed their business. They provided tremendous support from the top (thanks to Darren Person!) and Adobe Analytics became a huge success for the organization.  To learn more about this, I suggest you check out this case study here.

In addition, if you want to hear Darren and me talk about the project while we were still in the midst of it, you can see a presentation we did at the 2016 Adobe Summit (free registration required) by clicking here.

Adobe Analytics, Featured

DB Vista – Bringing the Sexy Back!

OK. It may be a bit of a stretch to say that DB Vista is sexy. But I continue to discover that very few Adobe Analytics clients have used DB Vista or even know what it is. As I wrote in my old blog back in 2008 (minus the images, which Adobe seems to have lost!), DB Vista is a method of setting Adobe Analytics variables using a rule that does a database lookup on a table that you upload (via FTP) to Adobe. In my original blog post, I mentioned how you can use DB Vista to import the cost of each product into a currency success event, so you can combine it with revenue to calculate product margin. This is done by uploading your product information (including cost) to the DB Vista table and having a DB Vista rule look up the value passed to the Products variable and match it to the column in the table that stores the current product cost.  As long as you are diligent about keeping your product cost table updated, DB Vista will do the rest.  The reason I wanted to bring the topic of DB Vista back is that it has come up more and more over the past few weeks. In this post, I will share where it has come up and a few reasons why I keep talking about it.
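To make the margin idea concrete, here is the relationship the post alludes to, written out in my own shorthand (assuming the DB Vista rule adds the cost of the purchased units to the cost currency event):

$$\text{Product Margin} = \text{Revenue} - \text{Product Cost (currency event)}$$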

Adobe Summit Presentation

A few weeks ago, while presenting at Adobe Summit, I showed an example where a company was [incorrectly] using SAINT Classifications to classify product ID’s with the product cost like this:

As I described in this post, SAINT Classifications are not ideal for something like Product Cost because the cost of each product will change over time and updating the SAINT file is a retroactive change that will make it look like each product ALWAYS had the most recently uploaded cost.  In the past, this could be mitigated by using date-enabled SAINT Classifications, but those have recently been removed from the product, I presume due to the fact they weren’t used very often and were overly complex.

However, if you want to capture the cost of each product, as mentioned above, you could use DB Vista to pass the cost to a currency success event and/or you could capture the cost in an eVar. Unlike SAINT, using DB Vista to get the cost means that the data is locked in at the time it is collected.  All that is needed is a mechanism to keep your product cost data updated in the DB Vista table.

Measure Slack

Another case where DB Vista arose recently was in the #Measure Slack group. There was a discussion around using classifications to group products, but the product group was not available in real time to be passed to an eVar and the product group could change over time.

The challenge in this situation is that SAINT classifications would not be able to keep all of this straight without the use of date-enabled classifications. This is another situation where DB Vista can save the day, as long as you are able to keep the product table updated as products move groups.  In this case, all you’d need to do is upload the product group to the DB Vista table and use the DB Vista rule to grab the value and pass it to an eVar whenever the Products variable is set.

Idea Exchange

There are countless other things that you can do with DB Vista. So why don’t people use it more? I think it has to do with the following reasons:

  • Most people don’t understand the inner workings of DB Vista (hint: come to my upcoming  “Top Gun” Training Class!)
  • DB Vista has an additional cost (though it is pretty nominal)
  • DB Vista isn’t something you can do on your own – you need to engage with Adobe Engineering Services

Therefore, I wish that Adobe would consider making DB Vista something that administrators could do on their own through the Admin Console and Processing Rules (or via Launch!). Recently, Data Feeds was made self-service and I think it has been a huge success! More people than ever are using Data Feeds, which used to cost $$ and have to go through Adobe Engineering Services. I think the same would be true for DB Vista. If you agree, please vote for my idea here. Together, we can make DB Vista the sexy feature it deserves to be!

Adobe Analytics, Featured

Virtual Report Suites and Data Sources

Lately, I have been seeing more and more Adobe Analytics clients moving to Virtual Report Suites. Virtual Report Suites are data sets that you create from a base Adobe Analytics report suite that differ from the original by either limiting data by a segment or making other changes to it, such as changing the visit length. Virtual Report Suites are handy because they are free, whereas sending data to multiple report suites in Adobe Analytics costs more due to increased server calls. The Virtual Report Suite feature of Adobe Analytics has matured since I originally wrote about it back in 2016. If you are not using them, you probably should be by now.

However, when some of my clients have used Virtual Report Suites, I have noticed that there are some data elements that tend not to transition from the main report suite to the Virtual Report Suite. One of those items is data imported via Data Sources. In last week’s post, I shared an example of how you can import external metrics to your Adobe Analytics implementation via Data Sources, but there are many data points that can be imported, including metrics from 3rd party apps. One of the more common types of 3rd party apps that my clients integrate into Adobe Analytics is e-mail applications. For example, if your organization uses Responsys to send and report on e-mails sent to customers, you may want to use the established Data Connector that allows you to import your e-mail metrics into Adobe Analytics, such as:

  • Email Total Bounces
  • Email Sent
  • Email Delivered
  • Email Clicked
  • Email Opened
  • Email Unsubscribed

Once you import these metrics into Adobe Analytics, you can see them like any other metrics…

…and combine them with other metrics:

In this case, I am viewing the offline e-mail metrics alongside the online metric of Orders, and I have also created a new Calculated Metric that combines both offline and online metrics (last column). So far so good!

But watch what happens if I now view the same report in a “UK Only” Virtual Report Suite that is based off of this main report suite:

Uh oh…I just lost all of my data! I see this happen all of the time and usually my clients don’t even realize that they have told their internal users to use a Virtual Report Suite that is missing all Data Source metrics.

So why is the data missing? In this case the Virtual Report Suite is based upon a geographic region segment:

This means that any hits with an eVar16 value of “UK” will make it into the Virtual Report Suite. Since all online data has an eVar16 value, it is successfully carried over to the Virtual Report Suite.  However, when the Data Sources metrics were imported (in this case Responsys E-mail Metrics), they did not have an eVar16 value, so they are not included. That is why these metrics zeroed out when I ran the report for the Virtual Report Suite. In the next section, I will explain how to fix this so you can make sure all of your Data Source metrics are included in the Virtual Report Suite.

Long-Term Approach (Data Sources File)

The best long-term way to fix this problem is to change your Data Sources import files to make sure that you add data that will match your Virtual Report Suite segment. In this case, that means making sure each row of data imported has an eVar16 value. If you add a column for eVar16 to the import, any rows that contain “UK” will be included in the Virtual Report Suite. For this e-mail data, it means that your e-mail team would have to know which region each e-mail is associated with, but that shouldn’t be a problem. Unfortunately, it does require a change to your daily import process, but this is the cleanest way to make sure your Data Sources data flows correctly to your Virtual Report Suite.

Short-Term Approach (Segmentation)

If, however, making a change to your daily import process isn’t something that can happen soon (such as data being imported from an internal database that takes time to change), there is an easy workaround that will allow you to get Data Sources data immediately. This approach is also useful if you want to retroactively include Data Sources metrics that were imported before you make the preceding fix.

This short-term solution involves modifying the Segment used to pull data into the Virtual Report Suite. By adding additional criteria to your Segment definition, you can manually select which data appears in the Virtual Report Suite. In this case, the Responsys e-mail metrics don’t have an eVar16 value, but you can add them to the Virtual Report Suite by finding another creative way to include them in the segment. For example, you can add an OR statement that includes hits where the various Responsys metrics exist like this:

Once you save this new segment, your Virtual Report Suite will now include all of the data it had before and the Responsys data so the report will now look like this:

Summary

So this post is just a reminder to make sure that all of your imported Data Source metrics have made it into your shiny new Virtual Report Suites and, if not, how you can get them to show up there. I highly suggest you fix the issue at the source (Data Sources import file), but the segmentation approach will also work and helps you see data retroactively.

Adobe Analytics, Featured

Dimension Penetration %

Last week, I explained how the Approximate Count Distinct function in Adobe Analytics can be used to see how many distinct dimension values occur within a specified timeframe. In that post, I showed how you could see how many different products or campaign codes are viewed without having to count up rows manually and how the function provided by Adobe can then be used in other Calculated Metrics. As a follow-on to that post, in this post, I am going to share a concept that I call “dimension penetration %.” The idea of dimension penetration % is that there may be times in which you want to see what % of all possible dimension values are viewed or have some other action taken. For example, you may want to see what % of all products available on your website were added to the shopping cart this month. The goal here is to identify the maximum number of dimension values (for a time period) and compare that to the number of dimension values that were acted upon (in the same time period). Here are just some of the business questions that you might want to answer with the concept of dimension penetration %:

  • What % of available products are being viewed, added to cart, etc…?
  • What % of available documents are being downloaded?
  • What % of BOPIS products are picked up in store?
  • What % of all campaign codes are being clicked?
  • What % of all content items are viewed?
  • What % of available videos are viewed?
  • What % of all blog posts are viewed?

As you can see, there are many possibilities, depending upon the goals of your digital property. However, Adobe Analytics (and other digital analytics tools), only capture data for items that get “hits” in the date range you select. They are not clairvoyant and able to figure out the total sum of available items. For example, if you wanted to see what % of all campaign tracking codes had at least one click this month, Adobe Analytics can show you how many had at least one click, but it has no way of determining what the denominator should be, which is the total number of campaign codes you have purchased. If there are 1,000 campaign codes that never receive a click in the selected timeframe, as far as Adobe Analytics is concerned, they don’t exist. However, the following will share some ways that you can rectify this problem and calculate the penetration % for any Adobe Analytics dimension.

Calculating Dimension Penetration %

To calculate the dimension penetration %, you need to use the following formula:
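In case it helps, here is the relationship written out (my restatement of the description in the next paragraph, not an Adobe-defined formula):

$$\text{Dimension Penetration \%} = \frac{\text{Approx. Count Distinct of dimension values acted upon in the period}}{\text{Total number of dimension values available in the same period}}$$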

For example, if you wanted to see what % of all blog posts available have had at least one view this month, you would calculate this by dividing the unique count of viewed blog posts by the total number of blog posts that could have been viewed. To illustrate this, let’s go through a real scenario. Based upon what was learned in the preceding post, you now know that it is easy to determine the numerator (how many unique blog posts were viewed) as long as you are capturing the blog post title or ID in an Adobe Analytics dimension (eVar or sProp). This can be done using the Approximate Count Distinct function like this:

Once this new Calculated Metric has been created, you can see how many distinct blog posts are viewed each day, week, month, etc…

So far, so good! You now have the numerator of the dimension penetration % formula completed.  Unfortunately, that was the easy part!

Next, you have to figure out a way to get the denominator. This is a bit more difficult and I will share a few different ways to achieve this. Unfortunately, finding out how many dimension values exist (in this scenario, total # of available blog posts), is a manual effort. Whether you are trying to identify the total number of blog posts, videos, campaign codes, etc. you will probably have to work with someone at your company to figure out that number. Once you find that number, there are two ways that you can use it to calculate your dimension penetration %.

Adobe ReportBuilder Method

The first approach is to add the daily total count of the dimension you care about to an Excel spreadsheet and then use Adobe ReportBuilder to import the Approximate Count Distinct Calculated Metric created above by date. By importing the Approximate Count Distinct metric by date and lining it up with your total numbers by date, you can easily divide the two and compute the dimension penetration % as shown here:

In this case, the items with a green background were entered manually and mixed with an Adobe Analytics data block. Formulas were then added to compute the percentages.

However, you have to be careful not to SUM the daily Approximate Count numbers, since the sum will be different from the Approximate Count of the entire month (the same blog post can be viewed on many different days, so summing the daily distinct counts double-counts it). To see an accurate count of unique blog posts viewed in the month of April, for example, you would need to create a separate data block like this:

Data Sources Method

The downside of the Adobe ReportBuilder method is that you have to leave Adobe Analytics proper and cannot take advantage of its web-based features like Dashboards, Analysis Workspace, Alerts, etc. Plus, it is more difficult to share the data with your other users. If you want to keep your users within the Adobe Analytics interface, you can use Data Sources. Shockingly, Data Sources has not changed that much since I blogged about it back in 2009! Data Sources is a mechanism to import metrics that don’t take place online into Adobe Analytics. It can be used to upload any number you want as long as you can tie that number to a date. In this case, you can use Data Sources to import the total number of dimension items that exist on each day.

To do this, you need to use the administration console to create a new Data Source. There is a wizard that walks you through the steps needed, which include creating a new numeric success event that will store your data. The wizard won’t let you complete the process unless you add at least one eVar, but you can remove that from the template later, so just pick any one if you don’t plan to upload numbers with eVar values. In this case, I used Blog Post Author (eVar3) in case I wanted to break out Total Blog Posts by Author. Here is what the wizard should look like when you are done:

Once this is complete, you can download your template and create an FTP folder to which you will upload files. Next, you will create your upload file that has date and the total number of blog posts for each date. Again, you will be responsible for identifying these numbers. Here is what a sample upload file might look like using the template provided by Adobe Analytics:

Next, you upload your data via FTP (you can read how to do this by clicking here). A few important things to note are that you cannot upload more than 90 days of data at one time, so you may have to upload your historical numbers in batches. You also cannot upload data for dates in the future, so my suggestion would be to upload all of your historical data and then upload one row of data (yesterday’s count) each day in an automated FTP process. When your data has successfully imported, you will see the numbers appear in Adobe Analytics just like any other metrics (see below). This new Count of Blog Posts metric can also be used in Analysis Workspace.

Now that you have the Count of Blog Posts that have been viewed for each day and the count of Total Blog Posts available for each day, you can [finally] create a Calculated Metric that divides these two metrics to see your daily penetration %:

This will produce a report that looks like this:

However, this report will not work if you change it to view the data by something other than day, since the Count of Blog Posts [e8] metric is not meant to be summed (as mentioned in the ReportBuilder method). If you do change it to report by week, you will see this:

This is obviously incorrect. The first column is correct, but the second column is drastically overstating the number of available blog posts! This is something you have to be mindful of in this type of analysis. If you want to see dimension penetration % by week or month, you would have to do some additional work. Let’s look at how you can view this data by week (special thanks to Urs Boller, who helped me with this workaround!). One method is to identify how many dimension items existed yesterday and use that as the denominator. Unfortunately, this can be problematic if you are looking at a long timeframe and if many additional items have been added. But if you want to use this approach, you can create this new Calculated Metric to see yesterday’s # of blog posts:

Which produces this report:

As you can see, this approach treats yesterday’s total number as the denominator for all weeks, but if you look above, you will see that the first week only had 1,155 posts, not 1,162. You could make this more precise by adding an IF statement to the Calculated Metric and using a weekly number or, if you are crazy, adding 31 IF statements to grab the exact number for each date.

The other approach you can take is to simply divide the incorrect summed Count of Blog Posts [e8] metric by 7 for week and 30 for month. This will give you an average number of blog posts that existed and will look like this:

This approach has pretty similar penetration % numbers as the other approach and will work best if you use full weeks or full months (in this case, I started with the first full week in January).
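To make the arithmetic concrete, here is a rough illustration using round numbers in the same ballpark as the report above (not the actual figures): if roughly 1,160 blog posts exist on each day of a full week, the summed weekly metric shows about

$$7 \times 1{,}160 = 8{,}120 \qquad\text{and}\qquad \frac{8{,}120}{7} \approx 1{,}160$$

so dividing by 7 gets you back to approximately the number of posts that were actually available that week.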

Automated Method (Advanced)

If you decide that finding out the total # of items for each dimension is too complicated (or if you are just too busy or lazy to find it!), here is an automated approach to estimate this information. However, this approach will not be 100% accurate and can only be used for dimension items that will be persistent on your site from the day they are added. For example, you cannot use the following approach to identify the total # of campaign codes, since they come and go regularly.  But you can use the following approach to estimate the total # of values for items that, once added, will probably remain, like files, content items or blog posts (as in this example).

Here is the approach. Step one is to create a date range that spans all of your analytics data like this:

You will also want to create another Date Range for the time period you want to see for recent activity. In this case, I created one for the Current Month To Date.

Next, create Segments for both of these Date Ranges (All Dates & Current month to Date):

Next, create a new Calculated Metric that divides the Current Month Approximate Count Distinct of Blog Posts by the All Dates Approximate Count Distinct of Blog Posts:
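In formula terms, the metric is simply the following (restating the description above):

$$\text{Estimated Penetration \%} = \frac{\text{Approx. Count Distinct}(\text{Blog Post})\ \text{within the Current Month to Date segment}}{\text{Approx. Count Distinct}(\text{Blog Post})\ \text{within the All Dates segment}}$$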

Lastly, create a report like this in Analysis Workspace:

By doing this, you are letting Adobe Analytics tell you how many dimension items you have (# of total blog posts in this case) by seeing the Approximate Count Distinct over all of your dates. The theory being that over a large timeframe all (or most) of your dimension items will be viewed at least once. In this case, Adobe Analytics has found 1,216 blog posts that have received at least one view since 1/1/16. As I stated earlier, this may not be exact, since there may be dimension items that are never viewed, but this approach allows you to calculate dimension penetration % in a semi-automated manner.

Lastly, if you wanted to adjust this to look at a different time period, you would drag over a different date range container on the first column and then have to make another copy of the 3rd column that uses the same date range as shown in the bottom table:

Adobe Analytics, Featured

Approximate Count Distinct Function – Part 1

In Adobe Analytics, there are many advanced functions that can be used in Calculated Metrics. Most of the clients I work with have only scratched the surface of what can be done with these advanced functions. In this post, I want to spend some time discussing the Approximate Count Distinct function in Adobe Analytics and in my next post, I will build upon this one to show some ways you can take this function to the next level!

There are many times when you want to know how many rows of data exist for an eVar or sProp (dimension) value. Here are a few common examples:

  • How many distinct pages were viewed this month?
  • How many of our products were viewed this month?
  • How many of our blog posts were viewed this month?
  • How many of our campaign tracking codes generated visits this month?

As you can see, the possibilities are boundless. But the overall gist is that you want to see a count of unique values for a specified timeframe. Unfortunately, there has traditionally not been a great way to see this in Adobe Analytics. I am ashamed to admit that my main way to see this has always been to open the dimension report, scroll down to the area that lets you go to page 2, 3, 4 of the results, enter 50,000 to go to the last page of results, see the bottom row number, and write it down on a piece of paper! Not exactly what you’d expect from a world-class analytics tool! It is a bit easier if you use Analysis Workspace, since you can see the total number of rows here:

To address this, Adobe added the Approximate Count Distinct function, which allows you to pick a dimension and will calculate the number of unique values for the chosen timeframe. While the function isn’t exact, it is designed to be no more than 5% off, which is good enough for most analyses. To understand this function, let’s look at an example. Let’s imagine that you work for an online retailer and you sell a lot of products. Your team would like to know how many of these products are viewed at least once in the timeframe of your choosing. To do this, you would simply create a new Calculated Metric in which you drag over the Approximate Count Distinct function and then select the dimension (eVar or sProp) that you are interested in, which in this case is Products:

Once you save this Calculated Metric, it will be like all of your other metrics in Adobe Analytics. You can trend it and use it in combination with other metrics. Here is what it might look like in Analysis Workspace:

Here you can see the number of distinct products visitors viewed by day for the month of April. I have included a Visits column to provide some perspective, and I have also added a new Calculated Metric that divides the distinct count of products by Visits and used conditional formatting to help visualize the data. Here is the formula for the third column:
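In other words, the third column is simply:

Approximate Count Distinct (Products) ÷ Visits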

The same process can be used with any dimension you are interested in within your implementation (i.e. blog posts, campaign codes, etc.).

Combining Distinct Counts With Other Dimensions

While the preceding information is useful, there is another way to use the Approximate Count Distinct function that I think is really exciting. Imagine that you are in a meeting and your boss asks how many different products each of your marketing campaigns gets people to view. For example, does campaign X get people to view 20 products while campaign Y gets people to view 50 products? For each visit from each campaign, how many products are viewed? Which of your campaigns gets people to view the most products? You get the gist…

To see this, what you really want to do is use the newly created Approximate Count of Products metric in your Tracking Code or other campaign reports. The good news is that you can do that in Adobe Analytics. All you need to do is open one of your campaign reports and add the Calculated Metric we created above to the report like this:

Here you can see that I am showing how many click-throughs and visits each campaign code received in the chosen timeframe. Next, I am showing the Approximate Count of Products for each campaign code and also dividing this by Visits. Just for fun, I also added how many Orders each campaign code generated and divided that by the Approximate Count of Products to see what portion of products viewed from each campaign code were purchased.

You can also view this data by any of your SAINT Classifications. In this case, if you have your campaign Tracking Codes classified by Campaign Name, you can create the same report for Campaign Name:

In this case, you can see that, for example, the VanityURL Campaign generated 19,727 Visits and 15,599 unique products viewed.

At this point, if you are like me you are saying to yourself: “Does this really work?  That seems to be impossible…” I was very suspicious myself, so if you don’t really believe that this function works (especially with classifications), here is a method that Jen Lasser from Adobe told me you can use to check things out:

  1. Open up the report of the dimension for which you are getting Approximate Distinct Counts (in this case Products)
  2. Create a segment that isolates visits for one of the rows (in the preceding example, let’s use Campaign Name = VanityURL)
  3. Add this new segment to the report you opened in step 1 (in this case Products) and use the Instances metric (which in this case is Product Views)
  4. Look at the number of rows in Analysis Workspace (as shown earlier in this post) or use the report page links at the bottom to go to the last page of results and check the row number (if using the old reports interface), as shown here:

Here you can see that our value in the initial report for “VanityURL” was 15,599 and the largest row number was 15,101, which puts the value in the classification report about 3% off.

Conclusion

As you can see, the use of the Approximate Count Distinct function (link to Adobe help for more info) can add many new possibilities to your analyses in Adobe Analytics. Here, I have shown just a few examples, but depending upon your business and site objectives, there are many ways you can exploit this function to your advantage. In my next post, I will take this one step further and show you how to calculate dimension penetration, or what % of all of your values received at least one view over a specified timeframe.

Adobe Analytics, Featured

Chicago Adobe Analytics “Top Gun” Class – May 24, 2018

I am pleased to announce my next Adobe Analytics “Top Gun” class, which will be held May 24th in Chicago.

For those of you unfamiliar with my Adobe Analytics “Top Gun” class, it is a one-day crash course on how Adobe Analytics works behind the scenes based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate everyday business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects of Adobe Analytics, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Here are some quotes from recent class attendees:

I have purposefully planned this class for a time of year when Chicago often has nice weather, in case you want to spend the weekend! There is also a Cubs day game the following day!

To register for the class, click here. If you have any questions, please e-mail me. I hope to see you there!

Adobe Analytics, Featured, Tag Management, Technical/Implementation

A Coder’s Paradise: Notes from the Tech Track at Adobe Summit 2018

Last week I attended my 11th Adobe Summit – a number that seems hard to believe. At my first Summit back in 2008, the Great Recession was just starting, but companies were already cutting back on expenses like conferences – just as Omniture moved Summit from the Grand America to the Salt Palace (they moved it back in 2009 for a few more years). Now, the event has outgrown Salt Lake City – with over 13,000 attendees last week converging on Las Vegas for an event with a much larger footprint than just the digital analytics industry.

With the sheer size of the event and the wide variety of products now included in Adobe’s Marketing and Experience Clouds, it can be difficult to find the right sessions – but I managed to attend some great labs, and wanted to share some of what I learned. I’ll get to Adobe Launch, which was again under the spotlight – only this year, it’s actually available for customers to use. But I’m going to start with some of the other things that impressed me throughout the week. There’s a technical bent to all of this – so if you’re looking for takeaways more suited for analysts, I’m sure some of my fellow partners at Demystified (as well as lots of others out there) will have thoughts to share. But I’m a developer at heart, so that’s what I’ll be emphasizing.

Adobe Target Standard

Because Brian Hawkins is such an optimization wizard, I don’t spend as much time with Target as I used to, and this was my first chance to do much with Target Standard besides deploy the at.js library and the global mbox. But I attended a lab that worked through deploying it via Launch, then setting up some targeting on a single-page ReactJS application. My main takeaway is that Target Standard is far better suited to running an optimization program on a single-page application than Classic ever was. I used to have to utilize nested mboxes and all sorts of DOM trickery to delay content from showing until the right moment. But with Launch, you can easily listen for page updates and then trigger mboxes accordingly.

Target Standard and Launch also make it easier to handle a common issue with frameworks like ReactJS, where the data layer is asynchronously populated with data from API calls – so you can run a campaign on initial page load even if it takes some time for all the relevant targeting data to be available.
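To make that concrete, here is a minimal sketch (my own illustration, not the lab code) of what that pattern can look like once at.js and the global mbox are deployed. The dataLayerReady promise and the membershipLevel field are hypothetical stand-ins for however your application signals that its API calls have finished populating the data layer:

// Hypothetical promise that resolves once the React app has populated the data layer
window.dataLayerReady.then(function (dataLayer) {
  // Only request an offer once the targeting data actually exists
  adobe.target.getOffer({
    mbox: "target-global-mbox",
    params: { "profile.membershipLevel": dataLayer.membershipLevel }, // hypothetical data layer field
    success: function (offer) {
      adobe.target.applyOffer({ mbox: "target-global-mbox", offer: offer });
    },
    error: function (status, error) {
      // Fail gracefully and leave default content in place
      console.log("Target request failed:", status, error);
    }
  });
});

The same listener approach works for subsequent view changes in the single-page app, which is exactly the scenario that used to require nested mboxes and DOM trickery.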

Adobe Analytics APIs

The initial version of the Omniture API was perhaps the most challenging API I’ve ever used. It supported SOAP only, and from authentication to query, you had to configure everything absolutely perfectly for it to work. And you had to do it with no API Explorer and virtually no documentation, all while paying very close attention to the number of requests you were making, since you only had 2,000 tokens per month and didn’t want to run out or get charged for more (I’m not aware this ever happened, but the threat at least felt real!).

Adobe adding REST API support a few years later was a career-changing event for me, and there have been several enhancements and improvements since, like the addition of OAuth authentication support. But what I saw last week was pretty impressive nonetheless. The approach to querying data has changed significantly in the following ways (a rough sketch of a request follows the list):

  • The next iteration of Adobe’s APIs will offer a much more REST-ful approach to interacting with the platform.
  • Polling for completed reports is no longer required. It will likely take several more requests to get to the most complicated reports, but each individual request will run much faster.
  • Because Analytics Workspace is built on top of a non-public version of the API, you truly will be able to access any report you can find in the UI.
  • The request format for each report has been simplified, with non-essential parameters either removed or at least made optional.
  • The architecture of a report request is fundamentally different in some ways – especially in the way that breakdowns between reports work.
  • The ability to search or filter on reports is far more robust than in earlier versions of the API.
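To give a feel for the direction, here is an illustrative sketch of the general shape of a report request in the newer API. This is my own simplified example rather than official documentation, and the report suite ID, dimension, and metric IDs are placeholders – but conceptually, a single JSON body describing the report is POSTed to a reports endpoint:

{
  "rsid": "myreportsuite",
  "globalFilters": [
    { "type": "dateRange", "dateRange": "2018-03-01T00:00:00/2018-03-31T23:59:59" }
  ],
  "metricContainer": {
    "metrics": [ { "id": "metrics/visits" } ]
  },
  "dimension": "variables/page"
}

The report comes back in the response to that same request (no separate report ID to queue and poll for), which is a big part of why each individual call feels so much faster.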

Launch by Adobe

While Launch has been available for a few months, I’ve found it more challenging than I expected to talk my clients into migrating from DTM to Launch. The “lottery” system made some of my clients wonder if Launch was really ready for prime-time, while the inability to quickly migrate an existing DTM implementation over to Launch has been prohibitive to others. But whatever the case may be, I’ve only started spending a significant amount of time in Launch in the last month or so. For customers who were able to attend labs or demos on Launch at Summit, I suspect that will quickly change – because the feature set is just so much better than with DTM.

How Launch Differs from DTM

My biggest complaint about DTM has always been that it hasn’t matched the rest of the Marketing Cloud in terms of enterprise-class features. From the limited number of available integrations to the rigid staging/production publishing structure, I’ve repeatedly run into issues where it was hard to make DTM work the way I needed for some of my larger clients. Along the way, Adobe has repeatedly said they understood these limitations and were working to address them. And Launch does that – it seems fairly obvious now that the reason DTM lagged in offering features other systems had is that Adobe has been putting far more resources into Launch over the past few years. It opens up the platform in some really unique ways that DTM never did:

  • You can set up as many environments as you want.
  • Minification of JavaScript files is now standard (it’s still hard to believe this wasn’t the case with DTM).
  • Anyone can write extensions to enhance the functionality and features available.
  • The user(s) in charge of Launch administration for your company have much more granular control over what is eventually pushed to your production website.
  • The Launch platform will eventually offer open APIs to allow you to customize your company’s Launch experience in virtually any way you need.

With Great Power Comes Great Responsibility

Launch offers a pretty amazing amount of control, which makes for some major considerations for each company that implements it. For example, the publishing workflow is flexible to the point of being a bit confusing. Because it’s set up almost like a version control system such as Git, any Launch user can set up his or her own development environment and configure it in any number of ways. This means each user then has to choose which version of every single asset to include in a library, promote to staging/production, etc. So you have to be a lot more careful than when you’re publishing with DTM.

I would hope we’ve reached a point in tag management where companies no longer expect a marketer to be able to own tagging and the TMS – it was the sales pitch made from the beginning, but the truth is that it has never been that easy. Even Tealium, which (in my opinion) has the most user-friendly interface and the most marketer-friendly features, needs at least one good developer to tap into the whole power of the tool. Launch will be no different; as the extension library grows and more integrations are offered, marketers will probably feel more comfortable making changes than they were with DTM – but this will likely be the exception and not the rule.

Just One Complaint

If there is one thing that will slow migration from DTM to Launch, it is the difficulty customers will face in migrating. One of the promises Adobe made about Launch at Summit in 2017 was that you would be able to migrate from DTM to Launch without updating the embed code on your site. This is technically true – you can configure Launch to publish your production environment to an old DTM production publishing target. But this can only be done for production, and not any other environment – which means you can migrate without updating your production embed code, but you will need to update all your non-production embed codes. Alternatively, you can use a tool like DTM Switch or Charles Proxy, and that will work fine for your initial testing. But most enterprise companies want to accumulate a few weeks of test data for all the traffic on at least one QA site before they are comfortable deploying changes to production.

It’s important to point out that, even if you do choose to migrate by publishing your Launch configuration to your old production DTM publishing target, you still have to migrate everything currently in DTM over to Launch – manually. Adobe has said that later this year they will release a true migration tool that will allow customers to pull rules, data elements, and tags from a DTM property into a new Launch property without causing errors. Until such a tool exists, some customers will have to invest quite a bit to migrate everything they currently have in DTM over to Launch. In the meantime, my recommendation is to figure out the best migration approach for your company:

  1. If you have at least one rockstar analytics developer with some bandwidth, and a manageable set of rules and tags in DTM, I’d start playing around with migration in one of your development environments, and put together an actual migration plan.
  2. If you don’t have the resources yet, I’d probably wait for the migration tool to be available later in the year – but still start experimenting with Launch on smaller sites or as more resources become available.

Either way, for some of my clients that have let their DTM implementations get pretty unwieldy, moving from DTM to Launch offers a fresh start and a chance to upgrade to Adobe’s latest technology. No matter which of these two situations you’re in, I’d start thinking now (if you haven’t already) about how you’re going to get your DTM properties migrated to Launch. It is superior to DTM in nearly every way, and it is going to get nearly all of the development resources and roadmap attention from Adobe from here on out. You don’t need to start tomorrow – and if you need to wait for a migration tool, you’ll be fine. But if your long-term plan is to stay with DTM, you’re likely going to limit your ability in the future to tap into additional features, integrations and enhancements Adobe makes across its Marketing and Experience Cloud products.

Conclusion

We’ve come a long way from the first Summits I attended, with only a few labs and very little emphasis on the technology itself. Whether it was new APIs, new product feature announcements, or the hands-on labs, there was a wealth of great information shared at Summit 2018 for developers and implementation-minded folks like me – and hopefully you’re as excited as I am to get your hands on some of these great new products and features.

Photo Credit: Roberto Faccenda (Flickr)

Conferences/Community, Featured, Testing and Optimization

2018 Adobe Summit – the testing guys perspective

The 2018 Adobe Summit season has officially closed.  This year marked my 11th Summit; my first was back in 2008, when Omniture acquired Offermatica, where I was an employee at the time.  I continue to attend Summit for a variety of reasons, but I especially enjoy spending time with some of my clients and catching up with many old friends.  I also enjoy geeking out hardcore with the product and product marketing teams.

While I still very much miss the intimacy and the Friday ski day that Salt Lake City offered, I am warming to Las Vegas much more than I had anticipated.  I got the sense that others were as well.  I also just learned that after Summit this year, quite a few folks created their own Friday Funday, if you will (totally down for Friday Motorcycle day next year!). The conference is bigger than ever, with reported attendee numbers around 13,000.  The topics, or Adobe products, covered have grown quite a bit too.  I am not sure if I got the whole list, but here are the products and topics I saw covered at Summit:

  • Advertising Cloud
  • Analytics
  • Audience Manager
  • Campaign
  • Cloud Platform
  • Experience Manager
  • Primetime
  • Sensei
  • Target

My world of testing mainly lives in Adobe Target and Adobe Analytics and, to varying degrees, Adobe Audience Manager, Adobe Experience Manager, and Adobe Launch.  It was cool to see and learn more about these other solutions, but there was plenty in my testing and personalization world to keep me busy.  I think I counted 31 full sessions and about 7 hands-on labs for testing.  Here is a great write-up of this year’s personalization sessions, broken down by category, which I found very helpful.

The conference hotel and venue are quite nice and, given their size, make hosting 13,000 people feel like no big deal.  As nice as the hotel is, I still stay around the corner at the Westin.  I like getting away and enjoy the walk to and from the event.  And boy, did I walk this year.  According to my Apple Watch, in the four days (Monday – Thursday), I logged 63,665 steps and a mind-blowing 33.38 miles.

The sessions that I focused on were the AI ones, given my considerable work with Automated Personalization, Auto-Allocate, and Recommendations.  I also participated in a couple of sessions around optimization programs given my work with MiaProva.

Below was my week and lessons learned for next year.

 

Summit week

Monday

I made a mistake this year and should have come in earlier on Monday, or even Sunday for that matter.  Monday is the Adobe Partner day, and there are quite a few fun things to learn about regarding the partnership and Adobe in general.  It is also a nice time to hang out with the product teams at Adobe – before the storm of Summit begins.  In fact, I was able to make it to one great event that evening at Lavo in the Venetian.  Over the last couple of years at least, organizations that use Adobe solutions, and agencies that help those organizations use them, can be nominated for awards based on the impact of using Adobe solutions.  That night, attendees got to hear about some great use cases, including one from Rosetta Stone, where they used testing to minimize any detriment going from boxed software to digital experiences (a very familiar story to Adobe :).  If you find yourself part of a team that does something really cool or impactful with Adobe Experience Cloud solutions, consider nominating it for next year!

Also on that Monday is something called UnSummit.  I have gone to UnSummit a few times and always enjoyed it.  UnSummit is a great gathering of smart and fun people that share interesting presentations.  Topics vary but they are mainly about Analytics and Testing which is reminiscent of the old days at the Grand America in Salt Lake City.  I am not 100% sure why it is called UnSummit as that could leave the impression that it is a protest or rejection of Summit.  I can assure you that it isn’t or at least I’ve never heard of any bashing or protest.  In fact, all attendees are in town because of Summit.  Again, great event and if you have the time next year, I recommend checking it out.

Tuesday

Opening day, if you will.  The general session, followed by many sessions and labs.  This sounds silly, but I always come early to have breakfast at the conference.  I have had many a great conversation and met so many interesting people by simply joining them at the table.  I do this for all the lunches each day as well.  We are all pretty much there for similar reasons and have similar interests, so it is nice to geek out a bit and network.

I also enjoy checking out the vendor booths and did so this year.  Lots of great conversations, and it was cool to run into many former colleagues and friends.  Southwest Airlines even had a booth there, but I’m not sure why!  Maybe to market to thousands of business folks?

On Tuesday nights, Adobe Target usually hosts an event for Adobe Target users to get together at.  This year it was at the Brooklyn Bowl which is on the Linq Promenade, only a few blocks from the hotel.  A very cool area if you haven’t been that way.  They also have an In-n-out there too!

This event was great as I got to spend some time with some of my clients and enjoy some good food and music.  There was a live band there that night so it was a bit loud but still a great venue and event.  Lots of folks got to bowl which was awesome too.  Of the nightly events, I usually enjoy this one the most.

Wednesday

Big day today!  Breakfast networking, a session, the general session and then game time!  I had the honor of presenting a session with Kaela Cusack of Adobe.  We presented on how to power true personalization with Adobe Target and Adobe Analytics.  The session was great, as we got to share how organizations are using A4T and the bi-directional flow of data between the two solutions to make use of the data they already have in Adobe Analytics.  Lots of really good feedback, and I will be following up here with step-by-step instructions on exactly how organizations can do this for themselves.  You can watch the presentation here.

After my session Q&A, it was Community Pavilion time which is basically snacks and alcohol in the vendor booth area.  I also met with a couple of customers during this time.

Then it was time for Sneaks.  I had never heard of Leslie Jones before, but she was absolutely hysterical.  She had the crowd laughing like crazy.  There were lots of interesting sneaks, but the one I found most interesting was Launch visually interpreting something and then inserting a tag.  If Launch can receive inputs like that, then there should be no reason why Target can’t communicate or send triggers to Launch as well.  I see some pretty cool use cases with Auto-Allocate, Automated Personalization and Launch here!

After Sneaks it was concert time!  Awesome food, copious amounts of Miller Lite and lots of time to hang with clients and friends.  Here is a short clip of Beck who headlined that night:

 

Thursday

Last year I made the big mistake of booking a 3 pm flight out of Vegas on Thursday.  It was a total pain to deal with the luggage and I missed out on two really great sessions that Thursday afternoon.  I wasn’t going to make that mistake this year so I flew home first thing on Friday morning which I will do again next year too.

Thursday is a chill day.  I had quite a few meetings for Demystified and MiaProva prospects and attended a few great sessions.  Several people told me that the session called “The Future of Experience Optimization” was their favorite session of all of Summit and that took place on Thursday afternoon.  I was disappointed that I couldn’t attend due to a client meeting but will definitely be watching the video of this session.

Thursday late afternoon and night were all about catching up on email and getting an early night’s rest.  Again, it was much more relaxing not rushing home.  So that was my week, which somehow now feels like it was many weeks ago.

Takeaways

There were many great sessions, far too many to catch live.  Adobe though made every session available here for viewing.

There is quite a bit going on with Adobe Target, and not just from a product and roadmap perspective.  There is a lot of community work taking place as well.  If you work with Target in any way, I recommend subscribing to both Target TV and the Adobe Target Forum.  I was able to meet Amelia Waliany at Adobe Summit this year, and she is totally cool and fun.  She runs these two initiatives for Adobe.

There are many changes and updates being made to Adobe Target and these two channels are great for staying up to date and for seeing what others are doing with the Product.  I also highly recommend joining Adobe’s Personalization Thursdays as they go deep with the product and bring in some pretty cool guests from time to time.

Hope to see you next year!

 

Featured, Testing and Optimization

Personalization Thursdays

Personalization Thursdays and MiaProva

Personalization Thursdays

Each month, the team at Adobe hosts a webinar series called Personalization Thursdays.  The topics vary but the webinars typically focus on features and capabilities of Adobe Target.  The webinars are well attended and they often go deep technically which leads to many great questions and discussions.  Late last year, I joined one of the webinars where I presented “10 Execution tips to get more out of Adobe Target” and it was very well received!  You can watch that webinar here if you are interested.

Program Management

On Thursday, March 15th, I have the privilege of joining the team again, where I am presenting on “Program Management for Personalization at Scale”.  Here is the outline of this webinar:

Program management has become a top priority for our Target clients as we begin to scale optimization and personalization across a highly matrixed, and often global organization. It’s also extremely valuable in keeping workspaces discrete and efficiency of rolling out new activities. We’ll share the latest developments in program management that will assist with ideation and roadmap development, as well as make it easier to schedule and manage all your activities on-the-go, with valuable alerts and out of the box stakeholder reports.

I plan on diving into Adobe I/O and how organizations and software can use it to scale their optimization programs.  I will also show how users of MiaProva leverage it to manage their tests from ideation through execution.

You have to register to attend but this webinar is open to everyone.  You can quickly register via this link:  http://bhawk.me/march-15-webinar

Hope to see you there!

Adobe Analytics, Featured

Where I’ll Be – 2018

Each year, I like to let my blog readers know where they can find me, so here is my current itinerary for 2018:

Adobe Summit – Las Vegas (March 27-28)

Once again, I am honored to be asked to speak at the US Adobe Summit. This will be my 13th Adobe Summit in a row and I have presented at a great many of those. This year, I am doing something new by reviewing a random sample of Adobe Analytics implementations and sharing my thoughts on what they did right and wrong. A while ago, I wrote a blog post asking for volunteer implementations for me to review, and I was overwhelmed by how many I received! I have spent some time reviewing these implementations and will share lots of tips and tricks that will help you improve your Adobe Analytics implementations. To view my presentation from the US Adobe Summit, click here.

Adobe Summit – London (May 3-4)

Based upon the success of my session at the Adobe Summit in Las Vegas, I will be coming back to London to present at the EMEA Adobe Summit.  My session will be AN7 taking place at 1:00 pm on May 4th.

DAA Symposium – New York (May 15)

As a board member of the Digital Analytics Association (DAA), I try to attend as many local Symposia as I can. This year, I will be coming to New York to present at the local symposium being held on May 15th. I will be sharing my favorite tips and tricks for improving your analytics implementation.

Adobe Insider Tour (May & September)

I will be hitting the road with Adobe to visit Atlanta, Los Angeles, Chicago, New York and Dallas over the months of June and September. I will be sharing Adobe Analytics tips and tricks and trying something new called Adobe Analytics implementation improv!  Learn more by clicking here.

Adobe Analytics “Top Gun” Training – Chicago/Austin (May 24, October 17)

Each year I conduct my advanced Adobe Analytics training class privately for my clients, but I also like to do a few public versions for those who don’t have enough people at their organization to justify a private class. This year, I will be doing one class in Chicago and one in Austin. The Chicago class will be at the same venue in downtown Chicago as the last two years. The date of the class is May 24th (when the weather is a bit warmer and the Cubs are in town the next day for an afternoon game!). You can register for the Chicago class by clicking here.

In addition, for the first time ever, I will be teaming up with the great folks at DA Hub to offer my Adobe Analytics “Top Gun” class in conjunction with DA Hub! My class will be one of the pre-conference training classes ahead of this great conference. This is also a great option for those on the West Coast who don’t want to make the trek to Chicago. To learn more and register for this class and DA Hub, click here.

Marketing Evolution Experience & Quanties  – Las Vegas (June 5-6)

As you may have heard, the eMetrics conference has “evolved” into the Marketing Evolution Experience. This new conference will be in Las Vegas this summer and will also surround the inaugural DAA Quanties event. I will be in Vegas for both of these events.

ObservePoint Validate Conference – Park City, Utah (October 2-5)

Last year, ObservePoint held its inaugural Validate conference and everyone I know who attended raved about it. So this year, I will be participating in the 2nd ObservePoint Validate conference taking place in Park City, Utah. ObservePoint is one of the vendors I work with the most and they definitely know how to put on awesome events (and provide yellow socks!).

DA Hub – Austin (October 18-19)

In addition to doing the aforementioned training at the DA Hub, I will also be attending the conference itself. It has been a few years since I have been at this conference and I look forward to participating in its unique “discussion” format.

 

Featured, google analytics

Google Data Studio “Mini Tip” – Set A “Sampled” Flag On Your Reports!

Google’s Data Studio is their answer to Tableau – a free, interactive data reporting, dashboarding and visualization tool. It has a ton of different automated “Google product” connectors, including Google Analytics, DoubleClick, AdWords, Attribution 360, Big Query and Google Spreadsheets, not to mention the newly announced community connectors (which add the ability to connect third-party data sources).

One of my favourite things about Data Studio is the fact that it leverages an internal-only Google Analytics API, so it’s not subject to the sampling issues of the normal Google Analytics Core Reporting API.

For those who aren’t aware (and to take a quick, level-setting step back), Google Analytics will run its query on a sample of your data if both of these conditions are met:

  1. The query is a custom query, not a pre-aggregated table. (Basically, if you apply a secondary dimension, or a segment.)
  2. The number of sessions in your timeframe exceeds:
    • GA Standard: 500K sessions
    • GA 360: 100M sessions
      (at the view level)

The Core Reporting API can be useful for automating reporting out of Google Analytics. However, it has one major limitation: the sample rate for the API is the same as Google Analytics Standard (500K sessions) … even if you’re a GA360 customer. (Note: Google has recently dealt with this by adding the option of a cost-based API for 360 customers. And of course, 360 customers also have the option of BigQuery. But, like the Core Reporting API, Data Studio is FREE!)

Data Studio, however, follows the same sampling rules as the Google Analytics main interface. (Yay!) Which means for 360 customers, Data Studio will not sample until the selected timeframe is over 100M sessions.

As a quick summary…

Google Analytics Standard

  • Google Analytics UI: 500,000 at the view level
  • Google Analytics API: 500,000
  • Data Studio: 500,000

Google Analytics 360

  • Google Analytics UI: 100 million at the view level
  • Google Analytics API: 500,000
  • Data Studio: 100 million 

But here’s the thing… In Google Analytics’ main UI, we see a little “sampling indicator” to tell us if our data is being sampled.

In Data Studio, historically there was nothing to tell you (or your users) whether the data you were looking at was sampled. Data Studio “follows the same rules as the UI”, so technically, to know if something was sampled, you had to request the same data via the UI and check there.

At the end of 2017, Data Studio added a toggle to “Show Sampling.”

The toggle won’t work in embedded reports though (so if you’re a big Sites user, or otherwise embed reports a lot, you’ll still want to go the manual route), and adding your own flag gives you some control over how, where & how prominently any sampling is shown (plus, the ability to have it “always on” rather than requiring a user to toggle).

What I have historically done is add a discreet “Sampling Flag” to reports and dashboards. Now, keep in mind – this will not tell you if your data is actually being sampled. (That depends on the nature of each query itself.) However, a simple Sampling Flag can at least alert you or your users to the possibility that your query might be sampled, so you can check the original (non-embedded) Data Studio report, or the GA UI, for confirmation.

To create this, I use a very simple CASE formula:

CASE WHEN (Sessions) >= 100000000 THEN 1 ELSE 0 END

(For a GA Standard client, adjust to 500,000)

I place this in the footer of my reports, but you could choose to display it much more prominently if you wanted it called out to your users:

Keep in mind, if you have a report with multiple GA Views pulled together, you would need one Sampling Flag for each view (as it’s possible some views may have sampled data, while others may not.) If you’re using Data Studio within its main UI (aka, not embedded reports), the native sampling toggle may be more useful.

I hope this is a useful “mini tip”! Thoughts? Questions? Comments? Cool alternatives? Please add to the comments!

Adobe Analytics, Featured

Free Adobe Analytics Review @ Adobe Summit

For the past seven years (and many years prior to that while at Omniture!), I have reviewed/audited hundreds of Adobe Analytics implementations. In most cases, I find mistakes that have been made and things that organizations are not doing that they should be. Both of these issues impede the ability of organizations to be successful with Adobe Analytics. Poorly implemented items can lead to bad analysis and missed implementation items represent an opportunity cost for data analysis that could be done, but isn’t. Unfortunately, most organizations “don’t know what they don’t know” about implementing Adobe Analytics, because the people working there have only implemented Adobe Analytics once, or possibly twice, versus people like me who do it for a living. In reality, I see a lot of the same common mistakes over and over again and I have found that showing my clients what is incorrect and what can be done instead is a great way for them to learn how to master Adobe Analytics (something I do in my popular Adobe Analytics “Top Gun” Class).

Therefore, at this year’s Adobe Summit in Las Vegas, I am going to try something I haven’t done in any of my past Summit presentations. This year, I am asking for volunteers to have me review your implementation (for free!) and share with the audience a few things that you need to fix or net new things you could do to improve your Adobe Analytics implementation. In essence, I am offering to do a free review of your implementation and give you some free consulting! The only catch is that when I share my advice, it will be in front of a live audience so that they can learn along with you. In doing this, here are some things I will make sure of:

  • I will work with my volunteers to make sure that no confidential data is shown and will share my findings prior to the live presentation
  • I will not do anything to embarrass you about your current implementation. In fact, I have found that most of the bad things I find are implementation items that were done by people who are no longer part of the organization, so we can blame it on them 😉
  • I will attempt to review a few different types of websites so multiple industry verticals are represented
  • You do not have to be at Adobe Summit for me to review your implementation

So… if you would like to have me do a free review of your implementation, please send me an e-mail or message me via LinkedIn and I will be in touch.

 

 

Featured

Podcasts!

I am a podcast addict! I listen to many podcasts to get my news and for professional reasons. Recently, I came across a great podcast called Everyone Hates Marketers, by Louis Grenier. Louis works for Hotjar, which is a technology I wrote about late last year. His podcast interviews some of the coolest people in Marketing and attempts to get rid of many of the things that Marketers do that annoy people. Some of my favorite episodes were the ones with Seth Godin, DHH from Basecamp and Rand Fishkin. This week, I am honored to be on the podcast to talk about digital analytics. You can check out my episode here, in which I share some of my experiences and stories from my 15 years in the field.

There is a lot of great content in the Everyone Hates Marketers podcast and I highly recommend you check it out if you want to get a broader marketing perspective to augment the great stuff you can learn from the more analytics-industry focused Digital Analytics Power Hour.

While I am discussing podcasts, here are some of my other favorites:

  • Recode Decode – Great tech industry updates from the best interviewer in the business – Kara Swisher
  • Too Embarrassed to Ask – This podcast shares specifics about consumer tech stuff with Lauren Goode and Kara Swisher
  • NPR Politics – Good show to keep updated on all things politics
  • How I Built This – Podcast that goes behind the scenes with the founders of some of the most successful companies
  • Masters of Scale – Great podcast by Reid Hoffman about how startups work and practical tips from leading entrepreneurs
  • Rework – Podcast by Basecamp that shares tips about working better

If you need a break from work-related podcasts, I suggest the following non work-related podcasts:

  • West Wing Weekly – This is a fun show to listen to and re-visit each episode of the classic television series “The West Wing”
  • Filmspotting – This one is a bit long, but provides great insights into current and old movies

Here is to a 2018 filled with new insights and learning!

Adobe Analytics, Featured

NPS in Adobe Analytics

Most websites have specific conversion goals they are attempting to achieve. If you manage a retail site, it may be orders and revenue. Conversely, if you don’t sell products, you might use visitor engagement as your primary KPI. Regardless of the purpose of your website (or app), having a good experience and having people like you and your brand is always important. It is normally a good thing when people use your site/product/app and recommend it to others. One method to capture how often people interacting with your site/brand/app have a good experience is to use Net Promoter Score (NPS). I assume that if you are a digital marketer and reading this, you are already familiar with NPS, but in this post, I wanted to share some ways that you can incorporate NPS scoring into Adobe Analytics.

NPS

The easiest way to add NPS to your site or app is to simply add a survey tool that will pop up a survey to your users and ask them to provide a score. My favorite tool for doing this is Hotjar, but there are several tools that can do this.

Once your users have filled out the NPS survey, you can monitor the results in Hotjar or whichever tool you used to conduct the survey.

But, if you also want to integrate this into Adobe Analytics, there is an additional step that you can take. When a visitor is shown the NPS survey, you can capture the NPS data in Adobe Analytics as well. To start, you would pass the survey identifier to an Adobe Analytics variable (i.e. eVar). This can be done manually or using a tag management system. In this case, let’s assume that you have had two NPS submissions with scores of 7 and 4. Here is what the NPS Survey ID eVar report might look like:

At the same time, you can capture any verbatim responses that users submit with the survey (if you allow them to do this):

This can be done by capturing the text response in another Adobe Analytics variable (i.e. eVar), which allows you to see all NPS comments in Adobe Analytics and, if you want, filter them by specific search keywords (or, if you are low on eVars, you could upload these comments as a SAINT Classification of the NPS Survey ID). Here is what the NPS Comments eVar report might look like when filtered for the phrase “slow:”

Keep in mind that you can also build segments based upon these verbatim comments, which is really cool!

Trending NPS in Adobe Analytics

While capturing NPS Survey ID’s and comments is interesting, you probably want to see the actual NPS scores in Adobe Analytics as well. You can do this by capturing the actual NPS value in a numeric success event in Adobe Analytics when visitors submit the NPS survey. You can also set a counter success event for every NPS survey submission, which allows you to create a calculated metric that shows a trend of your overall NPS.

First, you would set up the success events in the Adobe Analytics administration console:

Let’s look at this using the previously described example. When the first visitor comes to your site and completes an NPS survey with a score of 7, you would set the following upon submission:

s.events="event20=1,event21=7";

When the second visitor completes an NPS survey with a score of 4, you would set the following:

s.events="event20=1,event21=4";

Next, you can build a calculated metric that computes your overall NPS. Here is the standard formula for computing NPS using a scale of 1-10:
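Spelled out in text, the standard calculation is:

NPS = ((# of Promoters − # of Detractors) ÷ Total Survey Responses) × 100

where Promoters are respondents scoring 9 or 10 and Detractors are respondents scoring 6 or below.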

In our scenario, the NPS would be -50, since we had one detractor and no promoters, computed as ((0-1)/2) x 100 = -50.

To create the NPS metric in Adobe Analytics, you first need to create segments to isolate the number of Promotors and Detractors you have in your NPS surveys. This can be done by building a segment for Promoters…

…and a segment for Detractors:

Once these segments have been created, they can be applied to the following calculated metric formula in Adobe Analytics:
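As a rough sketch (the exact construction in the metric builder may differ slightly), the formula amounts to:

((NPS Survey Submissions [Promoters segment] − NPS Survey Submissions [Detractors segment]) ÷ NPS Survey Submissions) × 100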

Once you have created this calculated metric, you would see a trend report that looks like this (assuming only the two visitors mentioned above):

This report only shows the two scores from one day, so if we pretend that the previous day, two visitors had completed NPS surveys and provided scores of 9 & 10 respectively (a score of 100), the daily trend would look like this:

If we looked at the previous report with just two days (November 3rd & 4th) for a longer duration (i.e. week, month, year), we would see the aggregate NPS Score:

In this case, the aggregate NPS score for the week (which in this case just includes two days) is 25 computed as: ((2 Promoters – 1 Detractor)/4 Responses) x 100 = 25.

If we had data for a longer period of time (i.e. October), the trend might look like this (shown in Analysis Workspace):

And if we looked at the October data set by week, we would see the aggregate NPS (shown in Analysis Workspace):

Here we can see that there is a noticeable dip in NPS around the week of October 22nd. If you break this down by the NPS Comments eVar, you can see whether there are comments telling us why the scores dipped:

In this case, the comments let us know that the blog portion of the website was having issues, which hurt our overall NPS.

One side note about the overall implementation of this. In the preceding scenario I built the NPS as a calculated metric, but I could have also used the Promoter and Detractor segments to create two distinct calculated metrics (Promoters and Detractors)…

…which would allow me to see a trend of Promoters (or Detractors) over time:

Alternatively, you could choose to set success events for Promoter Submissions and Detractor Submissions (in real-time) instead of using segments to create these metrics. Doing this would require three success events instead of two and would remove the need for the segments, but the results would be the same.

Summary

As you can see, this is a fair amount of work. So why would you want to do all of this if you already have NPS data in your survey tool (i.e. Hotjar)? For me, having NPS data in Adobe Analytics provides the following potential additional benefits:

  • Build a segment of sessions that had really good or really bad NPS scores and view the specific paths visitors have taken to see if there are any lessons to be learned
  • Build a segment of sessions that had really good or really bad NPS scores and see the differences in cart conversion rates
  • Look at the retention of visitors with varying NPS scores
  • Identify which marketing campaigns are producing visitors with varying NPS scores
  • Easily add NPS trend to an existing Adobe Analytics dashboard
  • Easily correlate other website KPI’s with NPS score to see if there are any interesting relationships (i.e. does revenue correlate to NPS score?)
  • Use NPS score as part of contribution analysis
  • Create alerts for sudden changes in NPS
  • Identify which [Hotjar] sessions you want to view recordings for (using the captured Survey ID), based upon behavior found in Adobe Analytics

These are just some ideas that I have thought about for incorporating NPS into your Adobe Analytics implementation. If you have any other ideas, feel free to leave a comment here.

Featured, Testing and Optimization

Simple and oh so very sweet

Informatica is a very large B2B company and one of the most successful players in the data management market.  Informatica also has an impressive testing and optimization program and they make heavy use of data and visitor behavior to provide the ideal experience for their digital consumers.

Like most spaces, the B2B space offers countless opportunities for testing and learning.  The more data you have, the more opportunities exist for quantifying personalization efforts through targeted tests and for machine learning through solutions like Adobe’s Automated Personalization tools.  In fact, many B2B optimization programs are focused on the knowns and the unknowns, with integrations between the testing solution(s) and demand generation platforms, as I wrote about a few years ago.

In a world with relatively complex testing options available, using first-party data, third-party data such as Demandbase (a great data source for B2B), and limitless behavioral data, it is important not to lose sight of simpler tests.  Just because rich data is available and complex testing capabilities exist doesn’t mean more basic tests and user experience tests should be deprioritized.  It is ideal for organizations to have a nice balance of targeted advanced tests along with an array of more general tests, as it gives the organization a wider basket to catch opportunities to learn more about what is important to their digital consumers.  Informatica knows this, and here is a very successful user experience test that they recently ran.

Informatica was recently named a leader in Gartner’s Magic Quadrant report, and the testing team wanted to optimize how to get this report to the digital consumers of their product pages.  Many different ideas were discussed, and the user experience team decided to use a sticky banner that would appear at the bottom of the page.  Two key concepts were introduced into this test: the first was the height of the banner and the second was the inclusion of an image.  Both sticky banners also allow the user to X out of (close) the banner.

The Test Design

Here is what Experience A or the Control test variant looked like (small sticky footer and no image) on one of their product pages:

and the Experience B test variant on the same product page (increased height and inclusion of image):

 

And up close:

vs.

 

The primary metric for this test was Form Completes, which translates to visitors clicking on the banner and then filling out the subsequent form on the landing page.  We also set up the test to report on these additional metrics:

  • Clicks on the “Get the Reports” CTA in banner
  • Clicking on the Image (which led to the same landing page)
  • Clicking on the “X” which made the banner go away

The Results

And here is what was learned.  For the “Get the Reports” call to action in both footers:

While our primary test metric is “Form Completes”, this was a great finding and learning.  There was a 32.42% increase in clicks on the same call to action, due either to the increased height or to the image.

For the “Image Click”:

This was not surprising, since visitors could only click on the image in Experience B; the image didn’t exist in Experience A.  Some might wonder why this metric was even included in the test setup, but by including it, we were able to learn something pretty interesting.  The primary metric is “Form Completes”, and in order to get a form complete we need to get visitors to that landing page.  Visitors get to that landing page by either clicking on the “Get the Report” call to action or clicking on the image.  We wanted to see what percentage of “clickers” for Experience B came from the image vs. the “Get the Report” call to action.  It turns out 52.6% of clicks in Experience B came from the image vs. 47.5% from the call to action.  Keep in mind, though, that while the image did marginally better in clicks, the same call to action in Experience B still saw a 32.42% increase vs. Experience A.  The image clickers represented an additional net gain of possible form completers!

For the “X” or close clickers:

This was another interesting finding.  There was a significant increase (over 127%) in visitors clicking on the X for Experience B.  This metric was included so we could see engagement rates with the “X” and compare those rates with the other metrics.  We found that engagement with the “X” was significantly higher, almost tenfold, compared to the calls to action or the image.  We surmised that the increase in “X” clicks for Experience B compared to Experience A was because of Experience B’s increased height.

And now, for the primary “Form Complete” metric:

A huge win!  They got close to a 94% lift in form completes with the taller sticky footer and image.  The Experience B “Get the Report” call to action led to a 32.42% increase in visitors arriving on the form page, and the image in that same experience brought a significant number of additional visitors to the form page.  Couple these together and we have a massive increase in form completions!

For a test like this, it often also helps to visualize the distribution of clicks across the test content.  In the image below, X represents the number of clicks on the Experience A “Get the Reports” call to action.  Using “X” as the multiplier, you can see the distribution of clicks across the test experiences.

Was it the image, the height, or the combination of the two that led to this change in behavior?  Subsequent testing will shed more light, but at the end of the day, this relatively simple test led to significant increases in a key organizational performance indicator and provided the user experience teams and designers with fascinating learnings.

 

Featured, General

My First MeasureCamp!

Last Saturday, I attended my first MeasureCamp! It was the inaugural MeasureCamp for Brussels and it drew about 150 people, some of whom came from as far away as Russia to attend! About 40% of the attendees were not local, but being in central Europe, it was easy for people to come from France, the UK, Germany, etc. (I was the lone American there).

Over the years, I have heard great things about MeasureCamp (and not just from Peter!), but due to scheduling conflicts and the relatively few that have taken place in the US, I had not had an opportunity to attend. Now that I have, I can see what all of the fuss is about. It was a great event! While giving up a Saturday to do more “work” may not be for everyone, those who attended were super-excited to be there! Everyone I met was eager to learn and have fun! Unlike traditional conferences, MeasureCamp, being an “un-conference,” has a format where anyone can present whatever they want. That means you don’t just hear from the same “experts” who attend the same conferences each year (like me!). I was excited to see what topics were top of mind for the attendees and debated whether I wanted to present anything at all for my first go-round. But as the sessions hit the board, I saw that there were some slots open, so at the last minute, I decided to do a “lessons learned” session and a small “advanced Adobe Analytics tricks” session. I attended sessions on GDPR, AI, visitor engagement, attribution and a host of other topics.

Overall, it was great to meet some new analytics folks and to hear different perspectives on things. I love that MeasureCamp is free and has no selling aspects to it. While there are sponsors, they did a great job of helping make the event happen, while not pitching their products.

For those who have not attended and plan to, here is my short list of tips:

  1. Think about what you might want to present ahead of time and consider filling out the session forms ahead of time if you want to make sure you get on the board. Some folks even made pretty formatting to “market” their sessions!
  2. Be prepared to be an active participant vs. simply sitting in and listening. The best sessions I attended were the ones that had the largest number of active speakers.
  3. Bring business cards, as there may be folks you want to continue conversations with!

I am glad that Peter has built such a great self-sustaining movement and I look forward to seeing it more in the US in the future. I recommend that if you have a chance to attend a MeasureCamp, that you go for it!

Adobe Analytics, Featured

Minneapolis Adobe Analytics “Top Gun” Class – 12/7/17

Due to a special request, I will be doing an unexpected/unplanned Adobe Analytics “Top Gun” class in Minneapolis, MN on December 7th. To register, click here.

For those of you unfamiliar with my Adobe Analytics “Top Gun” class, it is a one-day crash course on how Adobe Analytics works behind the scenes based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate everyday business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects of Adobe Analytics, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Here are some quotes from recent class attendees:

To register for the class, click here. If you have any questions, please e-mail me. I hope to see you there!

Conferences/Community, Featured

MeasureCamp Brussels!!

For years I have been trying to get to a MeasureCamp event, but my timing in Europe has always been a bit off.  For those not familiar with MeasureCamp, it is a cool “un-conference” held locally where anyone can attend (for free!) and share things they have done or tips related to the analytics field. I am excited to say that I will finally be able to attend my first MeasureCamp in Brussels this month! Since I will be in London conducting my advanced Adobe Analytics “Top Gun” class, I am going to stay over a few more days to experience MeasureCamp!  I hope to see you there!

Now I just have to figure out what topic I want to talk about!  If you have any suggestions, please leave them as a comment below!!

Adobe Analytics, Analysis, Featured, google analytics

Did that KPI Move Enough for Me to Care?

This post really… is just the setup for an embedded 6-minute video. But, it actually hits on quite a number of topics.

At the core:

  • Using a statistical method to objectively determine if movement in a KPI looks “real” or, rather, if it’s likely just due to noise
  • Providing a name for said statistical method: Holt-Winters forecasting
  • Illustrating time-series decomposition; I have yet to find an analyst who, when first exposed to it, doesn’t feel like their mind is blown just a bit
  • Demonstrating that “moving enough to care” is also another way of saying “anomaly detection”
  • Calling out that this is actually what Adobe Analytics uses for anomaly detection and intelligent alerts.
  • (Conceptually, this is also a serviceable approach for pre/post analysis…but that’s not called out explicitly in the video.)

On top of the core, there’s a whole other level of somewhat intriguing aspects of the mechanics and tools that went into the making of the video:

  • It’s real data that was pulled and processed and visualized using R
  • The slides were actually generated with R, too… using RMarkdown
  • The video was generated using an R package called ari (Automated R Instructor)
  • That package, in turn, relies on Amazon Polly, a text-to-speech service from Amazon Web Services (AWS)
  • Thus… rather than my dopey-sounding voice, I used “Brian”… who is British!

Neat, right? Give it a watch!

If you want to see the code behind all of this — and maybe even download it and give it a go with your data — it’s available on Github.

Adobe Analytics, Featured

Cart Persistence and Purchases [Adobe Analytics]

Many years ago, I wrote a post about shopping cart persistence based upon a query from a client. That post showed how to see how long items had been in the cart and a few other things. In this post, I am going to take a different slant and talk about how you can see which items are persisting in the cart and whether visitors are purchasing products they have persisted in the shopping cart.

What’s Persisting In The Cart?

The first step is to identify what items are persisting in the shopping cart when visitors arrive at your site. To do this, you can set a success event on the 1st page of the session (let’s call it Persistent Cart Visits) and then set the Products variable with each product that is in the cart.

s.events="event95";
s.products=";blue polo shirt,;soccer ball";
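
If you’re curious how that Products string might get built dynamically, here is a minimal sketch. It assumes a hypothetical data layer array holding the names of the items persisting in the cart, and that you guard the code so it only fires on the first page of the visit (for example, with a session cookie or sessionStorage flag):

// Hypothetical data layer: names of products persisting in the cart
var cartItems = (window.digitalData && window.digitalData.cart) ? window.digitalData.cart.items : [];

if (cartItems.length > 0) {
  s.events = "event95"; // Persistent Cart Visits
  // Build ";product" entries separated by commas, e.g. ";blue polo shirt,;soccer ball"
  s.products = cartItems.map(function (name) { return ";" + name; }).join(",");
}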

This will allow you to easily report upon which products are most often in the cart when visits begin:

This data can be trended over time to see whether certain products frequently persist in the cart, and you can merge it with product cost information to see potential missed revenue opportunities. It can also be useful for re-marketing efforts, like offering a coupon or discount on items left in the cart. You can also use Date Range segments to see which products added to the cart last week (for example) were viewed as a persistent cart this week.

Compare Cart Persistence to Orders

Once you have the preceding items tagged, you can look to see how often any of the products that were persisting in the cart were purchased. One way to do this is to use the Products report to compare Persistent Cart Visits and Orders. This will allow you to see a ratio of orders per persistent cart visits (by product):

This allows you to see which products are getting purchased and you can break this report down by campaign to see if any of your re-marketing efforts are leading to success.

General Persistent Cart Conversion

Another approach to cart persistence is understanding, in general, how often cart persistence leads to conversion. Using the calculated metric shown above by itself, you can easily see the cart persistence conversion rate over time:

Alternatively, you can use segmentation to isolate visits that had an order AND had items in the cart when the visit began. This can be done by creating a segment using the Orders and Persistent Cart Visits success events:

Once this segment is created, it can be added to a Visits metric or Revenue metric or any other number of items to create some interesting derived calculated metrics.

Of course, you can also create product-specific segments to see how often visitors are purchasing a specific product that they have persisted in the cart by adding the Products variable to the preceding segment like this:

Advanced Cart Persistence

If you like this concept and want to take it to the “Top Gun” level, here is another cool use case you can try out. When visitors come to your site with an item persisting in their cart, have your developers note which products were in the cart (the same list passed to the Products variable above). Then, when a visitor completes an order on the site, look at the persistent cart product list, and if any of the products purchased were in that list, track it via a Merchandising eVar (as a flag). At the same time, you can add two new success events (Persistent Cart Orders and Persistent Cart Revenue) in the Products string as well:

s.events="purchase,event110,event111";
s.products=";blue polo shirt;1;50;event110=1|event111=50;evar90=persistent-cart,;blue purse;1;45";

In this example, the customer is purchasing two items, but only one was a result of the persistent cart. By setting a flag in the Merchandising eVar and two new success events, we can isolate the specific product that was attributed to the persistent cart and see a count of Orders and Revenue resulting from cart persistence. Once this is done, you can trend Persistent Cart Orders and Revenue and even compare those metrics to total Orders and Revenue to see what % of Orders and Revenue is due to cart persistence.
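
As a rough sketch of what that purchase-page logic could look like (the localStorage key and the data layer structure here are made up for illustration; the events and eVar match the example above):

// Hypothetical: the persistent cart list noted when the visit began
var persistedList = JSON.parse(localStorage.getItem("persistentCartList") || "[]");
// Hypothetical: the items being purchased right now
var purchasedItems = [
  { name: "blue polo shirt", qty: 1, price: 50 },
  { name: "blue purse", qty: 1, price: 45 }
];

s.events = "purchase,event110,event111";
s.products = purchasedItems.map(function (item) {
  var productString = ";" + item.name + ";" + item.qty + ";" + (item.qty * item.price);
  if (persistedList.indexOf(item.name) > -1) {
    // This product was in the persistent cart, so flag it and set the two new events
    productString += ";event110=1|event111=" + (item.qty * item.price) + ";evar90=persistent-cart";
  }
  return productString;
}).join(",");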

Another super-cool thing you can do is use the new Analysis Workspace Cohort Analysis visualization to compare Cart Additions and Persistent Cart Orders to see what % of people adding items to the cart come back to order items in the cart.

Unfortunately, since you cannot yet use derived calculated metrics in Cohort Analysis, you may get some extraneous data you don’t want in the Cohort table (i.e. people purchasing multiple items and only some being due to cart persistence), but it should still give you some interesting data (and maybe one day Adobe will allow calculated metrics in Cohort Analysis!).

In summary, there are lots of cool ways you can measure shopping cart persistence. These are just a few of them. If you have any other ways you have done this, feel free to leave a comment here.  Thanks!

Adobe Analytics, Featured, General, google analytics, Technical/Implementation

Can Local Storage Save Your Website From Cookies?

I can’t imagine that anyone who read my last blog post set a calendar reminder to check for the follow-up post I had promised to write, but if you’re so fascinated by cookies and local storage that you are wondering why I didn’t write it, here is what happened: Kevin and I were asked to speak at ObservePoint’s inaugural Validate conference last week, and have been scrambling to get ready for that. For anyone interested in data governance, it was a really unique and great event. And if you’re not interested in data governance, but you like outdoor activities like mountain biking, hiking, fly fishing, etc. – part of what made the event unique was some really great networking time outside of a traditional conference setting. So put it on your list of potential conferences to attend next year.

My last blog post was about some of the common pitfalls that my clients see that are caused by an over-reliance on cookies. Cookies are critical to the success of any digital analytics implementation – but putting too much information in them can even crash a customer’s experience. We talked about why many companies have too many cookies, and how a company’s IT and digital analytics teams can work together to reduce the impact of cookies on a website.

This time around, I’d like to take a look at another technology that is a potential solution to cookie overuse: local storage. Chances are, you’ve at least heard about local storage, but if you’re like a lot of my clients, you might not have a great idea of what it does or why it’s useful. So let’s dive into local storage: what it is, what it can (and can’t) do, and a few great use cases for local storage in digital analytics.

What is Local Storage?

If you’re having trouble falling asleep, there’s more detail than you could ever hope to want in the specifications document on the W3C website. In fact, the W3C makes an important distinction and calls the actual feature “web storage,” and I’ll describe why in a bit. But most people commonly refer to the feature as “local storage,” so that’s how I’ll be referring to it as well.

The general idea behind local storage is this: it is a browser feature designed to store data in name/value pairs on the client. If this sounds a lot like what cookies are for, you’re not wrong – but there are a few key differences we should highlight:

  • Cookies are sent back and forth between client and server on all requests in which they have scope; but local storage exists solely on the client.
  • Cookies allow the developer to manage expiration in just about any way imaginable – by providing an expiration timestamp, the cookie value will be removed from the client once that timestamp is in the past; and if no timestamp is provided, the cookie expires when the session ends or the browser closes. On the other hand, local storage can support only 2 expirations natively – session-based storage (through a DOM object called sessionStorage), and persistent storage (through a DOM object called localStorage). This is why the commonly used name of “local storage” may be a bit misleading. Any more advanced expiration would need to be written by the developer.
  • The scope of cookies is infinitely more flexible: a cookie could have the scope of a single directory on a domain (like http://www.analyticsdemystified.com/blogs), or that domain (www.analyticsdemystified.com), or even all subdomains on a single top-level domain (including both www.analyticsdemystified.com and blog.analyticsdemystified.com). But local storage always has the scope of only the current subdomain. This means that local storage offers no way to pass data from one subdomain (www.analyticsdemystified.com) to another (blog.analyticsdemystified.com).
  • Data stored in either localStorage or sessionStorage is much more easily accessible than in cookies. Most sites load a cookie-parsing library to handle accessing just the name/value pair you need, or to properly decode and encode cookie data that represents an object and must be stored as JSON. But browsers come pre-equipped to make saving and retrieving storage data quick and easy – both objects come with their own setItem and getItem methods specifically for that purpose (see the short sketch after this list).
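
To make that last point concrete, here is what reading and writing storage looks like (the keys and values below are made up for illustration):

// Persistent storage (survives browser restarts)
localStorage.setItem("lastCampaignId", "spring-sale");
var campaignId = localStorage.getItem("lastCampaignId"); // "spring-sale"

// Session storage (cleared when the session ends)
sessionStorage.setItem("pagesViewed", "3");

// Objects still need to be serialized, just like with cookies
localStorage.setItem("cart", JSON.stringify({ items: ["blue polo shirt"] }));
var cart = JSON.parse(localStorage.getItem("cart"));

// Removing a single item
localStorage.removeItem("lastCampaignId");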

If you’re curious what’s in local storage on any given site, you can find out by looking in the same place where your browser shows you what cookies it’s currently using. For example, on the “Application” tab in Chrome, you’ll see both “Local Storage” and “Session Storage,” along with “Cookies.”

What Local Storage Can (and Can’t) Do

Hopefully, the points above help clear up some of the key differences between cookies and local storage. So let’s get into the real-world implications they have for how we can use them in our digital analytics efforts.

First, because local storage exists only on the client, it can be a great candidate for digital analytics. Analytics implementations reference cookies all the time – perhaps to capture a session or user ID, or the list of items in a customer’s shopping cart – and many of these cookies are essential both for server- and client-side parts of the website to function correctly. But the cookies that the implementation sets on its own are of limited value to the server. For example, if you’re storing a campaign ID or the number of pages viewed during a visit in a cookie, it’s highly unlikely the server would ever need that information. So local storage would be a great way to get rid of a few of those cookies. The only caveat here is that some of these cookies are often set inside a bit of JavaScript you got from your analytics vendor (like an Adobe Analytics plugin), and it could be challenging to rewrite all of them in a way that leverages local storage instead of cookies.

Another common scenario for cookies might be to pass a session or visitor ID from one subdomain to another. For example, if your website is an e-commerce store that displays all its products on www.mystore.com, and then sends the customer to shop.mystore.com to complete the checkout process, you may use cookies to pass the contents of the customer’s shopping cart from one part of the site to another. Unfortunately, local storage won’t help you much here – because, unlike cookies, local storage offers no way to pass data from one subdomain to another. This is perhaps the greatest limitation of local storage that prevents its more frequent use in digital analytics.

Use Cases for Local Storage

The key takeaway on local storage is that there are 2 primary limitations to its usefulness:

  • If the data to be stored is needed both on the client/browser and the server, local storage does not work – because, unlike cookies, local storage data is not sent to the server on each request.
  • If the data to be stored is needed on multiple subdomains, local storage also does not work – because local storage is subdomain-specific. Cookies, on the other hand, are more flexible in scope – they can be written to work across multiple subdomains (or even all subdomains on the same top-level domain).

Given these considerations, what are some valid use cases when local storage makes sense over cookies? Here are a few I came up with (note that all of these assume that neither limitation above is a problem):

  • Your IT team has discovered that your Adobe Analytics implementation relies heavily on a number of cookies, several of which are quite large. In particular, you are using the crossVisitParticipation plugin to store a list of each visit’s traffic source. You have a high percentage of return visitors, and each visit adds a value to the list, which Adobe’s plugin code then encodes. You could rewrite this plugin to store the list in the localStorage object. If you’re really feeling ambitious, you could override the cookie read/write utilities used by most Adobe plugins to move all cookies used by Adobe (excluding visitor ID cookies of course) into localStorage.
  • You have a session-based cookie on your website that is incremented by 1 on each page load. You then use this cookie in targeting offers based on engagement, as well as invites to chat and to provide feedback on your site. This cookie can very easily be removed, pushing the data into the sessionStorage object instead.
  • You are reaching the limit to the number of Adobe Analytics server calls or Google Analytics hits before you bump up to the next pricing tier, but you have just updated your top navigation menu and need to measure the impact it’s having on conversion. Using your tag management system and sessionStorage, you could “listen” for all navigation clicks, but instead of tracking them immediately, you could save the click information and then read it on the following page. In this way, the click data can be batched up with the regular page load tracking that will occur on the following page (if you do this, make sure to delete the stored value after using it, so you can avoid double-tracking on subsequent pages). A rough sketch of this approach follows this list.
  • You have implemented a persistent shopping cart on your site and want to measure the value and contents of a customer’s shopping cart when he or she arrives on your website. Your IT team will not be able to populate this information into your data layer for a few months. However, because they already implemented tracking of each cart addition and removal, you could easily move this data into a localStorage object on each cart interaction to help measure this.
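
For the navigation click example in the third bullet above, here is one way it might work. The eVar and event numbers are made up, and you would adapt the click listener to however your tag management system captures clicks:

// When a top navigation link is clicked, save the click details instead of firing a beacon
function storeNavClick(linkText) {
  sessionStorage.setItem("pendingNavClick", linkText);
}

// On the next page load, before the page view beacon fires, pick up the stored click
var pendingClick = sessionStorage.getItem("pendingNavClick");
if (pendingClick) {
  s.eVar25 = pendingClick; // hypothetical eVar for navigation clicks
  s.events = s.events ? s.events + ",event30" : "event30"; // hypothetical navigation click event
  sessionStorage.removeItem("pendingNavClick"); // avoid double-tracking on subsequent pages
}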

All too often, IT and analytics teams resort to the “just stick it in a cookie” approach. That way, they justify, we’ll have the data saved if it’s ever needed. Given some of the limitations I talked about in my last post, we should all pay close attention to the number, and especially the size, of cookies on our websites. Not doing so can have a very negative impact on user experience, which in turn can have painful implications for your bottom line. While not perfect for every situation, local storage is a valuable tool that can be used to limit the number of cookies used by your website. Hopefully this post has helped you think of a few ways you might be able to use local storage to streamline your own digital analytics implementation.

Photo Credit: Michael Coghlan (Flickr)

Adobe Analytics, Featured

European Adobe Analytics “Top Gun” Master Class – October 19th

A while back I asked folks to fill out a form if they were interested in me doing one of my Adobe Analytics “Top Gun” classes locally, and soon after, I had many European folks fill out the form! Therefore, this October 19th I will be conducting my advanced Adobe Analytics class in London. This will likely be the last time I offer this class in Europe for a while, so if you are interested, I encourage you to register before the spots are gone.

For those of you unfamiliar with my Adobe Analytics “Top Gun” class, it is a one day crash course on how Adobe Analytics works behind the scenes based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate everyday business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects of Adobe Analytics, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Here are some quotes from recent class attendees:

To register for the class, click here. If you have any questions, please e-mail me. I hope to see you there!

Adobe Analytics, Featured

Content Freshness [Adobe Analytics]

Recently, I had a client ask me about content freshness on their site. In this case, the client wanted to know if the content on their site was going stale after a few days or weeks so they could determine when to pull it off the site. While the best way to use what I will show is on a site that has a LOT of content and new content on a regular basis (like a news site), in this post, I will demonstrate the concept using our blog, which is all I can share publicly.

Step 1 – Set Dates

The first step in seeing how long it takes your users to interact with your content is to capture the number of days between the content publish date and the view date. To do this, you can add an eVar that captures the difference, in days, between the current date and the content publish date. For example, if I look at one of my old blog posts today, I can see in eVar10 the number of days after it was posted that I am viewing it:

In this case, the value of “13” is being passed to the eVar, which tells Adobe Analytics that the post being viewed is 13 days old. Once you have done this, you will see a report like this in Adobe Analytics:

If I break down the “13” row, I will see that it represents the previously shown blog post and if any other posts were published on the same date, they would appear also:
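
In case you’re wondering how the eVar value itself might be computed, here is a minimal sketch. It assumes the publish date is exposed somewhere on the page (the data layer property below is hypothetical):

// Hypothetical data layer value, e.g. "2017-09-05"
var publishDate = new Date(window.digitalData.page.publishDate);
var today = new Date();
var msPerDay = 1000 * 60 * 60 * 24;
// Days between the publish date and the view date (ignoring time zones for simplicity)
s.eVar10 = Math.floor((today - publishDate) / msPerDay).toString();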

Step 2 – Classify Dates

However, the above report is pretty ugly and way too granular for analysis! Therefore, you can then apply SAINT Classifications to the number of days and make the report a bit more readable. Here is an example of the SAINT file that I used:

Keep in mind that you can pre-classify the number of days ahead of time (I went up to 20,000 to be safe) so that you only have to upload this once.
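
For reference, a simplified version of what such a SAINT file might look like is below (the bucket names and column header are just examples, and in the real file every possible number of days gets its own row):

Key      Content Age
0        Same Day
1        1-7 Days
7        1-7 Days
13       8-30 Days
30       8-30 Days
365      Over 30 Days
20000    Over 30 Days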

Next, you can open the classification report and see this, which is much more manageable and can be trended:

Step 3 – Reporting

In this case, I decided to create a data block in Adobe ReportBuilder to see data on a daily/trended basis. Here is what the data block looked like:

This produced a report like this:

Which I then graphed like this:

Using Excel pivot tables, you can group the data any way you’d like once you have it in Excel.

Lastly, you can also use the Cohort Analysis feature of Analysis Workspace to get a different view on how your content is being used:

Adobe Analytics, Featured

Advanced Click-Through Rates in Adobe Analytics – Placement

Last week, I described how to track product and non-product click-through rates in Adobe Analytics. This was done via the Products variable and Merchandising eVars. In this post, I will take it a step further and explain how to view click-through rates by placement location. I suggest you read the last post before this one for continuity’s sake.

Placement Click-Through Rates

In my preceding post, I showed how to see the click-through rate for products by setting two success events and leveraging the Products variable. As an example, I showed a page that listed several products like this:

To see click-through rates, you would set the following code on the page showing the products to get product impressions:

s.events="event20";
s.products=";11345,;11367,;12456,;11426,;11626,;15522,;17881,;18651";

Then, when visitors click on a product, you would set code like this:

s.events="event21";
s.products=";11345";

Then you can create a click-through rate calculated metric and produce a report that looks like this:

However, what if you wanted to see the click-through rate of each product based upon its placement location? For example, you can see above that product #11345 has a click-through rate of 26.97%, but how much does this click-through rate depend upon its location? How much better does it perform if it is in Row 1 – Slot 1 vs. Row 2 – Slot 3? To understand this, you have to add another component to the mix – Placement.

To do this, you can add a new Merchandising eVar that captures the Placement details and set it in the merchandising slot of the Products string like this:

s.events="event20";
s.products=";11345;;;;evar30=Row1-Slot1,;11367;;;;evar30=Row1-Slot2,;12456;;;;evar30=Row1-Slot3,;11426;;;;evar30=Row1-Slot4,;11626;;;;evar30=Row2-Slot1,;15522;;;;evar30=Row2-Slot2,;17881;;;;evar30=Row2-Slot3,;18651;;;;evar30=Row2-Slot4";

As you can see, the string is the same as before, just with the addition of a new merchandising eVar30 for each product value. This tells Adobe Analytics that each impression (event20) should be tied to both a product and a placement. And since the product and placement are in the same portion of the product string, there is an additional connection made between the specific product (i.e. 11345) and the placement (i.e. Row1-Slot1) for each impression. This allows you to perform a breakdown between product and placement (or vice-versa), which I will demonstrate later.
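
If the page is built dynamically, a string like the one above is usually assembled in code rather than hard-coded. Here is a minimal sketch, assuming a hypothetical array of products with their row and slot positions:

// Hypothetical list of products shown on the page, with their placement locations
var productsShown = [
  { id: "11345", row: 1, slot: 1 },
  { id: "11367", row: 1, slot: 2 },
  { id: "12456", row: 1, slot: 3 }
];

s.events = "event20"; // Product Impressions
s.products = productsShown.map(function (p) {
  // ";product;;;;evar30=RowX-SlotY" ties each impression to both the product and its placement
  return ";" + p.id + ";;;;evar30=Row" + p.row + "-Slot" + p.slot;
}).join(",");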

If a visitor clicks on a product, you would set the click event and capture the product and placement in the Products string:

s.events="event21";
s.products=";11345;;;;evar30=Row1-Slot1";

In theory, you don’t need to set the merchandising eVar again on the click, since it can persist, but there is no harm in doing so if you’d like to be sure.

Once this is done, you can break down any product in the preceding report by its placement and use the click-through rate calculated metric to see click-through rates for each product, by placement location. In addition, since each impression and click is also associated with a placement, you can also see impressions, clicks and the click-through rate for each placement by using the merchandising eVar on its own. Here is what the eVar30 report might look like:

This allows you to see placement click-through rates agnostic of what was shown in the placement. Of course, if you want to break this down by product, you can do that to see a report like this:

Lastly, one other cool thing you can do with this is to view click-through rates by placement row and column using SAINT Classifications. In the report above that shows click-through rates by Row & Slot (the one with 8 rows), you can easily classify each of these rows by row and column (slot). For example, the first four rows would all be grouped into “Row 1” and another classification would group rows 1 & 5, 2 & 6, 3 & 7 and 4 & 8 into four column (slot) values. This would allow you to see click-through rate by row and column with no additional tagging.

Another cool thing you can do is to embed a page identifier in the placement string passed to the merchandising eVar. This is helpful if you want to see how click-through rates differ if products are shown on page A vs. Page B. To do this, simply pre-pend a page identifier before the “Row1-Slot1” values, which can then be filtered or classified using SAINT. For example, you might change the value above to “shoe-landing:Row1-Slot1” in the merchandising eVar value. This would break out the Row1-Slot1 values by page and give you additional data for analysis. The only catch here is that you want to be careful about what data you pass during the click portion of the tagging, as you either want to leave the merchandising eVar value blank (to inherit the previous value with the page of the impression) or you want to set it with the value of the previous page so your impressions and clicks are both associated with the same page. If you are tracking impressions and clicks for things other than products (Ferguson example in my previous post), you can either include the placement in the merchandising eVar string or you can set a second merchandising eVar (like shown above) to capture the placement.

Hence, with the addition of one merchandising eVar, you can see click-through rates by placement, product & placement, placement & product, row, column and page.

Adobe Analytics, Featured, google analytics, Technical/Implementation

Don’t Let Cookies Eat Your Site!

A few years ago, I wrote a series of posts on how cookies are used in digital analytics. Over the past few weeks, I’ve gotten the same question from several different clients, and I decided it was time to write a follow-up on cookies and their impact on digital analytics. The question is this: What can we do to reduce the number of cookies on our website? This follow-up will be split into 2 separate posts:

  1. Why it’s a problem to have too many cookies on your website, and how an analytics team can be part of the solution.
  2. When local storage is a viable alternative to cookies.

The question I described in the introduction to this post is usually posed to me like this: An analyst has been approached by someone in IT who says, “Hey, we have too many cookies on our website. It’s stopping the site from working for our customers. And we think the most expendable cookies on the site are those being used by the analytics team. When can you have this fixed?” At this point, the client frantically reaches out to me for help. And while there are a few quick suggestions I can usually offer, it usually helps to dig a little deeper and determine whether the problem is really as dire as it seems. The answer is usually no – and it is my experience that analytics tools usually contribute surprisingly little to cookie overload.

Let’s take a step back and identify why too many cookies is actually a problem. The answer is that most browsers put a cap on the maximum size of the cookies they are willing to pass back and forth on each network request – somewhere around 4KB of data. Notice that the limit has nothing to do with the number of cookies, or even the maximum size of a single cookie – it is the total size of all cookies sent. This can be compounded by the settings in place on a single web server or ISP, which can restrict this limit even further. Individual browsers might also have limits on the total number of cookies allowed (a common maximum number is 50) as well as the maximum size of any one cookie (usually that same 4KB size).
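
If you want a rough sense of how close a given page is to that limit, you can check the size of the cookie string the browser will send (note that HttpOnly cookies are not visible to JavaScript, so this slightly undercounts):

// Approximate size of the cookies sent with requests for the current page/path
console.log("Cookie header is roughly " + document.cookie.length + " characters (limit is ~4,096)");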

The way the server or browser responds to this problem varies, but most commonly it’s just to return a request error and not send back the actual page. At this point it becomes easy to see the problem – if your website is unusable to your customers because you’re setting too many cookies, that’s a big problem. To help illustrate the point further, I used a Chrome extension called EditThisCookie to find a random cookie on a client’s website, and then add characters to that cookie value until it exceeded the 4KB limit. I then reloaded the page, and what I saw is below. Cookies are passed as a header on the request – so, essentially, this message is saying that the request header for cookies was longer than what the server would allow.

At this point, you might have started a mental catalog of the cookies you know your analytics implementation uses. Here are some common ones:

  • Customer and session IDs
  • Analytics visitor ID
  • Previous page name (this is a big one for Adobe users, but not Google, since GA offers this as a dimension out of the box)
  • Order IDs and other values to prevent double-counting on page reloads (Adobe will only count an order ID once, but GA doesn’t offer this capability out of the box)
  • Traffic source information, sometimes across multiple visits
  • Click data you might store in a cookie to track on the following page, to minimize hits
  • You’ve probably noticed that your analytics tool sets a few other cookies as well – usually just session cookies that don’t do much of anything useful. You can’t eliminate them, but they’re generally small and don’t have much impact on total cookie size.

If your list looks anything like this, you may be wondering why the analytics team gets a bad rap for its use of cookies. And you’d be right – I have yet to work with a client who asked me the question above where analytics actually turned out to be the biggest offender in terms of cookie usage on the site. Most websites these days are what I might call “Frankensteins” – it becomes such a difficult undertaking to rebuild or update a website that, over time, IT teams tend to just bolt on new functionality and features without ever removing or cleaning up the old. Ask any developer and they’ll tell you they have more tech debt than they can ever hope to clean up (for the non-developers out there, “tech debt” describes all the garbage left in your website’s code base that you never took the time to clean up; because most developers prefer the challenge of new development to the tediousness of cleaning up old messes, and most marketers would rather have developers add new features anyway, most sites have a lot of tech debt). If you take a closer look at the cookies on your site, you’ll probably find all sorts of useless data being stored for no good reason. Things like the last 5 URLs a visitor has seen, URL-encoded twice. Or the URL for the customer’s account avatar being stored in 3 different cookies, all with the same name and data – one each for mysite.com, www.mysite.com, and store.mysite.com. Because of employee turnover and changing priorities, a lot of the functionality on a website is owned by different developers on the same team – or even by different teams entirely. It’s easy for one team to not realize that the data it needs already exists in a cookie owned by another team – so a developer just adds a new cookie without any thought of the future problem they’ve just added to.

You may be tempted to push back on your IT team and say something like, “Come talk to me when you solve your own problems.” And you may be justified in thinking this – most of the time, if IT tells the analytics team to solve its cookie problem, it’s a little like getting pulled over for drunk driving and complaining that the officer should have pulled over another driver for speeding instead while failing your sobriety test. But remember 2 things (besides the exaggeration of my analogy – driving while impaired is obviously worse than overusing cookies on your website):

  1. A lot of that tech debt exists because marketing teams are loath to prioritize fixing bugs when they could be prioritizing new functionality.
  2. It really doesn’t matter whose fault it is – if your customers can’t navigate your site because you are using too many cookies, or your network is constantly weighed down by the back-and-forth of unnecessary cookies being exchanged, there will be an impact to your bottom line.

Everyone needs to share a bit of the blame and a bit of the responsibility in fixing the problem. But it is important to help your IT team understand that analytics is often just the tip of the iceberg when it comes to cookies. It might seem like getting rid of cookies Adobe or Google sets will solve all your problems, but there are likely all kinds of cleanup opportunities lurking right below the surface.

I’d like to finish up this post by offering 3 suggestions that every company should follow to keep its use of cookies under control:

Maintain a cookie inventory

Auditing the use of cookies is something every organization should do regularly – at least annually. When I was at salesforce.com, we had a Google spreadsheet that cataloged our use of cookies across our many websites. We were constantly adding and removing the cookies on that spreadsheet, and following up with the cookie owners to identify what they did and whether they were necessary.

One thing to note when compiling a cookie inventory is that your browser will report a lot of cookies that you actually have no control over. Below is a screenshot from our website. You can see cookies not only from analyticsdemystified.com, but also linkedin.com, google.com, doubleclick.net, and many other domains. Cookies with a different domain than that of your website are third-party, and do not count against the limits we’ve been talking about here (to simplify this example, I removed most of the cookies that our site uses, leaving just one per unique domain). If your site is anything like ours, you can tell why people hate third-party cookies so much – they outnumber regular cookies and the value they offer is much harder to justify. But you should be concerned primarily with first-party cookies on your site.

Periodically dedicate time to cookie cleanup

With a well-documented inventory of your site’s cookies in place, make sure to invest time each year in getting rid of cookies you no longer need, rather than letting them take up permanent residence on your site. Consider the following actions you might take:

  • If you find that Adobe has productized a feature that you used to use a plugin for, get rid of it (a great example is Marketing Channels, which has essentially removed the need for the old Channel Manager plugin).
  • If you’re using a plugin that uses cookies poorly (by over-encoding values, etc.), invest the time to rewrite it to better suit your needs.
  • If you find the same data actually lives in 2 cookies, get the appropriate teams to work together and consolidate.

Determine whether local storage is a viable alternative

This is the real topic I wanted to discuss – whether local storage can solve the problem of cookie overload, and why (or why not). Local storage is a specification developed by the W3C that all modern browsers have now implemented. In this case, “all” really does mean “all” – and “modern” can be interpreted as loosely as you want, since IE8 died last year and even it offered local storage. Browsers with support for local storage offer developers the ability to store data required by your website or web application in a special location, without the size limitations imposed by cookies. But this data is only available in the browser – it is not sent back to the server. That means it’s a natural consideration for analytics purposes, since most analytics tools are focused on tracking what goes on in the browser.

However, local storage has limitations of its own, and its strengths and weaknesses really deserve their own post – so I’ll be tackling it in more detail next week. I’ll be identifying specific use cases that local storage is ideal for – and others where it falls short.

Photo Credit: Karsten Thoms

Adobe Analytics, Featured

Click-Through Rates in Adobe Analytics

One of the more advanced things you can do with Adobe Analytics is to track click-through rates of elements on your web pages. Adobe Analytics doesn’t do this out of the box, but if you know how to use the tool, there are some creative ways that you can add click-through rate tracking to your implementation. In this post, I will share a few different ways to track click-throughs for both products and non-product items.

Product Click-Through Rates

If you sell physical products, you may have pages that show a bunch of products and want to see how often each product is viewed, clicked and the click-through rate. In my Adobe Analytics book, I show an example of a product listing page like this:

If you worked for this company, you might want to know how often each product is shown and clicked, keeping in mind that this could be dynamic due to tests you are running or personalization tools. Luckily, this is pretty easy to do in Adobe Analytics because the Products variable allows you to capture multiple products concurrently. In this case, you would simply set a “Product Impressions” success event and then list out all of the products visible on the page via the Products variable like this:

s.events="event20";
s.products=";11345,;11367,;12456,;11426,;11626,;15522,;17881,;18651";

Then, if a visitor clicks on one of the products, on the next page, you would set a “Product Clicks” success event and capture the specific product that was clicked in the Products variable:

s.events="event21";
s.products=";11345";

Once this is done, you can open the Products report and view impressions and clicks for each product. In addition, you can create a new calculated metric that divides Product Clicks by Product Impressions to see the click-through rate of each product:

This report allows you to see how each product performs and can also be trended over time. Additionally, once the click-through rate calculated metric has been created, you can use that metric by itself to see the overall product click-through rate like this:

Non-Product Click-Through Rates

There may be times that you want to see click-through rates for things that are not products. Some examples might include internal website promotions, news story links on a home page or any other important links on key pages. In these cases, you could use the previously described Products variable approach, but I don’t recommend it. Using the Products variable for these non-product items would result in many (hundreds or thousands) of non-product values being passed to the Products variable, which is not ideal. It is best if you keep your Products variable for products so you don’t confuse your users.

When I ask Adobe Analytics power users in my Adobe Analytics “Top Gun” class how they would track click-through rates, the most frequent response I get (after the Products variable) is to use a List Var. For those unfamiliar, a List Var is an eVar that can collect multiple values when they are passed in with a delimiter, similar to how the Products variable is used. On the surface, it makes sense that you can follow the same approach outlined above using a List Var, but unfortunately, this is not always the case. To illustrate why, I will use an example from a company that faced this problem and found a creative solution to it. Ferguson is a plumbing supplies company that displays its main product categories on the home page. They wanted to see the click-through rate of each, but this got complicated because once a visitor clicked on one of the categories, they were taken to a page that had product sub-categories and they also wanted to see impressions of those! So, on the first page, they wanted impressions and then on the second page they wanted to capture the click of the item from the first page, but at the same time capture impressions for more items on the second page! This illustrates why the List Var is not always good for tracking click-through rates. If they were to try to use a List Var, they could easily track impressions on the first page, but what would they do on the second page? It isn’t possible to tell the same List Var to collect the ID of the item clicked on the first page AND the list of items getting impressions on the second page. If you passed all of the items at the same time, the success events you set (Clicks and Impressions) would be attributed to both and all of your data would be wrong! You could use multiple List Vars, but then you’d have to use two different reports to see impressions and clicks, which makes things very difficult and time consuming. You could also fire off extra server calls when things are clicked, but that can get really expensive!

Therefore, my rule of thumb is that if you want to see impressions and clicks of products, use the Products variable and if you want to see impressions and clicks for non-product items, only use a List Var if there are no items on the page visitors get to after clicking that require impressions themselves. But what if you do want impressions on the subsequent page like Ferguson did? This is where you have to be a bit more advanced in your use of Adobe Analytics as I will explain next.

Advanced Click-Through Rate Tracking (Experts Only!)

The following gets a bit complex, so if you aren’t an Adobe Analytics expert, be forewarned that your head might spin a bit!

As mentioned above, you have solved 2/3 of your impression and click tracking problems – products and non-products where there are no impressions on the subsequent page. Now you are left with the situation that Ferguson faced when they had impressions on both pages. To solve this, you have to use the Product Merchandising feature of Adobe Analytics. This is because you need to find a way to assign impression events and click events on the same page, which means you need to set your success events in the product string so you can be very deliberate about which items get impressions and which get clicks. However, as I stated earlier, you don’t want to pass hundreds of non-product items to the Products variable, but you cannot use Merchandising without setting products (I warned you this was advanced stuff!).

To solve this dilemma, you can set two “fake” products and use the Product Merchandising feature to document which non-product items are getting impressions and clicks. By using the Merchandising slot of the Products string in combination with the success events slot of the Products string, you can line up impressions and clicks with the correct values. To illustrate this, let’s look at an example from Ferguson’s website. If you use the Adobe Debugger on the home page, you will see the following in the Products variable:

While this looks pretty intimidating, if you break it down into its parts, it isn’t that bad. First, you will see that a “fake” product named “int_cmp_imp” is being passed to the Products variable once for each item that gets an impression. This means that instead of hundreds of products being added, only one is added to the Products report. Next, in the success event slot of the Products string, you will see that event40 is being incremented by 1 for each item receiving an impression. Next, you will see that the actual item receiving the impression is captured in a product syntax merchandising eVar (eVar18). For example, the first one captured is “mrch_hh_kob_builder” (you can put whatever values you want here). Then the same approach is repeated once for every item receiving an impression on the page. By setting event40 and eVar18 together, each eVar18 value will increase by one impression upon page load (note that the “fake” product will receive impressions as well, but we will probably just disregard that).
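
To give a sense of the structure (shortened to three items, and with made-up item names beyond the first), the first page’s string follows this pattern:

s.events="event40";
s.products=";int_cmp_imp;;;event40=1;eVar18=mrch_hh_kob_builder,;int_cmp_imp;;;event40=1;eVar18=mrch_hh_kitchens,;int_cmp_imp;;;event40=1;eVar18=mrch_hh_bath";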

While this may seem like overkill for this type of tracking, this approach will begin to pay dividends when the user clicks on one of the items and reaches the next page. On the next page, you need to set impressions for all of the new items shown on that page AND set a click for the item clicked on the previous page. Here is what it might look like:

Notice here that the beginning of this string is exactly the same as the first page, with the “fake” product of “int_cmp_imp” being set for each item as well as the impression event40 and the item description in eVar18. The key difference (highlighted in red) is that a new product, “int_cmp_clk,” is set and a new click event41 is incremented by 1 at the same time as eVar18 is set to the item that was clicked on the previous page. The beauty of using the Products variable and Product Merchandising is that you can set both impressions and clicks in the same Products string, while at the same time only adding two new products to the overall Products report.
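
Again shortened, and with hypothetical item names for the new impressions, the second page’s string follows this pattern (the clicked item from the first page rides along with the new click product and event41):

s.events="event40,event41";
s.products=";int_cmp_imp;;;event40=1;eVar18=mrch_kob_kitchen_sinks,;int_cmp_imp;;;event40=1;eVar18=mrch_kob_bath_faucets,;int_cmp_clk;;;event41=1;eVar18=mrch_hh_kob_builder";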

When you look at the data in Adobe Analytics, you can now add your impressions event (event40), your clicks event (event41) and add a calculated metric to see the click-through rate:

Final Thoughts

By using a combination of success events, the Products variable and, in some cases, Product Merchandising, it is possible to see how often specific items receive impressions, clicks and the resulting click-through rate. There may be some cases in which you have a large number of items for which you want to see impressions and clicks and in those cases, I suggest checking with Adobe Client Care on any limitations you may run into and, as always, be cognizant of how tagging can impact page load speeds. But if you have specific items for which you have always wanted to see click-through rates, feel free to try out one of the techniques described above.

Analysis, Featured, General, Presentation

Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 5 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.

False Consensus

Experiments have revealed that we tend to believe in a false consensus: that others would respond similarly to the way that we would. For example, Ross, Greene & House (1977) provided participants with a scenario, with two different possible ways of responding. Participants were asked to explain which option they would choose, and guess what other people would choose. Regardless of which option they actually chose, participants believed that other people would choose the same one.

Why this matters for analysts: As you are analyzing data, you are looking at the behaviour of real people. It’s easy to make assumptions about how they will react, or why they did what they did, based on what you would do. But our analysis will be far more valuable if we can be aware of those assumptions, and actively seek to understand why our actual customers did these things – without relying on assumptions.

Homogeneity of the Outgroup

There is a related effect here: the Homogeneity of the Outgroup. (Quattrone & Jones, 1980.) In short, we tend to view those who are different to us (the “outgroup”) as all being very similar, while those who are like us (the “ingroup”) are more diverse. For example, all women are chatty, but some men are talkative, some are quiet, some are stoic, some are more emotional, some are cautious, others are more risky… etc.

Why this matters for analysts: Similar to the False Consensus Effect, where we may analyse user behaviour assuming everyone thinks as we do, the Homogeneity of the Outgroup suggests that we may oversimplify the behaviour of customers who are different to us, and fail to fully appreciate the nuance of varied behaviour. This may seriously bias our analyses! For example, if we are a large global company, an analysis of customers in another region may be seriously flawed if we are assuming customers in the region are “all the same.” To overcome this tendency, we might consider leveraging local teams or local analysts to conduct or vet such analyses.

The Hawthorne Effect

In 1955, Henry Landsberger analyzed several studies conducted between 1924 and 1932 at the Hawthorne Works factory. These studies were examining the factors related to worker productivity, including whether the level of light within a building changed the productivity of workers. They found that, while the level of light changing appeared to be related to increased productivity, it was actually the fact that something changed that mattered. (For example, they saw an increase in productivity even in low light conditions, which should make work more difficult…) 

However, this study has been the source of much criticism, and was referred to by Dr. Richard Nisbett as a “glorified anecdote.” Alternative explanations include that Orne’s “Demand Characteristics” were in fact at work (that the changes were due to the workers knowing they were a part of the experiment), or the fact that the changes were always made on a Sunday, and Mondays normally show increased productivity, due to employees having a day off. (Levitt & List, 2011.)

Why this matters for analysts: “Demand Characteristics” could mean that your data is subject to influence, if people know they are being observed. For example, in user testing, participants are very aware they are being studied, and may act differently. Your digital analytics data however, may be less impacted. (While people may technically know their website activity is being tracked, it may not be “top of mind” enough during the browsing experience to trigger this effect.) The Sunday vs. Monday explanation reminds us to consider other explanations or variables that may be at play, and be aware of when we are not fully in control of all the variables influencing our data, or our A/B test. However, the Hawthorne studies are also a good example where interpretations of the data may vary! There may be multiple explanations for what you’re seeing in the data, so it’s important to vet your findings with others. 

Conclusion

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data? Are there any interesting studies you have heard of, that hold important lessons for analysts? Please share them in the comments!

Analysis, Featured, General, Presentation

Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 4 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.

The Bystander Effect (or “Diffusion of Responsibility”)

In 1964 in New York City, a woman named Kitty Genovese was murdered. A newspaper report at the time claimed that 38 people had witnessed the attack (which lasted an hour) yet no one called the police. (Later reports suggested this was an exaggeration – that there had been fewer witnesses, and that some had, in fact, called the police.)

However, this event fascinated psychologists, and triggered several experiments. Darley & Latane (1968) manufactured a medical emergency, where one participant was allegedly having an epileptic seizure, and measured how long it took for participants to help. They found that the more participants, the longer it took to respond to the emergency.

This became known as the “Bystander Effect”, which proposes that the more bystanders that are present, the less likely it is that an individual will step in and help. (Based on this research, CPR training started instructing participants to tell a specific individual, “You! Go call 911” – because if they generally tell a group to call 911, there’s a good chance no one will do it.)

Why this matters for analysts: Think about how you present your analyses and recommendations. If you offer them to a large group, without specific responsibility to any individual to act upon them, you decrease the likelihood of any action being taken at all. So when you make a recommendation, be specific. Who should be taking action on this? If your recommendation is a generic “we should do X”, it’s far less likely to happen.

Selective Attention

Before you read the next part, watch this video and follow the instructions. Go ahead – I’ll wait here.

In 1999, Simons and Chabris conducted an experiment in awareness at Harvard University. Participants were asked to watch a video of basketball players, where one team was wearing white shirts, and the other team was wearing black shirts. In the video, each team passed the ball among its own players. Participants were asked to count the number of passes between players of the white team. During the video, a man dressed as a gorilla walked into the middle of the court, faced the camera and thumped his chest, then left (spending a total of 9 seconds on the screen.) Amazingly? Half of the participants missed the gorilla entirely! Since then, this has been termed “the Invisible Gorilla” experiment.

Why this matters for analysts: As you are analyzing data, there can be huge, gaping issues that you may not even notice. When we focus on a particular task (for example, counting passes by the white-shirt players only, or analyzing one subset of our customers) we may overlook something significant. Take time before you finalize or present your analysis to think of what other possible explanations or variables there could be (what could you be missing?) or invite a colleague to poke holes in your work.

Stay tuned

More to come!

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?

Analysis, Featured, General, Presentation

Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 3 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.

Primacy and Recency Effects

The serial position effect (so named by Ebbinghaus in 1913) finds that we are most likely to recall the first and last items in a list, and least likely to recall those in the middle. For example, let’s say you are asked to recall apple, orange, banana, watermelon and pear. The serial position effect suggests that individuals are more likely to remember apple (the first item; primacy effect) and pear (the final item; recency effect) and less likely to remember orange, banana and watermelon.

The explanation cited is that the first item/s in a list are the most likely to have made it to long-term memory, and benefit from being repeated multiple times. (For example, we may think to ourselves, “Okay, remember apple. Now, apple and orange. Now, apple, orange and banana.”) The primacy effect is reduced when items are presented in quick succession (probably because we don’t have time to do that rehearsal!) and is more prominent when items are presented more slowly. Longer lists tend to see a decrease in the primacy effect (Murdock, 1962.)

The recency effect (that we’re more likely to remember the last items) occurs because the most recent items are still held in short-term memory (remember, 7 +/- 2!) The items in the middle of the list benefit from neither long-term nor short-term memory, and are therefore forgotten.

This doesn’t just affect your recall of random lists of items. When participants are given a list of attributes describing a person, the order appears to matter. For example, Asch (1946) found that participants told “Steve is smart, diligent, critical, impulsive, and jealous” formed a positive evaluation of Steve, whereas participants told “Steve is jealous, impulsive, critical, diligent, and smart” formed a negative evaluation. The adjectives are exactly the same – only the order is different!

Why this matters for analysts: When you present information, your audience is unlikely to remember everything you tell them. So choose wisely. What do you lead with? What do you end with? And what do you prioritize lower, and save for the middle?

These findings may also affect the amount of information you provide at one time, and the cadence with which you do so. If you want more retained, you may wish to present smaller amounts of data more slowly, rather than rapid-firing with constant information. For example, rather than presenting twelve different “optimisation opportunities” at once, focusing on one may increase the likelihood that action is taken.

This is also an excellent argument against a 50-slide PowerPoint presentation – you may have mentioned something in it, but if it was 22 slides ago, the chances of your audience remembering it are slim.

The Halo Effect

Psychologists have found that our positive impressions in one area (for example, looks) can “bleed over” to our perceptions in another, unrelated area (for example, intelligence.) This has been termed the “halo effect.”

In 1977, Nisbett and Wilson conducted an experiment with university students. Two groups of students watched a video of the same lecturer delivering the same material, but one group saw a warm and friendly “version” of the lecturer, while the other saw the lecturer present in a cold and distant way. The group who saw the friendly version rated the lecturer as more attractive and likeable.

There are plenty of other examples of this. For example, “physically attractive” students have been found to receive higher grades and/or test scores than “unattractive” students at a variety of ages, including elementary school (Salvia, Algozzine, & Sheare, 1977; Zahr, 1985), high school (Felson, 1980) and college (Singer, 1964.) Thorndike (1920) found similar effects within the military, where a perception of a subordinate’s intelligence tended to lead to a perception of other positive characteristics such as loyalty or bravery.

Why this matters for analysts: The appearance of your reports/dashboards/analyses, the way you present to a group, your presentation style, even your appearance may affect how others judge your credibility and intelligence.

The Halo Effect can also influence the data you are analysing! It is common with surveys (especially in the case of lengthy surveys) that happy customers will simply respond “10/10” for everything, and unhappy customers will rate “1/10” for everything – even if parts of the experience differed from their overall perception. For example, if a customer had a poor shipping experience, they may extend that negative feeling about the interaction with the brand to all aspects of the interaction – even if only the last part was bad! (And note here: There’s a definite interplay between the Halo Effect and the Recency Effect!)

Stay tuned

More to come soon!

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?

Analysis, Featured, Presentation

Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 2 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.

Confirmation Bias

We know now that “the facts” may not persuade us, even when brought to our attention. What’s more, Confirmation Bias tells us that we actively seek out information that reinforces our existing beliefs, rather than searching for all the evidence and fully evaluating the possible explanations.

Wason (1960) conducted a study where participants were presented with a problem: find the pattern in a series of numbers, such as “2-4-6.” Participants could create three subsequent sets of numbers to “test” their theory, and the researcher would confirm whether each set followed the pattern or not. Rather than collecting a list of possible patterns, and using their three “guesses” to prove or disprove each one, Wason found that participants would come up with a single hypothesis, then seek to prove it. (For example, they might hypothesize that “the pattern is even numbers” and check whether “8-10-12”, “6-8-10” and “20-30-40” matched the pattern. When it was confirmed that their guesses matched, they simply stopped.) However, the actual pattern was simply “increasing numbers” – their hypothesis was not correct at all!

Why this matters for analysts: When you start analyzing data, where do you start? With a hunch, that you seek to prove, then stop your analysis there? (For example, “I think our website traffic is down because our paid search spend decreased.”) Or with multiple hypotheses, which you seek to disprove one by one? A great approach used in government, and outlined by Moe Kiss for its applicability to digital analytics, is the Analysis of Competing Hypotheses.

Conformity to the Norm

In 1951, Asch found that we conform to the views of others, even when they are flat-out wrong, surprisingly often! He conducted an experiment where each participant was seated in a group with eight others who were “in” on the experiment (“confederates.”) Participants were asked to judge which of three lines was most similar in length to a target line. The task was not particularly “grey area” – there was an obvious right and wrong answer.

Each person in the group gave their answer verbally, in turn. The confederates were instructed to give the incorrect answer, and the participant was the sixth of the group to answer.

Asch was surprised to find that 76% of people conformed to others’ (incorrect) conclusions at least once. 5% always conformed to the incorrect answer. Only 25% never once agreed with the group’s incorrect answers. (The overall conformity rate was 33%.)

In follow up experiments, Asch found that if participants wrote down their answers, instead of saying them aloud, the conformity rate was only 12.5%. However, Deutsch and Gerard (1955) found a 23% conformity rate, even in situations of anonymity.

Why this matters for analysts: As mentioned previously, if new findings contradict existing beliefs, it may take more than just presenting new data. These conformity studies suggest that efforts to do so may be further hampered if you are presenting information to a group, where people are less likely to stand up for your new findings against the group norm. In this case, you may be better off discussing your findings with individuals first, and avoiding putting people on the spot to agree or disagree within a group setting. Similarly, this argues against jumping straight to a “group brainstorming” session. Asch demonstrated that, once in a group, 76% of us will agree with the group (even if they’re wrong!), so we stand the best chance of getting varied ideas and minimising “group think” by allowing for individual, uninhibited brainstorming and collection of all ideas first.

Stay tuned!

More to come next week. 

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?

Analysis, Featured, Presentation

Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 1 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This series of posts looks at some classic studies from social psychology, and what we analysts can learn from them.

The Magic Number 7 (or, 7 +/- 2)

In 1956, George A. Miller published research finding that the number of items a person can hold in working memory is seven, plus or minus two. However, all “items” are not created equal – our brain is able to “chunk” information to retain more. For example, if asked to remember seven words or even seven quotes, we can do so (we’re not limited to seven letters) because each word or quote is an individual item or “chunk” of information. Similarly, we may be able to remember seven two-digit numbers, because each digit is not considered its own item.

Why this matters for analysts: This is critical to keep in mind as we present data. Stephen Few argues that a dashboard must be confined to one page or screen, precisely because of this limitation of working memory. You can’t expect people to look at a dashboard and draw conclusions about relationships between separate charts, tables, or numbers while flipping back and forth between pages, because this requires them to retain too much information in working memory. Similarly, expecting stakeholders to recall and connect the dots with what you presented eleven slides ago puts too great a demand on working memory. We must work with people’s natural capabilities, and not against them.

When The Facts Don’t Matter

In 1957, Leon Festinger studied a Doomsday cult who believed that aliens would rescue them from a coming flood. Unsurprisingly, no flood (nor aliens) eventuated. In their book, When Prophecy Fails, Festinger et al commented, “A man with a conviction is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point … Suppose that he is presented with evidence, unequivocal and undeniable evidence, that his belief is wrong: what will happen? The individual will frequently emerge, not only unshaken, but even more convinced of the truth of his beliefs than ever before.”

In a 1967 study by Brock & Balloun, subjects listened to several messages, but the recordings were obscured by static; subjects could press a button to clear up the static. They found that people selectively chose to listen to the messages that affirmed their existing beliefs. For example, smokers chose to listen more closely when the content disputed a smoking-cancer link.

However, Chanel, Luchini, Massoni and Vergnaud (2010) found that if we are given an opportunity to discuss the evidence and exchange arguments with someone (rather than just reading the evidence and pondering it alone), we are more likely to change our minds in the face of opposing facts.

Why this matters for analysts: Even if your data seems self-evident, if it goes against what the business has known, thought, or believed for some time, you may need more data to support your contrary viewpoint. You may also want to allow for plenty of time for discussion, rather than simply sending out your findings, as those discussions are critical to getting buy-in for this new viewpoint.

Stay tuned!

More to come tomorrow.

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?

Adobe Analytics, Conferences/Community, Featured, Presentation, Testing and Optimization

Get Your Analytics Training On – Down Under!

Analytics Demystified is looking at potentially holding Analytics training in Sydney, in November of this year. We’re looking to gauge interest (given it’s a pretty long trip!)

Proposed sessions:

Adobe Analytics Top Gun with Adam Greco

Adobe Analytics, while being an extremely powerful web analytics tool, can be challenging to master. It is not uncommon for organisations using Adobe Analytics to only take advantage of 30%-40% of its functionality. If you would like your organisation to get the most out of its investment in Adobe Analytics, this “Top Gun” training class is for you. Unlike other training classes that cover the basics of how to configure Adobe Analytics, this one-day advanced class digs deeper into features you already know, and also covers many features that you may not have used. (Read more about Top Gun here.)

Cost: $1,200AUD
Date: Mon 6/11/17 (8 hours)

Data Visualisation and Expert Presentation with Michele Kiss

The best digital analysis in the world is ineffective without successful communication of the results. In this half-day workshop, Analytics Demystified Senior Partner Michele Kiss will share her advice for successfully presenting data to all audiences, including communication of numbers, data visualisation, dashboard best practices and effective storytelling and presentation. Want feedback on something you’re working on? Bring it along!

Cost: $600 AUD
Date: Fri 3/11/17 (4 hours)

Adobe Target and Optimization Best Practices with Brian Hawkins

Adobe Target has been going through considerable changes over the last year: A4T, at.js, Auto-Target, Auto-Allocate, and significant changes to Automated Personalisation. This half-day session will dive into these concepts, with a heavy focus on the power of the Adobe Target profile and how it can be used as a key tool to advance personalisation efforts. Time will also be set aside to dive into proven organisational best practices that have helped organisations democratise test intake, workflow, dissemination of learnings, and automation of test learnings.

Cost: $600 AUD
Date: Fri 3/11/17 (4 hours)

[MeasureCamp Sydney is being proposed to be held on the Saturday, giving you a great reason to stay and hang out in Sydney over the weekend]

If you plan to attend, we need you to sign up here bit.ly/demystified-downunder so we can understand if there’s sufficient interest.

These trainings have never been offered in Australia before (and likely never will be again!), so it’s an awesome opportunity to get a great training experience at a far lower cost than flying to the US!

This is not confirmed yet, so please do not book any travel (or any other non-refundable stuff) until you hear from us. Hope to see you all soon!!

* I’m allowed to say that, because I was born and raised in Australia (though I may no longer sound like it.) From the booming metropolis of Geelong! 

Featured, Reporting

How to Build a Brain-Friendly Bar Chart in R

This post was inspired by a post by Lea Pica: How to Build a Brain-Friendly Bar Chart in Domo. In that post, Lea started with the default rendering of a horizontal bar chart in Domo and then walked through, step-by-step, the modifications she would make to improve the visualization.

The default chart started like this:

And, it ended like this:

I thought it would be informative to go through the exact same exercise, but to do it with R. Specifically, I used the ggplot2 package in R, which is the de facto standard for visualization with the platform.

I, too, started with the default rendering (with ggplot2)  of the same data set:

Egad!

But, I ultimately got to a final plot that was more similar to Lea’s Domo rendering than it was different:

The body of the bar chart is almost an exact replica (the gray bars with a single blue highlight bar are something Lea showed as a “bonus,” and it changed the title of the chart and added an extra step; but I’m a big fan of this sort of highlighting, so that’s the version I built).

The exercise, as expected, does not wind up claiming either platform is a “better” one for the task. A few takeaways for me were:

  • Both platforms are able to produce a good, quality, data-pixel-ratio-maximized visualization.
  • Domo has some odd quirks: the “small, medium, or large” as the font size choices seems unnecessarily limiting, for instance.
  • R has (more, I suspect) odd quirks: I couldn’t easily left-justify the title all the way; adding the “large text highlight” would have been doable, but very hacky; the Paid Search data label crowds the top of the bar a bit (oddly); etc.

Ultimately, when developing visualizations with R, it takes very little code to do the core rendering of the visualization. It then — in my experience — takes 2-4X additional code to get the formatting just right. At the same time, though, much of that additional code operates like CSS — it can be centrally sourced and then used (and selectively overridden) by multiple visualizations.
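To make that concrete, here is a minimal sketch of the pattern (this is not the code from the actual exercise on RPubs; the data, the theme_brand() helper, and its settings are all hypothetical). The core rendering is a handful of lines, while the “CSS-like” formatting lives in a reusable theme function that any number of plots can share:

library(ggplot2)

# Hypothetical summary data: sessions by channel
df <- data.frame(
  channel  = c("Paid Search", "Organic Search", "Email", "Social"),
  sessions = c(42000, 76000, 18000, 9000)
)

# The "CSS-like" piece: a house theme that can be centrally sourced
# and reused (or selectively overridden) by many visualizations
theme_brand <- function() {
  theme_minimal() +
    theme(
      panel.grid = element_blank(),
      axis.ticks = element_blank(),
      axis.title = element_blank(),
      plot.title = element_text(hjust = 0)  # left-justify the title (within the panel)
    )
}

# The core rendering: just a few lines of ggplot2
p <- ggplot(df, aes(x = reorder(channel, sessions), y = sessions)) +
  geom_bar(stat = "identity", fill = "gray70") +
  coord_flip() +
  labs(title = "Sessions by Channel")

# Layer the shared formatting on top
p + theme_brand()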

If you’re interested in seeing the step-by-step evolution of the code from the initial plot to the final plot, you can check it out on RPubs (that document was put together as an RMarkdown file, so the code you see is, literally, the code that was then executed to generate the resulting iteration).

As always, I’d love to hear your feedback in the comments, and I’d love to chat about how R fits (or could fit) into your organization’s analytics technology stack!

Featured, google analytics, Reporting

Your Guide to Understanding Conversion Funnels in Google Analytics

TL;DR: Here’s the cheatsheet.

Often, I am asked by clients what their options are for understanding conversion through their on-site funnel using Google Analytics. These approaches can be used for any conversion funnel. For example:

  • Lead Form > Lead Submit
  • Blog Post > Whitepaper Download Form > Whitepaper Download Complete
  • Signup Flow Step 1 > Signup Flow Step 2 > Complete
  • Product Page > Add to Cart > Cart > Payment > Complete
  • View Article > Click Share Button > Complete Social Share

Option 1: Goal Funnels

Goals are a fairly old feature in Google Analytics (in fact, they go back to the Urchin days.) You can configure goals based on two things:*

  1. Page (“Destination” goal.) These can be “real” pages, or virtual pages.
  2. Events

*Technically four, but IMHO, goals based on duration or Pages/Session are a complete waste of time, and a waste of 1 in 20 goal slots.

Only a “Destination” (Page) goal allows you to create a funnel. So, this is an option if every step of your funnel is tracked via pageviews.

To set up a Goal Funnel, simply configure your goal as such:

Pros:

  • Easy to configure.
  • Can point users to the funnel visualization report in Google Analytics main interface.

Cons:

  • Goal data (including the funnel) is not retroactive. These will only start working after you create them.
    • Note: A session-based segment with the exact same criteria as your goal is an easy way to get the historical data, but you would need to stitch them together (outside of GA.)
  • Goal funnels are only available for page data; not for events (and definitely not for Custom Dimensions, since the feature far predates those.) So, let’s say you were tracking the following funnel in the following way:
    • Clicked on the Trial Signup button (event)
    • Trial Signup Form (page)
    • Trial Signup Submit (event)
    • Trial Signup Thank You Page (page)
    • You would not be able to create a goal funnel, since it’s a mix of events and pages. The only funnel you could create would be the Form > Thank You Page, since those are defined by pages.
  • Your funnel data is only available in one place: the “Funnel Visualization” report (Conversions > Goals > Funnel Visualization)
  • Your funnel can not be segmented, so you can’t compare (for example) conversion through the funnel for paid search vs. display.
  • The data for each step of your funnel is not accessible outside of that single Funnel Visualization report. So, you can’t pull in the data for each step via the API, nor in a Custom Report, nor use it for segmentation.
  • The overall goal data (Conversions > Goals > Overview) and related reports ignore your funnel. So, if you have a mandatory first step, this step is only mandatory within the funnel report itself. In general goal reporting, it is essentially ignored. This is important. If you have two goals, with different funnels but an identical final step, the only place you will actually see the difference is in the Funnel Visualization. For example, if you had these two goals:
    • Home Page > Lead Form > Thank You Page
    • Product Page > Lead Form > Thank You Page

The total goal conversions for these goals would be the same in every report, except the Funnel Visualization. Case in point:

Option 2: Goals for Each Step

If you have a linear conversion flow you’re looking to measure, where there is only one path from one step to the next, you can overcome some of the challenges of Goal Funnels by simply creating a goal for every step. Since users have to go from one step to the next in order, this works nicely.

For example, instead of creating a single goal for “Lead Thank You Page” with a funnel of the previous steps, you would create one goal for “Clicked Request a Quote”, another for the next step (“Saw Lead Form”), another for “Submitted Lead Form”, another for “Thank You Page”, and so on.

You can then use these numbers in a simple table format, including with other dimensions to understand the conversion difference. For example:

Or pull this information into a spreadsheet:
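If you’d rather script it, here is a minimal sketch using the googleAnalyticsR package (my tool of choice for the example, not something this approach requires), assuming, purely for illustration, that goals 1 through 4 map to your four funnel steps, and using a hypothetical view ID:

library(googleAnalyticsR)

ga_auth()  # authenticate with Google Analytics

view_id <- 123456789  # hypothetical view ID

# Pull completions for the four "step" goals, broken down by channel
funnel <- google_analytics(
  view_id,
  date_range = c("2018-01-01", "2018-03-31"),
  metrics    = c("goal1Completions", "goal2Completions",
                 "goal3Completions", "goal4Completions"),
  dimensions = "channelGrouping"
)

# Step-to-step conversion rates, by channel
funnel$step1_to_2 <- funnel$goal2Completions / funnel$goal1Completions
funnel$step2_to_3 <- funnel$goal3Completions / funnel$goal2Completions
funnel$step3_to_4 <- funnel$goal4Completions / funnel$goal3Completions

funnel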

Pros:

  • You can create these goals based on a page or an event, and if some of your steps are pages and some are events, it still works
  • You can create calculated metrics based on these goals (for example, conversion from Step 1 to Step 2.) See how in Peter O’Neill’s great post.
  • You can access this data through many different methods:
    • Standard Reports
    • Custom Reports
    • Core Reporting API
    • Create segments

Cons:

  • Goal data is not retroactive. These will only start working after you create them.
    • Note: A session-based segment with the exact same criteria as your goal is an easy way to get the historical data, but you would need to stitch them together (outside of GA.)
  • This method won’t work if your flow is non-linear (e.g. lots of different paths, or orders in which the steps could be seen.)
    • If your flow is non-linear, you could still use the Goal Flow report, however this report is heavily sampled (even in GA360) so it may not be of much benefit if you have a high traffic site.
  • It requires your steps be tracked via events or pages. A custom dimension is not an option here.
  • You are limited to 20 goals per Google Analytics view, and depending on the number of steps (one client of mine has 13!) that might not leave much room for other goals. (Note: You could create an additional view, purely to “house” funnel goals. But, that’s another view that you need to maintain.)

Option 3: Custom Funnels (GA360 only)

Custom Funnels is a relatively new (technically, it’s still in beta) feature, and only available in GA360 (the paid version.) It lives under Customization, and is actually one type of Custom Report.

Custom Funnels actually goes a long way to solving some of the challenges of the “old” goal funnels.

Pros:

  • You can mix not only Pages and Events, but also include Custom Dimensions and Metrics (in fact, any dimension in Google Analytics.)
  • You can get specific – do the steps need to happen immediately one after the other? Or “just eventually”? You can do this for the report as a whole, or at the individual step level.
  • You can segment the custom funnel (YAY!) Now, you can do analysis on how funnel conversion is different by traffic source, by browser, by mobile device, etc.

Cons:

  • You’re limited to five steps. (This may be a big issue, for some companies. If you have a longer flow, you will either need to selectively pick steps, or analyze it in parts. It is my desperate hope that GA allows for more steps in the future!)
  • You’re limited to five conditions with each step. Depending on the complexity of how your steps are defined, this could prove challenging.
    • For example, if you needed to specify a specific event (including Category, Action and Label) on a specific page, for a specific device or browser, that’s all five of your conditions used.
    • But, there are normally creative ways to get around this, such as segmenting by browser, instead of adding it as criteria.
  • Custom Reports (including Custom Funnels) are kind of painful to share
    • There is (currently) no such thing as “Making a Custom Report visible to everyone who has access to that GA View.” Aka, you can’t set it as “standard.”
    • Rather, you need to share a link to the configuration, the user then has to choose the appropriate view, and add it to their own GA account. (If they add it to the wrong view, the data will be wrong or the report won’t work!)
    • Once you do this, it “disconnects” it from your own Custom Report, so if you make changes, you’ll need to go through the sharing process all over again (and users will end up with multiple versions of the same report.)

Option 4: Segmentation

You can mimic Option 1 (Funnels) and Option 2 (Goals for each step) with segmentation.

You could easily create a segment, instead of a goal. You could do this in the simple way, by creating one segment for each step, or you can get more complicated and create multiple segments to reflect the path (using sequential segmentation.) For example:

One segment for each step
Segment 1: A
Segment 2: B
Segment 3: C
Segment 4: D

or

Multiple segments to reflect the path
Sequential Segment 1: A
Sequential Segment 2: A > B
Sequential Segment 3: A > B > C
Sequential Segment 4: A > B > C > D
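Once you have pulled the session counts for each sequential segment, the funnel math itself is simple. Here is a trivial sketch in R, with completely made-up numbers, just to show the calculation:

# Hypothetical session counts, one per sequential segment
steps <- data.frame(
  step     = c("A", "A > B", "A > B > C", "A > B > C > D"),
  sessions = c(10000, 6200, 3100, 900)
)

# Conversion from each step to the next, plus conversion from the first step
steps$step_conversion    <- c(NA, steps$sessions[-1] / steps$sessions[-nrow(steps)])
steps$overall_conversion <- steps$sessions / steps$sessions[1]

steps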

Pros:

  • Retroactive
  • Allows you to get more complicated than just Pages and Events (e.g. You could take into account other dimensions, including Custom Dimensions)
  • You can set a segment as visible to all users of the view (“Collaborators and I can apply/edit segment in this View”), making it easier for everyone in the organization to use your segments

Cons:

  • You can only use four segments at one time in the UI, so while you aren’t limited to the number of “steps”, you’d only be able to look at four. (You could leverage the Core Reporting API to automate this.)
  • The limit on the number of segments you can create is high (100 for shared segments and 1000 for individual segments) but let’s be honest – it’s pretty tedious to create multiple sequential segments for a lot of steps. So there may be a “practical limit” you’ll hit, out of sheer boredom!
  • If you are using GA Free, you will hit sampling by using segments (which you won’t encounter when using goals.) THIS IS A BIG ISSUE… and may make this method a non-starter for GA Free customers (depending on their traffic.) 
    • Note: The Core Reporting API v3 (even for GA360 customers) currently follows the sampling rate of GA Free. So even 360 customers may experience sampling, if they’re attempting to use the segmentation method (and worse sampling than they see in the UI.)

Option 5: Advanced Analysis (NEW! GA360 only)

Introduced in mid-2018 (as a beta) Advanced Analysis offers one more way for GA360 customers to analyse conversion. Advanced Analysis is a separate analysis tool, which includes a “Funnel” option. You set up your steps, based on any number of criteria, and can even break down your results by another dimension to easily see the same funnel for, say, desktop vs. mobile vs. tablet.

Pros:

  • Retroactive
  • Allows you to get more complicated than just Pages and Events (e.g. You could take into account other dimensions, including Custom Dimensions)
  • Easily sharable – much more easily than a custom report! (Just click the little people icon on the right-hand side to set an Advanced Analysis to “shared”, then share the link with others who have access to your Google Analytics view.)
  • Up to 10 steps in your funnel
  • You can even use a segment in a funnel step
  • Can add a dimension as a breakdown

Cons:

  • Advanced Analysis funnels are always closed, so users must come through the first step of the funnel to count.
  • Funnels are always user-based; you do not have the option of a session-based funnel.
  • Funnels are always “eventual conversion”; you can not control whether a step is “immediately followed by” the next step, or simply “followed by” the next step (as you can with Sequential Segments and Custom Funnels.)

Option 6: Custom Implementation

The previous options assume you’re using standard GA tracking for pages and events to define each step of your funnel. There is, of course, one more option, which is to specifically implement something to capture just your funnel data.

Options:

  • Collect specific event data for the funnel. For example:
    • Event Category: Lead Funnel
    • Event Action: Step 01
    • Event Label: Form View
  • Then use event data to analyze your funnel (see the sketch after this list).
  • Use Custom Dimensions and Metrics.
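To illustrate the event-based route, here is a minimal sketch of pulling that custom event data back out and turning it into a funnel table, using the googleAnalyticsR package (again, my choice for the example) and a hypothetical view ID:

library(googleAnalyticsR)

view_id <- 123456789  # hypothetical view ID

# Pull total events for the "Lead Funnel" event category, by step (Event Action)
lead_funnel <- google_analytics(
  view_id,
  date_range  = c("2018-01-01", "2018-03-31"),
  metrics     = "totalEvents",
  dimensions  = "eventAction",
  dim_filters = filter_clause_ga4(
    list(dim_filter("eventCategory", "EXACT", "Lead Funnel"))
  )
)

# Order the steps (naming the actions "Step 01", "Step 02", ... makes this easy)
lead_funnel <- lead_funnel[order(lead_funnel$eventAction), ]

# Step-to-step conversion
lead_funnel$step_conversion <- c(NA, lead_funnel$totalEvents[-1] /
                                   lead_funnel$totalEvents[-nrow(lead_funnel)])

lead_funnel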

Pros:

  • You can specify and collect the data exactly how you want it. This may be especially helpful if you are trying to get the data back in a certain way (for example, to integrate into another data set.)

Cons:

  • It’s one more GA call that needs to be set up, and that needs to remain intact and QA’ed during site and/or implementation changes. (Aka, one more thing to break.)
  • For the Custom Dimensions route, it relies on using Custom Reports (which, as mentioned above, are painful to share.)

Personally, my preference is to use the built-in features and reports, unless what I need simply isn’t possible without custom implementation. However, there are definitely situations in which this would be the optimal route to go.

Hey look! A cheat sheet!

Is this too confusing? In the hopes of simplifying, here’s a handy cheat sheet!

Conclusion

So you might be wondering: which do I use the most? In general, my approach is:

  • If I’m doing an ad hoc, investigative analysis, I’ll typically defer to Advanced Analysis. That is, unless I need a session-based funnel, or control over immediate vs. eventual conversion, in which case I’ll use Custom Funnels.
  • If it’s for on-going reporting, I will typically use Goal-based (or BigQuery-based) metrics, with Data Studio layered on top to create the funnel visualisation. (Note: This does require a clean, linear funnel.)

Are there any approaches I missed? What is your preferred method? 

Featured, google analytics

R You Interested in Auditing Your Google Analytics Data Collection?

One of the benefits of programming with data — with a platform like R — is being able to get a computer to run through mind-numbing and tedious, but useful, tasks. A use case I’ve run into on several occasions has to do with core customizations in Google Analytics:

  • Which custom dimensions, custom metrics, and goals exist, but are not recording any data, or are recording very little data?
  • Are there naming inconsistencies in the values populating the custom dimensions?

While custom metrics and goals are relatively easy to eyeball within the Google Analytics web interface, if you have a lot of custom dimensions, then, to truly assess them, you need to build one custom report for each custom dimension.

And, for all three of these, looking at more than a handful of views can get pretty mind-numbing and tedious.

R to the rescue! I developed a script that, as an input, takes a list of Google Analytics view IDs. The script then cycles through all of the views in the list and returns three things for each view:

  • A list of all of the active custom dimensions in the view, including the top 5 values based on hits
  • A list of all of the active custom metrics in the view and the total for each metric
  • A list of all of the active goals in the view and the number of conversions for the goal

The output is an Excel file:

  • A worksheet that lists all of the views included in the assessment
  • A worksheet that lists all of the values checked — custom dimensions, custom metrics, and goals across all views
  • A worksheet for each included view that lists just the custom dimensions, custom metrics, and goals for that view
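To give a flavour of the approach, here is a heavily simplified sketch of the kind of loop involved (this is not the actual script; it assumes the googleAnalyticsR package, hypothetical view IDs, and only checks the first custom dimension):

library(googleAnalyticsR)

ga_auth()  # authenticate with Google Analytics

view_ids   <- c(123456789, 987654321)          # hypothetical view IDs
date_range <- c("2018-01-01", "2018-03-31")

# For each view, pull hits for one custom dimension and keep the top 5 values
top_dim_values <- lapply(view_ids, function(id) {
  df <- google_analytics(
    id,
    date_range = date_range,
    metrics    = "hits",
    dimensions = "dimension1"   # repeat (or loop) for dimension2, dimension3, ...
  )
  if (is.null(df) || nrow(df) == 0) return(df)  # nothing recorded for this dimension
  head(df[order(-df$hits), ], 5)
})

names(top_dim_values) <- view_ids
top_dim_values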

The code is posted as an RNotebook and is reasonably well structured and commented (even the inefficiencies in it are pretty clearly called out in the comments). It’s available — along with instructions on how to use it — on github:

I actually developed a similar tool for Adobe Analytics a year ago, but that was still relatively early days for me R-wise. It works… but it’s now due for a pretty big overhaul/rewrite.

Happy scripting!

Analysis, Featured

The Trouble (My Troubles) with Statistics

Okay. I admit it. That’s a linkbait-y title. In my defense, though, the only audience that would be successfully baited by it, I think, are digital analysts, statisticians, and data scientists. And, that’s who I’m targeting, albeit for different reasons:

  • Digital analysts — if you’re reading this then, hopefully, it may help you get over an initial hump on the topic that I’ve been struggling mightily to clear myself.
  • Statisticians and data scientists — if you’re reading this, then, hopefully, it will help you understand why you often run into blank stares when trying to explain a t-test to a digital analyst.

If you are comfortably bridging both worlds, then you are a rare bird, and I beg you to weigh in in the comments as to whether what I describe rings true.

The Premise

I took a college-level class in statistics in 2001 and another one in 2010. Neither class was particularly difficult. They both covered similar ground. And, yet, I wasn’t able to apply a lick of content from either one to my work as a web/digital analyst.

Since early last year, as I’ve been learning R, I’ve also been trying to “become more data science-y,” and that’s involved taking another run at the world of statistics. That. Has. Been. HARD!

From many, many discussions with others in the field — on both the digital analytics side of things and the more data science and statistics side of things — I think I’ve started to identify why and where it’s easy to get tripped up. This post is an enumeration of those items!

As an aside, my eldest child, when applying for college, was told that the fact that he “didn’t take any math” his junior year in high school might raise a small red flag in the admissions department of the engineering school he’d applied to. He’d taken statistics that year (because the differential equations class he’d intended to take had fallen through). THAT was the first time I learned that, in most circles, statistics is not considered “math.” See how little I knew?!

Terminology: Dimensions and Metrics? Meet Variables!

Historically, web analysts have lived in a world of dimensions. We combine multiple dimensions (channel + device type, for instance) and then put one or more metrics against those dimensions (visits, page views, orders, revenue, etc.)

Statistical methods, on the other hand, work with “variables.” What is a variable? I’m not being facetious. It turns out it can be a bit of a mind-bender if you come at it from a web analytics perspective:

  • Is device type a variable?
  • Or, is the number of visits by device type a variable?
  • OR, is the number of visits from mobile devices a variable?

The answer… is “Yes.” Depending on what question you are asking and what statistical method is being applied, defining what your variable(s) are, well, varies. Statisticians think of variables as having different types of scales: nominal, ordinal, interval, or ratio. And, in a related way, they think of data as being either “metric data” or “nonmetric data.” There’s a good write-up on the different types — with a digital analytics slant — in this post on dartistics.com.

It may seem like semantic navel-gazing, but it really isn’t: different statistical methods work with specific types of variables, so data has to be transformed appropriately before statistical operations are performed. Some day, I’ll write that magical post that provides a perfect link between these two fundamentally different lenses through which we think about our data… but today is not that day.

Atomic Data vs. Aggregated Counts

In R, when using ggplot to create a bar chart from underlying data that looks similar to how data would look in Excel, I have to include the parameter stat="identity". As it turns out, that is a symptom of the next mental jump required to move from the world of digital analytics to the world of statistics.

To illustrate, let’s think about how we view traffic by channel:

  • In web analytics, we think: “this is how many (a count) visitors to the site came from each of referring sites, paid search, organic search, etc.”
  • In statistics, typically, the framing would be: “here is a list (row) for each visitor to the site, and each visitor is identified as having come from referring sites, paid search, organic search, etc.” (or, possibly, “each visitor is flagged as being yes/no for each of: referring sites, paid search, organic search, etc.”… but that’s back to the discussion of “variables” covered above).

So, in my bar chart example above, R defaults to thinking that it’s making a bar chart out of a sea of data, where it’s aggregating a bunch of atomic observations into a summarized set of bars. The stat="identity" argument has to be included to tell R, “No, no. Not this time. I’ve already counted up the totals for you. I’m telling you the height of each bar with the data I’m sending you!”
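Here is a minimal illustration of both framings (the data is made up): in the first case the heights are already computed, so ggplot needs stat="identity"; in the second case the data is one row per visit, and ggplot’s default counting behaviour is exactly what we want.

library(ggplot2)

# Aggregated counts, the way a web analyst usually exports data
by_channel <- data.frame(
  channel = c("Referral", "Paid Search", "Organic Search"),
  visits  = c(1200, 3400, 5100)
)

# The bar heights are already calculated, so tell ggplot not to count anything
ggplot(by_channel, aes(x = channel, y = visits)) +
  geom_bar(stat = "identity")

# Atomic observations, the way a statistical method often expects data:
# one row per visit, flagged with the channel it came from
visits <- data.frame(
  channel = sample(c("Referral", "Paid Search", "Organic Search"),
                   size = 9700, replace = TRUE)
)

# Here, ggplot's default behaviour (counting the rows per channel) is what we want
ggplot(visits, aes(x = channel)) +
  geom_bar()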

When researching statistical methods, this comes up time and time again: statistical techniques often expect a data set to be a collection of atomic observations. Web analysts typically work with aggregated counts. Two things to call out on this front:

  • There are statistical methods (a cross tabulation with a Chi square test for independence is one good example) that work with aggregated counts. I realize that. But, there are many more that actually expect greater fidelity in the data.
  • Both Adobe Analytics (via data feeds, and, to a clunkier extent, Data Warehouse) and Google Analytics (via the GA360 integration with Google BigQuery) offer much more atomic level data than the data they provided historically through their primary interfaces; this is one reason data scientists are starting to dig into digital analytics data more!

The big, “Aha!” for me in this area is that we often want to introduce pseudo-granularity into our data. For instance, if we look at orders by channel for the last quarter, we may have 8-10 rows of data. But, if we pull orders by day for the last quarter, we have a much larger set of data. And, by introducing granularity, we can start looking at the variability of orders within each channel. That is useful! When performing a 1-way ANOVA, for instance, we need to compare the variability within channels to the variability across channels to draw conclusions about where the “real” differences are.
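As a quick sketch of what that looks like in practice (with simulated daily data, since the point here is the structure rather than the numbers):

# Hypothetical daily orders by channel for one quarter
set.seed(42)
daily_orders <- data.frame(
  channel = rep(c("Paid Search", "Organic Search", "Email"), each = 90),
  orders  = c(rpois(90, 120), rpois(90, 135), rpois(90, 40))
)

# One-way ANOVA: is the variability BETWEEN channels large relative to
# the variability WITHIN each channel?
summary(aov(orders ~ channel, data = daily_orders))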

This actually starts to get a bit messy. We can’t just add dimensions to our data willy-nilly to artificially introduce granularity. That can be dangerous! But, in the absence of truly atomic data, some degree of added dimensionality is required to apply some types of statistical methods. <sigh>

Samples vs. Populations

The first definition for “statistics” I get from Google (emphasis added) is:

“the practice or science of collecting and analyzing numerical data in large quantities, especially for the purpose of inferring proportions in a whole from those in a representative sample.”

Web analysts often work with “the whole” (unless we consider historical data to be the sample, and “the whole” to include future web traffic). But, if we view the world that way, by using time to determine our “sample,” then we’re not exactly getting a random (independent) sample!

We’ve also been conditioned to believe that sampling is bad! For years, Adobe/Omniture was able to beat up on Google Analytics because of GA’s “sampled data” conditions. And, Google has made any number of changes and product offerings (GA Premium -> GA 360) to allow their customers to avoid sampling. So, Google, too, has conditioned us to treat the word “sampled” as having a negative connotation.

To be clear: GA’s sampling is an issue. But, it turns out that working with “the entire population” with statistics can be an issue, too. If you’ve ever heard of the dangers of “overfitting the model,” or if you’ve heard, “if you have enough traffic, you’ll always find statistical significance,” then you’re at least vaguely aware of this!

So, on the one hand, we tend to drool over how much data we have (thank you, digital!). But, as web analysts, we’re conditioned to think “always use all the data!” Statisticians, when presented with a sufficiently large data set, like to pull a sample of that data, build a model, and then test the model with another sample of the data. As far as I know, neither Adobe nor Google has an “Export a sample of the data” option available natively. And, frankly, I have yet to come across a data scientist working with digital analytics data who is doing this, either. But, several people have acknowledged this is something that should be done in some cases.
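Of course, nothing stops us from doing the split ourselves once the data is out of the tool. A bare-bones sketch (with simulated data and a deliberately simplistic model, purely to show the train/test idea):

# Hypothetical session-level data with an outcome we want to model
set.seed(42)
sessions <- data.frame(
  pageviews = rpois(10000, 4),
  converted = rbinom(10000, 1, 0.03)
)

# Hold out a random 30% as a test set; fit the model on the remaining 70%
test_idx <- sample(nrow(sessions), size = 0.3 * nrow(sessions))
train    <- sessions[-test_idx, ]
test     <- sessions[test_idx, ]

model <- glm(converted ~ pageviews, data = train, family = binomial)

# Evaluate the model on data it has never seen
test$predicted <- predict(model, newdata = test, type = "response")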

I think this is going to have to get addressed at some point. Maybe it already has been, and I just haven’t crossed paths with the folks who have done it!

Decision Under Uncertainty

I’ve saved the messiest (I think) for last. Everything on my list to this point has been, to some extent, mechanical. We should be able to just “figure it out” — make a few cheat sheets, draw a few diagrams, reach a conclusion, and be done with it.

But, this one… is different. This is an issue of fundamental understanding — a fundamental perspective on both data and the role of the analyst.

Several statistically-savvy analysts I have chatted with have said something along the lines of, “You know, really, to ‘get’ statistics, you have to start with probability theory.” One published illustration of this stance can be found in The Cartoon Guide to Statistics, which devotes an early chapter to the subject. It actually goes all the way back to the 1600s and an exchange between Blaise Pascal and Pierre de Fermat and proceeds to walk through a dice-throwing example of probability theory. Alas! This is where the book lost me (although I still have it and may give it another go).

Possibly related — although quite different — is something that Matt Gershoff of Conductrics and I have chatted about on multiple occasions across multiple continents. Matt posits that, really, one of the biggest challenges he sees traditional digital analysts facing when they try to dive into a more statistically-oriented mindset is understanding the scope (and limits!) of their role. As he put it to me once in a series of direct messages, it really boils down to:

  1. It’s about decision-making under uncertainty
  2. It’s about assessing how much uncertainty is reduced with additional data
  3. It must consider, “What is the value in that reduction of uncertainty?”
  4. And it must consider, “Is that value greater than the cost of the data/time/opportunity costs?”

The list looks pretty simple, but I think there is a deeper mindset/mentality-shift that it points to. And, it gets to a related challenge: even if the digital analyst views her role through this lens, do her stakeholders think this way? Methinks…almost certainly not! So, it opens up a whole new world of communication/education/relationship-management between the analyst and stakeholders!

For this area, I’ll just leave it at, “There are some deeper fundamentals that are either critical to understand or something that can be kicked down the road a bit.” I don’t know which it is!

What Do You Think?

It’s taken me over a year to slowly recognize that this list exists. Hopefully, whether you’re a digital analyst dipping your toe more deeply into statistics or a data scientist who is wondering why you garner blank stares from your digital analytics colleagues, there is a point or two in this post that made you think, “Ohhhhh! Yeah. THAT’s where the confusion is.”

If you’ve been trying to bridge this divide in some way yourself, I’d love to hear what of this post resonates, what doesn’t, and, perhaps, what’s missing!

Adobe Analytics, Featured

Do You Want My Adobe Analytics “Top Gun” Class In Your City?

This past May, I conducted my annual Adobe Analytics “Top Gun” classes to a packed room in Chicago. I always love doing this class because it helps the attendees get more out of Adobe Analytics when they get back to their organizations. I have done this class in Europe several times and usually once a year in the US. The feedback has been tremendous as can be seen by some of the reviews on LinkedIn shown below.

However, I often get requests to do my class in various cities across the US (and the world), but I don’t have the time to orchestrate doing that many trainings per year. To conduct a class, I need a minimum of 15 people and the cost of the class is about $1,250 per person for the full one-day class. I also need to find a free venue to conduct the class, which is often at a company that has a large conference room or a training room.

Since I would like to do more classes, but am time constrained, I am going to try something new this year.  I am going to let anyone out there bring my “Top Gun” class to their city by asking you to help host my class.  If you have a venue where I can conduct my Adobe Analytics “Top Gun” class, and you think you can work with your local Adobe Analytics community to get at least 10 people to commit (I can usually get a bunch once I advertise the class), I am happy to hit the road and come to you and conduct a class. So if you are interested in hosting my “Top Gun” class, please e-mail me and let’s discuss. I also conduct my class privately for companies that have enough people wanting to attend to justify the cost, so feel free to reach out to me about that if interested as well.

To help identify cities that are interested (or if you just want to be notified of my next class), I have created a Google Form where anyone can submit their name, e-mail and City/Region, so if you are interested in having my “Top Gun” class in your city, please submit this form!

In case you need help selling the class to your local folks, more info about the class follows.

Adobe Analytics “Top Gun” Class Description

It is a one day crash course on how Adobe Analytics works behind the scenes based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate every day business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Adobe Analytics “Top Gun” Class Feedback

To view more feedback, check out the recommendations on my LinkedIn Profile.

Adobe Analytics, Featured

Search Result Page Exit Rates

Recently, I was working with a client who was interested in seeing how often their internal search results page was the exit page. Their goal was to see how effective their search results were and which search terms were leading to high abandonment. Way back in 2010, I wrote a post about how to see which internal search terms get clicks and which do not, but this question is a bit different from that. So in this post, I will share some thoughts on how to quantify your internal search exit rates in Adobe Analytics.

The Basics

Seeing the general exit rate of the search results page on your site is pretty easy to do with the Pages report. To start, simply open the Pages report, add the Exits metric to the report, and use the search box to isolate your search results page:

Next, you can trend this by changing to the trended view:

But to see the Exit Rate, you need to create a new calculated metric that divides these Exits by the Total # of Visits (keep in mind that you need to use the gear icon to change Visits to “Total”). The calculated metric would look like this:

Once you have this metric, you can change your previous trending view to use this calculated metric (still for the Search Results Page) to see this:

Now we have a trend of the Search Results page exit rate and this graph can be added to a dashboard as needed.

More Advanced

As you can see getting our site search results page exit rate is pretty easy. However, the Pages approach is a bit limiting because it is difficult to view these Search Result page exit rates by search term. For example, if I want to see the trend of Search Result Exit Rates for the term “Bench,” I can create a segment defined as “Hit where Internal Search term = Bench” and apply it to see this:

Here you can see that this search term has a much higher than average Search Result page Exit Rate. But if I want to do this for more search terms, I would have to create many keyword-based segments, which would be very time consuming.

Fortunately, there is another way. Instead of using the Pages report, you can create a new Search Result Page Exit Rate calculated metric that is unrelated to the Pages report. To do this, you would first build a segment that looks for Visits where the Exit Page was “Search Results” as shown here:

Next, you would use this new segment in a new “derived” calculated metric and use it to divide Search Page Exit Visits by all Visits like this:

 

This would produce a trend that is [almost] identical to the report shown above:

Just as before, this trend line can be added to a dashboard as needed. But additionally, this new calculated metric can be added to your Internal Search Term eVar report to see the different Search Result Page Exit Rates for each term:

This allows you to compare terms and look for ones that are doing well and/or poorly. Whereas before, if you wanted to see a trend for any particular phrase, you had to create a new segment, in this report, you can simply trend the Search Result Page Exit Rate and then pick the internal search terms you want to see trended. For example, here is a trend of “Bench” and “storage bench” seen together:

This means that you can see the Search Page Exit Rate for any term without having to build tons of segments (yay!). And, as you can see, the daily trend of Search Page Exit Rates for “Bench” here are the same as the ones shown above for the Pages version of the metric with the one-off “Bench” segment applied.
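Outside of Adobe, the arithmetic behind this metric is nothing more than search-exit visits divided by visits for each term. A trivial sketch in R, with made-up numbers, just to make the comparison across terms concrete:

# Hypothetical counts per internal search term: visits where the exit page
# was "Search Results" vs. all visits that included a search for that term
terms <- data.frame(
  search_term        = c("bench", "storage bench", "coffee table"),
  search_exit_visits = c(420, 180, 35),
  visits             = c(1500, 900, 1100)
)

terms$search_exit_rate <- terms$search_exit_visits / terms$visits
terms[order(-terms$search_exit_rate), ]  # highest exit rates first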

One More Thing!

As if this weren’t enough, there is one more thing!  If you sort the Search Term Exit Rate (in descending order) in the Internal Search Term eVar report, you can find terms that have 100% (or really high) exit rates!

This can help you figure out where you need more content or might be missing product opportunities. Of course, many of these will be cases in which there are very few internal searches, so you should probably view this with the raw number of searches as shown above.

Adobe Analytics, Featured

Out of Stock Products

For retail/e-commerce websites that sell physical products, one of the worst things that can happen is having your products be out of stock. Imagine that you have done a lot of marketing and campaigns to get people to come to your site, and led them to the perfect product, only to find that for some people, you don’t have enough inventory to sell them what they want. Nothing is more frustrating than having customers who want to give you their money but can’t! Oftentimes, inventory is beyond the control of merchandisers, but I have found that monitoring the occurrences of products being out of stock can be beneficial, if for no other reason than to make sure others at your organization know about it and to apply pressure to avoid inventory shortfalls when possible. In this post, I am going to show you how to monitor instances of products being out of stock and how to quantify the potential financial impact of out of stock products.

Tracking Out of Stock Products

The first step in quantifying the impact of out of stock products is to understand how often each product is out of stock. Doing this is relatively straightforward. When visitors reach a product page on your site, you should already be setting a Product View success event and passing the Product Name or ID to the Products variable. If a visitor reaches a product page for a product that is out of stock, you should set an additional “Out of Stock” success event at the same time as the Product View event. This will be a normal counter success event and should be associated with the product that is out of stock. Once this is done, you can open your Products report and add both Product Views and this new Out of Stock success event and sort by the Out of Stock event to see which products are out of stock the most:

In this example, you can see that the products above are not always out of stock and how often each is out of stock. If you wanted, you could even create an Out of Stock % calculated metric to see the out of stock percent by product using this formula:

This would produce a report that looks like this:

If you have SAINT Classifications that allow you to see products by category or other attributes, you could also see this Out of Stock percent by any of those attributes as well.

Of course, since you have created a new calculated metric, you can also see it by itself (agnostic of product) to see the overall Out of Stock % for the entire website:

In this case, it looks like there are several products that are frequently out of stock, but overall, the total out of stock percent is under two percent.

Tracking Out of Stock Product Amounts

Once you have learned which products tend to be out of stock, you might want to figure out how much money you could be losing due to out of stock products. Since the price of the product is typically available on the product page, you can capture that amount in a currency success event and associate it with each product. For example, if a visitor reaches a product page and the product normally sells for $50, but is out of stock, you could pass $50 to a new “Out of Stock Amount” currency success event. Doing this would produce a report that looks like this:

This shows you the amount of money, by product, that would have been lost if every visitor viewing that product actually wanted to buy it. You can also see this amount in aggregate by looking at the success event independently:

However, these dollar amounts are a bit fake, because it is not ideal to assume a 100% product view to order conversion for these out of stock products, and doing so greatly inflates this metric. Therefore, it is more realistic to weight this Out of Stock dollar amount by how often products are normally purchased after viewing the product page. This is still not an exact science, but it is much more realistic than assuming 100% conversion.

Fortunately, creating a weighted version of this Out of Stock Amount metric is pretty easy by using calculated metrics. To do this, you simply take the Out of Stock Amount currency success event and divide it by the Order to Product View ratio. This is done by adding a few containers to a new calculated metric as shown here:

Once this metric is created, you can add it to the previous Products report to see this:

In this report, I have added Orders and this new Weighted Out of Stock Amount calculated metric. If you look at row 4, you can see that the total Out of Stock Amount is $348, but that the Weighted Out of Stock Amount is $34. The new metric calculates the $34 by multiplying the total Out of Stock Amount of $348 by the normal product conversion rate (26/268 = 9.70149%), so 348 × 0.0970149 = $33.76, which means that the $34 amount is much more likely to reflect the lost value for that product. The cool part is that since each product has different numbers of Orders and Product Views, the weighting applied to each product is calculated individually by our new calculated metric! For example, while the Product View to Order conversion ratio for row 4 was 9.7%, the conversion rate for row 10 is only 2.6% (4/154), meaning that only $22 of the $843 Out of Stock Amount is carried over to the Weighted Out of Stock Amount calculated metric. Pretty cool huh?

One Last Problem

Before we go patting ourselves on the back, however, we have one more problem to solve. If you look at the report above, you might have noticed the problem in rows 1, 2, 3, 5, 6, 8 and 9. Even though there is a lot of money in the Out of Stock Amount success event, no money is being applied to the Weighted Out of Stock Amount calculated metric we created. This is because there were no Orders for these products, meaning that the conversion rate is zero, which, when multiplied by the Out of Stock Amount, also results in zero (which hopefully you recall from elementary school). That is not ideal, because now the Weighted Out of Stock Amount is too low and the raw amount in the success event is too high! Unfortunately, our calculated metric above only works when there are Orders during the time range, so that we can calculate the average Product View to Order ratio for each product.

Unfortunately, there is no perfect way to solve this without manually downloading a lot of historical data to find what the Product View to Order ratio was for each product over the past year or two. The good news is that if you use a large enough timeframe, the cases of zero Orders should be relatively small. But just in case you do have cases where zero Orders exist, I am going to show you an advanced trick that you can use to get the next best thing in your Weighted Out of Stock Amount calculated metric.

My solution for the zero-Order issue is to use the average Product View to Order ratio for all cases in which there are zero Orders. The idea here is that if the raw metric assumes 100% conversion and zero-Order rows currently count 0%, why not use the site average for the zero-Order rows? This will not be perfect, but it is far better than using 100% or 0%! To do this, you need to make a slight tweak to the preceding calculated metric. This tweak involves adding an IF statement that first checks whether an Order exists. If it does, the calculated metric uses the formula shown above. If no Order exists, it multiplies the Out of Stock Amount success event by the average (site-wide) Order to Product View ratio, which is easy to do by using the TOTAL metrics for Orders and Product Views. While this all sounds complex, here is what the new calculated metric looks like when it is completed:
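Sketched out as a pseudo-formula (using the IF statement and TOTAL metrics described above), the updated metric looks roughly like this:

  Weighted Out of Stock Amount =
    IF ( Orders > 0,
         Out of Stock Amount × (Orders / Product Views),
         Out of Stock Amount × (Total Orders / Total Product Views) )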

Next, you simply add this to the previous report to see this:

As you can see, the rows that worked previously are unchanged (rows 4, 7, 10), but the other rows now have Weighted Out of Stock Amounts. If you divide the total Orders by the total Product Views, you can see that the average Order to Product View ratio is 4.21288% (16,215/384,891). If you then apply this ratio to any of the Out of Stock Amounts with zero Orders, you will get the Weighted Out of Stock Amount. For example, row 1 has a value of $286, which is 4.21288% multiplied by $6,786. In this case, you can remove the old calculated metric and just use the new one; as you use longer date ranges, you will have fewer zero-Order rows and your data will be more accurate.

Of course, since this is a calculated metric, you can always look at it independent of products to see the weighted Out of Stock Amount trended over time:

While this information is interesting by itself, it can also be applied to many other reports you may already have in Adobe Analytics. Here are just some sample scenarios in which knowing how often products are out of stock and a ballpark amount of potential lost revenue could come in handy:

  • How much money are we spending on marketing campaigns to drive visitors to products that are out of stock?
  • Which of our known customers (with a Customer ID in an eVar) wanted products that were out of stock and can we retarget them via e-mail or Adobe Target later when stock is replenished?
  • Which of our stores/geographies have the most out of stock issues, and what is the potential lost revenue by store/region?

Summary

If your site sells physical products and has instances where products are not in stock, the preceding is one way that you can conduct web analysis on how often this is happening, for which products, and how much money you might be losing out on as a result. When this data is combined with other data you might have in Adobe Analytics (e.g., campaign data, customer ID data), it can lead to many more analyses that might help improve site conversion.

Adobe Analytics, Featured

Visitor Retention in Adobe Analytics Workspace

I recently had a client ask me how they could report new and retained visitors for their website. In this particular case, the site expected the same visitors to return regularly since it is a subscription site. At first, my instinct was to use the Cohort Analysis report in Adobe Analytics Workspace, but that only shows which visitors who came to the site came back, not which visitors are truly new over an extended period of time. In addition, it is not possible to add Unique Visitors to a cohort table, so that rules this option out. What my client really wanted to see was which visitors who came this month had not been to the site in the past (or at least the past 24 months), and to differentiate those visitors from those who had been to the site in the past 24 months. While I explained the inherent limitation of knowing whether visitors are truly new due to potential cookie deletion, they said that they still wanted to see this analysis, assuming that cookie deletion is a common issue across the board.

While at first this problem seemed pretty easy, it turned out to be much more complex than I had first thought it would be. The following will show how I approached this in Adobe Analytics Workspace.

Starting With Segments

To take on this challenge, I started by building two basic segments. The first segment I wanted was a count of brand new Visitors to the website in the current month. To do this, I needed to create a segment of visitors who had been to the site in the current month, but not in the 24 months prior to the current month. I did this by using the new rolling date feature in Adobe Analytics to include the current month and to exclude the previous 24 months, like this:
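In words, the logic of that segment is roughly the following (a sketch of the logic rather than the exact segment builder layout):

  New Visitors – Current Month (Visitor container):
    INCLUDE visitors with a visit during the rolling "Current Month" date range
    EXCLUDE visitors with a visit during the rolling "Last 24 Months" date range (the 24 months prior to the current month)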

If you have not yet used the rolling date feature, here is what the Last 24 Months Date Range looked like using the rolling date builder:

As you can see, this date range includes the 24 months preceding the current month (April 2017 in this case), so when this date range is added to the preceding segment, we should only get visitors from the current month who have not been to the site in the preceding 24 months. Next, you can apply this segment to the Unique Visitors metric in Analysis Workspace:

As you can see, this only shows the count of Visitors for the current month and it excludes those who had been to the site in the preceding 24 months. In this case, it looks like we had 1,786 new Visitors this month. We can verify this by creating a new calculated metric that subtracts the “new” Visitors from all Visitors:

When you add this to the Analysis Workspace table, it looks like this:

Next, we can create a retention rate % by creating another calculated metric that divides our retained Visitors by the total Unique Visitors:
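Spelled out, the two calculated metrics from this step and the previous one are simply:

  Retained Visitors = Unique Visitors − New Visitors (the segmented metric above)
  Retention Rate %  = Retained Visitors / Unique Visitors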

This allows us to see the following in the Analysis Workspace table:

 

[One note about Analysis Workspace. Since our segment spans 25 months, the freeform table will often revert back to the oldest month, so you may have to re-sort in descending order by month when you make changes to the table.]

The Bad News

So far, things look like they are going ok. A bit of work to create date ranges, segments and calculated metrics, but we can see our current month new and retained Visitors. Unfortunately, things take a turn for the worse from here. Since date ranges are tied to the current day/month, I could not find a way to truly roll the data for 24 months (I am hoping there is someone smarter than me out there who can do this in Adobe!). Therefore, to see the same data for Last Month, I had to create two more date ranges and segments called “Last Month Visitors” & “Last Month, But Not 24 Months Prior Visitors” and then apply these to create new calculated metrics. Here are the two new segments I created for Last Month:

 

When these are applied to the Analysis Workspace table, we see this:

To save space, I have omitted the raw count of Retained Visitors and am just showing the retention rate, which for last month was 7.42% vs. 10.82% for the current month.

Unfortunately, this means that if you want to go back 24 months, you will have to create 24 date ranges, 24 segments and 24 calculated metrics. While this is not ideal, the good news is that once you create them, they will always work for the last rolling 24 months, so it is a one-time task, and if you only care about the last 12 months, your work is cut in half. However, a word of caution when you are building the prior 24-month date ranges: you have to keep careful track of what is two months ago versus three months ago. To keep it straight, I created the following cheat sheet in Excel, and you can see the formula I used at the top:

Here is what the table might look like after doing this for three months:

And if you have learned how to highlight cells and graph them in Analysis Workspace, you can select only the retention rate percentages and create a graph that looks like this:

Other Cool Applications

While this all may seem like a pain, once you are done, there are some really cool things you can do with it. One of those things is to break these retention rates down by other segments. For example, below, I have added three segments as a breakdown to April 2017. These segments isolate visits that contain blog posts by specific authors. Once this breakdown is active, it is possible to see the new Visitors, retained Visitors and retention rate by month and blog author:

Alternatively, if your business were geographically based, you could look at the data by US State by simply dragging over the State dimension container:

Or, you could see which campaign types have better or worse retention rates:

Summary

To summarize, the new features Adobe has added to Analysis Workspace, including Rolling Dates, open up more opportunities for analysis. To view rolling visitor retention, you may need to create a series of distinct segments/metrics, but in the end, you can find the data you are looking for. If you have any ideas or suggestions on different/easier ways to perform this type of analysis in Adobe Analytics, please leave a comment here.

Analysis, Featured

“What will you do with that?” = :-(

Remember back when folks wrote blog posts that were blah-blah-blah “best practice”-type posts? I think this is going to be one of those – a bit of a throwback, perhaps. But, hopefully, mildly entertaining and, hell, maybe even useful!

Let’s Start with Three Facts

  • Fact #1: Business users sometimes (often?) ask for data that they’re not actually going to be able to act on.
  • Fact #2: Analysts’ time is valuable.
  • Fact #3: Analysts need to prioritize their time pulling data, compiling reports, and conducting analyses with a bias towards results that will drive action.

None of the above are earth-shattering or particularly insightful observations.

And Yet…

…I am regularly dismayed by the application of these facts by analysts I watch or chat with. (Despite being an analytics curmudgeon, I don’t actually enjoy being dismayed.)

The following questions are all variations of the same thing, and they all make the hair on the back of my neck stand up when I hear an analyst ask them (or proudly tell me they ask them as part of their intake process):

“What are you going to do with that information (or data or report) if I provide it?”

“What decision will you make based on that information?”

“What action will you take if I provide that information?”

I abhor these questions (and the many variations of them).

Do you share my abhorrence?

Pause for a few seconds and ask yourself if you see these types of questions as counterproductive.

If you do see a problem with these questions, then read on and see if it’s for the same reason that I do.

If you do not see a problem, then read on and see if I can change your mind.

If you’re not sure…well, then, get off the damn fence and form an opinion!

Some More Facts

We have to add to our fact base a bit to explain why these questions elevate my dander:

  • Fact #4: Analysts must build and maintain a positive relationship with their stakeholders.
  • Fact #5: Analysts hold the keys to the data (even if business users have some direct data access, they don’t have the expertise or depth of access that analysts do).

How Those Questions Can Be Heard

When an analyst says, “What decision will you make based on that information?” what they can (rightly!) be heard saying is any (or all) of the following:

“You (the business user) must convince me (the analyst) that it is worth my time to support you.”

“I don’t believe that information would be valuable to you, so you must convince me that it would be.”

“I would rather not add anything to my plate, so I’m going to make you jump through a few more hoops before I agree to assist you. (I’m kinda’ lazy.)”

Do you see the problem here? By asking a well-intended question, the analyst can easily come across as adversarial: as someone who holds the “power of the data” such that the business user must (metaphorically) grovel/justify/beg for assistance.

This is not a good way to build and grow strong relationships with the business! And, we established with Fact #4 that this was important.

But…What About Fact #3?

Do we have an intractable conflict here? Am I saying that we can’t say, “No” or, at least, “Why?” to a business user? There are only so many hours in the day!

I’m not actually saying that at all.

Let’s shift from facts to two assumptions that I (try to) live by:

  • Assumption #1: No business user wants to waste their own or the analyst’s time.
  • Assumption #2: Stakeholders have reasonably deep knowledge of their business areas, and they want to drive positive results.

“Aren’t assumptions dangerous?” you may ask. “Aren’t they the cousins of ‘opinions,’ which we’ve been properly conditioned to eschew?”

Yes… except not really in this case. These are useful assumptions to work from, and to discard only if they are thoroughly and conclusively invalidated in a specific situation.

Have You Figured Out Where I Am Heading?

As soon as a business user approaches me with any sort of request:

  • I start with an assumption that the request is based on a meaningful and actionable need.
  • I put the onus on myself to take the next step to articulate what that need is.

Is that a subtle pivot? Perhaps. But, with both of the above in mind, the questions I listed at the beginning of this post should start to appear as clearly inappropriate.

The Savvy Analyst’s Approach

I hope you’re not expecting anything particularly magic here, as it’s not. But, no matter the form of the question or request, I always try to work through the following basic process by myself:

  1. Is the requestor trying to simply measure results or are they looking to validate a hypothesis? (There is no room for “they just want some numbers” – given my own knowledge of the business and any contextual clues I picked up in the request, I will put it into one bucket or the other.)
  2. If I determine the stakeholder is trying to measure results, then I try to articulate (on the fly in conversation or in writing as a follow-up) what I think their objective is for the thing they’re trying to measure. And then I skip to step 4.
  3. If I determine the stakeholder is trying to validate a hypothesis (or “wants some analysis”), then I try to articulate one or more of the most likely and actionable hypotheses that I can using the structure:
    • The requestor believes… <something>.
    • If that belief is right, then we will… <some action>.
  4. I then play back what I’ve come up with to the stakeholder. I’ll couch it as though I’ve just completed a master class in active listening: “I want to make sure I’m getting you information that is as useful as possible. What I think you’re looking for is…(play back of what came out of step 2 or 3).”
  5. Then — after a little (or a lot) of discussion — I’ll dive into actually doing the work.

If you’re more of a graphical thinker, then the above words can be represented as a flowchart:

This approach has several (hopefully obvious) benefits:

  • It immediately makes the request a collaboration rather than a negotiation.
  • It sneakily demonstrates that, as an analyst, I’m focused on business results and on providing useful information.
  • It prevents me from spending time (hours or days) pulling and crunching data that is wildly off the mark for what the stakeholder actually wants.
  • It provides me with a space to outline several different approaches that require various levels of effort (or, often, provides the opportunity to say, “Let’s just check this one thing very quickly before we head too far down this path.”).

Are You With Me?

What do you think? Have you been guilty of guiding a stakeholder to put up her dukes every time she comes to you with a request, or do you take a more collaborative approach right out of the chute?

Adobe Analytics, Featured

Trending Data After Moving Variables

Most of my consulting work involves helping organizations fix and clean up their Adobe Analytics implementations. Oftentimes, I find that organizations have multiple Adobe Analytics report suites and that they are not set up consistently. As I wrote about in this post, having different variables in different variable slots across different report suites can result in many issues. To see whether you have this problem, you can select multiple report suites in the administration console and then review your variables. Here is an example looking at the Success Events:

As you can see, this organization is in real trouble, because all of their Success Events are different across all of their report suites. The biggest issue with this is that you cannot aggregate data across the various report suites. For example, if you had one suite with “Internal Searches” in Success Event 1 and another suite with “Lead Forms Completed” in Success Event 1, combining the two in a master [global] report suite would make no sense, since you’d be combining apples and oranges.

Conversely, if you do have the same variable definitions across your Adobe Analytics report suites, you get the following benefits:

  • You can look at a report in one report suite and then with one click see the same report in another report suite;
  • You can re-use bookmarks, dashboards, segments and calculated metrics, since they are all built on the same variable definitions;
  • You can apply SAINT Classifications to the same variable in all suites concurrently via FTP;
  • You can re-use JavaScript code and/or DTM configurations;
  • You can more easily QA your data by building templates in ReportBuilder or other tools that work across all suites;
  • You can re-use implementation documentation and training materials.

To read more about why you should have consistent report suites, click here, but needless to say, it is normally a best practice to have the same variable definitions across most or all of your report suites.

How Do I Re-Align?

So, what happens if you have already messed up and your report suites are not synchronized (like the one shown above)? Unfortunately, there is no magic fix for this. To rectify the situation, you will need to move variables in some of your report suites to align them if you want to get the benefits outlined above. The level of difficulty in doing this is directly correlated to the disparity of your report suites. Normally, I find that there are a bunch of report suites that are set up consistently and then a few outliers, or that the desktop website implementation is different from the mobile app implementation. Regardless of the cause of the differences, I recommend that you make the report suite(s) that are most prevalent the new “master” suite and then force the others to move their data to the variable slots found in the new “master.”

Of course, the next logical question I get is always: “What about all of my historical data?” If you move data from variable slot 1 to slot 5, for example, Adobe Analytics cannot easily move all of your historical data. You won’t lose the old data; it just is not easy to transfer it to the new variable slot. Old data will be in the old variable slot and new data will be in the new variable slot. This can be annoying for about a year, until you have new year over year data in the new variable slot. In general, even though this is annoying for a year, I still advocate making this type of change, since it is much better for the long term when it comes to your Adobe Analytics implementation. It is a matter of short-term pain for long-term gain and, in some ways, penance for not implementing Adobe Analytics the correct way in the beginning. However, there are ways that you can mitigate the short-term pain associated with making variable slot changes. In the next section, I will share two different ways to mitigate this until you once again have year over year data.

Trending Data After Moving Variables

Adobe ReportBuilder Method

This first method of getting year over year data from two different variable slots is to use Adobe ReportBuilder. ReportBuilder is Adobe’s Microsoft Excel plug-in that allows you to import Adobe Analytics data into Excel data blocks. In this case, you can create two date-based data blocks in Excel and place them right next to each other. The first data block will be the metric (Success Event) or dimension (eVar/sProp) from the old variable slot and it will use the old dates in which data was found in that variable. The second data block will be the new variable slot and will start with the date that data was moved to the new variable slot. For example, let’s imagine that you had a report suite that had “Internal Searches” in Success Event 2, but in order to conform to the new standard, you needed to move “Internal Searches” to Success Event 10 as of June 1st. In this case, you would build a data block in Excel that had all data from Success Event 2 prior to June 1st and then, next to it, another data block that had all data from Success Event 10 starting June 1st. Once you refresh both data blocks, you will have one combined table of data, both of which contain “Internal Searches” over time. Then you can build a graph to see the trend and even show year over year data.

This Excel solution still takes some work, since you’d have to repeat it for any variables that move locations, but it is one way to see historical data over time and hide from end users the fact that a change has occurred. Once you have a year’s worth of “Internal Search” data in Success Event 10, you can likely abandon the Excel solution and go back to reporting on “Internal Searches” using the new variable slot (Success Event 10 in this case), which will now show year over year data.

Derived Calculated Metric Method

The downside of the preceding Excel approach is that seeing year over year data requires your end-users to [temporarily for one year] abandon the standard web-based Adobe Analytics interface in order to see trended data. This can be a real disadvantage since most users are already trained on how to use the normal Adobe Analytics interface, including Analysis Workspace. Therefore, the other approach to combining data when variables have to be moved is to use a derived calculated metric. Now that you can apply segments, including dates, to calculated metrics in Adobe Analytics, you can create a combined metric that uses data from two different variables for two different date ranges. This allows you to join the old and new data into one metric that has a historical trend of the data and the same concept can apply to dimensions like eVars and sProps.

Let’s illustrate this with an example. Imagine that you have a metric called “Blog Post Views” that has historically been captured in Success Event 3. In order to conform to a new implementation standard, you need to move this data to Success Event 5 as of April 5th, 2017. You ultimately want to have a metric that shows all Blog Post Views over time, even though behind the scenes the data will be shifting from one variable to another on April 5th. To do this, you would start by creating two new Date Ranges in Adobe Analytics – one for the pre-April 5th time period and one for the post-April 5th period. While you could make a different set of date ranges for each variable slot being moved, the odds are that you will be making multiple changes with each release, so I would suggest making more generic date ranges that can be used for any variables changing in a release like these:

In this case, let’s assume that your historical data started January 1st, 2016, and that you won’t need the combined calculated metric past December 31st, 2019, but you can put whatever dates you’d like. The important part is that one ends on April 4th and the next one begins on April 5th. Once these date ranges have been created, you can create two new segments that leverage them. Below, you can see two basic segments that include hits for each date range:

Once these segments are created, you can begin to create your derived calculated metric. This is done by creating a metric that adds together the two Success Events that represent the same metric (Blog Post Views in this case). To do this, you simply add the old Success Event (Event 3 in this case) and the new Success Event (Event 5 in this case):

But before you save this, you need to apply the date ranges to each of these metrics. For Success Event 3 that is the date range prior to April 5th and for Success Event 5, it is the date range after April 5th. To do this, simply drag over the two new segments you created that are based upon the date ranges like this:

By doing this, you are telling Adobe Analytics that you want Success Event 3 data prior to April 5th to be added to Success Event 5 data after April 5th. Therefore, if your tagging goes as planned, you should be able to see a unified historical view of Blog Post Views from January 1st, 2016 until December 31st, 2019 using this new combined calculated metric. Here is what it would look like (with post-conversion data showing in the red highlight box):
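Written out, the derived metric amounts to the following (event numbers and dates per the example above):

  Blog Post Views (combined) =
      [ event3, limited to the Jan 1, 2016 – Apr 4, 2017 date-range segment ]
    + [ event5, limited to the Apr 5, 2017 – Dec 31, 2019 date-range segment ]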

 

This report was run on April 8th, shortly after the April 5th conversion, and you can see that the new data is flowing seamlessly with the historical data.

For you Analysis Workspace junkies, you can see the same data there either by using the new calculated metric or applying the same segments as shown here:

Of course, this still requires some end-user education, since looking at Success Event 3 or Success Event 5 in isolation can cause issues during the transition period. But in reality, most people only look at the last few weeks of data, so the new variable (Success Event 5 in this case) should be fine for most people after a few weeks and the combined metric is only necessary when you need to look at historical or year over year data. In extreme cases, you can hide the raw variable reports (Event 3 & Event 5) and use the Custom Report feature to replace them with this new combined calculated metric in the reporting menu structure (though that won’t help you in Analysis Workspace).

Summary

To summarize, if your organization isn’t consistent in the way it implements, you may lose out on many of the advantages inherent to Adobe Analytics. If you decide that you want to clean house and make your implementations more consistent, you may have to shift data from one variable to another. Doing this can cause some short-term reporting issues, since it is difficult to see historical data spanning two different variables. However, this can be mitigated by using Adobe ReportBuilder or a derived calculated metric as shown in this post. Neither of these approaches is perfect, but they can help get your organization over the hump until you have enough historical data that you can disregard the old data prior to your variable conversion.

Adobe Analytics, Featured

2017 Adobe Analytics “Top Gun” Class – May 2017 (Chicago)

Back by popular demand, it is once again time for my annual Adobe Analytics “Top Gun” class! This May 17th (note: the date was originally June 19th, but had to be moved), I will be conducting my advanced Adobe Analytics class in downtown Chicago. This will likely be the only time I offer the class publicly (vs. privately for clients), so if you are interested, I encourage you to register before the spots are gone (last year’s class sold out).

For those of you unfamiliar with my class, it is a one-day crash course on how Adobe Analytics works behind the scenes based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate everyday business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Here are some quotes from past class attendees:


To register for the class, click here. If you have any questions, please e-mail me. I hope to see you there!

Adobe Analytics, Featured

Leveraging Data Anomalies – Prospects & Competitors

A few weeks ago, I shared a new tool called Alarmduck that helps detect data anomalies in Adobe Analytics and posts these to Slack. This data anomaly tool is pretty handy if you want to keep tabs on your data or be notified when something of interest pops up. Unlike other Slack integrations, Alarmduck doesn’t use the out-of-box Adobe Analytics anomaly detection, but rather, has its own proprietary method for identifying data anomalies. In this post, I will demonstrate a few examples of how I use the Alarmduck tool in my daily Adobe Analytics usage.

Identifying Hot Prospects

As I have demonstrated in the past, I use a great tool called DemandBase to see which companies are visiting my blog. This helps me see which companies might one day be interested in my Adobe Analytics consulting services. Sometimes, I will notice a huge spike in visits from a particular company, which may indicate that I should reach out to them to see if they need my help (“strike while the iron is hot” as they say). However, it is a pain for me to check daily or weekly to see if there are companies that are hitting my blog more than normal – which makes this a great use for Alarmduck.

To do this, I would create a new Alarmduck report (see instructions in previous post) that looks for anomalies using the DemandBase eVar which contains the Company Name by selecting the correct eVar in the Dimension drop-down box:

In this case, I am also going to narrow down my data to a rolling 14 days, US companies only and exclude any of my competitors (which I track as a SAINT Classification of the DemandBase Company eVar):

 

Once I set this up, I will be notified if there are any known companies that hit my blog over a rolling 14-day period that cause a noticeable increase or decrease. This way, I can go about my daily business and know that I will automatically be notified in Slack if something happens that requires my attention. For example, the other day, I sat down to work in the morning and saw this notification in Slack:

It is cool that Alarmduck can show graphs of data right within Slack! However, if I want to dig deeper, I can click on the link above the graph to see the same report in Adobe Analytics and, for example, see which of my blog posts this company was viewing:

Eventually, if I wanted to, I could reach out to the analytics team of this company and see if they need my help.

Competitor Spikes

From time to time, I like to check out what some of my “competitors” (more like others who provide analytics consulting) are reading on our website or my blog. This is something that can also be done using DemandBase. In my case, I have picked a bunch of companies and classified them using SAINT. This allows me to create a “Competitors” segment and see what activity is taking place on our website from these companies. Just as was done above, I can create a new Alarmduck report and use a segment (Competitors in this case) and then choose the Demandbase Company Dimension and select the metrics I want to use (Page Views and Visits in this case):

Once this is created, I will start receiving alerts (and graphs!) in Slack if there are any spikes by my competition like this:

In this case, there were two companies that had unusually high Page Views on our website. If I want to, I can click on the “Link to Web Report” link within Slack to see the report in Adobe Analytics:

Once in Adobe Analytics, I can do any normal type of analysis, like viewing what specific pages on our website this competitor viewed:

In most cases, this is just something I would view out of curiosity, but it is a fun use-case for how to leverage anomaly detection in Adobe Analytics via Alarmduck.

Summary

These are just two simple examples of how you can let bots like Alarmduck do the work for you and use more of your time on more value-added activities, knowing that you will be alerted if there is something you need to take action upon. If you want to try Alarmduck for free with your Adobe Analytics implementation, click here.

Featured, google analytics

An Overview of the New Google Analytics Alerts

Google Analytics users have become very familiar with the “yellow ribbon” notices that appear periodically in different reports.

For instance, if you have a gazillion unique page names, you may see a “high-cardinality” warning:

Or, if you are using a user-based report and have any filters applied to your view (which you almost always do!), then you get a warning that the filters could potentially muck with the results:

These can be helpful tips. Most analysts read them, interpret them, and then know whether or not they’re of actual concern. More casual users of the platform may be momentarily thrown off by the terminology, but there is always the Learn More link, and an analyst is usually just an email away to allay any concerns.

The feedback on these warnings has been pretty positive, so Google has started rolling out a number of additional alerts. Some of these are pretty direct and, honestly, seem like they might be a bit too blunt. But, I’m sure they will adjust the language over time, as, like all Google Analytics features, this one is in perpetual beta!

This post reviews a handful of these new “yellow ribbon” messages. As I understand it, they are being rolled out to all users over the coming weeks. But, of course, you will not see them unless you are viewing a report under the conditions that trigger them.

Free Version Volume Limits

The free version of Google Analytics is limited to 10 million hits per month based on the terms of service. But, historically, Google has not been particularly aggressive about enforcing that limit. I’ve always assumed that is simply because, once you get to a high volume of traffic, any sort of mildly deep analysis will start running into sufficiently severe sampling issues that they figured, eventually, the site would upgrade to GA360.

But, now, there is a warning that gets a bit more in your face:

Interestingly, the language here is “may” rather than “will,” so there is no way of knowing if Google will actually shut down the account. But, they are showing that they are watching (or their machines are!).

Getting Serious about PII

Google has always taken personally identifiable information (PII) seriously. And, as the EU’s GDPR directive gets closer, and as privacy concerns have really become a topic that is never far below the surface, Google has been taking the issue even more seriously. Historically, they have said things like, “If we detect an email address is being passed in, we’ll just strip it out of the data.” But, now, it appears that they will also be letting you know that they detected that you were trying to pass PII in:

There isn’t a timeframe given as to when the account will be terminated, but note that the language here is stronger than the warning above: it’s “will be terminated” rather than “may be terminated.”

Competitive Dig

While the two new warnings above are really just calling out, in the UI, aspects of the terms of service, there are a few other new notifications that are a bit more pointed. For instance:

Wow. I sort of wonder if this was one that got past someone in the review process. The language is… wow. But, the link actually goes to a Google Survey that asks about differences between the platforms and the user’s preferences therein.

Data Quality Checks

Google also seems to have kicked up their machine learning quite a bit — to the point that they’re actually doing some level of tag completeness checking:

Ugh! As true as this almost certainly is, this is not going to drive the confidence in the data that analysts would like when business stakeholders are working in the platform.

The Flip Side of PII

Interestingly, while one warning calls out that PII is being collected on your site, Google also apparently is being more transparent/open about their knowledge of GA users themselves. These get to being downright creepy, and I’d be surprised if they actually stick around over the long haul (or, if they do, then I’d expect some sort of Big Announcement from Google about their shifting position on “Don’t Be Evil”). A few examples on that front:

My favorite new message, though, is this one:

Special thanks to Nancy Koons for helping me identify these new messages!

Adobe Analytics, Featured

Alarmduck – The Data Anomaly Slack App for Adobe Analytics

One of the most difficult parts of managing an Adobe Analytics implementation is uncovering data anomalies. For years, Adobe Analytics has offered an Alerts feature to try to address this, but very few companies end up using it. Recently, Adobe improved its Alerts functionality, in particular allowing you to add segments to Alerts, among a few other options. However, I still see very few companies engaging with Adobe Analytics Alerts, despite the fact that few people (or teams) have enough time to check every single Adobe Analytics report every day to find data anomalies.

Part of the issue with Alerts is the fact that many people don’t go into Adobe Analytics every day, so even if there were Alerts, they wouldn’t see them. Even the really cool data anomaly indicators in Analysis Workspace are only useful if you are in a particular report to see them. While Adobe Analytics Alerts can be sent via e-mail, those tend to get filtered into folders due to all of the noise, especially on weekends! To rectify this, I have even tried to figure out how to get Adobe Analytics Alerts into the place where I spend a lot of my time – Slack. But despite my best efforts, I still wasn’t able to get the right alerting from Adobe Analytics to the people who needed to see it. I felt like there had to be an easier way…

Introducing Alarmduck

It was around this time that I stumbled upon some folks building a tool called Alarmduck. The idea of Alarmduck is to make it super easy to be notified in Slack when data in your Adobe Analytics implementation has changed significantly. As a lover of Adobe Analytics and Slack, I found it to be the perfect union of my favorite technologies! Alarmduck uses the Adobe Analytics APIs to query your data and look for anomalies, and then the Slack APIs to post those anomalies into the Slack channel of your choosing.

For example, a few weeks ago, we had a tagging issue on our Demystified website that caused our bounce rate metric to break. The next day, here is what I saw in my Slack channel:

I was alerted right away, was able to see a graph and the data causing the anomaly and even had a link to the report in Adobe Analytics! In this case, we were able to fix the issue right away and minimize the amount of bad data in our implementation. Best of all, I saw the alert in the normal course of my work day, since it was automatically injected into Slack with all of my other communications.

Going From Good to Great

So, as I started using Alarmduck, I was pleased that my metrics (including Success Events) were automatically notifying me if something had changed significantly, but as you could imagine (being an Adobe Analytics addict), I wanted more! I got in touch with the founders of the company and shared with them all of the other stuff Alarmduck could be doing related to Adobe Analytics, such as:

  • Allowing me to get data anomalies for any eVar/sProp and metric combination (i.e. Product anomalies for Orders & Revenue or Tracking Code anomalies for Visits)
  • Allowing me to check multiple Adobe Analytics report suites
  • Allowing me to check Adobe Analytics Virtual Report Suites
  • Allowing me to apply Adobe Analytics segments to data anomaly checks
  • Allowing me to post different types of data anomaly alerts to different Slack channels
  • Allowing me to send data anomalies from different report suites to different Slack channels

As you could imagine, they were a bit overwhelmed, so I agreed to be their Adobe Analytics advisor (and partial investor) so they could tap into my Adobe Analytics expertise. While there were almost 100 companies already testing out the free beta release of the product, I was convinced that power Adobe Analytics users like me would eventually want more functionality and flexibility.

Over the last few months, the Alarmduck team has been hard at work and I am proud to say that all of the preceding features have been added to the product! While there are many additional features I’d still love to see added, the v1.0 version of the product is now available and packs quite a punch for a v1.0 release. Anyone can try the product for free for 30 days and then there are several tiers of payment based upon how many data anomaly reports you need. The following section will demonstrate how easy it is for you to create data anomaly alerts.

Creating Data Anomaly Reports

To get started with Alarmduck, you first have to login using the credentials of your Slack team (like any other Slack integration). When you do this, you will choose your Slack team and then identify the Slack channel into which you’d like to post data anomalies (you can add more of these later). You should make the channel in Slack first so it will appear in the dropdown list shown here:

 

Next, you will see an Adobe Analytics link in the left navigation and be asked to enter your Adobe Analytics API credentials:

If you are not an administrator of your Adobe Analytics implementation, you can ask the admin to get you your username and secret key, which is part of your Adobe Analytics User ID:

Next, you will add your first Adobe Analytics report suite:

(Keep in mind that in most cases, the preceding steps will only have to be done one time.)

Once you are done with this, Alarmduck will create your first data anomaly report for your first 30 metrics (you can use the pencil icon to customize which metrics you want it to check):

This will send metric alerts to the designated Slack channel once per day.

Beyond Metrics

The preceding metric anomaly alerts will be super useful, but if you want to go deeper, you can add segments, eVars, sProps, etc. To do this, click the “Add Report” button to get this window:

Next, you choose a report suite or a Virtual Report Suite (Exclude Excel Posts in this example). Once you do this, you will have the option to select a segment (if desired):

And then choose a dimension (eVar or sProp) if needed:

 

Lastly, you can choose the metrics for which you want to see data anomalies:

In this case, you would see data anomalies for a Virtual Report Suite, with an additional segment applied and see when there are data anomalies for Blog Post (eVar5) values for the Blog Post Views (event 3) metric (Note: At this time, Alarmduck checks the top 20 eVar/sProp dimension values over the last 90 days to avoid triggering data anomalies for insignificant dimension values). That shows how granular you can get with the new advanced features of Alarmduck (pretty cool huh?)!

When you are done, you can save and will see your new report in the report list on the Adobe Analytics page:

Here is a video of a similar setup process:

Summary

As you can see, adding reports is pretty easy once you have your Slack team and Adobe Analytics credentials in place. Once set up, you will begin receiving daily alerts in your designated Slack channel unless you edit or remove the report using the screen above. You can create up to 10 reports in the lowest tier package and during your 30-day free trial. After that, you can use a credit card and pay for the number of reports you need:

Since the trial is free and setting up a Slack team (if you don’t already have one) is also free, there is no reason to not try Alarmduck for your Adobe Analytics implementation. If you have any questions, feel free to ping me. Enjoy!

Adobe Analytics, Featured

Catch Me If You Can!

Being a Chicagoan, I tend to hibernate in the winter when it is too cold to go outside, but as Spring arrives, I will be hitting the road and getting back out into the world! If you’d like to hear me speak or chat about analytics, here are some places you can find me:

US Adobe Summit

Next week I will be attending what I believe is my 14th US Adobe Summit (which makes me sound pretty old!). It is in Las Vegas again this year and I am sure will be bigger than ever.

At the conference, I will be doing a session on Adobe Analytics “Worst Practices” in which I highlight some of the things I have seen companies do with Adobe Analytics that you may want to avoid. I have had a great time identifying these and have had the help of many in the Adobe Analytics community. This session is meant for those with a bit of experience in the product, but should make sense to most novices as well. Here is a link to the session in case you want to pre-register (space is limited): https://adobesummit.lanyonevents.com/2017/connect/sessionDetail.ww?SESSION_ID=4340&tclass=popup#.WMbsL2nTGYY.twitter

In addition to this session, I will also be co-presenting with my friends from ObservePoint to share an exciting new product they are launching related to Adobe Analytics. Many of my clients use ObservePoint, which is highly complementary to Adobe Analytics, and this session should be useful to those who focus on implementing Adobe Analytics. Here is a link to that session: https://adobesummit.lanyonevents.com/2017/connect/sessionDetail.ww?SESSION_ID=4320&tclass=popup#.WMbrvEN9wcM.twitter

Last, but not least, I will be stopping by the SweetSpot Intelligence booth (#1046) on Wednesday March 22nd @ 4:00 PST to sign the last hardcopies of my book in existence! As you may have seen in some of my recent tweets, Amazon is no longer producing hardcopies of my Adobe Analytics book. I have 25 of these hardcopies left and am selling the last 10 on Amazon and the remaining 15 will be auctioned off by Sweetspot Intelligence during Adobe Summit and signed by yours truly Wednesday @ 4:00. This is your last chance to get a physical copy of my book and a signed one to boot! So if you want a copy of my book, make sure to stop by their booth on Tuesday and find out how to win a copy.

EMEA Adobe Summit

In addition to the US Adobe Summit, I will also be attending the EMEA Adobe Summit in the UK. I have been to this event a few times and it is a bit smaller than the US version, but just as much fun! I will be presenting there with my friend Jan Exner, who is one of the best Adobe Analytics folks I know, so it should be a great session. We are still working out the details on that session now, but you will not want to miss it!

Chicago Adobe Analytics “Top Gun” Class

On May 17th in Chicago, I will be hosting my annual Adobe Analytics “Top Gun” class for those who want to go really deep into the Adobe Analytics product. You can learn more about that class in this blog post.

A4 Conference – Lima, Peru

The following week, I will be speaking at the A4 Conference in Lima, Peru. This will be my first time to Peru and I am excited to use my Spanish skills once again and to meet marketers from South and Latin America!

eMetrics Chicago

In June, I will be back home and attending the Chicago eMetrics conference where I will be sharing information about the success of the DAA’s Analysis Recipe initiative and enjoying having analysts come visit my hometown when the weather is actually warm!

So that is where I will be! If you happen to be anywhere near these places, I’d love to see you. In addition, you can see all of the places my Demystified Partners will be by clicking here.

Analysis, Featured

3-Day Training: R & Statistics for the Digital Analyst – June 13-15 (Columbus, OH)

One challenge I found over the course of last year as I worked to learn R and learn how to apply statistics in a meaningful way to digital analytics data was that, while there is a wealth of information on both subjects, there is limited information available that speaks directly to working with digital analytics data. The data isn’t necessarily all that special, but even something as (theoretically) simple as translating web analytics “dimensions and metrics” to “variables” (multi-level factors, continuous vs. categorical variables, etc.) sent me into multiple mental circles.

In an effort to shorten that learning curve for other digital analysts, Mark Edmondson from IIH Nordic and I recruited Dr. Michael Levin from Otterbein University and have put together a 3-day training class:

  • Dates: June 13, 2017 – June 15, 2017
  • Location: 95 Liberty Street, Columbus, OH, 43215
  • Early Bird Price (through March 15, 2017): $1,695
  • Full Registration (after March 15, 2017): $1,995
  • Event Website

Course Description

The course is a combination of lectures and hands-on examples. The goal is that every attendee will leave with a clear understanding of:

  • The syntax and structure of the R language, as well as the RStudio interface
  • How to automatically pull data from web analytics and other platforms
  • How to transform and manipulate data using R
  • How to visualize data with R
  • How to troubleshoot R scripts
  • Various options for producing deliverables directly from R
  • The application of core statistics concepts and methods to digital analytics data

The course is broken down into three core units, with each day being devoted to a specific unit, and the third day bringing together the material taught on the first two days:

The first and third days have a heavy hands-on component to them.

Who Should Attend?

This training is primarily for digital analysts who have hit the limits of what can be done effectively with Microsoft Excel, the native interfaces of digital analytics platforms, and third party platforms like Tableau. Specifically, it is for digital analysts who are looking to:

  • Improve their efficiency and effectiveness when it comes to accessing and manipulating data from digital/social/mobile/internal platforms
  • Increase the analytical rigor they are able to apply to their work – applying statistical techniques like correlation, regression, standardization, and chi square so they can increase the value they deliver to their organizations

Attendees should be relatively well-versed in digital analytics data. We will primarily be working with Google Analytics data sets in the course, but the material itself is not platform-specific, and the class discussion will include other platforms as warranted based on the make-up of the attendees.

Attendees who currently work (or have dabbled with) R or statistics are welcome. The material goes “beyond the basics” on both subjects. But, attendees who have not used R at all will be fine. We start with the basics, and those basics are reinforced throughout the course.

Oh… and Columbus, Ohio, in June is a great place to be. The class includes meals and evening activities!

Head over to the event website for additional details and to register!

 

Adobe Analytics, Featured

Inter-Site Pathing

Some of my clients have many websites that they track with Adobe Analytics. Normally, this is done by having a different Report Suite for each site and then a Global Report Suite that combines all data. In some of these cases, my clients are interested in seeing how often the same person, in the same visit, views more than one of their websites. In this post, I will share some ways to do this and also show an example of how you can see the converse – how often visitors view only one of the sites instead of multiple.

Multi-Site Pathing

The first step in seeing how often visitors navigate to your various properties is to capture some sort of site ID or name in an Adobe Analytics variable. Since you want to see navigation, I would suggest using an sProp, though you can now see similar data with an eVar in Analysis Workspace Path reports. If you capture the site identifier on every hit of every site and enable Pathing, then in the Global Report Suite you will be able to see all navigation behavior. For example, here is a Next Flow report showing all site visits after viewing the “Site1” site:
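As a simple data collection sketch, the site identifier can be set on every hit of every site; the variable slots shown here are hypothetical placeholders:

  // Set on every page of every site (variable numbers are hypothetical)
  s.prop10 = "Site1";     // site identifier sProp, with Pathing enabled in the admin console
  s.eVar10 = "Site1";     // optional eVar copy for Workspace flow/path analysis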

 

Here we can see that roughly 42% of visits remained in the “Site1” site, but when visitors did navigate to other sites, it was to the “Site2” or “Site3” sites. You can switch which site is your starting point at any time and also see reverse flows to see how visitors got to each site. You can also see which sites are most often Entries and Exits, all through the normal pathing reports.

Single Site Usage

Now let’s imagine that upon seeing a report like the one above, you notice that there is a high exit rate for “Site1,” meaning that most visitors are only viewing “Site1” and not other sites owned by the company. Based upon this, you decide to dig deeper and see which sites do better and worse when it comes to inter-site pathing.

The easiest place to start is to go to your Global Report Suite, open the Full Paths report for the “site” variable, and then pick one of your sites (in this case “Site1”) where shown in red below:

This report shows you all of the paths that include your chosen site (“Site1” in this case). Next, you can add this report to a dashboard so you see a reportlet like this:

You can now do the same for each site and see which ones are “one and done” and which are leading people to other company-owned sites. For some clients, I add a bunch of these reportlets to a single dashboard to get a bird’s eye view of what is going on with all of the sites.

Trending Data

However, the preceding reports only answer part of the question, since they only show a snapshot in time (the month of February in this case). Another thing you may want to look at is the trend of single site usage. Getting this information takes a bit more work. First, you will want to create a segment for each of your sites in which you look for Visits that view a specific site and no other sites. This can be done by using an include and exclude container in the segment builder. Here is an example in which you are isolating Visits in which “Site1” is viewed and no other sites are viewed:
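In words, the logic of that segment is roughly:

  Site1 Only Visits (Visit container):
    INCLUDE visits containing a hit where Site equals "Site1"
    EXCLUDE visits containing any hit where Site is a value other than "Site1"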

Once you save this segment, you can apply it to the Visits report and see a trend of single site visits for “Site1” over time, as shown here:

You will have to build a different segment for each of your sites, but you can do that easily by using the Save As feature in the segment builder.

Lastly, since all of the cool kids are using Analysis Workspace these days, you can re-use the segments you created above in Analysis Workspace, apply them to the Visits metric and then graph the trends of as many sites as you want. Below, I am trending two sites using raw numbers, but I could have just as easily trended the percentages if that were more relevant, and added more sites if I wanted. This allows you to visually compare the ups and downs of each site’s single site usage in one nice view.

Summary

So to conclude, by using a site identifier, Pathing reports and Analysis Workspace, you can begin to understand how often visitors are navigating between your sites or using just one of them. The same concept can be applied to Site Sections within one site as well. To see that, you simply have to pass a Site Section value to the s.channel sProp and repeat the steps above. So if you have multiple sites that you expect visitors to view in the same session, consider trying these reports to conduct your analysis.