Adobe Analytics, Featured

A Product Container for Christmas

Dear Adobe,

For Christmas this year I would like a product container in the segment builder. It is something I’ve wanted for years, and if you saw my post on Product Segmentation Gotchas you’ll know there are ways people can inevitably end up with bad data when using product-level dimensions with any of the existing containers. Because there can be multiple products per hit, segmenting on product attributes can be tough. Really, any bad data comes from a misuse of how the segment builder currently works, but adding this functionality would expand what the builder can do. A product container is also interesting because it isn’t necessarily smaller in scope than a visit or hit; a single product could span visitors, visits, and hits. So, because of all this, I would love a new container for Christmas.

Don’t get me wrong, I love the segment builder. This would be another one of those little features that adds to the overall amazingness of the product. Or, since containers are a pretty fundamental aspect of the segment builder, maybe it’s much more than just a “little feature”? Hmmm, the more I think about it in those terms the more I think it would be a feature of epic proportions 🙂

How would this work?

Good question! I have some ideas around that. I imagine a product container working similarly to a product line item in a tabular report. In a table visualization, everything on that row is limited to what is associated with that product. Usually we just use product-specific metrics in those types of reports, but if you were to pull in a non-product-specific metric, it would pull in whatever values were in effect for the hit the product was on at the time. So really it wouldn’t be too different from how data is generated now. The big change is making it accessible in the segment builder.

Here’s an example of what I mean. Let’s use the first scenario from the Product Segmentation Gotchas post. We are interested in segmenting for visits that placed an order where product A had a “2_for_1” discount applied. Let’s say that we have a report suite that has only two orders like so:

Order #101 (the one we want)

Product | Unit Price | Units | Net Revenue | Discount Code | Marketing Channel
A | $10 | 2 | $10 | 2_for_1 | PPC
B | $6 | 2 | $12 | none |

Notice that product A has the discount and this visit came from a PPC channel.

Order #102 (the one we don’t want)

Product | Unit Price | Units | Net Revenue | Discount Code | Marketing Channel
A | $10 | 2 | $20 | none | Email
B | $6 | 2 | $6 | 2_for_1 |

Notice that product B has the discount now and this visit came from an Email channel.
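
To make the underlying data concrete, here is a rough sketch of what the two order-confirmation hits might look like if the discount code is captured in a product-syntax merchandising eVar (eVar3 here; the variable number and product names are assumptions):

```javascript
// Order #101 confirmation hit: the 2_for_1 discount rides along with product A
// inside the products string (the marketing channel is determined separately,
// e.g. via Marketing Channels processing rules).
s.events = "purchase";
s.purchaseID = "101";
s.products = ";Product A;2;10.00;;eVar3=2_for_1," +
             ";Product B;2;12.00;;eVar3=none";
s.t();

// Order #102 confirmation hit: same discount code, but tied to product B.
s.events = "purchase";
s.purchaseID = "102";
s.products = ";Product A;2;20.00;;eVar3=none," +
             ";Product B;2;6.00;;eVar3=2_for_1";
s.t();
```

In a table visualization, each product line item keeps its own eVar3 value, which is exactly the per-product association the theoretical product container would let us segment on.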

Here is the resulting report if we were to not use any segments. You’ll notice that everything lines up great in this view and we know exactly which discount applied to which product.

The Bad

Now let’s get on to our question and try to segment for the visits that had the 2_for_1 discount applied to product A. In the last post I already mentioned that this segment is no good:

If you were to use this to get some high-level summary data it would look like this:

Notice that it doesn’t look any different from the All Visits segment. The reason for this is that we just have two orders in our dataset and each of them has product A and a 2_for_1 discount. To answer our question we really need a way to associate the discount specifically with product A.

The Correct Theoretical Segment

Using my theoretical product container, the correct segment would look something like the image below. Here I’m using a visit-level outer container but my inner container is set to the product level (along with a new cute icon, of course). Keep in mind this is fake!

The results of this would be just what we wanted which is “visits where product A had a ‘2_for_1’ discount applied”.

This visit had an order of multiple products so the segment would include more than just product A in the final result. The inner product container would qualify the product and the outer visit container would then qualify the entire visit. This results in the whole order showing up in our list. We are able to answer our question and avoid including the extra order that was unintentionally included with the first segment. 

Even More Specific

Let’s refine this and say that we wanted just the sales from product A in our segment. The correct segment would look like this with my theoretical product scope in the outer container.

And the results would be trimmed down to just the product-specific dimensions and metrics like so:

Notice that this gives us a single row that is just like the line item in the table report! Now you can see that we have great flexibility to get to just what we want when it comes to product-level dimensions.

Summary

Wow, that was amazing! Fake data and mockups are so cooperative! This may seem a little boring with such a simple example, but when thousands of products are involved the table would be a mess and I’d be pretty grateful for this feature. There are a bunch of other ways this feature could be useful at different levels, like wrapping it in visit or visitor containers or working with non-product-specific metrics, but this post is already well past my attention span limits. Hopefully this is enough to explain the idea. I know Christmas is getting pretty close, so I’d be glad to accept it as a belated gift on MLK Day instead. Thanks Adobe!

Sincerely,

Kevin Willeitner

 

P.S. For others that might be reading this, if you’d like this feature to be implemented, please vote for it here. After some searching I also found that several people have asked for related capabilities, so vote for theirs as well! Those are linked in the idea post.

Adobe Analytics, Featured

My Favorite Analysis Workspace Right-Clicks – Part 2

In my last blog post, I began sharing some of my favorite hidden right-click actions in Analysis Workspace. In this post, I continue where I left off (since that post was getting way too long!). Most of these items are related to the Fallout visualization since I find that it has so many hidden features!

Freeform Table – Change Attribution Model for Breakdowns

Attribution is always a heated topic. Some companies are into First Touch and others believe in Last Touch. In many cases, you have to agree as an organization on which attribution model to use, especially when it comes to marketing campaigns. However, what if you want to use multiple attribution models? For example, let’s say that as an organization, you decide that the over-arching attribution model is Last Touch, meaning that the campaign source that took place closest to the success (Order, Blog Post View, etc.) is the one that gets credit. Here is what this looks like for my blog:

However, what if, at the tracking code level, you want to see attribution differently? For example, what if you decide that once the Last Touch model is applied to the campaign source, you want to see the specific tracking codes leading to Blog Posts allocated by First Touch? Multiple allocation models are available in Analysis Workspace, but this feature is hidden. How to use multiple concurrent attribution models is described below.

First, you want to break down your campaign source into tracking codes by right-clicking and choosing your breakdown:

You can see that the breakdown is showing tracking codes by source and that the attribution model is Last Touch | Visitor (highlighted in red above). However, if you hover your mouse over the attribution description of the breakdown header, you can see an “Edit” link like this:

Clicking this link allows you to change the attribution model for the selected metric for the breakdown rows. In this case, you can view tracking codes within the “linkedin-post” source attributed using First Touch Attribution and, just for fun, you can change the tracking code attribution for Twitter to an entirely different attribution model (both shown highlighted in red below):

So with a few clicks, I have changed my freeform table to view campaign source by Last Touch, but then within that, tracking codes from LinkedIn by First Touch and Twitter by J Curve attribution. Here is what the new table looks like side-by-side with the original table that is all based upon Last Touch:

As you can see, the numbers can change significantly! I suggest you try out this hidden tip whenever you want to see different attribution models at different levels…

Fallout – Trend

The next right-click I want to talk about has to do with the Fallout report. The Fallout report in Analysis Workspace is beyond cool! It lets you add pages, metrics and pretty much anything else you want to it to see where users are dropping off your site or app. You can also apply segments to the Fallout report holistically or just to a specific portion of the Fallout report. In this case, I have created a Fallout report that shows how often visitors come to our home page, eventually view one of my blog posts and then eventually view one of my consulting services pages:

Now, let’s imagine that I want to see how this fallout is trending over time. To do this, right-click anywhere in the fallout report and choose the Trend all touchpoints option as shown here:

Trending all touchpoints produces a new graph that shows fallout trended over time:

Alternatively, you can select the Trend touchpoint option for a specific fallout touchpoint and see one of the trends. Seeing one fallout trend provides the added benefit of being able to see anomaly detection within the graph:

Fallout – Fall-Through & Fall-Out

The Fallout visualization also allows you to view where people go directly after your fallout touchpoints. Fallthrough reporting can help you understand where they are going if they don’t go directly to the next step in your fallout. Of course, there are two possibilities here: some visitors eventually do make it to the remaining steps in your fallout and others do not. Therefore, Analysis Workspace provides right-clicks that show you where people went in both situations. The Fallthrough scenario covers cases where visitors do eventually make it to the next touchpoint; right-clicking and selecting that option looks like this:

In this case, I want to see where people who have completed the first two steps of my fallout go directly after the second step, but only for cases in which they eventually make it to the third step of my fallout. Here is what the resulting report looks like:

As you can see, there were a few cases in which users went directly to the pages I wanted them to go to (shown in red), but now I can also see where they deviated and view those deviations in descending order.

The other option is to use the fallout (vs. fallthrough) option. Fallout shows you where visitors went next if they did not eventually make it to the next step in your fallout. You can choose this using the following right-click option:

Breakdown fallout by touchpoint produces a report that looks like this:

Another quick tip related to the fallout visualization that some of my clients miss is the option to make fallout steps immediate instead of eventual. At each step of the fallout, you can change the setting shown here:

Changing the setting to Next Hit narrows down the scope of your fallout to only include cases in which visitors went directly from one step to the next. Here is what my fallout report looks like before and after this change:

Fallout – Multiple Segments

Another cool feature of the fallout visualization is that you can add segments to it to see fallout for different segments of visitors. You can add multiple segments to the fallout visualization. Unfortunately, this is another “hidden” feature because you need to know that this is done by dragging over a segment and dropping it on the top part of the visualization as shown here:

This shows a fallout that looks like this:

Now I can see how my general population falls out and also how it is different for first-time visits. To demonstrate adding multiple segments, here is the same visualization with an additional “Europe” segment added:

Going back to what I shared earlier, right-clicking to trend touchpoints with multiple segments added requires you to click precisely on the part that you want to see trended. For example, right-clicking on the Europe Visits step two shows a different trend than clicking on the 1st Time Visits bar:

Therefore, clicking on both of the different segment bars displays two different fallout trends:

So there you have it. Two blog posts worth of obscure Analysis Workspace features that you can explore. I am sure there are many more, so if you have any good ones, feel free to leave them as a comment here.

Adobe Analytics, Featured

Product Segmentation Gotchas

If you have used Adobe Analytics segmentation you are likely very familiar with the hierarchy of containers. These containers define the scope of the criteria wrapped inside them and are available at the visitor, visit, and hit levels. They help you control exactly what happens at each of those levels, and your analysis can be heavily impacted by which one you use. They are extremely useful and handle most use cases.

When doing really detailed analysis related to products, however, the available containers can cause confusion. This is because there can be multiple products per visitor, visit, or hit. Scenarios like a product list page or checkout pages, when analyzed at a product level, can be especially problematic. Obviously this has a disproportionate impact on retailers, but other industries may also be affected if they use the products variable to facilitate fancy implementations. Any implementation that needs to collect attributes with a many-to-many relationship may need to leverage the products variable.

Following are a few cases illustrating where this might happen so be on the lookout.

Product Attributes at Time of Order

Let’s say you want to segment for visits that purchased a product with a discount. Or, rather than a discount, it could be a flag indicating the product should be gift wrapped. It could even be some other attribute that you want passed “per product” on the thank you page. Using the scenario of a discount, if a product-level discount (e.g. 2 for 1 deal) is involved and that same discount can apply to other products, you won’t quite be able to get the right association between the two dimensions. You may be tempted to create a segment like this:

However, this segment can disappoint you. Imagine that your order includes two products (product A and product B) and product B is the one that has the “2_for_1” discount applied to it (through a product-syntax merchandising eVar). In that case the visit will still qualify for our segment because our criteria are applied at the hit level (note the red arrow). This setting results in the segment looking for a hit with product A and a code of “2_for_1”, but it doesn’t care beyond that. This segment will include the correct results (the right discount associated with the right product), but it will also include undesired results such as the right discount associated with the wrong product. This happens when the correct product just so happened to be purchased at the same time. In the end you are left with a segment you shouldn’t use.
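
To see why the hit-level criteria match the wrong order, consider this sketch of an order-confirmation hit where product B carries the discount (treating eVar3 as the product-syntax merchandising eVar is an assumption):

```javascript
// Product B has the 2_for_1 discount; product A does not.
s.events = "purchase";
s.products = ";Product A;1;10.00;;eVar3=none," +
             ";Product B;1;6.00;;eVar3=2_for_1";

// A hit-level segment of "Product equals A AND eVar3 equals 2_for_1" still
// matches this hit: some product on the hit is A, and some product on the hit
// has eVar3 = 2_for_1, but the two conditions are never tied to the same product.
```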

This example is centered around differing per-product attributes at the time of an order, but really the event doesn’t matter. This could apply any time you have a bunch of products collected at once that may each have different values. If multiple products are involved and your implementation is (correctly) using merchandising eVars with product syntax, then this will be a consideration for you.

Differentiating Test Products

I once had a super-large retailer run a test on a narrow set of a few thousand products. They wanted to know what kind of impact different combinations of alternate images available on the product detail page would have on conversion. This included still images, lifestyle images, 360 views, videos, etc. However, not all products had comparable alternate images available. Because of this they ran the test only across products that did have comparable imagery assets. This resulted in the need to segment very carefully at a product level. Inevitably they came to me with the question “how much revenue was generated by the products that were in the test?” This is a bit tricky because in A/B tests we normally look at visitor-level data for a certain timeframe. If someone in the test made a purchase and the test products were only a fraction of the overall order then the impact of the test could be washed out. So we had to get specific. Unfortunately, through a segment alone we couldn’t get good summary information.

This is rooted in the same reasons as the first example. If you were to only segment for a visitor in the test then your resulting revenue would include all orders for that visitor while in that test. From there you could try to get more specific and segment for the products you are interested in; however, the closest you’ll get is order-level revenue containing the right products. You’ll still be missing the product-specific revenue for the right products. At least you would be excluding orders placed by test participants that didn’t have the test products at all…but a less-bad segment is still a bad segment 🙂

Changes to Product Attributes

This example involves the fulfillment method of the product. Another client wanted to see how people changed their fulfillment method (ship to home, ship to store, buy online/pickup in store) and was trying to work around a limited implementation. The implementation was set up to answer “what was the fulfillment method changed to?” but what they didn’t have built in was this new question — “of those that start with ship-to-home products in the cart, how often is that then changed to ship to store?” Also important is that each product in the cart could have different fulfillment methods at any given time.

In this case we can segment for visits that start with some product with a ship-to-home method. We can even segment for those that change the fulfillment method. We get stuck, though, when trying to associate the two events together by a specific product. You’re left without historical data, resorting instead to implementation enhancements.

Other Options

The main point of this post is to emphasize where segmenting on products could go wrong. There are ways to work around the limitations above, though. Here are a few options to consider:

  • In the case of the product test, we could apply a classification to identify which products are in the test. Then you would just have to use a table visualization, add a dimension for your test groups, and break that down by this new classification. This will show you the split of revenue within the test group.
  • Turn to the Adobe Data Feed and do some custom crunching of the numbers in your data warehouse.
  • Enhance your implementation. In the case of the first scenario, where persistence isn’t needed, you could get away with appending the product to the attribute to provide the uniqueness you need (see the sketch after this list). That may, though, give you some issues with the number of permutations it could create. Depending on how into this you want to get, you could even try some really crazy/fun stuff like rewriting the visitor ID to include the product. This results in some really advanced product-level segmentation. No historical data available, though.
  • Limit your dataset to users that just interacted with or ordered one product to avoid confusion with other products. Blech! Not recommended.
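
Here is a rough sketch of the appending idea from the third bullet above (eVar3, the product names, and the “:” delimiter are all assumptions; just avoid characters that are reserved in product syntax, such as commas, semicolons, and pipes):

```javascript
// Make the discount value self-describing by prefixing the product, so a
// plain hit-level segment on eVar3 = "Product A:2_for_1" is unambiguous.
s.events = "purchase";
s.products = ";Product A;2;10.00;;eVar3=Product A:2_for_1," +
             ";Product B;2;12.00;;eVar3=Product B:none";
```

The trade-off, as noted above, is a much larger number of unique values in that eVar.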

Common Theme

You’ll notice the common thread in all of these examples is that we are leveraging product-specific attributes (merchandising eVars) and trying to tease out specific products from other products based on those attributes. Given that none of the containers perfectly matches the scope of a product, you may run into problems like those described above. Have you come across other segmenting-at-a-product-level problems? If so, please comment below!

 

Adobe Analytics, Featured

My Favorite Analysis Workspace Right-Clicks – Part 1

If you use Adobe Analytics, Analysis Workspace has likely become your indispensable tool of choice for reporting and analysis. As I mentioned back in 2016, Analysis Workspace is the future and where Adobe is concentrating all of its energy these days. However, many people miss all of the cool things they can do with Analysis Workspace because much of it is hidden in the [in]famous right-click menus. Analysis Workspace gurus have learned “when in doubt, right-click” while using Analysis Workspace. In this post, I will share some of my favorite right-click options in Analysis Workspace in case you have not yet discovered them.

Freeform Table – Compare Attribution Models

If you are an avid reader of my blog, you may recall that I recently shared that a lot of attribution in Adobe Analytics is shifting from eVars to Success Events. Therefore, when you are using a freeform table in Analysis Workspace, there may be times when you want to compare different attribution models for a metric you already have in the table. Instead of forcing you to add the metric again and then modify its attribution model, you can now choose a second attribution model right from within the freeform table. To do this, just right-click on the metric header and select the Compare Attribution Model option:

This will bring up a window asking you which comparison attribution model you want to use that looks like this:

Once you select that, Analysis Workspace will create a new column with the secondary attribution model and also automatically create a third column that compares the two:

My only complaint here is that when you do this, it becomes apparent that you can’t tell which attribution model was being used for the column you had in the first place. I hope that, in the future, Adobe will start putting attribution model indicators underneath every metric added to freeform tables, since the first metric column above looks a bit confusing and only an administrator would know what its allocation is, based upon the eVar settings in the admin console. Therefore, my bonus trick is to use the Modify Attribution Model right-click option and set it to the correct model:

In this case, the original column was Last Touch at the Visitor level, so modifying this keeps the data as it was, but now shows the attribution label:

This is just a quick “hack” I figured out to make things clearer for my end-users… But, as you can see, all of this functionality is hidden in the right-click of the Freeform table visualization. Obviously, there are other uses for the Modify Attribution Model feature, such as changing your mind about which model you want to use as you progress through your analysis.

Freeform Table – Compare Date Range

Another handy freeform table right-click is the date comparison. This allows you to pick a date range and compare the same metric for the before and after range and also creates a difference column automatically. To do this, just right-click on the metric column of interest and specify your date range:

This is what you will see after you are finished with your selection:

In this case, I am looking at my top blog posts from October 11 – Nov 9 compared to the prior 30 days. This allows me to see how posts are doing in both time periods and see the percent change. In your implementation, you might use this technique to see how Orders and Revenue have changed for your products.

Cohort – Create Segment From Cell

If you have situations on your website or mobile app that require you to see if your audience is coming back over time to perform specific actions, then the Cohort visualization can be convenient. By adding the starting and ending metric to the Cohort visualization, Analysis Workspace will automatically show you how often your audience (“cohorts”) are returning. Here is what my blog Cohort looks like using Blog Post Views as the starting and ending metrics:

While this is interesting, what I like is my next hidden right-click. This is the ability to automatically create a segment from a specific cohort cell. There are many times where you might want to build a segment of people who came to your site, did something and then came back later to do either the same thing or a different thing. Instead of spending a lot of time trying to build a segment for this, you can create a Cohort table and then right-click to create a segment from a cell. For example, let’s imagine that I notice a relatively high return rate the week after September 16th. I can right-click on that cell and use the Create Segment from Cell option:

This will automatically open up the segment builder and pre-populate the segment, which may look like this:

From here you can modify the segment any way you see fit and then save it. Then you can use this segment in any Adobe Analytics report (or even make a Virtual Report Suite from it!). This is a cool, fast way to build cohort segments! Sometimes, I don’t even keep the Cohort table itself. I merely use the Cohort table to make the segment I care about. I am not sure if that is smart or lazy, but either way, it works!

Venn – Create Segment From Cell

As long as we are talking about creating segments from a visualization, I would be remiss if I didn’t mention the Venn visualization. This visualization allows you to add up to three segments and see the overlap between all of them. For example, let’s say that for some crazy reason I need to look at people who view my blog posts, are first-time visitors and are from Europe. I would just drag over all three of these segments and then select the metric I care about (Blog Post Views in this case):

This would produce a Venn diagram that looks like this:

While this is interesting, the really cool part is that I can now right-click on any portion of the Venn diagram to get a segment. For example, if I want a segment for the intersection of all three segments, I just right-click in the region where they all overlap like this:

This will result in a brand new segment builder window that looks like this:

From here, I can modify it, save it and use it any way I’d like in the future.

Venn – Add Additional Metrics

While we are looking at the Venn visualization, I wanted to share another secret tip that I learned from Jen Lasser while we traveled the country performing Adobe Insider Tours. Once you have created a Venn visualization, you can click on the dot next to the visualization name and check the Show Data Source option:

This will expose the underlying data table that is powering the visualization like this:

But the cool part is what comes next. From here, you can add as many metrics as you want to the table by dragging them into the Metrics area. Here is an example of me dragging over the Visits metric and dropping it on top of the Metrics area:

Here is what it looks like after multiple metrics have been added (my implementation is somewhat lame, so I don’t have many metrics!):

But once you have numerous metrics, things get really cool! You can click on any metric, and the Venn visualization associated with the table will dynamically change! Here is a video that shows what this looks like in real life:

This cool technique allows you to see many Venn visualizations for the same segments at once!

Believe it or not, that is only half of my favorite right-clicks in Analysis Workspace! Next week, I will share the other ones, so stay tuned!

Adobe Analytics, Featured

New Adobe Analytics Class – Managing Adobe Analytics Like A Pro!

While training is only a small portion of what I do in my consulting business, it is something I really enjoy. Training allows you to meet with many people and companies and help them truly understand the concepts involved in a product like Adobe Analytics. Blog posts are great for small snippets of information, but training people face-to-face allows you to go so much deeper.

For years, I have provided general Adobe Analytics end-user training for corporate clients and, more recently, Analysis Workspace training. But my most popular class has always been my Adobe Analytics “Top Gun” Class, in which I delve deep into the Adobe Analytics product and teach people how to really get the most out of their investment in Adobe Analytics. I have done this class for many clients privately and also offer public versions of the class periodically (click here to have me come to your city!).

In 2019, I am launching a brand new class related to Adobe Analytics! I call this class:

Having worked with Adobe Analytics for fifteen years now (yeesh!), I have learned a lot about how to run a successful analytics program, especially those using Adobe Analytics. Therefore, I have attempted to put all of my knowledge and best practices into this new class. Some of the things I cover in the class include:

  • How to run an analytics implementation based upon business requirements
  • What does a fully functioning Solution Design Reference look like and how can you use it to track implementation status
  • Why data quality is so important and what steps can you take to minimize data quality issues
  • What are best practices in organizing/managing your Adobe Analytics implementation (naming conventions, admin settings, etc…)
  • What are the best ways to train users on Adobe Analytics
  • What team structures are available for an analytics team and which is best for your organization
  • How to create the right perception of your analytics team within the organization
  • How to get executives to “buy-in” to your analytics program

These are just some of the topics covered in this class. About 70% of the class applies to those using any analytics tool (i.e. Adobe, GA, etc…), but there are definitely key portions that are geared towards Adobe Analytics users.

I decided to create this class based on feedback from people attending my “Top Gun” Class over the years. Many of the attendees were excited about knowing more about the Adobe Analytics product, but they expressed concerns about running the overall analytics function at their company. I have always done my best to share ideas, lessons, and anecdotes in my conference talks and training classes, but in this new class, I have really formalized my thinking in hopes that class participants can learn from what I have seen work over the past two decades.

ACCELERATE

This new class will be making its debut at the Analytics Demystified ACCELERATE conference this January in California. You can come to this class and others at our two-day training/conference event, all for under $1,000! In addition to this class and others, you also have access to our full day conference with great speakers from Adobe, Google, Nordstrom, Twitch and many others. I assure you that this two-day conference is the best bang for the buck you can get in our industry! Unfortunately, space is limited, so I encourage you to register as soon as possible.

Tag Management

Adobe Launch Linking to DTM

Earlier this week I mentioned a feature that allows you to link your Adobe Launch files to your old DTM files. Some have asked me for more details so you now get this follow-up post.

Essentially this feature allows you to make the transition from DTM to Launch easier for sites that were already implemented with DTM. How does it make it easier? Well, let’s say you are an implementation manager who spent years getting DTM in place across 100+ sites that are each running on different platforms and managed by a multitude of internal and external groups. That isn’t a process that most people get excited to revisit. To avoid all that, Adobe has provided this linking feature. As you create a new configuration in Launch those Launch files can just replace your DTM files.

Let’s imagine that you have a setup where a variety of sites are currently pointing to DTM code and your newer implementations are pointing to Launch code. This assumes you are using one property across many sites which may or may not be a good idea depending on your needs. You could visualize it like below where the production environment for both products is used by different sites.

Once you enable the linking, the production code is now shared between the two products. The new visual would look something like this:

It is only one-way sharing, though. If you were to link and then publish from DTM, that would not impact your Launch files; it would only impact the DTM files. It’s best to get to the point where you have published in Launch and then just disable the DTM property.

How to Enable Linking

Here is how it is done if you are starting from a brand new property in Launch. You should do these steps before any site is using the Production embed script from Launch. This is because Adobe will give you a new embed code during this process.

  1. The new property will already have Environments enabled (this may be new; I was under the impression you had to create the environments from scratch). Find your Production environment, consider the warning above, and, if all is well, delete it.
  2. Once it is deleted, hit the Add Environment button and select Production. This will allow you to add a new Production environment to replace the one you just deleted.
  3. As you are configuring the environment, just toggle “Link DTM embed code” on and paste in your DTM embed code.
  4. Save your settings and, if everything checks out OK, you will be given new Production embed code. This embed code is what you would use for any production sites.

Other Considerations

  • The embed code will change every time you delete and add a new Production environment. You’ll want sites with the Launch embed code to have the latest version. I haven’t tested what will happen if you try to implement a site with the old Production embed code. It makes me uneasy, though, so I would just avoid it.
  • Note that in my picture above I only show the Production environment being shared. This actually brings up an important point around testing. If you have a staging version of the old sites that uses the staging version of the DTM script then you really can’t test the migration to Launch. The linking only updates the production files. But really you neeeed to test. In order to do this I would recommend just using a tool like Charles or Chrome overrides to rewrite the DTM embed code to your Launch embed code.
  • Watch out for old methods. When Adobe warned of the transition from DTM to Launch they noted that only the methods below will be supported. If you did something crazy on your site that has outside-of-DTM scripts using something in the _satellite object then you’ll need to figure out an alternative (see the defensive sketch after this list). Once you publish your Launch files to the DTM location, any other methods previously made available by DTM may not be there anymore. Here are the methods that you can still use:
    • _satellite.notify()
    • _satellite.track()
    • _satellite.getVar()
    • _satellite.setVar()
    • _satellite.getVisitorId()
    • _satellite.setCookie()
    • _satellite.readCookie()
    • _satellite.removeCookie()
    • _satellite.isLinked()
  • You can see Adobe’s documentation around this feature here. Especially important are the prerequisites for enabling the linking (DTM and Launch need to be associated with the same org, etc.).
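
As a defensive pattern for the “old methods” point above, here is a sketch of how an outside-of-the-TMS script might stick to the supported methods (the data element and direct call rule names are hypothetical):

```javascript
// Guard against _satellite being absent (e.g. the library is blocked) and
// only call methods from the supported list above.
if (window._satellite && typeof window._satellite.track === "function") {
  _satellite.setVar("checkoutStep", "payment"); // hypothetical data element
  _satellite.track("checkout-step");            // hypothetical direct call rule
}
```
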
Tag Management, Uncategorized

Thankful Launch Features

In honor of Thanksgiving last week I wanted to take a moment to provide a possibly odd mashup between holiday and tag management systems. When I’m converting an implementation from Adobe DTM to Adobe Launch there are a few small features that I’m grateful Adobe added to Launch. Here they are in no particular order…

Better Support for the S Object

Adobe Analytics implementations have traditionally leveraged an ‘s’ global object. The standard setup in DTM would either obfuscate the object that Adobe Analytics used or just not make it globally scoped. This could be annoying when you wanted some other script to use the ‘s’ object. You can force DTM to use the ‘s’ object, but then you would lose some features like the “Managed by Adobe” option for your app measurement code. Here is the DTM setup:

Now in Launch you can opt to “make tracker globally accessible” in your extension configuration.

This will create the ‘s’ object at the window scope so that other scripts can reference the object directly, and unlike the DTM workaround above, you keep the added benefit of easier future library updates. Having scripts that directly reference the ‘s’ object isn’t something you should plan on leveraging heavily. However, depending on what you need while migrating, it sure can be useful.
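
For example, here is a minimal sketch of an outside script using the now-global tracker to send a custom link call (the eVar and link name are assumptions):

```javascript
// With "make tracker globally accessible" enabled, the tracker lives at window.s.
if (window.s && typeof window.s.tl === "function") {
  s.linkTrackVars = "eVar10";      // hypothetical eVar
  s.linkTrackEvents = "None";
  s.eVar10 = "footer-promo";
  s.tl(true, "o", "Footer Promo Click");
}
```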

Ordering Tags

When you have an implementation with dependencies between tags the ordering is important. In DTM you had some ordering available by using different event types on the page (top of page, page bottom, DOM ready) but no supported ordering at the time of a single event (although I did once find an unsupported hack for this).

With Launch the ordering is built right into the event configuration of your rules.

It is pretty simple. The default number is 50. If you need something to run earlier on the same event just give it a lower number.

Modifying default values sometimes makes me nervous, though, so if you do change the number from 50 just do yourself a favor and update the event name and even the rule name to reflect that. Because my names often represent a list of attributes, I’ll just add “order 10” to the end of the name.

Link Launch to DTM Embed Code

When you configure environments in Launch you will get new embed code to implement on your site. If you were on DTM for a long time and had a bunch of internal or agency groups implement DTM across many different applications then chances are making a global code update like this is tough! Fortunately, Launch has a feature that allows you to simply update your old DTM payload with the new Launch logic without making all those updates. When creating a new production environment you can just add your DTM embed code to the field shown below. Once that is done, your production Launch code will publish to the old DTM embed file as well. With this any site on the old or new embed code will have the same, consistent code. Yay!

So what’s one of your favorite Launch features? Comment below!

Adobe Analytics, Featured

Using Builders Visibility in Adobe Analytics

Recently, while working on a client implementation, I came across something I hadn’t seen before in Adobe Analytics. For me, that is quite unusual! While in the administration console, I saw a new option under the success event visibility settings called “Builders” as shown here:

A quick check in the documentation showed this:

Therefore, the new Builders setting for success events is meant for cases in which you want to capture data and use it in components (i.e. Calculated Metrics, Segments, etc.), but not necessarily expose it in the interface. While I am not convinced that this functionality is all that useful, in this post, I will share some uses that I thought of related to the feature.

Using Builders in Calculated Metrics

One example of how you could use the Builders visibility is when you want to create a calculated metric, but don’t necessarily care about one of the elements contained in the calculated metric formula as a standalone metric. To illustrate this, I will reference an old blog post I wrote about calculating the average internal search position clicked. In that post, I suggested that you capture the search result position clicked in a numeric success event, so that it could be divided by the number of search result clicks to calculate the average search position. For example, if a user conducts two searches and clicks on the 4th and 6th results respectively, you would pass the values of 4 and 6 to the numeric success event and divide the sum by the number of search result clicks ((4+6)/2=5.0). Once you do that, you will see a report that looks like this:
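
As a side note, here is a minimal collection sketch for those two values, assuming event10 is the numeric Search Position event and event11 counts search result clicks (both event numbers are assumptions):

```javascript
// Fired when the user clicks the 4th search result.
// event10 = Search Position (configured as a Numeric event),
// event11 = Search Result Clicks (a counter event).
s.linkTrackVars = "events";
s.linkTrackEvents = "event10,event11";
s.events = "event10=4,event11";
s.tl(true, "o", "Search Result Click");
```

The Average Search Position calculated metric is then simply the Search Position event divided by the Search Result Clicks event.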

In this situation, the Search Position column is being used to calculate the Average Search Position, but by itself, the Search Position metric is pretty useless. There aren’t many cases in which someone would want to view the Search Position metric by itself. It is simply a means to an end. Therefore, this may be a situation in which you, as the Adobe Analytics administrator, may choose to use the Builders functionality to hide this metric from the reporting interface and Analysis Workspace, only exposing it when it comes to building calculated metrics and segments. This allows you to remove a bit of the clutter from your implementation and can be done by simply checking the box in the visibility column and using the Builders option as shown here:

As I stated earlier, this feature will not solve world peace, but I guess it can be handy in situations like this.

Using Builders in Segments

In addition to using “Builders” Success Events in calculated metrics, you can also use them when building segments. Continuing the preceding internal search position example, there may be cases in which you want to use the Search Position metric in a segment like the one shown here:

Make Builder Metrics Selectively Visible

One other thing to note with Builders has to do with calculated metrics. If you choose to hide an element from the interface, but one of your advanced users wants to view it, keep in mind that they still can by leveraging calculated metrics. Since the element set to Builders visibility is available in the calculated metrics builder, there is nothing stopping you or your users from creating a calculated metric that is equal to the hidden success event. They can do this by simply dragging over the metric and saving it as a new calculated metric as shown here:

This will be the same as having the success event visible, but by using a calculated metric, your users can determine with whom they want to share the resulting metric within the organization.

Adobe Analytics, Featured

Viewing Classifications Only via Virtual Report Suites

I love SAINT Classifications! I evangelize the use of SAINT Classifications anytime I can, especially in my training classes. Too often Adobe customers fail to take full advantage of the power of SAINT Classifications. Adding meta-data to your Adobe Analytics implementation greatly expands the types of analysis you can perform and what data you can use for segmentation. Whether the meta-data is related to campaigns, products or customers, enriching your data via SAINT is really powerful.

However, there are some cases in which, for a variety of reasons, you may choose to put a lot of data into an eVar or sProp with the intention of splitting the data out later using SAINT Classifications. Here are some examples:

  • Companies concatenate a lot of “ugly” campaign data into the Tracking Code eVar which is later split out via SAINT
  • Companies store indecipherable data (like an ID) in an eVar or sProp which only makes sense when you look at the SAINT Classifications
  • Companies have unplanned bad data in the “root” variable that they fix using SAINT Classifications
  • Companies are low on variables, so they concatenate disparate data points into an eVar or sProp to conserve variables

One example of the latter I encountered with a client is shown here:

In this example, the client was low on eVars and instead of wasting many eVars, we concatenated the values and then split out the data using SAINT like this:
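
For a rough sense of how a value like that gets populated at collection time, here is a hypothetical sketch (the attribute names and the “|” delimiter are illustrative; the client’s actual values were different):

```javascript
// Concatenate several attributes into one eVar; SAINT Classifications later
// split them back out into separate reports.
var productColor = "blue";
var productSize = "large";
var productMaterial = "cotton";
s.eVar28 = [productColor, productSize, productMaterial].join("|"); // "blue|large|cotton"
```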

Using this method, the company was able to get all of the reports they wanted, but only had to use one eVar. The downside was that users could open up the actual eVar28 report in Adobe Analytics and see the ugly values shown above (yuck!). Because of this, a few years ago I suggested an idea to Adobe that they let users hide an eVar/sProp in the interface, but continue letting users view the SAINT Classifications of the hidden eVar/sProp. Unfortunately, since SAINT Classification reports were always tied directly to the “root” eVar/sProp on which they are based, this wasn’t possible. However, with the advent of Virtual Report Suites, I am pleased to announce that you can now curate your report suite to provide access to SAINT Classification meta-data reports, while at the same time not providing access to the main variable they are based upon. The following will walk you through how to do this.

Curate Your Classifications

The first step is to create a new Virtual Report Suite off of another report suite. At the last step of the process, you will see the option to curate/customize what implementation elements will go over to the new Virtual Report Suite. In this case, I am going to copy over everything except the Tracking Code and Blog Post Title (eVar5) elements as shown here:

As you can see, I am hiding Blog Post Title [v5], but users still have access to the four SAINT Classifications of eVar5. Once the Virtual Report Suite is saved and active, if you go into Analysis Workspace and look at the dimensions in the left nav, you will see the meta-data reports for eVar5, but not the original eVar5 report:

If you drag over one of the SAINT Classification reports, it works just like you would expect it to:

If you try to break this report down by the “root” variable it is based upon, you can’t because it isn’t there:

Therefore, you have successfully hidden the “root” report, but still provided access to the meta-data reports. Similarly, you can view one of the Campaign Tracking Code SAINT Classification reports (like Source shown below), but not have access to the “root” Tracking Code report:

Summary

If you ever have situations in which you want to hide an eVar/sProp that is the “root” of a SAINT Classification, this technique can prove useful. Many of the reasons you might want to do this are shown in the beginning of this post. In addition, you can combine Virtual Report Suite customization and security settings to show different SAINT Classification elements to different people. For example, you might have a few Classifications that are useful to an executive and others that are meant for more junior analysts. There are lots of interesting use cases where you can apply this cool trick!

Conferences/Community

Announcing additional ACCELERATE speakers from Twitch, Google, and Nordstrom!

Today we are excited to announce some additional speakers at our 2019 ACCELERATE conference in Los Gatos, California on January 24th and 25th. In addition to Ben Gaines from Adobe and Krista Seiden from Google, we are delighted to be joined by June Dershewitz from Twitch, Lizzie Allen Klein from Google, and David White from Nordstrom.

June is a long-time friend of the firm and will be sharing her insights into the emerging relationships between Data Analysts, Data Scientists, and Data Engineers; Lizzie is an analytics rock-star at Google and will be talking about how any data worker can elevate their own skills in an effort to get the most from their career; and David will be talking about how Nordstrom is essentially “rolling their own” digital analytics and building data collection and distribution based on open source, cloud-based technology.

June Dershewitz is a Director of Analytics at Twitch, the world’s leading video platform and community for gamers (a subsidiary of Amazon). As an analytics practitioner she builds and leads teams that focus on marketing analytics, product analytics, business intelligence, and data governance. As a long-standing advocate of the analytics community, she was the co-founder of Web Analytics Wednesdays (along with Eric Peterson!); she’s also a Director Emeritus of the Digital Analytics Association and a current Advisory Board Member at Golden Gate University.

Lizzie Allen Klein is a consumer insights analyst at Google, where she focuses on support analytics for Google consumer apps. Prior to this role, she ran experimentation and analytics on the Google Cloud Platform website. Aside from playing with her dog in the mountains of Colorado, she enjoys learning new data exploration techniques, using those techniques to better understand users and encouraging data-informed decision-making by sharing user insights.

David White is a Cloud Security Engineer at Nordstrom. He is passionate about event-driven architectures, clickstream analytics and keeping data secure. He has experience working on building analytics pipelines, both in the corporate space, as well as open source communities. He lives in Seattle, WA with his girlfriend and dog.

Ben Gaines is a Group Product Manager at Adobe, where he is responsible for guiding aspects of the Adobe Analytics product strategy and roadmap related to product integration and Analysis Workspace. In this role, he and his team work closely with Adobe customers to understand their needs and manage the planning and design of new analysis capabilities in the product. He lives near Salt Lake City, Utah, with his wife and four children.

Krista Seiden is a Product Manager for Google Analytics and the Analytics Advocate for Google, advocating for all things data, web, mobile, optimization and more. Keynote speaker, practitioner, writer on Analytics and Optimization, and passionate supporter of #WomenInAnalytics. You can follow her blog at www.kristaseiden.com and on twitter @kristaseiden.

Adobe Analytics, Featured

Adjusting Time Zones via Virtual Report Suites

When you are doing analysis for an organization that spans multiple time zones, things can get tricky. Each Adobe Analytics report suite is tied to one specific time zone (which makes sense), but this can lead to frustration for your international counterparts. For example, let’s say that Analytics Demystified went international and had resources in the United Kingdom. If they wanted to see when visitors located in the UK viewed blog posts (assume that is one of our KPI’s), here is what they would see in Adobe Analytics:

This report shows a Blog Post Views success event segmented for people located in the UK. While I wish our content was so popular that people were reading blogs from midnight until the early morning hours, I am not sure that is really the case! Obviously, this data is skewed because the time zone of our report suite is on US Pacific time. Therefore, analysts in the UK would have to mentally shift everything eight hours on the fly, which is not ideal and can cause headaches.

So how do you solve this? How do you let the people in the US see data in Pacific time and those in the UK see data in their time zone? Way back in 2011, I wrote a post about shifting time zones using custom time parting variables and SAINT Classifications. This was a major hack and one that I wouldn’t really recommend unless you were desperate (but that was 2011!). Nowadays, using the power of Virtual Report Suites, there is a more elegant solution to the time zone issue (thanks to Trevor Paulsen from Adobe Product Management for the reminder).

Time-Zone Virtual Report Suites

Here are step-by-step instructions on how to solve the time zone problem. First, you will create a new Virtual Report Suite and assign it a new name and a new time zone:

You can choose whether this Virtual Report Suite has any segments applied and/or contains all of your data or just a subset of your data in the subsequent settings screens.

When you are done, you will have a brand new Virtual Report Suite that has all data shifted to the UK time zone:

Now you are able to view all reports in the UK time zone. To illustrate this, let’s look at the report above in the regular report suite side by side with the same report in the new Virtual Report Suite:

Both of these reports are for the same date and have the same UK geo-segmentation segment applied. However, as you can see, the data has been shifted eight hours. For example, Blog Post Views that previously looked like they were viewed by UK residents at 2:00am now show that they were viewed at 10:00am UK time. This can also be seen by looking at the table view and lining up the rows:

This provides a much more realistic view of the data for your international folks. In theory, you could have a different Virtual Report Suite for all of your major time zones.

So that is all you need to do to show data in different time zones. Just a handy trick if you have a lot of international users.

Industry Analysis, Tag Management, Technical/Implementation

Stop Thinking About Tags, and Start Thinking About Data

Nearly three weeks ago, I attended Tealium’s Digital Velocity conference in San Francisco. I’ve attended this event every year since 2014, and I’ve spent enough time using its Universal Data Hub (the name of the combined UI for AudienceStream, EventStream, and DataAccess, if you get a little confused by the way these products have been marketed – which I do), and attended enough conferences, to know that Tealium considers these products to be a big part of its future and a major part of its product roadmap. But given that the majority of my clients are still heavily focused on tag management and getting the basics under control, I’ve spent far more time in Tealium iQ than any of its other products. So I was a little surprised as I left the conference on the last day by the force with which my key takeaway struck me: tag management as we knew it is dead.

Back in 2016, I wrote about how much the tag management space had changed since Adobe bought Satellite in 2013. It’s been a while since tag management was the sole focus of any of the companies that offer tag management systems. But what struck me at Digital Velocity was that the most successful digital marketing organizations – while considering tag management a prerequisite for their efforts – don’t really use their tools to manage tags at all. I reflected on my own clients, and found that the most successful ones have realized that they’re not managing tags at all – they’re managing data. And that’s why Tealium is in such an advantageous position relative to any of the other companies still selling tag management systems while Google and Adobe give it away for free.

This idea has been kicking around in my head for a while now, and maybe I’m stubborn, but I just couldn’t bring myself to admit it was true. Maybe it’s because I still have clients using Ensighten and Signal – in spite of the fact that neither company seems to have committed many resources to its tag management product lately (they both seem much more heavily invested in identity and privacy these days). Or maybe it’s because I still think of myself as the “tag management guy” at Demystified, and haven’t been able to quite come to grips with how much things have changed. But my experience at Digital Velocity was really the final wake-up call.

What finally dawned on me at Digital Velocity is that Tealium, like many of their early competitors, really doesn’t think of themselves as a tag management company anymore, either. They’ve done a much better job of disguising that though – because they continue to invest heavily in TiQ, and have even added some really great features lately (I’m looking at you, New JavaScript Code Extension). And maybe they haven’t really had to disguise it, either, because of a single decision they made very early on in their history: the decision to emphasize a data layer and tightly couple it with all the core features of its product. In my opinion, that’s the most impactful decision any of the early tag management vendors made on the industry as a whole.

Most tag management vendors initially offered nothing more than code repositories outside of a company’s regular IT processes. They eventually layered on some minimal integration with a company’s “data layer” – but really without ever defining what a data layer was or why it was important. They just allowed you to go in and define data elements, write some code that instructed the TMS on how to access that data, and then – in limited cases – gave you the option of pushing some of that data to your different vendor tags.

On the other hand, Tealium told its customers up front that a good data layer was required to be successful with TiQ. They also clearly defined best practices around how that data layer should be structured if you wanted to tap into the power of their tool. And then they started building hundreds of different integrations (i.e. tags) that took advantage of that data layer. If they had stopped there, they would have been able to offer customers a pretty useful tool that made it easier to deploy and manage JavaScript tags. And that would have made Tealium a pretty similar company to all of its early competitors. Fortunately, they realized they had built something far more powerful than that – the backbone of a potentially very powerful customer data platform (or, as someone referred to Tealium’s tag management tool at DV, a “gateway drug” to its other products).
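
For context, here is a minimal sketch of the kind of page-level data layer object (utag_data) that this approach revolves around; the variable names are illustrative, not a required schema:

```javascript
// A page-scoped data layer object that Tealium's utag.js reads on page load.
var utag_data = {
  page_name: "product-detail",
  page_type: "product",
  product_id: ["SKU12345"],
  product_price: ["49.99"],
  customer_status: "logged-in"
};
```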

The most interesting thing that I saw during those 2 days was that there are actual companies for which tag management is only a subset of what they are doing through Tealium. In previous years, Tealium’s own product team has showcased AudienceStream and EventStream. But this year, they had actual customers showing off real-world examples of the way that they have leveraged these products to do some pretty amazing things. Tealium’s customers are doing much more real-time email marketing than you can do through traditional integrations with email service providers. They’re leveraging data collected on a customer’s website to feed integrations with tools like Slack and Twilio to meet customers’ needs in real-time. They’re solving legitimate concerns about the impact all these JavaScript tags have on page-load performance to do more flexible server-side tagging than is possible through most tools. And they’re able to perform real-time personalization across multiple domains and devices. That’s some really powerful stuff – and way more fun to talk about than “tags.” It’s also the kind of thing every company can start thinking about now, even if it’s something you have to ramp up to first.

In conclusion, Tealium isn’t the only company moving in this direction. I know Adobe, Google, and Salesforce all have marketing tools that offer a ton of value to their customers. Segment offers the ability to do server-side integrations with many different marketing tools. But I’ve been doing tag management (either through actual products or my own code) for nearly 10 years, and I’ve been telling customers how important it is to have a solid data layer for almost as long – at Salesforce, we had a data layer before anyone actually called it that, and it was so robust that we used it to power everything we did. So to have the final confirmation that tag management is the past and that customer data is the future was a pretty cool experience for me. It’s exciting to see what Adobe Launch is doing with its extension community and the integration with the newest Adobe mobile SDKs. And there are all kinds of similar opportunities for other vendors in the space. So my advice to marketers is this: if you’re still thinking in terms of tags, or if you still think of all your third-party vendors as “silos,” make the shift to thinking about data and how to use it to drive your digital marketing efforts.

Photo Credit: Jonathan Poh (Flickr)

Featured, General

Analytics Demystified Interview Service Offering

Finding good analytics talent is hard! Whether you are looking for technical or analysis folks, it seems like many candidates are good from afar, but far from good! As someone who has been part of hundreds of analytics implementations/programs, I can tell you that having the right people makes all of the difference. Unfortunately, there are many people in our industry who sound like they know Adobe Analytics (or Google Analytics or Tealium, etc…), but really don’t.

One of the services that we have always provided to our clients at Demystified is the ability to have our folks interview prospective client candidates. For example, if a client of ours is looking for an Adobe Analytics implementation expert, I would conduct a skills assessment interview and let them know how much I think the candidate knows about Adobe Analytics. Since many of my clients don’t know the product as well as I do, they have found this to be extremely helpful.  In fact, I even had one case where a candidate withdrew from contention upon finding out that they would be interviewing with me, basically admitting that they had been trying to “BS” their way to a new job!

Recently, we have had more and more companies ask us for this type of help, so now Analytics Demystified is going to open this service up to any company that wants to take advantage of it. For a fixed fee, our firm will conduct an interview with your job candidates and provide an assessment about their product-based capabilities. While there are many technologies we can assess, so far most of the interest has been around the following tools:

  • Adobe Analytics
  • Google Analytics
  • Adobe Launch/DTM
  • Adobe Target
  • Optimizely
  • Tealium
  • Ensighten
  • Optimize
  • Google Tag Manager

If you are interested in getting our help to make sure you hire the right folks, please send an e-mail to contact@analyticsdemystified.com.

Adobe Analytics, Featured

Setting After The Fact Metrics in Adobe Analytics

As loyal blog readers will know, I am a big fan of identifying business requirements for Adobe Analytics implementations. I think that working with your stakeholders before your implementation (or re-implementation!) to understand what types of questions they want to answer helps you focus your efforts on the most important items and can reduce unnecessary implementation work. However, I am also a realist and acknowledge that there will always be times where you miss stuff. In those cases, you can start collecting a new metric for the thing you missed, but what about the data from the last few years? It would be ideal if you could create a metric today that is retroactive, such that it shows you data from the past.

This ability to set a metric “after the fact” is very common in other areas of analytics and there are even vendors like Heap, Snowplow and Mixpanel that allow you to capture virtually everything and then set up metrics/goals afterwards. These tools capture raw data, let you model it as you see fit and change your mind on definitions whenever you want. For example, in Heap you can collect data and then one day decide that something you have been collecting for years should be a KPI and assign it a name. This provides a ton of flexibility. I believe that tools like Heap and Snowplow are quite a bit different from Adobe Analytics and that each tool has its strengths, but for those who have made a long-term investment in Adobe Analytics, I wanted to share how you can have some of the Heap-like functionality in Adobe Analytics in case you ever need to assign metrics after the fact. This by no means is meant to discount the cool stuff that Heap or Snowplow are doing, but rather, just to show how this one cool feature of theirs can be mimicked in Adobe Analytics if needed.

After The Fact Metrics

To illustrate this concept, let’s imagine that I completely forgot to set a success event in Adobe Analytics when visitors hit my main consulting service page. I’d like to have a success event called “Adobe Analytics Service Page Views” when visitors hit this page, but as you can see here, I do not:

To do this, you simply create a new calculated metric that has the following definition:

This metric allows you to see the count of Adobe Analytics Service Page Views based upon the Page Name (or you could use URL) that is associated with that event and can then be used in any Adobe Analytics report:
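The actual definition lives in the calculated-metric builder shown above, but conceptually it is just counting page views where the page name equals the service page. Here is a rough JavaScript sketch of that logic (the page name and the hit-record shape are illustrative assumptions, not Adobe’s metric syntax):

// Count page views where the page name matches the service page (sketch only)
function adobeServicePageViews(hits) {
  return hits.filter(function(hit) {
    return hit.pageName === 'demystified:services:adobe-analytics'; // illustrative page name
  }).length;
}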

So that is how simple it is to retroactively create a metric in Adobe Analytics. Obviously, this becomes more difficult if the metric you want is based on actions beyond just a page loading, but if you are tracking those actions in other variables (or ClickMap), you can follow the same process to create a calculated metric off of those actions.

Transitioning To A New Success Event

But what if you want to use the new success event going forward, but also want all of the historical data? This can be done as well with the following steps:

The first step would be to set the new success event going forward via manual tagging, a processing rule or via tag management. To do this, assign the new success event in the Admin Console:

The next step is to pick a date on which you will start setting this new success event and then start populating it. If you want it to be a clean break, I recommend starting at midnight on that day.

Next, you want to add the new success event to the preceding calculated metric so that you can have both the historical count and the count going forward:

However, this formula will double-count the event for all dates on which the new success event (event12 in this example) has been set. Therefore, the last step is to apply two date-based segments, one to each part of the formula. The first date range contains the historical dates before the new success event was set. The second date range contains the dates after the new success event has been set (you can make the end date some date far into the future). Once both of these segments have been created, you can add them to the corresponding parts of the formula so it looks like this:

This combined metric will use the page name for the old timeframe and the new success event for the new timeframe. Eventually, if desired, you can transition to using only the success event instead of this calculated metric when you have enough data in the success event alone.
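Conceptually, the combined metric behaves like the sketch below: before the cutover date it counts service-page views by page name, and from the cutover onward it counts the new success event. The cutover date, page name and hit-record shape are illustrative assumptions; event12 is the event number referenced above:

// Sketch of the combined metric's logic (not Adobe's calculated-metric syntax)
var CUTOVER = new Date('2018-09-01'); // the midnight when the new event goes live (assumption)

function serviceViewsCombined(hits) {
  return hits.filter(function(hit) {
    if (new Date(hit.date) < CUTOVER) {
      // historical date range: fall back to counting by page name
      return hit.pageName === 'demystified:services:adobe-analytics';
    }
    // current date range: count only the new success event
    return hit.events.indexOf('event12') !== -1;
  }).length;
}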

Summary

To wrap up, this post shows how you can create metrics for items that you may have missed in your initial implementation and how to fix your original omission by combining the old and the new. As I stated, this functionality isn’t as robust as what you might get from a Heap, Snowplow or Mixpanel, but it can be a way to help if you need it in a pinch.

Adobe Analytics, Featured

Shifting Attribution in Adobe Analytics

If you are a veteran Adobe Analytics (or Omniture SiteCatalyst) user, you know that for years the term attribution was defined by whether an eVar was First Touch (Original Value) or Last Touch (Most Recent). eVar attribution was set up in the administration console and each eVar had a setting (and don’t bring up Linear because that is a waste!). If you wanted to see both First and Last Touch campaign code performance, you needed to make two separate eVars that each had different attribution settings. If you wanted to see “Middle Touch” attribution in Adobe Analytics, you were pretty much out of luck unless you used a “hack” JavaScript plug-in called Cross Visit Participation (thanks to Lamont C.).

However, this has changed in recent releases of the Adobe Analytics product. Now you can apply a bunch of pre-set attribution models including J Curve, U Curve, Time Decay, etc… and you can also create your own custom attribution model that assigns some credit to first, some to last and the rest divided among the middle values. These different attribution models can be built into Calculated Metrics or applied on the fly in metric columns in Analysis Workspace (not available for all Adobe Analytics packages). This stuff is really cool! To learn more about this, check out this video by Trevor Paulsen from Adobe.

However, this post is not about the new Adobe Analytics attribution models. Instead, I wanted to take a step back and look at the bigger picture of attribution in Adobe Analytics. This is because I feel that the recently added Attribution IQ functionality is fundamentally changing how I have always thought about where and how Adobe performs attribution. Let me explain. As I mentioned above, for the past decade or more, Adobe Analytics attribution has been tied to eVars. sProps didn’t really even have attribution since their values weren’t persistent and generally didn’t work with Success Events. But what has changed in the past year, is that attribution has shifted to metrics instead of eVars. Today, instead of having a First Touch and Last Touch campaign code eVar, you can have one eVar (or sProp – more on that later) that captures campaign codes and then choose the attribution (First or Last Touch) in whatever metric you care about. For example, if you want to see First Touch Orders vs. Last Touch Orders, instead of breaking down two eVars by each other like this…

…you can use one eVar and create two different Order metric columns with different attribution models to see the differences:

In fact, you could have metric columns for all available attribution models (and even create Calculated Metrics to divide them by each other) as shown here:

In addition, the new attribution models work with sProps as well. Even though sProp values don’t persist, you can use them with Success Events in Analysis Workspace and then apply attribution models to those metrics. This means that the difference between eVars and sProps is narrowing due to the new attribution model functionality.

To prove this, here is an Analysis Workspace table based upon an eVar…

…and here is the same table based upon an sProp:

What Does This Mean?

So, what does this mean for you? I think this changes a few things in significant ways:

  1. Different Paradigm for Attribution – You are going to have to help your Adobe Analytics users understand that attribution (First, Last Touch) is no longer something that is part of the implementation, but rather, something that they are empowered to create. I recommend that you educate your users on how to apply attribution models to metrics and what each model means. You will want to avoid “analysis paralysis” for your users, so you may want to suggest which model you think makes the most sense for each data dimension.
  2. Different Approach to Implementation – The shift in attribution from eVars to metrics means that  you no longer have to use multiple eVars to see different attribution models. Also, the fact that you can see success event attribution for sProps means that you can also use sProps if you are using Analysis Workspace.
  3. sProps Are Not Dead! – I have been on record saying that, outside of Pathing, sProps are just a relic of the old Omniture days, but as stated above, the new attribution modeling feature is helping make them useful again! sProps can now be used almost like eVars, which gives you more variables. Plus, they have Pathing that is better than eVars in Flow reports (until the instances bug is fixed!). Eventually, I assume all eVars and sProps will merge and simply be “dimensions,” but for now, you just got about 50 more variables!
  4. Create Popular Metric/Attribution Combinations – I suggest that you identify your most important metrics and create different versions of them for the relevant attribution models and share those out so your users can easily access them.  You may want to use tags as I suggested in this post.
Featured, Testing and Optimization

Adobe Target Chrome Extension


I use many different testing solutions each day as part of my strategic and tactical support of testing programs here at Analytics Demystified. I am very familiar with how each of these different solutions functions and how to get the most value out of them. To that end, I had a Chrome Extension built that allows Adobe Target users to get much more value, with visibility into test interaction, their Adobe Target Profile, and the bidirectional communication taking place. 23 (and counting!) powerful features, all for free. Check out the video below to see it in action.

 

Video URL: https://youtu.be/XibDjGXPY4E

To learn more details about this Extension and download it from the Chrome Store, click below:
MiaProva Chrome Extension

Adobe Analytics, Featured

Ingersoll Rand Case Study

One of my “soapbox” issues is that too few organizations focus on analytics business requirements and KPI definition. This is why I spend so much time working with clients to help them identify their analytics business requirements. I have found that having requirements enables you to make sure that your analytics solution/implementation is aligned with the true needs of the organization. For this reason, I don’t take on consulting engagements unless the customer agrees to spend time defining their business requirements.

A while back, I had the pleasure of working with Ingersoll Rand to help them transform their legacy Adobe Analytics implementation to a more business requirements driven approach. The following is a quick case study that shares more information on the process and the results:

The Demystified Advantage – Ingersoll Rand – September 2018

 

Adobe Analytics

Page Names with and Without Locale in Adobe Analytics

Have you found yourself in a situation where your pages in Adobe Analytics are specific to a locale but you would like to aggregate them for a global view? It really isn’t uncommon to collect pages with a locale. If your page names are in a URL format then two localized versions of the same page may look like so:

/us/en/services/security/super-series/

/jp/jp/services/security/super-series/

Or if you are using a custom page name perhaps it looks like this:

techco:us:en:services:security:super-series

techco:jp:jp:services:security:super-series

For this example we’re going to use the URL version of the page name. This could have been put in place to provide the ability to see different locales of the same page next to each other. Or maybe it was just the easiest or most practical approach to generate a page name at the time. Suppose that you just inherited an implementation with this setup but now you are getting questions of a more global nature. Your executives and users want information at a global level. At the same time, we still need the locale-specific information. In order to meet their needs, you now need to have a version of the pages combined but still have the flexibility to break out by locale. To do this we’ll keep our original report with pages like “/us/en/services/security/super-series” but create a version that combines those into something like “/services/security/super-series/”. This new value would represent the total across all locales such as /us/en, /jp/jp, or any others we have.

Since we need to do this retroactively, classifications are going to be the best approach here. We’ll set this up so that we have a new version of the pages report without a locale and use the rule builder to automate the classification. Here’s how it would work…

Classification Setup

If you have worked with classifications before then this will be easy. First, go to Report Suites under the Admin tab, select your report suite(s), and navigate to the traffic classifications.

The page variable should show by default in the dropdown of the Traffic Classifications page. From here select the icon next to Page and click Add Classification. Name your new classification something like “Page w/o Locale” and Save.

Your classification schema should now look something like this:

Classification Automation

Now let’s automate the population of this new classification by using the Rule Builder. To do so, navigate to the Admin tab and then click on Classification Rule Builder. Select the “Add Rule Set” button and configure the rule set like so:

Purple Arrow: this is where you select the report suite and variable where you want the classification applied. In this case we are using the Page variable.

Green Arrow: when this process runs for the first time, this is how far back it should look to classify old values. For something like this I would select the maximum lookback. On future runs it will just use a one-month lookback, which works great.

Red Arrow: Here is where you set up the logic for how each page should be classified. The order here is important as each rule is applied to each page in sequence. In a case where multiple rules apply to a value, the last matching rule wins since it is later in the sequence. We are going to use that to our advantage with the following two expressions:

  1. (.*) This will simply classify all pages with the original value. I’m doing this because many sites also have non-localized content in addition to the localized URLs. This ensures that all Page values are represented in our new report.
  2. ^\/..\/..(\/.*) This expression does the real work for our localized pages. There are several ways to write this expression, but this one tends to be simpler and shorter than others I’ve thought of. It will look for values starting with a slash and two characters, repeated twice (e.g. “/us/en”). It will then extract the following slash and anything after that. That means it would pull out the “/services/security/super-series/” from “/us/en/services/security/super-series/”. A quick sketch of both rules in action follows this list.
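If it helps to see the two expressions together, below is a minimal JavaScript mirror of the logic. The Classification Rule Builder uses its own regex engine and processing, so treat this purely as an illustration of how the two rules and the “last match wins” ordering behave:

// Apply the two rules in order; the last matching rule wins (sketch only)
var rules = [
  { pattern: /(.*)/,            to: '$1' },  // rule 1: keep the original page value
  { pattern: /^\/..\/..(\/.*)/, to: '$1' }   // rule 2: strip the leading locale
];

function classify(page) {
  var result = page;
  rules.forEach(function(rule) {
    if (rule.pattern.test(page)) {
      result = page.replace(rule.pattern, rule.to);
    }
  });
  return result;
}

classify('/us/en/services/security/super-series/'); // "/services/security/super-series/"
classify('/about-us/');                             // "/about-us/" (non-localized, rule 1 only)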

 

Other considerations

If you have copied your page name into an eVar (hopefully so) then be sure to set up the same classification there.

If the Classification Rule Builder already has a rule set doing something for the page variable then you may need to add these rules to the existing rule set.

If you want to remind users that “Page w/o Locale” has the locale removed you can also prefix the new values with some value that indicates the value was removed. That might be something like “[locale removed]” or “/**/**” or whatever works for you. To do this you would just use “[locale removed]$1” instead of “$1” in the second rule of the rule set.

If you are using a custom page name like “techco:jp:jp:services:security:super-series” then the second rule in the Rule Builder would need to be modified. Instead of the expression I outlined above it would be something like “^([^:]*):..:..(:.*)” and you would set the “To” column to “$1$2”. This will pull out the locale from the middle of the string and give you a final value such as “techco:services:security:super-series”
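Here is the same kind of sketch for the colon-delimited variant, again only to illustrate what the modified expression extracts:

// Mirror of the modified rule for custom (colon-delimited) page names - sketch only
var customPage = 'techco:jp:jp:services:security:super-series';
var withoutLocale = customPage.replace(/^([^:]*):..:..(:.*)/, '$1$2');
// withoutLocale => "techco:services:security:super-series"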

 

 

 

Adobe Analytics, Featured

Analysis Workspace Drop-downs

Recently, the Adobe Analytics team added a new Analysis Workspace feature called “Drop-downs.” It has always been possible to add Adobe Analytics components like segments, metrics, dimensions and date ranges to the drop zone of Analysis Workspace projects. Adding these components allowed you to create “Hit” segments based upon what was brought over or, in the case of a segment, segment your data accordingly. Now, with the addition of drop-downs, this has been enhanced to allow you to add a set of individual elements to the filter area and then use a drop-down feature to selectively filter data. This functionality is akin to the Microsoft Excel Filter feature that lets you filter rows of a table. In this post, I will share some of the cool things you can do with this new functionality.

Filter on Dimension Values

One easy way to take advantage of this new feature is to drag over a few of your dimension values and see what it is like to filter on each. To do this, you simply find a dimension you care about in the left navigation and then click the right chevron to see its values like this:

Next you can use the control/shift key to pick the values you want (up to 50) and drag them over to the filter bar. Before you drop them, you must hold down the shift key to make it a drop-down:

When this is done, you can see your items in the drop-down like this:

 

Now you can select any item and all of your Workspace visualizations will be filtered. For example, if I select my name in the blog post author dimension, I will see only blog posts I have authored:

Of course, you can add as many dimensions as you’d like, such as Visit Number and/or Country. For example, if I wanted to narrow my data down to my blog posts viewed in the United States and the first visit, I might choose the following filters:

This approach is likely easier for your end-users to understand than building complex segments.

Other Filters

In addition to dimensions, you can create drop-downs for things like Metrics, Time Ranges and Segments. If you want to narrow your data down to cases in which a specific Metric was present, you can drag over the Metrics you care about and filter like this:

Similarly, you can filter on Date Ranges that you have created in your implementation (note that this will override whatever dates you have selected in the calendar portion of the project):

One of the coolest parts of this new feature is that you can also filter on Segments:

This means that instead of having multiple copies of the same Analysis Workspace project for different segments, you can consolidate down to one version and simply use the Segment drop-down to see the data you care about. This is similar to how you might use the report suite drop-down in the old Reports & Analytics interface. This should also help improve the performance times of your Analysis Workspace projects.

Example Use – Solution Design Project

Over the last few weeks, I have been posting about a concept of adding your business requirements and solution design to an Analysis Workspace project. In the final post of the series (I suggest reading all parts in order), I talked about how you could apply segmentation to the solution design project to see different completion percentages based upon attributes like status or priority (shown here):

Around this time, after reading my blog post, one of my old Omniture cohorts tweeted this teaser message:

At the time, I didn’t know what Brandon was referring to, but as usual, he was absolutely correct that the new drop-down feature would help with my proposed solution design project. Instead of having to constantly drag over different dimension/value combinations, the new drop-down feature allows any user to select the ways they want to filter the solution design project and, once they apply the filters, the overall project percentage completion rate (and all other elements) will dynamically change. Let’s see how this works through an example:

As shown above and described in my previous post, I have a project that is 44.44% complete. Now I have added a few dimension filters to the project like this:

Now, if I choose to filter by “High” priority items, the percentage changes to 66.67% and only high priority requirements are shown:

Another cool side benefit of this is that the variable panel of the project now only shows variables that are associated with high priority requirements:

If I want to see how I am doing for all of Kevin’s high priority business requirements, I can simply select both high priority and then select Kevin in the requirement owner filter:

This is just a fun way to see how you can apply this new functionality to old Analysis Workspace projects into which you have invested time.

Future Wishlist Items

While this new feature is super-cool, I have already come up with a list of improvements that I’d like to eventually see:

  • Ability to filter on multiple items in the list instead of just one item at a time
  • Ability to clear the entire filter without having to remove each item individually
  • Ability to click a button to turn currently selected items (across all filters) into a new Adobe Analytics Segment
  • Ability to have drop-down list values generated dynamically based upon a search criteria (using the same functionality available when filtering values in a freeform table shown below)

Conferences/Community, Featured

ACCELERATE 2019

Back in 2015, the Analytics Demystified team decided to put on a different type of analytics conference we called ACCELERATE. The idea was that we as partners and a few select other industry folks would share as much information as we could in the shortest amount of time possible. We chose a 10 tips in 20 minutes format to force us and our other presenters to only share the “greatest hits” instead of the typical (often boring) 50 minute presentation with only a few minutes worth of good information. The reception of these events (held in San Francisco, Boston, Chicago, Atlanta and Columbus) was amazing. Other than some folks feeling a bit overwhelmed with the sheer amount of information, people loved the concept. We also coupled this one day event with some detailed training classes that attendees could optionally attend. The best part was that our ACCELERATE conference was dramatically less expensive than other industry conferences.

I am pleased to say that, after a long hiatus, we are bringing back ACCELERATE in January of 2019 in the Bay Area! As someone who attends a LOT of conferences, I still find that there is a bit of a void that we once again hope to fill with an updated version of ACCELERATE. In this iteration, we are going to do some different things in the agenda in addition to our normal 10 tips format. We hope to have a few roundtable discussions where attendees can network and have some face-to-face discussions like what is available at the popular DA Hub conference. We are also bringing in product folks Ben Gaines (Adobe) and Krista Seiden (Google) to talk about the two most popular digital analytics tools. I will even be doing an epic bake-off comparison of Adobe Analytics and Google Analytics with my partner Kevin Willeitner! We may also have some other surprises coming as the event gets closer…

You will be hard-pressed to find a conference at this price that provides as much value in the analytics space. But seats are limited and our past ACCELERATE events all sold out, so I suggest you check out the information now and sign up before spaces are gone. This is a great way to start your year with a motivating event, at a great location, with great weather and great industry peers! I hope to see you there…

Featured, Testing and Optimization

Adobe Target and Marketo

The Marketo acquisition by Adobe went from rumor to fact earlier today.  This is a really good thing for the Adobe Target community.

I’ve integrated Adobe Target and Marketo together many times over the years and the two solutions complement each other incredibly well.  Independent of this acquisition and of marketing automation in general, I’ve also been saying for years that organizations need to shift their testing programs such that the key focus is on the Knowns and Unknowns if they are to succeed.  Marketo can maybe help those organizations with this vision if it is part of their Adobe stack since Marketo is marketing automation for leads (Unknowns) and customers (Knowns).

The assimilation of Marketo into the Adobe Experience Cloud will definitely deepen the integration between the multiple technologies, but let me lay out here how Target and Marketo work together today to convey the value the two bring together.

Marketo

For those of you in the testing community who are unfamiliar with Marketo or marketing automation in general, let me lay out, at a very high level, some of the things these tools do.

Initially, and maybe most commonly, marketing automation starts out in the lead management space, which means that when you fill out those forms on websites, the management of that “lead” is then handled by these systems. At that point, you get emails, deal with salespeople, consume more content, etc… The management of that process is handled here and, if done well, prospects turn into customers. Unknowns become Knowns.

Once you are Known, a whole new set of Marketing and Customer Marketing kicks in and that is also typically managed by Marketing Automation technologies like Marketo.

Below is an image taken directly from Marketo’s Solutions website that highlights their offering.

Image from: https://www.marketo.com/solutions/

Adobe Target

Just like Marketo, testing solutions like Adobe Target also focus on different audiences. The most successful testing programs out there have testing roadmaps and personalization strategies dedicated to getting Unknowns (prospects) to become Knowns (customers). And when that transition takes place, these new Knowns then fall into tests and personalization initiatives focused on different KPIs than becoming a Known.

Combining the power of testing and the quantification/reporting of consumer experiences (Adobe Target) with the power of marketing automation (Marketo) provides value significantly higher than the value these solutions provide independently.

Target into Marketo

Envision a scenario where you bring testing to Unknowns and use the benefits of testing to find ideal experiences that lead to more form completions. This is a no-brainer for Marketo customers and works quite well. At this point, when tests are doing their thing, it is crucial to communicate or share this test data with Marketo when end users make the transition from Unknowns to Knowns. This data will help with the management of leads because we will know which test and test experience influenced their transition to becoming a Known.

Just like Target, Marketo loves data, and the code below is what Target would deliver with tests targeted to Unknowns. This code delivers to Marketo the test name as well as the Adobe Target ID, in the event users of Marketo want to retarget certain Adobe Target visitors.

// Send the Target campaign (test) name and experience name to Marketo RTP
var customData = {value: '${campaign.name}:${user.recipe.name}'};
rtp('send', 'AdobeTarget', customData);
// Also send the visitor's Adobe Target ID so Marketo can retarget these visitors later
var customData = {value: '${profile.mboxPCId}'};
rtp('send', 'AdobeTarget_ID', customData);

Marketo into Target

Adobe Target manages a rich profile that can be made up of online behaviors, 3rd Party Data, and offline data.  Many Target customers use this profile for strategic initiatives that change and quantify consumer experiences based off of the values of the profile attributes associated with this profile or Adobe Target ID.

In the Marketo world, there are many actions or events that take place as leads are nurtured and customers are marketed to. Organizations differ on how the specific actions or stages of lead or customer management/marketing are defined, but no matter the definitions, those stages/actions/events can be mirrored or shared with Adobe Target. This effort allows Marketo users to run tests online that are coordinated with their efforts managed offline – hence making those offline efforts more successful.

Push Adobe Target ID into Marketo

Marketo can get this data into Target in one of two ways.  The first method uses the code that I shared above where the Adobe Target ID is shared with Marketo.  Marketo can then generate a report or gather all Adobe Target IDs at a specific stage/event/action and then set up a test targeted to them.  It is literally that easy.

Push Marketo ID into Adobe Target

The second method is a more programmatic approach. We have the Marketo visitor ID passed to Adobe Target as a special mbox parameter called mbox3rdPartyId. When Adobe Target sees this value, it immediately marries its own ID to that ID so that any data shared with Adobe under that ID will be available for any testing efforts. This process is one that many organizations use with their own internal ID. At this point, any and all (non-PII) data can be sent to Adobe Target by way of APIs using nothing more than the Marketo ID – all possible because the ID was passed to Adobe Target when the consumer was on the website.
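With at.js, one common way to pass that ID is through targetPageParams so it rides along on the global mbox request. How you look up the Marketo ID depends on your Marketo/RTP setup, so treat the variable below as a placeholder:

// Sketch: pass the Marketo visitor ID to Adobe Target as mbox3rdPartyId
window.targetPageParams = function() {
  var marketoId = window.marketoVisitorId; // hypothetical lookup - depends on your setup
  return marketoId ? { mbox3rdPartyId: marketoId } : {};
};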

And then the cycle repeats itself with Adobe Target communicating test and experience names again to Marketo but this time for the Knowns – thus making that continued management more effective.

 

Adobe Analytics, Featured

Bonus Tip: Quantifying Content Creation

Last week and this week, I shared some thoughts on how to quantify content velocity in Adobe Analytics. As part of that post, I showed how to assign a publish date to each piece of content via a SAINT Classification like this:

Once you have this data in Adobe Analytics, you can download your SAINT file and clean it up a bit to see your content by date published in a table like this:

The last three columns split out the Year, the Month and then I added a “1” for each post. Adding these three columns allows you to then build a pivot table to see how often content is published by both Month and Year:

Then you can chart these like you would any other pivot table. Here are blog posts by month:

Here are blog posts by year:

As long as you are going to go through the work of documenting the publish date of your key content, you can use this bonus tip to leverage your SAINT Classifications file to do some cool reporting on your content creation.

Adobe Analytics, Featured

Quantifying Content Velocity in Adobe Analytics – Part 2

Last week, I shared how to quantify content velocity in Adobe Analytics. This involved classifying content with the date it was published and looking at subsequent days to see how fast it is viewed. As part of this exercise, the date published was added via the SAINT classification and dates were grouped by Year and Month & Year. At the same time, it is normal to capture the current Date in an eVar (as I described in this old blog post). This Date eVar can also be classified into Year and Year & Month. The classification file might look like this:

Once you have the Month-Year for both Blog Post Launches and Views, you can use the new cross-tab functionality of Analysis Workspace to do some analysis. To do this, you can create a freeform table and add your main content metric (Blog Post Views in my case) and break it down by the Launch Month-Year:

In this case, I am limiting data to 2018 and showing the percentages only. Next, you can add the Blog Post View Month-Year as cross-tab items by dragging over this dimension from the left navigation:

This will insert five Blog Post View Month-Year values across the top like this:

From here, you can add the missing three months, order them in chronological order and then change column settings like this:

Next, you can change the column percentages so they go by row instead of column, by clicking on the row settings gear icon like this:

After all of this, you will have a cross-tab table that looks like this:

Now you have a cross-tab table that allows you to see how blog posts launched in each month are viewed in subsequent months. In this case, you can see that from January to August, for example, blog posts launched in February had about 59% of their views take place in February and the remainder over the following months.

Of course, the closer you are to the month content was posted, the higher the view percentage will be for the current month and the months that follow. This is due to the fact that over time, more visitors will end up viewing older content. You can see this above in the fact that 100% of content launched in August was viewed in August (duh!). But come September, August will look more like July does in the table above, since September will take a share of the views of content launched in August.

This type of analysis can be used to see how sticky your content is in a way that is similar to the Cohort Analysis visualization. For example, four months after content was launched in March, its view % was 3.5%, whereas, four months after content was released in April, its view % was 5.3%. There are many ways that you can dissect this data and, of course, since this is Analysis Workspace, if you ever want to do a deeper dive on one of the cross-tab table elements, you can simply right-click and build an additional visualization. For example, if I want to see the trend of February content, I can simply right-click on the 59.4% value and add an area visualization like this:

This would produce an additional Analysis Workspace visualization like this:

For a bonus tip related to this concept, click here.

Conferences/Community, Digital Analytics Community

Registration for ACCELERATE 2019 is now open!

Analytics Demystified is excited to have opened registration for ACCELERATE 2019 on January 25th in Los Gatos, California.  You can see the entire agenda including speakers, topics, and information about our training day and the Toll House hotel via the following links:

Registration for ACCELERATE is only $299 USD, making the conference among the most affordable in the industry. Registration for the training day is only $999 USD and includes the cost of the conference. Seats are limited and available on a first-come basis … so don’t delay in signing up for ACCELERATE 2019!

 

Adobe Analytics, Featured

Quantifying Content Velocity in Adobe Analytics

If publishing content is important to your brand, there may be times when you want to quantify how fast users are viewing your content and how long it takes for excitement to wane. This is especially important for news and other media sites that have content as their main product. In my world, I write a lot of blog posts, so I also am curious about which posts people view and how soon they are viewed. In this post, I will share some techniques for measuring this in Adobe Analytics.

Implementation Setup

The first step to tracking content velocity is to assign a launch date to each piece of content, which is normally the publish date. Using my blog as an example, I have created a SAINT Classification of the Blog Post Title eVar and classified each post with the publish date:

Here is what the SAINT File looks like when completed:

The next setup step is to set a date eVar on every website visit. This is as simple as capturing today’s date in an eVar on every hit, which I blogged about back in 2011. Having the current date will allow you to compare the date the post was viewed with the date it was published. Here is an example on my site:
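For reference, here is a minimal sketch of how that date eVar might be set with AppMeasurement’s doPlugins hook. The eVar number and date format are assumptions, and your implementation may set this through a tag manager instead:

// Set the current date in an eVar on every hit (eVar number is an assumption)
s.usePlugins = true;
s.doPlugins = function(s) {
  var d = new Date();
  s.eVar10 = d.getFullYear() + '-' +
    ('0' + (d.getMonth() + 1)).slice(-2) + '-' +
    ('0' + d.getDate()).slice(-2);   // e.g. "2018-08-07"
};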

Reporting in Analysis Workspace

Once the setup is complete, you can move onto reporting. First, I’ll show how to report on the data in Analysis Workspace. In Workspace, you can create a panel and add the content item you care about (blog post in my example) and then break it down by the launch date and the view date. I recommend setting the date range to begin with the publish date:

In this example, you can see that the blog post launched on 8/7/18 and that 36% of total blog post views since then occurred on the launch date. You can also see how many views took place on each date thereafter. As you would expect, most of the views took place around the launch date and then slowed down in subsequent days. If you want to see how this compares to another piece of content, you can create a new panel and view the same report for another post (making sure to adjust the date range in the new panel to start with the new post’s launch date):

By viewing two posts side by side, I can start to see how usage varies. The unfortunate part is that it is difficult to see which date is “Launch Date,” “Launch Date +1,” “Launch Date +2,” etc… Therefore, Analysis Workspace, in this situation, is good for seeing some ad-hoc data (no pun intended!), but using Adobe ReportBuilder might actually prove to be a more scalable solution.

Reporting in Adobe ReportBuilder

When you want to do some more advanced formulas, sometimes Adobe ReportBuilder is the best way to go. In this case, I want to create a data block that pulls in all of my blog posts and the date each post was published like this:

Once I have a list of the content I care about (blog posts in this example), I want to pull in how many views of the content occurred each date after the publish date. To do this, I have created a set of reporting parameters like this:

The items in green are manually entered by setting them equal to the blog post name and publish date I am interested in from the preceding data block. In this case, I am setting the Start Date equal to the sixth cell in the second column and the Blog Post equal to the cell to the left of that. Once I have done that I create a data block that looks like this:

This will produce the following table of data:

Now I have a daily report of content views beginning with the publish date. Next, I created a second table that references the first one and captures the launch date and the subsequent seven days (you can use more days if you want). This is done by referencing the first eight rows in the preceding table and then summing all other data, producing a table that looks like this:

In this table, I have created a dynamic seven-day distribution and then lumped everything else into the last row. Then I have calculated the percentage and added an incremental percentage formula as well. These extra columns allow me to see the following graphs on content velocity:

The cool part about this process is that it only takes 30 seconds to produce the same reports/graphs for any other piece of content (blog post in my example). All you have to do is alter the items in green and then refresh the data block. Here is the same reporting for a different blog post:

You can see that this post had much more activity early on, whereas the other post started slow and increased later. You could even duplicate each tab in your Excel worksheet so you have one tab for each key content item and then refresh the entire workbook to update the stats for all content at once.
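If you ever want to reproduce the same math outside of Excel, the distribution logic is simple to script. Here is a small sketch that takes an array of daily view counts (index 0 being the publish date) and returns the launch-day-plus-seven breakdown with the percentage and incremental percentage columns described above:

// Build the 7-day content velocity distribution from daily view counts (sketch)
function contentVelocity(dailyViews) {
  var total = dailyViews.reduce(function(sum, v) { return sum + v; }, 0);
  var firstEight = dailyViews.slice(0, 8); // launch date + 7 days
  var remainder = total - firstEight.reduce(function(sum, v) { return sum + v; }, 0);
  var cumulative = 0;
  var rows = firstEight.map(function(views, day) {
    cumulative += views;
    return {
      day: day === 0 ? 'Launch Date' : 'Launch Date +' + day,
      views: views,
      pct: views / total,                 // percentage column
      incrementalPct: cumulative / total  // incremental percentage column
    };
  });
  rows.push({ day: 'All other days', views: remainder, pct: remainder / total, incrementalPct: 1 });
  return rows;
}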

Check out Part 2 of this post here: https://analyticsdemystified.com/featured/quantifying-content-velocity-in-adobe-analytics-part-2/

Featured, google analytics

Google Analytics Segmentation: A “Gotcha!” and a Hack

Google Analytics segments are a commonly used feature for analyzing subsets of your users. However, while they seem fairly simple at the outset, certain use cases may unearth hidden complexity, or downright surprising functionality – as happened to me today! This post will share a gotcha with user-based segments I just encountered, as well as two options for hit-based Google Analytics segmentation. 

First, the gotcha.

One of these things is not like the other

Google Analytics allows you to create two kinds of segments: session-based, and user-based. A session-based segment requires that the behaviour happened within the same session (for example, watched a video and purchased.) A user-based segment requires that one user did those two things, but it does not need to be within the same session.

However, thanks to the help and collective wisdom of Measure Slack, Simo Ahava and Jules Stuifbergen (thank you both!), I stumbled upon a lesser-known fact about Google Analytics segmentation. 

These two segmentation criteria “boxes” do not behave the same:

I know… they look identical, right? (Except for Session vs. User.)

What might the expected behaviour be? The first looks for sessions in which the page abc.html was seen, and the button was clicked in that same session. The second looks for users who did those two things (perhaps in different sessions.) 

When I built a session-based segment and attempted to flip it to user-based, imagine my surprise to find… the session-based segment worked. The user-based segment, with the exact same criteria, didn’t work. (Note: It’s logically impossible for sessions to exist in which two things were done, but no users have done those two things…) I will confess that I typically use session-based segmentation far more, as I’m often looking back more than 90 days, so it’s not something I’ve happened upon.

That’s when I found out that if two criteria in a Google Analytics user-based segment are in the same criteria “box”, they have to occur on the same hit. The same functionality and UI works differently depending on if you’re looking at a user- or session-based segment. 

I know.

Note: There is some documentation of this within the segment builder, though not within the main segmentation documentation.

In summary:

If you want to create a User-based segment that looks for two events (or more) occurring for the same user, but not on the same hit? You need to use two separate criteria “boxes”, like this:

So, there you go.

This brings me to the quick hack:

Two Hacks for Hit-Level Segmentation

Once you know about the strange behaviour of User-based segments, you can actually use them to your advantage.

Analysts familiar with Adobe Analytics know that Adobe has three options for segmentation: hit, visit and visitor level. Google Analytics, however, only has session (visit) and user (visitor) level.

Why might you need hit-level segmentation?

Sometimes when doing analysis, we want to be very specific that certain criteria must have taken place on the same hit. For example, the video play on a specific page. 

Since Google Analytics doesn’t have built-in hit-based segmentation, you can use one of two possible hacks:

1. User-segment hack: Use our method above: Create a user-based segment, and put your criteria in the same “box.” Voila! It’s a feature, not a bug! 

2. Sequential segment hack: Another clever method brought to my attention by Charles Farina is to use a sequential segment. Sequential segments evaluate each “step” as a single hit, so this sequential segment is the equivalent of a hit-based segment:  

Need convincing? Here are the two methods, compared. You’ll see the number of users is identical:

(Note that the number of sessions is different since, in the user-based segment, the segment of users who match that criteria might have had other sessions in which the criteria didn’t occur.)

So which hit-level segmentation method should you use? Personally I’d recommend sticking with Charles’ sequential segment methodology, since a major limitation of user-based segments is that they only look back 90 days. However, it may depend on your analysis question as to what’s more appropriate. 

I hope this was helpful! If you have any similar “gotchas” or segmentation hacks you’ve found, please don’t hesitate to share them in the comments. 

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace – Part 4

Last week, I shared how to calculate and incorporate your business requirement completion percentage in Analysis Workspace as part of my series of posts on embedding your business requirements and Solution Design in Analysis Workspace (Part 1, Part 2, Part 3). In this post, I will share a few more aspects of the overall SDR in Workspace solution in case you endeavor to try it out.

Updating Business Requirement Status

Over time, your team will add and complete business requirements. In this solution, adding new business requirements is as simple as uploading a few more rows of data via Data Sources as shown in the “Part 2” blog post. In fact, you can re-use the same Data Sources template and FTP info to do this. When uploading, you have two choices. You can upload only new business requirements or you can re-upload all of your business requirements each time, including the new ones. If you upload only the new ones, you can tie them to the same date you originally used or use the current date. Using the current date allows you to see your requirements grow over time, but you have to be mindful to make sure your project date ranges cover the timeframe for all requirements. What I have done is re-uploaded ALL of my business requirements monthly and changed the Data Sources date to the 1st of each month. Doing this allows me to see how many requirements I had in January, Feb, March, etc., simply by changing the date range of my SDR Analysis Workspace project. The only downside of this approach is that you have to be careful not to include multiple months or you will see the same business requirements multiple times.

Once you have all of your requirements in Adobe Analytics and your Analysis Workspace project, you need to update which requirements are complete and which are not. As business requirements are completed, you will update your business requirement SAINT file to change the completion status of business requirements. For example, let’s say that you re-upload the requirements SAINT file and change two requirements to be marked as “Complete” as shown here in red:

Once the SAINT file has processed (normally 1 day), you would see that 4 out of your 9 business requirements are now complete, which is then reflected in the Status table of the SDR project:

Updating Completion Percentage

In addition, as shown in Part 3 of the post series, the overall business requirement completion percentage would be automatically updated as soon as the two business requirements are flagged as complete. This means that the overall completion percentage would move from 22.22% (2/9) to 44.44% (4/9):

Therefore, any time you add new business requirements, the overall completion percentage would decrease, and any time you complete requirements, the percentage would increase.

Using Advanced Segmentation

For those that are true Adobe Analytics geeks, here is an additional cool tip. As mentioned above, the SAINT file for the business requirements variable has several attributes. These attributes can be used in segments just like anything else in Adobe Analytics. For example, here you see the “Priority” SAINT Classification attribute highlighted:

This means that each business requirement has an associated Priority value, in this case, High, Medium or Low, which can be seen in the left navigation of Analysis Workspace:

Therefore, you can drag over items to create temporary segments using these attributes. Highlighted here, you see “Priority = High” added as a temporary segment to the SDR panel:

Doing this applies the segment to all project data, so only the business requirements that are marked as “High Priority” are included in the dashboard components. After the segment is applied, there are now three business requirements that are marked as high priority, as shown in our SAINT file:

Therefore, since, after the upload described above, two of those three “High Priority” business requirements are complete, the overall implementation completion percentage automatically changes from 44.44% to 66.67% (2 out of 3), as shown here (I temporarily unhid the underlying data table in case you want to see the raw data):

As you can see, the power of segmentation is fully at your disposal to make your Requirements/Solution Design project highly dynamic! That could mean segmenting by requirement owner, variable or any other data points represented within the project! For example, once we apply the “High Priority” segment to the project as shown above, viewing the variable portion of the project displays this:

This now shows all variables associated with “High Priority” business requirements.  This can be useful if you have limited time and/or resources for development.

Another example might be creating a segment for all business requirements that are not complete:

This segment can then be applied to the project as shown here to only see the requirements and variables that are yet to be implemented:

As you can see, there are some fun ways that you can use segmentation to slice and dice your Solution Design! Pretty cool, huh?

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace – Part 3

Over the past two weeks, I have been posting about how to view your business requirements and solution design in Analysis Workspace. First, I showed how this would look in Workspace and then I explained how I created it. In this post, I am going to share how you can extend this concept to calculate the completion percentage of business requirements directly within Analysis Workspace. Completion percentage is important because Adobe Analytics implementations are never truly done. Most organizations are continuously doing development work and/or adding new business requirements. Therefore, one internal KPI that you may want to monitor and share is the completion percentage of all business requirements.

Calculating Requirement Percentage Complete

As shown in the previous posts, you use Data Sources to upload a list of business requirements and each business requirement has one or more Adobe Analytics variables associated to it:

When this is complete, you can see a report like this:

Unfortunately, this report is really showing you how many total variables are being used, not the number of distinct business requirements (Note: You could divide the “1” in event30 by the number of variables, but that can get confusing!). This can be seen by doing a breakdown by the Variable eVar:

Since your task is to see how many business requirements are complete, you can upload a status for each business requirement via a SAINT file like this:

This allows you to create a new calculated metric that counts how many business requirements have a status of complete (based upon the SAINT Classification attribute) like this:

However, this is tricky, because the SAINT Classification that is applied to the Business Requirement metric doesn’t sum the number of completed business requirements, but rather the number of variables associated with completed requirements. This can be seen here:

What is shown here is that there are five total variables associated with completed business requirements out of twenty-five total variables associated with all business requirements. You could divide these two to show that your implementation is 20% complete (5/25), but that is not really accurate. The reality is that two out of nine business requirements are complete, so your actual completion percentage is 22.22% (2/9).

So how do you solve this? Luckily, there are some amazing functions included in Adobe Analytics that can be used to do advanced calculations. In this case, what you want to do is count how many business requirements are complete, not how many variables are complete. To do this, you can use an IF function with a GREATER THAN function to set each row equal to either “1” or “0” based upon its completion status using this formula:

This produces the numbers shown in the highlighted column here:

Next, you want to divide the number of rows that have a value of “1” by the total number of rows (which represents the number of requirements). To do this, you simply divide the preceding metric by the ROW COUNT function, which will produce the numbers shown in the highlighted column here:

Unfortunately, this doesn’t help that much, because what you really want is the sum of the rows (22.22%) versus seeing the percentages in each row. However, you can wrap the previous formula in a COLUMN SUM function to sum all of the individual rows. Here is what the final formula would look like:
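Conceptually, the full definition is just the row-level indicator divided by ROW COUNT, all wrapped in COLUMN SUM (again only a sketch, using the same hypothetical metric name):

COLUMN SUM( IF( GREATER THAN( Completed Business Requirements, 0 ), 1, 0 ) / ROW COUNT )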

This would then produce a table like this:

Now you have the correct requirement percentage completion rate. The last step is to create a new summary number visualization using the column heading in the Requirement Completion % column as shown highlighted here:

To be safe, you should use the “lock” feature to make sure that this summary number will always be tied to the top cell in the column like this:

Before finishing, there are a few clean-up items left to do. You can remove any extraneous columns in the preceding table (which I added just to explain the formula) to speed up the overall project so the final table looks like this:

You can also hide the table completely by unchecking the “Show Data Source” box, which will avoid confusing your users:

Lastly, you can move the completion percentage summary number to the top of the project where it is easily visible to all:

So now you have an easy way to see the overall business requirement completion % right in your Analysis Workspace SDR project!

[Note: The only downside of this overall approach is that the completion status is flagged by a SAINT Classification, which, by definition, is retroactive. This means that the Analysis Workspace project will always show the current completion percentage and will not record the history. If that is important to you, you’d have to import two success events for each business requirement via Data Sources (one for all requirements and another for completed requirements) and use formulas similar to the ones described above.]

Click here to see Part 4 for even more cool things related to this concept!

Featured, google analytics

Understanding Marketing Channels in Google Analytics: The Good, The Bad – and a Toy Surprise!

Understanding the effectiveness of marketing efforts is a core use case for Google Analytics. While we may analyze our marketing at the level of an individual site, or ad network, typically we are also looking to understand performance at a higher channel level. (For example, how did my Display ads perform?)

In this post I’ll discuss two ways you can approach this, as well as the gotchas, and even offer a handy little tool you can use for yourself!

Option 1: Channel Groupings in GA

There are two relevant features here:

  1. Default channel groupings
  2. Custom channel groupings

Default Channel Groupings

Default channel groupings are defined rules that apply at the time the data is processed, so they apply from the time you set them up onwards. Note also that the rules execute in order.

The default channel grouping dimension is available throughout Google Analytics, including for use in segments, as a secondary dimension, in custom reports, Data Studio, Advanced Analysis and the API. (Note: It is not included in BigQuery.)

Unfortunately, there are some real frustrations associated with this feature:

  1. The channel groupings that come pre-configured aren’t typically applicable. By default, GA provides a set of rules. However, in my experience, they rarely map well enough to marketing efforts. Which leads me to…
  2. You have to customize them. Makes sense – for your data to be useful, it should be customized to your business, right? I always end up editing the default grouping, to take into account the UTM and tracking standards we use. Unfortunately…  
  3. The manual work in customizing them makes kittens cry. Why?
    • You have to manually update them for every.single.view. Default Channel Groupings are a view level asset. So if your company has two views (or worse, twenty!) you need to manually set them up over. and over. again.
    • (“I know! I’ll outsmart GA! I’ll set up the groupings then copy the view.” Nope, sorry.) Unlike goals, any customizations made to your Default Channel Groupings don’t copy over when you copy a view, even if they were created before you copied it. You start from scratch, with the GA default. So you have to create them. Again.
    • There is no way to create them programmatically. They can’t be edited or otherwise managed via the Management API.
    • Personally, I consider this to be a huge limitation for feature use in an enterprise organization, as it requires an unnecessary level of manual work.
  4. They are not retroactive. This is a common complaint. Honestly, it’s the least of my issues with them. Yes, retroactive would be nice. But I’d take a solve of the issues in #3 any day.

“Okay… I’ll outsmart GA (again)! Let’s not use the default. Let’s just use the custom groupings!” Unfortunately, custom channel groupings aren’t a great substitute either.

Custom Channel Groupings

Custom Channel Groupings are a very similar feature. However, custom groupings aren’t processed with the data; they’re a rule set applied on top of the data after it has been processed.

The good:

The bad:

  • A custom grouping is literally only available in one report. You cannot use the dimension it creates in a segment, as a secondary dimension, via the API or in Data Studio. So custom groupings have exceptionally limited value. (IMHO they’re only useful for checking a grouping before you set it as the default.)

So, as you may have grasped, the channel groupings features in Google Analytics are necessary… but incredibly cumbersome and manual.

<begging>

Dear GA product team,

For channel groupings to be a useful and more scalable enterprise feature, one of the following things needs to happen:

  1. The Default should be sharable as a configured link, the same way that a segment or a goal works. Create them once, share the link to apply them to other views; or
  2. The Default should be a shared asset throughout the Account (similar to View filters) allowing you to apply the same Default to multiple views; or
  3. The Default should be manageable via the Management API; or
  4. Custom Groupings need to be able to be “promoted” to the default; or
  5. Custom-created channels need to be accessible like any other dimension, for use in segmentation, reports and via the API and Data Studio.

Pretty please? Just one of them would help…

</begging>

So, what are the alternate options?

Option 2: Define Channels within Data Studio, instead of GA

The launch of Data Studio in 2016 created an option that previously didn’t exist: use Data Studio to create your groupings, and don’t bother with the Default Channel Groupings at all.

You can use Data Studio’s CASE formula to recreate all the same rules as you would in the GA UI. For example, something like this:  

CASE
WHEN REGEXP_MATCH (Medium, 'social') OR REGEXP_MATCH (Source, 'facebook|linkedin|youtube|plus|stack.(exc|ov)|twitter|reddit|quora|google.groups|disqus|slideshare|addthis|(^t.co$)|lnk.in') THEN 'Social'
WHEN REGEXP_MATCH (Medium, 'cpc') THEN 'Paid Search'
WHEN REGEXP_MATCH (Medium, 'display|video|cpm|gdn|doubleclick|streamads') THEN 'Display'
WHEN REGEXP_MATCH (Medium, '^organic$') OR REGEXP_MATCH (Source, 'duckduckgo') THEN 'Organic Search'
WHEN REGEXP_MATCH (Medium, '^blog$') THEN 'Blogs'
WHEN REGEXP_MATCH (Medium, 'email|edm|(^em$)') THEN 'Email'
WHEN REGEXP_MATCH (Medium, '^referral$') THEN 'Referral'
WHEN REGEXP_MATCH (Source, '(direct)') THEN 'Direct'
ELSE 'Other'
END

You can then use this newly created “Channel” dimension in Data Studio for your reports (instead of the default.)

Note, however, a few potential downsides:

  • This field is only available in Data Studio (so, it is not accessible for segments, via the API, etc.)
  • Depending on the complexity of your rules, you could bump up against a character limit for CASE formulas in Data Studio (2048 characters.) Don’t laugh… I have one set of incredibly complex channel rules where the CASE statement was 3438 characters… 

Note: If you use BigQuery, you could then use a version of this channel definition in your queries, as well.

And a Toy Surprise!

Let’s say you do choose to use Default Channel Groupings (I do end up using them, I just grumble incessantly during the painful process of setting them up or amending them). You might put a lot of thought into the rules, the order in which they execute, etc. But nonetheless, you’ll still need to check your results after you set them up, to make sure they’re correct.

To do this, I created a little Data Studio report that you are welcome to copy and use for your own purposes. Basically, after you set up your default groupings and collect at least a (full) day’s data, the report allows you to flip through each channel and see which Sources, Mediums and Campaigns are falling into each channel, based on your rules.

mkiss.me/DefaultChannelGroupingCheck
Note: At first it will load with errors, since you don’t have access to my data set. You need to select a data set you have access to, and then the tables will load. 

If you see something that seems miscategorized, you can then edit the rules in the GA admin settings. (Keeping in mind that your edits will only apply moving forward.)

I also recommend you keep documentation of your rules. I use something like this:

I also set up alerts for big increases in the “Other” channel, so that I can catch where the rules might need to be amended. 

Thoughts? Comments?

I hope this is helpful! If there are other ways you do this, I would love to hear about it.

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace – Part 2

Last week, I wrote about a concept of having your business requirements and SDR inside Analysis Workspace. My theory was that putting business requirements and implementation information as close to users as possible could be a good thing. Afterwards, I had some folks ask me how I implemented this, so in this post I will share the steps I took. However, I will warn you that my approach is definitely a “hack” and it would be cool if, in the future, Adobe provided a much better way to do this natively within Adobe Analytics.

Importing Business Requirements (Data Sources)

The first step in the solution I shared is getting business requirements into Adobe Analytics so they can be viewed in Analysis Workspace. To do this, I used Data Sources and two conversion variables – one for the business requirement number and another for the variables associated with each requirement number. While this can be done with any two conversion variables (eVars), I chose to use the Products variable and another eVar because my site wasn’t using the Products variable (since we don’t sell a physical product). You may choose to use any two available eVars. I also used a Success Event because when you use Data Sources, it is best to have a metric to view data in reports (other than occurrences). Here is what my data sources file looked like:

Doing this allowed me to create a one-to-many relationship between Req# (Products) and the variables for each (eVar17). The numbers in event30 are inconsequential, so I just put a “1” for each. Also note that you need to associate a date with data being uploaded via Data Sources. The cool thing about this is that you can change your requirements when needed by re-uploading the entire file at a later date (keeping in mind that you need to choose your date ranges carefully so you don’t get the same requirement in your report twice!). Another reason I uploaded the requirement number and the variables into conversion variables is that these data points should not change very often, whereas many of the other attributes will change (as I will show next).

Importing Requirement & Variable Meta-Data (SAINT Classifications)

The next step of the process is adding meta-data to the two conversion variables that were imported. Since the Products variable (in my case) contains data related to business requirements, I added SAINT Classifications for any meta-data that I would want to upload for each business requirement. This included attributes like description, owner, priority, status and source.

Note, these attributes are likely to change over time (i.e. status), so using SAINT allows me to update them by simply uploading an updated SAINT file. Here is the SAINT file I started with:

 

The next meta-data upload required is related to variables. In my case, I used eVar17 to capture the variable names and then classified it like this:

As you can see, I used classifications and sub-classifications to document all attributes of variables. These attributes include variable types, descriptions and, if desired, all of the admin console attributes associated with variables. Here is what the SAINT file looks like when completed:

[Note: After doing this and thinking about it for a while, in hindsight, I probably should have uploaded Variable # into eVar17 and made variable name a classification in case I want to change variable names in the future, so you may want to do that if you try to replicate this concept.]

Hence, when you bring together the Data Sources import and the classifications for business requirements and variables, you have all of the data you need to view requirements and associated variables natively in Adobe Analytics and Analysis Workspace as shown here:

Project Curation

Lastly, if you want to minimize confusion for your users in this special SDR project, you can use project curation to limit the items that users will see in the project to those relevant to business requirements and the solution design. Here is how I curated my Analysis Workspace project:

This made it so users only saw these elements by default:

Final Thoughts

This solution has a bit of set-up work, but once you do that, the only ongoing maintenance is uploading new business requirements via Data Sources and updating requirements and variable attributes via SAINT Classifications. Obviously, this was just a quick & dirty thing I was playing around with and, as such, not something for everyone. I know many people are content with keeping this information in spreadsheets, in Jira/Confluence or SharePoint, but I have found that this separation can lead to reduced usage. My hope is that others out there will expand upon this concept and [hopefully] improve it. If you have any additional questions/comments, please leave a comment below.

To see the next post in this series, click here.

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace

Those who know me, know that I have a few complaints about Adobe Analytics implementations when it comes to business requirements and solution designs. You can see some of my gripes around business requirements in the slides from my 2017 Adobe Summit session and you can watch me describe why Adobe Analytics Solution Designs are often problematic in this webinar (free registration required). In general, I find that:

  • Too few organizations have defined analytics business requirements
  • Most Solution Designs are simply lists of variables and not tied to business requirements
  • Often times, Solution Designs are outdated/inaccurate

When I start working with new clients, I am shocked at how few have their Adobe Analytics implementation adequately organized and documented. One reason for this is that requirements documents and solution designs tend to live on a [digital] shelf somewhere, and as you know, out of sight often means out of mind. For this reason, I have been playing around with something in this area that I wanted to share. To be honest, I am not sure if the concept is the right solution, but my hope is that some of you out there can possibly think about it and help me improve upon it.

Living in Workspace

It has become abundantly clear that the future of Adobe Analytics is Analysis Workspace. If you haven’t already started using Workspace as your default interface for Adobe Analytics, you will be soon. Most people are spending all of their time in Analysis Workspace, since it is so much more flexible and powerful than the older “SiteCatalyst” interface. This got me thinking… “What if there were a way to house all of your Adobe Analytics business requirements and the corresponding Solution Design as a project right within Analysis Workspace?” That would put all of your documentation a few clicks away from you at all times, meaning that there would be no excuse to not know what is in your implementation, which variables answer each business requirement and so on.

Therefore, I created this:

The first Workspace panel is simply a table of contents with hyperlinks to the panels below it. The following will share what is contained within each of the Workspace panels.

The next panel is simply a list of all business requirements in the Adobe Analytics implementation, which for demo purposes contains only two:

The second panel shows the same business requirements split out by business priority, in case you want to look at ones that are more important than others:

One of the ways you can help your end-users understand your implementation is to make it clear which Adobe Analytics variables (reports) are associated with each business requirement. Therefore, I thought it would make sense to let users breakdown each business requirement by variable as shown here:

Of course, there will always be occasions where you just want to see a list of all of your Success Events, eVars and sProps, so I created a breakdown by variable type:

Since each business requirement should have a designated owner, the following breakdown allows you to see all business requirements broken down by owner:

Lastly, you may want to track which business requirements have been completed and which are still outstanding. The following breakdown allows you to see requirements by current implementation status:

Maximum Flexibility

As you can see, the preceding Analysis Workspace project, and the panels contained within, provide an easy way to understand your Adobe Analytics implementation. But since you can break anything down by anything else in Analysis Workspace, these are just some sample reports of many more that could be created. For example, what if one of my users wanted to drill deep into the first business requirement and see what variables it uses, descriptions of those variables and even the detailed settings of those variables (i.e. serialization, expiration, etc…)? All of these components can be incorporated into this solution such that users can simply choose from a list of curated Analysis Workspace items (left panel) and drop them in as desired, as shown here:

Granted, it isn’t as elegant as seeing everything in an Excel spreadsheet, but it is convenient to be able to see all of this detail without having to leave the tool! And maybe one day, it will be possible to see multiple items on the same row in Analysis Workspace, which would allow this solution to look more like a spreadsheet. I also wish there were a way to hyperlink right from the variable (report) name to a new project that opens with that report, but maybe that will be possible in the future.

If you want to see the drill-down capabilities in action, here is a link to a video that shows me doing drill-downs live:

Summary

So what do you think? Is this something that your Adobe Analytics users would benefit from? Do you have ideas on how to improve it? Please leave a comment here…Thanks!

P.S. To learn how I created the preceding Analysis Workspace project, check out Part Two of this post.

Tag Management

GTM: Using Multiple Inputs for RegEx Tables

I’m a big fan of RegEx table variables in Google Tag Manager. These are especially useful if you have implemented a GTM blacklist with your GTM setup to avoid custom scripting. If that is your situation (likely not) then this variable type will provide some flexibility that you wouldn’t have otherwise. Keep in mind, though, that RegEx Tables have one limiting factor that sometimes takes a bit of extra work… they only allow for a single input variable:

This can be limiting in scenarios where you want to have an output that depends on a variety of inputs. As an example, let’s say that we want to deploy a bunch of Floodlights, each with a different activity string, based on a combination of an event category, event action and URL path. For this example, you can just assume that the event category and action are being pushed into the dataLayer. Now, I could probably create a trigger for each Floodlight and avoid doing any of this; however, I find that approach less scalable and it tends to make a mess of your GTM configuration. Let’s just say I don’t want to be miserable, so I decide not to create a trigger for each Floodlight. Instead, we can have all that logic in one place by concatenating together the different variables we want and using the RegEx table to identify a match as needed. To do this I just create a new variable that pulls in all the other variables like so:

Notice that I like to use a name/value concatenation so it ends up looking like URL parameters. I use “^” as the delimiter since that is pretty unique. I avoid using delimiters that are leveraged in URLs (such as ?, &, #). Also using a name with the value helps to ensure that we don’t accidentally match on a value that is unexpectedly in one of the other variables.
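If a Custom JavaScript variable is acceptable in your setup, the concatenation might look something like the sketch below (the {{…}} references are hypothetical data layer variable names; if custom scripts are blacklisted, the same string can likely be assembled in a variable type that supports {{…}} references):

	function() {
	  // Build a single, URL-parameter-style input string for the RegEx table
	  return 'cat=' + {{Event Category}} +
	         '^act=' + {{Event Action}} +
	         '^path=' + {{Page Path}};
	}

The resulting value then looks something like cat=purchase^act=complete^path=/checkout/thanks, which is what the RegEx table patterns will match against.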

With the concatenation of variables giving us a new, single input I can now set up my regex table as needed:

Once I have worked out my RegEx table logic to match correctly on what I want I then just plug it into my floodlight as normal:

And with that you have now deployed a bunch of floodlights with a single tag, trigger, and variable. I have found this to be useful in simplifying the GTM setup in a bunch of scenarios so I hope it helps you too!

 

 

Testing and Optimization

Steps to Automation [Adobe Webinar]

On August 9th, this upcoming Thursday, I will be joining Adobe on the Adobe Target Basics webinar to geek out over how to dip your toes in Automation, using Automated Personalization in Adobe Target.

I am going to dive deep into the strategy, the setup, best practices, and how to interpret the results.  To make things even more fun, I am going to walk attendees through a LIVE Automated Personalization test that is currently running on our home page.

This test has only been up and running for 12 days and the image below represents a sneak preview of the results.  During the webinar, I will explain what is going on with this test.

To register for the webinar, simply use the CTA at the bottom.

 

 

Hope to see you there!

Adobe Analytics, Featured

Transaction ID – HR Example

The Transaction ID feature in Adobe Analytics is one of the most underrated in the product. Transaction ID allows you to “close the loop,” so to speak, and import offline metrics related to online activity and apply those metrics to pre-existing dimension values.  This means that you can set a unique ID online and then import offline metrics tied to that unique ID and have the offline metrics associated with all eVar values that were present when the online ID was set. For example, if you want to see how many people who complete a lead form end up becoming customers a few weeks later, you can set a Transaction ID and then later import a “1” into a Success Event for each ID that becomes a customer. This will give “1” to every eVar value that was present when the Transaction ID was set, such as campaign code, visit number, etc…. It is almost like you are tricking Adobe Analytics into thinking that the offline event happened online. In the past, I have described how you could use Transaction ID to import recurring revenue and import product returns, but in this post, I will share another example related to Human Resources and recruiting.

Did They Get Hired?

So let’s imagine that you work for an organization that uses Adobe Analytics and hires a lot of folks. It is always a good thing if you can get more groups to use analytics (to justify the cost), so why not have the HR department leverage the tool as well? On your website, you have job postings and visitors can view jobs and then click to apply. You would want to set a success event for “Job Views” and another for “Job Clicks” and store the Job ID # in an eVar. Then if a user submits a job application, you would capture this with a “Job Applications” Success Event. Thus, you would have a report that looks like this:

Let’s assume that your organization is also using marketing campaigns to find potential employees. These campaign codes would be captured in the Campaigns (Tracking Code) eVar and, of course, you can also see all of these job metrics in this and any other eVar reports:

But what if you wanted to see which of these job applicants were actually hired? Moreover, what if you wanted to see which marketing campaigns led to hires vs. just unqualified applicants? All of this can be done with Transaction ID. As long as you have some sort of back-end system that knows the unique “transaction” ID and knows if a hire took place, you can upload the offline metric and close the loop. Here is what the Transaction ID upload file might look like:

Notice that we are setting a new “Job Hires” Success Event and tying it to the Transaction ID. This will bind the offline metric to the Job # eVar value, the campaign code and any other eVars. Once this has loaded, you can see a report that looks like this:

Additionally, you can then switch to the Campaigns report to see this:

This allows you to then create Calculated Metrics to see which marketing campaigns are most effective at driving new hires.
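For reference, the online half of this (the part your tagging handles when the job application is submitted) might look something like the sketch below; the event number, eVar number and ID format are all hypothetical:

	// On the job application confirmation page (hypothetical variable numbers)
	// jobId and applicationId are assumed to come from your data layer or back-end
	s.events = 'event3';              // Job Applications
	s.eVar5 = jobId;                  // Job ID #
	s.transactionID = applicationId;  // unique ID also known to the HR system
	s.t();

Weeks or months later, the HR system only needs to reference that same ID in the upload file for the “Job Hires” metric to be stitched back to every eVar value captured online.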

Are They Superstars?

If you want to get a bit more advanced with Transaction ID, you can extend this concept to import additional metrics related to employee performance. For example, let’s say that each new hire is evaluated after their first six months on the job and that they are rated on a scale of 1 (bad) to 10 (great). In the future, you can import their performance as another numeric Success Event (just be sure to have your Adobe account manager extend Transaction ID beyond the default 90 days):

Which will allow you to see a report like this:

Then you can create a Calculated Metric that divides the rating by the number of hires. This will allow you to see ratings per hire in any eVar report, like the Campaigns report shown here:

Final Thoughts

This is a creative way to apply the concept of Transaction ID, but as you can imagine, there are many other ways to utilize this functionality. Anytime that you want to tie offline metrics to online metrics, you should consider using Transaction ID.

Adobe Analytics, Uncategorized

Daily Averages in Adobe Analytics

Traditionally it has been a tad awkward to create a metric that gives you a daily average in Adobe Analytics. You either had to create a metric that could only be used with a certain time frame (with a fixed number of days), or create the metric in Report Builder using Excel functions. Thankfully, with today’s modern technology we are better equipped to do basic math ;). This is still a bit awkward, but advanced users should find it easy to create a metric that others can then pull into their reports.

This approach takes advantage of the Approximate Count Distinct function to count the number of days your metric is seen across. The cool thing about this approach is that you can then use the metric across any time range and your denominator will always be right. Here’s how it would look in the calculated metric builder for a daily average of visits:

 

The most important part of this is the red section which is the APPROXIMATE COUNT DISTINCT function. This asks for a dimension as the only argument into which you would plug the “Day” dimension.

Now what’s up with the ROUND function in yellow around that? Well, as the name indicates, the count is approximate and doesn’t necessarily return a whole number like you would expect. To help it out a bit I just use the ROUND function to ensure that it is a whole number. From what I have seen so far this is good enough to make the calculation accurate. However, if it is off by more than .5 this could cause problems, so keep an eye open for that and let me know if this happens to you.
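Putting the pieces together, the metric definition is essentially this (a sketch of the logic, not the exact builder layout):

Daily Average Visits = Visits / ROUND( APPROXIMATE COUNT DISTINCT( Day ) )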

With this metric created you can now use this in your reporting to show a daily average along with your total values:

Weekday and Weekend Averages

You can also use a variation of this to give you averages for just the weekday or weekend. This can be especially useful if your company experiences dramatic shifts in traffic on the weekend, and you don’t want the usual weekly trend to throw off your comparisons. For example, if I’m looking at a particular Saturday and I want to know how that compares to the average, it may not make sense to compare to the average across all days. If the weekday days are really high then they would push the average up and the Saturday I’m looking at will always seem low. You could also do the same for certain days of the week if you had the need.

To do this we need to add just a smidge more to the metric. In this example, notice that the calculation is essentially the same. I have just wrapped it all in a “Weekend Hits” segment. The segment is created using a hits container where the “Weekday/Weekend” dimension is equal to “Weekend”.
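In other words, the weekend version is just the same calculation restricted by the segment (again, only a sketch):

Average Weekend Visits = [Weekend Hits segment] ( Visits / ROUND( APPROXIMATE COUNT DISTINCT( Day ) ) )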

Here’s how the segment would look:

And here is the segment at play in the calculated metric:

With the metric created just add it to your report. Now you can have the average weekend visits right next to the daily average and your total. You have now given birth to a beautiful little metric family. Congratulations!

Caution

Keep in mind that this will count the days where you have data. This means your denominator could be deflated if you use this to look at a site, dimension, segment or combination that doesn’t get data every day. For example, let’s say you want to look at the daily average visits to a page that gets a tiny amount of traffic. If over 30 days it just has traffic for 28 of those days then this approach will just give the average over 28 days. The reason for this is that the function is counting the line items in the day dimension for that item. If the day doesn’t have data it isn’t available for counting.

In most cases this will likely help you. I say this mainly because date ranges in AA default to “This Month”. If you are in the middle of the current month, then using the total number of days in your time range would throw the calculations off. With this approach, if you are using “This Month” and you are just on the 10th, the calculation will use 10 days. Cool, eh?

Conferences/Community, Featured

ACCELERATE 2.0 coming in 2019: Save the Date

After a brief hiatus while we examined the ever-changing conference landscape and regrouped here at Analytics Demystified, I am delighted to announce that our much loved ACCELERATE conference will be returning in January 2019.

On January 25th we will be gathering in Los Gatos, California at the beautiful Toll House Hotel to ACCELERATE attendees’ knowledge of digital measurement and optimization via our “Ten Tips in Twenty Minutes” format.  If you haven’t experienced our ground-breaking “Ten Tips” format before … think of it as a small firehose of information, aimed directly at you, in rapid-fire succession all morning long.

What’s more, as part of the evolution of ACCELERATE, the afternoon will feature both a keynote presentation that we think you will love and a session of intimate round-tables led by each of our “Ten Tips” speakers designed to allow participants to dig into each topic more deeply.  I am especially excited about the round-tables since, as an early participant and organizer in the old X Change conference, I have seen first-hand how deep these sessions can go, and how valuable they can be (when done properly!)

Also, as we have done in the past, on Thursday, January 24th, the Partners at Analytics Demystified will be leading half-day training sessions.  Led by Adam Greco, Brian Hawkins, Kevin Willeitner, Michele Kiss, Josh West, Tim Patten, and possibly … yours truly … these training sessions will cover the topics that digital analysts need most to ACCELERATE their own knowledge of Adobe and Google, analytics and optimization in practice, and their own professional careers.

But wait, there is one more thing!

While we have long been known for our commitment to the social aspects of analytics via Web Analytics Wednesday and the “lobby bar” gathering model … at ACCELERATE 2.0 we will be offering wholly social activities for folks who want to hang around and see a little more of Los Gatos.  Want to go mountain biking with Kevin Willeitner?  Or hiking with Tim Patten and Michele Kiss?  Now is your chance!

Watch for more information including our industry-low ticket prices, scheduling information, and details about hotel, training, and activities in the coming weeks … but for now we hope you will save January 24th and January 25th to join us in Los Gatos, California for ACCELERATE 2.0!

Adobe Analytics, Featured

Return Frequency % of Total

Recently, a co-worker ran into an issue in Adobe Analytics related to Return Frequency. The Return Frequency report in Adobe Analytics is not one that I use all that often, but it looks like this:

This report simply shows a distribution of how long it takes people to come back to your website. In this case, my co-worker was looking to show these visit frequencies as a percentage of all visits. To do this, she created a calculated metric that divided visits by the total number of visits like this:

Then she added it to the report as shown here:

At this point, she realized that something wasn’t right. As you can see here, the total number of Visits is 5,531, but when she opened the Visits metric, she saw this:

Then she realized that the Return Frequency report doesn’t show 1st-time visits, so even though you might expect the % of Total Visits calculated metric to include ALL visits, it doesn’t. This was proven by applying a 1st Time Visits segment to the Visits report like this:

Now we can see that when subtracting the 1st-time visits (22,155) from the total visits (27,686), we are left with 5,531, which is the amount shown in the Return Frequency report. Hence, it is not as easy as you’d think to see the % of total visits for each return frequency row.

Solution #1 – Adobe ReportBuilder

The easiest way to solve this problem is to use Adobe ReportBuilder. Using ReportBuilder, you can download two data blocks – one for Return Frequency and one for Visits:

Once you have downloaded these data blocks you can create new columns that divide each row by the correct total number of visits to see your % of total:

In this case, I re-created the original percentages shown in the Return Frequency report, but also added the desired % of Total visits in a column next to it so both could be seen.

Solution #2 – Analysis Workspace & Calculated Metrics

Since Analysis Workspace is what all the cool kids are using these days, I wanted to find a way to get this data there as well. To do this, I created a few new Calculated Metrics that used Visits and Return Frequency. Here is one example:

This Calculated Metric divides Visits where Return Frequency was less than 1 day by all Visits. Here is what it looks like when you view Total visits, the segmented version of Visits and the Calculated Metric in a table in Analysis Workspace:

Here you can see that the total visits for June is 27,686, that the less than 1 day visits were 2,276 and that the % of Total Visits is 8.2%. You will see that these figures match exactly what we saw in Adobe ReportBuilder as well (always a good sign!). Here is what it looks like if we add a few more Return Frequencies:

Again, our numbers match what we saw above. In this case, there is a finite number of Return Frequency options, so even though it is a bit of a pain to create a bunch of new Calculated Metrics, once they are created, you won’t have to do them again. I was able to create them quickly by using the SAVE AS feature in the Calculated Metrics builder.

As a bonus, you can also right-click and create an alert for one or more of these new calculated metrics:

Summary

So even though Adobe Analytics can have some quirks from time to time, as shown here, you can usually find multiple ways to get to the data you need if you understand all of the facets of the product. If you know of other or easier ways to do this, please leave a comment here. Thanks!

Adobe Analytics, Tag Management, Technical/Implementation, Testing and Optimization

Adobe Target + Analytics = Better Together

Last week I wrote about an Adobe Launch extension I built to familiarize myself with the extension development process. This extension can be used to integrate Adobe Analytics and Target in the same way that used to be possible prior to the A4T integration. For the first several years after Omniture acquired Offermatica (and Adobe acquired Omniture), the integration between the two products was rather simple but quite powerful. By using a built-in list variable called s.tnt (which did not count against the three list variables per report suite available to all Adobe customers), Target would pass a list of all activities and experiences in which a visitor was a participant. This enabled reporting in Analytics that would show the performance of each activity, and allow for deep-dive analysis using all the reports available in Analytics (Target offers a powerful but limited number of reports). When Target Standard was released, this integration became more difficult to utilize, because if you choose to use Analytics for Target (A4T) reporting, the plugins required to make it work are invalidated. Luckily, there is a way around it, and I’d like to describe it today.

Changes in Analytics

In order to continue to re-create the old s.tnt integration, you’ll need to use one of your three list variables. Choose the one you want, as well as the delimiter and the expiration (the s.tnt expiration was 2 weeks).

Changes in Target

The changes you need to make in Target are nearly as simple. Log into Target, go to “Setup” in the top menu and then click “Response Tokens” in the left menu. You’ll see a list of tokens, or data elements that exist within Target, that can be exposed on the page. Make sure that activity.id, experience.id, activity.name, and experience.name are all toggled on in the “Status” column. That’s it!

Changes in Your TMS

What we did in Analytics and Target made an integration possible – we now have a list variable ready to store Target experience data, and Target will now expose that data on every mbox call. Now, we need to connect the two tools and get data from Target to Analytics.

Because Target is synchronous, the first block of code we need to execute must also run synchronously – this might cause problems for you if you’re using Signal or GTM, as there aren’t any great options for synchronous loading with those tools. But you could do this in any of the following ways:

  • Use the “All Pages – Blocking (Synchronous)” condition in Ensighten
  • Put the code into the utag.sync.js template in Tealium
  • Use a “Top of Page” (DTM) or “Library Loaded” rule (Launch)

The code we need to add synchronously attaches an event listener that will respond any time Target returns an mbox response. The response tokens are inside this response, so we listen for the mbox response and then write that data somewhere it can be accessed by other tags. Here’s the code:

	if (window.adobe && adobe.target) {
		document.addEventListener(adobe.target.event.REQUEST_SUCCEEDED, function(e) {
			if (e.detail.responseTokens) {
				var tokens = e.detail.responseTokens;
				window.targetExperiences = [];
				for (var i=0; i<tokens.length; i++) {
					var inList = false;
					for (var j=0; j<targetExperiences.length; j++) {
						if (targetExperiences[j].activityId == tokens[i]['activity.id']) {
							inList = true;
							break;
						}
					}
					
					if (!inList) {
						targetExperiences.push({
							activityId: tokens[i]['activity.id'],
							activityName: tokens[i]['activity.name'],
							experienceId: tokens[i]['experience.id'],
							experienceName: tokens[i]['experience.name']
						});
					}
				}
			}
			
			if (window.targetLoaded) {
				// TODO: respond with an event tracking call
			} else {
				// TODO: respond with a page tracking call
			}

			// flag that Target has responded at least once so subsequent mbox
			// responses (and the failsafe below) don't trigger another pageview
			window.targetLoaded = true;
		});
	}
	
	// set failsafe in case Target doesn't load
	setTimeout(function() {
		if (!window.targetLoaded) {
			window.targetLoaded = true; // prevent a duplicate page call if Target responds after the timeout
			// TODO: respond with a page tracking call
		}
	}, 5000);

So what does this code do? It starts by adding an event listener that waits for Target to send out an mbox request and get a response back. Because of what we did earlier, that response will now carry at least a few tokens. If any of those tokens indicate the visitor has been placed within an activity, it checks to make sure we haven’t already tracked that activity on the current page (to avoid inflating instances). It then adds activity and experience IDs and names to a global array called “targetExperiences,” though you could push it to your data layer or anywhere else you want. We also set a flag called “targetLoaded” to true that allows us to use logic to fire either a page tracking call or an event tracking call, and avoid inflating page view counts on the page. We also have a failsafe in place, so that if for some reason Target does not load, we can initiate some error handling and avoid delaying tracking.

You’ll notice the word “TODO” in that code snippet a few times, because what you do with this event is really up to you. This is the point where things get a little tricky. Target is synchronous, but the events it registers are not. So there is no guarantee that this event will be triggered before the DOM ready event, when your TMS likely starts firing most tags. So you have to decide how you want to handle the event. Here are some options:

  • My code above is written in a way that allows you to track a pageview on the very first mbox load, and a custom link/event tracking call on all subsequent mbox updates. You could do this with a utag.view and utag.link call (Tealium), or trigger a Bootstrapper event with Ensighten, or a direct call rule with DTM. If you do this, you’ll need to make sure you configure the TMS to not fire the Adobe server call on DOM ready (if you’re using DTM, this is a huge pain; luckily, it’s much easier with Launch), or you’ll double-count every page.
  • You could just configure the TMS to call a custom link call every time, which will probably increase your server calls dramatically. It may also make it difficult to analyze experiences that begin on page load.

What my Launch extension does is fire one direct call rule on the first mbox call, and a different call for all subsequent mbox calls. You can then configure the Adobe Analytics tag to fire an s.t() call (pageview) for that initial direct call rule, and an s.tl() call for all others. If you’re doing this with Tealium, make sure to configure your implementation to wait for your utag.view() call rather than allowing the automatic one to track on DOM ready. This is the closest behavior to how the original Target-Analytics integration worked.
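For example, if you go the Launch/DTM direct call rule route, the two TODO spots inside the event listener might end up looking something like the sketch below (the rule names here are made up; the failsafe TODO would simply fire the pageview rule as well):

	if (window.targetLoaded) {
		// subsequent mbox responses: rule configured to fire an s.tl() event call
		_satellite.track('target-mbox-update');
	} else {
		// first mbox response on the page: rule configured to fire the s.t() pageview
		_satellite.track('target-page-load');
	}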

I’d also recommend not limiting yourself to using response tokens in just this one way. You’ll notice that there are tokens available for geographic data (based on an IP lookup) and many other things. One interesting use case is that geographic data could be extremely useful in achieving GDPR compliance. While the old integration was simple and straightforward, and this new approach is a little more cumbersome, it’s far more powerful and gives you many more options. I’d love to hear what new ways you find to take advantage of response tokens in Adobe Target!

Photo Credit: M Liao (Flickr)

Adobe Analytics, Featured

Measuring Page Load Time With Success Events

One of the things I have noticed lately is how slowly some websites are loading, especially media-related websites. For example, recently I visited wired.com and couldn’t get anything to work. Then I looked at Ghostery and saw that they had 126 tags on their site and a page load time of almost 20 seconds!

I have seen lots of articles showing that fast loading pages can have huge positive impacts on website conversion, but the proliferation of JavaScript tags may be slowly killing websites! Hopefully some of the new GDPR regulations will force companies to re-examine how many tags are on their sites and whether all of them are still needed. In the meantime, I highly recommend that you use a tool like ObservePoint to understand how many tags are lingering on your site now.

As a web analyst, you may want to measure how long it is taking your pages to load. Doing this isn’t trivial, as can be seen in my partner Josh West’s 2015 blog post. In this post, Josh shows some of the ways you can capture page load time in a dimension in Adobe or Google Analytics, though doing so is not going to be completely exact. Regardless, I suggest you check out his post and consider adding this dimension to your analytics implementation.

One thing that Josh alluded to, but did not go into depth on, is the idea of storing page load time as a metric. This is quite different than capturing the load time in a dimension, so I thought I would touch upon how to do this in Adobe Analytics (which can also be done in Google Analytics). If you want to store page load time as a metric in Adobe Analytics, you would pass the actual load time (in seconds or milliseconds) to a Numeric Success Event. This would create an aggregated page load time metric that is increased with every website page view. This new metric can be divided by page views or you can set a separate counter page load denominator success event (if you are not going to track page load time on every page). Here is what you might see if you set the page load time and denominator metrics in the debugger:
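(Behind that debugger view, the tagging itself might be as simple as the sketch below; the event and eVar numbers are hypothetical, and the load time value would come from whichever technique in Josh’s post you choose.)

	// loadTimeSeconds is assumed to be computed elsewhere (e.g. via the Navigation Timing API)
	s.events = 'event20=' + loadTimeSeconds.toFixed(2) + ',event21'; // numeric load time + denominator counter
	s.eVar10 = s.pageName;                                           // page name for load time breakdowns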

You would also want to capture the page name in an eVar so you can easily see the page load time metrics by page. This is what the data might look like in a page name eVar report (actual page names hidden here):

In this case, there is a calculated metric that is dividing the aggregated page load time by the denominator to see an average page load time for each page. There are also ways that you can use Visit metrics to see the average page load time per visit. Regardless of which version you use, this type of report can help you identify your problem pages so you can see if there are things you can do to improve conversion. I suggest combining this with a Participation report to see which pages impact your conversion the most, but are loading slowly.

Another cool thing you can do with this data is to trend the average page load time for the website overall. Since you already have created the calculated metric shown below, you can simply open this metric by itself (vs. viewing by page name), to see the overall trend of page load speeds for your site and then set some internal targets or goals to strive for in the future.

Adobe Analytics, Tag Management, Technical/Implementation

My First Crack at Adobe Launch Extension Development

Over the past few months, I’ve been spending more and more time in Adobe Launch. So far, I’m liking what I see – though I’m hoping the publish process gets ironed out a bit in the coming months. But that’s not the focus of this post; rather, I wanted to describe my experience working with extensions in Launch. I recently authored my first extension – which offers a few very useful ways to integrate Adobe Target with other tools and extensions in Launch. You can find out more about it here, or ping me with any questions if you decide to add the extension to your Launch configuration. Next week I’ll try to write more about how you might do something similar using any of the other major tag management systems. But for now, I’m more interested in how extension development works, and I’d like to share some of the things I learned along the way.

Extension Development is New (and Evolving) Territory for Adobe

The idea that Adobe has so freely opened up its platform to allow developers to share their own code across Adobe’s vast network of customers is admittedly new to me. After all, I can remember the days when Omniture/Adobe didn’t even want to open up its platform to a single customer, much less all of them. Remember the days of usage tokens for its APIs? Or having to pay for a consulting engagement just to get the code to use an advanced plugin like Channel Manager? So the idea that Adobe has opened things up to the point where I can write my own code within Launch, programmatically send it to Adobe, and have it then available for any Adobe customer to use – that’s pretty amazing. And for being so new, the process is actually pretty smooth.

What Works Well

Adobe has put together a pretty solid documentation section for extension developers. All the major topics are covered, and the Getting Started guide should help you get through the tricky parts of your first extension like authentication, access tokens, and uploading your extension package to the integration environment. One thing to note is that just about everything you define in your extension is a “type” of that thing, not the actual thing. For example, my extension exposes data from Adobe Target for use by other extensions. But I didn’t immediately realize that my data element definitions didn’t actually define new data elements for use in Launch; it only created a new “type” of data element in the UI that can then be used to create a data element. The same is true for custom events and actions. That makes sense now, but it took some getting used to.

During the time I spent developing my extension, I also found the Launch product team is working continuously to improve the process for us. When I started, the documentation offered a somewhat clunky process to retrieve an access token, zip my extension, and use a Postman collection to upload it. By the time I was finished, Adobe had released a Node package (npm) to basically do all the hard work. I also found the Launch product team to be incredibly helpful – they responded almost immediately to my questions on their Slack group. They definitely seem eager to build out a community as quickly as possible.

I also found the integration environment to be very helpful in testing out my extension. It’s almost identical to the production environment of Launch; the main difference is that it’s full of extensions in development by people just like me. So you can see what others are working on, and you can get immediate feedback on whether your extension works the way it should. There is even a fair amount of error logging available if you break something – though hopefully this will be expanded in the coming months.

What Could Work Better

Once I finished my extension, I noticed that there isn’t a real natural spot to document how your extension should work. I opted to put mine into the main extension view, even though there was no other configuration needed that would require such a view. While I was working on my extension, it was suggested that I put instructions in my Exchange listing, which doesn’t seem like a very natural place for it, either.

I also hope that, over time, Adobe offers an easier way to style your views to match theirs. For example, if your extension needs to know the name of a data element it should populate, you need a form field to collect this input. Making that form look the same as everything else in Launch would be ideal. I pulled this off by scraping the HTML and JavaScript from one of Adobe’s own extensions and re-formatting it. But a “style toolkit” would be a nice addition to keep the user experience the same.

Lastly, while each of the sections in the Getting Started guide had examples, some of the more advanced topics could use some additional exploration. For example, it took me a few tries to decide whether my extension would work better with a custom event type, or with just some custom code that triggered a direct call rule. And figuring out how to integrate with other extensions – how to access other extensions’ objects and code – wasn’t exactly easy, and I still have some unanswered questions because I found a workaround and ended up not needing it.

Perhaps the hardest part of the whole process was getting my Exchange listing approved. The Exchange covers a lot of integrations beyond just Adobe Launch, some of which are likely far more complex than what mine does. A lot of the required images, screenshots, and details seemed like overkill – so a tiered approach to listings would be great, too.

What I’d Like to See Next

Extension development is in its infancy still, but one thing I hope is on the roadmap is the ability to customize an extension to work the way you need it. A client I recently migrated used both Facebook and Pinterest, but the existing extensions didn’t work for their tag implementation. There were events and data they needed to capture that the extensions didn’t support. I hope that in a future iteration, I’ll be able to “check out” an extension from the library and download the package, make it work the way I need, and either create my own version of the extension or contribute to an update of someone else’s extension that the whole community can benefit from. The inability to customize tag templates has plagued every paid tag management solution except Tealium (which has supported it from the beginning) for years – in my opinion, it’s what turns tag management from a tool used primarily to deploy custom JavaScript into a powerful digital marketing toolbelt. It’s not something I’d expect so early in the game, but I hope it will be added soon.

In conclusion, my hat goes off to the Launch development team; they’ve come up with a really great way to build a collaborative community that pushes Launch forward. No initial release will ever be perfect, but there’s a lot to work with and a lot of opportunity for all of us in the future to shape the direction Launch takes and have some influence in how it’s adopted. And that’s an exciting place to be.

Photo Credit: Rod Herrea (Flickr)

Featured, google analytics, Reporting

A Scalable Way To Add Annotations of Notable Events To Your Reports in Data Studio

Documenting and sharing important events that affected your business are key to an accurate interpretation of your data.

For example, perhaps your analytics tracking broke for a week last July, or you ran a huge promo in December. Or maybe you doubled paid search spend, or ran a huge A/B test. These events are always top of mind at the time, but memories fade quickly, and turnover happens, so documenting these events is key!

Within Google Analytics itself, there’s an available feature to add “Annotations” to your reports. These annotations show up as little markers on trend charts in all standard reports, and you can expand to read the details of a specific event.

However, there is a major challenge with annotations as they exist today: They essentially live in a silo – they’re not accessible outside the standard GA reports. This means you can’t access these annotations in:

  • Google Analytics flat-table custom reports
  • Google Analytics API data requests
  • Big Query data requests
  • Data Studio reports

While I can’t solve All.The.Things, I do have a handy option to incorporate annotations into Google Data Studio. Here’s a quick example:

Not too long ago, Data Studio added a new feature that essentially “unified” the idea of a date across multiple data sources. (Previously, a date selector would only affect the data source you had created it for.)

One nifty application of this feature is the ability to pull a list of important events from a Google Spreadsheet into your Data Studio report, so that you have a very similar feature to Annotations.

To do this:

Prerequisite: Your report should really include a Date filter for this to work well. You don’t want all annotations (for all time) to show, as it may be overwhelming, depending on the timeframe.

Step 1: Create a spreadsheet that contains all of your GA annotations. (Feel free to add any others, while you’re at it. Perhaps yours haven’t been kept very up to date…! You’re not alone.)

I did this simply by selecting the entire timeframe of my data set and copy-pasting from the Annotations table in GA into a spreadsheet.

You’ll want to include these dimensions in your spreadsheet:

  • Date
  • The contents of the annotation itself
  • Who added it (why not, might as well)

You’ll also want to add a “dummy metric” – I just created one called Count, which is 1 for each row. (Technically, I threw a formula in to put a 1 in that row as long as there’s a comment.)

Step 2: Add this as a Data Source in Data Studio

First, “Create New Data Source”

Then select your spreadsheet:

It should happen automatically, but just confirm that the date dimension is correct:

Step 3: Create a data table

Now you create a data table that includes those annotations.

Here are the settings I used:

Data Settings:

  • Dimensions:
    • Date
    • Comment
    • (You could add the user who added it, or a contact person, if you so choose)
  • Metric:
    • Count (just because you need something there)
  • Rows per Page:
    • 5 (to conserve space)
  • Sort:
    • By Date (descending)
  • Default Date Range:
    • Auto (This is important – this is how the table of annotations will update whenever you use the date selector on the report!)

Style settings:

  • Table Body:
    • Wrap text (so they can read the entire annotation, even if it’s long)
  • Table Footer:
    • Show Pagination, and use Compact (so if there are more than 5 annotations during the timeframe the user is looking at, they can scroll through the rest of them)

Apart from that, a lot of the other choices are stylistic…

  • I chose a lot of things based on the data/pixel ratio:
    • I don’t show row numbers (unnecessary information)
    • I don’t show any lines or borders on the table, or fill/background for the heading row
    • I chose a small font, since the data itself is the primary information I want the user to focus on

I also did a couple of hack-y things, like just covering over the Count column with a grey filled box. So fancy…!

Finally, I put my new “Notable Events” table at the very bottom of the page, and set it to show on all pages (Arrange > Make Report Level.)

You might choose to place it somewhere else, or display it differently, or only show it on some pages.

And that’s it…!

But, there’s more you could do 

This is a really simple example. You can expand it out to make it even more useful. For example, your spreadsheet could include:

  • Brand: Display (or allow filtering) of notable events by Brand, or for a specific Brand plus Global
  • Site area: To filter based on events affecting the home page vs. product pages vs. checkout (etc)
  • Type of Notable Event: For example, A/B test vs. Marketing Campaign vs. Site Issue vs. Analytics Issue vs. Data System Affected (e.g. GA vs. AdWords)
  • Country… 
  • There are a wide range of possible use cases, depending on your business

Your spreadsheet can be collaborative, so that others in the organization can add their own events.

One other cool thing is that it’s very easy to just copy-paste rows in a spreadsheet. So let’s say you had an issue that started June 1 and ended June 7. You could easily add one row for each of those days in June, so that even if a user pulled say, June 6-10, they’d see the annotation noted for June 6 and June 7. That’s more cumbersome in Google Analytics, where you’d have to add an annotation for every day.

Limitations

It is, of course, a bit more leg work to maintain both this set of annotations, AND the default annotations in Google Analytics. (Assuming, of course, that you choose to maintain both, rather than just using this method.) But unless GA exposes the contents of the annotations in a way that we can pull in to Data Studio, the hack-y solution will need to be it!

Solving The.Other.Things

I won’t go into it here, but I mentioned the challenge of the default GA annotations and both API data requests and Big Query. This solution doesn’t have to be limited to Data Studio: you could also use this table in Big Query by connecting the spreadsheet, and you could similarly pull this data into a report based on the GA API (for example, by using the spreadsheet as a data source in Tableau.)

Thoughts? 

It’s a pretty small thing, but at least it’s a way to incorporate comments on the data within Data Studio, in a way that the comments are based on the timeframe the user is actually looking at.

Thoughts? Other cool ideas? Please leave them in the comments!

Tag Management, Technical/Implementation

Helpful Implementation Tip – Rewrite HTML on Page Load with Charles Proxy

There are a variety of methods and tools used for debugging and QA’ing an analytics implementation.  While simply using the developer tools built into your favorite browser will usually suffice for some of the more common QA needs, there are times that a more robust tool is needed.  One such situation is the need to either swap out code on the page, or add code to a page.

To use an example, there are many times that a new microsite will be launching, but due to dev sprints/cycles, the Tag Management System (TMS) and dataLayer that you are going to be working with haven’t been added to the site yet.  However, you may need to get some tags set up in the TMS and ensure they are working, while the engineering team works on getting the TMS installed on the site. In these situations, it would be very difficult to ensure that everything in the TMS is working correctly prior to the release.  This is one of many situations where Charles Proxy can be a useful tool to have available.

Charles Proxy is a proxy tool that sits in the middle of your connection to the internet.  This means that it captures and processes every bit of information that is sent between your computer and the internet and therefore can allow you to manipulate that information.  One such manipulation you can perform is to change the body of any response that is received from a web server, i.e. the HTML of a web page.

To go back to my example above, let’s say that I wanted to install Google Tag Manager (GTM) on a webpage.  I would open up Charles, go to the Tools menu and then go to Rewrite. I would then create a new rule that replaces the “</head>” text with the dataLayer definition, the GTM snippet and then the “</head>” text. (See below for how this is set up.)  This will result in the browser (only on your computer and only while Charles is open with the rewrite enabled) receiving an HTML response that contains the GTM snippet.

Example Setup

Step 1: Open the Rewrite tool from the “Tools” menu.

Step 2: Click the add button on the bottom left to add a new Rewrite rule.  Then click on the Add button on the middle right to add a configuration for which domains/pages to apply this rewrite rule to.

Step 3: Click the Add button on the bottom right.  This will allow you to specify what action you want to take on the domains/pages that you specified.  For this example, you will want to choose a “Type” of “Body” and “Where” of “Response” as you are modifying the Body of the response. Under “Match” you are going to put the closing “</head>” tag as this is where you are going to install the TMS.  Then, under “Replace”, you will put the snippet you want to place before the closing head tag (in this example, the TMS and dataLayer tags) followed by the same closing “</head>” tag. When you are done, click “OK” to close the Action window.
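To make the Match/Replace pairing a bit more concrete, here is a rough sketch of what the two values might contain for the GTM example above – the dataLayer contents and container ID are hypothetical placeholders, and you would paste in the real snippet from your own GTM container:

Match:
</head>

Replace:
<script>
  // hypothetical dataLayer definition for the microsite
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({ 'pageName': 'microsite home' });
</script>
<!-- your GTM container snippet (GTM-XXXXXXX) goes here -->
</head>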

Step 4: Click OK on the Rewrite Settings window to save your rewrite rule.  Then refresh the domains/pages in your browser to see whether your new rewrite rule is working as expected.

Why use Charles instead of standard browser developer tools?

While you could perform this same task using the developer tools in your chosen browser, that would have to be done on each page that you need to QA, each time a page is loaded.  A Charles rewrite, on the other hand, would be automatically placed on each page load. It also ensures that the GTM snippet and dataLayer are loaded in the correct place in the DOM and that everything fires in the correct order.  This is essential to ensuring that your QA doesn’t return different results than it would on your production site (or staging site once the GTM snippet is placed).

There are many ways that Charles rewrites can be used.  Here are a few examples of when I utilize rewrites –

  • Changing the container bootstrap/container that is used for staging sites (this is less common, but sometimes needed depending on the situation);
  • Adding/changing core JavaScript that is needed for analytics to function;
  • Modifying HTML to test out specific scenarios with tracking (instead of having to do so in the browser’s developer tools on each page load);
  • Manipulating the dataLayer on each page prior to a staging site update.  This can be useful for testing out a tagging plan prior to sending to a dev team (which helps to ensure less back and forth in QA when something wasn’t quite defined correctly in your requirements).

I hope you have found this information useful.  What are your thoughts? Do you have any other great use cases that I may have missed?  Leave your comments below!

Adobe Analytics, Featured

Product Ratings/Reviews in Adobe Analytics

Many retailers use product ratings as a way to convince buyers that they should take the next step in conversion, which is usually a cart addition. Showing how often a product has been reviewed and its average product rating helps build product credibility and is something consumers have grown used to from popular sites like amazon.com.

Digital analytics tools like Adobe Analytics can be used to determine whether the product ratings on your site/app are having a positive or negative impact on conversion. In this post, I will share some ways you can track product review information to see its impact on your data.

Impact of Having Product Ratings/Reviews

The first thing you should do with product ratings and reviews is to capture the current avg. rating and # of reviews in a product syntax merchandising eVar when visitors view the product detail page. In order to save eVars, I sometimes concatenate these two values with a separator and then use RegEx and the SAINT Classification RuleBuilder to split them out later. In the preceding screenshot, for example, you might pass 4.7|3 to the eVar and then split those values out later via SAINT. Capturing these values at the time of the product detail page view allows you to lock in what the rating and # of reviews was at the time of the product view. Here is what the rating merchandising eVar might look like once split out:
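For illustration, here is a hypothetical sketch of how that concatenated value might be set on the product detail page – the eVar number and product ID are placeholders, and I’m using a colon as the separator in this sketch since the pipe character already delimits multiple events/eVars inside product syntax:

// Hypothetical AppMeasurement tagging for the product detail page view.
// Assumes eVar25 is configured as a product-syntax merchandising eVar.
s.events = "prodView";
s.products = ";H8194;;;;eVar25=4.7:3"; // "average rating:# of reviews", split later via SAINT
s.t();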

You can also group these items using SAINT to see how ratings between 4.0 – 4.5 perform vs. 4.5 – 5.0, etc… You can also sort this report by your conversion metrics, but if you do so, I would recommend adding a percentile function so you don’t just see rows that have very few product views or orders. The same type of report can be run for # of reviews as well:

Lastly, if you have products that don’t have ratings/reviews at all, the preceding reports will have a “None” row, which will allow you to see the conversion rate when no ratings/reviews exist – useful information for gauging the overall impact of ratings/reviews on your site.

Average Product Rating Calculated Metric

In addition to capturing the average rating and the # of reviews in an eVar, another thing you can do is to capture the same values in numeric success events. As a reminder, a numeric success event is a metric that can be incremented by more than one in each server call. For example, when a visitor views the following product page, the average product rating of 4.67 is being passed to numeric success event 50. This means that event 50 is being increased for the entire website by 4.67 each time this product is viewed. Since the Products variable is also set, this 4.67 is “bound” (associated) to product H8194. At the same time, we need a denominator to divide this rating by to compute the overall product rating average. In this case, event 51 is set to “1” each time that a rating is present (you cannot use the Product Views metric since there may be cases in which no rating is present but there is a product view).  Here is what the tagging might look like when it is complete:
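As a rough, hypothetical sketch of that tagging in AppMeasurement terms (the event numbers match the description above; the product ID and exact placement are illustrative):

// Hypothetical product detail page tagging.
// event50 accumulates the average rating; event51 counts views where a rating was present.
s.events = "prodView,event50,event51";
// the numeric values are bound to the product via the products string
s.products = ";H8194;;;event50=4.67|event51=1";
s.t();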

Below is what the data looks like once it is collected:

You can see Product Views, the accumulated star ratings, the number of times ratings were available and a calculated metric to compute the average rating for each product. Given that we already have the average product rating in an eVar, this may not seem important, but the cool part of this is that now the product rating can be trended over time. Simply add a chart visualization and then select a specific product to see how its rating changes over time:

The other cool part of this is that you can leverage your product classifications to group these numeric ratings by product category:

Using both eVars and success events to capture product ratings/reviews on your site allows you to capture what your visitors saw for each product while on your product detail pages. Having this information can be helpful to see if ratings/reviews are important to your site and to be aware of the impact for each product and/or product category.

Adobe Analytics, Featured

Engagement Scoring Using Approx. Count Distinct

Back in 2015, I wrote a post about using Calculated Metrics to create an Engagement Score. In that post, I mentioned that it was possible to pick a series of success events and multiply them by some sort of weighted number to compute an overall website engagement score. This was an alternative to a different method of tracking visitor engagement via numeric success events set via JavaScript (which was also described in the post). However, given that Adobe has added the cool Approximate Count Distinct function to the analytics product, I recently had an idea about a different way to compute website engagement that I thought I would share.

Adding Depth to Website Engagement

In my previous post, website engagement was computed simply by multiplying chosen success events by a weighted multiplier like this:

This approach is workable but lacks a depth component. For example, the first parameter looks at how many Product Views take place but doesn’t account for how many different products are viewed. There may be a situation in which you want to assign more website engagement to visits that get visitors to view multiple products vs. just one. The same concept could apply to Page Views and Page Names, Video Views and Video Names, etc…

Using the Approximate Count Distinct function, it is now possible to add a depth component to the website engagement formula. To see how this might work, let’s go through an example. Imagine that in a very basic website engagement model, you want to look at Blog Post Views and Internal Searches occurring on your website. You have success events for both Blog Post Views and Internal Searches and you also have eVars that capture the Blog Post Titles and Internal Search Keywords.

To start, you can use the Approximate Count Distinct function to calculate how many unique Blog Post Titles exist (for the chosen date range) using this formula:

Next, you can multiply the number of Blog Post Views by the number of unique Blog Post Titles to come up with a Blog Post Engagement score as shown here:

Note that since the Approximate Count Distinct function is not 100% accurate, the numbers will differ slightly from what you would get if you did the math by hand, but in general, the function will be at least 95% accurate.

You can repeat this process for Internal Search Keywords. First, you compute the Approximate Count of unique Search Keywords like this:

Then you create a new calculated metric that multiplies the number of Internal Searches by the unique number of Keywords. Here is what a report looks like with all six metrics:

Website Engagement Calculation

Now that you have created the building blocks for your simplistic website engagement score, it is time to put them together and add some weighting. Weighting is important, because it is unlikely that your individual elements will have the same importance to your website. In this case, let’s imagine that a Blog Post View is much more important than an Internal Search, so it is assigned a weight score of 90, whereas a score of 10 is assigned to Internal Searches. If you are creating your own engagement score, you may have more elements and can weight them as you see fit.

In the following formula, you can see that I am adding the Blog Post engagement score to the Internal Search engagement score and applying the 90/10 weighting all in one formula. I am also dividing the entire formula by Visits to normalize it, so my engagement score doesn’t rise or fall based upon differing numbers of Visits over time:
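Written out, the combined calculated metric is structured roughly like this (using ACD as shorthand for the Approximate Count Distinct function):

Engagement Score per Visit =
  ( (Blog Post Views x ACD(Blog Post Title)) x 90
  + (Internal Searches x ACD(Internal Search Keyword)) x 10 )
  / Visits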

Here you can see a version of the engagement score as a raw number (multiplied by 90 & 10) and then the final one that is divided by Visits:

Finally, you can plot the engagement score in a trended bar chart. In this case, I am trending both the engagement score and visits in the same chart:

In the end, this engagement score calculation isn’t significantly different from the original one, but adding the Approximate Count Distinct function allows you to add some more depth to the overall calculation. If you don’t want to multiply the number of success event instances by ALL of the unique count of values, you could alternatively use an IF function with the GREATER THAN function to cap the number of unique items at a certain amount (i.e. if there are more than 50 unique Blog Post Titles, use 50; otherwise, use the unique count).
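As a sketch of that capped variation (using 50 as the example cap; the exact pieces would be assembled with the IF and GREATER THAN functions in the calculated metric builder):

Capped Unique Titles = IF( ACD(Blog Post Title) > 50, 50, ACD(Blog Post Title) )
Blog Post Engagement = Blog Post Views x Capped Unique Titles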

The best part of this approach is that it requires no JavaScript tagging (assuming you already have the success events and eVars you need in the calculation). So you can play around with the formula and its weightings with no fear of negatively impacting your implementation and no need for IT resources! I suggest that you give it a try and see if this type of engagement score can be used as an overall health gauge of how your website is performing over time.

Featured, Testing and Optimization

Adobe Insider Awesomeness and Geo Test deep dive

Adobe Insider and EXBE

The first Adobe Insider with Adobe Target took place on June 1st in Atlanta, Georgia.  I wrote a blog post a couple of weeks back about the multi-city event but after attending the first one, I thought I would share some takeaways.  

The event was very worthwhile and everyone that I talked to was glad to have attended.  The location was an old theatre, and Hamilton was even set to run in that building later that evening.  Had I known that my flight back to Chicago that evening would be delayed by four hours, I would have tried to score a ticket.  The Insider Tour is broken down into two tracks: an Analytics one and an Adobe Target/Personalization one.  My guess is that there were about 150 to 180 attendees, which made for a more social and intimate gathering.

The Personalization track got to hang directly with the Target Product Team and hear some presentations on what they are working on, what is set to be released, and they even got to give some feedback as to product direction and focus.

The roundtable discussions went really well with lots of interaction and feedback.  I especially found it interesting to see the company-to-company conversations taking place.  The roundtable that I was at had really advanced users of Adobe Target as well as brand new users, which allowed newbies to get advice and tips directly from other organizations vs. vendors or consultants.

As for what the attendees liked the most, they seemed to really enjoy meeting and working directly with the Product Team members, but the biggest and most popular thing of the day was EXBE.  EXBE stands for “Experience Business Excellence”.  You are not alone if that doesn’t roll off the tongue nicely.  Essentially, this all translates to someone (not Adobe and not a Consultant) sharing a case study of a test that they ran.  The test could be simple or the test could be very complex, it doesn’t matter.  The presenter would simply share any background, test design, setup, and any results that they could share.

Home Depot shared a case study at this year’s event and it was a big hit.  Priyanka, from Home Depot, walked attendees through a test that made a very substantial impact on Home Depot’s business.  Attendees asked a ton of questions about the test and the conversation even turned into a geek-out.  Priyanka made really cool use of multiple locations within a single experience.  This capability mapped back to using multiple mboxes in the same experience.  Even some advanced users didn’t know it was possible.

So, if you are in LOS ANGELES, CHICAGO, NEW YORK, or DALLAS and plan on attending the Insider Tour, I strongly encourage you to submit a test and present it.  Even if the test may seem very straightforward or not that exciting, there will be attendees that will benefit substantially.  The presentation could be 5 minutes or 30 minutes, and there is no need to worry if you can’t share actual results.  It is also a great opportunity to present to your peers and in front of a very friendly audience.  You can register here or via the very nerdy non-mboxy CTA below (see if you can figure out what I am doing here) if you are interested.

Sample Test and feedback…

At the event that day, an attendee was telling me that they don’t do anything fancy with their tests, otherwise they would have submitted something and gotten the experience of presenting to fellow testers.  I explained that I don’t think that matters as long as the test is valuable to you or to your business.  I then described a very simple test that I am running on the Demystified site – one that some might dismiss as too simple, but that would be a good example of a test to present.

Also, at the event, a few people asked that I write more about test setup and some of the ways I approach it within Target.  So, I thought I would walk through the above-mentioned geo-targeted test that I have running on the Demystified website.

 

Test Design and Execution

Hypothesis

Adam and I are joining Adobe on the Adobe Insider Tour in Atlanta, Los Angeles, Chicago, New York and Dallas.  We hypothesize that geo-targeting a banner to those five cities encouraging attendance will increase clicks on the hero compared to the rotating carousel that is hard-coded into the site.  We also hope that, if some of our current or previous customers didn’t know about the Insider event, the test might make them aware of it and they’ll attend.

Built into Adobe Target is geo-targeting based on reverse IP lookup.  Target uses the same provider as Analytics, and users can target based on zip code, city, state, DMA, and country.  I chose to use DMA so as to get the biggest reach.

The data in this box represents the geo attributes for YOU, based on your IP address.  I am pumping this in via a test on this page.

Default Content – if you are seeing this, you are not getting the test content from Target

Test Design

So as to make sure we have a control group and to make sure we get our message out to as many people as possible, we went with a 90/10 split.  Of course, this is not ideal for sample size calculations, etc… but that is a whole other subject.  This is more about the tactical steps of a geo-targeted test.

Experience A:  10% holdout group to serve as my baseline (all five cities will be represented here)

Experience B:  Atlanta 

Experience C:  Los Angeles

Experience D:  Chicago

Experience E:  New York

Experience F:  Dallas

I also used an Experience Targeted test in the event that someone got into the test and happened to travel to another city that was part of our test.  The Experience Targeted test enables their offer to change to the corresponding test Experience.

The banner would look like this (I live in the Chicago DMA so I am getting this banner:).  When I go to Los Angeles next week, I will get the one for Los Angeles.  If I had used an A/B test, I would continue to get Chicago since that is where I was first assigned.

Profile to make this happen

To have my 10% group, I have to use Target profiles.  There is no way to use % allocation coupled with visitor attributes like DMA so profiles are the way to go.  I’ve long argued that the most powerful part of the Adobe Target platform is the ability to profile visitors client side or server side.  For this use case, we are going to use the server side scripts to get our 10% control group.  Below is my script and you are welcome to copy it into your account.  Just be sure to name it “random_10_group”.

This script randomly generates a number and, based off of that number, puts visitors into 1 of 10 groups.  Each group or set of groups can be used for targeting.  You can also force yourself into a group by appending the URL parameter 'testgroup' set to the number of the group that you want.  For example, http://analyticsdemystified.com/?testgroup=4 would put me in group4 for this profile.  Helpful when debugging or QA'ing tests that make use of this.

These groups are mutually exclusive as well so if your company wants to incorporate test swimlanes, this script will be helpful.

if (!user.get('random_10_group')) {
// generate a random number from 0-99 and check for a 'testgroup' override in the query string
var ran_number = Math.floor(Math.random() * 100),
query = (page.query || '').toLowerCase();
query = query.indexOf('testgroup=') > -1 ? query.substring(query.indexOf('testgroup=') + 10) : '';
// check for the two-character override first so 'testgroup=10' isn't caught by the 'testgroup=1' branch
if (query.substring(0, 2) == '10') {
return 'group10';
} else if (query.charAt(0) == '1') {
return 'group1';
} else if (query.charAt(0) == '2') {
return 'group2';
} else if (query.charAt(0) == '3') {
return 'group3';
} else if (query.charAt(0) == '4') {
return 'group4';
} else if (query.charAt(0) == '5') {
return 'group5';
} else if (query.charAt(0) == '6') {
return 'group6';
} else if (query.charAt(0) == '7') {
return 'group7';
} else if (query.charAt(0) == '8') {
return 'group8';
} else if (query.charAt(0) == '9') {
return 'group9';
// no override present, so assign one of ten mutually exclusive groups (10% each)
} else if (ran_number <= 9) {
return 'group1';
} else if (ran_number <= 19) {
return 'group2';
} else if (ran_number <= 29) {
return 'group3';
} else if (ran_number <= 39) {
return 'group4';
} else if (ran_number <= 49) {
return 'group5';
} else if (ran_number <= 59) {
return 'group6';
} else if (ran_number <= 69) {
return 'group7';
} else if (ran_number <= 79) {
return 'group8';
} else if (ran_number <= 89) {
return 'group9';
} else {
return 'group10';
}
}

Audiences

Before I go into setting up the test, I am going to create my Audiences.  If you are going to be using more than a couple of Audiences in your test, I recommend you adopt this process.  Creating Audiences during the test setup can interrupt the flow of things and if you have them already created, it takes no time at all to add them as needed.

Here is my first Audience – it is my 10% control group that was made possible by the above profile parameter, and it has all five cities that I am using for this test.  This will be my first Experience in my Experience Targeted Test, which is a very important detail.  For Experience Targeted Tests, visitors are evaluated for Experiences from top to bottom, so had I put my New York Experience first, I would get visitors that should be in my Control group in that Experience.

And here is my New York Audience.  Chicago, Dallas, Atlanta, and Los Angeles are setup the same way.

 

Offer Code

Here is an example of the code I used for my test. This is the code for the offer that will display for users in Los Angeles.  I could have used the VEC to do this test, but our carousel is finicky and would have taken too much time to figure out in the VEC, so I went with a form-based activity.  I am old school and prefer to use Form vs. VEC.  I do love the easy click tracking as conversion events in the VEC and wish they would put that in form-based testing.  Users should only use the VEC if they are actually using the Visual Composer.  Too often I see users select the VEC only to place custom code in it.  That adds overhead and is unnecessary.

 

<!-- I use CSS here to suppress the hero from showing -->
<style id="flickersuppression">
#slider {visibility:hidden !important}
</style>
<script>
(function($){var c=function(s,f){if($(s)[0]){try{f.apply($(s)[0])}catch(e){setTimeout(function(){c(s,f)},1)}}else{setTimeout(function(){c(s,f)},1)}};if($.isReady){setTimeout("c=function(){}",100)}$.fn.elementOnLoad=function(f){c(this.selector,f)}})(jQuery);
// this next line waits for my test content to show up in the DOM, then changes the experience
jQuery('.rsArrowRight > .rsArrowIcn').elementOnLoad(function(){
$(".rsContainer").replaceWith("<div class=\"rsContent\">\n <a href=\"https://webanalyticsdemystif.tt.omtrdc.net/m2/webanalyticsdemystif/ubox/page?mbox=insider&mboxDefault=http%3A%2F%2Fwww.adobeeventsonline.com%2FInsiderTour%2F2018%2F/\"><img class=\"rsImg rsMainSlideImage\" src=\"http://analyticsdemystified.com/wp-content/uploads/2015/02/header-image-services-training-700x400.jpg\" alt=\"feature-image-1\" style=\"width:100%; height: 620px; margin-left: 0px; margin-top: -192px;\"></a>\n \n \n <div class=\"rsSBlock ui-draggable-handle\" style=\"width: auto; height: 600px; left: 40px; top: 317px;\"><h1><strong>Los Angeles! Analytics Demystified is joining Adobe on the Adobe Insider Tour</strong></h1>\n<p style=\"text-align:left;\"><br><br>Thursday, June 21st – iPic Westwood in Los Angeles, CA. </p>\n</div>\n</div>");
$(".rsContainer > div:eq(0) > div:eq(0) > div:eq(0) > p:eq(0)").css({"color":"#000000"});
$(".rsContainer > div:eq(0) > div:eq(0) > div:eq(0) > h1:eq(0)").css({"color":"#000000"});
$(".rsNav").css({"display":"none", "visibility":""});
$(".rsArrowLeft > .rsArrowIcn").css({"display":"none", "visibility":""});
$(".rsArrowRight > .rsArrowIcn").css({"display":"none", "visibility":""});
$("#login-trigger > img").removeAttr("src").removeAttr("srcdoc");
$("#login-trigger > img").css({"display":"none", "visibility":""});
$(".rsSBlock > h1").append("<div id=\"hawk_cta\">…</div>");
// this next line removes my flicker suppression that I put in place at the top of this code
jQuery('#flickersuppression').remove();
})
// one of the coolest parts of at.js: making click tracking a lot easier!!!
$('#slider').click(function(event){
adobe.target.trackEvent({'mbox':'hero_click'})
});
</script>

Success Events

The success event for this test is clicking on the hero CTA, which brings you to the Adobe page to register to join the Insider event.  This CTA click was tracked via a very cool function that you all will grow to love as you adopt at.js.

$('#slider').click(function(event){
adobe.target.trackEvent({'mbox':'hero_click'})
});

To use this, you need to be using at.js and then update the two highlighted values above.  The first is the CSS selector (#slider), which you can get in any browser by right-clicking the element, clicking Inspect, and then right-clicking the highlighted HTML to copy the selector.  The second is the name of the mbox (hero_click) that will be called when the area gets clicked on.  In the test setup, that looks like this:

Segments

Segment adoption within Target varies quite a bit, it seems.  I personally find it a crucial component and recommend that organizations standardize a set of key segments for their business and include them with every test.  With Analytics, much time and effort are put in place to classify sources (utm parameters), behaviors, key devices, etc… so the same effort should be applied here.  If you use A4T or integrate with Analytics in other ways, this will help with these efforts for many of your tests.  For this test, I can’t use Analytics because the success event is a temporary CTA that was put in place for this test and I have no Analytics tracking in place to report on it, so the success event lives in Target.

The main segments that are important here are for my Control group.  If you recall, I am consolidating all five cities into Experience A.  To see how any of these cities do in this Experience, I have to define them as a segment when they qualify for the activity.  Target makes this a bit easier now vs. the Classic days, as we can repurpose the Audiences that we used in the Experience Targeting.

Also cool now is the ability to add more than one segment at a time!  Classic had this many years back but the feature was taken away.  Having it now leaves organizations with no excuses for not using key segments in your tests!

An important note: you can apply segments on any and all Adobe Target success events used in the test.  For example, if I wanted to segment out visitors that spent over $200 on a revenue success event (or any event other than test entry), I can do that in the “Applied At” dropdown.  Lots of very cool use cases here, but for what I need here, I am going to select “Campaign Entry” (although Adobe should change this to Activity entry:) and I will see how all the visitors from each of these cities did for my Control.

Geo-Targeting

To wrap things up here, I am going to share this last little nugget of gold.  Adobe Target allows users to pass an IP address to a special URL parameter, and Adobe Target will return the Geo Attributes (City, State, DMA, Country, and Zip) for that IP address.  Very helpful when debugging.  You can see what it would look like below, but clicking on this link will do you no good.  Sadly, there is a bug with some versions of WordPress that changes the “.” in the URL to an underscore.  That breaks it, but this only applies to our site and some other installs of WordPress.

https://analyticsdemystified.com/?mboxOverride.browserIp=161.185.160.93

Happy Testing and hopefully see you at one of the Insider events coming up!

 

Adobe Analytics, Featured

100% Stacked Bar Chart in Analysis Workspace

As is often the case with Analysis Workspace (in Adobe Analytics), you stumble upon new features accidentally. Hopefully, by now you have learned the rule of “when in doubt, right-click” when using Analysis Workspace, but for other new features, I recommend reading Adobe’s release notes and subscribing to the Adobe Analytics YouTube Channel. Recently, the ability to use 100% stacked bar charts was added to Analysis Workspace, so I thought I’d give it a spin.

Normal vs. 100% Stacked Bar Charts

Normally, when you use a stacked bar chart, you are comparing raw numbers. For example, here is a sample stacked bar chart that looks at Blog Post Views by Author:

This type of chart allows you to see overall trends in performance over time. In some respects, you can also get a sense of which elements are going up and down over time, but since the data goes up and down each week, it can be tricky to be exact in the percentage changes.

For this reason, Adobe has added a 100% stacked bar visualization. This visualization stretches the elements in your chart to 100% and shifts the graph from raw numbers to percentages (of the items being graphed, not all items necessarily). This allows you to more accurately gauge how each element is changing over time.

To enable this, simply click the gear icon of the visualization and check the 100% stacked box:

Once this is done, your chart will look like this:

In addition, if you hover over one of the elements, it will show you the actual percentage:

The 100% stacked setting can be used in any trended stacked bar visualization. For example, here is a super basic example that shows the breakdown of Blog Post Views by mobile operating system:

For more information on using the 100% stacked bar visualization, here is an Adobe video on this topic: https://www.youtube.com/watch?v=_6hzCR1SCxk&t=1s

Adobe Analytics, Featured

Finding Adobe Analytics Components via Tags

When I am working on a project to audit someone’s Adobe Analytics implementation, one of the things I often notice is a lack of organization that surrounds the implementation. When you use Adobe Analytics, there are a lot of “components” that you can customize for your implementation. These components include Segments, Calculated Metrics, Reports, Dashboards, etc. I have some clients that have hundreds of Segments or Calculated Metrics, to the point that finding the one you are looking for can be like searching for a needle in a haystack! Over time, it is so easy to keep creating more and more Adobe Analytics components instead of re-using the ones that already exist. When new, duplicative components are created, things can get very chaotic because:

  • Different users could use different components in reports/dashboards
  • Fixes made to a component may only be applied in some places if there are duplicative components floating out there
  • Multiple components with the same name or definition can confuse novice users

For these reasons, I am a big fan of keeping your Adobe Analytics components under control, which takes some work, but pays dividends in the long run.  A few years ago, I wrote a post about how you can use a “Corporate Login” to help manage key Adobe Analytics components. I still endorse that concept, but today, I will share another technique I have started using to organize components in case you find it helpful.

Searching For Components Doesn’t Work

One reason that components proliferate is that finding the components you are looking for is not foolproof in Adobe Analytics. For example, let’s say that I just implemented some code to track Net Promoter Score in Adobe Analytics. Now, I want to create a Net Promoter Score Calculated Metric so I can trend NPS by day, week or month. To do this, I might go to the Calculated Metrics component screen where I would see all of the Calculated Metrics that exist:

If I have a lot of Calculated Metrics, it could take me a long time to see if this exists, so I might search for the Calculated Metric I want like this:

 

Unfortunately, my search came up empty, so I would likely go ahead and create a new Net Promoter Score Calculated Metric. What I didn’t know is that one already exists, it was just named “NPS Score” instead of “Net Promoter Score.” And since people are not generally good about using standard naming conventions, this scenario can happen often. So how do we fix this? How do we avoid the creation of duplicative components?

Search By Variable

To solve this problem, I have a few ideas. In general, the way I think about components like Calculated Metrics or Segments is that they are made up of other Adobe Analytics elements, specifically variables. Therefore, if I want to see if a Net Promoter Score Calculated Metric already exists, a good place to start would be to look for all Calculated Metrics that use one of the variables that is used to track Net Promoter Score in my implementation. In this case, success event #20 (called NPS Submissions [e20]) is set when any Net Promoter Score survey occurs. Therefore, if I could filter all Calculated Metrics to see only those that utilize success event #20, I would be able to find all Calculated Metrics that relate to Net Promoter Score. Unfortunately, Adobe Analytics only allows you to filter by the following items:

It would be great if Adobe had a way that you could filter on variables (Success Events, eVars, sProps), but that doesn’t exist today. The next best thing would be the ability to have Adobe Analytics find Calculated Metrics (or other components) by variable when you type the variable name in the search box. For example, it would be great if I could enter this in the search box:

But, alas, this doesn’t work either (though it could one day if you vote for my idea in the Adobe Idea Exchange!).

Tagging to the Rescue!

Since there is no good way today to search for components by variable, I have created a workaround that you can use leveraging the tagging feature of Adobe Analytics. What I have started doing is adding a tag for every variable that is used in a Calculated Metric (or Segment). For example, if I am creating a “Net Promoter Score” Calculated Metric that uses success event #20 and success event #21, in addition to any other tags I might want to use, I can tag the Calculated Metric with these variable names as shown here:

Once I do this, I will begin to see variable names appear in the tag list like this:

Next, if I am looking for a specific Calculated Metric, I can simply check one of the variables that I know would be part of the formula…

…and Adobe Analytics will filter the entire list of Calculated Metrics to only show me those that have that variable tag:

This is what I wish Adobe Analytics would do out-of-the-box, but using the tagging feature, you can take matters into your own hands. The only downside is that you need to go through all of your existing components and add these tags, but I would argue that you should be doing that anyway as part of a general clean-up effort and then simply ask people to do this for all new components thereafter.

The same concept can be applied to other Adobe Analytics components that use variables and allow tags. For example, here is a Segment that I have created and tagged based upon variables it contains:

This allows me to filter Segments in the same way:

Therefore, if you want to keep your Adobe Analytics implementation components organized and make them easy for your end-users to find, you can try out this work-around using component tags and maybe even vote for my idea to make this something that isn’t needed in the future. Thanks!

Featured, Testing and Optimization

Adobe Personalization Insider

To my fellow optimizers in or near Atlanta, Los Angeles, Chicago, New York, and Dallas:

I am very excited to share that I am heading your way and hope to see you.  I have the privilege of joining Adobe this year for the Adobe Insider Tour which is now much bigger than ever and has a lot of great stuff for optimizers like you and me.   

If you haven’t heard of it, the Adobe Insider Tour is a free half-day event that Adobe puts together so attendees can network and collaborate with their industry peers.  And it’s an opportunity for all participating experts to keep it real through interactive breakout sessions, some even workshop-style.  Adobe will share some recent product innovations and even some sneaks to what’s coming next.

The Insider Tour has three tracks: the Analytics Insider, the Personalization Insider, and, for New York, there will also be an Audience Manager Insider.  If you leverage Adobe to support your testing and personalization efforts, your analysis, or your management of audiences, the interactive breakouts will be perfect for you.  My colleague Adam Greco will be there as well for the Analytics Insider.

Personalization Insider

I am going to be part of the Personalization Insider, as I am all about testing, and if you are part of a testing team or want to learn more about testing, the breakout sessions and workshop will be perfect for you.

In true optimization form, get ready to discuss, ideate, hypothesize and share best practices around the following:

  • Automation and machine learning
  • Optimization/Personalization beyond the browser (apps, connected cars, kiosks, etc.)
  • Program ramp and maturity
  • Experience optimization in practice

Experience Business Excellence Awards

There is also something really cool and new this year that is part of the Insider Tour.  Adobe is bringing the Experience Business Excellence (EXBE) Awards to each city.  The EXBE Awards Program was a huge hit at the Adobe Summit, as it allows organizations to submit their experiences of using Adobe Target that kicked some serious butt and compete for awards and a free pass to Summit.  I was part of this last year at Summit, where two of my clients won with some awesome examples of using testing to add value to their business and digital consumers.  If you have any interesting use cases or inspirational tests, you should submit them for consideration.

Learn More and Register

If you come early to the event, there will be a “GENIUS BAR” where you can geek out with experts on any questions you might have.  Please come at me with any challenges you might have with test scaling, execution or anything for that matter.  I will be giving a free copy of my book on Adobe Target to whoever brings me the most interesting use case during “GENIUS BAR” hours.

I really hope to see you there – the events are also being held at some cool venues.

Here are the dates for each city:  

  • Atlanta, GA – June 1st
  • Los Angeles, CA – June 21st
  • Chicago, IL – September 11th
  • New York, NY – September 13th
  • Dallas, TX – September 27th

Click the button below to formally register (required)

(I did something nerdy and fun with this CTA – if anyone figures out exactly what I did here or what it is called, add a comment and let me know:)

Adobe Analytics, Featured

Adobe Insider Tour!

I am excited to announce that my partner Brian Hawkins and I will be joining the Adobe Insider Tour that is hitting several US cities over the next few months! These 100% free events held by Adobe are great opportunities to learn more about Adobe’s Marketing Cloud products (Adobe Analytics, Adobe Target, Adobe Audience Manager). The half-day sessions will provide product-specific tips & tricks, show future product features being worked on and provide practical education on how to maximize your use of Adobe products.

The Adobe Insider Tour will be held in the following cities and locations:

Atlanta – Friday, June 1
Fox Theatre
660 Peachtree St NE
Atlanta, GA 30308

Los Angeles – Thursday, June 21
iPic Westwood
10840 Wilshire Blvd
Los Angeles, CA 90024

Chicago – Tuesday, September 11
Davis Theater
4614 N Lincoln Ave
Chicago, IL 60625

New York – Thursday, September 13
iPic Theaters at Fulton Market
11 Fulton St
New York, NY 10038

Dallas – Thursday, September 27
Alamo Drafthouse
1005 S Lamar St
Dallas, TX 75215

Adobe Analytics Implementation Improv

As many of my blog readers know, I pride myself on pushing Adobe Analytics to the limit! I love to look at websites and “riff” on what could be implemented to increase analytics capabilities. On the Adobe Insider Tour, I am going to try and take this to the next level with what we are calling Adobe Analytics Implementation Improv. At the beginning of the day, we will pick a few companies in the audience, and I will review their sites and share some cool, advanced things that I think they should implement in Adobe Analytics. These suggestions will be based upon the hundreds of Adobe Analytics implementations I have done in the past, but this time it will be done live, with no preparation and no rehearsal! But in the process, you will get to see how you can quickly add some real-world, practical new things to your implementation when you get back to the office!

Adobe Analytics “Ask Me Anything” Session

After the “Improv” session, I will have an “Ask Me Anything” session to do my best and answer any questions you may have related to Adobe Analytics. This is your chance to get some free consulting and pick my brain about any Adobe Analytics topic. I will also be available prior to the event at Adobe’s “Genius Bar” providing 1:1 help.

Adobe Analytics Idol

As many of you may know, for the past few years, Adobe has hosted an Adobe Analytics Idol contest. This is an opportunity for you to share something cool that you are doing with Adobe Analytics or some cool tip or trick that has helped you. Over the years this has become very popular, and now Adobe is even offering a free pass to the next Adobe Summit for the winner! So if you want to be a candidate for the Adobe Analytics Idol, you can now submit your name and tip and present at your local event. If you are a bit hesitant to submit a tip, this year Adobe is adding a cool new aspect to the Adobe Analytics Idol. If you have a general idea but need some help, you can reach out by email, and either I or one of the amazing Adobe Analytics product managers will help you formulate your idea and bring it to fruition. So even if you are a bit nervous to be an “Idol,” you can get help and increase your chances of winning!

There will also be time at these events for more questions and casual networking, so I encourage you to register now and hope to see you at one of these events!

Adobe Analytics, Featured

Elsevier Case Study

I have been in consulting for a large portion of my professional life, starting right out of school at Arthur Andersen (back when it existed!). Therefore, I have been part of countless consulting engagements over the past twenty-five years. During this time, there are a few projects that stand out. Those that seemed daunting at first, but in the end turned out to make a real difference. Those large, super-difficult projects are the ones that tend to stick with you.

A few years ago, I came across one of these large projects at a company called Elsevier. Elsevier is a massive organization, with thousands of employees and key locations all across Europe and North America. But what differentiates Elsevier the most is how disparate a lot of their business units can be – from geology to chemistry, etc. When I stumbled upon Elsevier, they were struggling to figure out how to have a unified approach to implementing Adobe Analytics worldwide in a way that helped them see some key top-line metrics, while at the same time offering each business unit its own flexibility where needed. This is something I see a lot of large organizations struggle with when it comes to Adobe Analytics. Since over my career I have worked with some of the largest Adobe Analytics implementations in the world, I was excited to apply what I have learned to tackle this super-complex project. I am also fortunate to have Josh West, one of the best Adobe Analytics implementation folks in the world, as my partner, and he was able to work with me and Elsevier to turn our vision into a reality.

While the project took some time and had many bumps along the way, Elsevier heeded our advice and ended up with an Adobe Analytics program that transformed their business. They provided tremendous support from the top (thanks to Darren Person!) and Adobe Analytics became a huge success for the organization.  To learn more about this, I suggest you check out this case study here.

In addition, if you want to hear Darren and me talk about the project while we were still in the midst of it, you can see a presentation we did at the 2016 Adobe Summit (free registration required) by clicking here.

Adobe Analytics, Featured

DB Vista – Bringing the Sexy Back!

OK. It may be a bit of a stretch to say that DB Vista is sexy. But I continue to discover that very few Adobe Analytics clients have used DB Vista or even know what it is. As I wrote in my old blog back in 2008 (minus the images, which Adobe seems to have lost!), DB Vista is a method of setting Adobe Analytics variables using a rule that does a database lookup on a table that you upload (via FTP) to Adobe. In my original blog post, I mentioned how you can use DB Vista to import the cost of each product to a currency success event, so you can combine it with revenue to calculate product margin. This is done by uploading your product information (including cost) to the DB Vista table and having a DB Vista rule look up the value passed to the Products variable and match it to the column in the table that stores the current product cost.  As long as you are diligent about keeping your product cost table updated, DB Vista will do the rest.  The reason I wanted to bring the topic of DB Vista back is that it has come up more and more over the past few weeks. In this post, I will share where it has come up and a few reasons why I keep talking about it.

Adobe Summit Presentation

A few weeks ago, while presenting at Adobe Summit, I showed an example where a company was [incorrectly] using SAINT Classifications to classify product IDs with the product cost like this:

As I described in this post, SAINT Classifications are not ideal for something like Product Cost because the cost of each product will change over time, and updating the SAINT file is a retroactive change that will make it look like each product ALWAYS had the most recently uploaded cost. In the past, this could be mitigated by using date-enabled SAINT Classifications, but those have recently been removed from the product, presumably because they weren’t used very often and were overly complex.

However, if you want to capture the cost of each product, as mentioned above, you could use DB Vista to pass the cost to a currency success event and/or capture the cost in an eVar. Unlike SAINT, using DB Vista to get the cost means that the data is locked in at the time it is collected. All that is needed is a mechanism to keep your product cost data updated in the DB Vista table.
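
To make the “keep the table updated” part concrete, here is a minimal sketch of what a nightly refresh job could look like in Python. It assumes a hypothetical product_costs.csv export from your product database, a simple two-column tab-delimited lookup file, and placeholder FTP credentials; the actual file layout and FTP location come from Adobe Engineering Services when your DB Vista rule is set up, so treat this purely as an illustration.

    import csv
    from ftplib import FTP

    SOURCE_FILE = "product_costs.csv"          # hypothetical export with rows of: product_id,cost
    LOOKUP_FILE = "db_vista_product_cost.tab"  # placeholder name; Adobe defines the real one

    def build_lookup_file():
        """Convert the product catalog export into a tab-delimited lookup table."""
        with open(SOURCE_FILE, newline="") as src, open(LOOKUP_FILE, "w", newline="") as out:
            reader = csv.reader(src)
            writer = csv.writer(out, delimiter="\t")
            next(reader, None)  # skip the header row if there is one
            for product_id, cost in reader:
                writer.writerow([product_id, cost])

    def upload_lookup_file():
        """Push the refreshed lookup table to the FTP location Adobe provides."""
        with FTP("ftp.example.com") as ftp:  # placeholder host and credentials
            ftp.login(user="your_user", passwd="your_password")
            with open(LOOKUP_FILE, "rb") as fh:
                ftp.storbinary("STOR " + LOOKUP_FILE, fh)

    if __name__ == "__main__":
        build_lookup_file()
        upload_lookup_file()

Scheduled daily (via cron or any job scheduler), something along these lines is usually enough to keep the cost table in sync with your product catalog.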

Measure Slack

Another case where DB Vista arose recently was in the #Measure Slack group. There was a discussion around using classifications to group products, but the product group was not available in real time to be passed to an eVar, and the grouping could change over time.

The challenge in this situation is that SAINT classifications would not be able to keep all of this straight without the use of date-enabled classifications. This is another situation where DB Vista can save the day, as long as you are able to keep the product table updated as products move between groups. In this case, all you’d need to do is upload the product group to the DB Vista table and use the DB Vista rule to grab the value and pass it to an eVar whenever the Products variable is set.

Idea Exchange

There are countless other things that you can do with DB Vista. So why don’t people use it more? I think it has to do with the following reasons:

  • Most people don’t understand the inner workings of DB Vista (hint: come to my upcoming  “Top Gun” Training Class!)
  • DB Vista has an additional cost (though it is pretty nominal)
  • DB Vista isn’t something you can do on your own – you need to engage with Adobe Engineering Services

Therefore, I wish that Adobe would consider making DB Vista something that administrators could do on their own through the Admin Console and Processing Rules (or via Launch!). Recently, Data Feeds was made self-service, and I think it has been a huge success! More people than ever are using Data Feeds, which used to cost money and require going through Adobe Engineering Services. I think the same would be true for DB Vista. If you agree, please vote for my idea here. Together, we can make DB Vista the sexy feature it deserves to be!

Adobe Analytics, Analytics Strategy, Digital Analytics Community, Industry Analysis

Analytics Demystified Case Study with Elsevier

For ten years at Analytics Demystified we have more or less done marketing the same way: by simply being the best at the work we do and letting people come to us.  That strategy has always worked for us, and to this day  continues to bring us incredible clients and opportunities around the world.  Still, when our client at Elsevier said he would like to do a case study … who were we to say no?

Elsevier, in case you haven’t heard of them, is a multi-billion-dollar multinational that has transformed from a traditional publishing company into a modern-day global information analytics business. They are essentially hundreds of products and companies within a larger organization, and each needs high-quality analytics to help shape business decision making.

After searching for help and discovering that many companies say they provide “Adobe consulting services” … without actually having any real-world experience with the type of global challenges facing Elsevier, the company’s Senior Vice President of Shared Platforms and Capabilities found our own Adam Greco. Adam was exactly what they needed … and I will let the case study tell the rest of the story.

Free PDF download: The Demystified Advantage: How Analytics Demystified Helped Elsevier Build a World Class Analytics Organization

Adobe Analytics, Featured

Virtual Report Suites and Data Sources

Lately, I have been seeing more and more Adobe Analytics clients moving to Virtual Report Suites. Virtual Report Suites are data sets created from a base Adobe Analytics report suite that differ from the original by limiting data with a segment or by making other changes, such as a different visit length. Virtual Report Suites are handy because they are free, whereas sending data to multiple report suites in Adobe Analytics costs more due to increased server calls. The Virtual Report Suite feature of Adobe Analytics has matured since I originally wrote about it back in 2016. If you are not using them, you probably should be by now.

However, when some of my clients have used Virtual Report Suites, I have noticed that there are some data elements that tend not to transition from the main report suite to the Virtual Report Suite. One of those items is data imported via Data Sources. In last week’s post, I shared an example of how you can import external metrics into your Adobe Analytics implementation via Data Sources, but there are many data points that can be imported, including metrics from 3rd-party apps. One of the more common types of 3rd-party apps that my clients integrate into Adobe Analytics is e-mail applications. For example, if your organization uses Responsys to send and report on e-mails sent to customers, you may want to use the established Data Connector that allows you to import your e-mail metrics into Adobe Analytics, such as:

  • Email Total Bounces
  • Email Sent
  • Email Delivered
  • Email Clicked
  • Email Opened
  • Email Unsubscribed

Once you import these metrics into Adobe Analytics, you can see them like any other metrics…

…and combine them with other metrics:

In this case, I am viewing the offline e-mail metrics alongside the online metric of Orders and have also created a new Calculated Metric that combines both offline and online metrics (last column). So far so good!

But watch what happens if I now view the same report in a “UK Only” Virtual Report Suite that is based off of this main report suite:

Uh oh…I just lost all of my data! I see this happen all of the time and usually my clients don’t even realize that they have told their internal users to use a Virtual Report Suite that is missing all Data Source metrics.

So why is the data missing? In this case the Virtual Report Suite is based upon a geographic region segment:

This means that any hits with an eVar16 value of “UK” will make it into the Virtual Report Suite. Since all online data has an eVar16 value, it is successfully carried over to the Virtual Report Suite. However, when the Data Sources metrics were imported (in this case Responsys E-mail Metrics), they did not have an eVar16 value, so they are not included. That is why these metrics zeroed out when I ran the report for the Virtual Report Suite. In the next section, I will explain how to fix this so you can make sure all of your Data Sources metrics are included in the Virtual Report Suite.

Long-Term Approach (Data Sources File)

The best long-term way to fix this problem is to change your Data Sources import files to make sure that you add data that will match your Virtual Report Suite segment. In this case, that means making sure each row of data imported has an eVar16 value. If you add a column for eVar16 to the import, any rows that contain “UK” will be included in the Virtual Report Suite. For this e-mail data, it means that your e-mail team would have to know which region each e-mail is associated with, but that shouldn’t be a problem. Unfortunately, it does require a change to your daily import process, but this is the cleanest way to make sure your Data Sources data flows correctly to your Virtual Report Suite.
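
Purely as an illustration (the exact column headers come from the template that Data Sources generates for you, and Event 10/Event 11 stand in for whichever numbered events your e-mail metrics map to), an import file with the region column added might look something like this:

    Date        Evar 16    Event 10    Event 11
    03/01/2018  UK         125000      98000
    03/01/2018  US         310000      264000

With the eVar16 column populated, the “UK” rows now satisfy the Virtual Report Suite segment and flow through just like your online data.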

Short-Term Approach (Segmentation)

If, however, making a change to your daily import process isn’t something that can happen soon (such as data being imported from an internal database that takes time to change), there is an easy workaround that will allow you to get Data Sources data immediately. This approach is also useful if you want to retroactively include Data Sources metrics that were imported before you made the preceding fix.

This short-term solution involves modifying the Segment used to pull data into the Virtual Report Suite. By adding additional criteria to your Segment definition, you can manually select which data appears in the Virtual Report Suite. In this case, the Responsys e-mail metrics don’t have an eVar16 value, but you can add them to the Virtual Report Suite by finding another creative way to include them in the segment. For example, you can add an OR statement that includes hits where the various Responsys metrics exist like this:

Once you save this new segment, your Virtual Report Suite will include all of the data it had before as well as the Responsys data, so the report will now look like this:

Summary

So this post is just a reminder to check that all of your imported Data Sources metrics have made it into your shiny new Virtual Report Suites and, if they haven’t, to show how you can get them to show up there. I highly suggest you fix the issue at the source (the Data Sources import file), but the segmentation approach will also work and helps you see data retroactively.

Adobe Analytics, Featured

Dimension Penetration %

Last week, I explained how the Approximate Count Distinct function in Adobe Analytics can be used to see how many distinct dimension values occur within a specified timeframe. In that post, I showed how you could see how many different products or campaign codes are viewed without having to count up rows manually, and how the function provided by Adobe can then be used in other Calculated Metrics. As a follow-on to that post, I am going to share a concept that I call “dimension penetration %.” The idea of dimension penetration % is that there may be times when you want to see what % of all possible dimension values are viewed or have some other action taken. For example, you may want to see what % of all products available on your website were added to the shopping cart this month. The goal here is to identify the maximum number of dimension values (for a time period) and compare that to the number of dimension values that were acted upon (in the same time period). Here are just some of the business questions that you might want to answer with the concept of dimension penetration %:

  • What % of available products are being viewed, added to cart, etc…?
  • What % of available documents are being downloaded?
  • What % of BOPIS products are picked up in store?
  • What % of all campaign codes are being clicked?
  • What % of all content items are viewed?
  • What % of available videos are viewed?
  • What % of all blog posts are viewed?

As you can see, there are many possibilities, depending upon the goals of your digital property. However, Adobe Analytics (and other digital analytics tools) only captures data for items that get “hits” in the date range you select. It is not clairvoyant and cannot figure out the total number of available items. For example, if you wanted to see what % of all campaign tracking codes had at least one click this month, Adobe Analytics can show you how many had at least one click, but it has no way of determining what the denominator should be, which is the total number of campaign codes you have purchased. If there are 1,000 campaign codes that never receive a click in the selected timeframe, as far as Adobe Analytics is concerned, they don’t exist. However, the following sections will share some ways that you can rectify this problem and calculate the penetration % for any Adobe Analytics dimension.

Calculating Dimension Penetration %

To calculate the dimension penetration %, you need to use the following formula:
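
Spelled out in plain text, it is simply:

    Dimension Penetration % = (# of distinct dimension values acted upon in the timeframe) / (total # of dimension values available in that same timeframe)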

For example, if you wanted to see what % of all blog posts available have had at least one view this month, you would calculate this by dividing the unique count of viewed blog posts by the total number of blog posts that could have been viewed. To illustrate this, let’s go through a real scenario. Based upon what was learned in the preceding post, you now know that it is easy to determine the numerator (how many unique blog posts were viewed) as long as you are capturing the blog post title or ID in an Adobe Analytics dimension (eVar or sProp). This can be done using the Approximate Count Distinct function like this:
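
In text form, the definition is just the function wrapped around whichever dimension stores the blog post name (the dimension name here is only an example):

    Viewed Blog Posts = Approximate Count Distinct (Blog Post Title)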

Once this new Calculated Metric has been created, you can see how many distinct blog posts are viewed each day, week, month, etc…

So far, so good! You now have the numerator of the dimension penetration % formula completed.  Unfortunately, that was the easy part!

Next, you have to figure out a way to get the denominator. This is a bit more difficult, and I will share a few different ways to achieve it. Unfortunately, finding out how many dimension values exist (in this scenario, the total # of available blog posts) is a manual effort. Whether you are trying to identify the total number of blog posts, videos, campaign codes, etc., you will probably have to work with someone at your company to figure out that number. Once you find that number, there are two ways that you can use it to calculate your dimension penetration %.

Adobe ReportBuilder Method

The first approach is to add the daily total count of the dimension you care about to an Excel spreadsheet and then use Adobe ReportBuilder to import the Approximate Count Distinct Calculated Metric created above by date. By importing the Approximate Count Distinct metric by date and lining it up with your total numbers by date, you can easily divide the two and compute the dimension penetration % as shown here:

In this case, the items with a green background were entered manually and mixed with an Adobe Analytics data block. Formulas were then added to compute the percentages.

However, you have to be careful not to SUM the daily Approximate Count numbers, since the sum will be different from the Approximate Count of the entire month. To see an accurate count of unique blog posts viewed in the month of April, for example, you would need to create a separate data block like this:

Data Sources Method

The downside of the Adobe ReportBuilder method is that you have to leave Adobe Analytics proper and cannot take advantage of its web-based features like Dashboards, Analysis Workspace, Alerts, etc. Plus, it is more difficult to share the data with your other users. If you want to keep your users within the Adobe Analytics interface, you can use Data Sources. Shockingly, Data Sources has not changed that much since I blogged about it back in 2009! Data Sources is a mechanism for importing metrics that don’t take place online into Adobe Analytics. It can be used to upload any number you want as long as you can tie that number to a date. In this case, you can use Data Sources to import the total number of dimension items that exist on each day.

To do this, you need to use the administration console to create a new Data Source. There is a wizard that walks you through the steps needed, which include creating a new numeric success event that will store your data. The wizard won’t let you complete the process unless you add at least one eVar, but you can remove that from the template later, so just pick any one if you don’t plan to upload numbers with eVar values. In this case, I used Blog Post Author (eVar3) in case I wanted to break out Total Blog Posts by Author. Here is what the wizard should look like when you are done:

Once this is complete, you can download your template and create an FTP folder to which you will upload files. Next, you will create your upload file that has date and the total number of blog posts for each date. Again, you will be responsible for identifying these numbers. Here is what a sample upload file might look like using the template provided by Adobe Analytics:
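
As a rough illustration (your downloaded template dictates the exact header names for the event and eVar columns), the upload file is just a tab-delimited list of dates and totals, with the eVar column left blank unless you want to break the number out:

    Date        Evar 3    Event 8
    03/01/2018            1150
    03/02/2018            1151
    03/03/2018            1153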

Next, you upload your data via FTP (you can read how to do this by clicking here). A few important things to note: you cannot upload more than 90 days of data at one time, so you may have to upload your historical numbers in batches. You also cannot upload data for dates in the future, so my suggestion would be to upload all of your historical data and then upload one row of data (yesterday’s count) each day in an automated FTP process. When your data has successfully imported, you will see the numbers appear in Adobe Analytics just like any other metrics (see below). This new Count of Blog Posts metric can also be used in Analysis Workspace.
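
Here is a minimal sketch of what that automated daily upload could look like in Python. It assumes a hypothetical get_total_blog_posts() lookup against your own CMS or database, a tab-delimited layout with Date, Evar 3 and Event 8 columns, and placeholder FTP credentials; the .fin trigger file and exact column headers should be confirmed against the template and instructions that came with your Data Source.

    from datetime import date, timedelta
    from ftplib import FTP
    from io import BytesIO

    def get_total_blog_posts(for_date):
        """Placeholder: return the total number of published blog posts
        as of the given date, from your own CMS or database."""
        raise NotImplementedError

    def upload_yesterdays_count():
        yesterday = date.today() - timedelta(days=1)
        total = get_total_blog_posts(yesterday)

        # One row in a tab-delimited layout (header names are illustrative;
        # use the ones in your downloaded Data Sources template).
        body = "Date\tEvar 3\tEvent 8\n{}\t\t{}\n".format(
            yesterday.strftime("%m/%d/%Y"), total)
        filename = "blog_post_count_{}.txt".format(yesterday.strftime("%Y%m%d"))

        with FTP("ftp.example.com") as ftp:  # placeholder host and credentials
            ftp.login(user="your_user", passwd="your_password")
            ftp.storbinary("STOR " + filename, BytesIO(body.encode("utf-8")))
            # An empty .fin file with the same base name tells Data Sources
            # that the upload is complete and ready to process.
            ftp.storbinary("STOR " + filename.replace(".txt", ".fin"), BytesIO(b""))

    if __name__ == "__main__":
        upload_yesterdays_count()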

Now that you have the Count of Blog Posts that have been viewed for each day and the count of Total Blog Posts available for each day, you can [finally] create a Calculated Metric that divides these two metrics to see your daily penetration %:
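
Roughly, the definition is just one metric divided by the other:

    Daily Blog Post Penetration % = Approximate Count Distinct (Blog Post) / Count of Blog Posts [e8]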

This will produce a report that looks like this:

However, this report will not work if you change it to view the data by something other than day, since the Count of Blog Posts [e8] metric is not meant to be summed (as mentioned in the ReportBuilder method). If you do change it to report by week, you will see this:

This is obviously incorrect. The first column is correct, but the second column is drastically overstating the number of available blog posts! This is something you have to be mindful of in this type of analysis. If you want to see dimension penetration % by week or month, you will have to do some additional work. Let’s look at how you can view this data by week (special thanks to Urs Boller, who helped me with this workaround!). One method is to identify how many dimension items existed yesterday and use that as the denominator. Unfortunately, this can be problematic if you are looking at a long timeframe and many additional items have been added. But if you want to use this approach, you can create this new Calculated Metric to see yesterday’s # of blog posts:

Which produces this report:

As you can see, this approach treats yesterday’s total number as the denominator for all weeks, but if you look above, you will see that the first week only had 1,155 posts, not 1,162. You could make this more precise by adding an IF statement to the Calculated Metric and using a weekly number or, if you are crazy, adding 31 IF statements and grabbing the exact number for each date.

The other approach you can take is to simply divide the incorrect summed Count of Blog Posts [e8] metric by 7 for week and 30 for month. This will give you an average number of blog posts that existed and will look like this:
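
In calculated-metric terms, the weekly denominator becomes roughly:

    Avg. Count of Blog Posts (weekly) = Count of Blog Posts [e8] / 7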

This approach produces penetration % numbers that are pretty similar to the previous approach and will work best if you use full weeks or full months (in this case, I started with the first full week in January).

Automated Method (Advanced)

If you decide that finding out the total # of items for each dimension is too complicated (or if you are just too busy or lazy to find it!), here is an automated approach to finding this information. However, this approach will not be 100% accurate and can only be used for dimension items that will be persistent on your site from the day they are added. For example, you cannot use the following approach to identify the total # of campaign codes, since they come and go regularly. But you can use the following approach to estimate the total # of values for items that, once added, will probably remain, like files, content items or blog posts (as in this example).

Here is the approach. Step one is to create a date range that spans all of your analytics data like this:

You will also want to create another Date Range for the time period you want to see for recent activity. In this case, I created one for the Current Month To Date.

Next, create Segments for both of these Date Ranges (All Dates & Current Month to Date):

Next, create a new Calculated Metric that divides the Current Month Approximate Count Distinct of Blog Posts by the All Dates Approximate Count Distinct of Blog Posts:
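
Written out, this is the same Approximate Count Distinct function applied twice, once inside each of the two segments:

    Blog Post Penetration % = Approximate Count Distinct (Blog Post) within “Current Month to Date” / Approximate Count Distinct (Blog Post) within “All Dates”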

Lastly, create a report like this in Analysis Workspace:

By doing this, you are letting Adobe Analytics tell you how many dimension items you have (# of total blog posts in this case) by seeing the Approximate Count Distinct over all of your dates. The theory is that over a large timeframe all (or most) of your dimension items will be viewed at least once. In this case, Adobe Analytics has found 1,216 blog posts that have received at least one view since 1/1/16. As I stated earlier, this may not be exact, since there may be dimension items that are never viewed, but this approach allows you to calculate dimension penetration % in a semi-automated manner.

Lastly, if you wanted to adjust this to look at a different time period, you would drag a different date range container onto the first column and then make another copy of the 3rd column that uses the same date range, as shown in the bottom table:

Adobe Analytics, Featured

Approximate Count Distinct Function – Part 1

In Adobe Analytics, there are many advanced functions that can be used in Calculated Metrics. Most of the clients I work with have only scratched the surface of what can be done with these advanced functions. In this post, I want to spend some time discussing the Approximate Count Distinct function in Adobe Analytics and in my next post, I will build upon this one to show some ways you can take this function to the next level!

There are many times when you want to know how many rows of data exist for an eVar or sProp (dimension) value. Here are a few common examples:

  • How many distinct pages were viewed this month?
  • How many of our products were viewed this month?
  • How many of our blog posts were viewed this month?
  • How many of our campaign tracking codes generated visits this month?

As you can see, the possibilities are boundless. But the overall gist is that you want to see a count of unique values for a specified timeframe. Unfortunately, there has traditionally not been a great way to see this in Adobe Analytics. I am ashamed to admit that my main way to see this has always been to open the dimension report, scroll down to the area that lets you go to page 2, 3, 4 of the results, enter 50,000 to go to the last page of results, see the bottom row number and write it down on a piece of paper! Not exactly what you’d expect from a world-class analytics tool! It is a bit easier if you use Analysis Workspace, since you can see the total number of rows here:

To address this, Adobe added the Approximate Count Distinct function, which allows you to pick a dimension and will calculate the number of unique values for the chosen timeframe. While the function isn’t exact, it is designed to be no more than 5% off, which is good enough for most analyses. To understand this function, let’s look at an example. Let’s imagine that you work for an online retailer and you sell a lot of products. Your team would like to know how many of these products are viewed at least once in the timeframe of your choosing. To do this, you would simply create a new Calculated Metric in which you drag over the Approximate Count Distinct function and then select the dimension (eVar or sProp) that you are interested in, which in this case is Products:

Once you save this Calculated Metric, it will be like all of your other metrics in Adobe Analytics. You can trend it and use it in combination with other metrics. Here is what it might look like in Analysis Workspace:

Here you can see the number of distinct products visitors viewed by day for the month of April. I have included a Visits column to add some perspective, as well as a new Calculated Metric that divides the distinct count of products by Visits, with conditional formatting to help visualize the data. Here is the formula for the third column:
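
Roughly speaking, it is:

    Distinct Products per Visit = Approximate Count Distinct (Products) / Visits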

The same process can be used with any dimension you are interested in within your implementation (i.e. blog posts, campaign codes, etc.)

Combining Distinct Counts With Other Dimensions

While the preceding information is useful, there is another way to use the Approximate Count Distinct function that I think is really exciting. Imagine that you are in a meeting and your boss asks you how many different products each of your marketing campaigns has gotten people to view. For example, does campaign X get people to view 20 products and campaign Y get people to view 50 products? For each visit from each campaign, how many products are viewed? Which of your campaigns gets people to view the most products? You get the gist…

To see this, what you really want to do is use the newly created Approximate Count of Products metric in your Tracking Code or other campaign reports. The good news is that you can do that in Adobe Analytics. All you need to do is open one of your campaign reports and add the Calculated Metric we created above to the report like this:

Here you can see that I am showing how many click-throughs and visits each campaign code received in the chosen timeframe. Next, I am showing the Approximate Count of Products for each campaign code and also dividing this by Visits. Just for fun, I also added how many Orders each campaign code generated and divided that by the Approximate Count of Products to see what portion of products viewed from each campaign code were purchased.

You can also view this data by any of your SAINT Classifications. In this case, if you have your campaign Tracking Codes classified by Campaign Name, you can create the same report for Campaign Name:

In this case, you can see that, for example, the VanityURL Campaign generated 19,727 Visits and 15,599 unique products viewed.

At this point, if you are like me, you are saying to yourself: “Does this really work? That seems to be impossible…” I was very suspicious myself, so if you don’t really believe that this function works (especially with classifications), here is a method that Jen Lasser from Adobe told me you can use to check things out:

  1. Open up the report of the dimension for which you are getting Approximate Distinct Counts (in this case Products)
  2. Create a segment that isolates visits for one of the rows (in the preceding example, let’s use Campaign Name = VanityURL)
  3. Add this new segment to the report you opened in step 1 (in this case Products) and use the Instances metric (which in this case is Product Views)
  4. Look at the number of rows in Analysis Workspace (as shown earlier in post) or use the report page links at the bottom to go to the last page of results and check the row number (if using old reports) as shown here:

Here you can see that our value in the initial report for “VanityURL” was 15,599 and the largest row number was 15,101, which puts the value in the classification report about 3% off.

Conclusion

As you can see, the use of the Approximate Count Distinct function (link to Adobe help for more info) can add many new possibilities to your analyses in Adobe Analytics. Here, I have shown just a few examples, but depending upon your business and site objectives, there are many ways you can exploit this function to your advantage. In my next post, I will take this one step further and show you how to calculate dimension penetration %, or what % of all of your values received at least one view over a specified timeframe.

Adobe Analytics, Featured

Chicago Adobe Analytics “Top Gun” Class – May 24, 2018

I am pleased to announce my next Adobe Analytics “Top Gun” class, which will be held May 24th in Chicago.

For those of you unfamiliar with my Adobe Analytics “Top Gun” class, it is a one-day crash course on how Adobe Analytics works behind the scenes, based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering, or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate everyday business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects of Adobe Analytics, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Here are some quotes from recent class attendees:

I have purposefully planned this class for a time of year when Chicago often has nice weather, in case you want to spend the weekend! There is also a Cubs day game the following day!

To register for the class, click here. If you have any questions, please e-mail me. I hope to see you there!

Adobe Analytics, Featured, Tag Management, Technical/Implementation

A Coder’s Paradise: Notes from the Tech Track at Adobe Summit 2018

Last week I attended my 11th Adobe Summit – a number that seems hard to believe. At my first Summit back in 2008, the Great Recession was just starting, but companies were already cutting back on expenses like conferences – just as Omniture moved Summit from the Grand America to the Salt Palace (they moved it back in 2009 for a few more years). Now, the event has outgrown Salt Lake City – with over 13,000 attendees last week converging on Las Vegas for an event with a much larger footprint than just the digital analytics industry.

With the sheer size of the event and the wide variety of products now included in Adobe’s Marketing and Experience Clouds, it can be difficult to find the right sessions – but I managed to attend some great labs, and wanted to share some of what I learned. I’ll get to Adobe Launch, which was again under the spotlight – only this year, it’s actually available for customers to use. But I’m going to start with some of the other things that impressed me throughout the week. There’s a technical bent to all of this – so if you’re looking for takeaways more suited for analysts, I’m sure some of my fellow partners at Demystified (as well as lots of others out there) will have thoughts to share. But I’m a developer at heart, so that’s what I’ll be emphasizing.

Adobe Target Standard

Because Brian Hawkins is such an optimization wizard, I don’t spend as much time with Target as I used to, and this was my first chance to do much with Target Standard besides deploy the at.js library and the global mbox. But I attended a lab that worked through deploying it via Launch, then setting up some targeting on a single-page ReactJS application. My main takeaway is that Target Standard is far better suited to running an optimization program on a single-page application than Classic ever was. I used to have to utilize nested mboxes and all sorts of DOM trickery to delay content from showing until the right moment when things actually took place. But with Launch, you can easily listen for page updates and then trigger mboxes accordingly.

Target Standard and Launch also make it easier to handle a common issue with frameworks like ReactJS where the data layer is asynchronously populated with data from API calls, so you can run a campaign on initial page load even if it takes some time for all the relevant targeting data to be available.

Adobe Analytics APIs

The initial version of the Omniture API was perhaps the most challenging API I’ve ever used. It supported SOAP only, and from authentication to query, you had to configure everything absolutely perfectly for it to work. And you had to do it with no API Explorer and virtually no documentation, all while paying very close attention to the number of requests you were making, since you only had 2,000 tokens per month and didn’t want to run out or get charged for more (I’m not aware this ever happened, but the threat at least felt real!).

Adobe adding REST API support a few years later was a career-changing event for me, and there have been several enhancements and improvements since, like adding OAuth authentication support. But what I saw last week was pretty impressive nonetheless. The approach to querying data has changed significantly in the following ways (a rough sketch of what a request might look like follows the list):

  • The next iteration of Adobe’s APIs will offer a much more REST-ful approach to interacting with the platform.
  • Polling for completed reports is no longer required. It will likely take several more requests to get to the most complicated reports, but each individual request will run much faster.
  • Because Analytics Workspace is built on top of a non-public version of the API, you truly will be able to access any report you can find in the UI.
  • The request format for each report has been simplified, with non-essential parameters either removed or at least made optional.
  • The architecture of a report request is fundamentally different in some ways – especially in the way that breakdowns between reports work.
  • The ability to search or filter on reports is far more robust than in earlier versions of the API.
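
To make that a bit more concrete, here is a rough, hypothetical sketch of what a report request against the newer REST-style API could look like from Python. The endpoint path, header names, and payload fields are my assumptions based on what was demoed and may well differ in the released version, so treat this as illustrative rather than as working sample code.

    import requests

    # All of these values are placeholders; the real ones come from your
    # Adobe I/O integration and company configuration.
    COMPANY_ID = "yourcompanyid"
    API_KEY = "your_api_key"
    ACCESS_TOKEN = "your_oauth_access_token"

    def run_report():
        """Request a simple Page report with Visits for a fixed date range."""
        url = "https://analytics.adobe.io/api/{}/reports".format(COMPANY_ID)
        headers = {
            "Authorization": "Bearer " + ACCESS_TOKEN,
            "x-api-key": API_KEY,
            "x-proxy-global-company-id": COMPANY_ID,
            "Content-Type": "application/json",
        }
        payload = {
            "rsid": "yourreportsuite",
            "globalFilters": [{
                "type": "dateRange",
                "dateRange": "2018-03-01T00:00:00.000/2018-04-01T00:00:00.000",
            }],
            "metricContainer": {"metrics": [{"id": "metrics/visits"}]},
            "dimension": "variables/page",
        }
        # No polling loop: the response comes back directly rather than
        # requiring a "queue the report, then check status" cycle.
        response = requests.post(url, headers=headers, json=payload, timeout=60)
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        print(run_report())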

Launch by Adobe

While Launch has been available for a few months, I’ve found it more challenging than I expected to talk my clients into migrating from DTM to Launch. The “lottery” system made some of my clients wonder if Launch was really ready for prime-time, while the inability to quickly migrate an existing DTM implementation over to Launch has been prohibitive to others. But whatever the case may be, I’ve only started spending a significant amount of time in Launch in the last month or so. For customers who were able to attend labs or demos on Launch at Summit, I suspect that will quickly change – because the feature set is just so much better than with DTM.

How Launch Differs from DTM

My biggest complaint about DTM has always been that it hasn’t matched the rest of the Marketing Cloud in terms of enterprise-class features. From the limited number of integrations available to the rigid staging/production publishing structure, I’ve repeatedly run into issues where it was hard to make DTM work the way I needed for some of my larger clients. Along the way, Adobe has repeatedly said they understood these limitations and were working to address them. And Launch does that – it seems fairly obvious now that the reason DTM lagged in offering features other systems did is that Adobe has been putting way more resources into Launch over the past few years. It opens up the platform in some really unique ways that DTM never has:

  • You can set up as many environments as you want.
  • Minification of JavaScript files is now standard (it’s still hard to believe this wasn’t the case with DTM).
  • Anyone can write extensions to enhance the functionality and features available.
  • The user(s) in charge of Launch administration for your company have much more granular control over what is eventually pushed to your production website.
  • The Launch platform will eventually offer open APIs to allow you to customize your company’s Launch experience in virtually any way you need.

With Great Power Comes Great Responsibility

Launch offers a pretty amazing amount of control, which makes for some major considerations for each company that implements it. For example, the publishing workflow is flexible to the point of being a bit confusing. Because it’s set up almost like a version control system such as Git, any Launch user can set up his or her own development environment and configure it in any number of ways. This means each user has to then choose which version of every single asset to include in a library, promote to staging/production, etc. So you have to be a lot more careful than when you’re publishing with DTM.

I would hope we’ve reached a point in tag management where companies no longer expect a marketer to be able to own tagging and the TMS – it was the sales pitch made from the beginning, but the truth is that it has never been that easy. Even Tealium, which (in my opinion) has the most user-friendly interface and the most marketer-friendly features, needs at least one good developer to tap into the whole power of the tool. Launch will be no different; as the extension library grows and more integrations are offered, marketers will probably feel more comfortable making changes than they were with DTM – but this will likely be the exception and not the rule.

Just One Complaint

If there is one thing that will slow migration from DTM to Launch, it is the difficulty customers will face in migrating. One of the promises Adobe made about Launch at Summit in 2017 was that you would be able to migrate from DTM to Launch without updating the embed code on your site. This is technically true – you can configure Launch to publish your production environment to an old DTM production publishing target. But this can only be done for production, and not any other environment – which means you can migrate without updating your production embed code, but you will need to update all your non-production embed codes. Alternatively, you can use a tool like DTM Switch or Charles Proxy for your testing – and that will work fine for your initial testing. But most enterprise companies want to accumulate a few weeks of test data for all the traffic on at least one QA site before they are comfortable deploying changes to production.

It’s important to point out that, even if you do choose to migrate by publishing your Launch configuration to your old production DTM publishing target, you still have to migrate everything currently in DTM over to Launch – manually. Adobe has said that later this year they will release a true migration tool that will allow customers to pull rules, data elements, and tags from a DTM property into a new Launch property and migrate them without causing errors. Short of such a tool, some customers will have to invest quite a bit to migrate everything they currently have in DTM over to Launch. Until then, my recommendation is to figure out the best migration approach for your company:

  1. If you have at least one rockstar analytics developer with some bandwidth, and a manageable set of rules and tags in DTM, I’d start playing around with migration in one of your development environments, and put together an actual migration plan.
  2. If you don’t have the resources yet, I’d probably wait for the migration tool to be available later in the year – but still start experimenting with Launch on smaller sites or as more resources become available.

Either way, for some of my clients that have let their DTM implementations get pretty unwieldy, moving from DTM to Launch offers a fresh start and a chance to upgrade to Adobe’s latest technology. No matter which of these two situations you’re in, I’d start thinking now (if you haven’t already) about how you’re going to get your DTM properties migrated to Launch. It is superior to DTM in nearly every way, and it is going to get nearly all of the development resources and roadmap attention from Adobe from here on out. You don’t need to start tomorrow – and if you need to wait for a migration tool, you’ll be fine. But if your long-term plan is to stay with DTM, you’re likely going to limit your ability in the future to tap into additional features, integrations and enhancements Adobe makes across its Marketing and Experience Cloud products.

Conclusion

We’ve come a long way from the first Summits I attended, with only a few labs and very little emphasis on the technology itself. Whether it was new APIs, new product feature announcements, or the hands-on labs, there was a wealth of great information shared at Summit 2018 for developers and implementation-minded folks like me – and hopefully you’re as excited as I am to get your hands on some of these great new products and features.

Photo Credit: Roberto Faccenda (Flickr)

Conferences/Community, Featured, Testing and Optimization

2018 Adobe Summit – the testing guys perspective

The 2018 Adobe Summit season has officially closed. This year marked my 11th Summit; my first dates back to 2008, when Omniture acquired Offermatica, where I was an employee at the time. I continue to attend Summit for a variety of reasons, but I especially enjoy spending time with some of my clients and catching up with many old friends. I also enjoy geeking out hardcore with the product and product marketing teams.

While I still very much miss the intimacy and the Friday ski day that Salt Lake City offered, I am warming to Las Vegas much more than I had anticipated. I got the sense that others were as well. I also just learned that after Summit this year quite a few folks created their own Friday Funday, if you will (totally down for Friday Motorcycle Day next year!). The conference is bigger than ever, with reported attendee numbers around 13,000. The topics, or Adobe products, covered have grown quite a bit too. I am not sure if I got the whole list, but here are the products or topics I saw covered at Summit:

  • Advertising Cloud
  • Analytics
  • Audience Manager
  • Campaign
  • Cloud Platform
  • Experience Manager
  • Primetime
  • Sensei
  • Target

My world of testing mainly lives in the Adobe Target, Adobe Analytics and, to varying degrees, Adobe Audience Manager, Adobe Experience Manager, and Adobe Launch worlds. It was cool to see and learn more about these other solutions, but there was plenty in my testing and personalization world to keep me busy. I think I counted 31 full sessions and about 7 hands-on labs for testing. Here is a great write-up of the personalization sessions this year, broken down by category, that I found very helpful.

The conference hotel and venue are quite nice and, given their size, make hosting 13,000 people feel like no big deal. As nice as the hotel is, I still stay around the corner at the Westin. I like getting away and enjoy the walk to and from the event. And boy did I walk this year. According to my Apple Watch, in the four days (Monday – Thursday), I logged 63,665 steps and a mind-blowing 33.38 miles.

The sessions that I focused on were the AI ones, given my considerable work with Automated Personalization, Auto-Allocate, and Recommendations. I also participated in a couple of sessions around optimization programs given my work with MiaProva.

Below is a recap of my week and the lessons I learned for next year.

 

Summit week

Monday

I made a mistake this year and should have come in earlier on Monday, or even Sunday for that matter. Monday is the Adobe Partner day, and there are quite a few fun things to learn about regarding the partnership and Adobe in general. It is also a nice time to hang out with the product teams at Adobe – before the storm of Summit begins. In fact, I was able to make it to one great event that evening at Lavo in the Venetian. Over the last couple of years at least, organizations that use Adobe solutions and the agencies that help them can be nominated for awards based on the impact of using those solutions. That night, attendees got to hear about some great use cases, including one from Rosetta Stone, where they used testing to minimize any detriment in going from boxed software to digital experiences (a very familiar story to Adobe:). If you find yourself part of a team that does something really cool or impactful with Adobe Experience Cloud solutions, consider nominating it for next year!

Also on that Monday is something called UnSummit. I have gone to UnSummit a few times and always enjoyed it. UnSummit is a great gathering of smart and fun people who share interesting presentations. Topics vary, but they are mainly about Analytics and Testing, which is reminiscent of the old days at the Grand America in Salt Lake City. I am not 100% sure why it is called UnSummit, as that could leave the impression that it is a protest or rejection of Summit. I can assure you that it isn’t, or at least I’ve never heard of any bashing or protest. In fact, all attendees are in town because of Summit. Again, it’s a great event, and if you have the time next year, I recommend checking it out.

Tuesday

Opening day, if you will. The general session, followed by many sessions and labs. This sounds silly, but I always come early to have breakfast at the conference. I have had many a great conversation and met so many interesting people by simply joining them at the table. I do this for all the lunches each day as well. We are all pretty much there for similar reasons and have similar interests, so it is nice to geek out a bit and network as well.

I also enjoy checking out the vendor booths and did so this year. Lots of great conversations, and it was cool to run into many former colleagues and friends. Southwest Airlines even had a booth there, but I’m not sure why! Maybe to market to thousands of business folks?

On Tuesday nights, Adobe Target usually hosts an event where Adobe Target users can get together. This year it was at the Brooklyn Bowl, which is on the Linq Promenade, only a few blocks from the hotel. A very cool area if you haven’t been that way. There is an In-N-Out there too!

This event was great, as I got to spend some time with some of my clients and enjoy some good food and music. There was a live band there that night, so it was a bit loud, but it was still a great venue and event. Lots of folks got to bowl, which was awesome too. Of the nightly events, I usually enjoy this one the most.

Wednesday

Big day today! Breakfast networking, a session, the general session and then game time! I had the honor of presenting a session with Kaela Cusack of Adobe. We presented on how to power true personalization with Adobe Target and Adobe Analytics. The session was great, as we got to share how organizations are using A4T and the bi-directional flow of data between the two solutions to make use of the data they already have in Adobe Analytics. Lots of really good feedback, and I will be following up here with step-by-step instructions on how exactly organizations can do this for themselves. You can watch the presentation here.

After my session Q&A, it was Community Pavilion time which is basically snacks and alcohol in the vendor booth area.  I also met with a couple of customers during this time.

Then it was time for Sneaks. I had never heard of Leslie Jones before, but she was absolutely hysterical. She had the crowd laughing like crazy. There were lots of interesting sneaks, but the one I found most interesting was Launch visually interpreting something and then inserting a tag. If Launch can receive inputs like that, then there should be no reason why Target can’t communicate or send triggers to Launch as well. I see some pretty cool use cases with Auto-Allocate, Automated Personalization and Launch here!

After Sneaks it was concert time!  Awesome food, copious amounts of Miller Lite and lots of time to hang with clients and friends.  Here is a short clip of Beck who headlined that night:

 

Thursday

Last year I made the big mistake of booking a 3 pm flight out of Vegas on Thursday. It was a total pain to deal with the luggage, and I missed out on two really great sessions that Thursday afternoon. I wasn’t going to make that mistake this year, so I flew home first thing on Friday morning, which I will do again next year.

Thursday is a chill day. I had quite a few meetings with Demystified and MiaProva prospects and attended a few great sessions. Several people told me that the session called “The Future of Experience Optimization” was their favorite session of all of Summit, and it took place on Thursday afternoon. I was disappointed that I couldn’t attend due to a client meeting, but I will definitely be watching the video of this session.

Thursday late afternoon and night were all about catching up on email and getting an early night’s rest. Again, it was much more relaxing not rushing home. So that was my week, which somehow now feels like it was many weeks ago.

Takeaways

There were many great sessions, far too many to catch live. Adobe, though, made every session available for viewing here.

There is quite a bit going on with Adobe Target, and not just from a product and roadmap perspective. There is a lot of community work taking place as well. If you work with Target in any way, I recommend subscribing to both Target TV and the Adobe Target Forum. I was able to meet Amelia Waliany at Adobe Summit this year, and she is totally cool and fun. She runs these two initiatives for Adobe.

There are many changes and updates being made to Adobe Target, and these two channels are great for staying up to date and for seeing what others are doing with the product. I also highly recommend joining Adobe’s Personalization Thursdays, as they go deep with the product and bring in some pretty cool guests from time to time.

Hope to see you next year!

 

Uncategorized

Chrome Network Filtering Tip for Analytics

When debugging analytics there are lots of tools you can use. Pretty much anything that can look at the HTTP requests will do something useful. Personally, I like to just use the Network tab in Chrome Developer Tools most of the time. This gives me lots of flexibility to look at any tag as I switch between Adobe Analytics (AA), Google Analytics (GA), Floodlights, Marketo, etc. It is also helpful if I’m trying to investigate how analytics is impacted by site issues as I delve into redirects or iframes.

Basic Network Filtering

When working across all these requests, one useful bit of know-how is filtering the Network tab using RegEx. I have found this especially handy in GA implementations where I may be taking advantage of non-interaction events to track scrolling or impression information of some sort. This can cause a single page to have quite a few entries in the Network tab. In this example setup, activity across two pages looked something like this (notice I’m already filtering for “collect”, which is a standard part of a Universal Analytics GA hit):

If I want to look at certain types of requests (such as only page view hits, event hits, or hits with certain values), then this view can be a bit cumbersome to work with. I may find myself taking extra time just flipping through requests to find the one I’m interested in.

Bring on the RegEx

To make this a bit easier you can just apply RegEx to the filter field. Once upon a time Chrome had a checkbox you had to check for this to work. Now you just need to use slashes around your expression like so:

This example using /collect.*scroll/ will just filter for my scroll events. A few other expressions to consider are:

  • GA pageview hits: /collect.*t=pageview/
  • GA event hits: /collect.*t=event/
  • AA custom link requests: /b\/ss.*pe=lnk_o/
  • AA page hits: /b\/ss(?!.*pe=)/

Extra Considerations

Going back to the scroll example…these events have an event category of “page scroll”. I could be more specific and do something like /collect.*ec=page scroll/ …except that this just won’t work. Remember that you are now filtering based on the request URL, so you need to keep URL encoding in mind. We would have to modify this expression to /collect.*ec=page%20scroll/. Notice the space is replaced by %20, which is the URL encoding for a space.

Some of these examples could be unnecessarily complicated. Often filtering for “pageview” or “t=pageview” is unique enough. But, hey, you have more options now.

Another catch is that this doesn’t work with POST requests, since in the case of a POST the values are passed in the payload instead of the URL. POST is used by AA and GA if the URL is going to be too long, so expect that to crop up at times when you are sending a lot of data.

Adobe Analytics, Analytics Strategy, Conferences/Community, General

Don’t forget! YouTube Live event on Adobe Data Collection

March is a busy month for all of us and I am sure for most of you … but what a great time to learn from the best about how to get the most out of your analytics and optimization systems! Next week on March 20th at 11 AM Pacific / 2 PM Eastern we will be hosting our first YouTube Live event on Adobe Data Collection. You can read about the event here or drop us a note if you’d like a reminder the day of the event.

Also, a bunch of us will be at the Adobe Summit in Las Vegas later this month.  If you’d like to connect in person and hear firsthand about what we have been up to please email me directly and I will make sure it happens.

Finally, Senior Partner Adam Greco has shared some of the events he will be at this year … just in case you want to hear first-hand how your Adobe Analytics implementation could be improved.

 

Featured, Testing and Optimization

Personalization Thursdays and MiaProva

Personalization Thursdays

Each month, the team at Adobe hosts a webinar series called Personalization Thursdays.  The topics vary but the webinars typically focus on features and capabilities of Adobe Target.  The webinars are well attended and they often go deep technically which leads to many great questions and discussions.  Late last year, I joined one of the webinars where I presented “10 Execution tips to get more out of Adobe Target” and it was very well received!  You can watch that webinar here if you are interested.

Program Management

On Thursday, March 15th, I have the privilege of joining the team again, where I will be presenting on “Program Management for Personalization at Scale”. Here is the outline of this webinar:

Program management has become a top priority for our Target clients as we begin to scale optimization and personalization across a highly matrixed, and often global organization. It’s also extremely valuable in keeping workspaces discrete and efficiency of rolling out new activities. We’ll share the latest developments in program management that will assist with ideation and roadmap development, as well as make it easier to schedule and manage all your activities on-the-go, with valuable alerts and out of the box stakeholder reports.

I plan on diving into Adobe I/O and how organizations and software can use it to scale their optimization programs. I will also show how users of MiaProva leverage it to manage their tests from ideation through execution.

You have to register to attend but this webinar is open to everyone.  You can quickly register via this link:  http://bhawk.me/march-15-webinar

Hope to see you there!

Adobe Analytics, Featured

Where I’ll Be – 2018

Each year, I like to let my blog readers know where they can find me, so here is my current itinerary for 2018:

Adobe Summit – Las Vegas (March 27-28)

Once again, I am honored to be asked to speak at the US Adobe Summit. This will be my 13th Adobe Summit in a row and I have presented at a great many of those. This year, I am doing something new by reviewing a random sample of Adobe Analytics implementations and sharing my thoughts on what they did right and wrong. A while ago, I wrote a blog post asking for volunteer implementations for me to review, and I was overwhelmed by how many I received! I have spent some time reviewing these implementations and will share lots of tips and tricks that will help you improve your Adobe Analytics implementations. To view my presentation from the US Adobe Summit, click here.

Adobe Summit – London (May 3-4)

Based upon the success of my session at the Adobe Summit in Las Vegas, I will be coming back to London to present at the EMEA Adobe Summit.  My session will be AN7 taking place at 1:00 pm on May 4th.

DAA Symposium – New York (May 15)

As a board member of the Digital Analytics Association (DAA), I try to attend as many local Symposia as I can. This year, I will be coming to New York to present at the local symposium being held on May 15th. I will be sharing my favorite tips and tricks for improving your analytics implementation.

Adobe Insider Tour (May & September)

I will be hitting the road with Adobe to visit Atlanta, Los Angeles, Chicago, New York and Dallas over the months of June and September. I will be sharing Adobe Analytics tips and tricks and trying something new called Adobe Analytics implementation improv! Learn more by clicking here.

Adobe Analytics “Top Gun” Training – Chicago/Austin (May 24, October 17)

Each year I conduct my advanced Adobe Analytics training class privately for my clients, but I also like to do a few public versions for those who don’t have enough people at their organization to justify a private class. This year, I will be doing one class in Chicago and one in Austin. The Chicago class will be at the same venue in downtown Chicago as the last two years. The date of the class is May 24th (when the weather is a bit warmer and the Cubs are in town the next day for an afternoon game!). You can register for the Chicago class by clicking here.

In addition, for the first time ever, I will be teaming up with the great folks at DA Hub to offer my Adobe Analytics “Top Gun” class in conjunction with DA Hub! My class will be one of the pre-conference training classes ahead of this great conference. This is also a great option for those on the West Coast who don’t want to make the trek into Chicago. To learn more and register for this class and DA Hub, click here.

Marketing Evolution Experience & Quanties – Las Vegas (June 5-6)

As you may have heard, the eMetrics conference has “evolved” into the Marketing Evolution Experience. This new conference will be in Las Vegas this summer and will also surround the inaugural DAA Quanties event. I will be in Vegas for both of these events.

ObservePoint Validate Conference – Park City, Utah (October 2-5)

Last year, ObservePoint held its inaugural Validate conference and everyone I know who attended raved about it. So this year, I will be participating in the 2nd ObservePoint Validate conference taking place in Park City, Utah. ObservePoint is one of the vendors I work with the most and they definitely know how to put on awesome events (and provide yellow socks!).

DA Hub – Austin (October 18-19)

In addition to doing the aforementioned training at the DA Hub, I will also be attending the conference itself. It has been a few years since I have been at this conference and I look forward to participating in its unique “discussion” format.


Adobe Analytics, Tag Management, Technical/Implementation

Adobe Data Collection Demystified: Ten Tips in Twenty(ish) Minutes

We are delighted to announce the first of what we hope will be many live presentations on YouTube, coming up on March 20th at 11 AM Pacific / 2 PM Eastern! Join Josh West and Kevin Willeitner, Senior Partners at Analytics Demystified and recognized industry leaders on the topic of analytics technology, and learn some practical techniques to help you avoid common pitfalls and improve your Adobe data collection. Presented live, Josh and Kevin will touch on aspects of the Adobe Analytics collection process from beginning to end, with tips that will help your data move through the process more efficiently and give you some know-how to make your job a little easier.

The URL for the presentation is https://www.youtube.com/watch?v=FtJ40TP1y44 and if you’d like a reminder before the event please just let us know.

Again:

Adobe Data Collection Demystified
Tuesday, March 20th at 11 AM Pacific / 2 PM Eastern
https://www.youtube.com/watch?v=FtJ40TP1y44

Also, if you are attending this year’s Adobe Summit in Las Vegas … a bunch of us will be there and would love to meet in person. You can email me directly and I will coordinate with Adam Greco, Brian Hawkins, Josh West, and Kevin Willeitner to make sure we have time to chat.

Featured, google analytics

Google Data Studio “Mini Tip” – Set A “Sampled” Flag On Your Reports!

Google’s Data Studio is their answer to Tableau – a free, interactive data reporting, dashboarding and visualization tool. It has a ton of different automated “Google product” connectors, including Google Analytics, DoubleClick, AdWords, Attribution 360, BigQuery and Google Spreadsheets, not to mention the newly announced community connectors (which add the ability to connect third-party data sources).

One of my favourite things about Data Studio is the fact that it leverages an internal-only Google Analytics API, so it’s not subject to the sampling issues of the normal Google Analytics Core Reporting API.

For those who aren’t aware (and to take a quick, level-setting step back), Google Analytics will run its query on a sample of your data if both of the following conditions are met:

  1. The query is a custom query that can’t be answered from a pre-aggregated table. (Basically, if you apply a secondary dimension or a segment.)
  2. The number of sessions in your timeframe exceeds:
    • GA Standard: 500K sessions
    • GA 360: 100M sessions
      (at the view level)

The Core Reporting API can be useful for automating reporting out of Google Analytics. However, it has one major limitation: the sampling threshold for the API is the same as Google Analytics Standard (500K sessions) … even if you’re a GA360 customer. (Note: Google has recently addressed this by adding the option of a cost-based API for 360 customers. And of course, 360 customers also have the option of BigQuery. But, like the Core Reporting API, Data Studio is FREE!)

Data Studio, however, follows the same sampling rules as the Google Analytics main interface. (Yay!) Which means for 360 customers, Data Studio will not sample until the selected timeframe is over 100M sessions.

As a quick summary…

Google Analytics Standard

  • Google Analytics UI: 500,000 sessions (at the view level)
  • Google Analytics API: 500,000 sessions
  • Data Studio: 500,000 sessions

Google Analytics 360

  • Google Analytics UI: 100 million sessions (at the view level)
  • Google Analytics API: 500,000 sessions
  • Data Studio: 100 million sessions

But here’s the thing… In Google Analytics’ main UI, we see a little “sampling indicator” to tell us if our data is being sampled.

In Data Studio, historically there was nothing to tell you (or your users) whether the data you are looking at is sampled or not. Data Studio “follows the same rules as the UI”, so technically, to know if something was sampled, you had to go request the same data via the UI and see if it was sampled there.

At the end of 2017, Data Studio added a “Show Sampling” toggle.

The toggle won’t work in embedded reports, though (so if you’re a big Sites user, or otherwise embed reports a lot, you’ll still want to go the manual route), and adding your own flag gives you some control over how, where, and how prominently any sampling is shown (plus the ability to have it “always on” rather than requiring a user to toggle it).

What I have historically done is add a discreet “Sampling Flag” to reports and dashboards. Now, keep in mind – this will not tell you if your data is actually being sampled. (That depends on the nature of each query itself.) However, a simple Sampling Flag can at least alert you or your users to the possibility that your query might be sampled, so you can check the original (non-embedded) Data Studio report, or the GA UI, for confirmation.

To create this, I use a very simple CASE formula:

CASE WHEN (Sessions) >= 100000000 THEN 1 ELSE 0 END

(For a GA Standard client, adjust to 500,000)
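For example, the GA Standard version of the same flag would look something like this:

CASE WHEN (Sessions) >= 500000 THEN 1 ELSE 0 END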

I place this in the footer of my reports, but you could choose to display it much more prominently if you want it called out to your users.
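If you do want the flag to stand out more, one option (just a sketch; use whatever label text makes sense for your users) is to return a text warning instead of a 1:

CASE WHEN (Sessions) >= 100000000 THEN "Sampling possible, verify in the GA UI" ELSE "No sampling" END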

Keep in mind, if you have a report with multiple GA Views pulled together, you would need one Sampling Flag for each view (as it’s possible some views may have sampled data while others do not). If you’re using Data Studio within its main UI (i.e., not embedded reports), the native sampling toggle may be more useful.

I hope this is a useful “mini tip”! Thoughts? Questions? Comments? Cool alternatives? Please add them in the comments!

Adobe Analytics, Featured

Free Adobe Analytics Review @ Adobe Summit

For the past seven years (and many years prior to that while at Omniture!), I have reviewed/audited hundreds of Adobe Analytics implementations. In most cases, I find mistakes that have been made and things that organizations are not doing that they should be. Both of these issues impede the ability of organizations to be successful with Adobe Analytics. Poorly implemented items can lead to bad analysis, and missed implementation items represent an opportunity cost for data analysis that could be done, but isn’t. Unfortunately, most organizations “don’t know what they don’t know” about implementing Adobe Analytics, because the people working there have only implemented Adobe Analytics once or twice, versus people like me who do it for a living. In reality, I see a lot of the same common mistakes over and over again, and I have found that showing my clients what is incorrect and what can be done instead is a great way for them to learn how to master Adobe Analytics (something I do in my popular Adobe Analytics “Top Gun” Class).

Therefore, at this year’s Adobe Summit in Las Vegas, I am going to try something I haven’t done in any of my past Summit presentations. This year, I am asking for volunteers to have me review your implementation (for free!) and share with the audience a few things that you either need to fix or net-new things you could do to improve your Adobe Analytics implementation. In essence, I am offering to do a free review of your implementation and give you some free consulting! The only catch is that when I share my advice, it will be in front of a live audience so that they can learn along with you. In doing this, here are some things I will make sure of:

  • I will work with my volunteers to make sure that no confidential data is shown and will share my findings prior to the live presentation
  • I will not do anything to embarrass you about your current implementation. In fact, I have found that most of the bad things I find are implementation items that were done by people who are no longer part of the organization, so we can blame it on them 😉
  • I will attempt to review a few different types of websites so multiple industry verticals are represented
  • You do not have to be at Adobe Summit for me to review your implementation

So… if you would like to have me do a free review of your implementation, please send me an e-mail or message me via LinkedIn and I will be in touch.


Analysis, Presentation

10 Tips for Presenting Data

Big data. Analytics. Data science. Businesses are clamoring to use data to get a competitive edge, but all the data in the world won’t help if your stakeholders can’t understand it, or if their eyes glaze over as you present your incredibly insightful analysis. This post outlines my top ten tips for presenting data.

It’s worth noting that these tips are tool agnostic—whether you use Data Studio, Domo, Tableau or another data viz tool, the principles are the same. However, don’t assume your vendors are in lock-step with data visualization best practices! Vendor defaults frequently violate key principles of data visualization, so it’s up to the analyst to put these principles in practice.

Tip #1: Recognize That Presentation Matters

The first step to presenting data is to understand that how you present data matters. It’s common for analysts to feel they’re not being heard by stakeholders, or that their analysis or recommendations never generate action. The problem is, if you’re not communicating data clearly for business users, it’s really easy for them to tune out.

Analysts may ask, “But I’m so busy with the actual work of putting together these reports. Why should I take the time to ‘make it pretty’?”

Because it’s not about “making things pretty.” It’s about making your data understandable.

My very first boss in Analytics told me, “As an analyst, you are an information architect.” It’s so true. Our job is to take a mass of information and architect it in such a way that people can easily comprehend it.

… Keep reading on ObservePoint‘s blog …

Featured

Podcasts!

I am a podcast addict! I listen to many podcasts to get my news and for professional reasons. Recently, I came across a great podcast called Everyone Hates Marketers, by Louis Grenier. Louis works for Hotjar, which is a technology I wrote about late last year. His podcast interviews some of the coolest people in Marketing and attempts to get rid of many of the things that Marketers do that annoy people. Some of my favorite episodes were the ones with Seth Godin, DHH from Basecamp and Rand Fishkin. This week, I am honored to be on the podcast to talk about digital analytics. You can check out my episode here, in which I share some of my experiences and stories throughout my 15 years in the field.

There is a lot of great content in the Everyone Hates Marketers podcast and I highly recommend you check it out if you want to get a broader marketing perspective to augment the great stuff you can learn from the more analytics-industry-focused Digital Analytics Power Hour.

While I am discussing podcasts, here are some of my other favorites:

  • Recode Decode – Great tech industry updates from the best interviewer in the business – Kara Swisher
  • Too Embarrassed to Ask – This podcast shares specifics about consumer tech stuff with Lauren Goode and Kara Swisher
  • NPR Politics – Good show to keep updated on all things politics
  • How I Built This – Podcast that goes behind the scenes with the founders of some of the most successful companies
  • Masters of Scale – Great podcast by Reid Hoffman about how startups work and practical tips from leading entrepreneurs
  • Rework – Podcast by Basecamp that shares tips about working better

If you need a break from work-related podcasts, I suggest the following non work-related podcasts:

  • West Wing Weekly – This is a fun show that re-visits each episode of the classic television series “The West Wing”
  • Filmspotting – This one is a bit long, but provides great insights into current and old movies

Here is to a 2018 filled with new insights and learning!

Digital Analytics Community, Team Demystified

Welcome Tim Patten!

While I am running a little behind on sharing this news due to the holidays and some changes in the business, I am delighted to announce that Tim Patten has joined the company as a Partner!

Tim is a great guy, a longtime friend, and an even longer-term member of the digital analytics and optimization community. He and I first met when he was at Fire Mountain Gems down in Medford, Oregon, and we have had multiple opportunities to work together over the years.

Most recently Tim has been a valuable contributor to our Team Demystified business unit, providing on-the-ground analytics support to Demystified’s biggest and best clients. In that role Tim repeatedly demonstrated the type of work ethic, intensity, and attitude that is a hallmark of a Demystified Partner, and so late last year we made the decision to invite him onto the main stage. Tim will be working side by side with Kevin Willeitner and Josh West helping Demystified clients ensure that they have the right implementation for analytics.

Go check out Tim’s blog and please help me welcome him to the company!

General

Tim Patten joins the team at Analytics Demystified

I am extremely excited to be joining the talented team of Analytics Demystified partners. I am truly humbled to be working alongside some of the brightest industry veterans Digital Analytics has to offer, and I am looking forward to adding my expertise to the already broad offering of services that we provide. My focus will be on technical and implementation-related projects; however, I will also assist with any analysis needs that my clients have.

While joining the partner team is a new role for me, I have been working with Analytics Demystified for the past three years as a contractor through Team Demystified.  Prior to this, I was Principal Consultant and Director of Global Consulting Services at Localytics, a mobile analytics company.  My 10+ years of experience in Digital Analytics, as a consultant, vendor and practitioner, puts me in a great position to help our clients reach their maximum potential with their analytics investments.

On a personal note, I currently live in the Portland, Oregon area with my girlfriend and energetic Golden Retriever pup.  I’m a native Oregonian and therefore love the outdoors (anything from hiking to camping to snowboarding and fishing).  I’m also a big craft beer enthusiast (as is anyone from the Portland area) and can be found crafting my own concoctions during the weekends.

I can be reached via email at tim.patten(AT)analyticsdemystified.com, via Measure Slack, or Twitter (@timpatten).