Uncategorized

How can we help you?

If you’re here, based on our analytics, it’s for one of two reasons: you found some old posts we wrote about Excel nearly 10 years ago, or you need help unlocking the value of your investment in Google or Adobe Analytics. For the former, we hope the content we have proves helpful, and for the latter, you have come to the right place.

We encourage you to have a look at our team, the services we offer, and our nearly twenty years of analytics thought leadership, but what we hope you will do is pick up the phone and give us a call. Because while our world-class clients, who are among the best-known brands in technology, financial services, consumer packaged goods, and automotive, keep us busy, we are never too busy to try to help.

Adobe Analytics, Testing and Optimization

Adobe Target and Adobe Analytics Webinar with Adobe

I have had the amazing good fortune to be in the testing and optimization space since 2006, when I joined a small company called Offermatica. In 2008, Offermatica (now Adobe Target) was acquired by Omniture (now Adobe Analytics). In the twelve years since that acquisition, the two solutions have evolved to share a single profile by way of Analytics for Target (A4T).

On Tuesday, October 27th, I will be joining Adobe on a webinar to talk about A4T and dive into:

  • How A4T can provide the mechanisms to align organizationally, scale your optimization program, monitor the program in aggregate, and leverage metric-driven AI
  • Automation with Target and putting the metrics and audiences from Analytics to work for you
  • Incorporating Automation to advance the journeys of your digital consumers

If you are interested, please join us:

https://www.adobeeventsonline.com/Webinar/2020/PersonalizationScale/invite.html

General

Our campaign to raise money for Black Girls CODE

A little more than a week ago I could not sleep. I was about to ask the analytics community for money at what has proven to be a tough time for many, and I was about to make a statement about the values that our company holds. It’s a little nerve-wracking, I admit.

I shouldn’t have worried in retrospect.

Jason Thompson of 33 Sticks and I set out to raise $40,000 for Black Girls CODE by agreeing to match up to $20,000 in donations. We gave ourselves two weeks to raise the money and meet our goal.

It took three days.

I just want to say I am personally HUGELY GRATEFUL for the support we have received from the digital measurement community in this campaign. Nearly 100 individual donations, large and small, have pushed us past our goal, and now we are well on our way to raising over $50,000.

We still welcome your donations (instructions are pasted below), but again, from the bottom of my heart … thank you all. Thank you for thinking beyond your own lives and considering the lives of others. Thank you for recognizing that racism is present even if we personally do not see it. Thank you for sharing our desire to have a more diverse, more equitable, and more balanced technology landscape over time.

You are all awesome.


If you’d like to make a donation as part of the Demystified/33 Sticks campaign for Black Girls CODE:

  1. Decide how much you can contribute, knowing that Jason and I are matching you dollar for dollar
  2. Go to donorbox.org/support-black-girls-code and make your donation
  3. When you get your email confirmation of the donation, which is also your tax donation receipt, forward it to either blm@analyticsdemystified.com or blm@33sticks.com. If you want to redact the email and remove your personal info, that is totally fine; we just need to know how much you donated!
  4. Track our collective progress online at https://tinyurl.com/demystified-33sticks

General

Analytics Demystified Supports Black Lives Matter

I have white privilege.

I was born into it, and throughout my life I have been given opportunities simply because I am white. I don’t want to say I have taken advantage of that, but I honestly don’t know that I haven’t … because I don’t know what it’s like to not be white and live in a system that treats otherwise qualified, talented, hard working individuals differently because of the color of their skin.

I didn’t ask for it, but it’s there, and so watching the scenes unfold across the media in the wake of George Floyd’s killing makes me feel ashamed of the system that has given me so much. And I am frustrated that in a day and age that has seen so many amazing technological advancements, we have not as a society managed to further the causes of equality, humanity, and compassion.

I’d like to start to help fix that.

If you have followed my career — from my founding of the Web Analytics Forum, to my publishing Web Analytics Demystified and The Big Book of Key Performance Indicators, to my co-founding of Web Analytics Wednesdays, to the creation and fostering of the Analysis Exchange — you will see that I have tried to be there for the digital analytics community. My efforts have not always been wholly altruistic, I admit that, but in the end I like to believe that I have had some positive impact on our industry as a whole.

Today I want to ask the analytics community to help me give back.

On behalf of Analytics Demystified, I am donating to Black Girls CODE, a 501(c)(3) non-profit that is working to increase the number of women of color in the digital space by empowering girls of color ages 7 to 17 to become innovators in STEM fields, leaders in their communities, and builders of their own futures through exposure to computer science and technology. I chose Black Girls CODE as a recipient because their efforts speak directly to me as a technologist, a business leader, and as a father.

But I am not alone in my efforts.

Jason Thompson, the CEO and co-founder of 33 Sticks, has generously agreed to match my donation to Black Girls CODE.  While technically Jason and I compete, I reached out to him because I respect his work ethic and his continual efforts to remind us all that it’s not what we do but how we do it that matters. He is one of the “good guys” in digital measurement, and I knew before I even asked him that he would help if he could.

And Jason and I … would like your help.

We are asking each of you reading this who work in the analytics industry and who have comparatively good lives to join us in donating to Black Girls CODE. And to encourage your donations, Jason and I will match up to a total of $20,000 USD in donations over the next 14 days.

Our goal is to work with you, the global digital measurement community, to raise $40,000 USD for Black Girls CODE to help them bring more diverse voices into technology. By expanding the range of experiences shaping our industry, Jason and I have little doubt that digital analytics, and by extension, the technology community, will be better for it.

Helping us, and taking advantage of our matching efforts, is super simple:

  1. Decide how much you can contribute, knowing that Jason and I are matching you dollar for dollar
  2. Go to donorbox.org/support-black-girls-code and make your donation
  3. When you get your email confirmation of the donation, which is also your tax donation receipt, forward it to either blm@analyticsdemystified.com or blm@33sticks.com. If you want to redact the email and remove your personal info, that is totally fine; we just need to know how much you donated!
  4. Track our collective progress online at https://tinyurl.com/demystified-33sticks

No donation is too small! If you can give $5 it’s like giving $10! If you can give $50 it’s like giving $100!! Jason and I are confident that if we are able to rally the digital analytics community to raise $40,000 for Black Girls CODE, together we can have a positive and meaningful impact on their efforts to make our little corner of the world a more diverse, a more inclusive, and an overall better place.

I welcome any questions you might have about this effort, and on behalf of everyone at Analytics Demystified I sincerely hope that all is well for you and yours during these uncertain times.

P.S. Please feel free to share this post with anyone and everyone you think may want to contribute. 

Featured, Testing and Optimization

How Adobe Target can help in the craziest of times…

It has been a crazy week, but I don’t have to tell any of you that. Many of you might be new to working from home, adjusting to homeschooling (the biggest challenge in my house), changes to businesses, health issues, family concerns, etc. We have never seen anything like this before. Truly historic times.

Since last Saturday, I have been swamped helping some of my clients that leverage Adobe Target to make things easier and better for their digital consumers. Now that I am getting my head above water, I thought I would share some of the many use cases that have come up, in the hopes that some of you might find them helpful as well.

So, in no particular order:

A.  Geo-Targeting – A few of the retail companies that I work with wanted certain messaging sent to specific DMAs and cities related to store closings, adjusted hours, etc. I even had a few financial institutions that needed certain content displayed to their customers outside the United States. Geo-targeting is simply an Activity that is targeted to an audience using the built-in geo attributes:

Another helpful utility is built into the geo attributes for the telcos out there: you can target your own network or a competitor’s network. 😉

B.  Impression Capping – This has been a popular request this week: present COVID-19-related content, but only for 3 or 4 impressions, and then suppress it. This is done by leveraging Adobe Target profile attributes. We simply set up a profile script that increments with each Adobe Target server call (or mbox call for us old-timers), like the one below.

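A minimal sketch of that kind of profile script, assuming a script named covidImpressions and a global-mbox deployment (the names are illustrative, and the attribute-lookup syntax should be double-checked against your own profile scripts):

// Profile script: covidImpressions (illustrative name)
// Increments on each Target server call; the returned value is stored on the
// visitor profile and can be referenced by an Audience as user.covidImpressions.
if (mbox.name == 'target-global-mbox') {
  var count = parseInt(user.get('covidImpressions')) || 0; // previous value, if any
  return count + 1;
}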

Then create an Audience like this and use it in the Activity. This Audience essentially represents 1, 2, or 3 page impressions, assuming a global mbox (every page) deployment. The fourth impression would kick the visitor out of the test and stop any content associated with it from showing.

C.  Recommendations – Quite a bit of work here this week helping customers adjust the Criteria used in the Recommendations being made across their sites. The first thing we focused on was the recency of data. Baseball and soccer cleats were hot items up until this week, so adjusting the “most viewed” or “top sellers” criteria to a smaller data window made a lot of sense.

To modify this within the Criteria, simply drag the data-window setting all the way to the left, and the suggested products will be based only on data from the last 24 hours.

The next thing we did was raise the inventory considerations given the high volume of some items being sold.  Again, within the Criteria, you can tell Adobe not to include a particular SKU or product in the Recommendations if the inventory of that product falls below a threshold.

D.  Auto-Allocate – This feature is available to all Adobe Target licenses and is not limited to those that have Target Premium. It is huge during short-term marketing initiatives (think Cyber Monday, Black Friday, etc.) but was really helpful this week. By simply changing the default radio button to what I show below within the Targeting step of the Activity setup, Adobe Target will automatically shift traffic to the better-performing experience.

If you have different messaging that you need to convey to your visitors and are unsure which one would be best, you can let your consumers tell you. Be warned, though: I have seen this thing kick some serious butt and shift traffic pretty quickly once confidence is detected.

E.  Emergency Backups – This one came as a bit of a surprise to me this week and is something you all should think about. I’ve been helping companies use technologies like Adobe Target since 2006 at Offermatica, and I’m sure I’ve been the pseudo-backup for people hundreds of times, but this week things got a bit more formal.

This week I was incorporated into a very formal process with one of the large financial firms that I help a lot with test execution and system integrations. When the situation arose, this firm put very formal processes in place in the event someone is unavailable to work or even to get on a phone. Quite impressive, and a testament to the value of optimization and personalization.

The tactical component of this exercise involved making some adjustments to Adobe Target workspaces (NOT to be confused with Analytics workspaces) and Adobe Target Product Profiles (NOT the profile attributes).

F.  Test Results – These are not normal times, and visitor behavior, traffic volume, and conversions are likely very atypical. In most of the scenarios I dove into this week, the test results were not helpful, even though the noise is distributed across all test experiences. I’d spend more time on qualitative data and use that data, coupled with your testing solution, to help the digital consumer. A test winner decided based on this traffic could turn out not to be a winner when things normalize. That said, it depends on what the test is – I had a Recommendations test related to design that could be valid despite the odd traffic. We are just going to test it again later.

I wish all of you and yours well, and let us all continue to flatten the curve.

Adobe Analytics, Featured

Creating Time-Lapse Data via Analysis Workspace

Sometimes, seeing how data changes over time can reveal trends in your data. One way to do this is with a time-lapse. Who hasn’t been mesmerized by a cool video like this:

Credit: RankingTheWorld – https://www.youtube.com/watch?v=8WVoJ6JNLO8

Wouldn’t it be cool if you could do something similar with Adobe Analytics data? Imagine seeing something like the above time-lapse for your products, product categories, or campaign channels! That would be amazing! Unfortunately, I doubt this functionality is on the Adobe Analytics roadmap, but in this post, I am going to show you how you can partially create it using Analysis Workspace and add time-lapse to your analytics presentations.

Step 1 – Isolate Data

To illustrate this concept, let’s start with a simple example. Imagine that you have a site that uses some advanced browser features of Google Chrome. It is important for you to understand which version of Chrome your website visitors are using and how quickly they move from one version to the next. You can easily build a freeform table in Analysis Workspace that isolates visits from a bunch of Google Chrome versions like this:

Here you can see that the table goes back a few years and shows Visits by various Chrome versions using a cross-tab with values from the Browser dimension.

Step 2 – Add a Chart Visualization

The next step is to add a chart visualization. I have found that there are only three types of visualizations that work for time-lapse: horizontal bar, treemap and donut. I will illustrate all of these, but to start, simply add a horizontal bar visualization and link it to the table created above:

When you first add this chart visualization, it may look a bit strange since it has so much data, but don’t worry, we will fix it in a minute. Once you add it, be sure to use the gear icon to customize it so it has enough rows to encompass the number of items you have added to your table (I normally choose the maximum of 25):

Step 3 – Create Time-Lapse

The final step is to create the time-lapse. To do this, you have to have some sort of software that will allow you to record your screen. I use a Mac product called GIF Brewery 3, but you can use Snagit, GoToMeeting, Zoom, etc… Once you have selected how you want to record the time-lapse, you have to learn the trick in Analysis Workspace that allows you to cycle through your data by week. The trick is to click on the cell directly to the right of the first time period (week of July 2, 2017 in my example) and then use your left arrow to move one cell to the left. This will allow you to select the entire first row as illustrated here:

Once you have the entire row selected, you can use the down arrow to scroll down one row at a time. So: start recording, select the cell to the right of the first time period, select the row, and then keep pressing the down arrow; you can stop the recording when you get to the end. Then you just have to clean it up (I cut off a bit at the beginning and end) and save it as a video file. Using GIF Brewery 3, I can turn these recordings into animated GIFs which are easy to embed into PowerPoint, Keynote or Google Slides.

Here is what the time-lapse for the Chrome browser scenario looks like when it is completed:

Another visualization type I mentioned was the treemap. The process is exactly the same: you simply link the treemap to your table and record the same way to produce something like this:

Venn Visualization

As mentioned above, I have found that time-lapse works best with horizontal bar, treemap and donut visualizations. One other one that is cool is the Venn visualization, but this one has to be handled a bit differently than the previous examples. The following are the steps to do a time-lapse with the Venn visualization.

First, choose the segments and metric you want to add to the Venn visualization. As an example, I am going to look at what portion of all Demystified visits view one of my blog posts and how many people have viewed the page about the Adobe Analytics Expert Council (AAEC). I start by adding segments to the Venn visualization:

Next, I am going to expose the data table that is populating the Venn visualization:

Then I use a time dimension to break down the table. In this case, I will use Week:

From here, you can follow the same steps to record weekly time-lapse to produce this:

Sample Use Cases

This concept can be applied to many other data points found in Adobe Analytics. For example, I recently conducted a webinar with Decibel, an experience analytics provider, in which we integrated Decibel experience data into Adobe Analytics to view how many visitors were having good and bad website experiences. We were then able to view experience over time using time-lapse. In the following clip, I have highlighted in the time-lapse when key events took place on the website:

If you want to memorialize the time when your customers officially started ordering more products from their mobile phones than from the desktop, you can run this device-type time-lapse:

Another use case might be blog readership if you are a B2B company. Oftentimes, blogs are used to educate prospects and drive lead generation. Here is an example in which a company wanted to view how a series of blogs were performing over time. Once again, you simply create a table of the various blogs (in this case I used segments since each blog type had several contributors):

In this case, I will use the donut chart I mentioned earlier (though it is dangerously close to a pie chart, which I have been told is officially uncool!):

Here is the same data in the typical horizontal bar chart:

As a bonus tip, if you want to see a cumulative view of your data in a time-lapse, all you need to do is follow the same process, but with a different metric. You can use the Cumulative formula in the calculated metric builder to sum all previous weeks as you go and then do a time-lapse of the sum. In this blog example, here is the new calculated metric that you would build:
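
Conceptually, the new metric is just a running sum of the base metric, week over week (the metric name here is illustrative; the Cumulative function in the calculated metric builder does the summing for you):

Cumulative Blog Post Views (through week n) = Blog Post Views (week 1) + Blog Post Views (week 2) + … + Blog Post Views (week n)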

Once you add this to your table, it will look like this:

Then you just follow the same steps to record your time-lapse:

Final Thoughts

These are just a few examples of how this concept can be applied. In your case, you might want to view a time-lapse of your top ten pages, campaign codes, etc. It is really up to you to decide how you want to use it. I have heard rumors that Analysis Workspace will soon allow you to add images to projects, so it would be cool if you could add animated GIFs or videos like this right into your project!

Other things to note. When you use the treemap and donut visualizations, Analysis Workspace may switch the placements and colors when one number increases over the other, so watch out for that. Another general “gotcha” I have found with this approach is that you have to pre-select the items you want in your time-lapse. It would be cool if there were a way to have the Adobe Analytics time-lapses be like the first market cap one shown in which new values can appear and disappear based upon data changes, but I have not yet found a way to do that. If you can find a way, let me know!

Adobe Analytics

Quick Tip: Grouping Items in Analysis Workspace

Recently, I was working with a client who needed to group a bunch of dimension items for an analysis. While this would seem like an easy thing to do in Analysis Workspace, it isn’t as easy as you’d think. Therefore, I am going to share a tip I used to help this client in case it helps any of you out there.

Scenario & Options

Let’s imagine that you have a situation in which a dimension (eVar) has thousands of values and you want to see a grouping of, say, 25 of those values. The 25 could be based upon the text values of the dimension items, or the grouping could be completely random. There are several ways to do this in Adobe Analytics:

Option 1 – SAINT Classifications

One option is to have your Adobe Analytics team use SAINT to classify the values you want into one specific value. If you do that, you can then use the classification value in your reports and that one value will encompass all 25 items that you care about. Unfortunately, this may require work to be done by another team, and at large organizations this can take a while or require approvals.

Option 2 – Manually Build Segment

The second option is to manually build a segment that has the 25 values that you need. Once you have a segment, you can easily see metrics for that segment, which serves as an aggregation of the 25 values you desire. This can be done using the operators available in the segment builder like this:

This method works best if you can use text values to narrow down the values you want. For example, you might want all blog posts that “contain” the phrase “workspace” to be included. Unfortunately, the contains function can produce some false positives that you might not want included in the segment. It also doesn’t allow you to easily pick one-offs that you might want to include. Therefore, this option is good, but not perfect.

Option 3 – Fallout

Another option for grouping dimension items is the fallout report. You can create a blank fallout report that looks like this:

Next, you can use the dimension chevron in the left navigation to view the dimension items and search for the items you want:

Unfortunately, you can only do a basic text search here, which is why I don’t love this option. But if you can isolate the items you want, you can multi-select them and drag them as a group onto the fallout report:

Lastly, you can right-click to build a segment from the grouping you just added to the fallout:

Option 4 – Dimension Filtering

The fourth approach is the one that I tend to use most often. There are great filtering capabilities available for dimensions in Freeform tables in Analysis Workspace. This filtering can be leveraged to aggregate the exact values you want. This approach provides the benefits of options #2 & 3, but offers a bit more flexibility.

To demonstrate, let’s continue with the example that I want to pick a bunch of dimension items (in this example, blog post names) and see aggregate numbers for that grouping of items. To start, I can add the dimension to a Freeform table with Occurrences as the metric to see all values:

Next, you can use the filter function in the dimension header to open the advanced filter option:

From here, you can add any criteria that will help narrow down the items. At this step, be sure to err on the side of including too many values so you don’t accidentally exclude dimension items that you might want. In this case, let’s filter for blog posts written by me (Adam) and that contain the words “workspace,” “eVar,” “training” or “campaign:”

This produces a bunch of dimension items:

If I am happy with all of these items, I can select all of them and right-click to build a segment for these specific items:

However, I could have done that with option #2 above in the segment builder. The advantage of this option is that I now have the ability to hand-pick the items that I want to group. In this case, I can manually select the items I want and again right-click to create a new segment:

After clicking, you will be taken to the segment builder with the selected items added to the segment for you:

After saving the segment, you can use it anywhere in Adobe Analytics. For example, you can view any metrics you want for that grouping of dimension items:

You could also use the new segment to create a “derived” calculated metric:

Summary

Above are a few different options for seeing groupings of items in Adobe Analytics Analysis Workspace. You may end up using all of these options in different situations, but if SAINT is not an option, consider the other ways to create segments for groupings of dimension items.

Adobe Analytics

Daily Unique Visitors in Analysis Workspace

Recently, one of the members of our Adobe Analytics Expert Council (AAEC) was lamenting that in the [old] Reports interface of Adobe Analytics, there is a Daily Unique Visitors metric, but that this metric is not available in Analysis Workspace. In the Reports interface, you can add Unique Visitors as a metric, which de-duplicates unique visitors for the currently selected date range, but you can also add Daily Unique Visitors which provides a sum of daily unique visitors for all of the dates in the selected timeframe. Unfortunately, in Analysis Workspace, you can only see the former (Unique Visitors) and there may be times that you want to see daily unique visitors for dimension values. As I have demonstrated in the past, I am on a mission to be able to do 100% of what could be done in the Reports interface in Analysis Workspace, so in this post, I will share a workaround that will allow you to add daily unique visitors to your Analysis Workspace projects.

Daily Uniques in Old Interface

To begin, let’s look at how daily unique visitors works in the old interface. Let’s imagine that you want to see unique visitors and daily unique visitors for a dimension (eVar) in your implementation. For example, let’s look at these metrics for my blog posts:

In this report, I am looking at just one day of data, so the two columns of data are exactly the same. But if I change the date range to be the last seven days, these metrics will start to diverge. The amount of divergence will depend on how often your site has return visitors:

In this case, for the week there have been 129 unique visitors who have viewed my UTM Campaign blog post, but that number rises to 135 if you count unique visitors on a daily basis. If you wanted to view this exact report in Analysis Workspace, you would think that it is as simple as clicking the “Try in Workspace” button shown above. But doing this does the following:

As you can see, the daily unique visitor column is stripped out because Analysis Workspace doesn’t have a notion of daily unique visitors.

Creating Daily Unique Visitors in Analysis Workspace

In order to re-create the old interface seven-day report that has both unique visitors and daily unique visitors in Analysis Workspace, we will have to do a bit of calculated metric and segment gymnastics. While the process is a bit manual, the good news is that it can be set up once and re-used with any dimension in the future.

To start, you have to choose a timeframe for which you’d like to view daily unique visitors. To keep things simple, I am going to assume that I want to see daily unique visitors for seven days, which means I need seven rolling date ranges. Adobe already provides date ranges for Yesterday and Two Days Ago, so in this case, I have created the ones for 3-7 days ago.

Each of the new date ranges will be set up as a rolling date range that is X days prior to today and will look similar to this:
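
As a rough sketch, the “3 Days Ago” range uses rolling dates so that both the start and the end are anchored three days before the current day (a one-day window that moves forward automatically each day); the naming and offsets below are illustrative:

Start: current day minus 3 days
End: current day minus 3 days

The remaining ranges are identical except for the offset (4, 5, 6, and 7 days).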

Once you have the seven individual date ranges created, you can create a segment for each. Each segment will simply contain the corresponding date range and be constructed like this:

When you are done, you will have these segments:

Once you have the required date range segments, the final step is to create a new calculated metric that includes a sum of all seven days. This calculated metric will use unique visitors as the metric, but will sum a segmented version of unique visitors for each day. Here is what the seven-day formula would look like:
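
Roughly speaking, the calculated metric is just seven segmented copies of Unique Visitors added together, where each bracketed term below is Unique Visitors with the corresponding date-range segment applied:

Daily Unique Visitors (7 Days) =
Unique Visitors [Yesterday]
+ Unique Visitors [2 Days Ago]
+ Unique Visitors [3 Days Ago]
+ Unique Visitors [4 Days Ago]
+ Unique Visitors [5 Days Ago]
+ Unique Visitors [6 Days Ago]
+ Unique Visitors [7 Days Ago]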

Using Daily Unique Visitors in Analysis Workspace

So now that we have our seven-day daily unique visitors calculated metric, let’s see how we can use it to re-create the reports from the old interface. As a refresher, here is what the report looked like in the old interface when we looked at seven days of data:

Now, let’s add our new calculated metric to the same report in Analysis Workspace:

As you can see, we have successfully duplicated the daily unique visitor metric in Analysis Workspace!

Of course, this process is more manual than I’d like. If you want to see daily unique visitors for thirty days, you would have to create thirty rolling date ranges and thirty segments (and a long calculated metric!), but the good news is that once you have done this, you can use the calculated metric in any dimension report. For example, here is a daily unique visitor report for pages for the last seven days from the old interface:

Here is the same report in Analysis Workspace using our new seven-day daily unique visitor calculated metric:

In case it is helpful, here is a video of how I got from the report in the old interface to the one in the new interface:

Adobe Analytics

Path Breakdowns and Breakdown Trends

In my last post, I shared how you could build segments from website paths using the flow visualization in Analysis Workspace. This was done by right-clicking on a specific path and choosing the “segment” option. In this post, I’d like to share another cool thing you can do by right-clicking within flow visualization paths – path breakdowns. Once you have created a flow report, you can pick a specific flow branch and right-click to see a host of options. In this case, we will use the “breakdown” option that is directly below the “segment” option we used in the last post as shown here:

Once you select the breakdown option, it acts like any other breakdown in that you choose whether you want to break down the path by dimension, metric, segment or time as shown here:

To illustrate this functionality, I will break down the path flow of people going from the home page to my blog index page by the type of device they are using as shown here:

This will produce a table that shows the device type breakdown for the specific flow:

This table shows me that most people are viewing this flow from non-mobile devices. Keep in mind that I could have broken down this flow by any dimension, segment, etc. depending upon the business question I was trying to answer.

From here, I might want to see how this particular flow breakdown is trending over time. Is the percentage of desktop vs. mobile phone vs. tablet pretty consistent or does it vary over time? I can view this by visualizing the data in the newly created table by right-clicking again and choosing a chart type:

In this case, I chose the stacked area chart which shows me my path flow by device type trended by month…

Summary

If you have specific path flows that you want to dig deeper into, using path flow breakdowns is an easy way to see how path flows differ by dimension or segment. And if you want to see the path flow breakdowns over time, you can add visualizations to the resulting data…

Adobe Analytics

Segmenting on Paths in Flow Visualization

Recently, Adobe updated the Analysis Workspace Flow visualization to offer more flexibility, especially when it comes to repeat instances. Many Adobe Analytics users have wanted to abandon the pathing reports in the old interface and rely solely on Flow in Analysis Workspace, but were held back by the fact that repeat instances of values passed into Adobe Analytics would appear multiple times consecutively. This made using Flow difficult at times, but now Adobe has changed Flow visualizations so that repeat instances are disabled by default:

This makes Flow much more useful when it comes to pages and any other dimension for which you want to see sequences of values.

Another thing that this change enables is easier segmentation on paths. There may be times when you would like to view a specific path flow visitors are taking and build a segment for that path to learn more about that cohort of visitors in other Adobe Analytics reports. This is possible by simply right-clicking on any path in the Flow visualization. For example, if I wanted to look at how visitors were navigating from the Analytics Demystified home page to our education page (highlighting some of my upcoming training classes), I can build a simple flow report like this:

From here, all I have to do is right-click on the path I want and Adobe will automatically create a sequential segment for me:

Once I am in the segment builder, I can name it and save it. If I want, I can also tweak it a bit. For example, if I want to see visitors who eventually found the education page within the visit, I can change the sequential segment like this:

Once the segment is saved, I can apply it to any other report in Adobe Analytics. For example, if I wanted to see which companies followed that path, I can use my Demandbase eVar to see the specific companies that might be interested in my education classes (hidden here to preserve their privacy!):

As you can see, creating segments on paths is pretty simple, but can be powerful. I suggest that you pick a few of your Adobe Analytics dimensions, add them to a Flow visualization and then try creating some segment paths.

Adobe Analytics

Setting Metric Targets in Analysis Workspace


One of the lesser-known features in the old Adobe Analytics interface was Targets. Targets allowed you to upload numbers that you expected to hit (per metric) so that you could compare your actual results with your target results. While this feature still exists in the old interface, it doesn’t translate to Analysis Workspace.

Therefore, if you want to compare your current data to a target, you have very limited options. One option is to upload your target to a new Success Event via Data Sources, but since Adobe won’t let you upload Data Sources data for future dates (please vote for this idea to change this!), you can only view targets up to the current date and you have to upload a file every day (which doesn’t sound like much fun!). The other option is to use Adobe’s ReportBuilder Excel add-on. Within Excel, you can create a data block that grabs any metric and then you can manually enter your targets in the spreadsheet and compare them in charts and graphs.

But what if you want to view your targets in Analysis Workspace? That is where you are likely spending all of your time. In this post, I will show you one method, albeit a hack, that will allow you to see metric targets in Workspace that might hold you over until Adobe [finally] allows you to upload Data Sources data into the future or provides another way to do targets in Analysis Workspace.

Targets in Analysis Workspace

The first step to seeing targets in Workspace is to use Data Sources. For each metric to which you want to add a target, you will need to enable a new numeric Success Event in the admin console. In this example, I will set a target for Blog Post Views, so I will create a new Success Event like this:

Next, you will use Data Sources to import your target, but with a twist. Since you cannot upload Data Sources data into the future, you are going to import your target by day for a time period in the past (e.g., one year prior to the current year). For example, if you want to see a target for Blog Post Views for Jan-Feb 2019, you could upload the targets for those months (by day) using the dates 1/1/18 – 2/28/18 (or another year in the past). I know this sounds strange, but I will explain why later. Your Data Sources setup might look like this:

In this case, I want to be able to see targets by Blog Post Author, so I have also added an eVar to the Data Sources upload. Here is what the upload file would look like for 2019 Jan-Feb Blog Post View targets for the author “Adam Greco”:
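
As a rough sketch, a Data Sources file along these lines is tab-delimited with one row per (past) day; the exact column headers come from the template the Data Sources wizard generates, and the headers and target numbers below are purely illustrative:

Date          Blog Post Author (eVar)    Blog Post View Target (Event)
01/01/2018    Adam Greco                 250
01/02/2018    Adam Greco                 250
…
02/28/2018    Adam Greco                 250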

Once you have uploaded the target data, you will have the target numbers you need, but they will each be tied to dates in the past, in this case exactly one year prior:

Next, we want to compare 2019 Blog Post Views to this target. To do this, we will create a freeform table that contains Blog Post Views for this year (I will use Jan-Feb) and narrow them down to “Adam Greco” blog posts using a segment, since that is what our target is based upon:

Next, we are going to add our target to this table, but right after we do that, we are going to use the Date Ranges feature to create a date range for our target timeframe (in the past), which in this case includes the dates of Jan-Feb 2018, where we have uploaded our Data Sources data. As you may recall, when you use Date Ranges in Analysis Workspace, they supersede whatever dates are selected in the Analysis Workspace panel, so this will allow us to see our Target data directly next to our actual 2019 data as shown here:

Next, let’s view our data by week instead of day to make it a bit easier to view and then let’s add a chart to compare the data:

Finally, let’s tidy it up a bit by locking the chart, removing some backgrounds and percentages in the table and renaming the legend in the chart:

When you are done, you will have a report that looks like this:

Now you can see how you are doing for your metric against its stated target. Unfortunately, Workspace shows the dates in the column when you use Date Ranges, so some people might get confused about why it has 2018 dates, but that is beyond my control! You can also hide the table itself to avoid this issue.

Once you have this data, you can manipulate it however you’d like. For example, you can view it by month instead of by week:

Cumulative Function

As if that weren’t cool enough, you can also use the Cumulative Function to see your actual vs. target progress over time. This is my favorite view of the data! To do this, you will create two new Metrics. One will be a cumulative count of your actual metric and the other will be a cumulative count of your target. These will use your main Success Event and your Data Sources Success event respectively. The Metric formulas are shown here:
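
Conceptually, each of the two metrics is just a running sum over the weeks in the report, one built on the real Success Event and one built on the Data Sources target event (metric names here are illustrative):

Cumulative Blog Post Views (through week n) = Blog Post Views (week 1) + … + Blog Post Views (week n)
Cumulative Blog Post View Target (through week n) = Target (week 1) + … + Target (week n)

In the calculated metric builder, the Cumulative function handles the summing for each of these.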

Once you have created these, you can duplicate your table above, add these metrics and then add a chart as shown here:

When you are done, you will have a report that looks like this that shows how you are doing over time against your Target:

Final Thoughts

So that is my “hack” way to add targets to Analysis Workspace. Again, if Adobe would provide the ability to upload Data Sources data into the future, much of this would be unnecessary, but that is the state of things today. While this seems like a lot of work, it is not too bad, especially if you bear in mind that you should only be setting targets for your most important metrics and you only have to do this once a year. However, keep in mind that Data Sources only lets you upload ninety days of data at a time, so you will have to do multiple Data Sources uploads for each metric.

I hope this helps as a temporary solution…

Adobe Analytics

Fallout Funnels With Date Ranges

Recently, I was working with a client to explain how panels work in Analysis Workspace. Panels in Workspace allow you to embed data visualizations and each panel can have its own segments/dimension filters and/or date ranges. I often use different panels when I want to apply different segments or date ranges to groupings of data visualizations.

For example, let’s say that you have a Fallout report that you want to see for two different weeks. Here is a screenshot of two different panels, within one project, viewed side-by-side that contain different date ranges:

In this case, it looks like it is a normal Workspace project, but I have resized two distinct panels and put them next to each other so I could use different date ranges for each fallout. While this works, it takes additional time and project real estate.

Therefore, I want to show an alternative method of seeing the same data within one Workspace project panel. This method involves using custom date ranges. In Workspace, you can create any custom date ranges you need. These date ranges can be fixed or rolling depending upon your analysis needs. In this case, since I want to see two consecutive weeks, I would create two new [fixed] date ranges like this:

Once I have created these date ranges, I can drag them into a fallout report that contains the dates of both date ranges. In this case, the combined date range is April 8 – April 21, 2018. Below you will see me dragging the newly created date ranges into the fallout visualization (using the hidden drop zone!) that has the dates from both weeks:

When this is done, the final fallout report looks like this:

You can see that the fallout percentages are exactly the same as the ones shown in the side-by-side panels above. But this version uses only one visualization and takes up less space. I also think this is a slightly better way to visualize the differences for each fallout step instead of having to compare them side-by-side.

Just a quick tip to streamline your fallout reports when comparing date ranges…

Adobe Analytics

Training on Analysis Workspace (Part 2)

In last week’s post, I shared some of the areas of Analysis Workspace that confused the students in classes I taught on the product. Most of those issues had to do with some larger implications of the product (e.g., having an eVar and an sProp dimension for the same thing). Many of the things I mentioned in that post would require Adobe to make some key product changes to address, but the goal of that post was really to help you navigate some potentially tricky items if you are doing training.

In this post, I’d like to focus more on the actual user experience of Workspace itself. These are things that, with my limited experience in product design, seem like items that Adobe could address more easily. Again, I will add the caveat that I am the furthest thing there is from a designer and I don’t purport to know of better ways to create user interfaces. But, what I do know is which things in the UX of Workspace my students could not find or figure out, even after having been shown multiple times. If users can’t find features or easily figure out how to use them, that is a problem, and Workspace is notorious for “hiding” some of the coolest aspects of the product. My hunch is that these features are hidden to reduce clutter, but as I will demonstrate below, in some cases, this reduction of clutter results in confusion and lack of feature usage. Again, this is not a critique of the people making Workspace, which I have already stated I think is amazing, but rather just me being a messenger of things that I saw cause confusion during my training classes in case you are training co-workers internally.

The Hidden Easter Eggs That Are Workspace

As I mentioned, some of the greatest stuff in Analysis Workspace is hidden or not super obvious to users. In Freeform tables, right-clicking opens up many great options that casual users don’t know about. While the 1980s gamer in me loves the Easter egg aspect of Workspace, especially when I can show someone a new feature they didn’t know about, I can tell you that after training new folks on the product, they did not think it was as cool as I did! So the first part of this post will cover all of the “hidden” stuff that frustrated my students.

Hidden chevron in dimensions

A frequent task when using Workspace is going to the left navigation to view your dimensions (eVars and sProps) in order to find the values that have been collected within each dimension. For example, if you want to see a flow from a specific page, you would look for the page dimension in the left navigation to see its values and then drag the desired page over to the flow visualization. However, when doing exercises, most of my students could not figure out how to find the dimension values. Typically, when they looked at the left navigation and saw the dimensions (like Page), they got stuck. I told them that they needed to hover over the dimension, and only then would they magically see a chevron which would allow them to expand and see the resulting values as shown here:

Soon after, they would forget that the chevron was there and I had to keep reminding them of this. Eventually, I began referring to this as the “hidden chevron” to jog their memory. They didn’t understand why the chevron couldn’t always be there as a reminder that there is more stuff to be found underneath it. I also had many students thinking that they were supposed to double-click on the dimension to expose its underlying values (which did nothing but select and deselect the dimension in the left navigation). So be on the lookout for this potential confusion from your users as well, and you may want to just save time and introduce it as the “hidden chevron” from the start…

Hidden items in visualization header

When I began teaching students that they could copy, edit, duplicate and get links to a visualization by right-clicking, they were excited. However, they soon realized that knowing exactly where to right-click in the header of the visualization was hit or miss.

Eventually, they got it, but they often asked me why there wasn’t a gear icon for the visualization since almost every other thing in Workspace had a gear icon!

While on the topic of the visualization header, let’s discuss the “copy to clipboard” option. Many of my students assumed that this would be an easy way for them to copy the visualization and paste it into a PowerPoint slide to show others in a meeting. Unfortunately, here is what happens when you copy and paste using the Copy to Clipboard option:

It might be handy to have a copy visualization image option here in addition to copying the actual data.

Additionally, some super handy things in the header of the chart visualization include the ability to “lock” the chart to table data and/or to show/hide table data. Unfortunately, both of these options are found in a [very] tiny little dot at the top-left of the visualization as shown here:

While they would eventually learn this, I can’t tell you how many times I was asked: “where is the place that I lock data and hide the table?” Again, I am not sure why these options can’t be part of the gear icon that already exists for charts, but I just mention that you may have to tell your students a few times about the stuff hidden in the chart dots.

Hidden items in Freeform table columns

Freeform tables are often the most popular Workspace visualization. Like Excel spreadsheets, they allow you to see data in a tabular format. In Workspace Freeform tables, there is a way to customize the columns by hovering over the column header and clicking the gear icon. This was another “hidden” feature that users saw me demonstrate, but later could not find. They also could not figure out how to close the window that opened when they clicked the gear icon since there is no “X” there, so I had to tell them that they just had to click away from the box somewhere else. You can see both the hidden gear icon and the lack of a way to close the window here:

Similarly, changing the sort column in Freeform tables requires the user to know to hover their mouse in the exact right place (next to column total metric). Most folks thought that clicking the column heading would sort (as in the old “Reports” UI), but instead, they had to learn to hover in the correct spot to sort…

For both of these items (gear and sort), I assume that the icons are hidden to make the table look cleaner. However, I wonder if there might be a way to have an “edit” mode when building a project that displays all of the icons like there was an edit mode for dashboards in the older interface. Perhaps give users the option of which view they prefer and then people can have the best of both worlds?

Hidden drop zones

One of the coolest parts of Analysis Workspace is that you can drag and drop components all kinds of places and tweak your data. For example, you can drag segments or dimension values into Freeform table columns and in other visualizations. Unfortunately, there are some places that you can drop items that are so well hidden that many users don’t discover them or remember after they have been trained.

One example of this is the Fallout visualization. In this visualization, you can drag segment or dimension values to the top of the report and see the same fallout segmented as shown here:

The only problem is that there is nothing telling you that you can drop things there. I am not sure why there aren’t blank segment/dimension drop zone boxes there like there are for other visualizations (i.e. Flow, Cohort, etc.).

Similarly, in the Flow visualization, users need to know that they can drop a dimension value on top of another to replace it, but there isn’t any type of visual cue that this is possible. Also, if a user wants to add a second dimension to the Flow report, they have to know that there is another hidden drop zone to the right of the right-most column. You can see both of these here:

Don’t get me wrong, these are super-cool features, but I dare you to stand in front of a class of novice users and get them to find these and remember where they are two weeks later!

Other UX Items

Renaming Fallout Steps

When you create a fallout report, there are some cases in which the names of each fallout step can be very long. This can be due to long page names or having multiple items in each step. To remedy this, Workspace provides a way to rename each Fallout step. The weird thing here is that you only seem to be able to edit a Fallout step name if your mouse approaches it from above, coming in a downward direction. Double-clicking on the name, as my students tried to do, didn’t work. Here is a video of me trying to double-click and coming at the name from the bottom and the top:

Maybe I am just bad with my mouse, but I find it very difficult to get to the exact right spot to edit step names and my students did as well. My hunch is that there has to be a better way to let people rename steps…

Laptop Screen

I normally work on a huge monitor (three in fact!) when I am using Workspace. But when I began conducting training classes, I was on my laptop and my students were as well. I was amazed at how much harder some things in Workspace were when you were on a smaller screen. For example, as I began the class and asked my students to create their first project, they could not figure out how to do it. I couldn’t for the life of me figure out why they couldn’t do something so simple. Then I went over to their laptops and realized that the blue button they needed to click on the templates screen was below the fold, so they were not seeing it. They had to know to scroll down to see the CREATE button they needed to click. You can see this here:

I had never seen that on my large monitor, but suddenly got it and was prepared for that in subsequent classes. I wonder if there should be a blue button at the top of the screen as well?

Another example of this was when I taught students how to use functions in the Calculated Metric builder. Students kept telling me that they didn’t have any of the functions, and eventually I realized that the functions sit so low in the left navigation on a laptop that students weren’t seeing them, as shown here:

There were more cases like this that popped up during the training, and it made me wonder whether those designing the Workspace interface spend as much time using the tool on laptops as they do on large monitors.

Default Options

The last item I want to discuss is the concept of project default options. When you create a lot of Workspace projects, you tend to come up with your own little preferences on how you’d like to set them up. For me, I always begin a project by using Project – Info & Settings to make the project “compact,” and whenever I add pathing-related visualizations (e.g., Flow and Fallout), I tend to use Visit instead of Visitor. It would be great if I could tell Workspace that when I create a new project, I want these to be the defaults instead of having to update them each time. I am sure there are other items I’d like to make the default (e.g., color scheme) as well…

Summary

Once again, I’d like to stress that I love Analysis Workspace and am not a designer. My intention for sharing this information is to alert those who may be doing training of things that they might want to know about before they get the same types of questions I did. At some point, students/users have to just learn where things are and memorize it, but the above items might represent opportunities for Adobe to help everyone to more easily find and use the amazing features in the Workspace product.

Adobe Analytics

Training on Analysis Workspace (Part 1)

Analysis Workspace, the dynamic reporting interface for the Adobe Experience Cloud, is truly an amazing piece of technology. Over the past few years, Adobe has made tremendous strides in reporting by miraculously moving the functionality from the Ad-Hoc app to a similar, but web-based experience. You know the technology is good when, given a choice between the old “Reports” interface and the “Workspace” interface, most users voluntarily migrate over to the new one (even though some Ad-Hoc users are still not happy about Ad-Hoc being sunset).

Currently, I am in the midst of training approximately five hundred people on Analysis Workspace for a client. When you conduct training classes on a product, you gain a new perspective on it. This includes being amazed by its cool parts and frustrated by its bad parts. You never really know a product until you have to stand up in front of thirty people, two times a day, five days a week and try to teach them how to use it. Since I have done a lot of training, when conducting a class, I can easily see in people’s faces when they get something and when they don’t. I also make mental notes of things that generate the most questions on a consistent basis. As a power user myself, there are many things in Workspace I take for granted that are confusing for those newer to the product and those casual users who may only interact with it a few times per week.

Therefore, the following will share things I observed while training people on Workspace in case it helps you when training your team. The items that my students found confusing might also confuse your co-workers, so think of this as a way to know where potential land mines might be so you can anticipate them. This is by no means meant to be critical of Adobe and the Workspace product, but rather simply an accounting of areas my students struggled with.

So what follows is a laundry list of the things that I noticed confused my students most about Workspace. These were the times I was explaining something and I either got questions, saw confused looks on faces, or had students unable to easily complete hands-on exercises. While I don’t have time in this post to document how I explained each item, the list below is meant to highlight things that you may encounter.

Difference between segments and dimension values

There are many cases in Workspace where you can use segments or specific values of a dimension to narrow down the data you are analyzing. For example, you can apply both to the drop zone at the top of a project panel or within columns of a freeform table. Unfortunately, it isn’t easy to explain to novices that using a Segment can narrow down data by Visitors, Visits, or Hits, but that filtering using a dimension value is limited to applying a Hit filter (unless you edit it and turn it into a segment). This is especially true when using dimension values to create cross-tabs or within Venn visualizations.

Difference between using dimension and dimension values

There are some cases where your users may be confused about whether they should drag over the dimension or the dimension values (using the hidden dimension chevron; more on this next week!). For example, when you use a Flow visualization, even though it is labeled in the boxes, I found many users were confused about the fact that they could only drag the dimension to the Entry/Exit box, but could drag either a dimension or a dimension value to the middle box.

It was also confusing to them that dragging over the dimension picks a dimension value for them, which then has to be replaced with the actual dimension value they want by dropping it on top of the focus value of the flow (with no type of indicator letting them know that it was even possible to drop a page value onto the existing page value).

Pathing visualizations with Visit or Visitor radio button

While on the subject of the Flow pathing visualization, I also received numerous questions about whether they should use the Visit or Visitor option (radio button) for Flow and Fallout. If using Visitor, does it matter if the same visitor repeated steps or took slightly different paths in their second/third visit? Why does Workspace default to Visitor? Is there a way to make Visit the default for all new visualizations?

Count Instances

In the Project – Info & Settings, there is an option to count instances or not. This was very confusing to people and it would be good if Adobe could provide more context around this and what it impacts.

I also tried to avoid conversations about “Repeat Instances” in Flow reports for my novice users since I learned early on that this concept was only really understood by more experienced users.

Default Occurrences Metric

If you make a new freeform table in Workspace and start by dragging over dimensions before metrics, the table defaults to the Occurrences metric.

As you can imagine, explaining what an Occurrence is versus a Page View can be tricky for people who only casually use Adobe Analytics. While I use it as a good interview question, everyday users can find this confusing, so it may be better to default new tables to Page Views or Visits. I also recommended that my students always add metrics to tables before dimensions to minimize seeing the Occurrences metric.

Time Dimensions/Components

When I was doing exercises with students, I would often ask them to make freeform tables and show me the data by day, week or month. So, they would use the left navigation to search and see this:

At this point, they would ask me whether they should use the “orange” version of the week or the “purple” version of the week. This led to a discussion about how Time components alter the dates of the project and usually was a downward spiral from there for novice users! One thing you can do is to use Workspace project curation and share a project that has fewer items to limit what novice users will see.

Viewing the same dimension as eVar and sProp

While this is not really a Workspace issue, I often got questions about cases in which doing a search in the left navigation produced multiple versions of a dimension:

Putting aside the awkward conversation about eVar Instances and the Entry/Exit versions of sProps, this forces a conversation about when to use the eVar version and when to use the sProp version that very few novice users will understand. This is why I encourage my customers to remove as many of their sProps as they can to avoid this confusion. In the “Reports” interface, eVars and sProps are segregated, but in Workspace, they can be seen next to each other in many situations. Again, you can use Workspace project curation and share a project that has fewer items to limit what novice users will see.

Filtering & Column Totals

When I explained how users could use the text filter box in the column header of Freeform tables (which they can only see if they hover) to choose the specific values they want to see, users didn’t understand why the column totals didn’t change to reflect the newly filtered values.

They expected that the column totals and the row percentages would change based upon what was filtered. When I explained that this could be done through segmentation versus filtering, I got a lot of head scratches. Perhaps one day, Workspace will add a feature that lets users choose whether filters should impact column totals and row percentages.

Temporary Segments & Metrics

There are a few places in Workspace where you can quickly create segments or new metrics. The former can be done in the segment area at the top of each panel and the latter can be done by right-clicking on columns. Unfortunately, in both of these cases, the segments/metrics created are “temporary” such that they don’t appear in the left navigation or in other projects (unless you edit them and choose the “make public” option). I am sure this feature was added to reduce clutter in the left navigation component area, but as a trainer, it is hard to explain why you should create segments/metrics one way if you want them to be “public” and another way if they are “temporary.” I am not sure of the solution here, but I will tell you that your users may be confused about this as well.

Fallout with Success Events vs. Calculated Metric with Success Events

When doing classes, I often asked students to show me the conversion rate between two Success Events. For example, there might be a Leads Started event and a Leads Completed event and I wanted them to tell me what percent of Leads Started were Completed. To me, this was an exercise to have them show me that they knew how to create a new Calculated Metric. However, I was surprised that on multiple occasions students chose to answer this question by creating a Fallout report that used these two Success Events instead. Unfortunately, the resulting conversion rate metric will not be the same since the Fallout report is a count of Unique Visitors and the Calculated Metric divides the actual Success Event numbers. Sometimes they were close, but I got questions about why the numbers were different. This is just an education thing, but be prepared for it.
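To make the difference concrete, here is a quick illustration with made-up numbers: suppose 1,000 Leads Started events were fired by 800 Unique Visitors, and 400 Leads Completed events were fired by 380 of those visitors. The Calculated Metric returns 400 ÷ 1,000 = 40%, while the Fallout step returns 380 ÷ 800 = 47.5%. Both are “conversion rates,” but they answer slightly different questions.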

Derived Metrics & Cohorts

One of the things I love the most about Adobe Analytics is the ability to create derived metrics. These are Calculated Metrics that have a segment or dimension value associated with them. When I explained Cohort tables to students, they thought they were cool, especially when I showed them how to apply segments to Cohorts. Unfortunately, you cannot use Calculated Metrics (of which derived metrics are a subset) in Cohort reports. Upon learning this, my students astutely pointed out that you can add a metric and a segment to a Cohort table, but you cannot add a derived metric, which is just a metric and a segment combined. They didn’t understand why that would be the case. I am sure there is a valid reason for this, but I just wanted to highlight it as another question you may receive.

Segmentation and Containers

Ever since segmentation was created in Adobe Analytics, it has been something that confused novice users. Since you can add so much to a segment and use so many operators, it can be overwhelming. Teaching segmentation is typically the hardest part of classes on Adobe Analytics and since it is front and center in Workspace, it is unavoidable.

One particularly confusing area is the topic of containers within segments. Most people can [eventually] understand why they need containers when different operators are being used, but what I found in my classes is that understanding that each container can be set to Visitor, Visit, or Hit level can push novice users over the edge! If users add a container, it defaults to a “Hit” container which can produce no data in certain situations like this:

Summary

To summarize, the above items are ones that I found generated the most questions and confusion consistently across many classes with students of varying degrees of experience with Adobe Analytics. When these types of questions arise, you will have to decide if you want to tackle them and, if so, how deep you want to go. For now, I just wanted to share my experience as something to consider before you train your employees on Workspace. In next week’s post, I will outline some of the Workspace UX/Design things that my students struggled with in classes.

Adobe Analytics

Values – First/Last/Exit

One of the concepts in Adobe Analytics that confuses my customers is the notion that each sProp has a normal value, an entry value, and an exit value. When using Analysis Workspace, you might see something like this in the left navigation when filtering:

As you can imagine, this could freak out novice users. More often than not, when I ask users “What do the Entry and Exit version of sProp X represent,” I hear this:

“The Entry version of the sProp is only counted when the sProp is sent a value on the first page of the visit and the Exit version is only counted when the sProp is set on the last page of the visit…”

That seems logical, but unfortunately, it is wrong! In reality, the Entry version of the sProp simply stores the first value that is passed to the sProp in a visit and the Exit version stores the last value that is passed to the sProp in a visit. Instead of Entry and Exit, Adobe should really call these First and Last values of the sProp (but that is probably not high on their list!). If a visit contains only one value, then that value would be the same in the Entry version, the normal version and the Exit version of the sProp. But if the sProp contains several values in the visit, one will be designated as the first (entry) and one will be the last (exit). Here is Adobe’s explanation in the documentation:

However, the larger question is: why the heck does Adobe even store all of these extra values? How can you use them? These Entry and Exit values are typically used in pathing-related reports, but in this post, I will share some other ways to take advantage of these extra sProp values (despite the confusion they can create).

Example: Internal Search Keyword Analysis

Let’s imagine that you have a site that has a lot of internal searches and keyword activity. You are trying to determine which keywords are doing well and which are not. While you may already be tracking the internal search click-through rates, internal search placement and average internal search position clicked, in this scenario, you want to see how often each internal search keyword used was both the first one searched and the last one searched and what the exit rate was for each keyword. This can all be done using the aforementioned derivatives of the internal search keyword sProp.

To start, let’s create a table that captures the top five internal search keywords (FYI: for an sProp, Occurrences is the same as an Internal Searches success event):

Next, let’s breakdown the top keyword by the Entry version of the sProp to see how often the most popular keyword was also the entry keyword:

Here we can see that 68.5% of the time, the top keyword searched was also the entry (first) keyword. Next, we’d like to isolate the 68.5% and use it as a metric, so I created a new calculated metric that pulls it into its own column. This is done by dividing Occurrences by the column total using a calculated metric function:
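As a rough sketch of that metric (with illustrative numbers, and the exact function name may differ slightly in your Calculated Metric builder), the formula is simply Occurrences ÷ Column Sum (Occurrences). So if the keyword’s breakdown row had 685 Occurrences against a column total of 1,000, the metric would return 68.5%.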

When saved and added to the table, it looks like this:

Next, I am going to create a summary number based upon the cell that contains the 68.5%:

Then I am going to repeat all of these steps for the Exit Search term so I have an additional table that looks like this:

In this case, our most popular internal search keyword was also the last keyword used 87% of the time so I will add that as another summary number (I collapsed the first table so you could see it more easily):

Next, I want to see how often the keyword is used and then visitors exit the site on the search results page (similar to what I described in this old post). I do this by creating a new calculated metric that quantifies how often the search results page is the exit page:

Then I can add this to my table and create another calculated metric that divides Search Page Exits by Occurrences:

Here I can see that the top search keyword is an exit 34.6% of the time. Again, I create a summary number so I have all three at the top of my Workspace project:

Build For Scale

So all of that was pretty cool! In one row, I can see the keyword’s first use, last use, and exit %. However, there is one problem. All of this is hard-coded to my top internal search keyword. That is not very scalable. What if I want to see the same numbers for any internal search keyword?

To make this a bit better, the next step is to pick a bunch of internal search keywords and drag them to the segment area, using the shift key to make them a picklist:

Once you do this, you can pick one of your keywords and all of the tables will focus on that keyword like this:

Even better, now that we are narrowing down to just one keyword, we can lock in the Exit Keyword % Summary Number since it will always be the top-right cell:

Now, we can simply change the drop-down value and all of our numbers should re-adjust as shown here:

This works by default because many times the chosen keyword will also be the first and last keyword, so the highlighting of the top-right % in each table works and updates the summary numbers. However, that will not always be the case. Sometimes, the most popular first/last keyword will not be the same as the chosen keyword itself (Note: You can vote for my idea to make cells reference other cells in Analysis Workspace like you can in Excel!). In that case, you may have to manually select the First and Last keyword to see the correct summary numbers as shown here:

Therefore, I have finished this dashboard by adding a text box that explains this caveat and the potential need for manual adjustment:

Summary

As stated at the beginning of this post, understanding the “Entry” and “Exit” versions of sProps can be a bit confusing. But once you understand the concept, you can identify ways to leverage them to do additional analysis. In this post, I covered a way to utilize the First and Last sProp values to quantify the percent of the time the same internal search keyword was used first and last. This concept can be applied to any sProp, not just internal search keywords. Anytime you want to compare values stored in sProps with the first and last entries received, you can try this out.

Adobe Analytics

Using Query Strings to Test Adobe Analytics Data

Have you ever wanted to run a specific scenario in Adobe Analytics, but couldn’t find the exact page flow or variable combination in your own data? This happens to me often. I want to view visitors going down a specific path or setting a specific eVar, but even after spending a lot of time building granular segments, I still can’t mock up or test the scenario I want. If I could isolate the right traffic, I could test out specific website paths, test to see if eVars are behaving as I’d expect and so on. While there are a few “techy” ways to do this with JavaScript, if you are not super-technical (like me), this post will show how you can do this yourself with no tagging required.

Query String & Processing Rule

One way you can mock-up or test the data you want is to use query strings and an Adobe Analytics processing rule. Query strings are parameters appended to page URLs, and you can set them to whatever you want. Adobe Analytics processing rules allow you to set Adobe Analytics variables using rules instead of JavaScript code. When you combine the two, you can pick and choose the data you want and later isolate it in Adobe Analytics with a simple segment.

To start, you will want to come up with a query string parameter that you will use for the times you want to make your own data. In this case, I will use the query string of “?qa_test=” in the URL. For example, if I want to count an instance of the Analytics Demystified home page in my data set, I would make the URL look like this:

https://analyticsdemystified.com?qa_test=XXX

Next, you can set up a processing rule that will look for this query string and pass anything found AFTER the equals sign to an Adobe Analytics variable. In this case, I have created a new sProp called QA Test [p11]. Here is what the processing rule using the query string and this new sProp looks like:
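Since the screenshot doesn’t translate well to text, here is a rough sketch of what that rule effectively does, expressed as plain JavaScript (processing rules are configured in the Admin Console rather than in code, so the function below is purely illustrative):

function applyQaTestRule(pageUrl, hit) {
  // If the "qa_test" query string parameter is present, copy its value into the QA Test sProp (prop11)
  var qaValue = new URL(pageUrl).searchParams.get('qa_test');
  if (qaValue) {
    hit.prop11 = qaValue;
  }
  return hit;
}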

Once this processing rule is active, any URL that has a “?qa_test=” query string will pass the value to the sProp which means that I can run as many test scenarios as I want. To demonstrate this, let’s go through an example. Let’s say that I want to view a specific website path. The path I want is one in which a visitor enters on the home page, views a blog post of mine talking about my Sydney Australia class and then exits on the Eventbrite link to register for the class.

To start, I would copy and paste the URL of my home page and append the appropriate query string (“EntryHomePage:AustraliaPost:LinkOutEventbrite” in this case) like this:

Next, I would repeat this for the URL of the Australia blog post, making sure to append the same query string parameter and value:

Lastly, I would click the link to Eventbrite, which would count as an exit link (so it doesn’t need the query string parameter):

In this scenario, we have sent two hits into Adobe Analytics and then had one exit link. Depending upon how you build your segment later (Hit vs. Visit), it doesn’t even matter if you click on other pages in between the ones you care about. If you later use a Visit segment, it will include all hits, but if you use a Hit segment, it will only include the ones you have designated (more on that later). If you want to test that your hits are coming through, you can use the Real-Time reports to view your QA Test sProp values in near real-time (Note: Processing Rule data will not appear in the Experience Cloud Debugger):

Using Your Data

Once you have passed the data you need, the next step is to build a segment that isolates this data. As mentioned above, a Hit segment will show only the pages that had the query string parameter, but a Visit/Visitor segment container will bring back other pages viewed in the same session or by the same browser. In this case, let’s use a Hit container so we only see the specific data we intentionally added. To do this, you simply make a Hit segment with the sProp11 equal to the value you placed in the query string:

Here we can see that there is 1 Unique Visitor, 1 Visit and 2 Page Views for the segment. This looks correct, so we can save it and begin applying it to reports. With the segment applied, we can check out the results. In this case, I will add a Pages and Exit Links table to see if the data looks as I expect (which it does):

Obviously, this is a very simple example, but it still illustrates that you can use a query string and processing rule to isolate whatever traffic you want.

Advanced Uses

If you want to get a bit more sophisticated with this approach and you don’t want to spend your life setting query strings on each page, another way to use this concept is to simply begin a visit with a query string and then use an “entry” segment to include all data taking place after the initial query string. To do this, I suggest that you begin by clearing your cookies or use a private or incognito browser window. Once you have that, paste in the entry page URL with the desired query string like this:

Once you have done this, you can surf the site however you want to generate the traffic you want to see in your reports. Watching the sequence below, you can see that I have chosen to view the Demystified home page, then a page detailing some training days we are doing in Chicago later this year, then a page about the AAEC, then a page showing when Demystified folks are speaking at events and then back to the home page.

Once you have completed this, you can create a new segment that looks for Visits that began with the chosen query string parameter. This can be done a few ways, such as using a URL variable or the Entry version of the QA Test sProp as shown here (note that the query string doesn’t have to be on the first page of the visit):

When that segment is applied, you will see the pages and paths that took place accordingly:

Test New Flow Instances

This concept can also be used to test out specific Adobe Analytics features. For example, let’s pretend that you don’t trust that Adobe has really addressed the “repeated instances” issue in Flow visualizations described in this post. To test this, you can use the entry query string concept again to model a specific path. In this case, I am using “?qa_test=EntryPageTest2” on the first URL of the session and, in the session, I am visiting a few pages on the Demystified website. You will notice in the page sequence below that I am purposely refreshing two pages in my session (Adobe Analytics Expert Council and Adobe Analytics Audit):

Once that session has processed, I can create a new segment that looks for pages where the entry value in my QA Test sProp equals “EntryPageTest2” per the description above. Next, I can apply this segment to a Flow Visualization. In the video below, notice that I am first looking at the path flow where repeat instances are disabled. In that case, I see the pages in the order I viewed them and the page refreshes don’t appear. But once I change the setting to include the repeat instances (as was always the case prior to last week’s Adobe release), I can once again see the same page repeated three times for the AAEC page and two times for the Audit page.

 

Therefore, using the query string parameter, I can do some very detailed tests and make sure that everything in Adobe Analytics is working as I would expect.

Summary

As you can see, this technique can be used whenever you want to be prescriptive about the data you are viewing in Adobe Analytics. And since it requires no tagging, anyone [who has Admin rights] can do it. I have found this especially useful when I want to test out the differences between Hit, Visit, and Visitor segments, and for testing segments in general. The above has mainly shown how the technique can be applied to pathing/flow sequences, but it can also be used to test out any type of Adobe Analytics tagging.

Adobe Analytics

Once Per Visit Success Events

Recently, while working with a new client, I noticed something that I have seen a few clients do related to Once per Visit Success Events. These clients set a bunch of Success Events with event serialization set to Once Per Visit as seen here in the Admin Console:

In these situations, the client tends to have the same Success Event unserialized in a different variable number. For example, they might have event10 set to Form Completions and then event61 set to Form Completions Once per Visit.

So why would a company do this? In most cases, the company wants to have a count of cases in which at least one instance of the event took place within the session. While there are some good reasons to use Once per Visit event serialization, in most cases, I feel that duplicating a large swath of your Success Events in order to have a Once per Visit version is unnecessary. Doing this adds more data points to your implementation, causing the need for more QA and potentially confusing your end-users. In this post, I will share an alternative method of accomplishing the same goal with less tagging and work in general.

Derived Metrics Alternative

As I have discussed in previous blog posts, it is easy to use the Calculated Metric builder to make brand new “derived” metrics in Adobe Analytics. In many cases, this is done by adding a segment to another metric in a way that makes it a subset of the metric. As such, derived metrics can be used instead of duplicating your Success Events and making a Once per Visit version for each. To illustrate this, I will use an example with my blog.

In this scenario, let’s imagine that I have a need to view a metric that shows how many website visits contained a blog post view. I already have a Success Event that fires whenever a blog post view occurs, so I can see how many blog post views occur each day, but that is not what is desired in this case. Instead, I want to see how many visits contained a blog post, so using the method described above, I could create a second Success Event that fires each time a blog post view occurs and serialize it as Once per Visit. This second Success Event would only count once regardless of how many blog post views take place in the session. If I compare this new Success Event to the raw Blog Post Success Event metric, I might see something like this:

Here you can see that the serialized version is less than the raw version, with the difference representing visitors who viewed multiple blog posts per visit.

But as mentioned earlier, this required creating a second Success Event which I don’t really want to do. I can get the same data without any additional tagging work by leveraging derived metrics in Adobe Analytics. In this example, I will start by building a segment that looks for visits in which a blog post view existed:

Next, I will add this new segment to a new Calculated Metric along with Visits as shown here:

Now I have a new derived metric that counts visits in which a blog post took place. If I add this new metric to the table shown above, I see the following:

As you can see, the new derived metric is exactly the same as the Once per Visit Success Event, but any user can create it with no technical tagging or additional QA needed! Sometimes, less is more! You can create as many of these derived metrics as you need and share them with your users as needed.

Caveats

There is one important thing to remember when considering whether to set additional Once per Visit Success Events or use derived metrics. Derived metrics are a form of Calculated Metrics and Calculated Metrics cannot be used everywhere within Adobe Analytics. For example, Calculated Metrics cannot be used in segments (e.g., Calculated Metric X is > 50), cohort analyses, histograms, Data Warehouse, etc. Therefore, it is important for you to think about how you will use the metrics before deciding whether to make them actual Success Events or derive them via Calculated Metrics. My advice is to begin with the derived metric approach and see if you run into any limitations and, only then, create new Once per Visit Success Events for metrics that need it.

Adobe Analytics

Analysis Workspace Flow Reports Without Repeating Instances!

Yesterday, Adobe released a new update to the Analysis Workspace Flow visualization that [finally] has the long-awaited “no repeat instances” feature. The lack of this feature is something that has prevented many Adobe Analytics users from abandoning the [very] old pathing reports in the old interface. In the old interface pathing reports, if a visitor had a sProp that received the same value multiple times in a row, the path reports would ignore the repeat values and only create a new path branch when a new value was received. When the Analysis Workspace Flow visualization was introduced, it came with the ability to view paths for sProps and eVars, which was super exciting, but when people started using the new Flow reports they would see the same value over and over again like this:

This was due to the fact that the Flow visualization was based upon persisting values instead of instances, so the initial Flow visualization was like taking one step forward and another step back. As of yesterday, Flow reports are based upon actual instances of values being passed, and a new checkbox gives you the ability to show or hide those repeat instances. The duplicative values would appear for a number of reasons:

  1. eVar persistence;
  2. Page refreshes;
  3. Visitors getting an eVar value, then going to another page and then coming back to a page where the same eVar value was set (in the example above, the blog post title is set each time the blog post is viewed);
  4. Visitors having sessions time out and then restarting the session on a page that passes the same value.

This problem was exacerbated if you chose to have your flow based upon “Visitor” instead of “Visit” since the visitor could return the next day and receive the same values. The end result was that people like me would continue to use sProps for pathing to avoid the messiness shown above since it isn’t fun explaining the inner workings of Adobe Analytics “Instances” to business stakeholders!

However, with yesterday’s release, you now have the ability in the settings panel of the Flow visualization to toggle off repeat instances:

When you uncheck the repeat instances setting, the above report will look like this:

In this view, only different values will create new flow branches, as is the case in the old pathing reports. But since you can use the Flow visualization with eVars and sProps, you no longer need to rely on the pathing reports of the old Reports interface.

In case you are curious, I also tested what happens with a sProp. In this case, I store the blog post title in both a sProp and an eVar, so I can easily see the flow visualization for the sProp version. As you can see here, it is identical to the eVar:

The same is true for the version that hides repeat instances:

 

Use Cases

So how can you take advantage of this new Flow visualization feature? As I stated, the most obvious use case is to cut back on any sProps you were using simply for pathing purposes. As I have mentioned in the past, casual users of Adobe Analytics can easily get confused when there are multiple versions of the same variable since they don’t really understand the differences between an eVar and a sProp (nor should they!). For example, if you are tracking internal search terms in an eVar, but want to see the order in which search terms are used, you can now do both with the eVar instead of having to create a redundant Internal Search Term sProp.

Other use cases might include:

  • Ability to view Marketing Channels used including and not including cases where the same marketing channel was used in succession
  • Ability to see which top navigation links are used including and not including cases where the same nav link was clicked in succession
  • Ability to view clicks on links on the home page including and not including cases where the same link was clicked in succession

Fine Print

There are a few cases called out in the documentation for which it is not possible to use this new “no repeat instances” functionality. Those cases involve variables that have multiple values such as list eVars, list sProps, the Products variable and merchandising eVars:

This makes sense since there is a lot going on with those special variables, but if you use them in the Flow visualization, the new “repeat instances” option will be grayed out indicating that it cannot be used:

BTW, if you try to beat the system and add a multi-valued dimension to an existing flow report, you will get the following warning (Ben, Jen & Trevor think of everything!):

Summary

Overall, this new Flow visualization feature will make a lot of people’s lives easier and I encourage you to check it out. If you want to learn more about it, you can check out Jen Lasser’s YouTube video about it. Enjoy!

Adobe Analytics

Real-Time Campaign Activity

If you work at an organization that does a lot of digital marketing campaigns, there may be occasions in which you or your stakeholders want to see activity in real-time. In this post, I will demonstrate how to do this in Adobe Analytics.

Campaign Tracking – eVar

If you are using digital marketing campaigns, you should already be tracking marketing campaign codes in the Adobe Analytics Campaigns variable. This involves passing some sort of string that represents each different advertising spot down to the most granular level. These codes can be numeric or you can structure them in a way that makes sense to your organization. I use the UTM method that Google has made a common standard.

However, even if you are tracking your campaigns correctly in the Campaigns variable, there is another step you need to take in order to view people accessing the campaign codes in real-time. Adobe Analytics real-time reports work best with Success Events and sProps. The Campaigns variable in Adobe Analytics is an eVar and eVars don’t work super-well with real-time reports because they persist. If you attempt to select the Tracking Code (Campaigns) eVar from within the real-time report configurator, you will see this:

This scary warning is basically telling you that using an eVar might not work. Here is what the real-time report looks like if you ignore the warning:

As you can see, that doesn’t produce the results you want. Therefore, once you have your campaign codes in the Tracking Code (Campaigns) variable, there is one more step you need to take to view them in real-time.

Campaign Tracking – sProp

Since real-time reports work better with sProps, the next step is to copy your campaign tracking code values to a sProp. This can be done via JavaScript, TMS or a processing rule. Here is a simple Processing Rule that I created to copy the values over to a sProp:
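In plain terms, the rule boils down to the pseudo-logic below (the rule itself lives in the Admin Console, and the prop number here is just an illustrative placeholder):

// Illustrative pseudo-logic only
if (hit.campaign) {          // Tracking Code (Campaigns) was set on the hit
  hit.prop12 = hit.campaign; // copy the campaign code into the sProp
}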

After giving the data some time to collect, you can open the new sProp report to verify that the values are copying over correctly:

Next, you can go back to the real-time report configurator and choose to base your report on this new sProp dimension. Once you save and refresh, you will see your campaign codes stream in as visitors hit your site:

 

Filtering

One last tip related to using the real-time reports. In this campaign code scenario, you may find cases in which the real-time report contains codes that are from older campaigns that don’t apply to your current analysis. For example, you might want to see how the various “exp-cloud-aud” campaign codes are performing, but there are others appearing as well:

Luckily, there is an easy way to filter the real-time report values to focus on the exact ones you want (assuming you have named them in a way conducive to text filtering). This can be done by adding a text filter in the search box as shown here:

Summary

While I am not often a huge fan of “real-time” data, there may be some cases in which you want to see how different advertising units are performing quickly so you can make some adjustments to hit your conversion targets. This simple method allows you to easily see how campaign codes are being used in real-time. Lastly, if you use processing rules for your marketing channels, you can follow the same approach to see marketing channel usage in real-time as well.

Adobe Analytics

Sharing Experience Cloud Audiences – Part 2

In my last blog post, I showed an example of how you can create a segment in Adobe Analytics, push it to Adobe Target and then use Adobe Target to show personalized content on your website. It was a relatively basic example but showed how you could begin to leverage the power of the Adobe Experience Cloud audience sharing feature. In this post, I will build upon what was done in the last post and show some additional ways you can integrate Adobe Analytics and Adobe Target.

Sharing Converse Segment

In the last blog post, the scenario involved showing a cross-sell promo spot when visitors meet a specific segment criterion, which in that case was viewing two or more “Adam Greco” blog posts. We built a segment looking for those visitors and sent it to Adobe Target as a shared audience and then showed a cross-sell promo to the audience. But what if we wanted to show something different to the visitors that didn’t meet the targeting criteria? In that case, we could have default content or we could use a “converse” (opposite) segment to push something different to visitors who didn’t meet our segment criteria.

To illustrate this, let’s look at the segment we used to identify those who had viewed 2+ of my blog posts, but not viewed my services pages:

Now, if we want to show a different promotion to those who don’t meet this criterion, we can create a “converse” segment that is essentially the opposite of this segment as shown here:

To test that we have our segments set up correctly, we can build a table and make sure that the numbers look correct:

If you create your “converse” segments correctly, you should see the numbers in the right two columns add up to the first column, which they do in this case. Of course, you can create different segments and show different promos to each segment as needed, but in this simple example, I just want to show one promo to people who match the first segment shown above and another promo to those who don’t. Once both segments have been pushed to Adobe Target, the appropriate content can be pushed to the page using the “mbox” shown in my previous post.

In this case, I have decided to push a promo for my cool Adobe Analytics Expert Council (which you should probably apply for if you are reading this!). All those who aren’t targeted to learn about my consulting services will see this as the fallback option.

 

Track Promo Clicks and Impact

Another way to build upon this scenario is to track the use of internal promotions in Adobe Analytics. For example, when visitors click on one of the new promo spots being served by Adobe Target and shared audiences, you can set a click Success Event and also capture an internal tracking code in an eVar. The Success Event will tell you how many times visitors are engaging with the new targeted promo spots and the internal campaign eVar will tell you which ones were clicked and whether any other website conversion events took place after the internal promo was used.
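If you are setting this with on-page code or a tag manager, a custom link call along these lines is one way to capture both pieces; note that the event number, eVar number, and selector below are made up for illustration:

// Assumes the standard AppMeasurement "s" object is already available on the page
document.querySelector('.promo-spot a').addEventListener('click', function() {
  s.linkTrackVars = 'events,eVar20';
  s.linkTrackEvents = 'event25';
  s.events = 'event25';          // Internal Campaign Clicks Success Event (illustrative number)
  s.eVar20 = 'aaec-right-rail';  // internal campaign code (illustrative value)
  s.tl(true, 'o', 'Internal Promo Click');
});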

Here is an example of an internal campaign clicks Success Event:

Here is an example of those internal campaign clicks broken down by internal campaign in the eVar report:

This report allows you to see which of these new promos is getting the most clicks (seeing impressions and click-through rates for each promo is more involved and described here). It is relatively easy to see how often each promo leads to website Success Events since their values persist in the eVar. For example, in the screenshot shown above, when visitors click on the AAEC promo, I am setting a Success Event on the click and an internal campaign code in an eVar, and if the visitor clicks the “Apply” button on the post, I am setting another Success Event. Therefore, I can view how many clicks the AAEC promo gets and how many AAEC Applications I get as a result:

In this example, we can see that the AAEC promo got twenty-five clicks and that four of them resulted in people beginning the application process (and there was one case of someone applying for the AAEC without using the promo). If I wanted to get more advanced, I could have multiple versions of the AAEC promo, use Adobe Target to randomly show each and use different internal campaign codes to see which version had the best conversion rate.

Summary

As you can see, the combination of Experience Cloud shared audiences, Adobe Analytics, and Adobe Target can be very powerful. There are countless ways to leverage the synergies between the products, and these are only a few of the products in Adobe’s suite! I recommend that you start experimenting with ways you can combine the Adobe products to improve your website/app conversion.

Adobe Analytics

Sydney Adobe Analytics “Top Gun” Class!

UPDATE: Sydney “Top Gun” class is now sold out!

For several years, I have wanted to get back to Australia. It is one of my favorite places and I haven’t been in a LONG time. I have never offered my advanced Adobe Analytics “Top Gun” class in Australia, but this year is the year! I am conducting my Adobe Analytics “Top Gun” Class on June 26th in Sydney. This is the day before the Adobe 2019 Sydney Symposium held on June 27-28, so people who have to travel can attend both events as part of the same trip! This will probably be the only time I offer this class in the region, so I encourage you to take advantage of it! Seats are limited, so I suggest you register early!

Here is a link to register for the class: https://www.eventbrite.com/e/analytics-demystified-adobe-analytics-top-gun-training-sydney-2019-tickets-54764631487

For those of you unfamiliar with my Adobe Analytics “Top Gun” class, it is a one-day crash course on how Adobe Analytics works behind the scenes based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate everyday business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects of Adobe Analytics, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Here are some quotes from recent London class attendees:

Again, here is a link to register for the class: https://www.eventbrite.com/e/analytics-demystified-adobe-analytics-top-gun-training-sydney-2019-tickets-54764631487

Please e-mail me if you have any questions.  Thanks!

Adobe Analytics

Sharing Experience Cloud Audiences

One of the advantages that Adobe Analytics offers over other digital analytics tools is that it is part of a suite of products. Analytics integrates with AEM, Adobe Target, and other Adobe Experience Cloud products. Adobe has been transitioning more and more of its features to the “core” level so users can share things between Adobe Experience Cloud products. One of the most interesting things that can be shared is audiences (segments). However, I have not seen many of my customers take advantage of these types of integrations. So in this post, I am going to share a simple example of sharing audiences in the Adobe Experience Cloud using Adobe Analytics and Adobe Target that my partner Brian Hawkins and I created as an experiment. While the example we use is very simplistic, it does a good job of demonstrating how easy it is to share audiences/segments between the various Adobe products.

Scenario

Since our Analytics Demystified website doesn’t have much other than blog posts, the best scenario we could come up with was to promote our B2B services through internal promotions. The idea is to find website visitors who have viewed a bunch of our blog posts and see if we can get them to engage with our consulting services. In reality, that isn’t why we write blog posts and we don’t expect people to actually click on the promotion, but this is just a demo scenario. In this scenario, I will be the guinea pig for the integration and look for people who have viewed at least two of my blog posts but never viewed any of the website pages that explain my consulting services. Once I isolate these folks, I want to target them with a promo that advertises my services.

Adobe Analytics

To implement this, you need to start in Adobe Analytics and make sure you have data being collected that will help you isolate the appropriate website visitors. In this case, since I want to identify visitors who have viewed “Adam Greco” blog posts, I need to have a way to identify different blog posts (Blog Post Title) and the author of each blog post (Blog Post Author). I already have these set up as eVars in my implementation, so I am set there. Next, I need a way to identify each page separately, which I do by using the Page sProp.

With all of these elements in place, the next step is to build a segment in Adobe Analytics. The segment I want is one that includes visitors that have viewed “Adam Greco” author blog posts and viewed two or more different blog posts (this uses the new Distinct Count segmentation feature I blogged about last week). I also have an “exclude” portion of the segment to take out visitors who have viewed some pages that promote me and my services. Once I am happy with the segment, I can use the checkbox at the bottom of the segment to make it a shared Experience Cloud audience.

Adobe Target

Once the segment has been shared and propagates to the Experience Cloud (which can take a few hours), it is time to set up the promotional area on the website using Adobe Target. This is done by leveraging our “global mbox” and the URL of the pages where we wish to have the content displayed. We chose the right-rail of all blog pages:

Next, within Adobe Target, you can set up a test and target it to the audience (segment) that was created in Adobe Analytics (Called “Adam Greco Consideration, But No Intent”):

Next, you can set up a goal in Adobe Target to monitor the progress:

Once this test is live, Adobe Analytics will continuously update the segments as visitors traverse the site and Adobe Target will push the promotion as dictated by the segment. For example, if a user has not met the segment criteria (viewed less than two Adam blog posts or has viewed Adam services pages), they would see a normal blog post page like this:

 

But if the visitor matches the segment, they would be targeted with the right-rail promo as highlighted here:

We are also able to validate that we are in the test using this free Adobe Target Chrome Extension from MiaProva:

Summary

As mentioned above, this is just a silly example of how you can take advantage of Experience Cloud integrations. However, the concept here is the most important part. The better your Adobe Analytics implementation, the more opportunities you have to build cool segments that can be turned into audiences in other Experience Cloud products! I encourage you to look for situations in which you can leverage the synergistic effects offered by using multiple Adobe Experience Cloud products concurrently.

Featured, Testing and Optimization

Profile Playbook for Adobe Target

Adobe Target Profile Playbook

This blog post provides a very thorough overview of what Adobe Target’s profile is and how it works.  Additionally, we’ve included 10 profile scripts that you can start using immediately in your Adobe Target account. 

We also want to share a helpful tool that will allow you to see the Adobe Target Profile in action.  This Chrome Extension allows Adobe Target users to visualize, edit, and add profile attributes to your Adobe Target ID or your 1st Party Organization’s ID.  Here is a video that shows it in action and if you want to read about all the free Adobe Target features in the extension, please check out this blog post.  

THE PROFILE

The Adobe Target profile is the most valuable component of the Adobe Target platform. Without this profile, Adobe Target would be a relatively simple A/B testing solution. This profile allows organizations to take their optimization program to levels not normally achievable with typical testing tools. The profile and the profiling capabilities allow organizations to define attributes for visitors for targeting and segmenting purposes. These attributes are independent of any tests and essentially create audiences that can be managed automatically.

As a general example, let’s say an organization decided to build an audience of purchasers.  

Within the Adobe Target user interface, users can create profile attributes based off of any data that gets passed to Target. When someone makes a purchase, the URL might contain something like “thank-you.html”.

URLs, among other things, are automatically passed to Adobe Target. So within Target, under Audiences and then Profile Scripts, a Target user can say “IF the URL CONTAINS ‘thank-you’, set the purchaser attribute to TRUE.”

Once saved, anytime a visitor sees a URL that contains ‘thank-you’, they will automatically attain the profile attribute of ‘purchaser’ with a value of ‘true’. This audience will continue to grow automatically, and if you had a test targeted to purchasers, visitors who purchased would automatically be included in that test.
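For reference, the profile script behind a ‘purchaser’ attribute like this would look roughly like the snippet below (the URL fragment and return value are simply the ones from this example):

if (page.url.indexOf('thank-you') > -1) {
  return 'true';
}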

Audiences like purchasers can be based off of any event, offline or online, when data is communicated to Adobe Target. The Adobe Target profile is immediate in that Adobe’s infrastructure updates and evaluates the profile before returning test content. This allows newly created audiences to be used IMMEDIATELY on that first impression.

The image below outlines what happens when calls are made from your digital properties to the global edge network of Adobe Target.  Here you can see just how important the profile is as it is the first thing that gets called when Adobe receives a network request.  

The profile is much more than this simple example of creating an audience.  The Adobe Target Profile is:

  • The backbone of Adobe Target: all test or activity participation is stored as visitor profile attributes in Adobe Target. In the image below, you can see our Analytics Demystified home page and, on the right, the MiaProva Chrome Extension highlighting four tests that I am in on this page plus a test that my Visitor ID is associated with in another location. Tests and test experiences are just attributes of the unique visitor ID.

  • Independent of any single activity or test: This profile and all attributes associated with it are not limited to any single test or group of tests and can be used interchangeably across any test type in Adobe Target.
  • An OPEN ID for custom audience creation: The profile and its attributes map directly to the Adobe Target visitor ID, and this ID can be shared, coupled, and joined with other systems and IDs. Before there was A4T, for example, you could push your Adobe Target Visitor ID to an eVar, create audiences in Analytics and then target a test to the Target IDs that mapped to the data in Analytics. This ID is automatically set and can easily be shared with other systems internally or externally.
  • Empowerment of 1st, 2nd, and 3rd party data: The profile allows audiences to be created and managed in Adobe Target. The audiences can be constructed from 1st party data (an organization’s own data), 2nd party data (Adobe Analytics/Target, Google Analytics, etc.), or 3rd party data (Audience Manager, DemandBase, etc.). The profile allows you to consolidate data sources and use them interchangeably, giving you the ability to test out strategies without the limitations that data sources typically have.

  • Cross-Device test coordination: Adobe Target has a special reserved parameter name called ‘mbox3rdPartyId’ (more on that below), but essentially this is YOUR organization’s visitor ID. If you pass this ID to Adobe Target, any and all profile attributes are then mapped to that ID. This means that if this ID is seen again on another device or browser, the same profile and test experiences follow the visitor, allowing tests to be coordinated across devices.
  • Exportable client-side dynamically: Profile attributes can be used in the offers served in tests or activities, and they can be used as Response Tokens (more on Response Tokens later). To the right is our Chrome Extension, where the boxed area “Adobe Target Geo Metadata” shows profile attributes (profile tokens) injected into the Chrome Extension via Target.

 

Here is what the offer looks like in Target:

 

<div class="id_target">
  <h2>Adobe Target Geo Metadata</h2>
  <h3>City: ${user.city}<br>
  State: ${user.state}<br>
  Country: ${user.country}<br>
  Zip: ${user.zip}<br>
  DMA: ${user.dma}<br>
  Latitude: ${profile.geolocation.latitude}<br>
  Longitude: ${profile.geolocation.longitude}<br>
  ISP Name: ${user.ispName}<br>
  Connection Speed: ${user.connectionSpeed}<br>
  IP Address: ${user.ipaddress}</h3>
</div><br>

<div class="id_map">
  <iframe allowfullscreen frameborder="0" height="250" width="425" style="border:0" src="https://www.google.com/maps/embed/v1/search?key=AIzaSyAxhzWd0cY7k-l4EYkzzzEjwRIdtsNKaIk&q=${user.city},${user.state},${user.country}"></iframe>
</div>

The ${...} tokens are profile attributes that I have in my Adobe Target account.

When you use them in Adobe Target offers, they are called tokens, and these tokens are dynamically replaced by Target with the values of the profile attributes. You can even see that I am also passing Adobe Target profile attributes to the Google Maps embed service to return a map based on what Adobe considers to be my geolocation.

 

 

  • How Automated Personalization does its magic: Automated Personalization is one of Adobe’s Activity types that uses propensity scoring and models to decide what content to present to individuals. Without passing any data to Adobe Target, Automated Personalization uses what data it does see, by way of the mbox or Adobe Target tags, to learn what content works well with which visitors. To get more value out of Automated Personalization, an organization typically passes additional data to Adobe Target for the models to use for content decisions. Any and all data supplied to Sensei or Automated Personalization, beyond the data that Adobe Target collects automatically, is supplied in the form of profile attributes. Similarly, the data that you see in the Insights and Segments reports of Automated Personalization comes from profile attributes (an example report is shown in the image below).

  • The mechanism by which organizations can make use of their internal models: Because the Adobe Target profile and its attributes are all mapped to the Adobe Target ID or your organizational ID, you can import any offline scoring that your organization may be doing. Several organizations are doing this and seeing considerable value. The profile makes it easy to have that data sitting there waiting for the digital consumer to be seen again, so Target can automatically respond with the content the model or strategy calls for.

HOW TO CREATE PROFILES

The beautiful part of the Adobe Target Profile is that it is created automatically as soon as digital consumers come in contact with Adobe Target.  This is the case no matter how you use Adobe Target (client-side, server-side, SDK, etc…). When we want to leverage the profile’s ability to define audiences, we are not creating profiles as much as we are creating profile attributes that are mapped or associated with a Profile which is directly mapped to Adobe Target’s ID or your organization’s ID.  

There are three main ways to create profile attributes, and no matter which method you use, the attributes all function exactly the same way within Adobe Target. The three ways that Adobe Target users can create profile attributes are by way of the mbox (passing parameter values as profile parameters), within the Adobe Target user interface, and programmatically via an API.

Client-Side

This is going to be the most popular and easiest way to get profile attributes into your Adobe Target account.  For those of you that have sound data layers or have rich data in your tag management system, you are going to love this approach. When Adobe Target is implemented, you can configure data to be passed to the call that is made to Adobe when a visitor consumes your content.  This data can be from your data layer, cookies, your tag management, or third-party services that are called.

The image below is from the MiaProva Chrome Extension and highlights the data being passed to Adobe when Adobe Target is called.  The call that is made by Adobe Target to Adobe is often referred to as a mbox call (mbox being short for marketing box). The data being passed along are called mbox parameters.  

If you look at #3 in the image, that is an mbox parameter, but because it starts with the “profile.” prefix, it becomes a profile attribute that is immediately associated with the IDs at #1 (your organizational ID) and #2 (Adobe Target’s visitor ID).

The important thing to note is that you are limited to 50 profile attributes per mbox or call to Adobe Target.
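For example, with at.js and the global mbox, one way to pass these is via the targetPageParams function; the attribute names and data layer reference below are purely illustrative:

// Illustrative only: returns extra parameters for the global mbox call.
// Anything prefixed with "profile." becomes a profile attribute on the visitor.
window.targetPageParams = function() {
  return {
    'profile.loggedIn': (window.digitalData && window.digitalData.user) ? 'true' : 'false',
    'profile.memberTier': 'gold' // made-up example value
  };
};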

Server-side – within your Adobe Target account

The client-side approach will likely be your go-to method especially if you have investments in data layers and tag management.  That said, there is another great way to create these profile attributes right within your Adobe Target account.

This method is quite popular because it requires no change to your Adobe Target implementation, and anyone with Approver rights in your Target account can create these attributes.  I especially appreciate that it allows for processing, similar to Adobe I/O Runtime, to be done server-side.

This method can be intimidating, though, because it requires some scripting experience to really take advantage of all its benefits.  Essentially, you are creating logic based on the data Adobe Target receives, coupled with the values of any other profile attributes.

Here is a good example: let's say we want an audience of current customers, and we know that only customers see a URL that contains "myaccount.html".  When Adobe Target makes its call to Adobe, it passes along the URL. With this server-side approach, we want to say: if the URL contains "myaccount.html", set a profile attribute of customer equal to true.

Here is what that would look like in Target:

And the script used:

if (page.url.indexOf('myaccount.html') > -1) { return 'true'; }

Developers and people comfortable with scripting love this approach, but if you are not familiar with scripting, you can see how it might be intimidating.

After scripts like this are saved, they live in the “Visitor Profile Repository” and are a key component of the “Profile Processing” as seen in the image below.  Your Adobe Target account will process any and all of these scripts and update their values if warranted. This all happens before test content is returned so that you can use that profile and its values immediately on the first impression.  

To access this server-side configuration of Adobe Target profile attributes, simply click on Audiences in the top-navigation and then on Profile Scripts in the left navigation.  

10 Profile Templates:  The list below outlines 10 great profile scripts that you can use immediately in your Adobe Target account.  Once these scripts are saved, the audiences they create will immediately start to grow. These scripts are a great starting point and help you realize the potential of this approach.

 

  1. visitnumber: This profile attribute retains the current visit number of the visitor.

     if (user.sessionId != user.getLocal('lastSessionId')) {
       user.setLocal('lastSessionId', user.sessionId);
       return (user.get('visitnumber') | 0) + 1;
     }

  2. ip_address: This profile attribute associates the IP address with the visitor, enabling you to target activities to certain IP addresses.

     user.header('x-cluster-client-ip');

  3. purchasefrequency: This attribute increases with each purchase, as defined by impressions of the 'orderConfirmPage' mbox that typically exists on thank-you pages.

     if (mbox.name == 'orderConfirmPage') {
       return (user.get('purchasefrequency') | 0) + 1;
     }

  4. qa: One of my favorites, as it allows you to QA tests without having to repeat the entry conditions of the tests.  Simply use the letters "qa" as a query string parameter and this profile is set to true. A very popular attribute.

     if (page.param("qa")) {
       return "true";
     }

  5. day_of_visit: Day of the week.  Helpful in that it highlights the incorporation of standard JavaScript functions.

     if (mbox.name == "target-global-mbox") {
       var today = new Date().getDay();
       var days = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
       return (days[today]);
     }

  6. amountSpent: This attribute sums up the total revenue per visitor as they make multiple purchases over time.

     if (mbox.name == 'orderConfirmPage') {
       return (user.get('amountSpent') || 0) + parseInt(mbox.param('orderTotal'));
     }

  7. purchaseunits: This attribute sums up the number of items purchased by a visitor over time.

     if (mbox.name == 'orderConfirmPage') {
       var unitsPurchased;
       if (mbox.param('productPurchasedId').length === 0) {
         unitsPurchased = 0;
       } else {
         unitsPurchased = mbox.param('productPurchasedId').split(',').length;
       }
       return (user.get('purchaseunits') | 0) + unitsPurchased;
     }

  8. myaccount: This attribute simply sets a value of true based on the URL of the page. You can easily modify this script for any page that is important to you.

     if (page.url.indexOf('myaccount') > -1) {
       return 'true';
     }

  9. form_complete: A good example of using an mbox name and an mbox parameter to set an attribute.  I used this one for Marketo when a user submits a form. This creates a 'known' audience segment.

     if ((!user.get('marketo_mbox')) && (mbox.param('form') == 'completed')) {
       return 'true';
     }

  10. random_20_group: This script enables mutual exclusivity in your Adobe Target account by creating 20 mutually exclusive swim lanes.  Visitors are randomly assigned a group number, 1 through 20.

     if (!user.get('random_20_group')) {
       var ran_number = Math.floor(Math.random() * 99);
       if (ran_number <= 4) { return 'group1'; }
       else if (ran_number <= 9) { return 'group2'; }
       else if (ran_number <= 14) { return 'group3'; }
       else if (ran_number <= 19) { return 'group4'; }
       else if (ran_number <= 24) { return 'group5'; }
       else if (ran_number <= 29) { return 'group6'; }
       else if (ran_number <= 34) { return 'group7'; }
       else if (ran_number <= 39) { return 'group8'; }
       else if (ran_number <= 44) { return 'group9'; }
       else if (ran_number <= 49) { return 'group10'; }
       else if (ran_number <= 54) { return 'group11'; }
       else if (ran_number <= 59) { return 'group12'; }
       else if (ran_number <= 64) { return 'group13'; }
       else if (ran_number <= 69) { return 'group14'; }
       else if (ran_number <= 74) { return 'group15'; }
       else if (ran_number <= 79) { return 'group16'; }
       else if (ran_number <= 84) { return 'group17'; }
       else if (ran_number <= 89) { return 'group18'; }
       else if (ran_number <= 94) { return 'group19'; }
       else { return 'group20'; }
     }

API

The third approach that we highlight is by way of the API.  Many organizations leverage this approach because the data that they want as profile attributes is not available online, so passing it client-side is not an option.  Server-side scripting doesn't help either, because that data never reaches Adobe Target in the first place. Financial institutions and organizations that have conversion events offline typically use this approach.

Essentially, how this works is that you leverage Adobe's API to push data (profile attributes) to Adobe, keyed to your own visitor ID (mbox3rdPartyId) or to Adobe Target's ID.  The documentation on this approach can be found here: http://developers.adobetarget.com/api/#updating-profiles
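
As a rough sketch of what a single-profile update can look like, here is a call keyed to your own visitor ID, assuming the endpoint format described in the linked documentation. The client code, visitor ID, and attribute names below are placeholders for illustration only, and this would typically run from a server-side process.

// Illustrative only: push offline attributes to a single Target profile.
// Verify the exact endpoint and parameters against the documentation above.
var clientCode = "myclientcode";                      // placeholder client code
var url = "https://" + clientCode + ".tt.omtrdc.net/m2/" + clientCode + "/profile/update" +
  "?mbox3rdPartyId=crm-12345" +                       // your organization's visitor ID
  "&profile.creditRiskScore=low" +                    // hypothetical offline model output
  "&profile.propensityToBuy=0.82";                    // hypothetical offline model output

fetch(url).then(function (response) {
  if (!response.ok) { throw new Error("Profile update failed: " + response.status); }
});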

mbox3rdPartyId or thirdPartyId

This is one of the easiest things you can do with your Adobe Target account, and yet it is one of the most impactful things you can do for your optimization program.

The mbox3rdPartyId is a special parameter name that is used when you pass YOUR visitor ID to Adobe Target.  

The image to the right is the MiaProva Chrome Extension showing the data that is communicated to Adobe Target.  The highlighted value is the mbox3rdPartyId in action.

Here I am mirroring my ID with the Adobe ID.  This allows me to coordinate tests across devices: if a visitor is getting Experience B on one device, they will continue to get Experience B on any other device that carries this ID.

Any data that is available offline and keyed to this ID can be imported into Adobe Target via the API!  This further enables offline modeling and having targeting in place even before the digital consumer arrives on your digital properties.

If your digital property has a visitor ID that you manage, you most definitely want to integrate it into Adobe Target.
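
As a minimal sketch, again assuming an at.js implementation, passing your own ID looks like the snippet below. getLoggedInUserId is a hypothetical helper standing in for however your site exposes its visitor ID; you can return it alongside the profile parameters shown earlier in the same object.

// Illustrative only: tie your visitor ID to the Target profile by passing it
// as mbox3rdPartyId on the global mbox call.
window.targetPageParams = function () {
  var userId = getLoggedInUserId();   // hypothetical helper (e.g. reads your data layer)
  return userId ? { mbox3rdPartyId: userId } : {};
};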

Response Tokens

To allow organizations to easily make profile attributes and their values available to other systems, Adobe Target has Response Tokens.  Within your Adobe Target account, under "Setup" and then "Response Tokens" as seen in the image below, we can toggle Response Tokens (which are profile attributes) on or off.

When you turn the toggle to on, Adobe Target will push these profile attribute values back to the page or location where the Adobe Target call came from.  

This feature is how Adobe Target can integrate with third-party analytics tools such as Google Analytics.  It is also how the MiaProva Chrome Extension works: as part of that setup, we instruct users to toggle the attributes shown above on.

The image immediately below shows what the Adobe Target response looks like where I have a test running.  The first component (in green) is the offer that is changing the visitor's experience as part of the test.  The second component (in blue) shows the response tokens that have been turned on. This is a pretty cool way to easily get your profile attributes into your data layer or make them available to other tools such as ClickTale, internal data lakes, Heap, MiaProva, etc.
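
If you want to pick those tokens up on the page yourself, at.js raises a custom event when a Target request succeeds, and the response tokens ride along in the event payload. The sketch below is illustrative, assumes a reasonably recent at.js version, and uses a dataLayer push as just one hypothetical destination; verify the event payload for your library version.

// Illustrative only: copy response tokens into a dataLayer-style object so
// other tools can consume them.
document.addEventListener(adobe.target.event.REQUEST_SUCCEEDED, function (event) {
  var tokens = (event.detail && event.detail.responseTokens) || [];
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({ event: "targetResponseTokens", tokens: tokens });
});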

Expiration

A very important thing to note: by default, the Adobe Target profile lasts for 14 days of inactivity.  You can submit a ticket to Client Care to extend this lifetime; they can extend it to 12 to 18 weeks.  This is a rolling period based on inactivity, so if a visitor arrives on day 1 and does not return until day 85, the visitor ID and its attributes will already be gone if your profile expiration was set at 12 weeks (84 days).

If the visitor is seen at any point before the profile expires, Adobe Target pushes the expiration out by another full expiration period.

 

Adobe Analytics

Distinct Count in Segmentation

In last week’s Adobe Analytics release, a new feature was added within the segmentation area. This feature is called Distinct Count and allows you to build a segment based upon how many times an Adobe Analytics dimension value occurs. While the names are similar, this feature is very different from the Approximate Count Distinct function which allows you to add distinct counts to a Calculated Metric. In this post, I will describe the new Distinct Count segmentation feature and some ways that it can be used.

Segmenting on Counts – The Old Way

When doing analysis, there are often scenarios in which you want to build a segment of visitors or visits that have done X a certain number of times. For example, you may want to look at visitors who have viewed more than two products but never added anything to the shopping cart. Or you may want to identify visits in which visitors read more than three articles.

This has been somewhat possible in Adobe Analytics for some time, but building a segment to do this has always relied on using Metrics (Success Events). For example, if you want to build a segment to see how many visitors have viewed more than three blog posts, you might do this:

You could then use this segment as needed:

The key here is that you need to have a Success Event related to the thing that you want to count. This can be limiting because you might need to add extra Success Events to your implementation. But the larger issue with this approach is that a visitor can make it into the segment even if they viewed the same blog post three or more times because it is just a count of all blog post views. Therefore, the segment isn’t really telling you how many visitors viewed three or more distinct blog posts.

At this point, you might think, "well, that is what I use the Approximate Count Distinct function for…" but, as I mentioned earlier, that function is only useful for creating Calculated Metrics. As shown below, using the Approximate Count Distinct function tells you how many unique blog post titles were viewed each day or week and doesn't help you answer the question at hand (how many visitors viewed three or more blog posts).

Segmenting on Counts – The New Way

So you want to accurately report on how many visitors viewed three or more different blog posts and have realized that segmenting on a metric (Success Event) isn’t super-accurate and that the Approximate Count Distinct function doesn’t help either! Lucky for you, Adobe has now released a new Distinct Count feature within the Segmentation area that allows you to build segments on counts of dimension (eVar/sProp) values. Before last week’s release, when you added a dimension to the segment canvas, you would only see the following operator options:

But now, Adobe has added the following Distinct Count operators that can be used with any dimension:

This means that you can now segment on counts of any eVar/sProp value. In this case, you want to identify visitors that have viewed three or more different blog post titles. This can be done with the following segment:

The Results

Once you have created your segment, you can add it to a freeform table to see how many unique visitors viewed three or more blog posts:

In this case, over the selected time period, there have been about 2,100 visitors that have viewed three or more blog posts on my site and I can see the totals by day or week as shown above.

As a side note, if you did try to answer the question of how many visitors viewed three or more blog posts using the old method of segmenting on the Success Event counts (Blog Post Views >=3), you would see the following results:

Here you can see that the number is 3,610 vs. the correct number of 2,092. The former counts visitors who had three or more blog post views, but not necessarily three or more different blog posts. All of the visitors in the correct table would be included in the incorrect table, but the opposite wouldn't be true.

Again, this functionality can be done with any dimension, so the possibilities are endless. Here are some potential use cases:

  • View Visitors/Visits that viewed more than one product
  • View Visitors/Visits that used more than three internal search terms
  • Check potential fraud cases in which more than one login ID was used in a visit
  • Identify customers who are having a bad experience by seeing who had multiple different error messages in a session
  • Identify visitors who are coming from multiple marketing campaigns or campaign channels

To learn more about this new feature, check out Jen Lasser's release video.

Adobe Analytics, Reporting, Testing and Optimization

Guest Post: Test Confidence – a Calculated Metric for Analysis Workspace

Today I am happy to share a guest post from one of our “Team Demystified” superstars, Melody Walk! Melody has been with us for years and is part of Adam Greco’s Adobe Analytics Experts Council where she will be sharing this metric with other experts. We asked her to share more detail here and if you have questions you can write me directly and I will connect you with Melody.


It’s often helpful to use Adobe Analysis Workspace to analyze A/B test results, whether it’s because you’re using a hard-coded method of online testing or you want to supplement your testing tool results with more complex segmentation. In any case, Analysis Workspace can be a great tool for digging deeper into your test results. While Workspace makes calculating lift in conversion rate easy with the summary change visualization, it can be frustrating to repeatedly plug your data into a confidence calculator to determine if your test has reached statistical significance. The calculated metric I’m sharing in this post should help alleviate some of that frustration, as it will allow you to display statistical confidence within Analysis Workspace just as you would lift. This is extremely helpful if you have business stakeholders relying on your Workspace to regularly check in on the test results throughout the life of the test.

This calculated metric is based on the percent confidence formula for a two-tailed t-test. Below is the formula, formatted for the Adobe Calculated Metric Builder, and a screenshot of the builder summary.

The metric summary can be difficult to digest, so I’ve also included a screen shot of the metric builder definition at the end of this post. To create your confidence calculated metric you’ll need unique visitor counts and conversion rates for both the control experience (experience A) and the test experience (experience B). Once you’ve built the metric, you can edit it for all future tests by replacing your experience-specific segments and conversion rates, rather than starting from scratch each time. I recommend validating the metric the first several times you use it to confirm it’s working as expected. You can do so by checking your percent confidence against another calculator, such as the Target Complete Confidence Calculator.
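
For readers who want to sanity-check the numbers outside of Workspace, here is a rough sketch of the underlying two-proportion test math in plain JavaScript. It is a cross-check under standard z-test assumptions, not Melody's exact builder formula, and the example inputs are made up.

// Rough cross-check: two-sided confidence for a difference in conversion rates.
// Inputs are unique visitors and converters for control (A) and test (B).
function confidence(visitorsA, convertersA, visitorsB, convertersB) {
  var pA = convertersA / visitorsA;
  var pB = convertersB / visitorsB;
  var se = Math.sqrt(pA * (1 - pA) / visitorsA + pB * (1 - pB) / visitorsB);
  var z = Math.abs(pB - pA) / se;
  return (2 * normalCdf(z) - 1) * 100;   // two-tailed confidence, in percent
}

// Abramowitz & Stegun approximation of the standard normal CDF.
function normalCdf(z) {
  var t = 1 / (1 + 0.2316419 * Math.abs(z));
  var d = 0.3989423 * Math.exp(-z * z / 2);
  var p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}

// Example: 10,000 visitors per experience, 500 vs. 560 conversions (~94% confidence).
console.log(confidence(10000, 500, 10000, 560).toFixed(1) + "%");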

Here are some things to keep in mind as you build and use this metric:

  1. Format your confidence calculated metric as a percent (number of decimals is up to you).
  2. You’ll need to create a separate confidence calculated metric for each experience compared to the control and for each success event you wish to measure. For example, if your test has a control and two challenger experiences and you’re measuring success for three different events, you’ll need to create six confidence metrics.
  3. Add your confidence metric(s) to a separate free-form table with a universal dimension, a dimension that is not specific to an individual experience and applies to your entire test period. Then, create summary number visualizations from your confidence metrics per the example below.

  4. This formula only works for calculating confidence with binary metrics. It will not work for calculating confidence with revenue or AOV.

After creating your confidence metrics you’ll be able to cleanly and easily display the results of your A/B test in Analysis Workspace, helping you save time from entering your data in an external calculator and helping your stakeholders quickly view the status of the test. I hope this is as helpful for you as it has been to me!

 

Calculated Metric Builder Definition

Conferences/Community, Digital Analytics Community

Two days of training in Chicago in October

Following up on our very successful training day efforts at ACCELERATE 2019, we have collectively decided to bring an expanded version of the same classes to Chicago, Illinois on Monday, October 21st and Tuesday, October 22nd.  We picked the location and dates to encourage folks to join us both at our training days and at the inaugural DAA OneConference, which is being held on Wednesday, October 23rd and Thursday, October 24th.

On Monday the 21st those of you who love Adobe Analytics and want to take your knowledge to the next level can join Adam Greco, Senior Partner, Author, and Founder of the Adobe Analytics Expert Council for a full day of Adobe Analytics “Top Gun” — a class that is widely recognized as the most complete and most advanced examination of Adobe Analytics available today.

Then, on Tuesday the 22nd, you will be able to choose from both morning and afternoon sessions covering a wide range of Adobe- and Google-related topics delivered by Michele Kiss, Brian Hawkins, Kevin Willeitner, Josh West, and Tim Patten.  There is something for everyone during this day-long session:

  • Managing Adobe Analytics Like a Pro
  • Enterprise Class Testing and Optimization with Adobe Target
  • JavaScript for Analysts
  • Getting the Most from Google Data Studio
  • Getting the Most from Adobe Analytics Workspace

You can learn more about the classes and register now at https://analyticsdemystified.com/advanced-analytics-education/

We hope you will join us in Chicago and then come with us to the DAA OneConference!

Adobe Analytics, Featured

B2B Conversion Funnels

One of the unique challenges of managing a B2B website is that you often don’t actually sell anything directly. Most B2B websites are there to educate, create awareness and generate sales leads (normally through form completions). Retail sites have a very straightforward conversion funnel: Product Views to Cart Additions to Checkouts to Orders. But B2B sites are not as linear. In fact, there is a ton of research that shows that B2B sales consideration cycles are very long and potential customers only reach out or self-identify towards the end of the process.

So if you work for a B2B organization, how can you see how your website is performing if the conversion funnel isn’t obvious? One thing you can do is to use segmentation to split your visitors into the various stages of the buying process. Some people subscribe to the Awareness – Consideration – Intent – Decision funnel model, but there are many different types of B2B funnel models that you can choose from. Regardless of which model you prefer, you can use digital analytics segmentation to create visitor buckets and see how your visitors progress through the buying process.

To illustrate this, I will use a very basic example using my website. On my website, I write blog posts, which [hopefully] drive visitors to the site to read, which, in turn, gives me an opportunity to describe my consulting services (of course, generating business isn’t my only motivation for writing blog posts, but I do have kids to put through college!). Therefore, if I want to identify which visitors I think are at the “Awareness” stage for my services, I might make a segment that looks like this:

Here I am saying that someone who has been to my website more than once and read more than one of my blog posts is generally “aware” of me. Next, I can create another segment for those that might be a bit more serious about considering me like this:

Here, you can see that I am raising the bar a bit and saying that to be in the “Consideration” bucket, they have to have visited at least 3 times and viewed at least three of my blog posts. Lastly, I will create a third bucket called “Intent” and define it like this:

Here, I am saying that they had to have met the criteria of “Consideration” and viewed at least one of the more detailed pages that describe my consulting services. As I mentioned, this example is super-simplistic, but the general idea is to place visitors into sales funnel buckets based upon what actions they can do on your website that might indicate that they are in one stage or another.

However, these buckets are not mutually exclusive. Therefore, what you can do is place them into a conversion funnel report in your digital analytics tool. This will apply these segments but do so in a progressive manner taking into account sequence. In this case, I am going to use Adobe’s Analysis Workspace fallout visualization to see how my visitors are progressing through the sales process (and I am also applying a few segments to narrow down the data like excluding competitor traffic and some content unrelated to me):

Here is what the fallout report looks like when it is completed:

In this report, I have applied each of the preceding three segments to the Visits metric and created a funnel. I also use the Demandbase product (which attempts to tell me what company anonymous visitors work for), so I segmented my funnel for all visitors and for those where a Demandbase Company exists. Doing this, I can see that for companies that I can identify, 55% of visitors make it to the Awareness stage, 27% make it to the Consideration stage, but only 2% make it to the Intent stage. This allows you to see where your website issues might exist. In my case, I am not very focused on using my content to sell my services and this can be seen in the 25% drop-off between Consideration and Intent. If I want to see this trended over time, I can simply right-click and see the various stages trended:

In addition, I can view each of these stages in a tabular format by simply right-clicking, creating a segment from each touchpoint, and adding those segments to a freeform table. Keep in mind that these segments will be different from the Awareness, Consideration, and Intent segments shown above because they take sequence into account, since they come from the fallout report (using sequential segmentation):

Once I have created segments for all funnel steps, I can create a table that looks like this:

This shows me which known companies (via Demandbase) have unique visitors at each stage of the buying process and which companies I might want to reach out to about new business. If I want, I can right-click and make a new calculated metric that divides the Intent visitor count by the Awareness visitor count to see who might be the most passionate about working with me:

Summary

So this is one way that you can use the power of segmentation to create B2B sales funnels with your digital analytics data. To read some other posts I have shared related to B2B, you can check out the following, many coming from my time at Salesforce.com:

Adobe Analytics, Featured

New Cohort Analysis Tables – Rolling Calculation

Last week, Adobe released a slew of cool updates to the Cohort Tables in Analysis Workspace. For those of you who suffered through my retention posts of 2017, you will know that this is something I have been looking forward to! In this post, I will share an example of how you can use one of the new updates, a feature called rolling calculation.

A pretty standard use case for cohort tables is looking to see how often a website visitor came to your site, performed an action and then returned to perform another action. The two actions can be the same or different. The most popular example is probably people who ordered something on your site and then came back and ordered again. You are essentially looking for “cohorts” that were the same people doing both actions.

To illustrate this, let’s look at people who come to my blog and read posts. I have a success event for blog post views and I have a segment created that looks for blog posts written by me. I can bring these together to see how often my visitors come back to my blog each week: 

I can also view this by month:

These reports are good at letting me know how many visitors who read a blog post in January of 2018 came back to read a post in February, March, etc… In this case, it looks like my blog posts in July, August & September did better than other months at driving retention.

However, one thing that these reports don’t tell me is whether the same visitors returned every week (or month). Knowing this tells you how loyal your visitors are over time (bearing in mind that cookie deletion will make people look less loyal!). This ability to see the same visitors rolling through all of your cohort reports is what Adobe has added.

Rolling Calculation

To view how often the same people return, you simply have to edit your cohort table and check off the Rolling Calculation box like this:

This will result in a new table that looks like this:

Here you can see that very few people are coming to my blog one week after another. For me, this makes sense, since I don’t always publish new posts weekly. The numbers look similar when viewed by month:

Even though the rolling calculation cohort feature can be a bit humbling, it is a really cool feature that can be used in many different ways. For example, if you are an online retailer, you might want to use the QUARTER granularity option and see what percentage of visitors purchase from you at least once every quarter. If you manage a financial services site, you might want to see how often the same visitors return each month to check their online bank statements or make payments.

Segmentation

One last thing to remember is that you still have the ability to right-click on any cohort cell and create a segment. This means that in one click you can build a segment for people who come to your site in one week and return the next week. It is as easy as this:

The resulting segment will be a bit lengthy (and a bit intimidating!), but you can name it and tweak it as needed:

Summary

Rolling Calculation cohort analysis is a great new feature for Analysis Workspace. Since no additional implementation is required to use this new feature, I suggest you try it out with some of your popular success events…

Excel Tips, Presentation

Big Book of Key Performance Indicators

I have received a bunch of emails over the last few days from folks who have been directed to my Big Book of Key Performance Indicators and the companion spreadsheet.  Since we changed the website (albeit years ago), I figured it was easier to just put it here in a blog post:

I hope you enjoy the work!

Adobe Analytics, Featured

Adam Greco Adobe Analytics Blog Index

Over the years, I have tried to consistently share as much as I can about Adobe Analytics. The only downside of this is that my posts span a wide range of topics. Therefore, as we start a new year, I have decided to compile an index of my blog posts in case you want a handy way to find them by topic. This index won't include all of my posts or old ones on the Adobe site, since many of them are now outdated due to new advances in Adobe Analytics. Of course, you can always deep-dive into most Adobe Analytics topics by checking out my book.

Running A Successful Adobe Analytics Implementation

Adobe Analytics Implementation & Features

Analysis Workspace

Virtual Report Suites

Sample Analyses by Topic

Marketing Campaigns

Content

Click-through Rates

Visitor Engagement/Scoring

eCommerce

Lead Generation/Forms

Onsite Search

Adobe Analytics Administration

Adobe Analytics Integrations

Adobe Analytics, Featured

2019 London Adobe Analytics “Top Gun” Class

I will be traveling to London in early February, so I am going to try and throw together an Adobe Analytics “Top Gun” class whilst I am there (Feb 5th). As a special bonus, for the first time ever, I am also going to include some of my brand new class “Managing Adobe Analytics Like A Pro!” in the same training!  I promise it will be a packed day! This will likely be the only class I do in Europe this year, so if you have been wanting to attend this class, I suggest you register. Thanks!

Here is the registration link:

https://www.eventbrite.com/e/analytics-demystified-adobe-analytics-top-gun-training-london-2019-tickets-53403058987

Here is some feedback from class attendees:

Adobe Analytics, Featured

My Favorite Analysis Workspace Right-Clicks – Part 2

In my last blog post, I began sharing some of my favorite hidden right-click actions in Analysis Workspace. In this post, I continue where I left off (since that post was getting way too long!). Most of these items are related to the Fallout visualization since I find that it has so many hidden features!

Freeform Table – Change Attribution Model for Breakdowns

Attribution is always a heated topic. Some companies are into First Touch and others believe in Last Touch. In many cases, you have to agree as an organization which attribution model to use, especially when it comes to marketing campaigns. However, what if you want to use multiple attribution models? For example, let's say that as an organization, you decide that the over-arching attribution model is Last Touch, meaning that the campaign source taking place closest to the success (Order, Blog Post View, etc.) is the one that gets credit. Here is what this looks like for my blog:

However, what if, at the tracking code level, you want to see attribution differently? For example, what if you decide that once the Last Touch model is applied to the campaign source, you want to see the specific tracking codes leading to Blog Posts allocated by First Touch? Multiple allocation models are available in Analysis Workspace, but this feature is hidden. The use of multiple concurrent attribution models is described below.

First, you want to break down your campaign source into tracking codes by right-clicking and choosing your breakdown:

You can see that the breakdown is showing tracking codes by source and that the attribution model is Last Touch | Visitor (highlighted in red above). However, if you hover your mouse over the attribution description of the breakdown header, you can see an “Edit” link like this:

Clicking this link allows you to change the attribution model for the selected metric for the breakdown rows. In this case, you can view tracking codes within the “linkedin-post” source attributed using First Touch Attribution and, just for fun, you can change the tracking code attribution for Twitter to an entirely different attribution model (both shown highlighted in red below):

So with a few clicks, I have changed my freeform table to view campaign source by Last Touch, but then within that, tracking codes from LinkedIn by First Touch and Twitter by J Curve attribution. Here is what the new table looks like side-by-side with the original table that is all based upon Last Touch:

As you can see, the numbers can change significantly! I suggest you try out this hidden tip whenever you want to see different attribution models at different levels…

Fallout – Trend

The next right-click I want to talk about has to do with the Fallout report. The Fallout report in Analysis Workspace is beyond cool! It lets you add pages, metrics, and pretty much anything else you want in order to see where users are dropping off your site or app. You can also apply segments to the Fallout report holistically or just to a specific portion of it. In this case, I have created a Fallout report that shows how often visitors come to our home page, eventually view one of my blog posts, and then eventually view one of my consulting services pages:

Now, let’s imagine that I want to see how this fallout is trending over time. To do this, right-click anywhere in the fallout report and choose the Trend all touchpoints option as shown here:

Trending all touchpoints produces a new graph that shows fallout trended over time:

Alternatively, you can select the Trend touchpoint option for a specific fallout touchpoint and see one of the trends. Seeing one fallout trend provides the added benefit of being able to see anomaly detection within the graph:

Fallout – Fall-Through & Fall-Out

The Fallout visualization also allows you to view where people go directly after your fallout touchpoints. Fallthrough reporting can help you understand where they are going if they don’t go directly to the next step in your fallout steps. Of course, there are two possibilities here. Some visitors eventually do make it to the remaining steps in your fallout and others do not. Therefore, Analysis Workspace provides right-clicks that show you where people went in both situations. The Fallthrough scenario covers cases where visitors do eventually make it to the next touchpoint and right-clicking and selecting that option looks like this:

In this case, I want to see where people who have completed the first two steps of my fallout go directly after the second step, but only for cases in which they eventually make it to the third step of my fallout. Here is what the resulting report looks like:

As you can see, there were a few cases in which users went directly to the pages I wanted them to go to (shown in red), but now I can see where they deviated and view the latter in descending order.

The other option is to use the fallout (vs. fallthrough) option. Fallout shows you where visitors went next if they did not eventually make it to the next step in your fallout. You can choose this using the following right-click option:

Breakdown fallout by touchpoint produces a report that looks like this:

Another quick tip related to the fallout visualization that some of my clients miss is the option to make fallout steps immediate instead of eventual. At each step of the fallout, you can change the setting shown here:

Changing the setting to Next Hit narrows down the scope of your fallout to only include cases in which visitors went directly from one step to the next. Here is what my fallout report looks like before and after this change:

Fallout – Multiple Segments

Another cool feature of the fallout visualization is that you can add segments to it to see fallout for different segments of visitors. You can add multiple segments to the fallout visualization. Unfortunately, this is another “hidden” feature because you need to know that this is done by dragging over a segment and dropping it on the top part of the visualization as shown here:

This shows a fallout that looks like this:

Now I can see how my general population falls out and also how it is different for first-time visits. To demonstrate adding multiple segments, here is the same visualization with an additional “Europe” segment added:

Going back to what I shared earlier, right-clicking to trend touchpoints with multiple segments added requires you to click precisely on the part that you want to see trended. For example, right-clicking on the Europe Visits step two shows a different trend than clicking on the 1st Time Visits bar:

Therefore, clicking on both of the different segment bars displays two different fallout trends:

So there you have it. Two blog posts worth of obscure Analysis Workspace features that you can explore. I am sure there are many more, so if you have any good ones, feel free to leave them as a comment here.

Adobe Analytics, Featured

Product Segmentation Gotchas

If you have used Adobe Analytics segmentation, you are likely very familiar with the hierarchy of containers. These containers define the scope of the criteria wrapped inside them and are available at the visitor, visit, and hit levels. They help you control exactly what happens at each of those levels, and your analysis can be heavily impacted by which one you use. These containers are extremely useful and handle most use cases.

When doing really detailed analysis related to products, however, the available containers can come up short. This is because there can be multiple products per visitor, visit, or hit. Scenarios like a product list page and checkout pages, when analyzed at a product level, can be especially problematic. Obviously this has a disproportionate impact on retailers, but other industries may also be affected if they use the products variable to facilitate fancy implementations. Any implementation that needs to collect attributes with a many-to-many relationship may need to leverage the products variable.

Following are a few cases illustrating where this might happen so be on the lookout.

Product Attributes at Time of Order

Let's say you want to segment for visits that purchased a product with a discount. Or, rather than a discount, it could be a flag indicating the product should be gift wrapped. It could even be some other attribute that you want passed "per product" on the thank you page. Using the discount scenario, if a product-level discount (e.g. a 2-for-1 deal) is involved and that same discount can apply to other products, you won't quite be able to get the right association between the two dimensions. You may be tempted to create a segment like this:

However, this segment can disappoint you. Imagine that your order includes two products (product A and product B) and product B is the one that has the "2_for_1" discount applied to it (through a product-syntax merchandising eVar). In that case the visit will qualify for our segment because our criteria will be applied at the hit level (note the red arrow). This setting will result in the segment looking for a hit with product A and a code of "2_for_1", but it doesn't care beyond that. The segment will include the correct results (the right discount associated with the right product), but it will also include undesired results such as the right discount associated with the wrong product, simply because the correct product happened to be purchased at the same time. In the end you are left with a segment you shouldn't use.

This example is centered around differing per-product attributes at the time of an order, but really the event doesn't matter. This could apply any time you have a group of products collected at once that may each have different values. If multiple products are involved and your implementation is using merchandising eVars with product syntax (correctly), then this will be a consideration for you.
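
For context, here is roughly what product-syntax merchandising looks like in a page-code AppMeasurement implementation. This is an illustration only: the eVar number, product IDs, and values are made up.

// Illustration only: order confirmation beacon where the discount code rides
// along with the specific product it applies to (eVar25 is hypothetical).
s.events = "purchase";
s.products = "shoes;PROD-A;1;49.99," +               // product A, no discount
             "shoes;PROD-B;1;0.00;;eVar25=2_for_1";  // product B carries the 2_for_1 code
s.t();

Because both products ride on the same hit, a hit-level segment can only ask whether the hit contains product A and whether it contains the "2_for_1" value somewhere, which is exactly the mismatch described above.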

Differentiating Test Products

I once had a super-large retailer run a test on a narrow set of a few thousand products. They wanted to know what kind of impact different combinations of alternate images available on the product detail page would have on conversion. This included still images, lifestyle images, 360 views, videos, etc. However, not all products had comparable alternate images available. Because of this they ran the test only across products that did have comparable imagery assets. This resulted in the need to segment very carefully at a product level. Inevitably they came to me with the question “how much revenue was generated by the products that were in the test?” This is a bit tricky because in A/B tests we normally look at visitor-level data for a certain timeframe. If someone in the test made a purchase and the test products were only a fraction of the overall order then the impact of the test could be washed out. So we had to get specific. Unfortunately, through a segment alone we couldn’t get good summary information.

This is rooted in the same reasons as the first example. If you were to segment only for visitors in the test, then your resulting revenue would include all orders placed by those visitors while in the test. From there you could try to get more specific and segment for the products you are interested in; however, the closest you'll get is order-level revenue containing the right products. You'll still be missing the product-specific revenue for those products. At least you would be excluding orders placed by test participants that didn't have the test products at all… but a less-bad segment is still a bad segment 🙂

Changes to Product Attributes

This example involves the fulfillment method of the product. Another client wanted to see how people changed their fulfillment method (ship to home, ship to store, buy online/pickup in store) and was trying to work around a limited implementation. The implementation was set up to answer “what was the fulfillment method changed to?” but what they didn’t have built in was this new question — “of those that start with ship-to-home products in the cart, how often is that then changed to ship to store?” Also important is that each product in the cart could have different fulfillment methods at any given time.

In this case we can segment for visits that start with some product with a ship-to-home method. We can even segment for those that change the fulfillment method. We get stuck, though, when trying to associate the two events together by a specific product. You’re left without historical data and resorting to implementation enhancements.

Other Options

The main point of this post is to emphasize where segmenting on products could go wrong. There are ways to work around the limitations above, though. Here are a few options to consider:

  • In the case of the product test, we could apply a classification to identify which products are in the test. Then you would just have to use a table visualization, add a dimension for your test groups, and break that down by this new classification. This will show you the split of revenue within the test group.
  • Turn to the Adobe Data Feed and do some custom crunching of the numbers in your data warehouse.
  • Enhance your implementation. In the first scenario, where persistence isn't needed, you could get away with appending the product to the attribute to provide the uniqueness you need. That may, though, give you some issues with the number of permutations it creates. Depending on how far into this you want to get, you could even try some really crazy/fun stuff like rewriting the visitor ID to include the product, which enables some really advanced product-level segmentation. No historical data available, though.
  • Limit your dataset to users that just interacted with or ordered one product to avoid confusion with other products. Blech! Not recommended.

Common Theme

You'll notice that in all of these examples the common thread is that we are leveraging product-specific attributes (merchandising eVars) and trying to tease out specific products from other products based on those attributes. Given that none of the containers perfectly matches the scope of a single product, you may run into problems like those described above. Have you come across other segmenting-at-a-product-level problems? If so, please comment below!

 

Adobe Analytics, Featured

My Favorite Analysis Workspace Right-Clicks – Part 1

If you use Adobe Analytics, Analysis Workspace has become the indispensable tool of choice for reporting and analysis. As I mentioned back in 2016, Analysis Workspace is the future and where Adobe is concentrating all of its energy these days. However, many people miss all of the cool things they can do with Analysis Workspace because much of it is hidden in the [in]famous right-click menus. Analysis Workspace gurus have learned “when in doubt, right-click” while using Analysis Workspace. In this post, I will share some of my favorite right-click options in Analysis Workspace in case you have not yet discovered them.

Freeform Table – Compare Attribution Models

If you are an avid reader of my blog, you may recall that I recently shared that a lot of attribution in Adobe Analytics is shifting from eVars to Success Events. Therefore, when you are using a freeform table in Analysis Workspace, there may be times when you want to compare different attribution models for a metric you already have in the table. Instead of forcing you to add the metric again and then modify its attribution model, you can now choose a second attribution model right from within the freeform table. To do this, just right-click on the metric header and select the Compare Attribution Model option:

This will bring up a window asking you which comparison attribution model you want to use that looks like this:

Once you select that, Analysis Workspace will create a new column with the secondary attribution model and also automatically create a third column that compares the two:

My only complaint here is that when you do this, it becomes apparent that you don't actually know which attribution model was being used for the column you had in the first place. I hope that, in the future, Adobe will start putting attribution model indicators underneath every metric that is added to freeform tables, since the first metric column above looks a bit confusing and only an administrator would know what its allocation is, based upon the eVar settings in the admin console. Therefore, my bonus trick is to use the Modify Attribution Model right-click option and set it to the correct model:

In this case, the original column was Last Touch at the Visitor level, so modifying this keeps the data as it was, but now shows the attribution label:

This is just a quick “hack” I figured out to make things clearer for my end-users… But, as you can see, all of this functionality is hidden in the right-click of the Freeform table visualization. Obviously, there are other uses for the Modify Attribution Model feature, such as changing your mind about which model you want to use as you progress through your analysis.

Freeform Table – Compare Date Range

Another handy freeform table right-click is the date comparison. This allows you to pick a date range and compare the same metric for the before and after range and also creates a difference column automatically. To do this, just right-click on the metric column of interest and specify your date range:

This is what you will see after you are finished with your selection:

In this case, I am looking at my top blog posts from October 11 – Nov 9 compared to the prior 30 days. This allows me to see how posts are doing in both time periods and see the percent change. In your implementation, you might use this technique to see product changes for Orders and Revenue.

Cohort – Create Segment From Cell

If you have situations on your website or mobile app that require you to see if your audience is coming back over time to perform specific actions, then the Cohort visualization can be convenient. By adding the starting and ending metric to the Cohort visualization, Analysis Workspace will automatically show you how often your audience (“cohorts”) are returning. Here is what my blog Cohort looks like using Blog Post Views as the starting and ending metrics:

While this is interesting, what I like is my next hidden right-click. This is the ability to automatically create a segment from a specific cohort cell. There are many times where you might want to build a segment of people who came to your site, did something and then came back later to do either the same thing or a different thing. Instead of spending a lot of time trying to build a segment for this, you can create a Cohort table and then right-click to create a segment from a cell. For example, let’s imagine that I notice a relatively high return rate the week after September 16th. I can right-click on that cell and use the Create Segment from Cell option:

This will automatically open up the segment builder and pre-populate the segment, which may look like this:

From here you can modify the segment any way you see fit and then save it. Then you can use this segment in any Adobe Analytics report (or even make a Virtual Report Suite from it!). This is a cool, fast way to build cohort segments! Sometimes, I don’t even keep the Cohort table itself. I merely use the Cohort table to make the segment I care about. I am not sure if that is smart or lazy, but either way, it works!

Venn – Create Segment From Cell

As long as we are talking about creating segments from a visualization, I would be remiss if I didn’t mention the Venn visualization. This visualization allows you to add up to three segments and see the overlap between all of them. For example, let’s say that for some crazy reason I need to look at people who view my blog posts, are first-time visitors and are from Europe. I would just drag over all three of these segments and then select the metric I care about (Blog Post Views in this case):

This would produce a Venn diagram that looks like this:

While this is interesting, the really cool part is that I can now right-click on any portion of the Venn diagram to get a segment. For example, if I want a segment for the intersection of all three segments, I just right-click in the region where they all overlap like this:

This will result in a brand new segment builder window that looks like this:

From here, I can modify it, save it and use it any way I’d like in the future.

Venn – Add Additional Metrics

While we are looking at the Venn visualization, I wanted to share another secret tip that I learned from Jen Lasser while we traveled the country performing Adobe Insider Tours. Once you have created a Venn visualization, you can click on the dot next to the visualization name and check the Show Data Source option:

This will expose the underlying data table that is powering the visualization like this:

But the cool part is what comes next. From here, you can add as many metrics as you want to the table by dragging them into the Metrics area. Here is an example of me dragging over the Visits metric and dropping it on top of the Metrics area:

Here is what it looks like after multiple metrics have been added (my implementation is somewhat lame, so I don’t have many metrics!):

But once you have numerous metrics, things get really cool! You can click on any metric, and the Venn visualization associated with the table will dynamically change! Here is a video that shows what this looks like in real life:

This cool technique allows you to see many Venn visualizations for the same segments at once!

Believe it or not, that is only half of my favorite right-clicks in Analysis Workspace! Next week, I will share the other ones, so stay tuned!

Adobe Analytics, Featured

New Adobe Analytics Class – Managing Adobe Analytics Like A Pro!

While training is only a small portion of what I do in my consulting business, it is something I really enjoy. Training allows you to meet with many people and companies and help them truly understand the concepts involved in a product like Adobe Analytics. Blog posts are great for small snippets of information, but training people face-to-face allows you to go so much deeper.

For years, I have provided general Adobe Analytics end-user training for corporate clients and, more recently, Analysis Workspace training. But my most popular class has always been my Adobe Analytics “Top Gun” Class, in which I delve deep into the Adobe Analytics product and teach people how to really get the most out of their investment in Adobe Analytics. I have done this class for many clients privately and also offer public versions of the class periodically (click here to have me come to your city!).

In 2019, I am launching a brand new class related to Adobe Analytics! I call this class:

Having worked with Adobe Analytics for fifteen years now (yeesh!), I have learned a lot about how to run a successful analytics program, especially those using Adobe Analytics. Therefore, I have attempted to put all of my knowledge and best practices into this new class. Some of the things I cover in the class include:

  • How to run an analytics implementation based upon business requirements
  • What does a fully functioning Solution Design Reference look like and how can you use it to track implementation status
  • Why data quality is so important and what steps can you take to minimize data quality issues
  • What are best practices in organizing/managing your Adobe Analytics implementation (naming conventions, admin settings, etc…)
  • What are the best ways to train users on Adobe Analytics
  • What team structures are available for an analytics team and which is best for your organization
  • How to create the right perception of your analytics team within the organization
  • How to get executives to “buy-in” to your analytics program

These are just some of the topics covered in this class. About 70% of the class applies to those using any analytics tool (i.e. Adobe, GA, etc…), but there are definitely key portions that are geared towards Adobe Analytics users.

I decided to create this class based on feedback from people attending my “Top Gun” Class over the years. Many of the attendees were excited about knowing more about the Adobe Analytics product, but they expressed concerns about running the overall analytics function at their company. I have always done my best to share ideas, lessons, and anecdotes in my conference talks and training classes, but in this new class, I have really formalized my thinking in hopes that class participants can learn from what I have seen work over the past two decades.

ACCELERATE

This new class will be making its debut at the Analytics Demystified ACCELERATE conference this January in California. You can come to this class and others at our two-day training/conference event, all for under $1,000! In addition to this class and others, you also have access to our full day conference with great speakers from Adobe, Google, Nordstrom, Twitch and many others. I assure you that this two-day conference is the best bang for the buck you can get in our industry! Unfortunately, space is limited, so I encourage you to register as soon as possible.

Tag Management

Adobe Launch Linking to DTM

Earlier this week I mentioned a feature that allows you to link your Adobe Launch files to your old DTM files. Some have asked me for more details so you now get this follow-up post.

Essentially this feature allows you to make the transition from DTM to Launch easier for sites that were already implemented with DTM. How does it make it easier? Well, let’s say you are an implementation manager who spent years getting DTM in place across 100+ sites that are each running on different platforms and managed by a multitude of internal and external groups. That isn’t a process that most people get excited to revisit. To avoid all that, Adobe has provided this linking feature. As you create a new configuration in Launch those Launch files can just replace your DTM files.

Let’s imagine that you have a setup where a variety of sites are currently pointing to DTM code and your newer implementations are pointing to Launch code. This assumes you are using one property across many sites which may or may not be a good idea depending on your needs. You could visualize it like below where the production environment for both products is used by different sites.

Once you enable the linking, the production code is now shared between the two products. The new visual would look something like this:

It is only a one-way sharing, though. If you were to link and then publish from DTM, that would not impact your Launch files; it would only update the DTM files. It’s best to get to the point where you have published in Launch and then simply disable the DTM property.

How to Enable Linking

Here is how it is done if you are starting from a brand new property in Launch. You should do these steps before any site is using the Production embed script from Launch. This is because Adobe will give you a new embed code during this process.

  1. The new property will already have Environments enabled (this may be new; I was under the impression you had to create the environments from scratch). Find your Production environment, consider the warning above, and, if all is well, delete it.
  2. Once it is deleted, click the Add Environment button and select Production. This allows you to add a new Production environment to replace the one you just deleted.
  3. As you configure the environment, toggle the “Link DTM embed code” option on and paste in your DTM embed code.
  4. Save your settings. If everything checks out, you will be given new Production embed code. This embed code is what you would use for any production sites.

Other Considerations

  • The embed code will change every time you delete and add a new Production environment. You’ll want sites with the Launch embed code to have the latest version. I haven’t tested what will happen if you try to implement a site with the old Production embed code. It makes me uneasy, though, so I would just avoid it.
  • Note that in my picture above I only show the Production environment being shared. This actually brings up an important point around testing. If you have a staging version of the old sites that uses the staging version of the DTM script, then you really can’t test the migration to Launch. The linking only updates the production files. But you really do need to test. In order to do this, I would recommend using a tool like Charles or Chrome overrides to rewrite the DTM embed code to your Launch embed code.
  • Watch out for old methods. When Adobe announced the transition from DTM to Launch, they noted that only the methods below will be supported. If you did something crazy on your site that has outside-of-DTM scripts using something in the _satellite object, then you’ll need to figure out an alternative. Once you publish your Launch files to the DTM location, any other methods previously made available by DTM may not be there anymore. Here are the methods that you can still use (a quick defensive-usage sketch follows the list):
    • _satellite.notify()
    • _satellite.track()
    • _satellite.getVar()
    • _satellite.setVar()
    • _satellite.getVisitorId()
    • _satellite.setCookie()
    • _satellite.readCookie()
    • _satellite.removeCookie()
    • _satellite.isLinked()
  • You can see Adobe’s documentation around this feature here. Especially important are the prerequisites for enabling the linking (DTM and Launch need to be associated with the same org, etc.).
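
As a hedged illustration of that last point, here is roughly how an outside-of-DTM script could defensively call the supported methods after linking. The rule name and data element name are hypothetical, not from any real implementation:

// Only call _satellite methods that Launch still supports after linking
if (window._satellite && typeof window._satellite.track === 'function') {
  _satellite.track('newsletter-signup'); // fires a direct call rule; the rule name is hypothetical
}
// Read a data element defensively; 'memberType' is a hypothetical data element name
var memberType = (window._satellite && window._satellite.getVar) ? _satellite.getVar('memberType') : null;
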
Tag Management, Uncategorized

Thankful Launch Features

In honor of Thanksgiving last week I wanted to take a moment to provide a possibly odd mashup between holiday and tag management systems. When I’m converting an implementation from Adobe DTM to Adobe Launch there are a few small features that I’m grateful Adobe added to Launch. Here they are in no particular order…

Better Support for the S Object

Adobe Analytics implementations have traditionally leveraged an ‘s’ global object. The standard setup in DTM would either obfuscate the object that Adobe Analytics used or just not make it globally scoped. This could be annoying when you wanted some other script to use the ‘s’ object. You can force DTM to use the ‘s’ object, but then you would lose some features like the “Managed by Adobe” option for your app measurement code. Here is the DTM setup:

Now in Launch you can opt to “make tracker globally accessible” in your extension configuration.

This will create the ‘s’ object at the window scope so you can have other scripts potentially reference the object directly. With this, you get the added benefit of future library updates being easier. Having scripts that directly reference the ‘s’ object isn’t something you should plan on leveraging heavily. However, depending on what you need while migrating, it sure can be useful.
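
For example, here is a hedged sketch of what another script might do once the tracker is global (the eVar and link name are made up for illustration):

// Hedged sketch: an outside script sending a custom link call via the global tracker
if (window.s && typeof window.s.tl === 'function') {
  s.linkTrackVars = 'eVar10';        // hypothetical variable
  s.eVar10 = 'video-complete';
  s.tl(true, 'o', 'Video Complete'); // custom link call
}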

Ordering Tags

When you have an implementation with dependencies between tags, the ordering is important. In DTM you had some ordering available by using different event types on the page (top of page, page bottom, DOM ready), but no supported way to order rules firing on the same event (although I did once find an unsupported hack for this).

With Launch the ordering is built right into the event configuration of your rules.

It is pretty simple. The default number is 50. If you need something to run earlier on the same event just give it a lower number.

Modifying default values sometimes makes me nervous, though, so if you do change the number from 50, just do yourself a favor and update the event name and even the rule name to reflect that. Because my names often represent a list of attributes, I’ll just add “order 10” to the end of the name.

Link Launch to DTM Embed Code

When you configure environments in Launch you will get new embed code to implement on your site. If you were on DTM for a long time and had a bunch of internal or agency groups implement DTM across many different applications, then chances are a global code update like this is tough! Fortunately, Launch has a feature that allows you to simply update your old DTM payload with the new Launch logic without making all those updates. When creating a new production environment you can just add your DTM embed code to the field shown below. Once that is done, your production Launch code will publish to the old DTM embed file as well. With this, any site on the old or new embed code will have the same, consistent code. Yay!

So what’s one of your favorite Launch features? Comment below!

Adobe Analytics, Featured

Using Builders Visibility in Adobe Analytics

Recently, while working on a client implementation, I came across something I hadn’t seen before in Adobe Analytics. For me, that is quite unusual! While in the administration console, I saw a new option under the success event visibility settings called “Builders” as shown here:

A quick check in the documentation showed this:

Therefore, the new Builders setting for success events is meant for cases in which you want to capture data and use it in components (i.e. Calculated Metrics, Segments, etc.), but not necessarily expose it in the interface. While I am not convinced that this functionality is all that useful, in this post, I will share some uses that I thought of related to the feature.

Using Builders in Calculated Metrics

One example of how you could use the Builders visibility is when you want to create a calculated metric, but don’t necessarily care about one of the elements contained in the calculated metric formula as a standalone metric. To illustrate this, I will reference an old blog post I wrote about calculating the average internal search position clicked. In that post, I suggested that you capture the search result position clicked in a numeric success event, so that it could be divided by the number of search result clicks to calculate the average search position. For example, if a user conducts two searches and clicks on the 4th and 6th results respectively, you would pass the values of 4 and 6 to the numeric success event and divide the sum by the number of search result clicks ((4+6)/2=5.0). Once you do that, you will see a report that looks like this:

In this situation, the Search Position column is being used to calculate the Average Search Position, but by itself, the Search Position metric is pretty useless. There aren’t many cases in which someone would want to view the Search Position metric by itself. It is simply a means to an end. Therefore, this may be a situation in which you, as the Adobe Analytics administrator, choose to use the Builders functionality to hide this metric from the reporting interface and Analysis Workspace, only exposing it when it comes to building calculated metrics and segments. This allows you to remove a bit of the clutter from your implementation and can be done by simply checking the box in the visibility column and using the Builders option as shown here:

As I stated earlier, this feature will not solve world peace, but I guess it can be handy in situations like this.

Using Builders in Segments

In addition to using “Builders” Success Events in calculated metrics, you can also use them when building segments. Continuing the preceding internal search position example, there may be cases in which you want to use the Search Position metric in a segment like the one shown here:

Make Builder Metrics Selectively Visible

One other thing to note with Builders has to do with calculated metrics. If you choose to hide an element from the interface, but one of your advanced users wants to view it, keep in mind that they still can by leveraging calculated metrics. Since the element set to Builders visibility is available in the calculated metrics builder, there is nothing stopping you or your users from creating a calculated metric that is equal to the hidden success event. They can do this by simply dragging over the metric and saving it as a new calculated metric as shown here:

This will be the same as having the success event visible, but by using a calculated metric, your users can determine who they want to share the resulting metric with in the organization.

Adobe Analytics, Featured

Viewing Classifications Only via Virtual Report Suites

I love SAINT Classifications! I evangelize the use of SAINT Classifications anytime I can, especially in my training classes. Too often Adobe customers fail to take full advantage of the power of SAINT Classifications. Adding meta-data to your Adobe Analytics implementation greatly expands the types of analysis you can perform and what data you can use for segmentation. Whether the meta-data is related to campaigns, products or customers, enriching your data via SAINT is really powerful.

However, there are some cases in which, for a variety of reasons, you may choose to put a lot of data into an eVar or sProp with the intention of splitting the data out later using SAINT Classifications. Here are some examples:

  • Companies concatenate a lot of “ugly” campaign data into the Tracking Code eVar which is later split out via SAINT
  • Companies store indecipherable data (like an ID) in an eVar or sProp which only makes sense when you look at the SAINT Classifications
  • Companies have unplanned bad data in the “root” variable that they fix using SAINT Classifications
  • Companies are low on variables, so they concatenate disparate data points into an eVar or sProp to conserve variables

One example of the latter I encountered with a client is shown here:

In this example, the client was low on eVars and instead of wasting many eVars, we concatenated the values and then split out the data using SAINT like this:

Using this method, the company was able to get all of the reports they wanted, but only had to use one eVar. The downside was that users could open up the actual eVar28 report in Adobe Analytics and see the ugly values shown above (yuck!). Because of this, a few years ago I suggested an idea to Adobe that they let users hide an eVar/sProp in the interface, but continue letting users view the SAINT Classifications of the hidden eVar/sProp. Unfortunately, since SAINT Classification reports were always tied directly to the “root” eVar/sProp on which they are based, this wasn’t possible. However, with the advent of Virtual Report Suites, I am pleased to share that you can now curate your report suite to provide access to SAINT Classification meta-data reports, while at the same time not providing access to the main variable they are based upon. The following will walk you through how to do this.
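
For illustration, here is a hedged sketch of what the collection side of a concatenated eVar like this might look like. The data points and delimiter are hypothetical; only the variable number comes from the example above:

// Hypothetical sketch: concatenate several related data points into one eVar,
// then split them back out later with SAINT classification columns
var productType = 'super-series';
var productLine = 'security';
var region = 'us';
s.eVar28 = [productType, productLine, region].join('|'); // e.g. "super-series|security|us"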

Curate Your Classifications

The first step is to create a new Virtual Report Suite off of another report suite. At the last step of the process, you will see the option to curate/customize what implementation elements will go over to the new Virtual Report Suite. In this case, I am going to copy over everything except the Tracking Code and Blog Post Title (eVar5) elements as shown here:

As you can see, I am hiding Blog Post Title [v5], but users still have access to the four SAINT Classifications of eVar5. Once the Virtual Report Suite is saved and active, if you go into Analysis Workspace and look at the dimensions in the left nav, you will see the meta-data reports for eVar5, but not the original eVar5 report:

If you drag over one of the SAINT Classification reports, it works just like you would expect it to:

If you try to break this report down by the “root” variable it is based upon, you can’t because it isn’t there:

Therefore, you have successfully hidden the “root” report, but still provided access to the meta-data reports. Similarly, you can view one of the Campaign Tracking Code SAINT Classification reports (like Source shown below), but not have access to the “root” Tracking Code report:

Summary

If you ever have situations in which you want to hide an eVar/sProp that is the “root” of a SAINT Classification, this technique can prove useful. Many of the reasons you might want to do this are shown in the beginning of this post. In addition, you can combine Virtual Report Suite customization and security settings to show different SAINT Classification elements to different people. For example, you might have a few Classifications that are useful to an executive and others that are meant for more junior analysts. There are lots of interesting use cases where you can apply this cool trick!

Conferences/Community

Announcing additional ACCELERATE speakers from Twitch, Google, and Nordstrom!

Today we are excited to announce some additional speakers at our 2019 ACCELERATE conference in Los Gatos, California on January 24th and 25th. In addition to Ben Gaines from Adobe and Krista Seiden from Google, we are delighted to be joined by June Dershewitz from Twitch, Lizzie Allen Klein from Google, and David White from Nordstrom.

June is a long-time friend of the firm and will be sharing her insights into the emerging relationships between Data Analysts, Data Scientists, and Data Engineers. Lizzie is an analytics rock-star at Google and will be talking about how any data worker can elevate their own skills to get the most from their career. David will be talking about how Nordstrom is essentially “rolling their own” digital analytics and building data collection and distribution based on open source, cloud-based technology.

June Dershewitz is a Director of Analytics at Twitch, the world’s leading video platform and community for gamers (a subsidiary of Amazon). As an analytics practitioner she builds and leads teams that focus on marketing analytics, product analytics, business intelligence, and data governance. As a long-standing advocate of the analytics community, she was the co-founder of Web Analytics Wednesdays (along with Eric Peterson!); she’s also a Director Emeritus of the Digital Analytics Association and a current Advisory Board Member at Golden Gate University.

Lizzie Allen Klein is a consumer insights analyst at Google, where she focuses on support analytics for Google consumer apps. Prior to this role, she ran experimentation and analytics on the Google Cloud Platform website. Aside from playing with her dog in the mountains of Colorado, she enjoys learning new data exploration techniques, using those techniques to better understand users, and encouraging data-informed decision-making by sharing user insights.

David White is a Cloud Security Engineer at Nordstrom. He is passionate about event-driven architectures, clickstream analytics, and keeping data secure. He has experience building analytics pipelines, both in the corporate space and in open source communities. He lives in Seattle, WA with his girlfriend and dog.

Ben Gaines is a Group Product Manager at Adobe, where he is responsible for guiding aspects of the Adobe Analytics product strategy and roadmap related to product integration and Analysis Workspace. In this role, he and his team work closely with Adobe customers to understand their needs and manage the planning and design of new analysis capabilities in the product. He lives near Salt Lake City, Utah, with his wife and four children.

Krista Seiden is a Product Manager for Google Analytics and the Analytics Advocate for Google, advocating for all things data, web, mobile, optimization and more. She is a keynote speaker, practitioner, and writer on analytics and optimization, and a passionate supporter of #WomenInAnalytics. You can follow her blog at www.kristaseiden.com and find her on Twitter @kristaseiden.
Adobe Analytics, Featured

Adjusting Time Zones via Virtual Report Suites

When you are doing analysis for an organization that spans multiple time zones, things can get tricky. Each Adobe Analytics report suite is tied to one specific time zone (which makes sense), but this can lead to frustration for your international counterparts. For example, let’s say that Analytics Demystified went international and had resources in the United Kingdom. If they wanted to see when visitors located in the UK viewed blog posts (assume that is one of our KPI’s), here is what they would see in Adobe Analytics:

This report shows a Blog Post Views success event segmented for people located in the UK. While I wish our content was so popular that people were reading blogs from midnight until the early morning hours, I am not sure that is really the case! Obviously, this data is skewed because the time zone of our report suite is on US Pacific time. Therefore, analysts in the UK would have to mentally shift everything eight hours on the fly, which is not ideal and can cause headaches.

So how do you solve this? How do you let the people in the US see data in Pacific time and those in the UK see data in their time zone? Way back in 2011, I wrote a post about shifting time zones using custom time parting variables and SAINT Classifications. This was a major hack and one that I wouldn’t really recommend unless you were desperate (but that was 2011!). Nowadays, using the power of Virtual Report Suites, there is a more elegant solution to the time zone issue (thanks to Trevor Paulsen from Adobe Product Management for the reminder).

Time-Zone Virtual Report Suites

Here are step-by-step instructions on how to solve the time zone paradox. First, you will create a new Virtual Report Suite and assign it a new name and a new time zone:

You can choose whether this Virtual Report Suite has any segments applied and/or contains all of your data or just a subset of your data in the subsequent settings screens.

When you are done, you will have a brand new Virtual Report Suite that has all data shifted to the UK time zone:

Now you are able to view all reports in the UK time zone.  To illustrate this, let’s look at the report above in the regular report suite side by side with the same report in the new Virtual Report Suite:

Both of these reports are for the same date and have the same UK geo-segmentation segment applied. However, as you can see, the data has been shifted eight hours. For example, Blog Post Views that previously looked like they were viewed by UK residents at 2:00am now show that they were viewed at 10:00am UK time. This can also be seen by looking at the table view and lining up the rows:

This provides a much more realistic view of the data for your international folks. In theory, you could have a different Virtual Report Suite for all of your major time zones.

So that is all you need to do to show data in different time zones. Just a handy trick if you have a lot of international users.

Industry Analysis, Tag Management, Technical/Implementation

Stop Thinking About Tags, and Start Thinking About Data

Nearly three weeks ago, I attended Tealium’s Digital Velocity conference in San Francisco. I’ve attended this event every year since 2014, and I’ve spent enough time using its Universal Data Hub (the name of the combined UI for AudienceStream, EventStream, and DataAccess, if you get a little confused by the way these products have been marketed – which I do), and attended enough conferences, to know that Tealium considers these products to be a big part of its future and a major part of its product roadmap. But given that the majority of my clients are still heavily focused on tag management and getting the basics under control, I’ve spent far more time in Tealium iQ than any of its other products. So I was a little surprised as I left the conference on the last day by the force with which my key takeaway struck me: tag management as we knew it is dead.

Back in 2016, I wrote about how much the tag management space had changed since Adobe bought Satellite in 2013. It’s been awhile since tag management was the sole focus of any of the companies that offer tag management systems. But what struck me at Digital Velocity was that the most successful digital marketing organizations – while considering tag management a prerequisite for their efforts – don’t really use their tools to manage tags at all. I reflected on my own clients, and found that the most successful ones have realized that they’re not managing tags at all – they’re managing data. And that’s why Tealium is in such an advantageous position relative to any of the other companies still selling tag management systems while Google and Adobe give it away for free.

This idea has been kicking around in my head for awhile now, and maybe I’m stubborn, but I just couldn’t bring myself to admit it was true. Maybe it’s because I still have clients using Ensighten and Signal – in spite of the fact that neither product seems to have committed many resources to their tag management products lately (they both seem much more heavily invested in identity and privacy these days). Or maybe it’s because I still think of myself as the “tag management guy” at Demystified, and haven’t been able to quite come to grips with how much things have changed. But my experience at Digital Velocity was really the final wake-up call.

What finally dawned on me at Digital Velocity is that Tealium, like many of its early competitors, really doesn’t think of itself as a tag management company anymore, either. They’ve done a much better job of disguising that though – because they continue to invest heavily in TiQ, and have even added some really great features lately (I’m looking at you, New JavaScript Code Extension). And maybe they haven’t really had to disguise it, either, because of a single decision they made very early on in their history: the decision to emphasize a data layer and tightly couple it with all the core features of the product. In my opinion, that’s the decision by an early tag management vendor that has had the most impact on the industry as a whole.

Most tag management vendors initially offered nothing more than code repositories outside of a company’s regular IT processes. They eventually layered on some minimal integration with a company’s “data layer” – but really without ever defining what a data layer was or why it was important. They just allowed you to go in and define data elements, write some code that instructed the TMS on how to access that data, and then – in limited cases – gave you the option of pushing some of that data to your different vendor tags.

On the other hand, Tealium told its customers up front that a good data layer was required to be successful with TiQ. They also clearly defined best practices around how that data layer should be structured if you wanted to tap into the power of their tool. And then they started building hundreds of different integrations (i.e. tags) that took advantage of that data layer. If they had stopped there, they would have been able to offer customers a pretty useful tool that made it easier to deploy and manage JavaScript tags. And that would have made Tealium a pretty similar company to all of its early competitors. Fortunately, they realized they had built something far more powerful than that – the backbone of a potentially very powerful customer data platform (or, as someone referred to Tealium’s tag management tool at DV, a “gateway drug” to its other products).
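
For context, Tealium’s data layer convention is just a plain JavaScript object (utag_data) declared on the page before the utag.js loader. A minimal hedged sketch, with property names that are purely illustrative rather than a prescribed schema, might look like this:

// Hedged sketch of a Tealium-style data layer; property names are illustrative only
var utag_data = {
  page_name     : 'services:security:super-series',
  site_section  : 'services',
  customer_type : 'known',
  order_total   : '149.99'
};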

The most interesting thing that I saw during those two days was that there are actual companies for which tag management is only a subset of what they are doing through Tealium. In previous years, Tealium’s own product team has showcased AudienceStream and EventStream. But this year, they had actual customers showing off real-world examples of the way that they have leveraged these products to do some pretty amazing things. Tealium’s customers are doing much more real-time email marketing than you can do through traditional integrations with email service providers. They’re leveraging data collected on a customer’s website to feed integrations with tools like Slack and Twilio to meet customers’ needs in real-time. They’re addressing legitimate concerns about the impact all these JavaScript tags have on page-load performance by doing more flexible server-side tagging than is possible through most tools. And they’re able to perform real-time personalization across multiple domains and devices. That’s some really powerful stuff – and way more fun to talk about than “tags.” It’s also the kind of thing every company can start thinking about now, even if it’s something you have to ramp up to first.

In conclusion, Tealium isn’t the only company moving in this direction. I know Adobe, Google, and Salesforce all have marketing tools that offer a ton of value to their customers. Segment offers the ability to do server-side integrations with many different marketing tools. But I’ve been doing tag management (either through actual products or my own code) for nearly 10 years, and I’ve been telling customers how important it is to have a solid data layer for almost as long – at Salesforce, we had a data layer before anyone actually called it that, and it was so robust that we used it to power everything we did. So to have the final confirmation that tag management is the past and that customer data is the future was a pretty cool experience for me. It’s exciting to see what Adobe Launch is doing with its extension community and the integration with the newest Adobe mobile SDKs. And there are all kinds of similar opportunities for other vendors in the space. So my advice to marketers is this: if you’re still thinking in terms of tags, or if you still think of all your third-party vendors as “silos,” make the shift to thinking about data and how to use it to drive your digital marketing efforts.

Photo Credit: Jonathan Poh (Flickr)

Featured, General

Analytics Demystified Interview Service Offering

Finding good analytics talent is hard! Whether you are looking for technical or analysis folks, it seems like many candidates are good from afar, but far from good! As someone who has been part of hundreds of analytics implementations/programs, I can tell you that having the right people makes all of the difference. Unfortunately, there are many people in our industry who sound like they know Adobe Analytics (or Google Analytics or Tealium, etc…), but really don’t.

One of the services that we have always provided to our clients at Demystified is the ability to have our folks interview prospective client candidates. For example, if a client of ours is looking for an Adobe Analytics implementation expert, I would conduct a skills assessment interview and let them know how much I think the candidate knows about Adobe Analytics. Since many of my clients don’t know the product as well as I do, they have found this to be extremely helpful.  In fact, I even had one case where a candidate withdrew from contention upon finding out that they would be interviewing with me, basically admitting that they had been trying to “BS” their way to a new job!

Recently, we have had more and more companies ask us for this type of help, so now Analytics Demystified is going to open this service up to any company that wants to take advantage of it. For a fixed fee, our firm will conduct an interview with your job candidates and provide an assessment about their product-based capabilities. While there are many technologies we can assess, so far most of the interest has been around the following tools:

  • Adobe Analytics
  • Google Analytics
  • Adobe Launch/DTM
  • Adobe Target
  • Optimizely
  • Tealium
  • Ensighten
  • Optimize
  • Google Tag Manager

If you are interested in getting our help to make sure you hire the right folks, please send an e-mail to contact@analyticsdemystified.com.

Adobe Analytics, Featured

Setting After The Fact Metrics in Adobe Analytics

As loyal blog readers will know, I am a big fan of identifying business requirements for Adobe Analytics implementations. I think that working with your stakeholders before your implementation (or re-implementation!) to understand what types of questions they want to answer helps you focus your efforts on the most important items and can reduce unnecessary implementation work. However, I am also a realist and acknowledge that there will always be times where you miss stuff. In those cases, you can start setting a new metric going forward for the thing you missed, but what about the data from the last few years? It would be ideal if you could create a metric today that would be retroactive such that it shows you data from the past.

This ability to set a metric “after the fact” is very common in other areas of analytics and there are even vendors like Heap, SnowPlow and Mixpanel that allow you to capture virtually everything and then set up metrics/goals afterwards. These tools capture raw data, let you model it as you see fit and change your mind on definitions whenever you want. For example, in Heap you can collect data and then one day decide that something you have been collecting for years should be a KPI and assign it a name. This provides a ton of flexibility. I believe that tools like Heap and SnowPlow are quite a bit different from Adobe Analytics and that each tool has its strengths, but for those who have made a long-term investment in Adobe Analytics, I wanted to share how you can have some of the Heap-like functionality in Adobe Analytics in case you ever need to assign metrics after the fact. This by no means is meant to discount the cool stuff that Heap or SnowPlow are doing, but rather just to show how this one cool feature of theirs can be mimicked in Adobe Analytics if needed.

After The Fact Metrics

To illustrate this concept, let’s imagine that I completely forgot to set a success event in Adobe Analytics when visitors hit my main consulting service page. I’d like to have a success event called “Adobe Analytics Service Page Views” when visitors hit this page, but as you can see here, I do not:

To do this, you simply create a new calculated metric that has the following definition:
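
In words, the definition is simply the Page Views metric with a hit-level segment applied inside the calculated metric builder. Conceptually (the page name here is hypothetical), it looks like this:

Adobe Analytics Service Page Views =
  Page Views [ hit segment: Page Name equals "adobe analytics consulting" ]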

This metric allows you to see the count of Adobe Analytics Service Page Views based upon the Page Name (or you could use URL) that is associated with that event and can then be used in any Adobe Analytics report:

So that is how simple it is to retroactively create a metric in Adobe Analytics. Obviously, this becomes more difficult if the metric you want is based on actions beyond just a page loading, but if you are tracking those actions in other variables (or ClickMap), you can follow the same process to create a calculated metric off of those actions.

Transitioning To A New Success Event

But what if you want to use the new success event going forward, but also want all of the historical data? This can be done as well with the following steps:

The first step would be to set the new success event going forward via manual tagging, a processing rule, or tag management. To do this, assign the new success event in the Admin Console:

The next step is to pick a date on which you will start setting this new success event and then start populating it. If you want it to be a clean break, I recommend making the switch at midnight on that day.

Next, you want to add the new success event to the preceding calculated metric so that you can have both the historical count and the count going forward:

However, this formula will double-count the event for all dates in which the new success event 12 has been set. Therefore, the last step is to apply two date-based segments to each part of the formula. The first date range contains the historical dates before the new success event was set. The second date range contains the dates after the new success event has been set (you can make the end date some date way into the future). Once both of these segments have been created, you can add them to the corresponding part of the formula so it looks like this:

This combined metric will use the page name for the old timeframe and the new success event for the new timeframe. Eventually, if desired, you can transition to using only the success event instead of this calculated metric when you have enough data in the success event alone.
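
Conceptually, the combined metric ends up looking something like this (the event number and date-range logic follow the steps above; the page name is hypothetical):

Adobe Analytics Service Page Views (Combined) =
    Page Views [ hit segment: Page Name equals "adobe analytics consulting" ] [ date range: before the cutover date ]
  + Event 12 [ date range: cutover date onward ]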

Summary

To wrap up, this post shows a way that you can create metrics for items that you may have missed in your initial implementation and provides a way to fix your original omission and combine the old and the new. As I stated, this functionality isn’t as robust as what you might get from Heap, SnowPlow or Mixpanel, but it can help in a pinch if you need it.

Adobe Analytics, Featured

Shifting Attribution in Adobe Analytics

If you are a veteran Adobe Analytics (or Omniture SiteCatalyst) user, you know that for years the term attribution was defined by whether an eVar was First Touch (Original Value) or Last Touch (Most Recent). eVar attribution was set up in the administration console and each eVar had a setting (and don’t bring up Linear because that is a waste!). If you wanted to see both First and Last Touch campaign code performance, you needed to make two separate eVars that each had different attribution settings. If you wanted to see “Middle Touch” attribution in Adobe Analytics, you were pretty much out of luck unless you used a “hack” JavaScript plug-in called Cross Visit Participation (thanks to Lamont C.).

However, this has changed in recent releases of the Adobe Analytics product. Now you can apply a bunch of pre-set attribution models including J Curve, U Curve, Time Decay, etc… and you can also create your own custom attribution model that assigns some credit to first, some to last and the rest divided among the middle values. These different attribution models can be built into Calculated Metrics or applied on the fly in metric columns in Analysis Workspace (not available for all Adobe Analytics packages). This stuff is really cool! To learn more about this, check out this video by Trevor Paulsen from Adobe.

However, this post is not about the new Adobe Analytics attribution models. Instead, I wanted to take a step back and look at the bigger picture of attribution in Adobe Analytics. This is because I feel that the recently added Attribution IQ functionality is fundamentally changing how I have always thought about where and how Adobe performs attribution. Let me explain. As I mentioned above, for the past decade or more, Adobe Analytics attribution has been tied to eVars. sProps didn’t really even have attribution since their values weren’t persistent and generally didn’t work with Success Events. But what has changed in the past year, is that attribution has shifted to metrics instead of eVars. Today, instead of having a First Touch and Last Touch campaign code eVar, you can have one eVar (or sProp – more on that later) that captures campaign codes and then choose the attribution (First or Last Touch) in whatever metric you care about. For example, if you want to see First Touch Orders vs. Last Touch Orders, instead of breaking down two eVars by each other like this…

…you can use one eVar and create two different Order metric columns with different attribution models to see the differences:

In fact, you could have metric columns for all available attribution models (and even create Calculated Metrics to divide them by each other) as shown here:

In addition, the new attribution models work with sProps as well. Even though sProp values don’t persist, you can use them with Success Events in Analysis Workspace and then apply attribution models to those metrics. This means that the difference between eVars and sProps is narrowing due to the new attribution model functionality.

To prove this, here is an Analysis Workspace table based upon an eVar…

…and here is the same table based upon an sProp:

What Does This Mean?

So, what does this mean for you? I think this changes a few things in significant ways:

  1. Different Paradigm for Attribution – You are going to have to help your Adobe Analytics users understand that attribution (First, Last Touch) is no longer something that is part of the implementation, but rather, something that they are empowered to create. I recommend that you educate your users on how to apply attribution models to metrics and what each model means. You will want to avoid “analysis paralysis” for your users, so you may want to suggest which model you think makes the most sense for each data dimension.
  2. Different Approach to Implementation – The shift in attribution from eVars to metrics means that  you no longer have to use multiple eVars to see different attribution models. Also, the fact that you can see success event attribution for sProps means that you can also use sProps if you are using Analysis Workspace.
  3. sProps Are Not Dead! – I have been on record saying that, outside of Pathing, sProps are just a relic of old Omniture days, but as stated above, the new attribution modeling feature is helping make them useful again! sProps can now be used almost like eVars, which gives you more variables. Plus, they have Pathing that is better than eVars in Flow reports (until the instances bug is fixed!). Eventually, I assume all eVars and sProps will merge and simply be “dimensions,” but for now, you just got about 50 more variables!
  4. Create Popular Metric/Attribution Combinations – I suggest that you identify your most important metrics and create different versions of them for the relevant attribution models and share those out so your users can easily access them.  You may want to use tags as I suggested in this post.
Featured, Testing and Optimization

Adobe Target Chrome Extension

I use many different testing solutions each day as part of my strategic and tactical support of testing programs here at Analytics Demystified. I am very familiar with how each of these different solutions functions and how to get the most value out of them. To that end, I had a Chrome Extension built that allows Adobe Target users to get much more value through visibility into test interaction, their Adobe Target Profile, and the bidirectional communication taking place. It has 23 (and counting!) powerful features, all for free. Check out the video below to see it in action.

Video URL: https://youtu.be/XibDjGXPY4E

To learn more details about this Extension and download it from the Chrome Store, click below:
MiaProva Chrome Extension

Adobe Analytics, Featured

Ingersoll Rand Case Study

One of my “soapbox” issues is that too few organizations focus on analytics business requirements and KPI definition. This is why I spend so much time working with clients to help them identify their analytics business requirements. I have found that having requirements enables you to make sure that your analytics solution/implementation is aligned with the true needs of the organization. For this reason, I don’t take on consulting engagements unless the customer agrees to spend time defining their business requirements.

A while back, I had the pleasure of working with Ingersoll Rand to help them transform their legacy Adobe Analytics implementation to a more business requirements driven approach. The following is a quick case study that shares more information on the process and the results:

The Demystified Advantage – Ingersoll Rand – September 2018

Adobe Analytics

Page Names with and Without Locale in Adobe Analytics

Have you found yourself in a situation where your pages in Adobe Analytics are specific to a locale but you would like to aggregate them for a global view? It really isn’t uncommon to collect pages with a locale. If your page names are in a URL format, then two localized versions of the same page may look like so:

/us/en/services/security/super-series/

/jp/jp/services/security/super-series/

Or if you are using a custom page name perhaps it looks like this:

techco:us:en:services:security:super-series

techco:jp:jp:services:security:super-series

For this example we’re going to use the URL version of the page name. This setup could have been put in place to provide the ability to see different locales of the same page next to each other, or maybe it was just the easiest or most practical way to generate a page name at the time. Suppose that you just inherited an implementation with this setup, but now you are getting questions of a more global nature. Your executives and users want information at a global level, while you still need the locale-specific detail. In order to meet both needs, you need a combined version of the pages but still want the flexibility to break out by locale. To do this we’ll keep our original report with pages like “/us/en/services/security/super-series” but create a version that combines those into something like “/services/security/super-series/”. This new value would represent the total across all locales such as /us/en, /jp/jp, or any others we have.

Since we need to do this retroactively, classifications are going to be the best approach here. We’ll set this up so that we have a new version of the pages report without a locale and use the rule builder to automate the classification. Here’s how it would work…

Classification Setup

If you have worked with classifications before then this will be easy. First, go to Report Suites under the Admin tab, select your report suite, and navigate to Traffic Classifications.

The page variable should show by default in the dropdown of the Traffic Classifications page. From here select the icon next to Page and click Add Classification. Name your new classification something like “Page w/o Locale” and Save.

Your classification schema should now look something like this:

Classification Automation

Now let’s automate the population of this new classification by using the Rule Builder. To do so, navigate to the Admin tab and then click on Classification Rule Builder. Select the “Add Rule Set” button and configure the rule set like so:

Purple Arrow: this is where you select the report suite and variable where you want the classification applied. In this case we are using the Page variable.

Green Arrow: when this process runs for the first time, this is how far back it should look to classify old values. For something like this I would select the maximum lookback. On future runs it will just use a one-month lookback, which works great.

Red Arrow: Here is where you set up the logic for how each page should be classified. The order here is important as each rule is applied to each page in sequence. In a case where multiple rules apply to a value, the last rule will win since it is later in the sequence. We are going to use that to our advantage here with the following two expressions:

  1. (.*) This will simply classify all pages with the original value. I’m doing this because many sites also have non-localized content in addition to the localized URLs. This ensures that all Page values are represented in our new report.
  2. ^\/..\/..(\/.*) This expression actually does something for our localized pages. There are several ways to write this expression, but this one tends to be simpler and shorter than others I’ve thought of (a quick sanity check follows this list). This will look for values starting with a slash and two characters, repeated twice (e.g. “/us/en”). It will then extract the following slash and anything after that. That means it would pull out the “/services/security/super-series/” from “/us/en/services/security/super-series/”.
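
If you want to sanity-check that second expression before saving the rule set, here is a hedged sketch you could run in a browser console against the sample page names from above (“/about-us/” is a made-up non-localized page):

// Quick check of the locale-stripping pattern against sample page names
var pattern = /^\/..\/..(\/.*)/;
['/us/en/services/security/super-series/',
 '/jp/jp/services/security/super-series/',
 '/about-us/' // non-localized page: no match, so rule 1 keeps the original value
].forEach(function (page) {
  var match = page.match(pattern);
  console.log(page, '->', match ? match[1] : page);
});
// e.g. "/us/en/services/security/super-series/" -> "/services/security/super-series/"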

Other considerations

If you have copied your page name into an eVar (hopefully so) then be sure to set up the same classification there.

If the Classification Rule Builder already has a rule set doing something for the page variable then you may need to add these rules to the existing rule set.

If you want to remind users that “Page w/o Locale” has the locale removed, you can also prefix the new values with something that indicates the locale was removed. That might be something like “[locale removed]” or “/**/**” or whatever works for you. To do this you would just use “[locale removed]$1” instead of “$1” in the second rule of the rule set.

If you are using a custom page name like “techco:jp:jp:services:security:super-series” then the second rule in the Rule Builder would need to be modified. Instead of the expression I outlined above, it would be something like “^([^:]*):..:..(:.*)” and you would set the “To” column to “$1$2”. This will pull out the locale from the middle of the string and give you a final value such as “techco:services:security:super-series”.
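
The same sort of quick check works for the custom page name variant (again, a hedged console sketch):

// Verify the colon-delimited variant and the "$1$2" output
var colonPattern = /^([^:]*):..:..(:.*)/;
var match = 'techco:jp:jp:services:security:super-series'.match(colonPattern);
console.log(match ? match[1] + match[2] : 'no match'); // "techco:services:security:super-series"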


Adobe Analytics, Featured

Analysis Workspace Drop-downs

Recently, the Adobe Analytics team added a new Analysis Workspace feature called “Drop-downs.” It has always been possible to add Adobe Analytics components like segments, metrics, dimensions and date ranges to the drop zone of Analysis Workspace projects. Adding these components allowed you to create “Hit” segments based upon what was brought over or, in the case of a segment, segment your data accordingly. Now, with the addition of drop-downs, this has been enhanced to allow you to add a set of individual elements to the filter area and then use a drop-down feature to selectively filter data. This functionality is akin to the Microsoft Excel Filter feature that lets you filter rows of a table. In this post, I will share some of the cool things you can do with this new functionality.

Filter on Dimension Values

One easy way to take advantage of this new feature is to drag over a few of your dimension values and see what it is like to filter on each. To do this, you simply find a dimension you care about in the left navigation and then click the right chevron to see its values like this:

Next you can use the control/shift key to pick the values you want (up to 50) and drag them over to the filter bar. Before you drop them, you must hold down the shift key to make it a drop-down:

When this is done, you can see your items in the drop-down like this:

Now you can select any item and all of your Workspace visualizations will be filtered. For example, if I select my name in the blog post author dimension, I will see only blog posts I have authored:

Of course, you can add as many dimensions as you’d like, such as Visit Number and/or Country. For example, if I wanted to narrow my data down to my blog posts viewed from the United States during a first visit, I might choose the following filters:

This approach is likely easier for your end-users to understand than building complex segments.

Other Filters

In addition to dimensions, you can create drop-downs for things like Metrics, Time Ranges and Segments. If you want to narrow your data down to cases in which a specific Metric was present, you can drag over the Metrics you care about and filter like this:

Similarly, you can filter on Date Ranges that you have created in your implementation (note that this will override whatever dates you have selected in the calendar portion of the project):

One of the coolest parts of this new feature is that you can also filter on Segments:

This means that instead of having multiple copies of the same Analysis Workspace project for different segments, you can consolidate down to one version and simply use the Segment drop-down to see the data you care about. This is similar to how you might use the report suite drop-down in the old Reports & Analytics interface. This should also help improve the performance times of your Analysis Workspace projects.

Example Use – Solution Design Project

Over the last few weeks, I have been posting about a concept of adding your business requirements and solution design to an Analysis Workspace project. In the final post of the series (I suggest reading all parts in order), I talked about how you could apply segmentation to the solution design project to see different completion percentages based upon attributes like status or priority (shown here):

Around this time, after reading my blog post, one of my old Omniture cohorts tweeted this teaser message:

At the time, I didn’t know what Brandon was referring to, but as usual, he was absolutely correct that the new drop-down feature would help with my proposed solution design project. Instead of having to constantly drag over different dimension/value combinations, the new drop-down feature allows any user to select the ways they want to filter the solution design project and, once they apply the filters, the overall project percentage completion rate (and all other elements) will dynamically change. Let’s see how this works through an example:

As shown in my previous post, I have a project that is 44.44% complete. Now I have added a few dimension filters to the project like this:

Now, if I choose to filter by “High” priority items, the percentage changes to 66.67% and only high priority requirements are shown:

Another cool side benefit of this is that the variable panel of the project now only shows variables that are associated with high priority requirements:

If I want to see how I am doing for all of Kevin’s high priority business requirements, I can simply select both high priority and then select Kevin in the requirement owner filter:

This is just a fun way to see how you can apply this new functionality to old Analysis Workspace projects into which you have invested time.

Future Wishlist Items

While this new feature is super-cool, I have already come up with a list of improvements that I’d like to eventually see:

  • Ability to filter on multiple items in the list instead of just one item at a time
  • Ability to clear the entire filter without having to remove each item individually
  • Ability to click a button to turn currently selected items (across all filters) into a new Adobe Analytics Segment
  • Ability to have drop-down list values generated dynamically based upon a search criteria (using the same functionality available when filtering values in a freeform table shown below)

Conferences/Community, Featured

ACCELERATE 2019

Back in 2015, the Analytics Demystified team decided to put on a different type of analytics conference we called ACCELERATE. The idea was that we as partners and a few select other industry folks would share as much information as we could in the shortest amount of time possible. We chose a 10 tips in 20 minutes format to force us and our other presenters to only share the “greatest hits” instead of the typical (often boring) 50 minute presentation with only a few minutes worth of good information. The reception of these events (held in San Francisco, Boston, Chicago, Atlanta and Columbus) was amazing. Other than some folks feeling a bit overwhelmed with the sheer amount of information, people loved the concept. We also coupled this one day event with some detailed training classes that attendees could optionally attend. The best part was that our ACCELERATE conference was dramatically less expensive than other industry conferences.

I am pleased to say that, after a long hiatus, we are bringing back ACCELERATE in January of 2019 in the Bay Area! As someone who attends a LOT of conferences, I still find that there is a bit of a void that we once again hope to fill with an updated version of ACCELERATE. In this iteration, we are going to do some different things in the agenda in addition to our normal 10 tips format. We hope to have a few roundtable discussions where attendees can network and have some face-to-face discussions like what is available at the popular DA Hub conference. We are also bringing in product folks Ben Gaines (Adobe) and Krista Seiden (Google) to talk about the two most popular digital analytics tools. I will even be doing an epic bake-off comparison of Adobe Analytics and Google Analytics with my partner Kevin Willeitner! We may also have some other surprises coming as the event gets closer…

You will be hard-pressed to find a conference at this price that provides as much value in the analytics space. But seats are limited and our past ACCELERATE events all sold out, so I suggest you check out the information now and sign-up before spaces are gone. This is a great way to start your year with a motivating event, at a great location, with great weather and great industry peers! I hope to see you there…

Featured, Testing and Optimization

Adobe Target and Marketo

The Marketo acquisition by Adobe went from rumor to fact earlier today.  This is a really good thing for the Adobe Target community.

I’ve integrated Adobe Target and Marketo together many times over the years and the two solutions complement each other incredibly well.  Independent of this acquisition and of marketing automation in general, I’ve also been saying for years that organizations need to shift their testing programs such that the key focus is on the Knowns and Unknowns if they are to succeed.  Marketo can maybe help those organizations with this vision if it is part of their Adobe stack since Marketo is marketing automation for leads (Unknowns) and customers (Knowns).

The assimilation of Marketo into the Adobe Experience Cloud will definitely deepen the integration between the multiple technologies, but let me lay out here how Target and Marketo work together today to convey the value the two bring together.

Marketo

For those of you in the testing community who are unfamiliar with Marketo or Marketing Automation in general, let me lay out at a very high level some of the things these tools do.

Initially, and maybe most commonly, marketing automation starts out in the lead management space, which means that when you fill out those forms on websites, the management of that “lead” is then handled by these systems. At that point, you get emails, deal with salespeople, consume more content, etc. The management of that process is handled here and, if done well, prospects turn into customers. Unknowns become Knowns.

Once you are Known, a whole new set of Marketing and Customer Marketing kicks in and that is also typically managed by Marketing Automation technologies like Marketo.

Below is an image taken directly from Marketo’s Solutions website that highlights their offering.

Image from: https://www.marketo.com/solutions/

Adobe Target

Just like Marketo, testing solutions like Adobe Target focus on different audiences as well. The most successful testing programs out there have testing roadmaps and personalization strategies dedicated to getting Unknowns (prospects) to become Knowns (customers). And when that transition takes place, these new Knowns then fall into tests and personalization initiatives focused on different KPIs than becoming a Known.

Combining the power of testing and the quantification/reporting of consumer experiences (Adobe Target) with the power of marketing automation (Marketo) provides value significantly higher than what these solutions provide independently.

Target into Marketo

Envision a scenario where you bring testing to Unknowns and use the benefits of testing to find ideal experiences that lead to more form completions. This is a no-brainer for Marketo customers and works quite well. At this point, when tests are doing their thing, it is crucial to communicate or share this test data with Marketo when end users make the transition from Unknowns to Knowns. This data will help with the management of leads because we will know what test and test experience influenced their transition to becoming a Known.

Just like Target, Marketo loves data, and the code below is what Target would deliver with tests targeted to Unknowns.  It passes Marketo not only the test name but also the Adobe Target ID, in case Marketo users want to retarget certain Adobe Target visitors.

var customData = {value: '${campaign.name}:${user.recipe.name}'};
rtp('send', 'AdobeTarget', customData);
var customData = {value: '${profile.mboxPCId}'};
rtp('send', 'AdobeTarget_ID', customData);

Marketo into Target

Adobe Target manages a rich profile that can be made up of online behaviors, 3rd Party Data, and offline data.  Many Target customers use this profile for strategic initiatives that change and quantify consumer experiences based on the values of the profile attributes associated with this profile or Adobe Target ID.

In the Marketo world, there are many actions or events that take place as leads are nurtured and customers are marketed to.  Organizations differ on how the specific actions or stages of lead or customer management/marketing are defined, but regardless of the definitions, those stages/actions/events can be mirrored or shared with Adobe Target.  This allows Marketo users to run tests online that are coordinated with the efforts they manage offline – hence making those offline efforts more successful.

Push Adobe Target ID into Marketo

Marketo can get this data into Target in one of two ways.  The first method uses the code that I shared above where the Adobe Target ID is shared with Marketo.  Marketo can then generate a report or gather all Adobe Target IDs at a specific stage/event/action and then set up a test targeted to them.  It is literally that easy.

Push Marketo ID into Adobe Target

The second method is a more programmatic approach: the Marketo visitor ID is passed to Adobe Target as a special mbox parameter called mbox3rdPartyId.  When Adobe Target sees this value, it immediately marries its own ID to that ID so that any data shared with Adobe under that ID becomes available for testing efforts.  This is the same process many organizations use with their own internal IDs.  At that point, any and all (non-PII) data can be sent to Adobe Target by way of APIs using nothing more than the Marketo ID – all possible because the ID was passed to Adobe Target while the consumer was on the website.
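
As a rough sketch of how that hand-off might look with at.js (the marketoLeadId variable below is hypothetical; how you surface the Marketo ID on the page depends on your Munchkin/RTP setup):

// Define this before at.js loads so the parameter rides along on the global mbox call
window.targetPageParams = function () {
  return {
    "mbox3rdPartyId": window.marketoLeadId // hypothetical variable holding the Marketo lead ID
  };
};

Once Target has received that parameter, profile data keyed to the Marketo ID can be sent in via the profile APIs, as described above.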

And then the cycle repeats itself with Adobe Target communicating test and experience names again to Marketo but this time for the Knowns – thus making that continued management more effective.

Adobe Analytics, Featured

Bonus Tip: Quantifying Content Creation

Last week and this week, I shared some thoughts on how to quantify content velocity in Adobe Analytics. As part of those posts, I showed how to assign a publish date to each piece of content via a SAINT Classification like this:

Once you have this data in Adobe Analytics, you can download your SAINT file and clean it up a bit to see your content by date published in a table like this:

The last three columns split out the Year and the Month, and add a “1” for each post. Adding these three columns allows you to build a pivot table to see how often content is published by both Month and Year:

Then you can chart these like you would any other pivot table. Here are blog posts by month:

Here are blog posts by year:

As long as you are going to go through the work of documenting the publish date of your key content, you can use this bonus tip to leverage your SAINT Classifications file to do some cool reporting on your content creation.

Adobe Analytics, Featured

Quantifying Content Velocity in Adobe Analytics – Part 2

Last week, I shared how to quantify content velocity in Adobe Analytics. This involved classifying content with the date it was published and looking at subsequent days to see how fast it is viewed. As part of this exercise, the date published was added via the SAINT classification and dates were grouped by Year and Month & Year. At the same time, it is normal to capture the current Date in an eVar (as I described in this old blog post). This Date eVar can also be classified into Year and Year & Month. The classification file might look like this:

Once you have the Month-Year for both Blog Post Launches and Views, you can use the new cross-tab functionality of Analysis Workspace to do some analysis. To do this, you can create a freeform table and add your main content metric (Blog Post Views in my case) and break it down by the Launch Month-Year:

In this case, I am limiting data to 2018 and showing the percentages only. Next, you can add the Blog Post View Month-Year as cross-tab items by dragging over this dimension from the left navigation:

This will insert five Blog Post View Month-Year values across the top like this:

From here, you can add the missing three months, put them in chronological order and then change the column settings like this:

Next, you can change the column percentages so they go by row instead of column, by clicking on the row settings gear icon like this:

After all of this, you will have a cross-tab table that looks like this:

Now you have a cross-tab table that allows you to see how blog posts launched in each month are viewed in subsequent months. In this case, you can see that from January to August, for example, blog posts launched in February had roughly 59% of their views take place in February and the remaining 41% over the next few months.

Of course, the closer you are to the month content was posted, the higher the view percentage will be for the current month and the months that follow. This is because, over time, more visitors will end up viewing older content. You can see this above by the fact that 100% of content launched in August was viewed in August (duh!). But in September, August will look more like July does in the table above, once September steals a share of the views of content launched in August.

This type of analysis can be used to see how sticky your content is in a way that is similar to the Cohort Analysis visualization. For example, four months after content was launched in March, its view % was 3.5%, whereas, four months after content was released in April, its view % was 5.3%. There are many ways that you can dissect this data and, of course, since this is Analysis Workspace, if you ever want to do a deeper dive on one of the cross-tab table elements, you can simply right-click and build an additional visualization. For example, if I want to see the trend of February content, I can simply right-click on the 59.4% value and add an area visualization like this:

This would produce an additional Analysis Workspace visualization like this:

For a bonus tip related to this concept, click here.

Conferences/Community, Digital Analytics Community

Registration for ACCELERATE 2019 is now open!

Analytics Demystified is excited to have opened registration for ACCELERATE 2019 on January 25th in Los Gatos, California.  You can see the entire agenda including speakers, topics, and information about our training day and the Toll House hotel via the following links:

Registration for ACCELERATE is only $299 USD, making the conference among the most affordable in the industry.  Registration for training day is only $999 USD and includes the cost of the conference.  Seats are limited and available on a first-come basis … so don’t delay in signing up for ACCELERATE 2019!

Adobe Analytics, Featured

Quantifying Content Velocity in Adobe Analytics

If publishing content is important to your brand, there may be times when you want to quantify how fast users are viewing your content and how long it takes for excitement to wane. This is especially important for news and other media sites that have content as their main product. In my world, I write a lot of blog posts, so I also am curious about which posts people view and how soon they are viewed. In this post, I will share some techniques for measuring this in Adobe Analytics.

Implementation Setup

The first step to tracking content velocity is to assign a launch date to each piece of content, which is normally the publish date. Using my blog as an example, I have created a SAINT Classification of the Blog Post Title eVar and classified each post with the publish date:

Here is what the SAINT File looks like when completed:

The next setup step is to set a date eVar on every website visit. This is as simple as capturing today’s date in an eVar on every hit, which I blogged about back in 2011. Having the current date will allow you to compare the date the post was viewed with the date it was published. Here is an example on my site:
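
If you use AppMeasurement, a minimal sketch of this might look like the following (eVar5 is a hypothetical variable number; use whichever eVar you have reserved for the date):

s.usePlugins = true;
s.doPlugins = function (s) {
  // Capture today's date on every hit, e.g. "8/7/2018"
  var d = new Date();
  s.eVar5 = (d.getMonth() + 1) + "/" + d.getDate() + "/" + d.getFullYear();
};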

Reporting in Analysis Workspace

Once the setup is complete, you can move onto reporting. First, I’ll show how to report on the data in Analysis Workspace. In Workspace, you can create a panel and add the content item you care about (blog post in my example) and then break it down by the launch date and the view date. I recommend setting the date range to begin with the publish date:

In this example, you can see that the blog post launched on 8/7/18 and that 36% of total blog post views since then occurred on the launch date. You can also see how many views took place on each date thereafter. As you would expect, most of the views took place around the launch date and then slowed down in subsequent days. If you want to see how this compares to another piece of content, you can create a new panel and view the same report for another post (making sure to adjust the date range in the new panel to start with the new post’s launch date):

By viewing two posts side by side, I can start to see how usage varies. The unfortunate part is that it is difficult to see which date is “Launch Date,” “Launch Date +1,” “Launch Date +2,” etc… Therefore, Analysis Workspace, in this situation, is good for seeing some ad-hoc data (no pun intended!), but using Adobe ReportBuilder might actually prove to be a more scalable solution.

Reporting in Adobe ReportBuilder

When you want to do some more advanced formulas, sometimes Adobe ReportBuilder is the best way to go. In this case, I want to create a data block that pulls in all of my blog posts and the date each post was published like this:

Once I have a list of the content I care about (blog posts in this example), I want to pull in how many views of the content occurred each date after the publish date. To do this, I have created a set of reporting parameters like this:

The items in green are manually entered by setting them equal to the blog post name and publish date I am interested in from the preceding data block. In this case, I am setting the Start Date equal to the sixth cell in the second column and the Blog Post equal to the cell to the left of that. Once I have done that I create a data block that looks like this:

This will produce the following table of data:

Now I have a daily report of content views beginning with the publish date. Next, I created a table that references this table that captures the launch date and the subsequent seven days (you can use more days if you want). This is done by referencing the first eight rows in the preceding table and then creating a sum of all other data to create a table that looks like this:

In this table, I have created a dynamic seven-day distribution and then lumped everything else into the last row. Then I have calculated the percentage and added an incremental percentage formula as well. These extra columns allow me to see the following graphs on content velocity:

The cool part about this process is that it only takes 30 seconds to produce the same reports/graphs for any other piece of content (a blog post in my example). All you have to do is alter the items in green and then refresh the data block. Here is the same reporting for a different blog post:

You can see that this post had much more activity early on, whereas the other post started slow and increased later. You could even duplicate each tab in your Excel worksheet so you have one tab for each key content item and then refresh the entire workbook to update the stats for all content at once.

Check out Part 2 of this post here: https://analyticsdemystified.com/featured/quantifying-content-velocity-in-adobe-analytics-part-2/

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace – Part 4

Last week, I shared how to calculate and incorporate your business requirement completion percentage in Analysis Workspace as part of my series of posts on embedding your business requirements and Solution Design in Analysis Workspace (Part 1, Part 2, Part 3). In this post, I will share a few more aspects of the overall SDR in Workspace solution in case you endeavor to try it out.

Updating Business Requirement Status

Over time, your team will add and complete business requirements. In this solution, adding new business requirements is as simple as uploading a few more rows of data via Data Sources as shown in the “Part 2” blog post. In fact, you can re-use the same Data Sources template and FTP info to do this. When uploading, you have two choices. You can upload only new business requirements or you can re-upload all of your business requirements each time, including the new ones. If you upload only the new ones, you can tie them to the same date you originally used or use the current date. Using the current date allows you to see your requirements grow over time, but you have to be mindful to make sure your project date ranges cover the timeframe for all requirements. What I have done is re-upload ALL of my business requirements monthly and change the Data Sources date to the 1st of each month. Doing this allows me to see how many requirements I had in January, February, March, etc., simply by changing the date range of my SDR Analysis Workspace project. The only downside of this approach is that you have to be careful not to include multiple months or you will see the same business requirements multiple times.

Once you have all of your requirements in Adobe Analytics and your Analysis Workspace project, you need to update which requirements are complete and which are not. As business requirements are completed, you will update your business requirement SAINT file to change the completion status of business requirements. For example, let’s say that you re-upload the requirements SAINT file and change two requirements to be marked as “Complete” as shown here in red:

Once the SAINT file has processed (normally 1 day), you would see that 4 out of your 9 business requirements are now complete, which is then reflected in the Status table of the SDR project:

Updating Completion Percentage

In addition, as shown in Part 3 of the post series, the overall business requirement completion percentage would be automatically updated as soon as the two business requirements are flagged as complete. This means that the overall completion percentage would move from 22.22% (2/9) to 44.44% (4/9):

Therefore, any time you add new business requirements, the overall completion percentage would decrease, and any time you complete requirements, the percentage would increase.

Using Advanced Segmentation

For those that are true Adobe Analytics geeks, here is an additional cool tip. As mentioned above, the SAINT file for the business requirements variable has several attributes. These attributes can be used in segments just like anything else in Adobe Analytics. For example, here you see the “Priority” SAINT Classification attribute highlighted:

This means that each business requirement has an associated Priority value, in this case, High, Medium or Low, which can be seen in the left navigation of Analysis Workspace:

Therefore, you can drag over items to create temporary segments using these attributes. Highlighted here, you see “Priority = High” added as a temporary segment to the SDR panel:

Doing this applies the segment to all project data, so only the business requirements that are marked as “High Priority” are included in the dashboard components. After the segment is applied, there are now three business requirements marked as high priority, as shown in our SAINT file:

Therefore, since two of those three “High Priority” business requirements are complete after the upload described above, the overall implementation completion percentage automatically changes from 44.44% to 66.67% (2 out of 3), as shown here (I temporarily unhid the underlying data table in case you want to see the raw data):

As you can see, the power of segmentation is fully at your disposal to make your Requirements/Solution Design project highly dynamic! That could mean segmenting by requirement owner, variable or any other data points represented within the project! For example, once we apply the “High Priority” segment to the project as shown above, viewing the variable portion of the project displays this:

This now shows all variables associated with “High Priority” business requirements.  This can be useful if you have limited time and/or resources for development.

Another example might be creating a segment for all business requirements that are not complete:

This segment can then be applied to the project as shown here to only see the requirements and variables that are yet to be implemented:

As you can see, there are some fun ways that you can use segmentation to slice and dice your Solution Design! Pretty cool huh?

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace – Part 3

Over the past two weeks, I have been posting about how to view your business requirements and solution design in Analysis Workspace. First, I showed how this would look in Workspace and then I explained how I created it. In this post, I am going to share how you can extend this concept to calculate the completion percentage of business requirements directly within Analysis Workspace. Completion percentage is important because Adobe Analytics implementations are never truly done. Most organizations are continuously doing development work and/or adding new business requirements. Therefore, one internal KPI that you may want to monitor and share is the completion percentage of all business requirements.

Calculating Requirement Percentage Complete

As shown in the previous posts, you use Data Sources to upload a list of business requirements and each business requirement has one or more Adobe Analytics variables associated to it:

When this is complete, you can see a report like this:

Unfortunately, this report is really showing you how many total variables are being used, not the number of distinct business requirements (Note: You could divide the “1” in event30 by the number of variables, but that can get confusing!). This can be seen by doing a breakdown by the Variable eVar:

Since your task is to see how many business requirements are complete, you can upload a status for each business requirement via a SAINT file like this:

This allows you to create a new calculated metric that counts how many business requirements have a status of complete (based upon the SAINT Classification attribute) like this:

However, this is tricky, because the SAINT Classification that is applied to the Business Requirement metric doesn’t sum the number of completed business requirements, but rather the number of variables associated with completed requirements. This can be seen here:

What is shown here is that there are five total variables associated with completed business requirements out of twenty-five total variables associated with all business requirements. You could divide these two to show that your implementation is 20% complete (5/25), but that is not really accurate. The reality is that two out of nine business requirements are complete, so your actual completion percentage is 22.22% (2/9).

So how do you solve this? Luckily, there are some amazing functions included in Adobe Analytics that can be used to do advanced calculations. In this case, what you want to do is count how many business requirements are complete, not how many variables are complete. To do this, you can use an IF function with a GREATER THAN function to set each row equal to either “1” or “0” based upon its completion status using this formula:
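
In rough form, with placeholder metric names, the per-row calculation is:

IF( GREATER THAN( Completed Requirements, 0 ), 1, 0 )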

This produces the numbers shown in the highlighted column here:

Next, you want to divide the number of rows that have a value of “1” by the total number of rows (which represents the number of requirements). To do this, you simply divide the preceding metric by the ROW COUNT function, which will produce the numbers shown in the highlighted column here:

Unfortunately, this doesn’t help that much, because what you really want is the sum of the rows (22.22%) versus seeing the percentages in each row. However, you can wrap the previous formula in a COLUMN SUM function to sum all of the individual rows. Here is what the final formula would look like:
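
Spelled out with the same placeholder names, the final formula is roughly:

Requirement Completion % = COLUMN SUM( IF( GREATER THAN( Completed Requirements, 0 ), 1, 0 ) / ROW COUNT )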

This would then produce a table like this:

Now you have the correct requirement percentage completion rate. The last step is to create a new summary number visualization using the column heading in the Requirement Completion % column as shown highlighted here:

To be safe, you should use the “lock” feature to make sure that this summary number will always be tied to the top cell in the column like this:

Before finishing, there are a few clean-up items left to do. You can remove any extraneous columns in the preceding table (which I added just to explain the formula) to speed up the overall project so the final table looks like this:

You can also hide the table completely by unchecking the “Show Data Source” box (which will avoid confusing your users):

Lastly, you can move the completion percentage summary number to the top of the project where it is easily visible to all:

So now you have an easy way to see the overall business requirement completion % right in your Analysis Workspace SDR project!

[Note: The only downside of this overall approach is that the completion status is flagged by a SAINT Classification, which, by definition, is retroactive. This means that the Analysis Workspace project will always show the current completion percentage and will not record the history. If that is important to you, you’d have to import two success events for each business requirement via Data Sources: one for requirements and another for completed requirements, and then use formulas similar to the ones described above.]

Click here to see Part 4 for even more cool things related to this concept!

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace – Part 2

Last week, I wrote about a concept of having your business requirements and SDR inside Analysis Workspace. My theory was that putting business requirements and implementation information as close to users as possible could be a good thing. Afterwards, I had some folks ask me how I implemented this, so in this post I will share the steps I took. However, I will warn you that my approach is definitely a “hack” and it would be cool if, in the future, Adobe provided a much better way to do this natively within Adobe Analytics.

Importing Business Requirements (Data Sources)

The first step in the solution I shared is getting business requirements into Adobe Analytics so they can be viewed in Analysis Workspace. To do this, I used Data Sources and two conversion variables – one for the business requirement number and another for the variables associated with each requirement number. While this can be done with any two conversion variables (eVars), I chose to use the Products variable and another eVar because my site wasn’t using the Products variable (since we don’t sell a physical product). You may choose to use any two available eVars. I also used a Success Event because when you use Data Sources, it is best to have a metric to view data in reports (other than occurrences). Here is what my data sources file looked like:

Doing this allowed me to create a one-to-many relationship between Req# (Products) and the variables for each (eVar17). The numbers in event 30 are inconsequential, so I just put a “1” for each. Also note that you need to associate a date with data being uploaded via Data Sources. The cool thing about this is that you can change your requirements when needed by re-uploading the entire file at a later date (keeping in mind that you need to choose your date ranges carefully so you don’t get the same requirement in your report twice!). Another reason I uploaded the requirement number and the variables into conversion variables is that these data points should not change very often, whereas many of the other attributes will change (as I will show next).

Importing Requirement & Variable Meta-Data (SAINT Classifications)

The next step of the process is adding meta-data to the two conversion variables that were imported. Since the Products variable (in my case) contains data related to business requirements, I added SAINT Classifications for any meta-data that I would want to upload for each business requirement. This included attributes like description, owner, priority, status and source.

Note that these attributes are likely to change over time (e.g., status), so using SAINT allows me to update them by simply uploading an updated SAINT file. Here is the SAINT file I started with:

The next meta-data upload required is related to variables. In my case, I used eVar17 to capture the variable names and then classified it like this:

As you can see, I used classifications and sub-classifications to document all attributes of variables. These attributes include variable types, descriptions and, if desired, all of the admin console attributes associated with variables. Here is what the SAINT file looks like when completed:

[Note: After doing this and thinking about it for a while, in hindsight, I probably should have uploaded Variable # into eVar17 and made variable name a classification in case I want to change variable names in the future, so you may want to do that if you try to replicate this concept.]

Hence, when you bring together the Data Sources import and the classifications for business requirements and variables, you have all of the data you need to view requirements and associated variables natively in Adobe Analytics and Analysis Workspace as shown here:

Project Curation

Lastly, if you want to minimize confusion for your users in this special SDR project, you can use project curation to limit the items that users will see in the project to those relevant to business requirements and the solution design. Here is how I curated my Analysis Workspace project:

This made it so users only saw these elements by default:

Final Thoughts

This solution has a bit of set-up work, but once you do that, the only ongoing maintenance is uploading new business requirements via Data Sources and updating requirements and variable attributes via SAINT Classifications. Obviously, this was just a quick & dirty thing I was playing around with and, as such, not something for everyone. I know many people are content with keeping this information in spreadsheets, in Jira/Confluence or SharePoint, but I have found that this separation can lead to reduced usage. My hope is that others out there will expand upon this concept and [hopefully] improve it. If you have any additional questions/comments, please leave a comment below.

To see the next post in this series, click here.

Adobe Analytics, Featured

Adobe Analytics Requirements and SDR in Workspace

Those who know me, know that I have a few complaints about Adobe Analytics implementations when it comes to business requirements and solution designs. You can see some of my gripes around business requirements in the slides from my 2017 Adobe Summit session and you can watch me describe why Adobe Analytics Solution Designs are often problematic in this webinar (free registration required). In general, I find that:

  • Too few organizations have defined analytics business requirements
  • Most Solution Designs are simply lists of variables and not tied to business requirements
  • Oftentimes, Solution Designs are outdated/inaccurate

When I start working with new clients, I am shocked at how few have their Adobe Analytics implementation adequately organized and documented. One reason for this is that requirements documents and solution designs tend to live on a [digital] shelf somewhere, and as you know, out of sight often means out of mind. For this reason, I have been playing around with something in this area that I wanted to share. To be honest, I am not sure if the concept is the right solution, but my hope is that some of you out there can possibly think about it and help me improve upon it.

Living in Workspace

It has become abundantly clear that the future of Adobe Analytics is Analysis Workspace. If you haven’t already started using Workspace as your default interface for Adobe Analytics, you will be soon. Most people are spending all of their time in Analysis Workspace, since it is so much more flexible and powerful than the older “SiteCatalyst” interface. This got me thinking… “What if there were a way to house all of your Adobe Analytics business requirements and the corresponding Solution Design as a project right within Analysis Workspace?” That would put all of your documentation a few clicks away from you at all times, meaning that there would be no excuse to not know what is in your implementation, which variables answer each business requirement and so on.

Therefore, I created this:

The first Workspace panel is simply a table of contents with hyperlinks to the panels below it. The following will share what is contained within each of the Workspace panels.

The first of these panels is simply a list of all business requirements in the Adobe Analytics implementation, which for demo purposes is only two:

The second panel shows the same business requirements split out by business priority, in case you want to look at ones that are more important than others:

One of the ways you can help your end-users understand your implementation is to make it clear which Adobe Analytics variables (reports) are associated with each business requirement. Therefore, I thought it would make sense to let users breakdown each business requirement by variable as shown here:

Of course, there will always be occasions where you just want to see a list of all of your Success Events, eVars and sProps, so I created a breakdown by variable type:

Since each business requirement should have a designated owner, the following breakdown allows you to see all business requirements broken down by owner:

Lastly, you may want to track which business requirements have been completed and which are still outstanding. The following breakdown allows you to see requirements by current implementation status:

Maximum Flexibility

As you can see, the preceding Analysis Workspace project, and panels contained within, provide an easy way to understand your Adobe Analytics implementation. But since you can break anything down by anything else in Analysis Workspace, these are just some sample reports of many more that could be created. For example, what if one of my users wanted to drill deep into the first business requirement and see what variables it uses, descriptions of those variables and even the detailed settings of those variables (i.e. serialization, expiration, etc…)? All of these components can be incorporated into this solution such that users can simply choose from a list of curated Analysis Workspace items (left panel) and drop them in as desired, as shown here:

Granted, it isn’t as elegant as seeing everything in an Excel spreadsheet, but it is convenient to be able to see all of this detail without having to leave the tool! And maybe one day, it will be possible to see multiple items on the same row in Analysis Workspace, which would allow this solution to look more like a spreadsheet. I also wish there were a way to hyperlink right from the variable (report) name to a new project that opens with that report, but maybe that will be possible in the future.

If you want to see the drill-down capabilities in action, here is a link to a video that shows me doing drill-downs live:

Summary

So what do you think? Is this something that your Adobe Analytics users would benefit from? Do you have ideas on how to improve it? Please leave a comment here…Thanks!

P.S. To learn how I created the preceding Analysis Workspace project, check out Part Two of this post.

Tag Management

GTM: Using Multiple Inputs for RegEx Tables

I’m a big fan of RegEx table variables in Google Tag Manager. These are especially useful if you have implemented a GTM blacklist with your setup to avoid custom scripting. If that is your situation (likely not), then this variable type will provide some flexibility that you wouldn’t have otherwise.  Keep in mind, though, that RegEx Tables have one limiting factor that sometimes takes a bit of extra work…they only allow for a single input variable:

This can be limiting in scenarios where you want to have an output that depends on a variety of inputs. As an example, let’s say that we want to deploy a bunch of Floodlights, each with a different activity string, based on a combination of an event category, event action and URL path. For this example, you can just assume that the event category and action are being pushed into the dataLayer. Now, I could probably create a trigger for each Floodlight and avoid doing any of this; however, I find that approach less scalable, and it tends to make a mess of your GTM configuration. Let’s just say I don’t want to be miserable, so I decide not to create a trigger for each Floodlight. Instead, we can have all that logic in one place by concatenating together the different variables we want and using the RegEx table to identify a match as needed. To do this I just create a new variable that pulls in all the other variables like so:

Notice that I like to use a name/value concatenation so it ends up looking like URL parameters. I use “^” as the delimiter since that is pretty unique. I avoid using delimiters that are leveraged in URLs (such as ?, &, #). Also using a name with the value helps to ensure that we don’t accidentally match on a value that is unexpectedly in one of the other variables.
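
One way to build that single concatenated input is with a Custom JavaScript variable along these lines (the {{Event Category}}, {{Event Action}} and {{Page Path}} names are placeholders for whatever variables exist in your container; your own setup, like the one in the screenshot above, may use a different variable type):

function() {
  // Join name/value pairs with "^" so the RegEx table gets one predictable input string
  return 'cat=' + {{Event Category}} + '^act=' + {{Event Action}} + '^path=' + {{Page Path}};
}

Each RegEx table row then matches a pattern against that combined string and outputs the corresponding Floodlight activity string.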

With the concatenation of variables giving us a new, single input I can now set up my regex table as needed:

Once I have worked out my RegEx table logic to match correctly on what I want, I then just plug it into my Floodlight as normal:

And with that you have now deployed a bunch of floodlights with a single tag, trigger, and variable. I have found this to be useful in simplifying the GTM setup in a bunch of scenarios so I hope it helps you too!

Testing and Optimization

Steps to Automation [Adobe Webinar]

On August 9th, this upcoming Thursday, I will be joining Adobe on the Adobe Target Basics webinar to geek out over how to dip your toes in Automation, using Automated Personalization in Adobe Target.

I am going to dive deep into the strategy, the setup, best practices, and how to interpret the results.  To make things even more fun, I am going to walk attendees through a LIVE Automated Personalization test that is currently running on our home page.

This test has only been up and running for 12 days and the image below represents a sneak preview of the results.  During the webinar, I will explain what is going on with this test.

To sign up for the webinar, simply use the CTA at the bottom.

Hope to see you there!

Adobe Analytics, Featured

Transaction ID – HR Example

The Transaction ID feature in Adobe Analytics is one of the most underrated in the product. Transaction ID allows you to “close the loop,” so to speak, and import offline metrics related to online activity and apply those metrics to pre-existing dimension values.  This means that you can set a unique ID online and then import offline metrics tied to that unique ID and have the offline metrics associated with all eVar values that were present when the online ID was set. For example, if you want to see how many people who complete a lead form end up becoming customers a few weeks later, you can set a Transaction ID and then later import a “1” into a Success Event for each ID that becomes a customer. This will give “1” to every eVar value that was present when the Transaction ID was set, such as campaign code, visit number, etc…. It is almost like you are tricking Adobe Analytics into thinking that the offline event happened online. In the past, I have described how you could use Transaction ID to import recurring revenue and import product returns, but in this post, I will share another example related to Human Resources and recruiting.

Did They Get Hired?

So let’s imagine that you work for an organization that uses Adobe Analytics and hires a lot of folks. It is always a good thing if you can get more groups to use analytics (to justify the cost), so why not have the HR department leverage the tool as well? On your website, you have job postings and visitors can view jobs and then click to apply. You would want to set a success event for “Job Views” and another for “Job Clicks” and store the Job ID # in an eVar. Then if a user submits a job application, you would capture this with a “Job Applications” Success Event. Thus, you would have a report that looks like this:
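
Behind that report, the online tagging might look roughly like this in AppMeasurement (all event and eVar numbers here are hypothetical):

// Job detail page view
s.events = "event20";           // Job Views
s.eVar30 = "JOB-12345";         // Job ID #
s.t();

// Job application confirmation page
s.events = "event22";           // Job Applications
s.eVar30 = "JOB-12345";
s.transactionID = "APP-98765";  // unique ID the offline "Job Hires" upload will reference later
s.t();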

Let’s assume that your organization is also using marketing campaigns to find potential employees. These campaign codes would be captured in the Campaigns (Tracking Code) eVar and, of course, you can also see all of these job metrics in this and any other eVar reports:

But what if you wanted to see which of these job applicants were actually hired? Moreover, what if you wanted to see which marketing campaigns led to hires vs. just unqualified applicants? All of this can be done with Transaction ID. As long as you have some sort of back-end system that knows the unique “transaction” ID and knows if a hire took place, you can upload the offline metric and close the loop. Here is what the Transaction ID upload file might look like:

Notice that we are setting a new “Job Hires” Success Event and tying it to the Transaction ID. This will bind the offline metric to the Job # eVar value, the campaign code and any other eVars. Once this has loaded, you can see a report that looks like this:

Additionally, you can then switch to the Campaigns report to see this:

This allows you to then create Calculated Metrics to see which marketing campaigns are most effective at driving new hires.

Are They Superstars?

If you want to get a bit more advanced with Transaction ID, you can extend this concept to import additional metrics related to employee performance. For example, let’s say that each new hire is evaluated after their first six months on the job and that they are rated on a scale of 1 (bad) to 10 (great). In the future, you can import their performance as another numeric Success Event (just be sure to have your Adobe account manager extend Transaction ID beyond the default 90 days):

Which will allow you to see a report like this:

Then you can create a Calculated Metric that divides the rating by the number of hires. This will allow you to see ratings per hire in any eVar report, like the Campaigns report shown here:

Final Thoughts

This is a creative way to apply the concept of Transaction ID, but as you can imagine, there are many other ways to utilize this functionality. Anytime that you want to tie offline metrics to online metrics, you should consider using Transaction ID.

Adobe Analytics, Uncategorized

Daily Averages in Adobe Analytics

Traditionally it has been a tad awkward to create a metric that gives you a daily average in Adobe Analytics. You either had to create a metric that could only be used with a certain time frame (with a fixed number of days), or create the metric in Report Builder using Excel functions. Thankfully, with today’s modern technology we are better equipped to do basic math ;). This approach is still a bit awkward, but advanced users should find it easy to create a metric that others can then pull into their reports.

This approach takes advantage of the Approximate Count Distinct function to count the number of days your metric is seen across. The cool thing about this approach is that you can then use the metric across any time range and your denominator will always be right. Here’s how it would look in the calculated metric builder for a daily average of visits:

The most important part of this is the red section, which is the APPROXIMATE COUNT DISTINCT function. It takes a dimension as its only argument, and here you would plug in the “Day” dimension.

Now what’s up with the ROUND function in yellow around that? Well, as the name indicates, the distinct count is approximate and doesn’t necessarily return a whole number like you would expect. To help it out a bit, I just use the ROUND function to ensure that it is a whole number. From what I have seen so far, this is good enough to make the calculation accurate. However, if it is ever off by more than .5 this could cause problems, so keep an eye open for that and let me know if this happens to you.
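
Put together, the metric definition is roughly:

Daily Average Visits = Visits / ROUND( APPROXIMATE COUNT DISTINCT( Day ) )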

With this metric created you can now use this in your reporting to show a daily average along with your total values:

Weekday and Weekend Averages

You can also use a variation of this to give you averages for just the weekdays or the weekend. This can be especially useful if your company experiences dramatic shifts in traffic on the weekend, and you don’t want the usual weekly trend to throw off your comparisons. For example, if I’m looking at a particular Saturday and I want to know how that compares to the average, it may not make sense to compare to the average across all days. If the weekdays are really high, they would push the average up and the Saturday I’m looking at will always seem low. You could also do the same for certain days of the week if you had the need.

To do this we need to add just a smidge more to the metric. In this example, notice that the calculation is essentially the same. I have just wrapped it all in a “Weekend Hits” segment. The segment is created using a hits container where the “Weekday/Weekend” dimension equals “Weekend”.

Here’s how the segment would look:

And here is the segment at play in the calculated metric:
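
Written out, the weekend version is essentially the same calculation wrapped in the segment (again, exact names depend on your setup):

Average Weekend Visits = Weekend Hits segment [ Visits / ROUND( APPROXIMATE COUNT DISTINCT( Day ) ) ]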

With the metric created just add it to your report. Now you can have the average weekend visits right next to the daily average and your total. You have now given birth to a beautiful little metric family. Congratulations!

Caution

Keep in mind that this will count the days where you have data. This means your denominator could be deflated if you use this to look at a site, dimension, segment or combination that doesn’t get data every day. For example, let’s say you want to look at the daily average visits to a page that gets a tiny amount of traffic. If over 30 days it just has traffic for 28 of those days then this approach will just give the average over 28 days. The reason for this is that the function is counting the line items in the day dimension for that item. If the day doesn’t have data it isn’t available for counting.

In most cases this will likely help you. I say this mainly because date ranges in AA default to “This Month”. If you are in the middle of the current month, then using the total number of days in your time range would throw the calculations off. With this approach, if you are using “This Month” and you are just on the 10th then this approach will use 10 days in the calculation. Cool, eh?

Conferences/Community, Featured

ACCELERATE 2.0 coming in 2019: Save the Date

After a brief hiatus while we examined the ever-changing conference landscape and regrouped here at Analytics Demystified, I am delighted to announce that our much loved ACCELERATE conference will be returning in January 2019.

On January 25th we will be gathering in Los Gatos, California at the beautiful Toll House Hotel to ACCELERATE attendees’ knowledge of digital measurement and optimization via our “Ten Tips in Twenty Minutes” format.  If you haven’t experienced our ground-breaking “Ten Tips” format before … think of it as a small firehose of information, aimed directly at you, in rapid-fire succession all morning long.

What’s more, as part of the evolution of ACCELERATE, the afternoon will feature both a keynote presentation that we think you will love and a session of intimate round-tables led by each of our “Ten Tips” speakers designed to allow participants to dig into each topic more deeply.  I am especially excited about the round-tables since, as an early participant and organizer in the old X Change conference, I have seen first-hand how deep these sessions can go, and how valuable they can be (when done properly!)

Also, as we have done in the past, on Thursday, January 24th, the Partners at Analytics Demystified will be leading half-day training sessions.  Led by Adam Greco, Brian Hawkins, Kevin Willeitner, Michele Kiss, Josh West, Tim Patten, and possibly … yours truly … these training sessions will cover the topics that digital analysts need most to ACCELERATE their own knowledge of Adobe and Google, analytics and optimization in practice, and their own professional careers.

But wait, there is one more thing!

While we have long been known for our commitment to the social aspects of analytics via Web Analytics Wednesday and the “lobby bar” gathering model … at ACCELERATE 2.0 we will be offering wholly social activities for folks who want to hang around and see a little more of Los Gatos.  Want to go mountain biking with Kevin Willeitner?  Or hiking with Tim Patten and Michele Kiss?  Now is your chance!

Watch for more information including our industry-low ticket prices, scheduling information, and details about hotel, training, and activities in the coming weeks … but for now we hope you will save January 24th and January 25th to join us in Los Gatos, California for ACCELERATE 2.0!

Adobe Analytics, Featured

Return Frequency % of Total

Recently, a co-worker ran into an issue in Adobe Analytics related to Return Frequency. The Return Frequency report in Adobe Analytics is not one that I use all that often, but it looks like this:

This report simply shows a distribution of how long it takes people to come back to your website. In this case, my co-worker was looking to show these visit frequencies as a percentage of all visits. To do this, she created a calculated metric that divided visits by the total number of visits like this:

Then she added it to the report as shown here:

At this point, she realized that something wasn’t right. As you can see here, the total number of Visits is 5,531, but when she opened the Visits metric, she saw this:

Then she realized that the Return Frequency report doesn’t show 1st time visits and even though you might expect the % of Total Visits calculated metric to include ALL visits, it doesn’t. This was proven by applying a 1st Time Visits segment to the Visits report like this:

Now we can see that when we subtract the 1st time visits (22,155) from the total visits (27,686), we are left with 5,531, which is the amount shown in the return frequency report. Hence, it is not as easy as you’d think to see the % of total visits for each return frequency row.

Solution #1 – Adobe ReportBuilder

The easiest way to solve this problem is to use Adobe ReportBuilder. Using ReportBuilder, you can download two data blocks – one for Return Frequency and one for Visits:

Once you have downloaded these data blocks you can create new columns that divide each row by the correct total number of visits to see your % of total:
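
As a quick worked example using the numbers shown later in this post: the “Less than 1 day” row has 2,276 visits, so its true % of Total is 2,276 / 27,686, or about 8.2%, dividing by all visits rather than by the 5,531 visits that the Return Frequency report itself totals to.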

In this case, I re-created the original percentages shown in the Return Frequency report, but also added the desired % of Total visits in a column next to it so both could be seen.

Solution #2 – Analysis Workspace & Calculated Metrics

Since Analysis Workspace is what all the cool kids are using these days, I wanted to find a way to get this data there as well. To do this, I created a few new Calculated Metrics that used Visits and Return Frequency. Here is one example:
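
In rough form (the segment and metric names will depend on your setup), that metric is:

% of Total Visits (Less than 1 day) = Visits [ segment: Return Frequency = "Less than 1 day" ] / Visits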

This Calculated Metric divides Visits where Return Frequency was less than 1 day by all Visits. Here is what it looks like when you view Total visits, the segmented version of Visits and the Calculated Metric in a table in Analysis Workspace:

Here you can see that the total visits for June is 27,686, that the less than 1 day visits were 2,276 and that the % of Total Visits is 8.2%. You will see that these figures match exactly what we saw in Adobe ReportBuilder as well (always a good sign!). Here is what it looks like if we add a few more Return Frequencies:

Again, our numbers match what we saw above. In this case, there is a finite number of Return Frequency options, so even though it is a bit of a pain to create a bunch of new Calculated Metrics, once they are created, you won’t have to do them again. I was able to create them quickly by using the SAVE AS feature in the Calculated Metrics builder.

As a bonus, you can also right-click and create an alert for one or more of these new calculated metrics:

Summary

So even though Adobe Analytics can have some quirks from time to time, as shown here, you can usually find multiple ways to get to the data you need if you understand all of the facets of the product. If you know of other or easier ways to do this, please leave a comment here. Thanks!

Adobe Analytics, Featured

Measuring Page Load Time With Success Events

One of the things I have noticed lately is how slowly some websites are loading, especially media-related websites. For example, recently I visited wired.com and couldn’t get anything to work. Then I looked at Ghostery and saw that they had 126 tags on their site and a page load time of almost 20 seconds!

I have seen lots of articles showing that fast loading pages can have huge positive impacts on website conversion, but the proliferation of JavaScript tags may be slowly killing websites! Hopefully some of the new GDPR regulations will force companies to re-examine how many tags are on their sites and whether all of them are still needed. In the meantime, I highly recommend that you use a tool like ObservePoint to understand how many tags are lingering on your site now.

As a web analyst, you may want to measure how long it is taking your pages to load. Doing this isn’t trivial, as can be seen in my partner Josh West’s 2015 blog post. In this post, Josh shows some of the ways you can capture page load time in a dimension in Adobe or Google Analytics, though doing so is not going to be completely exact. Regardless, I suggest you check out his post and consider adding this dimension to your analytics implementation.

One thing that Josh alluded to, but did not go into depth on, is the idea of storing page load time as a metric. This is quite different from capturing the load time in a dimension, so I thought I would touch upon how to do this in Adobe Analytics (which can also be done in Google Analytics). If you want to store page load time as a metric in Adobe Analytics, you would pass the actual load time (in seconds or milliseconds) to a Numeric Success Event. This would create an aggregated page load time metric that is increased with every website page view. This new metric can be divided by page views, or you can set a separate counter success event as a page load denominator (if you are not going to track page load time on every page). Here is what you might see if you set the page load time and denominator metrics in the debugger:
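
A minimal sketch of that tagging might look like this (the event and eVar numbers are hypothetical, and, as Josh’s post discusses, exactly when you capture the timing value is the tricky part; this assumes the beacon fires after the page’s load event):

// Navigation Timing API; older browsers may not support it
var t = window.performance && window.performance.timing;
if (t && t.loadEventStart > 0) {
  var loadSeconds = (t.loadEventStart - t.navigationStart) / 1000;
  // event60 = numeric page load time, event61 = page load denominator
  s.events = "event60=" + loadSeconds.toFixed(2) + ",event61";
  s.eVar40 = s.pageName; // page name eVar so load time can be broken down by page
}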

You would also want to capture the page name in an eVar so you can easily see the page load time metrics by page. This is what the data might look like in a page name (actual page names hidden here):

In this case, there is a calculated metric that divides the aggregated page load time by the denominator to see an average page load time for each page. There are also ways that you can use Visit metrics to see the average page load time per visit. Regardless of which version you use, this type of report can help you identify your problem pages so you can see if there are things you can do to improve conversion. I suggest combining this with a Participation report to see which pages impact your conversion the most, but are loading slowly.

Another cool thing you can do with this data is to trend the average page load time for the website overall. Since you already have created the calculated metric shown below, you can simply open this metric by itself (vs. viewing by page name), to see the overall trend of page load speeds for your site and then set some internal targets or goals to strive for in the future.

Adobe Analytics, Featured

Product Ratings/Reviews in Adobe Analytics

Many retailers use product ratings as a way to convince buyers to take the next step in conversion, which is usually a cart addition. Showing how often a product has been reviewed and its average rating helps build product credibility and is something consumers have grown used to from popular sites like amazon.com.

Digital analytics tools like Adobe Analytics can be used to determine whether the product ratings on your site/app are having a positive or negative impact on conversion. In this post, I will share some ways you can track product review information to see its impact on your data.

Impact of Having Product Ratings/Reviews

The first thing you should do with product ratings and reviews is to capture the current avg. rating and # of reviews in a product syntax merchandising eVar when visitors view the product detail page. In order to save eVars, I sometimes concatenate these two values with a separator and then use RegEx and the SAINT Classification RuleBuilder to split them out later. In the preceding screenshot, for example, you might pass 4.7|3 to the eVar and then split those values out later via SAINT. Capturing these values at the time of the product detail page view allows you to lock in what the rating and # of reviews was at the time of the product view. Here is what the rating merchandising eVar might look like once split out:

You can also group these items using SAINT to see how ratings between 4.0 – 4.5 perform vs. 4.5 – 5.0, etc… You can also sort this report by your conversion metrics, but if you do so, I would recommend adding a percentile function so you don’t just see rows that have very few product views or orders. The same type of report can be run for # of reviews as well:

Lastly, if you have products that don’t have ratings/reviews at all, the preceding reports will have a “None” row, which will allow you to see the conversion rate when no ratings/reviews exist, which may be useful information to see overall impact of ratings/reviews for your site.

Average Product Rating Calculated Metric

In addition to capturing the average rating and the # of reviews in an eVar, another thing you can do is to capture the same values in numeric success events. As a reminder, a numeric success event is a metric that can be incremented by more than one in each server call. For example, when a visitor views the following product page, the average product rating of 4.67 is being passed to a numeric success event 50. This means that event 50 is being increased for the entire website by 4.67 each time this product is viewed. Since the Products variable is also set, this 4.67 is “bound” (associated) to product H8194. At the same time, we need a denominator to divide this rating by to compute the overall product rating average. In this case, event 51 is set to “1” each time that a rating is present (you cannot use Product Views metric since there may be cases in which no rating is present but there is a product view).  Here is what the tagging might look like when it is complete:
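
A rough sketch of that tagging (event, eVar and review-count values here are hypothetical) might be:

// Product detail page view for product H8194 with a 4.67 rating
s.events = "prodView,event50,event51";
// event50 carries the star rating, event51 is the "rating present" denominator,
// and the merchandising eVar stores the concatenated rating|# of reviews
s.products = ";H8194;;;event50=4.67|event51=1;eVar20=4.67|3";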

Below is what the data looks like once it is collected:

You can see Product Views, the accumulated star ratings, the number of times ratings were available and a calculated metric to compute the average rating for each product. Given that we already have the average product rating in an eVar, this may not seem important, but the cool part of this is that now the product rating can be trended over time. Simply add a chart visualization and then select a specific product to see how its rating changes over time:

The other cool part of this is that you can leverage your product classifications to group these numeric ratings by product category:

Using both eVars and success events to capture product ratings/reviews on your site allows you to capture what your visitors saw for each product while on your product detail pages. Having this information can be helpful to see if ratings/reviews are important to your site and to be aware of the impact for each product and/or product category.

Adobe Analytics, Featured

Engagement Scoring Using Approx. Count Distinct

Back in 2015, I wrote a post about using Calculated Metrics to create an Engagement Score. In that post, I mentioned that it was possible to pick a series of success events and multiply them by some sort of weighted number to compute an overall website engagement score. This was an alternative to a different method of tracking visitor engagement via numeric success events set via JavaScript (which was also described in the post). However, given that Adobe has added the cool Approximate Count Distinct function to the analytics product, I recently had an idea about a different way to compute website engagement that I thought I would share.

Adding Depth to Website Engagement

In my previous post, website engagement was computed simply by multiplying chosen success events by a weighted multiplier like this:
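(The original formula image isn’t shown here; the events and weights below are placeholders for illustration, not the exact ones from the 2015 post.)

Engagement Score = (Product Views x 10) + (Cart Additions x 30) + (Orders x 60)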

This approach is workable but lacks a depth component. For example, the first parameter looks at how many Product Views take place but doesn’t account for how many different products are viewed. There may be a situation in which you want to assign more website engagement to visits that get visitors to view multiple products vs. just one. The same concept could apply to Page Views and Page Names, Video Views and Video Names, etc…

Using the Approximate Count Distinct function, it is now possible to add a depth component to the website engagement formula. To see how this might work, let’s go through an example. Imagine that in a very basic website engagement model, you want to look at Blog Post Views and Internal Searches occurring on your website. You have success events for both Blog Post Views and Internal Searches and you also have eVars that capture the Blog Post Titles and Internal Search Keywords.

To start, you can use the Approximate Count Distinct function to calculate how many unique Blog Post Titles exist (for the chosen date range) using this formula:
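In text form, the calculated metric is simply:

Unique Blog Post Titles = APPROXIMATE COUNT DISTINCT(Blog Post Title)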

Next, you can multiply the number of Blog Post Views by the number of unique Blog Post Titles to come up with a Blog Post Engagement score as shown here:
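In text form:

Blog Post Engagement = Blog Post Views x Unique Blog Post Titles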

Note that since the Approximate Count Distinct function is not 100% accurate, the numbers will differ slightly from what you would get with a calculator, but in general, the function will be at least 95% accurate.

You can repeat this process for Internal Search Keywords. First, you compute the Approximate Count of unique Search Keywords like this:

Then you create a new calculated metric that multiplies the number of Internal Searches by the unique number of Keywords. Here is what a report looks like with all six metrics:

Website Engagement Calculation

Now that you have created the building blocks for your simplistic website engagement score, it is time to put them together and add some weighting. Weighting is important, because it is unlikely that your individual elements will have the same importance to your website. In this case, let’s imagine that a Blog Post View is much more important than an Internal Search, so it is assigned a weight score of 90, whereas a score of 10 is assigned to Internal Searches. If you are creating your own engagement score, you may have more elements and can weight them as you see fit.

In the following formula, you can see that I am adding the Blog Post engagement score to the Internal Search engagement score and adding the 90/10 weighting all in one formula. I am also dividing the entire formula by Visits to normalize it, so my engagement score doesn’t rise or fall based upon differing number of Visits over time:
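Written out, the formula looks roughly like this:

Engagement Score = ((Blog Post Views x Unique Blog Post Titles x 90) + (Internal Searches x Unique Search Keywords x 10)) / Visits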

Here you can see a version of the engagement score as a raw number (multiplied by 90 & 10) and then the final one that is divided by Visits:

Finally, you can plot the engagement score in a trended bar chart. In this case, I am trending both the engagement score and visits in the same chart:

In the end, this engagement score calculation isn’t significantly different than the original one, but adding the Approximate Count Distinct function allows you to add some more depth to the overall calculation. If you don’t want to multiply the number of success event instances by ALL of the unique count of values, you could alternatively use an IF function with the GREATER THAN function to cap the number of unique items at a certain amount (i.e., if there are more than 50 unique Blog Post Titles, use 50; otherwise, use the unique count).

The best part of this approach is that it requires no JavaScript tagging (assuming you already have the success events and eVars you need in the calculation). So you can play around with the formula and its weightings with no fear of negatively impacting your implementation and no IT resources! I suggest that you give it a try and see if this type of engagement score can be used as an overall health gauge of how your website is performing over time.

Featured, Testing and Optimization

Adobe Insider Awesomeness and Geo Test deep dive

Adobe Insider and EXBE

The first Adobe Insider with Adobe Target took place on June 1st in Atlanta, Georgia.  I wrote a blog post a couple of weeks back about the multi-city event but after attending the first one, I thought I would share some takeaways.  

The event was very worthwhile and everyone I talked to was glad to have attended.  The location was an old theatre, and Hamilton was even set to run in that building later that evening.  Had I known that my flight back to Chicago that evening would be delayed by four hours, I would have tried to score a ticket.  The Insider Tour is broken down into two tracks: an Analytics track and an Adobe Target/Personalization track.  My guess is that there were about 150 to 180 attendees, which made for a more social and intimate gathering.

The Personalization track got to hang directly with the Target Product Team and hear presentations on what they are working on and what is set to be released, and attendees even got to give some feedback on product direction and focus.

The roundtable discussions went really well with lots of interaction and feedback.  I especially found it interesting to see the company to company conversations taking place.  The roundtable that I was at had really advanced users of Adobe Target as well as brand new users which allowed newbies to get advice and tips directly from other organizations vs. vendors or consultants.

As for what the attendees liked most, they seemed to really enjoy meeting and working directly with the Product Team members, but the biggest and most popular thing of the day was EXBE.  EXBE stands for “Experience Business Excellence.”  You are not alone if that doesn’t roll off the tongue nicely.  Essentially, it all translates to someone (not Adobe and not a consultant) sharing a case study of a test that they ran.  The test could be simple or very complex; it doesn’t matter.  The presenter would simply share any background, test design, setup, and any results that they could share.

Home Depot shared a case study at this year’s event and it was a big hit.  Priyanka, from Home Depot, walked attendees through a test that made a very substantial impact on Home Depot’s business.  Attendees asked a ton of questions about the test and the conversation even turned into a geek out.  Priyanka made really cool use of multiple locations within a single experience, which maps back to using multiple mboxes in the same experience, and even some advanced users didn’t know that was possible.

So, if you are in LOS ANGELES, CHICAGO, NEW YORK, or DALLAS and plan on attending the Insider Tour, I strongly encourage you to submit a test and present it.  Even if the test seems very straightforward or not that exciting, there will be attendees who benefit substantially.  The presentation could be 5 minutes or 30 minutes, and there is no need to worry if you can’t share actual results.  It is also a great opportunity to present to your peers in front of a very friendly audience.  You can register here or via the very nerdy non-mboxy CTA below (see if you can figure out what I am doing here) if you are interested.

Sample Test and feedback…

At the event that day, an attendee was telling me that they don’t do anything fancy with their tests; otherwise, they would have submitted something and gotten the experience of presenting to fellow testers.  I explained that I don’t think that matters as long as the test is valuable to you or to your business.  I then described a very simple test that I am running on the Demystified site that some might think is simple but would be a good example of a test to present.

Also at the event, a few people asked that I write more about test setup and some of the ways I approach it within Target.  So, I thought I would walk through the above-mentioned geo-targeted test that I have running on the Demystified website.

 

Test Design and Execution

Hypothesis

Adam and I are joining Adobe on the Adobe Insider Tour in Atlanta, Los Angeles, Chicago, New York, and Dallas.  We hypothesize that geo-targeting a banner to those five cities encouraging attendance will increase clicks on the hero compared to the rotating carousel that is hard-coded into the site.  We also hope that if some of our current or previous customers didn’t know about the Insider event, the test might make them aware of it so they can attend.

Built into Adobe Target is geo-targeting based on reverse IP lookup.  Target uses the same geo-lookup provider as Analytics, and users can target based on zip code, city, state, DMA, and country.  I chose DMA so as to get the biggest reach.

The data in this box represents the geo attributes for YOU, based on your IP address.  I am pumping this in via a test running on this page.

Default Content – if you are seeing this, you are not getting the test content from Target

Test Design

To make sure we have a control group while still getting our message out to as many people as possible, we went with a 90/10 split.  Of course, this is not ideal for sample size calculations, etc., but that is a whole other subject.  This post is more about the tactical steps of a geo-targeted test.

Experience A:  10% holdout group to serve as my baseline (all five cities will be represented here)

Experience B:  Atlanta 

Experience C:  Los Angeles

Experience D:  Chicago

Experience E:  New York

Experience F:  Dallas

I also used an Experience Targeted test in case someone got into the test and happened to travel to another city that was part of our test.  The Experience Targeted test enables their offer to change to the corresponding test experience.

The banner would look like this (I live in the Chicago DMA, so I am getting this banner).  When I go to Los Angeles next week, I will get the one for Los Angeles.  If I had used an A/B test, I would continue to get Chicago since that is where I was first assigned.

Profile to make this happen

To have my 10% group, I have to use Target profiles.  There is no way to use % allocation coupled with visitor attributes like DMA, so profiles are the way to go.  I’ve long argued that the most powerful part of the Adobe Target platform is the ability to profile visitors client side or server side.  For this use case, we are going to use a server-side profile script to get our 10% control group.  Below is my script and you are welcome to copy it into your account.  Just be sure to name it “random_10_group”.

This script randomly generates a number and, based on that number, puts visitors into 1 of 10 groups.  Each group or set of groups can be used for targeting.  You can also force yourself into a group by appending the URL parameter testgroup set to the number of the group that you want.  For example, http://analyticsdemystified.com/?testgroup=4 would put me in group4 for this profile.  This is helpful when debugging or QA’ing tests that make use of the profile.

These groups are mutually exclusive as well so if your company wants to incorporate test swimlanes, this script will be helpful.

if (!user.get('random_10_group')) {
   var query = (page.query || '').toLowerCase();
   // Allow forcing a group for QA/debugging via a ?testgroup=N URL parameter (N = 1 through 10)
   if (query.indexOf('testgroup=') > -1) {
      var forced = parseInt(query.substring(query.indexOf('testgroup=') + 10), 10);
      if (forced >= 1 && forced <= 10) {
         return 'group' + forced;
      }
   }
   // Otherwise, randomly assign the visitor to one of ten mutually exclusive 10% groups
   var ran_number = Math.floor(Math.random() * 100); // 0-99
   if (ran_number <= 9) {
      return 'group1';
   } else if (ran_number <= 19) {
      return 'group2';
   } else if (ran_number <= 29) {
      return 'group3';
   } else if (ran_number <= 39) {
      return 'group4';
   } else if (ran_number <= 49) {
      return 'group5';
   } else if (ran_number <= 59) {
      return 'group6';
   } else if (ran_number <= 69) {
      return 'group7';
   } else if (ran_number <= 79) {
      return 'group8';
   } else if (ran_number <= 89) {
      return 'group9';
   } else {
      return 'group10';
   }
}

Audiences

Before I go into setting up the test, I am going to create my Audiences.  If you are going to be using more than a couple of Audiences in your test, I recommend you adopt this process.  Creating Audiences during the test setup can interrupt the flow of things and if you have them already created, it takes no time at all to add them as needed.

Here is my first Audience – it is my 10% control group made possible by the profile parameter above, and it includes all five cities that I am using for this test.  This will be the first Experience in my Experience Targeted test, which is very important: for Experience Targeted tests, visitors are evaluated for Experiences from top to bottom, so had I put my New York Experience first, visitors who should be in my Control group would end up in that Experience.

And here is my New York Audience.  Chicago, Dallas, Atlanta, and Los Angeles are set up the same way.

 

Offer Code

Here is an example of the code I used for my test. This is the code for the offer that will display for users in Los Angeles.  I could have used the VEC (Visual Experience Composer) for this test, but our carousel is finicky and would have taken too much time to figure out in the VEC, so I went with a form-based activity.  I am old school and prefer Form over VEC, though I do love how easy the VEC makes click tracking as conversion events and wish Adobe would bring that to form-based testing.  Users should only choose the VEC if they actually plan to compose visually.  Too often I see users select the VEC only to drop in custom code, which adds overhead and is unnecessary.

 

<!-- I use CSS here to suppress the hero from showing -->
<style id="flickersuppression">
#slider {visibility:hidden !important}
</style>
<script>
(function($){var c=function(s,f){if($(s)[0]){try{f.apply($(s)[0])}catch(e){setTimeout(function(){c(s,f)},1)}}else{setTimeout(function(){c(s,f)},1)}};if($.isReady){setTimeout("c=function(){}",100)}$.fn.elementOnLoad=function(f){c(this.selector,f)}})(jQuery);
// this next line waits for my test content to show up in the DOM, then changes the experience
jQuery('.rsArrowRight > .rsArrowIcn').elementOnLoad(function(){
$(".rsContainer").replaceWith("<div class=\"rsContent\">\n <a href=\"https://webanalyticsdemystif.tt.omtrdc.net/m2/webanalyticsdemystif/ubox/page?mbox=insider&mboxDefault=http%3A%2F%2Fwww.adobeeventsonline.com%2FInsiderTour%2F2018%2F/\"><img class=\"rsImg rsMainSlideImage\" src=\"http://analyticsdemystified.com/wp-content/uploads/2015/02/header-image-services-training-700x400.jpg\" alt=\"feature-image-1\" style=\"width:100%; height: 620px; margin-left: 0px; margin-top: -192px;\"></a>\n \n \n <div class=\"rsSBlock ui-draggable-handle\" style=\"width: auto; height: 600px; left: 40px; top: 317px;\"><h1><strong>Los Angeles! Analytics Demystified is joining Adobe on the Adobe Insider Tour</strong></h1>\n<p style=\"text-align:left;\"><br><br>Thursday, June 21st – iPic Westwood in Los Angeles, CA. </p>\n</div>\n</div>");
$(".rsContainer > div:eq(0) > div:eq(0) > div:eq(0) > p:eq(0)").css({"color":"#000000"});
$(".rsContainer > div:eq(0) > div:eq(0) > div:eq(0) > h1:eq(0)").css({"color":"#000000"});
$(".rsNav").css({"display":"none", "visibility":""});
$(".rsArrowLeft > .rsArrowIcn").css({"display":"none", "visibility":""});
$(".rsArrowRight > .rsArrowIcn").css({"display":"none", "visibility":""});
$("#login-trigger > img").removeAttr("src").removeAttr("srcdoc");
$("#login-trigger > img").css({"display":"none", "visibility":""});
$(".rsSBlock > h1").append("<div id=\"hawk_cta\">…</div>");
// this next line removes my flicker suppression that I put in place at the top of this code
jQuery('#flickersuppression').remove();
})
// one of the coolest parts of at.js: making click tracking a lot easier!!!
$('#slider').click(function(event){
adobe.target.trackEvent({'mbox':'hero_click'});
});
</script>

Success Events

The success event for this test is clicking on the hero CTA, which brings you to the Adobe page to register for the Insider event.  This CTA click was tracked via a very cool function that you will all grow to love as you adopt at.js.

$('#slider').click(function(event){
   adobe.target.trackEvent({'mbox':'hero_click'});
});

To use this, you need to be running at.js and then update two pieces of the snippet above: the CSS selector (‘#slider’) and the mbox name (‘hero_click’).  You can get the CSS selector in any browser by right-clicking the element and choosing Inspect, then right-clicking the element in the HTML panel and copying its selector.  The mbox name is what will be called when the area gets clicked.  In the test setup, that looks like this:

Segments

Segment adoption within Target seems to vary quite a bit.  I personally find segments a crucial component and recommend that organizations standardize on a set of segments that are key to their business and include them with every test.  In Analytics, much time and effort goes into classifying sources (utm parameters), behaviors, key devices, etc., so the same effort should be applied here.  If you use A4T or integrate with Analytics in other ways, this will help with those efforts for many of your tests.  For this test, I can’t use Analytics because the success event is a temporary CTA that was put in place just for this test and I have no Analytics tracking to report on it, so the success event lives in Target.

The main segments that are important here are for my Control group.  If you recall, I am consolidating all five cities into Experience A.  To see how any of these cities does in that Experience, I have to define each city as a segment applied when visitors qualify for the activity.  Target makes this a bit easier now than in the Classic days, since we can repurpose the Audiences that we used for the Experience Targeting.

Also cool is the ability to add more than one segment at a time!  Classic had this many years back, but the feature was taken away.  Having it back leaves organizations with no excuse for not using key segments in their tests!

An important note: you can apply segments to any and all Adobe Target success events used in the test.  For example, if I wanted to segment out visitors that spent over $200 on a revenue success event (or any event other than test entry), I could do that in the “Applied At” dropdown.  There are lots of very cool use cases here, but for what I need, I am going to select “Campaign Entry” (although Adobe should change this to Activity Entry:) and I will see how all the visitors from each of these cities did for my Control.

Geo-Targeting

To wrap things up, I am going to share one last little nugget of gold.  Adobe Target allows users to pass an IP address to a special URL parameter, and Adobe Target will return the geo attributes (City, State, DMA, Country, and Zip) for that IP address.  This is very helpful when debugging.  You can see what it would look like below, but clicking on the link will do you no good: sadly, there is a bug with some versions of WordPress that changes the “.” in the URL to an underscore, which breaks it, though this only applies to our site and some other WordPress installs.

https://analyticsdemystified.com/?mboxOverride.browserIp=161.185.160.93

Happy Testing and hopefully see you at one of the Insider events coming up!

 

Adobe Analytics, Featured

100% Stacked Bar Chart in Analysis Workspace

As is often the case with Analysis Workspace (in Adobe Analytics), you stumble upon new features accidentally. Hopefully, by now you have learned the rule of “when in doubt, right-click” when using Analysis Workspace, but for other new features, I recommend reading Adobe’s release notes and subscribing to the Adobe Analytics YouTube Channel. Recently, the ability to use 100% stacked bar charts was added to Analysis Workspace, so I thought I’d give it a spin.

Normal vs. 100% Stacked Bar Charts

Normally, when you use a stacked bar chart, you are comparing raw numbers. For example, here is a sample stacked bar chart that looks at Blog Post Views by Author:

This type of chart allows you to see overall trends in performance over time. In some respects, you can also get a sense of which elements are going up and down over time, but since the data goes up and down each week, it can be tricky to be exact in the percentage changes.

For this reason, Adobe has added a 100% stacked bar visualization. This visualization stretches the elements in your chart to 100% and shifts the graph from raw numbers to percentages (of the items being graphed, not all items necessarily). This allows you to more accurately gauge how each element is changing over time.

To enable this, simply click the gear icon of the visualization and check the 100% stacked box:

Once this is done, your chart will look like this:

In addition, if you hover over one of the elements, it will show you the actual percentage:

The 100% stacked setting can be used in any trended stacked bar visualization. For example, here is a super basic example that shows the breakdown of Blog Post Views by mobile operating system:

For more information on using the 100% stacked bar visualization, here is an Adobe video on this topic: https://www.youtube.com/watch?v=_6hzCR1SCxk&t=1s

Adobe Analytics, Featured

Finding Adobe Analytics Components via Tags

When I am working on a project to audit someone’s Adobe Analytics implementation, one of the things I often notice is a lack of organization that surrounds the implementation. When you use Adobe Analytics, there are a lot of “components” that you can customize for your implementation. These components include Segments, Calculated Metrics, Reports, Dashboards, etc. I have some clients that have hundreds of Segments or Calculated Metrics, to the point that finding the one you are looking for can be like searching for a needle in a haystack! Over time, it is so easy to keep creating more and more Adobe Analytics components instead of re-using the ones that already exist. When new, duplicative components are created, things can get very chaotic because:

  • Different users could use different components in reports/dashboards
  • Fixes made to a component may only be applied in some places if there are duplicative components floating around
  • Multiple components with the same name or definition can confuse novice users

For these reasons, I am a big fan of keeping your Adobe Analytics components under control, which takes some work, but pays dividends in the long run.  A few years ago, I wrote a post about how you can use a “Corporate Login” to help manage key Adobe Analytics components. I still endorse that concept, but today, I will share another technique I have started using to organize components in case you find it helpful.

Searching For Components Doesn’t Work

One reason that components proliferate is because finding the components you are looking for is not foolproof in Adobe Analytics. For example, let’s say that I just implemented some code to track Net Promoter Score in Adobe Analytics. Now, I want to create a Net Promoter Score Calculated Metric so I can trend NPS by day, week or month. To do this, I might go to the Calculated Metrics component screen where I would see all of the Calculated Metrics that exist:

If I have a lot of Calculated Metrics, it could take me a long time to see if this exists, so I might search for the Calculated Metric I want like this:

 

Unfortunately, my search came up empty, so I would likely go ahead and create a new Net Promoter Score Calculated Metric. What I didn’t know is that one already exists, it was just named “NPS Score” instead of “Net Promoter Score.” And since people are not generally good about using standard naming conventions, this scenario can happen often. So how do we fix this? How do we avoid the creation of duplicative components?

Search By Variable

To solve this problem, I have a few ideas. In general, the way I think about components like Calculated Metrics or Segments is that they are made up of other Adobe Analytics elements, specifically variables. Therefore, if I want to see if a Net Promoter Score Calculated Metric already exists, a good place to start would be to look for all Calculated Metrics that use one of the variables that is used to track Net Promoter Score in my implementation. In this case, success event #20 (called NPS Submissions [e20]) is set when any Net Promoter Score survey occurs. Therefore, if I could filter all Calculated Metrics to see only those that utilize success event #20, I would be able to find all Calculated Metrics that relate to Net Promoter Score. Unfortunately, Adobe Analytics only allows you to filter by the following items:

It would be great if Adobe had a way that you could filter on variables (Success Events, eVars, sProps), but that doesn’t exist today. The next best thing would be the ability to have Adobe Analytics find Calculated Metrics (or other components) by variable when you type the variable name in the search box. For example, it would be great if I could enter this in the search box:

But, alas, this doesn’t work either (though could one day if you vote for my idea in the Adobe Idea Exchange!).

Tagging to the Rescue!

Since there is no good way today to search for components by variable, I have created a workaround that you can use leveraging the tagging feature of Adobe Analytics. What I have started doing is adding a tag for every variable that is used in a Calculated Metric (or Segment). For example, if I am creating a “Net Promoter Score” Calculated Metric that uses success event #20 and success event #21, in addition to any other tags I might want to use, I can tag the Calculated Metric with these variable names as shown here:

Once I do this, I will begin to see variable names appear in the tag list like this:

Next, if I am looking for a specific Calculated Metric, I can simply check one of the variables that I know would be part of the formula…

…and Adobe Analytics will filter the entire list of Calculated Metrics to only show me those that have that variable tag:

This is what I wish Adobe Analytics would do out-of-the-box, but using the tagging feature, you can take matters into your own hands. The only downside is that you need to go through all of your existing components and add these tags, but I would argue that you should be doing that anyway as part of a general clean-up effort and then simply ask people to do this for all new components thereafter.

The same concept can be applied to other Adobe Analytics components that use variables and allow tags. For example, here is a Segment that I have created and tagged based upon variables it contains:

This allows me to filter Segments in the same way:

Therefore, if you want to keep your Adobe Analytics implementation components organized and make them easy for your end-users to find, you can try out this work-around using component tags and maybe even vote for my idea to make this something that isn’t needed in the future. Thanks!

Featured, Testing and Optimization

Adobe Personalization Insider

To my fellow optimizers in or near Atlanta, Los Angeles, Chicago, New York, and Dallas:

I am very excited to share that I am heading your way and hope to see you.  I have the privilege of joining Adobe this year for the Adobe Insider Tour which is now much bigger than ever and has a lot of great stuff for optimizers like you and me.   

If you haven’t heard of it, the Adobe Insider Tour is a free half-day event that Adobe puts together so attendees can network and collaborate with their industry peers.  And it’s an opportunity for all participating experts to keep it real through interactive breakout sessions, some even workshop-style.  Adobe will share some recent product innovations and even some sneaks to what’s coming next.

The Insider Tour has three tracks: the Analytics Insider, the Personalization Insider, and (for New York only) an Audience Manager Insider.  If you leverage Adobe to support your testing and personalization efforts, your analysis, or your audience management, the interactive breakouts will be perfect for you.  My colleague Adam Greco will be there as well for the Analytics Insider.

Personalization Insider

I am going to be part of the Personalization Insider since I am all things testing, and if you are part of a testing team or want to learn more about testing, the breakout sessions and workshop will be perfect for you.

In true optimization form, get ready to discuss, ideate, hypothesize and share best practices around the following:

*Automation and machine learning

*Optimization/Personalization beyond the browser (apps, connected cars, kiosks, etc)

*Program ramp and maturity

*Experience optimization in practice

Experience Business Excellence Awards

There is also something really cool and new this year that is part of the Insider Tour.  Adobe is bringing the Experience Business Excellence (EXBE) to each city.  The EXBE Awards Program was a huge hit at the Adobe Summit as it allows organizations to submit their experiences of using Adobe Target that kicked some serious butt and compete for awards and a free pass to Summit.  I was part of this last year at Summit where two of my clients won with some awesome examples of using testing to add value to their business and digital consumers.  If you have any interesting use cases or inspirational tests, you should submit them for consideration.   

Learn More and Register

If you come early to the event, there will be a “GENIUS BAR” where you can geek out with experts with any questions you might have.  Please come at me with any challenges you might have with test scaling, execution or anything for that matter.  I will be giving a free copy of my book on Adobe Target to the most interesting use case brought to me during “GENIUS BAR” hours.

I really hope to see you there, and the events are also being held at some cool venues.

Here are the dates for each city:  

  • Atlanta, GA – June 1st
  • Los Angeles, CA – June 21st
  • Chicago, IL – September 11th
  • New York, NY – September 13th
  • Dallas, TX – September 27th

Click the button below to formally register (required)

(I did something nerdy and fun with this CTA – if anyone figures out exactly what I did here or what it is called, add a comment and let me know:)

Adobe Analytics, Featured

Adobe Insider Tour!

I am excited to announce that my partner Brian Hawkins and I will be joining the Adobe Insider Tour that is hitting several US cities over the next few months! These 100% free events held by Adobe are great opportunities to learn more about Adobe’s Marketing Cloud products (Adobe Analytics, Adobe Target, Adobe Audience Manager). The half-day sessions will provide product-specific tips & tricks, show future product features being worked on and provide practical education on how to maximize your use of Adobe products.

The Adobe Insider Tour will be held in the following cities and locations:

Atlanta – Friday, June 1
Fox Theatre
660 Peachtree St NE
Atlanta, GA 30308

Los Angeles – Thursday, June 21
iPic Westwood
10840 Wilshire Blvd
Los Angeles, CA 90024

Chicago – Tuesday, September 11
Davis Theater
4614 N Lincoln Ave
Chicago, IL 60625

New York – Thursday, September 13
iPic Theaters at Fulton Market
11 Fulton St
New York, NY 10038

Dallas – Thursday, September 27
Alamo Drafthouse
1005 S Lamar St
Dallas, TX 75215

Adobe Analytics Implementation Improv

As many of my blog readers know, I pride myself on pushing Adobe Analytics to the limit! I love to look at websites and “riff” on what could be implemented to increase analytics capabilities. On the Adobe Insider Tour, I am going to try and take this to the next level with what we are calling Adobe Analytics Implementation Improv. At the beginning of the day, we will pick a few companies in the audience and I will review the site and share some cool, advanced things that I think they should implement in Adobe Analytics. These suggestions will be based upon the hundreds of Adobe Analytics implementations I have done in the past, but this time it will be done live, with no preparation and no rehearsal! But in the process, you will get to see how you can quickly add some real-world, practical new things to your implementation when you get back to the office!

Adobe Analytics “Ask Me Anything” Session

After the “Improv” session, I will have an “Ask Me Anything” session to do my best and answer any questions you may have related to Adobe Analytics. This is your chance to get some free consulting and pick my brain about any Adobe Analytics topic. I will also be available prior to the event at Adobe’s “Genius Bar” providing 1:1 help.

Adobe Analytics Idol

As many of you may know, for the past few years, Adobe has hosted an Adobe Analytics Idol contest. This is an opportunity for you to share something cool that you are doing with Adobe Analytics or some cool tip or trick that has helped you. Over the years this has become very popular, and now Adobe is even offering a free pass to the next Adobe Summit for the winner! So if you want to be a candidate for the Adobe Analytics Idol, you can now submit your name and tip and present at your local event. If you are a bit hesitant to submit a tip, this year Adobe is adding a cool new aspect to the Adobe Analytics Idol. If you have a general idea but need some help, you can reach out by email and either I or one of the amazing Adobe Analytics product managers will help you formulate your idea and bring it to fruition. So even if you are a bit nervous to be an “Idol,” you can get help and increase your chances of winning!

There will also be time at these events for more questions and casual networking, so I encourage you to register now and hope to see you at one of these events!

Adobe Analytics, Featured

Elsevier Case Study

I have been in consulting for a large portion of my professional life, starting right out of school at Arthur Andersen (back when it existed!). Therefore, I have been part of countless consulting engagements over the past twenty-five years. During this time, there are a few projects that stand out. Those that seemed daunting at first, but in the end turned out to make a real difference. Those large, super-difficult projects are the ones that tend to stick with you.

A few years ago, I came across one of these large projects at a company called Elsevier. Elsevier is a massive organization, with thousands of employees and key locations all across Europe and North America. But what differentiates Elsevier the most is how disparate a lot of their business units can be – from geology to chemistry, etc. When I stumbled upon Elsevier, they were struggling to figure out how to take a unified approach to implementing Adobe Analytics worldwide in a way that gave them some key top-line metrics while at the same time offering each business unit its own flexibility where needed. This is something I see a lot of large organizations struggle with when it comes to Adobe Analytics. Since over my career I have worked with some of the largest Adobe Analytics implementations in the world, I was excited to apply what I have learned to tackle this super-complex project. I am also fortunate to have Josh West, one of the best Adobe Analytics implementation folks in the world, as my partner; he was able to work with me and Elsevier to turn our vision into a reality.

While the project took some time and had many bumps along the way, Elsevier heeded our advice and ended up with an Adobe Analytics program that transformed their business. They provided tremendous support from the top (thanks to Darren Person!) and Adobe Analytics became a huge success for the organization.  To learn more, I suggest you check out the case study here.

In addition, if you want to hear Darren and me talk about the project while we were still in the midst of it, you can see a presentation we did at the 2016 Adobe Summit (free registration required) by clicking here.

Adobe Analytics, Featured

DB Vista – Bringing the Sexy Back!

OK. It may be a bit of a stretch to say that DB Vista is sexy. But I continue to discover that very few Adobe Analytics clients have used DB Vista or even know what it is. As I wrote in my old blog back in 2008 (minus the images, which Adobe seems to have lost!), DB Vista is a method of setting Adobe Analytics variables using a rule that does a database lookup on a table that you upload (via FTP) to Adobe. In my original blog post, I mentioned how you can use DB Vista to import the cost of each product into a currency success event, so you can combine it with revenue to calculate product margin. This is done by uploading your product information (including cost) to the DB Vista table and having a DB Vista rule look up the value passed to the Products variable and match it to the column in the table that stores the current product cost.  As long as you are diligent about keeping your product cost table updated, DB Vista will do the rest.  The reason I wanted to bring the topic of DB Vista back is that it has come up more and more over the past few weeks. In this post, I will share where it has come up and a few reasons why I keep talking about it.

Adobe Summit Presentation

A few weeks ago, while presenting at Adobe Summit, I showed an example where a company was [incorrectly] using SAINT Classifications to classify product ID’s with the product cost like this:

As I described in this post, SAINT Classifications are not ideal for something like Product Cost because the cost of each product will change over time and updating the SAINT file is a retroactive change that will make it look like each product ALWAYS had the most recently uploaded cost.  In the past, this could be mitigated by using date-enabled SAINT Classifications, but those have recently been removed from the product, I presume due to the fact they weren’t used very often and were overly complex.

However, if you want to capture the cost of each product, as mentioned above, you could use DB Vista to pass the cost to a currency success event and/or capture the cost in an eVar. Unlike SAINT, using DB Vista to get the cost means that the data is locked in at the time it is collected.  All that is needed is a mechanism to keep your product cost data updated in the DB Vista table.

Measure Slack

Another case where DB Vista arose recently, was in the #Measure Slack group. There was a discussion around using classifications to group products, but the product group was not available in real-time to be passed to an eVar and the product group could change over time.

The challenge in this situation is that SAINT classifications would not be able to keep all of this straight without date-enabled classifications. This is another situation where DB Vista can save the day, as long as you are able to keep the product table updated as products move between groups.  In this case, all you’d need to do is upload the product group to the DB Vista table and use the DB Vista rule to grab the value and pass it to an eVar whenever the Products variable is set.

Idea Exchange

There are countless other things that you can do with DB Vista. So why don’t people use it more? I think it has to do with the following reasons:

  • Most people don’t understand the inner workings of DB Vista (hint: come to my upcoming  “Top Gun” Training Class!)
  • DB Vista has an additional cost (though it is pretty nominal)
  • DB Vista isn’t something you can do on your own – you need to engage with Adobe Engineering Services

Therefore, I wish that Adobe would consider making DB Vista something that administrators could do on their own through the Admin Console and Processing Rules (or via Launch!). Recently, Data Feeds was made self-service and I think it has been a huge success! More people than ever are using Data Feeds, which used to cost $$ and have to go through Adobe Engineering Services. I think the same would be true for DB Vista. If you agree, please vote for my idea here. Together, we can make DB Vista the sexy feature it deserves to be!

Adobe Analytics, Analytics Strategy, Digital Analytics Community, Industry Analysis

Analytics Demystified Case Study with Elsevier

For ten years at Analytics Demystified we have more or less done marketing the same way: by simply being the best at the work we do and letting people come to us.  That strategy has always worked for us, and to this day  continues to bring us incredible clients and opportunities around the world.  Still, when our client at Elsevier said he would like to do a case study … who were we to say no?

Elsevier, in case you haven’t heard of them, are a multi-billion dollar multinational which has transformed from a traditional publishing company to a modern-day global information analytics business.  They are essentially hundreds of products and companies within a larger organization, and each needs high quality analytics to help shape business decision making.

After searching for help and discovering that companies say they provide “Adobe consulting services” … without actually having any real-world experience with the type of global challenges facing Elsevier, the company’s Senior Vice President of Shared Platforms and Capabilities found our own Adam Greco.  Adam was exactly what they needed … and I will let the case study tell the rest of the story.

Free PDF download: The Demystified Advantage: How Analytics Demystified Helped Elsevier Build a World Class Analytics Organization

Adobe Analytics, Featured

Virtual Report Suites and Data Sources

Lately, I have been seeing more and more Adobe Analytics clients moving to Virtual Report Suites. Virtual Report Suites are data sets that you create from a base Adobe Analytics report suite that differ from the original by either limiting data by a segment or making other changes to it, such as changing the visit length. Virtual Report Suites are handy because they are free, whereas sending data to multiple report suites in Adobe Analytics costs more due to increased server calls. The Virtual Report Suite feature of Adobe Analytics has matured since I originally wrote about it back in 2016. If you are not using them, you probably should be by now.

However, when some of my clients have used Virtual Report Suites, I have noticed that there are some data elements that tend to not transition from the main report suite to the Virtual Report Suite. One of those items is data imported via Data Sources. In last week’s post, I shared an example of how you can import external metrics into your Adobe Analytics implementation via Data Sources, but there are many data points that can be imported, including metrics from 3rd party apps. One of the more common 3rd party apps that my clients integrate into Adobe Analytics is an e-mail application. For example, if your organization uses Responsys to send and report on e-mails sent to customers, you may want to use the established Data Connector that allows you to import your e-mail metrics into Adobe Analytics, such as:

  • Email Total Bounces
  • Email Sent
  • Email Delivered
  • Email Clicked
  • Email Opened
  • Email Unsubscribed

Once you import these metrics into Adobe Analytics, you can see them like any other metrics…

…and combine them with other metrics:

In this case, I am viewing the offline e-mail metrics alongside the online metric of Orders and have also created a new Calculated Metric that combines both offline and online metrics (last column). So far so good!

But watch what happens if I now view the same report in a “UK Only” Virtual Report Suite that is based off of this main report suite:

Uh oh…I just lost all of my data! I see this happen all of the time and usually my clients don’t even realize that they have told their internal users to use a Virtual Report Suite that is missing all Data Source metrics.

So why is the data missing? In this case the Virtual Report Suite is based upon a geographic region segment:

This means that any hits with an eVar16 value of “UK” will make it into the Virtual Report Suite. Since all online data has an eVar16 value, it is successfully carried over to the Virtual Report Suite.  However, when the Data Sources metrics were imported (in this case Responsys E-mail Metrics), they did not have an eVar16 value, so they are not included. That is why these metrics zeroed out when I ran the report for the Virtual Report Suite. In the next section, I will explain how to fix this so that all of your Data Source metrics are included in the Virtual Report Suite.

Long-Term Approach (Data Sources File)

The best long-term way to fix this problem is to change your Data Sources import files to make sure that you add data that will match your Virtual Report Suite segment. In this case, that means making sure each row of data imported has an eVar16 value. If you add a column for eVar16 to the import, any rows that contain “UK” will be included in the Virtual Report Suite. For this e-mail data, it means that your e-mail team would have to know which region each e-mail is associated with, but that shouldn’t be a problem. Unfortunately, it does require a change to your daily import process, but this is the cleanest way to make sure your Data Sources data flows correctly to your Virtual Report Suite.

Short-Term Approach (Segmentation)

If, however, making a change to your daily import process isn’t something that can happen soon (such as when data is being imported from an internal database that takes time to change), there is an easy workaround that will allow you to get Data Sources data immediately. This approach is also useful if you want to retroactively include Data Sources metrics that were imported before you make the preceding fix.

This short-term solution involves modifying the Segment used to pull data into the Virtual Report Suite. By adding additional criteria to your Segment definition, you can manually select which data appears in the Virtual Report Suite. In this case, the Responsys e-mail metrics don’t have an eVar16 value, but you can add them to the Virtual Report Suite by finding another creative way to include them in the segment. For example, you can add an OR statement that includes hits where the various Responsys metrics exist like this:
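(The segment screenshot isn’t reproduced here; conceptually, the amended definition looks like the following, with the exact metric names depending on your Data Connector.)

Hits where eVar16 equals "UK"
   OR Email Sent exists
   OR Email Delivered exists
   OR Email Opened exists
   OR Email Clicked exists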

Once you save this new segment, your Virtual Report Suite will now include all of the data it had before and the Responsys data so the report will now look like this:

Summary

So this post is just a reminder to make sure that all of your imported Data Source metrics have made it into your shiny new Virtual Report Suites and, if not, how you can get them to show up there. I highly suggest you fix the issue at the source (Data Sources import file), but the segmentation approach will also work and helps you see data retroactively.

Adobe Analytics, Featured

Dimension Penetration %

Last week, I explained how the Approximate Count Distinct function in Adobe Analytics can be used to see how many distinct dimension values occur within a specified timeframe. In that post, I showed how you could see how many different products or campaign codes are viewed without having to count up rows manually and how the function provided by Adobe can then be used in other Calculated Metrics. As a follow-on to that post, in this post, I am going to share a concept that I call “dimension penetration %.” The idea of dimension penetration % is that there may be times in which you want to see what % of all possible dimension values are viewed or have some other action taken. For example, you may want to see what % of all products available on your website were added to the shopping cart this month. The goal here is to identify the maximum number of dimension values (for a time period) and compare that to the number of dimension values that were acted upon (in the same time period). Here are just some of the business questions that you might want to answer with the concept of dimension penetration %:

  • What % of available products are being viewed, added to cart, etc…?
  • What % of available documents are being downloaded?
  • What % of BOPIS products are picked up in store?
  • What % of all campaign codes are being clicked?
  • What % of all content items are viewed?
  • What % of available videos are viewed?
  • What % of all blog posts are viewed?

As you can see, there are many possibilities, depending upon the goals of your digital property. However, Adobe Analytics (and other digital analytics tools), only capture data for items that get “hits” in the date range you select. They are not clairvoyant and able to figure out the total sum of available items. For example, if you wanted to see what % of all campaign tracking codes had at least one click this month, Adobe Analytics can show you how many had at least one click, but it has no way of determining what the denominator should be, which is the total number of campaign codes you have purchased. If there are 1,000 campaign codes that never receive a click in the selected timeframe, as far as Adobe Analytics is concerned, they don’t exist. However, the following will share some ways that you can rectify this problem and calculate the penetration % for any Adobe Analytics dimension.

Calculating Dimension Penetration %

To calculate the dimension penetration %, you need to use the following formula:
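In text form:

Dimension Penetration % = Unique Dimension Values Acted Upon (Approximate Count Distinct) / Total # of Dimension Values Available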

For example, if you wanted to see what % of all blog posts available have had at least one view this month, you would calculate this by dividing the unique count of viewed blog posts by the total number of blog posts that could have been viewed. To illustrate this, let’s go through a real scenario. Based upon what was learned in the preceding post, you now know that it is easy to determine the numerator (how many unique blog posts were viewed) as long as you are capturing the blog post title or ID in an Adobe Analytics dimension (eVar or sProp). This can be done using the Approximate Count Distinct function like this:

Once this new Calculated Metric has been created, you can see how many distinct blog posts are viewed each day, week, month, etc…

So far, so good! You now have the numerator of the dimension penetration % formula completed.  Unfortunately, that was the easy part!

Next, you have to figure out a way to get the denominator. This is a bit more difficult and I will share a few different ways to achieve this. Unfortunately, finding out how many dimension values exist (in this scenario, total # of available blog posts), is a manual effort. Whether you are trying to identify the total number of blog posts, videos, campaign codes, etc. you will probably have to work with someone at your company to figure out that number. Once you find that number, there are two ways that you can use it to calculate your dimension penetration %.

Adobe ReportBuilder Method

The first approach is to add the daily total count of the dimension you care about to an Excel spreadsheet and then use Adobe ReportBuilder to import the Approximate Count Distinct Calculated Metric created above by date. By importing the Approximate Count Distinct metric by date and lining it up with your total numbers by date, you can easily divide the two and compute the dimension penetration % as shown here:

In this case the items with a green background were inputted manually and mixed with an Adobe Analytics data block. Then formulas were added to compute the percentages.

However, you have to be careful not to SUM the daily Approximate Count numbers since the sum will be different than the Approximate Count of the entire month. To see an accurate count of unique blog posts viewed in the month of April, for example, you would need to create a separate data block like this:

Data Sources Method

The downside of the Adobe ReportBuilder method is that you have to leave Adobe Analytics proper and cannot take advantage of its web-based features like Dashboards, Analysis Workspace, Alerts, etc. Plus, it is more difficult to share the data with your other users. If you want to keep your users within the Adobe Analytics interface, you can use Data Sources. Shockingly, Data Sources has not changed that much since I blogged about it back in 2009! Data Sources is a mechanism to import metrics that don’t take place online into Adobe Analytics. It can be used to upload any number you want as long as you can tie that number to a date. In this case, you can use Data Sources to import the total number of dimension items that exist on each day.

To do this, you need to use the administration console to create a new Data Source. There is a wizard that walks you through the steps needed, which include creating a new numeric success event that will store your data. The wizard won’t let you complete the process unless you add at least one eVar, but you can remove that from the template later, so just pick any one if you don’t plan to upload numbers with eVar values. In this case, I used Blog Post Author (eVar3) in case I wanted to break out Total Blog Posts by Author. Here is what the wizard should look like when you are done:

Once this is complete, you can download your template and create an FTP folder to which you will upload files. Next, you will create your upload file that has date and the total number of blog posts for each date. Again, you will be responsible for identifying these numbers. Here is what a sample upload file might look like using the template provided by Adobe Analytics:
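(As a rough illustration only: the exact column headers come from the template the wizard generates, and the counts below are made-up example numbers.)

Date	Count of Blog Posts	Blog Post Author
04/01/18	1150
04/02/18	1152
04/03/18	1155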

Next, you upload your data via FTP (you can read how to do this by clicking here). A few important things to note are that you cannot upload more than 90 days of data at one time, so you may have to upload your historical numbers in batches. You also cannot upload data for dates in the future, so my suggestion would be to upload all of your historical data and then upload one row of data (yesterday’s count) each day in an automated FTP process. When your data has successfully imported, you will see the numbers appear in Adobe Analytics just like any other metrics (see below). This new Count of Blog Posts metric can also be used in Analysis Workspace.

Now that you have the Count of Blog Posts that have been viewed for each day and the count of Total Blog Posts available for each day, you can [finally] create a Calculated Metric that divides these two metrics to see your daily penetration %:
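In text form, that calculated metric is:

Blog Post Penetration % = APPROXIMATE COUNT DISTINCT(Blog Post Title) / Count of Blog Posts [e8]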

This will produce a report that looks like this:

However, this report will not work if you change it to view the data by something other than day, since the Count of Blog Posts [e8] metric is not meant to be summed (as mentioned in the ReportBuilder method). If you do change it to report by week, you will see this:

This is obviously incorrect. The first column is correct, but the second column is drastically overstating the number of available blog posts! This is something you have to be mindful of in this type of analysis. If you want to see dimension penetration % by week or month, you have to do some additional work. Let’s look at how you can view this data by week (special thanks to Urs Boller, who helped me with this workaround!). One method is to identify how many dimension items existed yesterday and use that as the denominator. Unfortunately, this can be problematic if you are looking at a long timeframe and if many additional items have been added. But if you want to use this approach, you can create this new Calculated Metric to see yesterday’s # of blog posts:

Which produces this report:

As you can see, this approach treats yesterday’s total number as the denominator for all weeks, but if you look above, you will see that the first week only had 1,155 posts, not 1,162. You could make this more precise by adding an IF statement to the Calculated Metric and using a weekly number or, if you are crazy, adding 31 IF statements to grab the exact number for each date.

The other approach you can take is to simply divide the incorrectly summed Count of Blog Posts [e8] metric by 7 when reporting by week or by 30 when reporting by month. This will give you an average number of blog posts that existed, which looks like this:

This approach produces penetration % numbers similar to the other approach and will work best if you use full weeks or full months (in this case, I started with the first full week in January).

Automated Method (Advanced)

If you decide that finding out the total # of items for each dimension is too complicated (or if you are just too busy or lazy to find it!), here is an automated approach to approximating this information. However, this approach will not be 100% accurate and can only be used for dimension items that will be persistent on your site from the day they are added. For example, you cannot use the following approach to identify the total # of campaign codes, since they come and go regularly.  But you can use the following approach to estimate the total # of values for items that, once added, will probably remain, such as files, content items or blog posts (as in this example).

Here is the approach. Step one is to create a date range that spans all of your analytics data like this:

You will also want to create another Date Range for the time period you want to see for recent activity. In this case, I created one for the Current Month To Date.

Next, create Segments for both of these Date Ranges (All Dates & Current Month to Date):

Next, create a new Calculated Metric that divides the Current Month Approximate Count Distinct of Blog Posts by the All Dates Approximate Count Distinct of Blog Posts:
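Conceptually, that metric is just the ratio of the two distinct counts, each restricted by one of the segments created above:

Blog Post Penetration % = Approximate Count Distinct (Blog Post) within “Current Month to Date” ÷ Approximate Count Distinct (Blog Post) within “All Dates”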

Lastly, create a report like this in Analysis Workspace:

By doing this, you are letting Adobe Analytics tell you how many dimension items you have (# of total blog posts in this case) by seeing the Approximate Count Distinct over all of your dates. The theory is that over a large timeframe all (or most) of your dimension items will be viewed at least once. In this case, Adobe Analytics has found 1,216 blog posts that have received at least one view since 1/1/16. As I stated earlier, this may not be exact, since there may be dimension items that are never viewed, but this approach allows you to calculate dimension penetration % in a semi-automated manner.

Lastly, if you wanted to adjust this to look at a different time period, you would drag a different date range container onto the first column and then make another copy of the third column that uses the same date range, as shown in the bottom table:

Adobe Analytics, Featured

Approximate Count Distinct Function – Part 1

In Adobe Analytics, there are many advanced functions that can be used in Calculated Metrics. Most of the clients I work with have only scratched the surface of what can be done with these advanced functions. In this post, I want to spend some time discussing the Approximate Count Distinct function in Adobe Analytics and in my next post, I will build upon this one to show some ways you can take this function to the next level!

There are many times when you want to know how many rows of data exist for an eVar or sProp (dimension) value. Here are a few common examples:

  • How many distinct pages were viewed this month?
  • How many of our products were viewed this month?
  • How many of our blog posts were viewed this month?
  • How many of our campaign tracking codes generated visits this month?

As you can see, the possibilities are boundless. But the overall gist is that you want to see a count of unique values for a specified timeframe. Unfortunately, there has traditionally not been a great way to see this in Adobe Analytics. I am ashamed to admit that my main way to see this has always been to open the dimension report, scroll down to the area that lets you go to page 2, 3, 4 of the results, enter 50,000 to jump to the last page of results, and write the bottom row number down on a piece of paper! Not exactly what you’d expect from a world-class analytics tool! It is a bit easier if you use Analysis Workspace, since you can see the total number of rows here:

To address this, Adobe added the Approximate Count Distinct function that allows you to pick a dimension and will calculate the number of unique values for the chosen timeframe. While the function isn’t exact, it is designed to be no more than 5% off, which is good enough for most analyses. To understand this function, let’s look at an example. Let’s imagine that you work for an online retailer and you sell a lot of products. Your team would like to know how many of these products are viewed at least once in the timeframe of your choosing. To do this, you would simply create a new calculated metric in which you drag over the Approximate Count Distinct function and then select the dimension (eVar or sProp) that you are interested in, which in this case is Products:

Once you save this Calculated Metric, it will be like all of your other metrics in Adobe Analytics. You can trend it and use it in combination with other metrics. Here is what it might look like in Analysis Workspace:

Here you can see the number of distinct products visitors viewed by day for the month of April. I have included a Visits column for perspective, along with a new Calculated Metric that divides the distinct count of products by Visits and uses conditional formatting to help visualize the data. Here is the formula for the third column:
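Approximate Count Distinct (Products) ÷ Visits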

The same process can be used with any dimension you are interested in within your implementation (i.e. blog posts, campaign codes, etc.).

Combining Distinct Counts With Other Dimensions

While the preceding information is useful, there is another way to use the Approximate Count Distinct function that I think is really exciting. Imagine that you are in a meeting and your boss asks you how many different products each of your marketing campaigns has gotten visitors to view. For example, does campaign X get people to view 20 products and campaign Y get people to view 50 products? For each visit from each campaign, how many products are viewed? Which of your campaigns gets people to view the most products? You get the gist…

To see this, what you really want to do is use the newly created Approximate Count of Products metric in your Tracking Code or other campaign reports. The good news is that you can do that in Adobe Analytics. All you need to do is open one of your campaign reports and add the Calculated Metric we created above to the report like this:

Here you can see that I am showing how many click-throughs and visits each campaign code received in the chosen timeframe. Next, I am showing the Approximate Count of Products for each campaign code and also dividing this by Visit. Just for fun, I also added how many Orders each campaign code generated and divided that by the Approximate Count of Products to see what portion of products viewed from each campaign code were purchased.

You can also view this data by any of your SAINT Classifications. In this case, if you have your campaign Tracking Codes classified by Campaign Name, you can create the same report for Campaign Name:

In this case, you can see that, for example, the VanityURL Campaign generated 19,727 Visits and 15,599 unique products viewed.

At this point, if you are like me you are saying to yourself: “Does this really work?  That seems to be impossible…” I was very suspicious myself, so if you don’t really believe that this function works (especially with classifications), here is a method that Jen Lasser from Adobe told me you can use to check things out:

  1. Open up the report of the dimension for which you are getting Approximate Distinct Counts (in this case Products)
  2. Create a segment that isolates visits for one of the rows (in the preceding example, let’s use Campaign Name = VanityURL)
  3. Add this new segment to the report you opened in step 1 (in this case Products) and use the Instances metric (which in this case is Product Views)
  4. Look at the number of rows in Analysis Workspace (as shown earlier in post) or use the report page links at the bottom to go to the last page of results and check the row number (if using old reports) as shown here:

Here you can see that our value in the initial report for “VanityURL” was 15,599 and the largest row number was 15,101, which puts the value in the classification report about 3% off.

Conclusion

As you can see, the Approximate Count Distinct function (link to Adobe help for more info) can add many new possibilities to your analyses in Adobe Analytics. Here, I have shown just a few examples, but depending upon your business and site objectives, there are many ways you can exploit this function to your advantage. In my next post, I will take this one step further and show you how to calculate dimension penetration, or what % of all of your values received at least one view over a specified timeframe.

Adobe Analytics, Featured

Chicago Adobe Analytics “Top Gun” Class – May 24, 2018

I am pleased to announce my next Adobe Analytics “Top Gun” class, which will be held May 24th in Chicago.

For those of you unfamiliar with my Adobe Analytics “Top Gun” class, it is a one-day crash course on how Adobe Analytics works behind the scenes based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering, or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate everyday business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects of Adobe Analytics, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Here are some quotes from recent class attendees:

I have purposefully planned this class for a time of year when Chicago often has nice weather in case you want to spend the weekend!  There is also a Cubs day game the following day!

To register for the class, click here. If you have any questions, please e-mail me. I hope to see you there!

Conferences/Community, Featured, Testing and Optimization

2018 Adobe Summit – the testing guys perspective

The 2018 Adobe Summit season has officially closed.  This year marked my 11th Summit, with my first dating back to 2008 when Omniture acquired Offermatica, where I was an employee at the time.  I continue to attend Summit for a variety of reasons, but I especially enjoy spending time with some of my clients and catching up with many old friends.  I also enjoy geeking out hardcore with the product and product marketing teams.

While I still very much miss the intimacy and the Friday ski day that Salt Lake City offered, I am warming to Las Vegas much more than I had anticipated.  I got the sense that others were as well.  I also just learned that after Summit this year, quite a few folks created their own Friday Funday, if you will (totally down for Friday Motorcycle day next year!). The conference is bigger than ever, with reported attendee numbers around 13,000.  Topics, or Adobe Products, covered have grown quite a bit too.  I am not sure I caught the whole list, but here are the products or topics I saw covered at Summit:

  • Advertising Cloud
  • Analytics
  • Audience Manager
  • Campaign
  • Cloud Platform
  • Experience Manager
  • Primetime
  • Sensei
  • Target

My world of testing mainly lives in the Adobe Target, Adobe Analytics and to varying degrees, Adobe Audience Manager, Adobe Experience Manager, and Adobe Launch worlds.  It was cool to see and learn more about these other solutions but there was plenty in my testing and personalization world to keep me busy.  I think I counted 31 full sessions and about 7 hands-on labs for testing.  Here is a great write up of the personalization sessions this year broken down by category that was very helpful.

The conference hotel and venue are quite nice and make hosting 13,000 people feel like it is no big deal given its size.  As nice as the hotel is, I still stay around the corner at the Westin.  I like getting away and enjoy the walk to and from the event.  And boy did I walk this year.  According to my Apple Watch, in the four days (Monday – Thursday), I logged 63,665 steps and a mind-blowing 33.38 miles.

The sessions that I focused on were the AI ones, given my considerable work with Automated Personalization, Auto-Allocate, and Recommendations.  I also participated in a couple of sessions around optimization programs given my work with MiaProva.

Below is my week and the lessons I learned for next year.


Summit week

Monday

I made a mistake this year and should have come in earlier on Monday or even Sunday for that matter.  Monday is the Adobe Partner day and they have quite a few fun things to learn about with regard to the partnership and Adobe in general.  It is also a nice time to hang out with the product teams at Adobe – before the storm of Summit begins.  In fact, I was able to make it to one great event that evening at Lavo in the Venetian.  Over the last couple of years at least, organizations that use Adobe solutions and agencies that help those organizations use Adobe solutions can be nominated for awards based on the impact of using Adobe solutions.  That night, attendees got to hear about some great use cases including one from Rosetta Stone where they used testing to minimize any detriment going from boxed software to digital experiences (a very familiar story to Adobe:).  If you find yourself part of a team that does something really cool or impactful with Adobe Experience Cloud solutions, consider nominating it for next year!

Also on that Monday is something called UnSummit.  I have gone to UnSummit a few times and always enjoyed it.  UnSummit is a great gathering of smart and fun people that share interesting presentations.  Topics vary but they are mainly about Analytics and Testing which is reminiscent of the old days at the Grand America in Salt Lake City.  I am not 100% sure why it is called UnSummit as that could leave the impression that it is a protest or rejection of Summit.  I can assure you that it isn’t or at least I’ve never heard of any bashing or protest.  In fact, all attendees are in town because of Summit.  Again, great event and if you have the time next year, I recommend checking it out.

Tuesday

Opening day, if you will.  The general session, followed by many sessions and labs.  This sounds silly, but I always come early to have breakfast at the conference.  I have had many a great conversation and met so many interesting people by simply joining them at the table.  I do this for all the lunches each day as well.  We are all pretty much there for similar reasons and have similar interests, so it is nice to geek out a bit and network as well.

I also enjoyed checking out the vendor booths this year.  Lots of great conversations, and it was cool to run into many former colleagues and friends.  Southwest Airlines even had a booth there, but I am not sure why!  Maybe to market to thousands of business folks?

On Tuesday nights, Adobe Target usually hosts an event for Adobe Target users to get together at.  This year it was at the Brooklyn Bowl which is on the Linq Promenade, only a few blocks from the hotel.  A very cool area if you haven’t been that way.  They also have an In-n-out there too!

This event was great as I got to spend some time with some of my clients and enjoy some good food and music.  There was a live band there that night so it was a bit loud but still a great venue and event.  Lots of folks got to bowl which was awesome too.  Of the nightly events, I usually enjoy this one the most.

Wednesday

Big day today!  Breakfast networking, a session, the general session and then game time!  I had the honor of presenting a session with Kaela Cusack of Adobe.  We presented on how to power true personalization with Adobe Target and Adobe Analytics.  The session was great as we got to share how organizations are using A4T and the bi-directional flow of data between the two solutions to empower organizations to make use of the data that they had in Adobe Analytics.  Lots of really good feedback and I will be following up here with step by step instructions on how exactly organizations can do this for themselves.  You can watch the presentation here.

After my session Q&A, it was Community Pavilion time which is basically snacks and alcohol in the vendor booth area.  I also met with a couple of customers during this time.

Then it was time for Sneaks.  I had never heard of Leslie Jones before, but she was absolutely hysterical.  She had the crowd laughing like crazy.  There were lots of interesting sneaks, but the one where Launch visually interpreted something and then inserted a tag was the one I found most interesting.  If Launch can receive inputs like that, then there should be no reason why Target can’t communicate or send triggers to Launch as well.  I see some pretty cool use cases with Auto-Allocate, Automated Personalization and Launch here!

After Sneaks it was concert time!  Awesome food, copious amounts of Miller Lite and lots of time to hang with clients and friends.  Here is a short clip of Beck who headlined that night:


Thursday

Last year I made the big mistake of booking a 3 pm flight out of Vegas on Thursday.  It was a total pain to deal with the luggage and I missed out on two really great sessions that Thursday afternoon.  I wasn’t going to make that mistake this year so I flew home first thing on Friday morning which I will do again next year too.

Thursday is a chill day.  I had quite a few meetings for Demystified and MiaProva prospects and attended a few great sessions.  Several people told me that the session called “The Future of Experience Optimization” was their favorite session of all of Summit and that took place on Thursday afternoon.  I was disappointed that I couldn’t attend due to a client meeting but will definitely be watching the video of this session.

Thursday late afternoon and night were all about catching up on email and getting an early night’s rest.  Again, it was much more relaxing not rushing home.  So that was my week, which somehow now feels like it was many weeks ago.

Takeaways

There were many great sessions, far too many to catch live.  Adobe though made every session available here for viewing.

There is quite a bit going on with Adobe Target, and not just from a product and roadmap perspective.  There is a lot of community work taking place as well.  If you work with Target in any way, I recommend subscribing to both Target TV and the Adobe Target Forum.  I was able to meet Amelia Waliany at Adobe Summit this year and she is totally cool and fun.  She runs these two initiatives for Adobe.

There are many changes and updates being made to Adobe Target and these two channels are great for staying up to date and for seeing what others are doing with the Product.  I also highly recommend joining Adobe’s Personalization Thursdays as they go deep with the product and bring in some pretty cool guests from time to time.

Hope to see you next year!


Adobe Analytics, Analytics Strategy, Conferences/Community, General

Don’t forget! YouTube Live event on Adobe Data Collection

March is a busy month for all of us and I am sure for most of you … but what a great time to learn from the best about how to get the most out of your analytics and optimization systems! Next week on March 20th at 11 AM Pacific / 2 PM Eastern we will be hosting our first YouTube Live event on Adobe Data Collection. You can read about the event here or drop us a note if you’d like a reminder the day of the event.

Also, a bunch of us will be at the Adobe Summit in Las Vegas later this month.  If you’d like to connect in person and hear firsthand about what we have been up to please email me directly and I will make sure it happens.

Finally, Senior Partner Adam Greco has shared some of the events he will be at this year … just in case you want to hear first-hand how your Adobe Analytics implementation could be improved.


Featured, Testing and Optimization

Personalization Thursdays

Personalization Thursdays and MiaProva

Personalization Thursdays

Each month, the team at Adobe hosts a webinar series called Personalization Thursdays.  The topics vary but the webinars typically focus on features and capabilities of Adobe Target.  The webinars are well attended and they often go deep technically which leads to many great questions and discussions.  Late last year, I joined one of the webinars where I presented “10 Execution tips to get more out of Adobe Target” and it was very well received!  You can watch that webinar here if you are interested.

Program Management

On Wednesday, March 15th, I have the privilege of joining the team again where I am presenting on “Program Management for Personalization at Scale”.  Here is the outline of this webinar:

Program management has become a top priority for our Target clients as we begin to scale optimization and personalization across a highly matrixed, and often global, organization. It’s also extremely valuable in keeping workspaces discrete and improving the efficiency of rolling out new activities. We’ll share the latest developments in program management that will assist with ideation and roadmap development, as well as make it easier to schedule and manage all your activities on-the-go, with valuable alerts and out-of-the-box stakeholder reports.

I plan on diving into Adobe I/O and how organizations and software can use it to scale their optimization programs.  I will also show how users of MiaProva leverage it to manage their tests from ideation through execution.

You have to register to attend but this webinar is open to everyone.  You can quickly register via this link:  http://bhawk.me/march-15-webinar

Hope to see you there!

Adobe Analytics, Featured

Where I’ll Be – 2018

Each year, I like to let my blog readers know where they can find me, so here is my current itinerary for 2018:

Adobe Summit – Las Vegas (March 27-28)

Once again, I am honored to be asked to speak at the US Adobe Summit. This will be my 13th Adobe Summit in a row and I have presented at a great many of those. This year, I am doing something new by reviewing a random sample of Adobe Analytics implementations and sharing my thoughts on what they did right and wrong. A while ago, I wrote a blog post asking for volunteer implementations for me to review, and I was overwhelmed by how many I received! I have spent some time reviewing these implementations and will share lots of tips and tricks that will help you improve your Adobe Analytics implementations. To view my presentation from the US Adobe Summit, click here.

Adobe Summit – London (May 3-4)

Based upon the success of my session at the Adobe Summit in Las Vegas, I will be coming back to London to present at the EMEA Adobe Summit.  My session will be AN7 taking place at 1:00 pm on May 4th.

DAA Symposium – New York (May 15)

As a board member of the Digital Analytics Association (DAA), I try to attend as many local Symposia as I can. This year, I will be coming to New York to present at the local symposium being held on May 15th. I will be sharing my favorite tips and tricks for improving your analytics implementation.

Adobe Insider Tour (May & September)

I will be hitting the road with Adobe to visit Atlanta, Los Angeles, Chicago, New York and Dallas over the months of June and September. I will be sharing Adobe Analytics tips and tricks and trying something new called Adobe Analytics implementation improv!  Learn more by clicking here.

Adobe Analytics “Top Gun” Training – Chicago/Austin (May 24, October 17)

Each year I conduct my advanced Adobe Analytics training class privately for my clients, but I also like to do a few public versions for those who don’t have enough people at their organization to justify a private class. This year, I will be doing one class in Chicago and one in Austin. The Chicago class will be at the same downtown Chicago venue as the last two years. The date of the class is May 24th (when the weather is a bit warmer and the Cubs are in town the next day for an afternoon game!). You can register for the Chicago class by clicking here.

In addition, for the first time ever, I will be teaming up with the great folks at DA Hub to offer my Adobe Analytics “Top Gun” class in conjunction with DA Hub! My class will be one of the pre-conference training classes ahead of this great conference. This is also a great option for those on the West Coast who don’t want to make the trek into Chicago. To learn more and register for this class and DA Hub, click here.

Marketing Evolution Experience & Quanties  – Las Vegas (June 5-6)

As you may have heard, the eMetrics conference has “evolved” into the Marketing Evolution Experience. This new conference will be in Las Vegas this summer and will also surround the inaugural DAA Quanties event. I will be in Vegas for both of these events.

ObservePoint Validate Conference – Park City, Utah (October 2-5)

Last year, ObservePoint held its inaugural Validate conference and everyone I know who attended raved about it. So this year, I will be participating in the 2nd ObservePoint Validate conference taking place in Park City, Utah. ObservePoint is one of the vendors I work with the most and they definitely know how to put on awesome events (and provide yellow socks!).

DA Hub – Austin (October 18-19)

In addition to doing the aforementioned training at the DA Hub, I will also be attending the conference itself. It has been a few years since I have been at this conference and I look forward to participating in its unique “discussion” format.


Adobe Analytics, Tag Management, Technical/Implementation

Adobe Data Collection Demystified: Ten Tips in Twenty(ish) Minutes

We are all delighted to announce our first of hopefully many live presentations on the YouTube platform coming up on March 20th at 11 AM Pacific / 2 PM Eastern!  Join Josh West and Kevin Willeitner, Senior Partners at Analytics Demystified and recognized industry leaders on the topic of analytics technology, and learn some practical techniques to help you avoid common pitfalls and improve your Adobe data collection.  Presented live, Josh and Kevin will touch on aspects of the Adobe Analytics collection process from beginning to end with tips that will help your data move through the process more efficiently and give you some know-how to make your job a little easier.

The URL for the presentation is https://www.youtube.com/watch?v=FtJ40TP1y44 and if you’d like a reminder before the event please just let us know.

Again:

Adobe Data Collection Demystified
Tuesday, March 20th at 11 AM Pacific / 2 PM Eastern
https://www.youtube.com/watch?v=FtJ40TP1y44

Also, if you are attending this year’s Adobe Summit in Las Vegas … a bunch of us will be there and would love to meet in person. You can email me directly and I will coordinate with Adam Greco, Brian Hawkins, Josh West, and Kevin Willeitner to make sure we have time to chat.

Adobe Analytics, Featured

Free Adobe Analytics Review @ Adobe Summit

For the past seven years (and many years prior to that while at Omniture!), I have reviewed/audited hundreds of Adobe Analytics implementations. In most cases, I find mistakes that have been made and things that organizations are not doing that they should be. Both of these issues impede the ability of organizations to be successful with Adobe Analytics. Poorly implemented items can lead to bad analysis and missed implementation items represent an opportunity cost for data analysis that could be done, but isn’t. Unfortunately, most organizations “don’t know what they don’t know” about implementing Adobe Analytics, because the people working there have only implemented Adobe Analytics once, or possibly twice, versus people like me who do it for a living. In reality, I see a lot of the same common mistakes over and over again and I have found that showing my clients what is incorrect and what can be done instead is a great way for them to learn how to master Adobe Analytics (something I do in my popular Adobe Analytics “Top Gun” Class).

Therefore, at this year’s Adobe Summit in Las Vegas, I am going to try something I haven’t done in any of my past Summit presentations. This year, I am asking for volunteers to have me review your implementation (for free!) and share with the audience a few things that you need to fix or net-new things you could do to improve your Adobe Analytics implementation. In essence, I am offering to do a free review of your implementation and give you some free consulting! The only catch is that when I share my advice, it will be in front of a live audience so that they can learn along with you. In doing this, here are some things I will make sure of:

  • I will work with my volunteers to make sure that no confidential data is shown and will share my findings prior to the live presentation
  • I will not do anything to embarrass you about your current implementation. In fact, I have found that most of the bad things I find are implementation items that were done by people who are no longer part of the organization, so we can blame it on them 😉
  • I will attempt to review a few different types of websites so multiple industry verticals are represented
  • You do not have to be at Adobe Summit for me to review your implementation

So….If you would like to have me do a free review of your implementation, please send me an e-mail or message me via LinkedIn and I will be in touch.


Analysis, Presentation

10 Tips for Presenting Data

Big data. Analytics. Data science. Businesses are clamoring to use data to get a competitive edge, but all the data in the world won’t help if your stakeholders can’t understand, or if their eyes glaze over as you present your incredibly insightful analysis. This post outlines my top ten tips for presenting data.

It’s worth noting that these tips are tool agnostic—whether you use Data Studio, Domo, Tableau or another data viz tool, the principles are the same. However, don’t assume your vendors are in lock-step with data visualization best practices! Vendor defaults frequently violate key principles of data visualization, so it’s up to the analyst to put these principles in practice.

Tip #1: Recognize That Presentation Matters

The first step to presenting data is to understand that how you present data matters. It’s common for analysts to feel they’re not being heard by stakeholders, or that their analysis or recommendations never generate action. The problem is, if you’re not communicating data clearly for business users, it’s really easy for them to tune out.

Analysts may ask, “But I’m so busy with the actual work of putting together these reports. Why should I take the time to ‘make it pretty’?”

Because it’s not about “making things pretty.” It’s about making your data understandable.

My very first boss in Analytics told me, “As an analyst, you are an information architect.” It’s so true. Our job is to take a mass of information and architect it in such a way that people can easily comprehend it.

… Keep reading on ObservePoint‘s blog …

Featured

Podcasts!

I am a podcast addict! I listen to many podcasts to get my news and for professional reasons. Recently, I came across a great podcast called Everyone Hates Marketers, by Louis Grenier. Louis works for Hotjar, which is a technology I wrote about late last year. His podcast interviews some of the coolest people in Marketing and attempts to get rid of many of the things that Marketers do that annoy people. Some of my favorite episodes were the ones with Seth Godin, DHH from Basecamp and Rand Fishkin. This week, I am honored to be on the podcast to talk about digital analytics. You can check out my episode here, in which I share some of my experiences and stories throughout my 15 years in the field.

There is a lot of great content in the Everyone Hates Marketers podcast and I highly recommend you check it out if you want to get a broader marketing perspective to augment the great stuff you can learn from the more analytics-industry focused Digital Analytics Power Hour.

While I am discussing podcasts, here are some of my other favorites:

  • Recode Decode – Great tech industry updates from the best interviewer in the business – Kara Swisher
  • Too Embarrassed to Ask – This podcast shares specifics about consumer tech stuff with Lauren Goode and Kara Swisher
  • NPR Politics – Good show to keep updated on all things politics
  • How I Built This – Podcast that goes behind the scenes with the founders of some of the most successful companies
  • Masters of Scale – Great podcast by Reid Hoffman about how startups work and practical tips from leading entrepreneurs
  • Rework – Podcast by Basecamp that shares tips about working better

If you need a break from work-related podcasts, I suggest the following non work-related podcasts:

  • West Wing Weekly – This is a fun show to listen to and re-visit each episode of the classic television series “The West Wing”
  • Filmspotting – This one is a bit long, but provides great insights into current and old movies

Here is to a 2018 filled with new insights and learning!

Digital Analytics Community, Team Demystified

Welcome Tim Patten!

While I am running a little behind on sharing this news due to the holidays and some changes in the business, I am delighted to announce that Tim Patten has joined the company as a Partner!

Tim is a great guy, a longtime friend, and an even longer-term member of the digital analytics and optimization community. He and I first met when he was at Fire Mountain Gems down in Medford, Oregon and have had multiple opportunities to work together over the years.

Most recently Tim has been a valuable contributor to our Team Demystified business unit, providing on-the-ground analytics support to Demystified’s biggest and best clients. In that role Tim repeatedly demonstrated the type of work ethic, intensity, and attitude that is a hallmark of a Demystified Partner, and so late last year we made the decision to invite him onto the main stage. Tim will be working side by side with Kevin Willeitner and Josh West helping Demystified clients ensure that they have the right implementation for analytics.

Go check out Tim’s blog and please help me welcome him to the company!

General

Tim Patten joins the team at Analytics Demystified

I am extremely excited to be joining the talented team of Analytics Demystified partners.  I am truly humbled to be working alongside some of the brightest industry veterans Digital Analytics has to offer, and I am looking forward to adding my expertise to the already broad offering of services that we provide. My focus will be on technical and implementation-related projects; however, I will also assist with any analysis needs that my clients have.

While joining the partner team is a new role for me, I have been working with Analytics Demystified for the past three years as a contractor through Team Demystified.  Prior to this, I was Principal Consultant and Director of Global Consulting Services at Localytics, a mobile analytics company.  My 10+ years of experience in Digital Analytics, as a consultant, vendor and practitioner, puts me in a great position to help our clients reach their maximum potential with their analytics investments.

On a personal note, I currently live in the Portland, Oregon area with my girlfriend and energetic Golden Retriever pup.  I’m a native Oregonian and therefore love the outdoors (anything from hiking to camping to snowboarding and fishing).  I’m also a big craft beer enthusiast (as is anyone from the Portland area) and can be found crafting my own concoctions during the weekends.

I can be reached via email at tim.patten(AT)analyticsdemystified.com, via Measure Slack, or Twitter (@timpatten).  

Adobe Analytics, Reporting, Uncategorized

Report Suite ID for Virtual Report Suites

As I have helped companies evaluate and migrate to using virtual report suites (typically to avoid the cost of secondary server calls or to filter garbage data), there comes a point where you need to shift your reports to using the new virtual report suite instead of the old report suite. How you make that update varies a bit depending on what tool is generating the report. In the case of Report Builder reports, the migration takes a low level of effort but can be tricky if you don’t know where to look. So here’s some help with that 🙂

If you have used Report Builder you may be familiar with the feature that lets you use an Excel cell containing a report suite ID as an input to your Report Builder request. Behold, the feature:

Now, it is easy to know what this RSID is if you are the one that set up your implementation and you specified the RSID or you know where to find it in the hit being sent from your site. However, for VRSs you don’t get to specify your RSID as directly. Fortunately Adobe provides a list of all your RSIDs on an infrequently-used page in your admin settings. Just go to Admin>Report Suite Access:

There you will see a list of all your report suites including the VRSs. The VRSs start with “vrs_<company name>” and then are followed by a number and something similar to the initial name you gave your VRS (yellow arrow). Note that your normal report suites are in the list as well (orange arrow).

Now use that value to replace the RSID that you once used in your Report Builder report.

Keep in mind, though, that this list is an admin feature so you may also want to make a copy of this list that you share with your non-admin users…or withhold it until they do your bidding. Up to you.


Adobe Analytics, Featured

NPS in Adobe Analytics

Most websites have specific conversion goals they are attempting to achieve. If you manage a retail site, it may be orders and revenue. Conversely, if you don’t sell products, you might use visitor engagement as your primary KPI. Regardless of the purpose of your website (or app), having a good experience and having people like you and your brand is always important. It is normally a good thing when people use your site/product/app and recommend it to others. One method to capture how often people interacting with your site/brand/app have a good experience is to use Net Promoter Score (NPS). I assume that if you are a digital marketer and reading this, you are already familiar with NPS, but in this post, I wanted to share some ways that you can incorporate NPS scoring into Adobe Analytics.

NPS

The easiest way to add NPS to your site or app is to simply add a survey tool that will pop-up a survey to your users and ask them to provide an NPS. My favorite tool for doing this is Hotjar, but there are several tools that can do this.

Once your users have filled out the NPS survey, you can monitor the results in Hotjar or whichever tool you used to conduct the survey.

But, if you also want to integrate this into Adobe Analytics, there is an additional step that you can take. When a visitor is shown the NPS survey, you can capture the NPS data in Adobe Analytics as well. To start, you would pass the survey identifier to an Adobe Analytics variable (i.e. eVar). This can be done manually or using a tag management system. In this case, let’s assume that you have had two NPS submissions with scores of 7 and 4. Here is what the NPS Survey ID eVar report might look like:

At the same time, you can capture any verbatim responses that users submit with the survey (if you allow them to do this):

This can be done by capturing the text response in another Adobe Analytics variable (i.e. eVar), which allows you to see all NPS comments in Adobe Analytics and, if you want, filter them by specific search keywords (or, if you are low on eVars, you could upload these comments as a SAINT Classification of the NPS Survey ID). Here is what the NPS Comments eVar report might look like when filtered for the phrase “slow:”

Keep in mind that you can also build segments based upon these verbatim comments, which is really cool!

Trending NPS in Adobe Analytics

While capturing NPS Survey ID’s and comments is interesting, you probably want to see the actual NPS scores in Adobe Analytics as well. You can do this by capturing the actual NPS value in a numeric success event in Adobe Analytics when visitors submit the NPS survey. You can also set a counter success event for every NPS survey submission, which allows you to create a calculated metric that shows a trend of your overall NPS.

First, you would setup the success events in the Adobe Analytics administration console:

Let’s look at this using the previously described example. When the first visitor comes to your site and completes an NPS survey with a score of 7, you would set the following upon submission:

s.events="event20=1,event21=7";

When the second visitor completes an NPS survey with a score of 4, you would set the following:

s.events="event20=1,event21=4";

Next, you can build a calculated metric that computes your overall NPS. Here is the standard formula for computing NPS using a scale of 1-10:
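NPS = ((# of Promoters – # of Detractors) ÷ # of Responses) x 100, where on this scale promoters are scores of 9-10, passives are 7-8, and detractors are 6 or below.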

In our scenario, the NPS would be -50, since we had one detractor and no promoters, computed as ((0-1)/2) x 100 = -50.

To create the NPS metric in Adobe Analytics, you first need to create segments to isolate the number of Promoters and Detractors you have in your NPS surveys. This can be done by building a segment for Promoters…

…and a segment for Detractors:

Once these segments have been created, they can be applied to the following calculated metric formula in Adobe Analytics:

Once you have created this calculated metric, you would see a trend report that looks like this (assuming only the two visitors mentioned above):

This report only shows the two scores from one day, so if we pretend that the previous day, two visitors had completed NPS surveys and provided scores of 9 & 10 respectively (a score of 100), the daily trend would look like this:

If we looked at the previous report with just two days (November 3rd & 4th) for a longer duration (i.e. week, month, year), we would see the aggregate NPS Score:

In this case, the aggregate NPS score for the week (which in this case just includes two days) is 25 computed as: ((2 Promoters – 1 Detractor)/4 Responses) x 100 = 25.

If we had data for a longer period of time (i.e. October), the trend might look like this (shown in Analysis Workspace):

And if we looked at the October data set by week, we would see the aggregate NPS (shown in Analysis Workspace):

Here we can see that there is a noticeable dip in NPS around the week of October 22nd. You can break this down by the NPS Comments eVar to see if there are comments telling us why the scores dipped:

In this case, the comments let us know that the blog portion of the website was having issues, which hurt our overall NPS.

One side note about the overall implementation of this. In the preceding scenario I built the NPS as a calculated metric, but I could have also used the Promoter and Detractor segments to create two distinct calculated metrics (Promoters and Detractors)…

…which would allow me to see a trend of Promoters (or Detractors) over time:

Alternatively, you could also choose to set success events for Promoter Submissions and Detractor Submissions (in real-time) instead of using a segment to create these metrics. Doing this would require using three success events instead of two and would remove the need for the segments, but the results would be the same.

Summary

As you can see, this is a fair amount of work. So why would you want to do all of this if you already have NPS data in your survey tool (i.e. Hotjar)? For me, having NPS data in Adobe Analytics provides the following potential additional benefits:

  • Build a segment of sessions that had really good or really bad NPS scores and view the specific paths visitors have taken to see if there are any lessons to be learned
  • Build a segment of sessions that had really good or really bad NPS scores and see the differences in cart conversion rates
  • Look at the retention of visitors with varying NPS scores
  • Identify which marketing campaigns are producing visitors with varying NPS scores
  • Easily add NPS trend to an existing Adobe Analytics dashboard
  • Easily correlate other website KPI’s with NPS score to see if there are any interesting relationships (i.e. does revenue correlate to NPS score?)
  • Use NPS score as part of contribution analysis
  • Create alerts for sudden changes in NPS
  • Identify which [Hotjar] sessions (using the captured Survey ID) you want to view recordings for, based upon behavior found in Adobe Analytics

These are just some ideas that I have thought about for incorporating NPS into your Adobe Analytics implementation. If you have any other ideas, feel free to leave a comment here.

Featured, Testing and Optimization

Simple and oh so very sweet

Informatica is a very large B2B company and one of the most successful players in the data management market.  Informatica also has an impressive testing and optimization program and they make heavy use of data and visitor behavior to provide the ideal experience for their digital consumers.

Like most spaces, the B2B space offers countless opportunities for testing and learning.  The more data that you have, the more opportunities exist for quantifying personalization efforts through targeted tests and for machine learning through solutions like Adobe’s Automated Personalization tools.  In fact, many B2B optimization programs are focused on the knowns and the unknowns, with integrations between the testing solution(s) and demand generation platforms, as I wrote about a few years ago.

In a world with relatively complex testing options available with first-party data, third-party data such as Demandbase (a great data source for B2B), and limitless behavior data, it is important not to lose sight of simpler tests.  Just because rich data is available and complex testing capabilities exist doesn’t mean the more basic tests and user experience tests should be deprioritized.  It is ideal for organizations to have a nice balance of targeted advanced tests along with an array of more general tests, as it gives the organization a wider basket to catch opportunities to learn more about what is important to their digital consumers.   Informatica knows this, and here is a very successful user experience test that they recently ran.

Informatica was recently named a leader in Gartner’s Magic Quadrant report, and the testing team wanted to optimize how to get this report in front of the digital consumers of the product pages on their website.  Many different ideas were discussed, and the user experience team decided to use a sticky banner that would appear at the bottom of the page.  Two key concepts were introduced into this test: the first being the height of the banner and the second being the inclusion of an image.  Both sticky banners also allow the user to close (X out of) the banner.

The Test Design

Here is what Experience A or the Control test variant looked like (small sticky footer and no image) on one of their product pages:

and the Experience B test variant on the same product page (increased height and inclusion of image):


And up close:

vs.


The primary metric for this test was Form Completes which translates to visitors clicking on the banner and then filling out the subsequent form on the landing page.  We also set up the test to report on these additional metrics:

  • Clicks on the “Get the Reports” CTA in banner
  • Clicking on the Image (which led to the same landing page)
  • Clicking on the “X” which made the banner go away

The Results

And here is what was learned.  For the “Get the Reports” call to action in both footers:

While our primary test metric is “Form Completes”, this was a great finding and learning.  There was a 32.42% increase in clicks on the same call to action, either because of the increased height or the image.

For the “Image Click”:

This was not surprising, since visitors could technically only click on the image in Experience B; the image didn’t exist for Experience A.  Some might wonder why this metric was even included in the test setup, but by doing so, we were able to learn something pretty interesting.   The primary metric is “Form Completes” and in order to get a form complete we need to get visitors to that landing page.  The way that visitors get to that landing page is by either clicking on the “Get the Report” call to action or by clicking on the image.  We wanted to see what percentage of “clickers” for Experience B came from the Image vs. the “Get the Report” call to action.  Turns out 52.6% of clicks in Experience B came from the Image vs. the call to action, which had 47.5% of the clicks.  Keep in mind though, while the image did marginally better in clicks, the same call to action in Experience B got a 32.42% increase vs. Experience A.  The image clickers represented an additional net gain of possible form completers!

For the “X” or close clickers:

This was another interesting finding.  There was a significant increase, over 127%, in visitors clicking on the X for Experience B.  This metric was included to see engagement rates with the “X” and to compare those rates with the other metrics.  We found that engagement with the “X” was significantly higher, almost tenfold, compared to the calls to action or the image.  The increase in “X” clicks for Experience B compared to Experience A was surmised to be because of the increased height of Experience B.

And now, for the primary “Form Complete” metric:

A huge win!  They got close to a 94% lift in form completes with the taller sticky footer and image.  The Experience B “Get the Report” call to action led to a 32.42% increase in visitors arriving on the form page.  The image in this same Experience B brought a significant number of additional visitors to this same form page.  Couple these together and we have a massive increase in form completions!

For a test like this, it often also helps to visualize the distribution of clicks across the test content.  In the image below, X represents the number of clicks on the Experience A “Get the Reports” call to action.  Using “X” as the multiplier, you can see the distribution of clicks across the test experiences.

Was it the image or the height or the combination of the two that led to this change in behavior?  Subsequent testing will shed more light but at the end of the day, this relatively simple test led to significant increases in a substantial organizational key performance indicator and provided the user experience teams and designers with fascinating learnings.


Featured, General

My First MeasureCamp!

Last Saturday, I attended my first MeasureCamp! It was the inaugural MeasureCamp for Brussels, and it had about 150 people there, some of whom came from as far away as Russia to attend! About 40% of the attendees were not local, but being in central Europe, it was easy for people to come from France, the UK, Germany, etc. (I was the lone American there).

Over the years, I have heard great things about MeasureCamp (and not just from Peter!), but due to scheduling conflicts and relatively few having taken place in the US, I had not had an opportunity to attend. Now that I have, I can see what all of the fuss is about. It was a great event! While giving up a Saturday to do more “work” may not be for everyone, those who attended were super-excited to be there! Everyone I met was eager to learn and have fun! Unlike traditional conferences, MeasureCamp, being an “un-conference,” has a format where anyone can present whatever they want. That means you don’t just hear from the same “experts” who attend the same conferences each year (like me!). I was excited to see what topics were top of mind for the attendees and debated whether I wanted to present anything at all for my first go-round. But as I saw the sessions hit the board, I saw that there were some slots open, so at the last minute, I decided to do a “lessons learned” session and a small “advanced Adobe Analytics tricks” session. I attended sessions on GDPR, AI, visitor engagement, attribution and a host of other topics.

Overall, it was great to meet some new analytics folks and to hear different perspectives on things. I love that MeasureCamp is free and has no selling aspects to it. While there are sponsors, they did a great job of helping make the event happen, while not pitching their products.

For those who have not attended and plan to, here is my short list of tips:

  1. Think about what you might want to present ahead of time and consider filling out the session forms ahead of time if you want to make sure you get on the board. Some folks even made pretty formatting to “market” their sessions!
  2. Be prepared to be an active participant vs. simply sitting in and listening. The best sessions I attended were the ones that had the largest number of active speakers.
  3. Bring business cards, as there may be folks you want to continue conversations with!

I am glad that Peter has built such a great self-sustaining movement and I look forward to seeing it more in the US in the future. I recommend that if you have a chance to attend a MeasureCamp, that you go for it!

Testing and Optimization

With Adobe Target, it is all about the Profile

Before there was Adobe’s Audience Manager or the Marketing Cloud Visitor ID service with A4T, there was the Adobe Target Profile.  In fact, this Profile goes all the way back to the roots of Adobe Target, to Offermatica.  This Profile, as old as it is, doesn’t really get the attention it deserves given the massive punch that it packs.  In my humble opinion, this Profile is arguably one of, if not the, most valuable components of the Adobe Target platform to this day.  In fact, I have a whole chapter dedicated to it in my book, as this Profile not only enables strategic personalization testing efforts but also allows for highly custom configurations of tests.


Adobe Target Profile


Given all that the Profile can do, I still find myself surprised to see just how many organizations are not really using it to its full potential.  In fact, I sadly still find organizations that are unaware of the capabilities that exist with the Profile or even how to use it.  Just late last week, I had a call with an organization that uses Adobe Target regularly and they shared that they had never heard of it which is one of the reasons why I am writing about it here today.

What is the Profile?

So what is this Profile exactly?  In geek speak, it is a collection of server-side visitor attributes stored on Adobe Target’s servers.  These visitor attributes are directly mapped and associated with the first party visitor identifier (PCID) that Adobe Target uses for visitor management.

In plain speak, the Profile is what Adobe uses to manage visitors but it also enables Adobe Target users to create audiences.  These audiences can be used to target tests to or to segment test results by, among other things.  These audiences are completely independent of any test so you can use them across tests as well.  The Profile enables audiences to be created based off of site behavior, environmental data, temporal data, first-party data via data layer or cookies, third-party data or offline behavior.  These Profile attributes can also be shared internally or externally.

Example Profile

Consider this example to solidify the concept.  You are an Adobe Target user and you want to create an audience within Adobe Target of visitors that have made a purchase.  You can do this because you know that the thank-you page, or purchase-complete page, has a URL that contains “thanks.html”.  Without having to bother IT, you can create this audience within the Adobe Target user interface using profile scripts.  Profile scripts are simply snippets of code that execute on Adobe’s servers to define a profile attribute, and in this example, our profile script reads something like this:

“if the URL contains ‘thanks.html’, return true”

If we name this profile script “previous_purchaser”, it creates a profile attribute called “previous_purchaser”; profile scripts execute to create profile attributes.
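To make this concrete, here is a minimal sketch of what such a profile script could look like.  Profile scripts are written in a JavaScript-based syntax and can reference tokens such as page.url; the URL check below is just an assumption for this example, so adjust it to match your own purchase-complete page:

  // Profile script saved as "previous_purchaser" (referenced as user.previous_purchaser)
  // Returns 'true' once the visitor views a page whose URL contains "thanks.html"
  if (page.url.indexOf('thanks.html') > -1) {
    return 'true';
  }

Because the script is evaluated server side on each mbox call, the attribute is attached to the visitor’s Profile as soon as a qualifying page is seen.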

Whenever Adobe Target technology is seen by a visitor (by way of mbox calls), it creates a Profile.  Let us assume that in this example the visitor had never been to this site before and entered the site on the home page.  If Adobe Target is in place on the home page, that is where and when the Adobe Target Profile was born for this visitor.  As this visitor traversed the site and made their way to a purchase, Adobe Target was called several times by way of mbox calls.  Every time an mbox call is made, these server-side profile scripts execute, or are evaluated, to create profile attributes.

When the mbox call takes place on the purchase-complete page, where the URL contains ‘thanks.html’, the profile script we created above sets a value of true!  This happens because each mbox call passes along the URL of the page it fired from.

The Profile was born on the home page, and we are simply adding additional metadata to that existing Profile.  In this case, we are adding an attribute with the name of “previous_purchaser” and a value of “true” to visitors that have made a purchase.

Now, Adobe Target users can target content to previous purchasers by creating an activity or test where the test audience is made up of the profile attribute “previous_purchaser” equal to or containing ‘true’.  Adobe Target profile attributes are ready for immediate use, which makes them quite powerful.  You can use this “previous_purchaser” profile attribute on the very next page impression if you would like.

To summarize, our Audience is “Previous Purchasers” and we used “profile scripts” to create “profile attributes” that augment the Adobe Target Visitor Profile.  Only those visitors that saw the ‘purchase complete page’ will have this additional profile attribute for this example.

This previous-purchaser example is commonly used and is one of the simpler applications of Adobe Target’s Profile for creating audiences with profile attributes.  The sky is truly the limit when it comes to creating these audiences, and we are not limited to a single event such as seeing a ‘thank you’ page.

Client side

The above example highlighted the most common way users of Adobe Target create profile attributes: through the user interface of Adobe Target.  Because these profile scripts execute on Adobe’s servers rather than on the web page where visitors are seeing content, we consider these profile attributes to be server side.  When profile attributes are created within the user interface, they have the syntax user.profile_attribute in the profile section of Adobe Target.  This differentiates this method of creating profile attributes from the alternative method, which is client side, by way of passing data to the mbox call.

Here is a generic example of an mbox call in which a profile attribute is passed as a parameter.  When Adobe Target sees a parameter/value pair that starts with “profile.”, it augments that visitor’s Profile with the new profile attribute.
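As a sketch only (the mbox name and attribute names below are made up for illustration), a legacy mbox.js call passing a profile attribute could look like this:

  // 'homepage_hero' is a hypothetical mbox name; 'memberStatus' is a hypothetical attribute
  mboxCreate('homepage_hero',
    'pageName=home',                // a regular mbox parameter
    'profile.memberStatus=gold');   // the 'profile.' prefix stores this on the visitor's Profile

Anything passed with the “profile.” prefix is stored against that visitor and can be used to build audiences just like a script-created attribute.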

If your organization leverages tag management or you have rich server-side data, you will find significant benefits to this client-side approach.  It can be a nice way to get data layer or managed data into Adobe Target quickly without having to write scripts in the Adobe Target interface.  Adobe limits profile attribute passing to 50 per mbox call, and please keep the overall size of your mbox call in mind when you add many profile attributes.
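For instance, if you are running at.js, one way to feed data-layer values in as profile attributes is through page params.  This is only a sketch under assumptions: targetPageParams() is the at.js page-parameter hook, but window.digitalData and the attribute names are hypothetical and would come from your own data layer or tag manager:

  // Hypothetical data layer feeding profile attributes via at.js page params
  window.targetPageParams = function() {
    return {
      'profile.memberStatus': window.digitalData.user.status,  // e.g. 'gold'
      'profile.visitorType': window.digitalData.user.type      // e.g. 'prospect'
    };
  };

The same values could just as easily be appended by your tag management system, which keeps the logic in one governed place rather than spread across profile scripts.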

Profile facts

There is so much to profiles and many fascinating use cases for them.  In the months ahead, I plan on sharing more about what profiles can do and some of those use cases; for now, I just wanted to get the general concept of profile attributes out there.  In fact, there is a cool project in the works at MiaProva to help organizations make better and more strategic use of this capability.  More on that soon, but here are some facts and factoids on profile attributes that you might find helpful.

  • The Adobe Target Profile that each one of your digital consumers has (assuming you have a global mbox) expires, by default, after 14 days of inactivity.  This means your audiences do too, since they are mapped to the Profile that Adobe Target manages.
  • Mutual exclusivity within Adobe Target is doable because of profile scripts.
  • Adobe has a rich set of built-in profile attributes that can be very helpful, such as which ‘active tests’ users are in or the value of the unique visitor ID.  More information on that here.
  • Profile Attributes can not only be used for Adobe Target but for Adobe Recommendations and for Automated Personalization efforts.
  • Profile attributes can be static or dynamic – for example, you can have a static profile attribute ‘have purchased’ and a dynamic profile attribute ‘number of purchase events’ that increments every time the event happens (see the sketch after this list).
  • Profile attributes can be created using the values of other profile attributes.
  • Profile attribute values can be passed on to other platforms.  If you’ve ever used the s_tnt integration with Adobe Target that passes test data to Adobe Analytics, you are using profile attributes (test name and experience name) and sharing them with Adobe Analytics for consumption.
  • Automated Personalization LOVES profile attributes for their content decisioning.  If you want the models to consider an important data attribute of yours, create a profile attribute!
  • If you want to manage experiences across test activities, you would use profile attributes.  For example, if you want everyone that got Experience B of one test to get Experience B of another test, you target both Experiences to the same profile attribute.
  • Adobe Target’s category affinity algorithm is an array of dynamic profile attributes.
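As referenced in the static-versus-dynamic bullet above, here is a minimal sketch of an incrementing profile script.  The script name, the URL check, and the user.get() self-reference are assumptions for illustration (user.get() is commonly used in profile scripts to read a previously stored attribute value), so treat this as a starting point rather than production code:

  // Profile script saved as "purchase_count" (referenced as user.purchase_count)
  // Reads its own previously stored value and increments it on each purchase-complete page view
  if (page.url.indexOf('thanks.html') > -1) {
    var current = user.get('purchase_count');   // prior value, if any
    if (current == null || current == '') {
      return 1;
    }
    return parseInt(current, 10) + 1;
  }

A static companion attribute such as ‘have purchased’ would simply return ‘true’ once, as in the previous_purchaser example earlier in this post.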

If you weren’t already familiar with the Adobe Target Profile, I hope this post served as a nice introduction to the Profile and to how you can append attributes to it to create audiences.  Profile attributes allow testers to up their game big time!  In an upcoming post, I will dive into mbox3rdPartyId, which brings the Profile and its attributes to a whole new level!

Adobe Analytics, Featured

Minneapolis Adobe Analytics “Top Gun” Class – 12/7/17

Due to a special request, I will be doing an unexpected/unplanned Adobe Analytics “Top Gun” class in Minneapolis, MN on December 7th. To register, click here.

For those of you unfamiliar with my Adobe Analytics “Top Gun” class, it is a one-day crash course on how Adobe Analytics works behind the scenes, based upon my Adobe Analytics book. This class is not meant for daily Adobe Analytics end-users, but rather for those who administer Adobe Analytics at their organization, analysts who do requirements gathering, or developers who want to understand why they are being told to implement things in Adobe Analytics. The class goes deep into the Adobe Analytics product, exploring all of its features from variables to merchandising to importing offline metrics. The primary objective of the class is to teach participants how to translate everyday business questions into Adobe Analytics implementation steps. For example, if your boss tells you that they want to track website visitor engagement using Adobe Analytics, would you know how to do that? While the class doesn’t get into all of the coding aspects of Adobe Analytics, it will teach you which product features and functions you can bring to bear to create reports answering any question you may get from business stakeholders. It will also allow you and your developers to have a common language and understanding of the Adobe Analytics product so that you can expedite getting the data you need to answer business questions.

Here are some quotes from recent class attendees:

To register for the class, click here. If you have any questions, please e-mail me. I hope to see you there!

Conferences/Community

Digital Analytics Hub – Conference Recap

I’m back at my desk from the Digital Analytics Hub in the Big Easy and I’m feeling fresh and fired up about Digital Analytics. It may sound like a cliché, but how many conferences have you been to that purport to send you back to your desk with a go-to list of new ideas to take action on? Most conferences make these promises; most fall short. But for me, the DA Hub really delivered.

For those of you unfamiliar with the DA Hub, it’s a unique conference format in that there are no PowerPoint presentations or main stage speeches where one individual spouts wisdom to the masses. Not here. The format of the DA Hub is a series of huddles: facilitated conversations among digital analytics professionals. As a consultant, I found myself sitting between client-side practitioners and vendors from many facets of analytics in almost every huddle that I attended. This multi-perspective view enabled us to elevate the conversation beyond a one-sided bias to represent a cumulative vantage point for approaching digital analytics challenges. The overarching goal, as stated by conference co-organizer Michael Feiner, was not to solve the industry’s biggest challenges, but instead to put them on the table and discuss successes, failures, opportunities, and aspirations.

I’ll also mention that it was a safe environment for sharing, networking, and exchanging ideas. As I stood at the opening reception with a few other grizzled analytics industry veterans, we collectively guffawed about the days of old and agreed that there was a good mix of old-timers, newcomers, and rising stars in the digital analytics industry present at this event. More importantly, each huddle conversation started with a gentle reminder that all comments and feedback were unattributable, and therefore sharing was encouraged and protected.

Perhaps the worst thing about the DA Hub was not being able to attend all of the sessions. Inevitably, we were made to choose which huddles to attend. However, the good news is that there’s a LinkedIn Group (ask for your invitation!) that contains a synopsis of each huddle and the outcomes that came out of each.

Here are my key takeaways from the huddles I attended:

  1. Google and Adobe: Guard your lunch. Our little industry has seen its share of consolidation among vendors in the past few years. This may lead many to believe that it’s a two-horse race when it comes to vendor choices for analytics data collection and analysis platforms. However, advancing perspectives, burgeoning technologies, and MarTech proliferation have created a new environment that challenges the conventional thinking around build versus buy. Several enterprises have taken on the task of building out their own analytics platforms with great success. The benefits include greater integration flexibility, enhanced reporting capabilities, and data ownership. While building a solution isn’t for everyone, the big industry stalwarts (Google Analytics and Adobe Analytics) should beware, because the roll-your-own crowd may just eat your lunch.
  2. You can’t build a house on a shaky foundation. While the DA Hub did focus on a large number of advancements and innovations in the digital analytics industry (think AI, machine learning, customer identification across devices), many foundational elements in analytics are still a challenge. Specifically, in the huddles I attended we discussed data governance, analyst skill sets, and basic reporting to meet organizational needs. Even some of the most successful companies in digital analytics are challenged with keeping data clean, nurturing talent, and building workable processes around digital analytics. There are tried and true methods for fortifying your digital analytics practice. But when it comes to building, maintaining, and instilling confidence across companies both large and small, the struggle is real.
  3. Run, don’t walk, toward the future of digital analytics. Things are moving fast in digital analytics. If you’ve inadvertently got blinders on and are stuck at your desk, heads down implementing variables, pumping out reports, or delivering analysis, you’re gonna get burned. The scope of initiatives and activities that fall onto digital analytics teams today is increasing rapidly. It’s not enough anymore to just worry about the customer journey across your webpages and acquisition channels. Now we’ve got multi-device, multi-data-source, and multi-processing challenges to think about. Advancements in AI and machine learning can help us optimize at the speed of the customer. If you’re not working these angles in the next 12 to 18 months, your customers are going to notice. What’s more alarming, as we advance our use and understanding of Marketing Science, is that privacy continues to be a show-stopping issue. If you’re not thinking about GDPR and ePrivacy today, deadlines and fines may be in your near-term future. Digital analytics is moving faster than ever, and if you’re not keeping up, then you will most definitely suffer consequences.

In closing, I’ll sum up my experience at the Digital Analytics Hub with the same answer I gave during a short interview onsite in New Orleans…when asked if I’d return to the event next year…I gave a resounding YES! Kudos go out to the event organizers, Michael and Matthias Bettag, and to all of the attendees who made this event one to remember.

Conferences/Community, Featured

MeasureCamp Brussels!!

For years I have been trying to get to a MeasureCamp event, but my timing in Europe has always been a bit off.  For those not familiar with MeasureCamp, it is a cool “un-conference” held locally where anyone can attend (for free!) and share things they have done or tips related to the analytics field. I am excited to say that I will finally be able to attend my first MeasureCamp in Brussels this month! Since I will be in London conducting my advanced Adobe Analytics “Top Gun” class, I am going to stay over a few more days to experience MeasureCamp!  I hope to see you there!

Now I just have to figure out what topic I want to talk about!  If you have any suggestions, please leave them as a comment below!!

Analysis, Conferences/Community

Oct. 25th: 1-Day Workshop in San Francisco – Intro to R for the Digital Analyst

I’ll be conducting a small (up to 8 students) hands-on workshop that is an introduction to R for the digital analyst in San Francisco on Wednesday, October 25th.

If you are a digital analyst who is looking to dive into R, this 1-day intensive hands-on training is for you. This class is intended for digital analysts who are just getting started with R or who have tried to use R but have not successfully put it to use on a regular basis.

Course Overview

The course is a combination of lecture and hands-on examples, with the goal being that every attendee leaves the class with a basic understanding of:

  • The syntax and structure of the R platform — packages, basic operations, data types, etc.
  • How to navigate the RStudio interface — the script editor, the console, the environment pane, and the viewer
  • How to pull data from web analytics platforms (Google Analytics and Adobe Analytics) using R and the platforms’ APIs
  • The basics of transforming and manipulating data using R (base R vs. dplyr, with an emphasis on the latter — you don’t need to understand what that means to take the course; we’ll cover it!)
  • The “grammar of graphics” for data visualization (the paradigm for visualizing data in R using the most popular package for doing so — ggplot2)
  • Tips for troubleshooting R scripts (and writing code that can be readily troubleshot!)
  • The various options for producing deliverables directly from R

All of the material presented and applied during the class, as well as more advanced topics that cannot be covered in a one-day course, will be available to the students for reference as they put the material into practice following the class.

Course Requirements

Students are expected to bring their own laptops. There will be communication prior to the class to ensure the required software (all free/open source) is installed and working.

Other Details

  • Date: Wednesday, October 25th
  • Time: 9:00 AM to 5:00 PM
  • Location: Elite SEM, 100 Bush St. #845, San Francisco, CA 94104
  • Cost: $895
  • Registration: click here to register

Questions?

Contact tim at analyticsdemystified dot com with any questions you have regarding the course.

Adobe Analytics, Analysis, Featured, google analytics

Did that KPI Move Enough for Me to Care?

This post really… is just the setup for an embedded 6-minute video. But it actually hits on quite a number of topics.

At the core:

  • Using a statistical method to objectively determine if movement in a KPI looks “real” or, rather, if it’s likely just due to noise
  • Providing a name for said statistical method: Holt-Winters forecasting (sketched in equation form just after this list)
  • Illustrating time-series decomposition; I have yet to find an analyst who, when first exposed to it, doesn’t feel like their mind is blown just a bit
  • Demonstrating that “moving enough to care” is also another way of saying “anomaly detection”
  • Calling out that this is actually what Adobe Analytics uses for anomaly detection and intelligent alerts.
  • (Conceptually, this is also a serviceable approach for pre/post analysis…but that’s not called out explicitly in the video.)
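For reference (this part is not in the video), the standard additive form of Holt-Winters maintains a level, a trend, and a seasonal component and forecasts from all three; “moved enough to care” then means the observed KPI value lands outside the prediction interval around that forecast. In standard notation, with smoothing parameters $\alpha$, $\beta$, $\gamma$ and seasonal period $m$:

$$\ell_t = \alpha\,(y_t - s_{t-m}) + (1-\alpha)\,(\ell_{t-1} + b_{t-1})$$
$$b_t = \beta\,(\ell_t - \ell_{t-1}) + (1-\beta)\,b_{t-1}$$
$$s_t = \gamma\,(y_t - \ell_{t-1} - b_{t-1}) + (1-\gamma)\,s_{t-m}$$
$$\hat{y}_{t+h} = \ell_t + h\,b_t + s_{t+h-m} \quad (1 \le h \le m)$$

Anomaly detection then amounts to flagging any actual value that falls outside the chosen prediction interval (for example, 95%) around $\hat{y}_{t+h}$.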

On top of the core, there’s a whole other level of somewhat intriguing aspects of the mechanics and tools that went into the making of the video:

  • It’s real data that was pulled and processed and visualized using R
  • The slides were actually generated with R, too… using RMarkdown
  • The video was generated using an R package called ari (Automated R Instructor)
  • That package, in turn, relies on Amazon Polly, a text-to-speech service from Amazon Web Services (AWS)
  • Thus… rather than my dopey-sounding voice, I used “Brian”… who is British!

Neat, right? Give it a watch!

If you want to see the code behind all of this — and maybe even download it and give it a go with your data — it’s available on GitHub.