
Analysis Workspace – The Future is Here

One of the great things about Analysis Workspace is that it begs you to keep driving deeper and deeper into analysis in ways that the traditional Adobe Analytics reports do not. I have heard Ben Gaines talk about this as one of the reasons he loves Workspace so much, and he is spot on. Ever since it burst onto the scene, those who understand Adobe Analytics have realized that it represents the future of the product. The only thing holding it back was that some key types of reports were unavailable, forcing users to continue using the traditional Adobe Analytics reports.

However, this all changed yesterday. I believe that October 20th will go down in history (at least the history of Adobe Analytics geeks like me) as the day the world changed! On this day, a host of great new Analysis Workspace visualizations was released, including the new Fallout visualization that I covered in my last post.

While this may not seem like such a big deal, let me tell you why it is. I believe that these additions represent the tipping point at which Adobe Analytics end-users give in and decide that Analysis Workspace is their primary reporting interface. While I have seen some of my clients dive head-first into Analysis Workspace, I have also seen many of them “dip their toe in the water” and then fall back to their comfort zone of traditional reports. It is my contention that this will no longer be tenable and that Analysis Workspace will become the default going forward. Of course, it will take some time to learn the new interface, but the advantages are so compelling at this point that those not making the shift risk becoming Adobe Analytics dinosaurs.

To illustrate why I think this will happen, I am going to demonstrate the power of Analysis Workspace in the following section.

Stream of Consciousness

In my opinion, the intrinsic value of Analysis Workspace, like Discover before it, is the ability to come up with an analysis idea and follow it through like a stream of consciousness. As an analyst, you want to be able to ask a question and, when you find the answer, ask a follow-up question and so on. In the traditional Adobe Analytics reports, there are a few cases in which you can break one report down by another, but it is somewhat limited. This limitation can break your train of thought, and instead of asking the next question, you end up spending time thinking about how to work around the tool or, worse yet, adding more implementation items to answer your follow-up question.

For example, let’s say that I want to see which products had the most orders this month. I can open the Products report and add the Orders metric. Then I want to see which campaigns drove the highest selling product, so I break the product down by campaign tracking code. Next, I want to see the trend of that campaign code leading to orders of that product. At this point, I am a bit stuck, since I need to build a segment and apply it to a Visits report. But to do this, I need to stop what I am doing, identify the correct segment definition, save it, open a Visits report and apply the segment. Next, I might want to see if there were any abnormal peaks or valleys in the data, so I might export the data to Excel and run a standard deviation formula against the last few months. This involves exporting data and making sure I have the formulas correct in Excel. What if I want to repeat this analysis on a weekly basis going forward? That means I need to open Adobe ReportBuilder, create a data block, use formulas to apply the standard deviation and then schedule it to be sent weekly.

As you can see, there are a lot of manual steps involving Adobe Analytics, Excel, ReportBuilder, etc. At any point in this process, the phone could ring, I could get distracted and I could lose my train of thought. In the best-case scenario, I am looking at a few hours to follow my concept through to analysis.

What Analysis Workspace does is two-fold. First, pretty much everything you need is built into the same tool so you don’t have to jump between different tools. Second, most of the things you need are one click away and can be done so fast that sometimes it feels like you are slowing down the tool instead of the other way around!

To illustrate this, I am going to build upon an example scenario that I blogged about last week. In that post, I described a situation in which I used the new Analysis Workspace Fallout visualization to see what percent of visits to my website viewed my blog posts and, of those, how many found their way to some of my “sales pages.” If you haven’t read that post, I suggest you take a few minutes to read it now for more context on what follows.

As described in the previous post, I have isolated a situation in which very few people are checking out my sales pages:

[Screenshot: Fallout report showing few visits reaching the sales pages]

Upon seeing this, one question I might ask is where are visitors going who don’t go to my sales pages? I can easily see this by right-clicking on the sales page checkpoint item and selecting the fallout option like this:

[Screenshot: right-clicking the sales page checkpoint and selecting the fallout option]

This will result in a brand new report being populated that shows the answer to this question:

[Screenshot: fallout report for visitors who did not reach the sales pages]

In addition, I may want to see which pages people who do eventually reach my sales pages also view. I can do this by again right-clicking on the sales pages checkpoint and then choosing fallthrough like this:

[Screenshot: right-clicking the sales pages checkpoint and choosing fallthrough]

This will create a brand new report showing where visitors went between the second to last and last steps like this:

[Screenshot: fallthrough report between the second-to-last and last steps]

Finally, I may want to see the general trend of visitors viewing my blog post and then reaching a sales page. To see this, I right-click on the last checkpoint and select the trend option to see a graph like this:

[Screenshot: trended graph of blog post visits reaching a sales page]

So in a matter of seconds, I can follow up on my top queries and continue to dig deeper. In fact, when I see the graph above, Analysis Workspace shows me the statistical trend and the normal upper and lower bands of expected data. This provides context and eliminates my need to export data to Excel and do analysis there. In addition, I see two circles indicating cases in which my trend was outside of the norm via Adobe Analytics’ Anomaly Detection functionality. When I hover over either of these circles, I am given the opportunity to dig deeper into these data anomalies with one click:

[Screenshot: anomaly detection prompt when hovering over an anomaly circle]

Running this allows me to see what data is contributing to the anomaly like this:

[Screenshot: anomaly contribution analysis]

But another analysis I may be curious about is which companies the visitors who do make it from my blog pages to my sales pages are coming from. Ideally, I’d like to build a segment of these folks and start marketing to them. Luckily, I can right-click on the final checkpoint, select the “create segment from touchpoint” option and see a brand new segment like this:

[Screenshot: segment auto-created from the fallout touchpoint]

All I have to do is give this segment a name and I can use it in any report. So next, I will open a freeform table and add my DemandBase Company Name report with the Visits metric and then apply this new segment to the report like this:

[Screenshot: DemandBase Company Name freeform table with the new segment applied]

Next, I can right-click on the top prospect (row 2 above) and see the trend of their visits to my site:

[Screenshot: trended visits for the top prospect]

Another way to analyze this might be to add a cohort table and see how often people who fall into my Blog to Sales segment visit my site and then return to it. I can do this by adding a cohort visualization, selecting Visits as the metric and then applying my new auto-created segment to it like this:

[Screenshot: cohort table with the Blog to Sales segment applied]

Here I might see that I have some people coming back in weeks one, two and three, so they might be serious about working with me. I can then right-click on the week three cell, create a new segment called “Really Interested in Adam” and add that back to my DemandBase Company Name freeform table:

[Screenshot: freeform table with the “Really Interested in Adam” segment applied]

Phew! Now, I purposely went a bit crazy there, but that was to drive home the point. While you may not go through things exactly the way I just did, the cool part is that you can! You can easily keep adding visualizations and right-clicking to create sub-reports and segments (and I didn’t even touch all of the other visualizations that are available!). At no point did I have to leave Adobe Analytics and use other tools, and I was able to run all of these reports in under ten minutes!

This is why I think most Adobe Analytics users will make the leap to Analysis Workspace in the future. I encourage you to avoid burying your head in the sand and to get with the program. There are lots of blog posts and videos available to show you how to use Analysis Workspace, and if you need more help, I offer training services as well 😉

Congrats to the Adobe Analytics product management team and their developers. Welcome to the future of Adobe Analytics…


Analysis Workspace Fallout Reports

Yesterday, the Adobe Analytics team added a lot of cool new functionality to Analysis Workspace. One of these additions was a Fallout visualization, which was previously available in the Ad Hoc Analysis product, but unavailable in Analysis Workspace. In this post, I will share some of my thoughts on this new visualization and how it can be used.

Fallout Report Refresher

Back in 2008, I blogged about how to use Fallout reports in SiteCatalyst, but a lot has changed since then! The concept of the Fallout report is that you add checkpoints to a report and Adobe Analytics will tell you what percent of your paths dropped off or continued from checkpoint A to B to C. Unfortunately, the traditional version of this report has many limitations:

  • Fallout is limited to a finite number of checkpoints (normally four unless you pay for more)
  • Fallout can only include values from one dimension. For example, if you are doing a fallout report for Pages, only pages can be used as checkpoints, so you cannot mix values from two different dimensions
  • Fallout reports can only be used for Traffic Variables (sProps), which might force you to track data you already have in Conversion Variables (eVars) in an sProp as well, just to see fallout. This sProp limitation also means that you cannot add metrics (Success Events) to Fallout reports
  • Checkpoint values in the Fallout report cannot be grouped, so if you want to see a checkpoint in which either value A or value B was present, you have to create a new sProp for that purpose, which creates a lot of unnecessary work
  • Fallout reports are limited to one visit

So as you can see, traditional Fallout reports are useful, but have a lot of limitations. Most people got around these limitations by using Fallout reports in the Ad Hoc Analysis (formerly Discover) tool. That was helpful, but it required installing a Java client and understanding a much more sophisticated tool, which didn’t always appeal to casual analytics users.

Welcome to the Future!

But now, Adobe has brought the best of the Ad Hoc Analysis Fallout reports to Analysis Workspace, the new reporting/visualization interface that works for both casual and advanced analytics users. As you probably know, Analysis Workspace works natively in the browser, but packs the same punch as the Java-based Ad Hoc Analysis product.

The new Fallout visualization removes all of the previously mentioned limitations so you can:

  • Have an unlimited number of checkpoints
  • Include Success Events, eVars or sProps and mix and match them in the same Fallout report
  • Group items together into one checkpoint
  • View fallout across multiple visits

To illustrate this, let’s go through an example. Imagine that I want to know how often people come to the Analytics Demystified website, read one of my blog posts and then proceed to view a few of the pages that pitch my consulting services. In a normal Fallout report, this would be difficult because I would need some sort of “Page Type” sProp that had one value for all of my blog posts (e.g., “Adam Blog Posts”) and another value for all of my sales pages (e.g., “Adam Sales Pages”). That would require some manual tagging effort (see the sketch below), but if I did that, I could see a fallout from Adam Blog Posts to Adam Sales Pages, though within a visit only.
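For context, here is a minimal sketch of what that old tagging workaround might have looked like on each page; the sProp number and the isBlogPost/isSalesPage flags are hypothetical:

    // Hypothetical "Page Type" sProp set on every page
    // (assumes "s" is the AppMeasurement object; prop10 is an arbitrary slot
    // and the flags would come from your page template or data layer)
    if (isBlogPost) {
      s.prop10 = "Adam Blog Posts";
    } else if (isSalesPage) {
      s.prop10 = "Adam Sales Pages";
    }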

Let’s see how I could do this using the new Analysis Workspace visualization. First, I would add the Fallout visualization to the canvas. Then I would drag over my Blog Post Views Success Event as a checkpoint like this:

[Screenshot: Fallout visualization with the Blog Post Views Success Event as a checkpoint]

So now we can see that blog posts are viewed in about 93% of our Visits. Next, I want to limit the second checkpoint to only those who read my blog posts. To do this, I can simply add a segment to the second checkpoint, which is another thing that has never been possible in traditional Fallout reports. So I will add my “Adam Blog Posts” segment to the second checkpoint by dragging it next to the Blog Post Views Success Event (you will see a black bar) so it looks like this:

[Screenshot: “Adam Blog Posts” segment added to the second checkpoint]

Now I can see that about 16% of all visits find their way to one of my blog posts. Next, I want to see what percent of those folks make it to one of my sales pages. To do this, I use the left navigation of Analysis Workspace to find the Pages dimension, click the arrow next to it and then find the sales pages. Here is what the left navigation will look like before you click the arrow:

[Screenshot: Pages dimension in the left navigation]

Once you click, you will see your pages and can search for the ones you want:

[Screenshot: searching for pages in the left navigation]

Once you find the pages you care about, you can drag them over one at a time (or select multiple using Command/Control) and drop them next to each other. Combining them creates an OR clause so if any of those pages is viewed, the Fallout report will count it. Here is what it looks like after I dragged over three different pages:

[Screenshot: three pages grouped into one checkpoint]

So now I can see that I am not getting a lot of the folks who read my blog posts to view my consulting sales pages (darn freeloaders!). Since my percentage is lower than I’d like, I am going to start adding a call to action for my sales pages to the bottom of my blog posts (see below) and then check back in a few weeks to see if this helps decrease this large drop-off…

Additionally, there are some settings associated with this report that you can tweak. Using the “gear” icon, you can choose whether you want to include All Visits as the first checkpoint, or exclude that and start the fallout report with the first checkpoint you added. You can also choose whether to include Visits or Visitors in the report:

[Screenshot: Fallout visualization settings]

Here is what the report looks like if I uncheck the “All Visits” box:

[Screenshot: Fallout report with the “All Visits” box unchecked]

Segmentation

But wait…there’s more. While we saw that we can add segments to checkpoints, there is much more you can do with segmentation and Fallout visualizations. First, you can add a segment to the entire workspace project, which will impact all visualizations, including the Fallout report. For example, I can add my “Competitors” segment (which I get from DemandBase data) to the top of the project and see my data change like this:

[Screenshot: Fallout report with the Competitors segment applied to the project]

Now I can see that instead of 16% of visits viewing my blog posts, I have 44% of visits viewing them (not cool guys!) and that very few of them view my sales pages, which is understandable. But to make this easier to see, I can alternatively drag this segment next to the All Visits area at the top of the Fallout visualization and see the Fallout report separately for each segment like this:

[Screenshot: Fallout report shown separately for each segment]

This is a much easier way to see the differences. You can add up to three different versions to the Fallout report, so here is an example if I wanted to view All Visits, US Visits and Europe Visits together:

[Screenshot: Fallout report comparing All Visits, US Visits and Europe Visits]

Additional Info

So that is a quick tutorial on the new Fallout visualization. I hope it helps you see some of the power that now exists. To see some more cool ways you can use this new functionality, check out this blog post by Antti Koski and watch this YouTube video from Adobe. Enjoy!


Pricing State

If you are an online retailer, there are situations in which you will offer your products in various pricing states. For example, some products may be on sale, some may have discounts based upon a discount code and some may be on clearance. In these cases, you may want to document the original price and the discounted price and see how the pricing state impacts conversion. In this post, I will show how to do this in Adobe Analytics and share a few examples.

Capturing the Pricing State

The first thing you may want to see is whether pricing state has any conversion implications. This can be tracked in general and by product or product category. To do this, you will want to set an eVar with the current pricing state when visitors open each product page. For example, if a visitor opens Product A and it is priced at retail price, you may pass the phrase “retail price” to the eVar. But if the product is discounted, you would pass in the type of discount the visitor saw. Let’s imagine that your visitor viewed a product that had this pricing associated with it:

[Screenshot: product page pricing showing the original and clearance prices]

In this case, the pricing state was “clearance” and the product was discounted sixty-seven percent. There are a few ways to capture this, but to save eVars, I would probably capture it as “clearance:67” in the eVar to denote both the active pricing state (“clearance”) and the percent-off amount. Here is what the report might look like when viewed with the Product Views Success Event (with the retail price value excluded):

[Screenshot: Pricing State eVar report with the Product Views metric]

This report can be broken down by Product as needed or you can begin with the Products report and then break that down by Pricing State as needed. And if you have classified your Products into Product Categories, you can see the same information by Product Category.

Of course, those who have been reading my blog for a while may recognize that this new “Pricing State” eVar will require the use of Merchandising. This is because your visitors may view multiple products, and Adobe Analytics needs to record the pricing state for each product viewed, versus just storing the last pricing state and applying it to all products (as would be done with a non-Merchandising eVar). In this case, since we are setting the Pricing State eVar on the product page where we are already setting the Products variable, I would suggest using Product Syntax Merchandising.
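As a hedged sketch (the product ID and eVar number are hypothetical), the product page tag using Product Syntax Merchandising might look like this:

    // Product view with the Pricing State eVar bound to this specific product
    // via Product Syntax Merchandising (assumes "s" is the AppMeasurement object)
    s.events = "prodView";
    s.products = ";SKU12345;;;;eVar5=clearance:67";

Because the eVar value rides along inside s.products, each product on a multi-product page can carry its own pricing state.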

Once you have set the eVar, each product viewed will have its own Pricing State value, and Adobe Analytics will wait and see which products are purchased in the visit or beyond (depending upon your eVar expiration). That means that you can add both the Product Views and Orders metrics to the Pricing State eVar report and create a calculated metric to see the conversion rate. The report may look something like this (again shown with retail pricing filtered out):

[Screenshot: Pricing State report with Product Views, Orders and a conversion rate metric]

This type of report will allow you to see if any combination of pricing state and discount percent performs better than others. You can use the search filter or segmentation to narrow down items as needed (e.g., just sale rows).

By capturing both the pricing state and the discount percent in the same eVar, you can later use the SAINT Classifications Rule Builder to group all items by pricing state (e.g., all “clearance” items together) and use RegEx to see a report by discount percent. That gets you three reports with only one eVar. You can switch to the pricing state type classification to see a higher-level view of conversion by pricing state as shown here:

[Screenshot: conversion by pricing state type classification]

Or you can switch to the discount classification to see performance by discount amount, agnostic of pricing state as shown here:

[Screenshot: performance by discount percent classification]
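For reference, the classification rules that feed these two reports could be driven by a single regular expression along these lines (a hypothetical pattern; the exact group-reference syntax depends on your Rule Builder version):

    ^(.+):(\d+)$

    Group 1  ->  Pricing State Type classification  (e.g., "clearance")
    Group 2  ->  Discount Percent classification    (e.g., "67")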

Pricing State Metrics

While conducting analysis related to Pricing State, keep in mind that it is also possible to capture the dollar amounts associated with Pricing States in currency Success Events. Since all of the amounts are present on the product page, it is simply a matter of passing the correct amounts to the appropriate Success Events. Let’s look at this via an example. If a visitor views the product shown above, you know that the original price was $40 and the current price is $12.99. Therefore, if the visitor orders this product, $12.99 will be passed to the Revenue metric (using the Purchase event), but nothing will be done with the $40 amount.
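To make this concrete, here is a hedged sketch of the order confirmation tagging; the product ID and order ID are hypothetical, and event10 stands in for the “Original Price” metric introduced in the next paragraph:

    // Purchase passing Revenue ($12.99) plus a hypothetical currency event
    // (event10) capturing the original $40 price for the same product
    s.events = "purchase,event10";
    s.products = ";SKU12345;1;12.99;event10=40";
    s.purchaseID = "ORDER123"; // hypothetical order ID to de-duplicate the purchase

From there, the calculated metric described below is simply Revenue divided by this Original Price event.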

But if desired, you could capture the original $40 price on the order confirmation page in a new metric called “Original Price.” This new metric would always capture the original price and can be compared to the Revenue amount by Product or Product Category. This comparison can be done by creating a calculated metric that divides Revenue by this new Original Price metric. You can add this calculated metric to the Products report or the Product Category report to see which products and categories are selling the most/least at a discount as shown here:

[Screenshot: discount calculated metric shown by Product/Product Category]

On its own, this new calculated metric will show you the percent of discount across the entire site. This might be an interesting KPI to monitor or upon which to set alerts in Adobe Analytics:

[Screenshot: sitewide discount percentage trend]

Another cool way you can use these metrics is in the Campaigns area. By opening the Campaigns report, you can see which campaigns lead to the most/least discounted sales (see below). This might help you shift marketing dollars to campaigns that are driving sales of non-discounted products.

[Screenshot: discount metrics in the Campaigns report]

These are just some of the ways that you can augment your Adobe Analytics implementation by capturing data related to pricing state and discount amounts. Enjoy!


Trending Path Reports

While using Adobe Analytics, there will be times when you want to see how often visitors go from Page A to Page B to Page C, etc. This is easy to do with the Adobe Analytics Pathing reports; you can use the “Next Page” or the “Next Page Flow” report to see this. But when you run these reports, you are seeing only a one-time snapshot of the paths. For example, if you are looking at the month of August, you will see how often visitors in that month went from Page A to Page B, but not whether that behavior is trending up or down over time. There will be situations when you want to see the trend data, but the normal pathing reports don’t show this unless you know where to find it. Therefore, in this post, I will demonstrate how you can tweak the pathing reports in Adobe Analytics to see pathing trends and provide a few examples.

Trending the Next Page Report

To demonstrate how you can trend the paths between two pages, let’s imagine that you want to see how often visitors navigate from your home page to your blog page. To do this, you would open the Next Page Path report and at the top right, select the start page, which in this case is the “home” page. Once you do that, you will see a report like this:

[Screenshot: Next Page Path report starting from the “home” page]

This report shows all of the times that visitors went from “home” to any other page. In this case, I am interested in those going directly to the “blog/” page, which looks to happen approximately 6% of the time. Next, you can click the “Trended” link near the top-left and view this report in the trended view. This is similar to other Adobe Analytics reports that you may have trended in the past. In this case, you will use the “Selected Items” area to manually select the “blog/” page as the one you want to see trended, and when you are done, you will see a report that looks something like this:

[Screenshot: weekly trend of paths from “home” to “blog/”]

In this report, you are seeing the weekly trend of paths from “home” to “blog/” and can save, bookmark or email this report or add it to a dashboard. If you want to see the trend by day or month, you can change the settings in the calendar or change the “View By” setting near the top-left. So with a few clicks, you can trend paths between two pages. This feature has always been in the product, but I am amazed how few people know that it is there.

But Wait…There’s More!

In addition to seeing trends of page paths, there is more you can do with this concept. As I have preached for years, Pathing is one of the most under-utilized features in Adobe Analytics. There are many times when you would like to see the sequence of events, including KPI Pathing, Product Cart Addition Pathing, Page Type Pathing, etc. For all of these items, you can also see pathing trends as shown above.
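Feeding any of these reports simply requires populating a pathing-enabled sProp. A minimal sketch for the blog post example that follows (the sProp number and the source of the title are hypothetical):

    // Capture the blog post title in a pathing-enabled sProp on each post page
    // (assumes "s" is the AppMeasurement object; prop7 is an arbitrary slot)
    s.prop7 = document.title; // ideally a clean title from your data layer instead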

For example, let’s say that you have a blog and want to see how often visitors view two posts in succession. In my case, I have a popular blog post on Merchandising and another more advanced follow-up post on the topic. If I pass the title of my blog posts to an sProp with Pathing enabled, I can choose the first Merchandising post and then see how often the next post viewed is the advanced follow-up post. To do this, I open the Next Page path report for the “Blog Post Title” sProp, choose the first post (Merchandising as shown below) and then view the subsequent posts.

[Screenshot: Next Page path report for the “Blog Post Title” sProp]

Next, I switch to the trended view of the report and use the report settings to isolate the follow-up post as shown here:
[Screenshot: trended view isolating the follow-up post]

Now I can see the trend between these two posts over time and see how they are doing. In this case, I don’t see a lot of follow-up blog post views. This is probably because the follow-up post was created after the first one and there is no link tying the two together. I can then add a link to the bottom of the first post advertising the follow-up post (which I am going to do right now, in fact!) and then watch the trend line to see if that results in an increase.

Using Sequential Segmentation

There is an alternative method of seeing the trends between two pages, and it involves the use of the sequential segmentation feature. For those not familiar with sequential segmentation, you can check out this video by my partner (even though it uses Discover, the concept is the same), but it is essentially segmenting on the order in which data is collected or events are set.

Let’s look at an example that shows how this is both similar to and different from what we covered above. Let’s start by using the Next Page Path report like we did above to see a weekly trend of paths from the “home” page to the “blog/” page:

[Screenshot: weekly trend of paths from “home” to “blog/”]

Now, let’s create a sequential segment that isolates visits in which visitors saw the “home” page and then saw the “blog/” page. This is done by adding the Page dimension to a Visit container twice, defining each one with the appropriate page name and using the “Then” operator between them as shown here:

[Screenshot: sequential segment definition]
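In outline, the segment definition looks like this:

    Visit container:
        Page equals "home"
        THEN
        Page equals "blog/"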

Once you have this segment defined, you can apply it to the Visits report, and you should see trend data similar to what we saw above. For example, if we look at the same week, here is the trend:

[Screenshot: Visits trend with the sequential segment applied]

However, as you may have noticed, the data is slightly different (31 vs. 38). This is due to a technical “gotcha” that you need to take into account when using sequential segmentation. The segment above includes all visits in which people viewed the “home” page and eventually saw the “blog/” page. This doesn’t necessarily mean that they went directly from the “home” page to the “blog/” page as they did in the Next Page Path trend report. If you want to make sure that it was a direct path, you have to define the “Then” operator further by constraining it to “within 1 Page View” as shown here:

[Screenshot: sequential segment constrained to “within 1 Page View”]
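The refined definition, in outline:

    Visit container:
        Page equals "home"
        THEN (within 1 Page View)
        Page equals "blog/"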

Once this more detailed segment is applied, the trend of Visits should be the same (or very close) to what was shown in the Next Page Path trend report as shown here:

[Screenshot: Visits trend matching the Next Page Path trend report]

There are times when you may want to do more advanced analysis that goes beyond the Next Page Path trend report, so knowing how to see pathing trends both ways is advantageous.

So there you have it: a few ways to see trends of paths for you to add to your Adobe Analytics arsenal. If you have any questions, feel free to leave them as a comment here.


Different Flavors of Success Events (Part 2)

Last week, I covered some of the cool new Success Event allocation features available in Adobe Analytics. These new allocations allow you to create different flavors of Success Events for Last Touch, Linear, Participation, etc. In this post, I will build on last week’s post and cover one of my favorite allocation additions – Reporting Window Participation. If you haven’t read the previous post, I recommend you do that first.

Expanding the Participation Window

In the last post, I demonstrated how you can create Visit-based Participation versions of any Success Event in your implementation. However, one of the six new allocation options is one that I can’t resist talking about because it is something I have been eagerly awaiting for years – “Reporting Window Participation.” While the Participation feature has been around for over a decade, it has always been limited to the session (Visit). This means that if you wanted to see which pages led to orders, you could use Participation, but your data would be constrained to the pages viewed within a single visit. That means if a visitor viewed ten pages, then came back tomorrow, viewed five pages and completed an order, only the last five pages would get credit, which can be very misleading.

But as you will notice, one of the new allocation options in the Calculated Metric Builder is called Reporting Window Participation, and it allows you to see which items within the entire date range you are looking at led to the success event. So if you created an Orders Participation metric based upon the Reporting Window, all fifteen pages in the preceding example would get credit for the Order. This makes reporting more accurate and interesting.

Another great use for this is marketing campaigns. In the past, if you wanted to see which Orders or Leads were generated from each campaign code, your options were basically First Touch or Last Touch. But if you create a reporting window participation metric and view it in the campaign tracking code report, you can see which campaign codes, across multiple visits, contributed to success. While this is still not true attribution (which divides credit as you desire), it does provide additional insight into the cross-visit effectiveness of campaign codes.

To illustrate how the Reporting Window Participation feature works, let’s build upon the blog post example from my previous post. In this case, I want to do a similar analysis, but remove the Visit constraint from my analysis. To do this, I repeat the steps from the previous post to create a new Participation metric for Blog Post Views, but this time, change the Visit Participation to Reporting Window Participation like this:

[Screenshot: Reporting Window Participation selected in the Calculated Metric Builder]

When this is added to the report (I am using a longer duration period of several months), I can now see the difference between Visit and Reporting Window Participation:

[Screenshot: Visit vs. Reporting Window Participation comparison]

As you can see, Participation in the Reporting Window is much greater than in the Visit. This means that visitors [who don’t delete cookies] are coming back and viewing multiple posts, just not always in the same session. If you want, you can create another calculated metric that divides the Reporting Window Participation by the original metric to see which post gets people to view the most other posts within the longer reporting window timeframe:

[Screenshot: pull-through calculated metrics in the report]

In this report, you can see blog post pull-through for the visit or the reporting window and do some analysis to see how each post does in each scenario.

Finally, if you read my post on using Scatter Plots in Analysis Workspace, you can compare blog posts views and Participation (pull-through) to see which posts have the most pull-through, but lower amounts of views:

[Screenshot: scatter plot comparing blog post views and Participation]

Here you can see that I have some blog posts with very high pull-through, but low views, in the top-left quadrant. These may be ones that I want to publicize more, since they seem to get people to read other posts afterwards in the same visit or a subsequent visit. Keep in mind that this example uses blog posts, but the same type of analysis can be done to see which products on your site lead people to view other products, which categories lead to other categories, which videos lead to other videos and so on.

One other note that has come to my attention is that Reporting Window Participation is, at times, based upon full months, such that selecting a mid-month date range might include data from the beginning of the month. You can learn more about that in this knowledge base article.

So between these two posts, you have a quick tutorial on how to find and use some of the new Success Event allocation options in Adobe Analytics. For more information, check out the Adobe documentation and there is also a video Ben Gaines created that you can view here.


Different Flavors of Success Events (Part 1)

Recently, the Adobe Analytics product team made some enhancements to how metrics (Success Events) can be allocated using the Calculated Metric Builder. I have noticed that many people have not learned about this new update, so I am going to share a bit more information about it and some examples of how it can be used.

Allocation of Success Events

As I have explained in numerous past blog posts, when a Success Event fires in Adobe Analytics, that number (which can be a 1 or more) is bound to the current eVar value for each eVar report. For each eVar, you can choose whether the metric is allocated as First Touch, Last Touch or Linearly (divided amongst values within the visit). It has been this way for years. However, those who have used the Ad Hoc Analysis product (formerly Discover) have probably seen that each metric can be viewed as Last Touch, Linear or the Participation version (Participation gives credit to all values viewed) in each eVar report. That was a cool bonus of using Ad Hoc – even if you chose First Touch for an eVar, you could see Last Touch as well and not have to waste more eVars.

Now, this same concept has been brought to the normal Adobe Analytics Reports (browser) interface through the Calculated Metric Builder. This means that you can see different flavors of Success Event metrics: Last Touch, Linear and so on. There is even a great new option that expands upon the use of Participation, which I will cover in Part 2 of this post next week. To illustrate what has changed, let’s look at what is new in the Calculated Metric Builder:

[Screenshot: Allocation drop-down in the Calculated Metric Builder]

Here you will notice a new/expanded Allocation drop-down box found within the gear icon of a Success Event that has been added to the Calculated Metric Builder. This drop-down allows you to choose which “flavor” of the Success Event you want to use in your calculated metric, and you will notice both familiar and new options. Unbeknownst to many, metrics you have added have always used the “Default” option unless you manually changed it. But now there are additional options here, such as Linear, Visit Participation, Reporting Window Participation, Last Touch, etc. By selecting one of these and providing your metric with a new name, you can create a brand new metric.

Since this can be a bit confusing, let’s look at an example of how this new feature can be used. For this example, I will use the “Visit Participation” option within the Allocation drop-down. The scenario: I have a blog and an eVar that captures the title of each of my posts. This is a Last Touch eVar and is commonly used with a Blog Post Views success event. Here is what a typical report looks like:

[Screenshot: Blog Post Title eVar report with the Blog Post Views metric]

Now, let’s say that I want to see which of my blog posts gets visitors to view the most other blog posts. To do this, I would normally go to the Admin Console and enable Participation on the Blog Post Views success event and then I would see a new metric called Blog Post Views Participation. This metric would give one “point” to each blog post title that is viewed and another point to each blog post for subsequent views of blog posts. For example, if someone viewed the Merchandising blog post and then viewed the Cohort Analysis post, the Merchandising post would receive two Participation points – one for itself and one for the Cohort post. Then I could divide the total Participation points by the total Blog Post Views to see which post had the most “pull-through.” This is something that has been done for years and you can read more about it here in my old Participation post (from 2009!).

But what has changed now is that you no longer have to be an Adobe Analytics Administrator to do this. Traditionally, only Admins have been able to turn on Participation, so end-users were stuck until they could get help. But now, you can create a Participation version of any Success Event right in the Calculated Metric Builder. Here is how you do it:

  • To begin, simply open an eVar report and add the metric for which you want to see Participation like what you see in the report above (note that you can create the Participation metric outside of a report, but I will do it within the report context to simplify things)
  • From here, use your link of choice to bring up the metrics left-nav window and click “Add” to create a new metric
  • Next, drag over the metric for which you want Participation, which in this case is the Blog Post Views Success Event
  • Then, click the gear icon and then the Allocation dropdown to display the options. When you complete these steps, it should look something like this:

[Screenshot: Allocation drop-down in the Calculated Metric Builder]

In this scenario, select the “Visit Participation” option and provide an appropriate name for the metric until you have something that looks like this:

[Screenshot: Visit Participation metric named and ready to save]

When I save this and add it to my report, I see this:

[Screenshot: report with the new Visit Participation metric added]

This report is the same as what you would have seen if your administrator had enabled Participation for the Blog Post Views success event. The Participation numbers will be higher than the raw metric because each blog post gets a “1” for itself and then credit for subsequent posts viewed. The closer the numbers are in the two columns, the less the post drove views of other posts. If you want, you can even create a calculated metric that divides this new Participation metric by the original metric. The formula might look like this:

[Screenshot: calculated metric dividing Participation by Blog Post Views]
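In plain terms, using the metric names from this example, the formula is simply:

    Pull-Through = Blog Post Views (Visit Participation) / Blog Post Views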

Adding this new metric to the report would show us which blog posts are the best at pulling visitors into other blog posts as shown here:

[Screenshot: report with the pull-through calculated metric added]

Using this report, you can easily see that the Merchandising post drives more follow-on post views than the Advanced Search Filters post. If you wanted, you could even re-sort to find the posts that have the most pull-through:

[Screenshot: report re-sorted by the pull-through metric]

In the end, the cool addition here is that any end-user can enable Participation for any metric without having to get approvals or harass the Adobe Analytics administrator. But at a higher level, you can create six new flavors of each Success Event in your implementation without doing any additional tagging! There isn’t time in this post to cover all six of the options, but most should be self-explanatory and can be created using the same steps outlined above. Next week, I will continue this topic with one of my favorite new additions – the Reporting Window Participation feature!


Scatter Plots in Analysis Workspace

Last week, I wrote about how to use the new Venn Diagram visualization in Analysis Workspace. Now I will discuss another new Analysis Workspace visualization – the Scatter Plot. This visualization should be familiar to those in the field, as it has been available in Microsoft Excel for years. The purpose of the scatter plot is to show two (or three) data points on an x/y axis so that you can visualize the differences between them. In this post, I will continue using my blog as an example of how the scatter plot can be leveraged.

Scatter Plot Visualization – Step by Step

The first step in creating a scatter plot visualization is to create a freeform data table. This normally means adding a dimension and a few metrics. I recommend starting with the two metrics that you want to see plotted against each other. Here you can see that I am looking at my blog posts sorted by popularity and have also added Visit Time Spent:

[Screenshot: freeform table with blog posts, views and Visit Time Spent]

Once I have this table the way I like it, I can drag over the scatter plot visualization and then highlight the two columns to see this:

[Screenshot: scatter plot built from the two highlighted columns]

In this case, I am seeing the views of each blog post on the “x” axis and the time spent on the “y” axis. Blog posts that have a lot of views will appear on the right side of the visualization, while those with fewer views will be on the left. At the same time, those with more time spent in the visit will be near the top and those with lower time spent will be near the bottom. Blog posts with the most views and the most time spent will be in the upper-right quadrant. You can hover your mouse over any of the scatter plot points to learn more about it. For example, if I want to see what the best item is at the top-right (in green), I can hover to see this:

[Screenshot: hovering over the top-right data point]

In this case, my post on Merchandising eVars seems to be the one viewed the most and with the most time spent (probably because Merchandising is a tricky topic!).

Most web analysts use scatter plots to identify improvement opportunities. For example, if you are plotting products, cart additions and orders, you can see which products have a high number of cart additions, but a low number of orders and figure out ways to take action on that. In this case, I may look for blog posts that have a large amount of time spent (which may mean that they are engaged with the content), but a low number of views. In this example, I might hover over the purple circle and see this:

[Screenshot: hovering over a high-time-spent, low-views post]

This may indicate that I need to promote this blog post about report suite tweaking more to get it more views.

When using scatter plots, there are some ways you can customize what you see in the visualization. If you want to flip the x/y axis, you simply reverse the metric columns in your freeform data table. If you want to see percentages instead of raw numbers, you can do this in the settings as well. You can also choose whether or not you want to see a legend in the visualization.

Finally, if you want to plot an additional data point, you can add a third metric to your freeform data table and the scatter plot visualization will modify the size of the circles to reflect the size of the new data point. For example, if I add Average Page Depth to the freeform table, the circle size will reflect the average depth associated with each blog post. Now I can see that my Merchandising post seems to be more of a “one and done” read versus other posts that appear to be viewed alongside other website content.

[Screenshot: scatter plot with circle size showing Average Page Depth]


Seamless Adobe Analytics Integration

One of the best parts of Analysis Workspace and its visualizations is how seamlessly it works with the other aspects of Adobe Analytics. Last week, I showed how you can apply segments to Venn Diagram visualizations, and the same is true for scatter plots. But the integration doesn’t end there. Imagine that I look at some of the visualizations above and ask myself, “which types of blog posts do people view the most and spend the most time on?” While the above visualization helps me differentiate the individual blog posts, I tend to write a lot of posts, and that can make it difficult to see the big picture. To conduct this kind of analysis, I can use SAINT Classifications to associate a “Blog Post Type” with each blog post. In my case, my blog posts tend to be about Adobe Analytics features, types of analyses you can do, implementation best practices, etc. So if I put each blog post into one category or type using SAINT, I can get a much higher-level view of how my blog is performing. Here is a sample of what my SAINT file might look like:

[Screenshot: sample SAINT file with the Blog Post Type classification]
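In text form, a SAINT file is just a key column plus one column per classification; here is a hypothetical tab-delimited excerpt (the type assignments are invented for illustration):

    Key                                     Blog Post Type
    Merchandising eVars                     Implementation
    Scatter Plots in Analysis Workspace     Adobe Analytics Features
    Trending Path Reports                   Analysis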

Once this is done, I can repeat the above steps to create a scatter plot, but this time, instead of using the Blog Post Title dimension, I will use the Blog Post Type dimension (a classification of Blog Post Title) and re-build my scatter plot. This allows me to see fewer data points, since all of my blog posts have been grouped into a small number of types:

[Screenshot: scatter plot by Blog Post Type]

This new scatter plot allows me to see that blog posts focused on implementation best practices and analyses tend to get the most views and have the most time spent. Posts around product features are next, but have a drop-off in the time spent. I can also see that posts on Analysis Workspace have a low number of views and time spent, but I attribute that to the fact that those posts haven’t been around very long and I would expect that category to move closer to the pink circle (Adobe Analytics product feature posts) over time. Finally, I can see that my posts about training classes and my miscellaneous posts that are a bit different don’t seem to get as many views or time spent. This combination of SAINT Classifications and the scatter plot allows me to learn things that I could not have easily surmised by looking at the scatter plot of the individual blog posts above.

As you can see, the combination of pre-existing Adobe Analytics features and the new Analysis Workspace visualizations can be extremely powerful. Since they are easy to build, unlimited and come at no additional cost, I suggest that you try them out with your implementation. Enjoy!


Venn Diagram in Analysis Workspace

If you are an Adobe Analytics customer, you have probably noticed that Adobe has been tearing it up lately when it comes to Analysis Workspace. There have been a lot of cool innovations and fun stuff for you to play around with in this new freeform interface. Being an “old fogey” myself, sometimes it takes me a while to play around with the new stuff, but I have started doing that lately and found it to be interesting. In this post, I will demonstrate how you can use the new Venn Diagram visualization to do analysis.

Venn Diagram Visualization

If you are in the analytics space, you probably already know what a Venn Diagram is, but just to be sure, it is a data visualization that shows how much overlap there is between data elements. In Analysis Workspace, Adobe allows you to add up to three Segments to the Venn Diagram and then choose a metric for which you want to see the intersection. To illustrate this, let’s look at an example. Let’s say that I want to see what percent of visitors to the Analytics Demystified blog view my blog posts, and I also want to see how often my competitors are reading my blog posts. The first part is relatively easy, since I can build a segment to find visitors who view at least one of my blog posts. The latter requires me to use a tool like DemandBase to identify the companies hitting my blog and then SAINT Classifications to pick out companies that I think might be competitors of mine (or at least offer similar services).

Once I have these segments built, I can go to Analysis Workspace and add the Venn Diagram visualization to the canvas and add my segments and the desired metric:

[Screenshot: Venn Diagram builder with two segments and a metric selected]

Once this is done and I click the “Build” button, I will see the Venn Diagram like this:

[Screenshot: Venn Diagram of blog readers and competitors]

Here I can see that I have 26,000 unique visitors who have viewed my blog and about 4,000 competitors who have viewed my website. But if I want to see the intersection of these, I can hover over the overlapping area and see this:

[Screenshot: hovering over the Venn Diagram overlap]

Now I can see that there are about 1,300 visitors (~5%) who have read my blog and are competitors. I can also click the “Manage Data Source” area to see a tabular view of this data if desired:

[Screenshot: tabular view of the Venn Diagram data]

Next, I might want to do more research on the intersection of these two segments. To do this, I simply right-click on the overlapping area and create a brand new segment from the Venn Diagram overlap:

[Screenshot: creating a segment from the Venn Diagram overlap]

This will take me to the segment builder, where the segment is already pre-populated and I can make any tweaks necessary and provide a name:

[Screenshot: pre-populated segment in the segment builder]

Now that I have a brand new segment, I can use it like I would any other segment anywhere within Adobe Analytics. In this case, if I want to see the specific list of competitors reading my blog, I can create a new freeform table, add the DemandBase Company eVar and the Visitors metric, and then apply this new segment to see the top competitors viewing my blog:

[Screenshot: DemandBase Company freeform table with the new segment applied]

Of course, I can use the unlimited breakdown feature of Analysis Workspace to drill down as much as I want. For example, to see exactly which blog posts a particular company is viewing, I can break the table down by the Blog Post eVar and maybe even again by the Cities report:

[Screenshot: breakdown by Blog Post eVar and Cities]

As if that weren’t cool enough, I can also apply additional segments to the entire workspace canvas, and those segments will be applied to ALL elements on the canvas. For example, I noticed in the table above that a lot of the competitors reading my blog appear to be from overseas. If I want to limit all of this data to companies hitting my blog from the US only, I can create a US Only segment and apply it to the entire canvas by dropping it into the segment area at the top of the page:

[Screenshot: US Only segment applied to the canvas]

This will limit all of the canvas visualizations to US-only data, and all of the tables and the Venn Diagram will instantly update!

As you can see, the Venn Diagram visualization can be very powerful. Instead of creating hundreds of segments to identify interesting intersections, you can simply add segments to the Venn Diagram visualization and then, when you find the ones you like, create segments right from there. These segments might contain visitors who viewed products from Category A and Category B, or visitors who viewed a video and purchased. The possibilities are truly endless. I recommend that you pick some of your favorite segments and try it out. I think you will have as much fun as I have had seeing the intersections of your data.


Using UTM Campaign Parameters in Adobe Analytics

One of the primary use cases for digital analytics tools like Adobe Analytics and Google Analytics (GA) is the ability to track external campaign referrals and see their impact on KPIs. Way back in 2008 (yes, 8 years ago!), I blogged about how to track campaigns in Adobe Analytics (then called Omniture SiteCatalyst). Since then, a lot has changed in the online marketing landscape. With many digital marketers being exposed to Google Analytics, the way campaign tracking is done in GA has almost become the de facto industry standard. The most popular GA method uses a set of UTM parameters to identify the campaign source, medium, term, content and campaign (though there is a “utm_id” option similar to how Adobe does it). These parameters are normally passed in the URL and parsed by GA to populate the appropriate analytics reports. But as Adobe Analytics users know, Adobe uses one variable (s.campaign) to track external campaigns. So what if you are running both Adobe Analytics and Google Analytics, or you simply want to use the Google standard since that is what your advertising agencies are using? In this post, I will show how you can make the UTM campaign code tracking standard work in Adobe Analytics so your campaign data matches what is in GA.

Updating the Query String Parameter Code

Most Adobe Analytics clients are using something akin to http://www.mysite.com?cid=abc123 in their URLs and having the getQueryParam JavaScript plug-in pass the value after “cid=” to the s.campaign variable. But in reality, you can pass any value you want to s.campaign, and the plug-in can be configured to look for any query string parameter. Therefore, if you want to use the UTM campaign parameters, you can adjust the plug-in to concatenate the values into one string with a separator and pass it to the s.campaign variable. For example, if you view the URL below, you will see that I have used four out of the five UTM parameters in the URL:

[Screenshot: URL containing four UTM parameters]

From here, the plug-in does the concatenation, and as you can see in the JavaScript Debugger, here is what is passed to the s.campaign variable in Adobe Analytics:

[Screenshot: JavaScript Debugger showing the concatenated s.campaign value]
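As a hedged sketch of the concatenation logic in plain JavaScript (the getParam helper is hypothetical, the four parameters are assumed to be source, medium, content and campaign, and real implementations typically use the getQueryParam plug-in instead):

    // Read a query-string parameter from the current URL, or "" if absent
    function getParam(name) {
      var match = new RegExp('[?&]' + name + '=([^&#]*)').exec(window.location.search);
      return match ? decodeURIComponent(match[1].replace(/\+/g, ' ')) : '';
    }

    var parts = ['utm_source', 'utm_medium', 'utm_content', 'utm_campaign'].map(getParam);

    // Only set s.campaign when at least one UTM parameter is present
    if (parts.join('') !== '') {
      s.campaign = parts.join(':'); // e.g. "google:email:textlink:spring2016"
    }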

If you want more details on the technical implementation of this, you can check out this article on the Adobe forum.

Reporting on UTM Campaign Codes

Once you have completed the above technical implementation and have campaign data populating Adobe Analytics, here is what it might look like in the Campaigns report:

[Screenshot: Campaigns report with UTM-based campaign codes]

Now that you have the data in a consistent format, you can use SAINT Classifications to split each of the parameters into separate reports. To do this, you would add a new SAINT Classification for each UTM parameter you used. This is done in the Administration Console, and as shown below, I have added four new classifications (Source, Medium, Campaign Description and Campaign Owner):

[Screenshot: four new SAINT classifications in the Administration Console]

Once you have your classification reports created, you need to tell Adobe Analytics how to populate them. You could upload the meta-data manually, but the easiest way is to use the SAINT Rule Builder, which allows you to automate the classifications using RegEx or other methods. In this scenario, RegEx is the most logical option, since it can be used to parse out each parameter using the “:” as the separator. This is what the rule set would look like:

[Screenshot: SAINT Rule Builder rule set]
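As a hedged illustration (exact Rule Builder syntax varies), the rules might all match a regular expression like this and map each capture group to one classification:

    ^([^:]*):([^:]*):([^:]*):([^:]*)$

    Group 1  ->  Source
    Group 2  ->  Medium
    Group 3  ->  Campaign Description
    Group 4  ->  Campaign Owner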

Once this is activated, you can see your campaign data in each of these reports (Source report shown here as an example):

[Screenshot: Source classification report]

Final Thoughts

It is up to each organization to decide how it wants to track its marketing campaigns. I have many clients who like to customize how they assign campaign codes, so please don’t take this post as a recommendation for adopting the UTM approach. A similar process can be followed no matter what naming convention you decide to use for your campaign codes. However, there are many benefits to adopting naming conventions once they become a standard, such as integration with 3rd party tools and data integration. It is my hope that this post simply educates you on how you can use the UTM campaign code approach in Adobe Analytics if needed. There is more discussion on this topic on Quora if you are interested in delving into it in more detail.



Report Suite Inconsistency [Adobe Analytics]

In my last post about Virtual Report Suites, I discussed some of the pros and cons of consolidating an Adobe Analytics implementation with multiple report suites into one combined report suite and using Virtual Report Suites. However, one of the reasons your organization might not be able to combine its report suites and leverage Virtual Report Suites is the pervasive problem of report suite inconsistency. This is a topic I have ranted about periodically, most recently in this post about whether you should start over when re-implementing Adobe Analytics. In this post, I will review why report suite inconsistency matters, especially as you consider moving to an implementation with fewer report suites and more Virtual Report Suites.

Why Are Report Suites Inconsistent?

Most organizations implementing Adobe Analytics have the best intentions at the start. They want to implement one site and track the most important items. But after a while, things start to go downhill. A second site is implemented and it has some different needs, so different variables are used. Then maybe a different team implements a mobile app and yet another set of variables is used. This process continues until the organization has 5-10 report suites and very little is common amongst them. You know you have a problem when you see this in the Administration Console (with all of your report suites selected):

[Screenshot: Administration Console showing “Multiple” for variable definitions across report suites]

It is easy to fall into this trap, so I don’t mean to blame you if it has happened to your organization. Oftentimes, it was done by your predecessors over a long timeframe. Unless you have strict policies and procedures to prevent this type of inconsistency, it will happen more often than not.

Of course, there are specific cases where you want different report suites to be inconsistent and for which seeing “Multiple” above is expected. For example, you may decide that each report suite will have 5-10 variables that are unique to each suite, and in those variable slots, each site can collect whatever data it wants. I have many clients who designate 20 eVars, 20 sProps and 50 Success Events as “local” variables that are purposely not consistent across report suites. That is a valid approach, but it requires discipline and management to enforce. The report suite inconsistency I am talking about is the unintentional kind that occurs in many Adobe Analytics implementations. This is what I hope to help you avoid.

Why Is Report Suite Inconsistency Bad?

There are several reasons why not having report suite consistency can hurt you. Here are some of the ones that I encounter the most:

Data in Global Data Set Can be Wrong

If you have different data points feeding into the same variable in different report suites, when you combine the dataset, you will have different values rolled up. For example, if you track Cities in eVar5 for one suite and Zip Codes in eVar5 for another, in the shared data set, you will see a mixture of Cities and Zip Codes. This is even worse if you think about Success Events. If you are tracking Leads in event1 in one suite and Onsite Searches in event1 for another suite and roll the data up, you will see a sum of Leads and Onsite Searches in the shared data set and have no way to know which is which! That can get you in a lot of trouble, especially if you label event1 as Leads in the shared data set and many of the numbers represent Onsite Searches!
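
To make this concrete, here is a minimal sketch of the conflicting tagging (the variable slot and values are illustrative):

// Suite A tagging: Cities are passed to eVar5
s.eVar5 = "Chicago";

// Suite B tagging: Zip Codes are passed to the same eVar5 slot
s.eVar5 = "60601";

// Rolled up into a shared data set, the eVar5 report mixes both value types.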

Can’t Use Virtual Report Suites

As mentioned in my previous post, if you want to save money on secondary server calls and consolidate your report suites into one master suite (using Virtual Report Suites), you need to make your report suites consistent. This is due to the fact that having one master report suite necessitates having just one set of variable definitions.

Can’t Re-Use Reporting Templates

One of the greatest benefits of having consistent report suites is the re-use of reports and reporting templates. If you use the same variables across multiple report suites, you can easily jump from one Adobe Analytics report to the same report in another suite by simply changing the suite in the top-right dropdown. Let’s say that you have configured a great report in Adobe Analytics with a dimension and a few metrics. With one click you can change the report suite and see the same report for the second report suite without any re-work. The same applies if you use dashboards or reporting templates in Adobe ReportBuilder. Adobe ReportBuilder is where report suite consistency pays off the most, since you may spend a lot of time getting your Excel reports/dashboards working and formatted properly. This time can be leveraged for multiple report suites by tying the report suite ID to a cell in Microsoft Excel and refreshing the data for a different suite. If your report suites aren’t consistent, you would have to build different data blocks for each report suite and lose out on one of the best features of Adobe ReportBuilder.

Can’t See Aggregated Pathing

If you have Pathing turned on for sProps, you can see paths before and after specific items, but only for paths within the site for which the report suite is configured. If you send data to a global (shared) report suite, you can see paths across multiple web properties as long as both have the same sProp with Pathing enabled. For example, let’s say that you have a search phrase sProp20 with Pathing enabled in your Brand A report suite, but for Brand B, you have the search phrase in sProp15. In both of these report suites, you can see the Pathing of search phrases, but if the same person visits both brand sites in the same session, you might want to see search phrase paths across both sites. Even if you have a global (shared) report suite, you cannot see this, since the data is being stored in two different sProps. But if you had used the same sProp in both suites, you could see all search phrase Pathing in the global (shared) report suite for the entire session.

Can’t Re-use Training and End-User Documentation

I always like to provide good end-user documentation and training for implementations I work on. This means having some sort of file or presentation that explains each business requirement, how it is tagged, what data is collected and how it can enable analysis. I also like to provide training on how to use Adobe Analytics and the key reports/dashboards that have been pre-built for end-users. When you have a consistent implementation across multiple sites, you can build these deliverables once and re-use them for all sites. But if you have an inconsistent implementation, you have to create these deliverables multiple times, which can use up a lot of unnecessary bandwidth.

Can’t Use Consistent Tagging/JS File/Tag Management Setup

Last, but certainly not least, having inconsistent variable definitions means that each site has to be implemented slightly differently. Instead of always passing search phrases to sProp20 (as in the preceding example), your developers have to know that for Brand B, they have to place that data in sProp15 instead. Even if you use a tag management system and a data layer, you still have to configure your TMS differently by report suite, which increases your odds of mistakes and data quality issues. In addition, documentation of your implementation becomes much more difficult and time-consuming.

How Do You Avoid Report Suite Inconsistency?

So, how do you avoid report suite inconsistency? That is the million-dollar question, and unfortunately there is no perfect answer. In my experience, it comes down to process and coordination. When I ran the Adobe Analytics implementation at Salesforce.com, I ruled it with an iron fist. I was the only one with Admin access, so no one could add any variables to any report suites without going through me. Since that approach might not be practical at larger organizations, I recommend that you maintain a shared solution design document that is kept up to date and always in line with the settings in the Adobe Analytics Administration Console. You can do this by comparing the two at least once a month and by using the Administration Console to compare the variables across your report suites. I also recommend that you drive your analytics program by business requirements instead of variables, so that you are only adding variables when new business requirements arise. I explain more about that process in my Adobe white paper.

Final Thoughts

Having consistency in your analytics implementation is difficult, but a goal worth striving for (in my opinion). I hope this post helps you see why it is advantageous and why I encourage my clients to pursue this goal. While it may take a bit more planning and forethought in the beginning of the process, it definitely pays dividends down the road. If you have any thoughts, questions or comments, please let me know.  Thanks!

Adobe Analytics, Featured

Virtual Report Suites [Adobe Analytics]

Recently, Adobe provided an Adobe Analytics update that includes a cool new feature called “Virtual Report Suites.” Virtual Report Suites are an exciting new way to segment your Adobe Analytics data and control access to it. In this post, I will share some of my thoughts on this new feature and share some resources that Adobe has provided so you can learn more about it.

A Brief History Lesson

Before I get into Virtual Report Suites, I think it is worthwhile to go back in time to see how this feature evolved, and why it is so cool. Back in the early days of Omniture SiteCatalyst (I am dating myself!), it was not possible to segment data instantaneously. To see segmented data, you had to either run a DataWarehouse report or use the Discover product (now called Ad Hoc Analysis). But there was also another feature that wasn’t used very often called Advanced Segment Insight (ASI). ASI was a way that you could define a segment and then re-process all of your data for just that segment and it acted just like a new report suite. However, the data was usually about 24 hours in arrears, so it didn’t provide anything close to real-time segmentation.

With the advent of instant segmentation in v15 of Adobe Analytics, the entire game changed for Adobe Analytics customers. Suddenly, you could segment data in real-time! This meant that ASI was no longer needed, so that feature was phased out of the product (or at least hidden!). This new ability to instantly segment also brought with it some new Adobe Analytics architecture considerations. For example, people began wondering whether they still needed to have multiple report suites and pay for extra secondary server calls. Why not just throw all of your data into one massive report suite and use segmentation to narrow down your data set? This would save money and avoid having to deal with different report suites. As I outlined at the time in this blog post, some of the reasons to not go down to one report suite were as follows:

  • The complexity of segments your users might have to make when dealing with just one data set;
  • The fact that even though you could segment, you could not enforce security constraints, so everyone could see all data in the combined data set (i.e. users in the UK can see USA data and vice-versa);
  • You could not have different local currencies in the combined data set, so you’d have to pick just one currency.

For me, the most critical of these items was #2 – the one around security. But over the last few years, many companies have decided to consolidate their implementations anyway, accepting more complex segments and less data security control in exchange for a simpler implementation and some cost savings.

Virtual Report Suites

Now let’s chat about Virtual Report Suites. This new feature allows you to create a new report suite based upon a segment definition and have its data available in near real-time. When you create a Virtual Report Suite, it appears in the list of report suites with a blue dot to differentiate it. This is really what ASI should have always been, had the technology allowed it! The cool part about Virtual Report Suites is that they address item #2 above around security. With Virtual Report Suites, you can assign users to a security group and limit what data they can see. Here are some examples of how you can use this new security feature:

  • You have multiple brands as part of your company and want to track all data in one report suite, but only let marketers from each brand see their own data;
  • You have multiple country websites and want to track all data together, but only allow each country marketing team to see its own data;
  • You have an agency that you work with and want them to see campaign and some conversion data, but not all of your analytics data.

As you can see, the addition of security can tip the scales towards a consolidated report suite approach. For those clients of mine that were worried about security, they now have one less reason to not reduce the number of report suites they maintain.

Obviously, the main driver for consolidating your report suites into one combined one is to save money on your Adobe contract. Secondary server calls can add up quickly and money saved can be applied to more analysts or adding Adobe Target to your implementation. Combining report suites also avoids some of the inconsistency issues I find in client implementations described in this post.

However, there are still some “gotchas” you need to consider before you decide to consolidate all of your report suites into one suite and use Virtual Report Suites. Adobe has outlined these considerations in this great FAQ document. Here are the ones that jump out to me as being most important:

  • Unique Values – Sometimes combining data sets leads to variables exceeding the monthly unique value limits (normally 500,000). Exceeding this limit has negative ramifications when it comes to segments and SAINT Classifications;
  • Current Data – If you like seeing up to the minute data in your Adobe Analytics reports, you will only be able to see that in the normal report suite, not the Virtual Report Suites;
  • Non-Shared Variables – If you have different report suites for different sites, you can provide each site with its own set of non-shared variables. This means that Site A might use eVars 75-100 to track different things than Site B does. But if you combine your data sets, you cannot have different values in the same variable slots, so you might have to allocate different variable slots to each site (i.e. Site A gets eVars 75-85 and Site B gets eVars 86-100) and might not have enough variables to go around;
  • Full Picture – One of the issues of using multiple report suites or Virtual Report Suites is that sometimes your users don’t get the full picture when doing analysis. For example, if you have a segment that looks for visits that enter on a campaign landing page, the segmented reports will show a different story than the main report suite, where visitors could have entered on any page. This means that paths, participation and eVar allocation are all different between the main suite and the multi-suite tagged or Virtual Report Suite. That isn’t necessarily a bad thing, but it can be if the people doing analysis don’t realize or remember that they are not seeing the full picture. Here is a typical example. Imagine that a paid search campaign code drove lots of people to your website. You can see that clearly in your main report suite. But when you look at the Virtual Report Suite, depending upon the segment used to create it, the primary entry pages may not be included, so the campaign variable isn’t populated. Therefore, when doing analysis in your Virtual Report Suite, you may find that most visits originate from “Typed/Bookmarked,” when, in fact, they were driven by a paid search campaign. This just takes some practice and education to make sure you don’t make bad business decisions due to your report suite architecture;
  • Currencies – As mentioned previously, if you deal with multinational sites, you may want to have a different currency for each site and Virtual Report Suites don’t currently support this.

These are the main things I would suggest you think about, but you can get more information in the Adobe FAQ document. You can also check out the short video that Ben Gaines created on Virtual report suites here.

Final Thoughts

The addition of Virtual Report Suites is an exciting development in the evolution of Adobe Analytics and one that will definitely have a long-term positive impact. It brings with it the opportunity to drastically change how you architect your analytics solution. But making the decision to change your Adobe Analytics report suite architecture is not something you do every day. Therefore, I would suggest that you do some due diligence before you make any drastic changes. There are still ways that you can use and get value from Virtual Report Suites, even if you don’t choose to move all of your data into one combined data set right away. If you have questions or want to bounce ideas off me as an objective 3rd party, feel free to contact me.  Thanks!

Adobe Analytics, Featured

Using Custom Variables vs. Segmentation [Adobe Analytics]

As I work with and train clients to use Adobe Analytics, sometimes I encounter confusion around custom variables and segmentation. Both of these Adobe Analytics features are used to segment data, but they do so in different ways. In this post, I am going to take a step back and discuss how to think of these different product features in the proper context when doing analysis.

Custom Variables

So what exactly are custom variables? In daily use, Adobe Analytics custom variables are dimensions, taking the form of conversion variables (eVars) and Traffic Variables (sProps). Each implementation gets a specific number of these custom variables in addition to the ones that are provided out-of-the-box. These variables are used to track specific data elements that are meaningful to your organization. For example, if you have visitors log into your website (or mobile app), you may decide to allocate an eVar to the User ID and pass that ID upon login. By storing this value in an eVar, any activity from that point forward can be associated with a specific User ID. The choice to use an eVar vs. an sProp depends on a few factors, including how long you want the value to persist, whether you need Pathing, etc. Additionally, you can use the various expiration settings to determine how long the User ID will be retained in Adobe’s virtual cookie.
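
As a minimal sketch (the eVar slot, event number and crmUserId variable are illustrative assumptions, not a standard), the login tagging might look like this:

// On successful login, bind the visitor's ID to an eVar and count the login
s.eVar8 = crmUserId;  // e.g. pulled from your data layer after authentication
s.events = "event5";  // a "Logins" success event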

However, one downside of custom variables is that activity is only tied to their values from the time they are set and afterwards. For example, in the scenario above, it could be the case that a visitor viewed twenty pages on the website prior to logging in and providing their User ID to the eVar. In that case, all of the activity from the first twenty pages will not be associated with any User ID. Therefore, if various Success Events are set (i.e. Cart Add, File Download) during those initial twenty pages, they will appear in the “None” row of the User ID eVar report. Unfortunately, over the years, I have seen that many Adobe Analytics customers don’t understand this nuance, which is pretty important.

Ideally, you would set as many of your custom variables as early in the visit as you can, but there will always be cases in which Success Events occur before an eVar receives a value. Let’s illustrate this with another common example. Imagine that a visitor visits a B2B software site, views a bunch of products and eventually views the pricing page for a CRM product. Based upon the fact that they ended up on the pricing page of the CRM product, your marketing team chooses to assign a value of “CRM Prospect” to a “Marketing Segment” eVar. Hence, from that point forward, you can see all website activity for “CRM Prospects” by using the “Marketing Segment” eVar, but what about all of the activity that these people did before they were assigned to this segment? Up until that point, they were anonymous as far as that eVar was concerned. For example, if you were to create a Conversion Funnel report in Adobe Analytics and filter it by the “Marketing Segment” eVar value of “CRM Prospect,” you would only see the filtered Success Events that took place (were set) after the eVar had been populated with the value. This can create issues at times and lead to real confusion.
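
As a minimal sketch (the page name and eVar slot are illustrative), the assignment might look like this:

// The marketing-segment eVar is only set once the visitor reaches the CRM pricing page
if (s.pageName === "products:crm:pricing") {
  s.eVar10 = "CRM Prospect";  // persists for future hits per the eVar's expiration setting
}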

In all of these cases, the common theme is that custom variables are useful, but sometimes don’t show the complete picture. Next we’ll talk about how using Segmentation can help complete the picture.

Segmentation

As you hopefully know by now, the Segmentation feature within Adobe Analytics allows you to narrow down your data set to only those hits, visits or visitors that meet the specific criteria you have added to your segment definition. This feature allows you to instantly focus your web analysis on the exact population you care about. As you would expect, the segments you create can use out-of-the-box data elements or any of the aforementioned custom variables. However, the reason why Segmentation is so powerful is that when you use a Visit or Visitor based segment, you are able to include all of the data that took place within the session (or beyond if using a Visitor container) instead of just the data that took place after a value was set in a custom variable.

Since this can be confusing, let’s use one of our previous examples to illustrate this. Imagine that you are interested in seeing the internal search phrases (stored in an eVar) used by visitors in the “CRM Prospect” segment, which uses the “Marketing Segment” eVar described above. If you were to open the “Marketing Segment” eVar report and find the row for “CRM Prospect,” you could then break this row down by the internal search phrase eVar (possibly using the “Internal Searches” Success Event) and see all of the search phrases used by that group of people. However, as explained above, you would really only be seeing the search phrases that were used after people were identified as being in the “CRM Prospect” segment. It is possible that some of the folks who eventually got placed into the “CRM Prospect” segment conducted searches for various phrases prior to being added to the segment. Therefore, using the custom eVar may not give you the entire picture.

If you want to be more thorough, in addition to using the custom eVar, you can rely on Segmentation to get your answer. In this case, you could create a segment in which the Visit contained a “Marketing Segment” eVar value of “CRM Prospect.” This means that Adobe Analytics will look for any activity that took place in the entire visit in which the “CRM Prospect” value was set. This segment would include all of the activity after the eVar value was set and the activity that took place before it was set. Once you apply this segment, if you open the internal search phrase eVar report, the values will, by segment definition, include all of the search phrases used by people who at some point were added to the “CRM Prospect” segment [for the advanced folks, you can even use exclude containers to take out any other segments if you want to be exact]. Therefore, the data you get back using the custom variable approach may not be as complete as what you get back using the segment approach. This becomes even more potent if you use a Visitor container in your segment, since that will include data from all visits in which a visitor was placed into the “CRM Prospect” segment.
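
For reference, the segment described above might be outlined like this (a plain-language sketch, not exact segment builder syntax):

Segment: CRM Prospect Visits
  Container: Visit
    Include: Marketing Segment (eVar) equals "CRM Prospect"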

Final Thoughts

While this topic can be a bit confusing to novices, it is something that is important for your end-users to understand. I often find that end-users are not properly trained on how to use Segmentation to its fullest extent and, therefore, many end-users rely solely on custom variables. This sometimes means that they are not getting the full picture when it comes to data analysis. It is for this reason that I suggest you read this post a few times and teach your users how custom variables really work, including what happens before and after they are set and how using Segmentation differs. You will probably end up seeing a steep increase in the usage of Segmentation as a result!

Adobe Analytics, Featured

When to Tweak Report Suites and When to Start Anew

As someone who has made a living auditing, fixing and improving Adobe Analytics implementations, I receive a few related questions all of the time. One of these is whether a company cleaning up or re-implementing Adobe Analytics should make changes to its existing report suite(s) or start over with brand new report suites. As you would expect, there is no one right answer, but I thought I would share some of the things I consider when making this decision.

Auditing What You Have

The first step of the cleanup process for an Adobe Analytics implementation is a thorough audit or review of what you have today. I can’t tell you how many companies have hired me to do a “quick” review of their implementation and had no idea how inaccurate or bad it really was in its current state. Too often, I see companies that don’t question the data they are collecting and assume it is useful and correct. Trust me when I tell you that, more often than not, it isn’t! I’d say that most of the companies I start with score around 60–70 out of 100 when it comes to their Success Events, eVars and sProps functioning properly. If you are unsure, I suggest you read this white paper that I wrote in partnership with Adobe (and if you think you may have some issues or want an objective validation that your implementation is in good shape, check out this page which describes my service in this area).

As you audit your existing implementation, you will want to make sure to look at the following:

  1. How many report suites do you have and which are actively being used (and which can be hidden in the Admin Console)? How consistent are these report suites? If you select all of them in the Admin Console, how often do you see “Multiple” values? The more often you do, the more trouble you may be in.
  2. How good is your data? If you have incomplete or inaccurate data, what is the value of sticking with your old report suites that are filled with garbage?
  3. How important to your business is year over year data? Some organizations live and die by YoY data, while others focus more on recent periods, especially if they have recently undergone a website re-design.

The answers to these types of questions will impact the ultimate decision, as I will dive into below.

Old Report Suites or New Ones?

So getting back to the original question – if you are going to re-implement, should you use existing report suites or new ones? The only way I can explain this is to show you the scenarios that I see most often.

Report Suite Inconsistency

If, in the audit process above, you find that you have lots of report suites and that the variable assignments in each of them are pretty different, the odds are that you should start from scratch with new report suites and, this time, make sure that they are set up consistently. As mentioned above, the easiest way to see this is to select all of your active suites, choose a variable type (i.e. Events, eVars or sProps) and view the settings like this:

[Screenshot: Administration Console showing “Multiple” for variable definitions across report suites]

If you see this when selecting multiple report suites, the odds are you are in trouble, unless you manage lots of different websites that have absolutely nothing in common across them. I tend to see this issue most in the following situations:

  1. Different sites are implemented by different parts of the business and no communication exists between them or there is no centralized analytics “Center of Excellence”
  2. One business acquires another that uses the same analytics tool, but the implementations were done while the companies were separate
  3. Two businesses or business units with different websites/products had implemented separately and, only later, decide they want to combine their data
  4. A mobile team goes rogue and creates new report suites and tags all mobile sites/apps with a brand new set of variables without talking to the desktop group

Regardless of how you got there, the reason report suite inconsistency is so bad is that salvaging it requires a massive variable reconciliation if you want to use your existing report suites and, even then, all but one suite is going to have different data in it than it did in the past. For example, let’s say that event 1 above is “Internal Searches” in two report suites but has a different definition in each of the other eight. That means you have nine different definitions for event 1 across ten report suites. Even if you lay down the law and say that after your re-implementation event 1 will always be “Internal Searches,” you will still have numbers that are not Internal Searches in eight of your ten report suites historically. Thus, if someone looks at a long date range in one of the suites that didn’t historically use event 1 for Internal Searches, much of what they see will not actually be Internal Search data. Personally, I’d rather have no historical data than potentially misleading data. In addition, I think it is easier to start anew and tell the developers from all of your disparate sites that they must move their data from wherever it is now to the new standard variable assignment list, rather than trying to map existing data to new variables using Processing Rules, DTM, VISTA Rules or other work-arounds. Doing the latter just creates Band-Aids that will eventually break and corrupt your data once again in the future.

Here is an example of the eVars for one of my clients:

[Screenshot: eVar definitions compared across five report suites]

In this case, I only compared five different report suites out of more than fifty that they have in total. As you can see, reconciling this to ultimately have a global report suite or to send all data into one suite would be quite a challenge!

Conversely, if it turns out that all of your suites have pretty good variable definition consistency, then you can move the data from the incorrect variable slots to the correct ones in your lower priority report suites and continue using your existing report suites. For example, if you have one main report suite and then a bunch of other less significant report suites (like micro-sites), you may decide that you want to keep the main suite the way it is and force all of the other suites to change and adhere to the variable definitions of the main suite. This is a totally acceptable solution and will allow you to have year over year data for your main suite at least.

However, if you go down this path, I would suggest that any variables used only by the non-main report suites be added to the main report suite in new variable slots, so that eventually all of your report suites are consistent. For example, let’s say that one of the less important suites has a success event #1 that is Registrations, but there is no Registrations event in the main suite that you want to persist. In this case, you should move event 1 in the non-main suite to the next available event number (say event 51) and then add event 51 to the main report suite as well. If you will never have registrations in your main suite, it is still OK to label event 51 as Registrations there; simply disable it in the Admin Console. This way, you avoid any variable definition conflicts in the future. For example, if the main report suite needs a new success event, it would use event 52 instead of event 51, since everyone now knows that event 51 is taken elsewhere. The only time this gets tricky is when it comes to eVars, since they are limited to 100 for most customers, but conserving eVars is a topic for another day!

Regardless of what you find, looking at the consistency of your variable definitions is an important first step in the process.

Data Quality

As mentioned above, if the data quality for your existing implementation isn’t very good, then there are fewer reasons to not start fresh. Therefore, another step I take in the process is to determine what percent of my Success Events, eVars and sProps in my current implementation I trust. As described in this old post, data quality is paramount when it comes to digital analytics and if you aren’t willing to put your name on the line for your data, then it might as well be wrong. Even if your report suites are pretty consistent (per the previous section), there may be benefits to starting over with new report suites if you feel that the data you have is wrong or could be misleading.

When I joined Salesforce.com many years ago to manage their Omniture implementation, I had very little faith in the data that existed at the time. When doing my QA checks, it seemed like most metrics came with a list of asterisks associated with them, such that presentation slides looked like bibliographies! While I hated the idea of starting over, I decided to do it because it was the lesser of two evils. It caused some temporary pain, but in the end, helped us shed a lot of baggage and move forward in a positive direction (you can read a lot more about how we did this in this white paper). For this reason, I suggest you make data quality one of your deciding factors in the new vs. old report suite decision.

Year over Year Data

Finally, there is the issue of year over year data. As mentioned above, if your variable definitions are completely inconsistent and/or your data quality is terrible, you may not have many options other than starting with new suites for some or all of your report suites. Moreover, if your data quality is poor, having flawed year over year data isn’t much of an improvement over having no year over year data in my opinion. The only real data that you would lose if your data quality is bad is Page Views, Visits and Unique Visitors (which are pretty hard to mess up!). In most cases, I try to avoid having year over year data be the driving force in this decision. It is a factor, but I feel that the previous two items are much more important.

Sometimes, I advise my clients to use Adobe ReportBuilder as a workaround to year over year data issues. If you decide to move to new report suites, you can build an Excel report using ReportBuilder that combines two separate data blocks into one large Excel data table that can be graphed continuously. In this case, one data block contains data for a variable in the old report suite and the other data block contains data for the same data point in a different variable slot in the new report suite. But to an end user of the Excel sheet, all they see is one large table that updates when they refresh the spreadsheet.

For example, let’s imagine that you have two report suites and one has internal searches in event 1 and the other has internal searches in event 5. Then you decide to create a brand new suite that puts all internal searches into event 1 as of January 1st. In ReportBuilder (Excel), you can create one data block that has event 1 data for suite #1 for dates prior to January 1st, another data block that has event 5 data for suite #2 for dates prior to January 1st and a third data block that has event 1 data for January 1st and beyond in the new report suite. Then you simply use a formula to add the event 1 and event 5 data from the data blocks that precede January 1st and place that combined block directly next to the final data block that contains event 1 data starting January 1st (in the new suite). The result will be a multi-year view of internal search data that spans all three report suites. A year later, your new report suite will have its own year over year data in the new combined event 1, so eventually you can abandon the Excel workaround and just use normal Adobe Analytics reporting to see year over year data.

While this approach may take some work, it is a reasonable workaround for the few major data points that you need to report on year over year while you are making this transition to a newer, cleaner Adobe Analytics implementation.

Justification

Sometimes, telling your boss or co-workers that you need to re-implement or start over with new report suites can be a difficult thing to do. In many respects, it is like admitting failure. However, if you do your homework as described above, you should have few issues justifying what you are doing as a good long-term strategy. My advice is to document your findings and share them with those who may complain about the initiative. There will always be people who complain, but at the end of the day, you need to instill faith in your analytics data, and if this is a necessary step, I suggest you take it. I have learned over the years that the perception of your organization’s analytics team is one of the most critical things and something you should safeguard as much as you can. In our line of work, you are asking people to make changes to websites and mobile apps partly based upon the data you are providing. That demands a high level of trust, and once that trust is broken it is difficult to repair.

I also find that many folks I work with were not around when the existing implementation was done. This is mainly due to the high turnover rate in our industry, which, in turn, is due to the high demand for our skills. If you are new to the analytics implementation that you now manage, I recommend that you perform an audit to make sure what you inherited is in good shape. As described in books like this, you have a narrow window of time that is ideal for cleaning things up and asking for money if needed. But if you wait too long, the current implementation soon becomes “your problem” and then it is harder to ask for money to do a thorough review or make wholesale changes. Plus, when you first start, you can say things like “You guys were the ones who screwed this up, so don’t complain to me if we have to start it over and lose YoY data…You should have thought of that before you implemented it incorrectly or with shoddy data quality!” (You can choose to make that sound less antagonistic if you’d like!)

Avoiding a Repeat in the Future

One last point related to this topic. If you are lucky enough to be able to clean things up, reconfigure your report suites and improve your data quality, please make sure that you don’t ever have to do that again! As you will see, it is a lot of work. Therefore, afterwards, you want to put processes in place to ensure you don’t have to do it again in a few years! To do this, I suggest that you reduce the number of Adobe Analytics Administrators to only 1-2 people, even if you are part of a large organization. Adding new variables to your implementation should be a somewhat rare occurrence and by limiting administration access to a select few, you can be sure that all new variables are added to the correct variable slots. I recommend doing this through a digital analytics “center of excellence,” the setup of which is another one of the services that I provide for my clients. As they say, “an ounce of prevention is worth a pound of cure!”

Adobe Analytics, Featured

Cart Conversion by Product Price

Back in 2010, I wrote about a way to see how much money website visitors were adding to the shopping cart so that amount could be compared to the amount that was actually purchased. The post also showed how you could see this “money left on the table” by product and product category. Recently, however, I had a client ask a similar question, but one focused on whether the product price was possibly a barrier to cart conversion. Specifically, the question was whether visitors who add products priced between $50 and $100 to the cart end up purchasing more or less than those adding products in a different price range. While there are some ways to get to this information using the implementation approach I blogged about in the preceding post, in this post, I will share a more straightforward way to answer this question.

Capturing Add to Cart Value

In the preceding post, to see money left on the table, I suggested that the amount of the product being added to the cart be passed to a new currency Success Event. But to answer our new question, you will want to augment that by passing the dollar amount to a Product Syntax Merchandising eVar when visitors add a product to the cart. For example, if a visitor adds product # 111 to the cart and its price is $100, the syntax would look like this:

s.events="scAdd,event10"; // standard Cart Addition event plus the custom currency event for cart value
s.products=";111;;;event10=100;eVar30=100"; // binds the rounded price ($100) to product 111 via event10 and Merchandising eVar30

In this case, the $100 cart value is being passed to both the currency Success Event and the Merchandising eVar (I suggest rounding the dollar amount to the nearest dollar to minimize SAINT Classifications later). Both of these amounts are “bound” to the product number (111 in this example).

Once this is done and repeated for all of your visitors, you can use the new Merchandising eVar to see Cart Additions by price of item added to cart using a report like this:

[Screenshot: Cart Additions by product price (Merchandising eVar)]

Since the new Merchandising eVar has been bound to the product added to the shopping cart, if the visitor purchases the product prior to the eVar’s expiration (normally the purchase event), the eVar value will be applied to the purchase event as well for those products ultimately purchased. Therefore, when orders and revenue are set on the order summary page, each order will be “bound” to the product price value such that you can see a report that looks like this:

[Screenshot: Orders and Revenue by product price (Merchandising eVar)]
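
For clarity, no special tagging is needed on the order side for this to work. A standard purchase call like the following sketch (quantity and price illustrative) is enough, because eVar30 was already bound to product 111 at cart add:

s.events="purchase";
s.products=";111;1;100"; // the bound eVar30 value of 100 gets credit for this Order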

Using this report, you can see how each price point performs with respect to cart to order conversion. Since you will have many price points, you will likely want to use SAINT Classifications to group your price points into larger buckets or ranges to make the data more readable:

[Screenshot: price points classified into larger ranges via SAINT]

Once you have this, you can switch to the trended view of the report and see how each price range converts over time. Of course, you can break this down by product or external campaign code to see what factors result in the conversion rate being higher or lower than your standard cart conversion rate (Orders/Cart Additions). This analysis can be used in conjunction with my competitor pricing concept to see which products you should emphasize and de-emphasize on your online store. You can also use this new eVar in segments if you ever want to isolate cases in which a specific product price range was added to the cart or purchased.

As you can see, there are lots of handy uses for this implementation concept, so if you have a shopping cart, you may want to try it out and see what creative ways you can exploit it to further your analysis capabilities.

Adobe Analytics, Featured

Average Internal Search Position Clicked

A few years ago, I wrote an extensive post describing how to track internal search position clicks to see which internal search positions visitors tend to click on. That post showed how to track impressions and clicks for internal search positions and how to view this by search phrase. Recently, however, I had a client ask for something tangentially related to this. This client was interested in seeing the overall average search position clicked when visitors search on their website and for each search term. While the preceding post provides a way to see the distribution of clicks on internal search spots, it didn’t provide a straightforward way to calculate the overall average. Therefore in this post, I will share a way to do this for those who want to see a trended view of how far down the search results list your visitors are going.

Calculating the Average Internal Search Position Clicked

The key difference between calculating the average internal search position clicked and what I described in my previous post is that you need to switch from using a dimension (eVar) to using a metric (Success Event). To compute the average search position, the formula we eventually need divides the sum of the position numbers clicked by the total number of search result clicks. For example, if I conduct a search and click on the 10th result and then another search and click on the 5th result, I have clicked on an aggregate of 15 internal search positions (10+5) and had 2 search clicks. Dividing these two numbers shows that my average search position clicked is 7.5 (15/2). Apply the same approach across all of your visitors and you can calculate the overall average internal search position.

From an implementation standpoint, this is relatively easy. If you have internal search on your site, you are probably already setting a metric (Success Event) on the search results page to determine how often searches are taking place. If you followed my advice in this post, you would also be setting a second metric when visitors click on a result in your search result list. Therefore, the only piece you are missing is a metric that quantifies the position number clicked. To do this in Adobe Analytics you would set a new Numeric (or Counter if on latest code) Success Event with the number value of the position clicked (let’s call this Search Position). For example, if a visitor conducts a search and clicks on the 5th position, you would pass a value of “5” to the new success event. This will create the numerator needed to calculate the average search position.
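
As a minimal sketch (the event numbers are illustrative – event20 as the existing Search Clicks event and event21 as the new Search Position event), the click tracking might look like this:

// Called from the search result link's click handler
var position = 5; // position of the result the visitor clicked
s.linkTrackVars = "events";
s.linkTrackEvents = "event20,event21";
s.events = "event20,event21=" + position; // the numeric event carries the position value
s.tl(this, "o", "Internal Search Result Click");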

Once you have done this, you have the two metrics you need to calculate the average – search position numbers and the number of search clicks. Simply create a new Calculated Metric that divides the Search Position by the # of Search Clicks to compute the average as shown here:

[Screenshot: Average Search Position calculated metric definition]

This will produce a metric report like this:

[Screenshot: trended Average Search Position report]

Average Search Position by Search Phrase

Since you are most likely already capturing the search phrase when visitors search, you can also view this new Calculated Metric by search phrase, by simply adding it to the dimension (eVar) report:

[Screenshot: Average Search Position by search phrase]

This report and the preceding ones can be used to watch your search result clicks overall and by search phrase. This may help you determine if your search results are meeting the needs of your users and whether you even need to have pages and pages of search results.

One fun way I have used this type of analysis is to take my top search phrases and hand-pick specific links (recommended links) that I want visitors to go to for those phrases. Then you can see if your users prefer the organic results or the ones you have picked for them (using a new eVar!). Another way to use this analysis is to see if changes made to your internal search make the average search position clicked go up or down. Regardless of how you use it, if you are going to have internal search on your site, you may as well track it appropriately. Enjoy!

Adobe Analytics, Featured

Using Cohort Analysis in Adobe Analytics

With the latest release of Adobe Analytics, the Analysis Workspace interface now provides a way to conduct cohort analyses. The new Cohort Analysis borrows from an existing one that Adobe had previously made available for mobile implementations, but now it is available for use everywhere and with everything that you have in your Adobe Analytics implementation. In this post, I will provide a quick “how to” since I have been surprised by how few of my Adobe customers are aware of this new functionality.

Cohort Analysis Revisited

A cohort analysis is used when you want to isolate a specific event and then see how often the same folks completing that event go on to complete a future event. In the past decade, cohort analyses became popular thanks to social networking tools, where they were used to judge the “stickiness” of these new services. For example, in the early days of Twitter, people would look to see how often users who tweeted in January were still tweeting in February. In this case, the number of people who tweeted in February was a separate number from those who tweeted in January and then in February, with the latter being the “cohorts.” For more information on this topic, check out the Wikipedia page here.

New Cohort Analysis Visualization

Once you are comfortable with cohort analysis as a concept, let’s look at how you can create cohort analyses in the new Adobe Analytics interface. To start, use the left navigation to access the Analysis Workspace feature of the product (note that if you are on an older version of Adobe Analytics, you may not have Analysis Workspace enabled):

[Screenshot: accessing Analysis Workspace from the left navigation]

In the Analysis Workspace area, you will click the visualizations tab to see all of the potential visualizations:

[Screenshot: visualizations tab in Analysis Workspace]

From here, you will drag the “Cohort Table” visualization over to your reporting canvas and should see this:

[Screenshot: empty Cohort Table visualization on the canvas]

At this point, you need to select your timeframe/granularity (i.e. Month, Week, Day) using the drop-down box and then drag over the metric you want visitors to have performed to be included in the cohort. This is done by clicking on the components tab at the top-left:

[Screenshot: components tab showing available metrics]

Keep in mind that you cannot use calculated metrics and some other out-of-the-box metrics as inclusion metrics, but you can use any of your raw success events. Also, if you click on the “Metrics” link, you can see all metrics and apply a search filter, which is very handy. When contemplating your inclusion metric, think about what actions you want visitors to take to be included in the cohort. For example, if you are looking for people who have ordered, you would use the Orders metric, but if you are interested in people who have viewed content on your site, you might use a Content Views success event. As an example, let’s use the latter and build a cohort of visitors who have viewed content on the site and see how many of those visitors come back to the website within x number of days. To do this, we would change the granularity to days, add a Content Views metric as the inclusion metric and then add the Visits metric as the return metric so the cohort analysis looks like this:

[Screenshot: Cohort Table configured with Content Views (inclusion) and Visits (return)]

You may also notice that you have the option to increase the number of times each metric has to occur before people are added to the inclusion or return portion of the cohort. By default the number is one, meaning that the above cohort is looking for cases in which one or more Content Views took place and then one or more return Visits took place. To narrow down the cohort, we could easily increase these numbers to require visitors to have viewed more content to be included in the cohort or to have returned to the site more than once. But in this example, we’ll keep these set to one and run the report to see this:

[Screenshot: daily cohort table results]

Here we can see that on November 3rd, we had 6,964 unique visitors who had Content Views and that of those who viewed content on that day, 13% (892 Visitors) returned to the site within one day (had a return Visit). Keep in mind that all numbers shown in cohort analyses are unique visitor counts. The color shading shows the intensity of the cohort relative to the other cohort cells. By looking horizontally, you can see the drop-off by day for each cohort starting date and as you look vertically, the days will follow a cascading pattern with the newest starting dates having the fewest return dates like this:

[Screenshot: cascading daily cohort pattern]

Changing the granularity from Day to Week would work the same way, but would have far fewer cohorts unless you extend your timeframe:

[Screenshot: weekly granularity Cohort Table]

Here is an example in which I have made both the inclusion and return metric the same thing (Content Views), but made viewing two pieces of content required to be eligible for the return cohort:

[Screenshot: Cohort Table requiring two return Content Views]

Here you will notice that requiring two return content views reduced the first (Nov 3rd) cohort from 13% down to 9%. You can use these settings to identify interesting patterns. Since you can also make as many cohorts as you want using all of your success events, the amount of information you can glean is enormous.

Putting Cohorts To Use

Once you learn how to generate cohort analyses, you may ask yourself “Ok, now what do I do with these?” That is a valid question. While a blog post isn’t the best venue for sharing all you can do with cohort analyses, let me share a couple ways I would suggest you use them. The first way is to apply segments to your cohorts. For example, you may want to determine if visitors from a specific region perform better than another, or if those using your responsive design pages are more likely to return. Here is an example in which the previous cohort is segmented for Microsoft browsers to see if that makes the cohort better or worse:

[Screenshot: cohort segmented by Microsoft browsers]

In this case, our Nov 3rd cohort went from 13% to 8% just based upon browser. Since you probably have many segments, this provides more ways you can slice and dice these cohorts and adding a segment is as easy as dropping it into the top of your Analysis Workspace page like this:

[Screenshot: dropping a segment onto the top of the Analysis Workspace panel]

Keep in mind that any segment you apply will be applied to both the inclusion and return criteria. So in the preceding scenario, by adding a Microsoft Browser segment, the inclusion visitor count only includes those visitors who had a Content View event and used a Microsoft browser and the return visits also had to be from a Microsoft browser.

But my favorite use for cohorts is using a semi-hidden feature in the report. If you have a particular cohort cell (or multiple cells) that you are interested in, you can right-click on it and create a brand new segment just for that cohort! For example, let’s say we look at our original content to return visit cohort:

[Screenshot: daily cohort table results]

Now, let’s say something looks suspicious about the Nov 3rd – Day 4 cohort, which is at 3% (top-right cell). We can right-click on it to see this:

[Screenshot: right-click option to create a segment from a cohort cell]

Then clicking will show us the following pre-defined segment in the segment builder:

[Screenshot: pre-defined segment in the segment builder]

Now you can name and save this segment and use it in any analysis that you may need in the future!  You can also make changes to it if you desire before saving.

While there is much more you can do with cohorts, this should be enough for you to get started and begin playing around with them. Enjoy!

Adobe Analytics, Featured

Sharing Calculated Metrics in Adobe Analytics

Over the past year, Adobe Analytics users have noticed that the product has moved to a different model for accessing, editing and creating analytics components such as Calculated Metrics, Segments, etc. In this post, I want to touch upon one aspect that has changed a bit – the sharing of calculated metrics.

 

Sharing Calculated Metrics – The Old Way

In the older interface of Adobe Analytics (pre version 15.x), it was common to create a calculated metric, then select multiple report suites and apply that calculated metric to multiple suites. For example, if you wanted to create a Null Search ratio, you would create the formula and then select your report suites and save it. Here is an example in which a few calculated metrics have been applied to thirteen report suites:

[Screenshot: calculated metrics applied to thirteen report suites]

This approach would save you the work of creating the metric thirteen separate times, which could be a real pain, especially if you had hundreds of report suites.

However, employing this [old] preferred approach of sharing calculated metrics can actually make things a bit confusing when you switch over to the new version of Adobe Analytics. When using the new Calculated Metrics manager, the old approach will cause you to see the same calculated metric multiple times, since the manager shows all calculated metrics for all report suites in the same window. Here is how the same calculated metric looks in the more updated version:

[Screenshot: the same calculated metric duplicated in the new Calculated Metrics manager]

In this case, you would see the same metric for as many report suites as it was associated with in your implementation. While you could keep all of these different versions, doing so presents the following potential risks:

  1. It can be confusing to novice end-users
  2. If someone makes a change to one of the calculated metrics (in one report suite), it can deviate from the others, so that you lose integrity of the metric across your implementation/organization
  3. If you want to make a change to a calculated metric in the future, you have to do it multiple times

In addition to these risks, in the newest version of Adobe Analytics, there are some cool new ways to share metrics that don’t require this duplication of the same metric hundreds of times.

Sharing Calculated Metrics – The New Way

If you were to make a new calculated metric now, using the latest version of Adobe Analytics, you could create the metric once and simply share it to all users or groups of users. Once you have created your metric, you use the share feature and select “All” as shown here:

Screen Shot 2015-11-16 at 9.23.00 AM

Doing this allows you to see the calculated metric in every report suite, without having multiple versions of it. As shown here, you will still see the calculated metric when you click “Show Metrics” from within the Adobe Analytics interface:

Test

Therefore, if you have twenty calculated metrics across fifty report suites, you would have twenty rows in your calculated metric manager instead of one thousand! This makes your life as an administrator much easier in the future.

Moving From The Old to the New

So what if you already have a lot of metrics and they are shown multiple times in your calculated metrics manager? If you decide you want to trim things down and go to the newer approach, you would want to do the following:

  1. I suggest creating a corporate login as outlined in this blog post. This is a centralized admin login that the core analytics team maintains
  2. Review all of your shared bookmarks and dashboards to find all cases in which calculated metrics you are about to remove are used
  3. Copy each of the existing calculated metrics using the corporate login ID (described in step 1) and share it with all or designated users
  4. Once this is done, you can delete all of the duplicate versions of the calculated metric
  5. Go back to the shared bookmarks and dashboards that use the old versions of the calculated metrics and replace those with the newly created shared versions

While this may take some time up front, it will pay off later by minimizing the number of calculated metrics you have to maintain. I also find that it is beneficial to periodically review all of your shared reports and calculated metrics and do a clean-up. This migration forces you to do exactly that, and you may be amazed how many you have, how many you can remove and how many are wrong!

Adobe Analytics, Featured

Creating Weighted Metrics Using the Percentile Function

When using Adobe Analytics, one thing that has historically been a bit annoying is that when you sort by a calculated metric, you often see really high percentages for rows that have very little data. For example, if you create a click-through rate metric or a bounce rate metric and sort by it, you may see items at 100% float to the top, but when you look at the raw instances, the volume is so low that it is insignificant. Here is an example in which you may be capturing onsite searches by term (search criteria) and clicks on search results for the same term (as outlined in this post):

Screen Shot 2015-10-24 at 12.51.07 PM

In this case, it is interesting to see that certain terms have a highly disproportionate number of clicks per search, but if each is searched only once, it isn't statistically relevant. To get around this, Adobe Analytics customers have had to export all of the data to Microsoft Excel, re-create the calculated metrics, delete all rows with fewer than x items (searches in this case) and then sort by the click-through rate. What a pain! That is a lot of extra steps!

But now, thanks to the new Derived Metrics feature of Adobe Analytics, this is no longer required. It is now possible to use more complex functions and formulas to narrow down your data such that you can sort by a calculated metric and only see the cases where you have a higher volume of instances. In this post, I will demonstrate exactly how this is done.

Using the Percentile Function

The key to sorting on a calculated metric is the use of the new PERCENTILE function in Adobe Analytics. This function allows you to choose a percentile within a list of values and use that in a formula. To illustrate this, I will continue the onsite search example from above. While the click-through rate formula used above is accurate, we want to create a report that only shows the click-through rate when search criteria have at least x number of searches. However, since the number of unique search criteria will vary greatly, we cannot simply pick a fixed number (like 50 or more), because we don't know how many searches will be performed for the chosen date range. For example, one of your users may choose one day, in which more than 50 searches is unlikely, while another user may choose a full year, in which case more than 50 searches will match a huge number of items with a very long tail. To deal with all scenarios, you can use the PERCENTILE function, which will look at all of the rows for the selected date range and allow you to calculate the xth percentile of that list. Hence, the threshold adapts to the chosen date range: the number of searches a row needs before it shows up in the calculated metric scales with the data. Since this can be a bit confusing, let's look at an example:

To start, you can build a new calculated metric that shows you what the PERCENTILE formula will return at a specific percentile. To do this, open the Calculated Metrics builder and make the following formula:

Screen Shot 2015-10-24 at 1.04.27 PM

Since there may be a LOT of unique values for search criteria, I am starting off with a very high percentile (99.5) to see how many searches it takes to be in the 99.5th percentile. This is done by selecting the core metric (Searches in this case) and then making the "k" value 99.5 (Note: you can also figure out the correct "k" value by using the PERCENTILE function in Microsoft Excel with the same data if you find that easier). Once you are done, save this formula and add it to your search criteria report so you see this:

Screen Shot 2015-10-24 at 1.08.04 PM

This formula will have the same value for every row, but that is ok since we are only using it temporarily to figure out if 99.5 is the right value. In this case, what we see is that at the 99.5th percentile, anything with over 18 searches will show us the search click-through rate and anything below 18 searches will not. Now it is up to you to make your judgment call. Is 18 too high in this case? Too low? If you want to raise it, simply raise the "k" value in the formula to 99.8 or something similar.

While doing this, keep in mind that changing your date range (say, choosing just one day) will change the results as well. The above report is for 30 days of data, but look what happens when we change this to just one day of data:

Screen Shot 2015-10-24 at 1.13.14 PM

As you can see, the threshold changed from 18 to 6, but the number of overall searches also went down, so the 99.5th percentile seems to be doing its job!

Once you have determined your ideal percentile "k" value, it is time to use this formula in the overall click-through rate formula. To do this, you need to create an IF statement and use a GREATER THAN function as well. The goal is to tell Adobe Analytics that you want it to show you the search click-through rate only in cases where the number of searches is greater than the 99.5th percentile. In all other cases, you want to set the value to "0" so that when you sort in descending order, you don't have crazy percentages from low-volume rows showing up at the top. Here is what the formula will look like:

Screen Shot 2015-10-24 at 1.18.22 PM

While this may look a bit intimidating at first, if you look at its individual components, all it is really doing is calculating a click-through rate only when the number of searches is above our chosen threshold. Now you can add this metric to your report and see the results:

Screen Shot 2015-10-24 at 1.22.26 PM

As you can see, this new calculated metric is no different from the existing one in cases where the number of searches is greater than the 99.5% threshold. But look what happens when we sort by this new Weighted Click-Through Rate metric:

Screen Shot 2015-10-24 at 1.24.22 PM

Unlike the report shown at the beginning of this post, we don’t see super high percentages for items with low numbers of searches. All of the results are above the threshold, which makes this report much more actionable. If you want, you can verify this by paging down to the point where our new weighted metric is 0% (in this case when searches are under 18):

Screen Shot 2015-10-24 at 1.28.11 PM

Here you can see that searches are less than 18 and that the previous click-through rate metric is still calculating, but our new metric has hard-coded these values to “0” for sorting purposes.
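
To make the underlying logic concrete, here is a small JavaScript sketch of what the weighted metric is effectively doing (the data, the variable names and the linear-interpolation percentile method are illustrative assumptions on my part, not Adobe's internal implementation):

function percentile(values, k) {
  // Linear interpolation between closest ranks (mirrors Excel's PERCENTILE)
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var idx = (k / 100) * (sorted.length - 1);
  var lo = Math.floor(idx);
  var hi = Math.ceil(idx);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (idx - lo);
}

var rows = [
  { term: "red shoes", searches: 412, clicks: 97 },
  { term: "blue hat", searches: 25, clicks: 6 },
  { term: "xyzzy", searches: 1, clicks: 1 } // 100% CTR, but insignificant
];

// The threshold is the 99.5th percentile of Searches for the chosen date range
var threshold = percentile(rows.map(function (r) { return r.searches; }), 99.5);

rows.forEach(function (r) {
  // Compute the CTR only above the threshold; hard-code "0" otherwise for sorting
  r.weightedCTR = r.searches > threshold ? r.clicks / r.searches : 0;
});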

Final Thoughts

As you can see, using the PERCENTILE function can be a real time-saver. While this example is related to onsite search, the same concept can be applied to any calculated metric in your Adobe Analytics implementation. In fact, Adobe has created a similar metric for Weighted Bounce Rate that is publicly available for all customers to use for marketing campaigns. So any time you want to sort by a calculated metric without seeing rows with insignificant amounts of data, consider using this technique.


Adobe Analytics, Featured

Product Finding Methods

Product Finding Methods is a topic that I have touched upon briefly in past blog posts, but not in great detail. Some others have also talked about it, but in a quick Google search, the most relevant post I found on the subject was this one from back in 2008. Therefore, in this post, I thought I would explore the topic and how it can be applied to both eCommerce and non-eCommerce sites.

What is Product Finding Methods?

I define Product Finding Methods as the ways that website/app visitors find the products they ultimately purchase. For example, if a visitor comes to your website, conducts a search and then finds a product they like, search would be the Product Finding Method associated with that product. Most websites have about 5-10 different Product Finding Methods, such as:

  • Internal Search
  • Navigation/Browsing
  • Internal Campaigns/Promos
  • Wishlist/Favorites/Registries
  • Collections
  • Cross-Sell
  • Campaign Landing Pages

These are usually the main tools that you use to drive visitors to products, with the goal of getting them to add items to their online shopping cart. Having a Product Finding Methods report is useful when you want to see a holistic view of how visitors are getting to your products, but it can also be used to see how each product or product category is found. In most cases, the KPI that you care about for Product Finding Methods is Orders or Revenue because, while it may be interesting to see which methods get visitors to add items to the shopping cart, you make money when they order products and pay you!

Why Implement Product Finding Methods?

So why should you care about Product Finding Methods? Most organizations implementing this do so to identify which method is most successful at driving revenue. Since websites can be tweaked to push visitors to one finding method over another, if you have one that works better than another, you can work with your design team to either fix the lagging one or push people to the better one. For example, if your internal search functionality rarely produces orders, there may be something inherently wrong with it. Even if you track the internal search click-through rate and it looks acceptable, without the Product Finding Methods report you may not know that those clicking on results are not ultimately buying. The same may be true for your internal promotions and other methods. I once had a client that devoted large swaths of their product pages to product cross-sell, but never had a Product Finding Methods report to show them that cross-sell wasn't working and that they were just wasting space.

Another use of Product Finding Methods is to see if there are specific products or product categories that lend themselves to specific finding methods. For example, you may have products that are “impulse” buys that do very well when spotlighted in a promotion on the home page, but don’t do so well when found in search results (or vice versa). Having this information allows you to be more strategic in what you display in internal promotions. By creating a “Look to Book” Calculated Metric (Orders/Product Views) within a Product Finding Methods report, it is easy to see which products do well/poorly for each finding method.

Finally, once you have implemented Product Finding Methods, you can use them in Segments. If you have a need to see all visits or visitors who have used both Internal Search and a Registry to find products, you can do this easily by selecting those two methods in the Segment builder. Without Product Finding Methods implemented, creating a viable segment would be very cumbersome, likely involving a massive number of Page-based containers.

How Do You Implement Product Finding Methods?

So let’s say that you are intrigued and want to see how visitors are finding your products. How would you implement this in Adobe Analytics?

Obviously, if you are looking to break down Orders and Revenue (which are Success Events) by a dimension, you are going to need a Conversion Variable (eVar). This eVar would capture the most recent Product Finding Method that the user interacted with, regardless of whether that Finding Method led directly or indirectly to a product. To do this, you would work with your developers to identify all of your Product Finding Methods and determine when each should be passed to the Product Finding Methods eVar. For example, the "Internal Search" product finding method would always be set on the search results page.

However, before we go any further, we need to talk about Product Merchandising. If you are not familiar with the Product Merchandising feature of Adobe Analytics, I suggest that you read my post on that now. So why can we not use a traditional eVar to capture the Product Finding Method? The following example will illustrate why. Imagine that Joe visits our site and clicks on a home page promotional campaign, so you have your developer set the Product Finding Methods eVar with a value of "Internal Campaign." Then Joe finds a great product and adds it to the shopping cart. Next, Joe does a search on our site, so you have your developer set the Product Finding Methods eVar with a value of "Internal Search." Joe proceeds to add another product to the shopping cart and then checks out and purchases both products. In this scenario, which Product Finding Method will get credit for each product? With a traditional eVar, "Internal Search" would get credit for both products, because it was the "most recent" value prior to the Purchase Success Event firing. Unfortunately, that is not correct: product #1 should have "Internal Campaign" as its finding method and product #2 should have "Internal Search." Because we need each product to have its own eVar value, we need the Product Merchandising feature, which allows us to do exactly that.

For Product Finding Methods, it is common to use the “Conversion Syntax” methodology of Product Merchandising since we often don’t know when the visitor will ultimately get to a product or which product it will be. By using the “Conversion Syntax” method, we can simply set the value when the Product Finding Method occurs and have it persist until the visitor engages in some action that tells us which product they are interested in (the “binding” Success Events). Normally, I recommend that you “bind” (or associate the Product Finding Method with the product) when visitors view the Product Detail Page, perform a Cart Addition, engage with a Product Quick View or other similar product-related actions. These can be configured in the Administration Console as needed.
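
To make this concrete, here is a rough sketch of what the tagging might look like (the eVar number, values and product ID are hypothetical; the binding events themselves are configured in the Administration Console rather than in code):

// On the search results page: set the finding method. With Conversion
// Syntax merchandising enabled, this value persists until a binding
// Success Event fires.
s.eVar5 = "internal search";

// Later, on a product detail page: firing a binding event (e.g. prodView,
// if configured as such) associates the persisted eVar5 value with this
// specific product.
s.products = ";prod12345";
s.events = "prodView";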

Once you have navigated the murky waters of Product Merchandising, set up your Product Finding Methods eVar and worked with your developers to pass the values to the eVar, you will see a report that looks something like this:

Product Finding Method


From this report, you can then use breakdowns to break down each Product Finding Method by Product ID:

Product Finding Methods Breakdown

If you have SAINT Classifications set up for your Products Variable, you can see the same reports for things like Product Category, Product Name, Product Type, etc… All of these reports are bi-directional, so if you wanted to see the most popular Product Finding Methods for a specific product, all you would do is open the Products report and then break it down by the Product Finding Method eVar.

What About Deep-Links?

One question you may be asking yourself is: "What happens if visitors deep-link directly to a product page on my website and there is no Product Finding Method?" Great question! If you don't account for this scenario, you would see a "None" row in the Product Finding Methods eVar report for those situations. In that case, the "None" row can be read as "No Product Finding Method." But one tip that I will share with you is that you can set a default value of the referring marketing channel in your Product Finding Methods eVar on the first page of the visit. If you can identify the marketing channel (Paid Search, SEO, E-mail, etc…) from which visitors arrive at your site, you can pass that channel to the Product Finding Methods eVar when the visit begins. Doing so establishes a default value so that if a product "binding" event takes place before any of your onsite product finding methods are activated, those products will be bound to your external marketing channel. This gives you more information than the "None" row, but still allows you to quantify what percent of your products have an internal vs. external product finding method. Obviously, to use this tip, you have to be able to identify the external marketing channel of each visit so that it can be passed to the eVar. I tend to do this with some basic rules in JavaScript that analyze the referrer and any campaign tracking codes I am already using. You can see a version of this in the first report above: row five is labeled "external campaign referral," and no "None" row exists in that report.
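
As an illustration, that first-page logic might look something like this sketch (the eVar number, query parameter and channel rules are all hypothetical and should mirror your own marketing channel definitions):

// Run on the first page of the visit only
var referrer = document.referrer;
var campaignMatch = window.location.search.match(/[?&]cid=([^&]+)/);

if (campaignMatch) {
  s.eVar5 = "external campaign referral";
} else if (/google\.|bing\.|yahoo\./i.test(referrer)) {
  s.eVar5 = "natural search";
} else if (referrer !== "") {
  s.eVar5 = "external referral";
}
// If nothing matches (e.g. direct traffic), leave the eVar unset and let
// the first onsite finding method populate it instead.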

Non-eCommerce Uses

So what if you don’t sell products on your website? Does that mean you cannot use Product Finding Methods? Of course not! Even if you don’t sell stuff, there are likely uses for the above implementation. For example, if you manage a research site, you probably have visitors looking for content. In that case, your content is your product and you may be storing your content ID’s in the Products variable. This means that you can capture the different methods that your visitors use to find your content and assign each Content ID with the correct Product Finding Method.

If you manage a B2B website, you have product pages, but you may not sell the actual product online. In this case, the implementation for eCommerce will work the same way, but instead of Orders and Revenue, you may make the final Success Event Leads Completed. You can also see how visitors find your product videos, pricing sheets, etc…

Similar approaches can be employed for non-profit sites, government sites and so on. If you work with a non-eCommerce site, you may just have to think a bit more creatively about what your finding methods might be, what your "products" are and which binding events make sense. As long as you understand the general concept (figuring out how visitors make their way to the stuff you care about), you will be able to find a way to use Product Finding Methods in your implementation.

If you have questions or other approaches related to this topic, feel free to leave a comment here or ping me at @adamgreco. Thanks!

Adobe Analytics, Reporting

Sharing Analytics Reports Internally

As a web analyst, one of your job functions is to share reports and data with your internal stakeholders. There are obviously many different ways to do this. Ideally, you are able to meet with stakeholders in person, share your insights (possibly using some of the great techniques espoused in this new podcast!) and make change happen. However, the reality of our profession is that there are always going to be the dreaded “scheduled reports” that either you are sending or maybe receiving on a daily, weekly or monthly basis. I recall when I worked at Salesforce.com, I often looked at the Adobe Analytics logs and saw hundreds of reports being sent to various stakeholders all the time. Unfortunately, most of these reports are sent via e-mail and end up in a virtual black hole of data. If you are like me and receive these scheduled reports, you may use e-mail rules and filters to stick them into a folder/label and never even open them! Randomly sending recurring reports is not a good thing in web analytics and a bad habit to get into.

So how do you avoid this problem? Too much data can cause your users to tune out altogether, which will hurt your analytics program in the long run. Too little data and your analytics program may lose momentum. While there is no perfect answer, I will share some of the things that I have seen work and some ideas I am contemplating for the future. For these, I will use Adobe Analytics examples, but most should be agnostic of web analytics tool.

Option #1 – Be A Report Traffic Cop

One approach is to manually manage how much information your stakeholders are receiving. To do this, you would use your analytics tool to see just how many and which reports are actually being sent by your users. In Adobe Analytics, Administrators can see all scheduled reports under the "Components" area as shown here:

Report List

Here we can see that there are a lot of reports being sent (though fewer than at many other companies I have seen!). You can also see that many of them have errors, so those may be ones to address immediately. In many cases, report errors will be due to people leaving your company. Some of these issues can be addressed in Adobe by using Publishing Lists, which allow you to easily update e-mail addresses when people leave and new people are hired, without having to manually edit the report-specific distribution list.

Depending upon your relationship with your users, you may now be in a position to talk to the folks sending these reports to verify that they are still needed. I often find that a lot of these can be easily removed, since they were scheduled a long time ago and the area they address is no longer relevant.

Another suggestion is to consider creating a report catalog. I have worked with some companies to create an Excel matrix of who at the company is receiving each recurring report, which provides a sense of how often your key stakeholders are being bombarded. If you head up the analytics program, you may want to limit the reports your key stakeholders receive to the most critical ones, so you maximize the time they spend looking at your data. This is similar to how e-mail marketers try to limit how many e-mails the same person receives from the entire organization.

Option #2 – Use Collaboration Tools Instead of E-mail

Unless you have been living under a rock lately, you may have heard that intra-company collaboration tools are making a big comeback. While Lotus Notes may have been the Groupware king of the '90s, tools like Chatter, Yammer, HipChat and Slack are changing the way people communicate within organizations. Instead of receiving siloed e-mails, more and more organizations are moving to a shared model where information flows into a central repository and you subscribe or are notified when content you are interested in appears. As those of you who read my "thesis" on the Slack product know, I am bullish on that technology in particular (since we use it at Analytics Demystified).

So how can you leverage these newer technologies in the area of web analytics? It is pretty easy actually. Most of these tools have hooks into other applications. This means that you can either directly or indirectly share data and reports with these collaboration tools in a way that is similar to e-mail. Instead of sending a report to Bill, Steve and Jill, you would instead send the report to a central location where Bill, Steve and Jill have access and already go to get information and collaborate with each other. The benefit of doing this is that you avoid long threaded e-mail conversations that waste time and are very linear. The newer collaboration tools are more dynamic and allow folks to jump in and comment and have a more tangible discussion. Instead of reports going to a black hole, they become a temporary focal point for an internal discussion board, which brings with it the possibility (no guarantee) of real collaboration.

Let’s look at how this might work. Let’s assume your organization uses a collaboration tool like Slack. You would begin by creating a new “channel” for analytics reports or you could simply use an existing one that your desired audience is already using. In this example, I will create a new one, just for illustration purposes:

New Channel

Next, you would enable this new channel to receive e-mails from external systems. Here is an example of creating an e-mail alias for the above channel:

Alias


Next, instead of sending e-mails to individuals from your analytics tool, you can send them to this shared space using the above e-mail address alias:

Screen Shot 2015-08-27 at 9.40.28 AM

The next time this scheduled report runs, it will post to the shared channel:

Posted

Now you and your peers can [hopefully] collaborate on the report, add context and take action:

Reaction

Final Thoughts

These are just a few ideas/tips to consider when it comes to sharing recurring/scheduled reports with your internal stakeholders. I am sure there are many other creative best practices out there. At the end of the day, the key is to minimize how often you are overwhelming your constituents with these types of repetitive reports, since the fun part of analytics is when you get to actually interpret the data and provide insights directly.

Adobe Analytics, Excel Tips, Featured

Working with Variable-Row-Count Adobe Report Builder Queries

I use Adobe Report Builder a lot. It’s getting to the point where I have to periodically reassure my wife that my relationship with the tool is purely platonic.

One of the situations I often run into with the tool is that I have a query built that will return a variable number of rows, and I then want to have a pivot table that references the data returned from that query. For instance, if I want to put start/end dates for the query in a couple of cells in Excel, and then plot time-series data, the number of rows returned will vary based on the specific start and end dates entered. This can present some challenges when it comes to getting from a raw query to a clean visualization of the returned data. Fortunately, with some crafty use of COUNTA(), pivot tables, and named ranges, none of these challenges are insurmountable.

The example I’m walking through below gets fairly involved, in that it works from a single Report Builder query all the way through the visualization of multiple sparklines (trends) and totals. I chose this example for that reason, even though there are many situations that only use one or two of the techniques described below. As noted at the end of the post, this entire exercise takes less than 10 minutes once you are comfortable with the approach, and the various techniques described are useful in their own right — just steroid-boosted when used in conjunction with each other.

The Example: Channel Breakdown of Orders

Let’s say that we want to look at a channel breakdown of orders (it would be easy enough to have this be a channel breakdown of visits, orders, revenue, and other metrics and still work with a single Report Builder query, but this post gets crazy enough with just a single metric). Our requirements:

  • The user (with Report Builder installed) can specify start and end dates for the report; OR the start and end dates are dynamically calculated so that the report can be scheduled and sent from within Report Builder
  • For each of the top 4 channels (by orders), we want a sparkline that shows the daily order amount
  • We want to call out the maximum and minimum daily values for orders during the period
  • We want to show the total orders (per channel) for the period

Basically, we want to show something that looks like this, but which will update correctly and cleanly regardless of the start and end dates, and regardless of which channels wind up as the top 4 channels:

Final Visualization

So, how do we do that?

A Single Report Builder Query

The Report Builder query for this is pretty easy. We just want to use Day and Last Touch Channel as dimensions and Orders as a metric. For the dates, we’ll use cells on the worksheet (not shown) designated as the start and end dates for the query. Pretty basic stuff, but it returns data that looks something like this:

Basic Report Builder Query

This query goes on a worksheet that gets hidden (or even xlVeryHidden if you want to get fancy).

A Dynamic Named Range that Covers the Results

We’re going to want to make a pivot table from the results of the query. The wrinkle is that the query will have a variable number of rows depending on the start/end dates specified. So, we can’t simply highlight the range and create a pivot table. That may work with the initial range of data, but it will not cover the full set of data if the query gets updated to return more rows (and, if the query returns fewer rows, we’ll wind up with a “(blank)” value in our pivot table, which is messy).

Working around this is a two-step process:

  1. Use the COUNTA() function to dynamically determine the number of rows in the query
  2. Define a named range that uses that dynamic value to vary the scope of the cells included

For the first step, simply enter the following formula in a cell (this can also be entered in a named range directly, but that requires including the sheet name in the column reference):

=COUNTA($A:$A)

The COUNTA() function counts the number of non-blank cells in a range. By referring to $A:$A (really, A:A would work in this case, too), we will get a count of the number of rows in the Report Builder query. If the query gets refreshed and the number of rows changes, the value in this cell will automatically update.

Now, let’s name that cell rowCount, because we’re going to want to refer to that cell when we make our main data range.

rowData Named Cell

Now, here’s where the magic really starts to happen:

  1. Select Formulas >> Name Manager
  2. Click New
  3. Let’s name the new named range rawData
  4. Enter the following formula:
    =OFFSET(Sheet1!$A$1,0,0,rowCount,3)
  5. Click OK. If you click in the formula box of the newly created range, you should see a dashed line light up around your Report Builder query.

rawData Named Range

Do you see what we did here? The OFFSET() function specifies the top left corner of the query (which will always be fixed), tells Excel to start with that cell (the "0,0" says to not move any rows or columns from that point), then specifies a height for the range equal to our count of the rows (rowCount), and a width of the range of 3, since that, too, will not vary unless we update the Report Builder query definition to add more dimensions or metrics.

IMPORTANT: Be sure to use $s to make the first parameter in the OFFSET() formula an absolute reference. There is a bug in most versions of Excel such that, if you use a non-absolute reference (i.e., Sheet1!A1), that “A1” value will pretty quickly change to some whackadoo number that is nowhere near the Report Builder data.

Make Two Pivot Tables from the Named Range

The next step is to make a couple of pivot tables using our rawData named range:

  1. Select Insert >> Pivot Table
  2. Enter rawData for the Table/Range
  3. Specify where you want the pivot table to be located (if you’re working with multiple queries, you may want to put the pivot tables on a separate worksheet, but, for this example, we’re just going to put it next to the query results)
  4. Click OK

You should now have a blank pivot table:

Blank pivot table

We're just going to use this first pivot table to sort the channels in descending order (if you want to specify the order of the channels in a fixed manner, you can skip this step), so let's just use Last Touch Marketing Channel for the rows and Orders for the values. We can then sort the pivot table descending by Sum of Orders. This sort criterion will persist with future refreshes of the table. Go ahead and remove the Grand Total while you're at it, and, if you agree that Excel's default pivot table is hideous…go ahead and change the style. Mine now looks like this:

Base Pivot Table

Tip: If your report is going to be scheduled in Report Builder, then you want to make sure the pivot table gets refreshed after the Report Builder query runs. We can (sort of) do this by right-clicking on the pivot table and selecting Pivot Table Options. Then, click on the Data tab and check the box next to Refresh data when opening the file.

Now, there are lots of different ways to tackle things from here on out. We’ve covered the basics of what prompted this post, but then I figured I might as well carry it all the way through to the visualization.

For the way I like to do this, we want another pivot table:

  1. Select the initial pivot table and copy it
  2. Paste the pivot table a few cells to the right of the initial pivot table
  3. Add Day as an additional row value, which should make the new pivot table now look something like this:

Pivot Table

This second pivot table is where we’ll be getting our data in the next step. In a lot of ways, it looks really similar to the initial raw data, but, by having it in a pivot table, we can now start using the power of GETPIVOTDATA() to dynamically access specific values.

Build a Clean Set of Data for Trending

So, we know the order we want our channels to appear in (descending by total orders). And, let’s say we just want to show the top 4 channels in our report. So, we know we want a “table” (not a true Excel table in this case) that is 5 columns wide (a Date column plus one column for each included channel). We don’t know exactly how many rows we’ll want in it, though, which introduces a little bit of messiness. Here’s one approach:

  1. To the right of our second pivot table, click in a cell and enter Date. This is the heading for the first column.
  2. In the cell immediately to the right of the Date heading, enter a cell reference for the first row in the first pivot table we created. If you simply enter "=" and then click in that cell, depending on your version of Excel, a GETPIVOTDATA() formula will appear, which we don't want. I sometimes just click in the cell immediately to the left of the cell I actually want, and then change the cell reference manually.
  3. Repeat this for three additional columns. Ultimately, you will have something that looks like this:

Column Headings

Are you clear on what we're doing here? We could just enter column headings for each channel manually, but, with this approach, if the top channels change in a future run of the report, these headings (and the data — more to come on that) will automatically update such that the four channels included are the top 4 — in descending order — by total orders from the channel.

Now, let’s enter our dates. IF the spreadsheet is such that there is a cell with the start date specified, then enter a reference to that cell in the cell immediately below the Date heading. If not, though, then we can use a similar trick to what we did with COUNTA() at the beginning of this post. That’s the approach described below:

  1. In the cell immediately below the Date heading, enter the following formula
    =MIN($A:$A)

    This formula finds the earliest date returned from the Report Builder query. If a 5-digit number gets displayed, simply select the entire column and change it to a date format.

  2. Now, in the cell immediately below that cell, enter the following formula:
    =IF(OR(N3="",N3>=MAX($A:$A)),"",N3+1)

    The N3 in this formula refers to the cell immediately above the one where the formula is being entered. Essentially, this formula just says, “Add one to the date above and put that date here,” and the OR() statement makes sure that a value is returned only if the date that would be entered is within the range of the available data.

  3. Drag the formula entered in step 2 down for as many rows as you might allow in the query. The cells the formula gets added to will be blank after the date range hits the maximum date in the raw data. This is, admittedly, a little messy, as you have to settle on a "max dates allowed" when deciding how many rows to drag this formula down.

At this point, you should have a table that looks something like the following:

Date cells

Now, we want to fill in the data for each of the channels. This simply requires getting one formula set up correctly, and then extending it across rows and columns:

  1. Click in the first cell under the first channel heading and enter an “=”
  2. Click on any (non-subtotal) value in the second pivot table created earlier. A GETPIVOTDATA() formula will appear (in Windows Excel — that won’t happen for Mac Excel, which just means you need to decipher GETPIVOTDATA() a bit, or use the formula example below and modify accordingly) that looks something like this:
    =GETPIVOTDATA("Orders",$K$2,"Day",DATE(2015,8,13),"Last Touch Marketing Channel","Direct")
  3. That’s messy! But, if you look at it, you’ll realize that all we need to do is replace the DATE() section with a reference to the Date cell for that row, and the “Direct” value with a reference to the column heading. The trick is to lock the column with a “$” for the Date reference, and lock the row for the channel reference. That will get us something like this:
    =GETPIVOTDATA("Orders",$K$2,"Day",$N3,"Last Touch Marketing Channel",O$2)

    GETPIVOTDATA

  4. Now, we only want this formula to evaluate if there’s actually data for that day, so let’s wrap it in an IF() statement that checks the Date column for a value and only performs the GETPIVOTDATA() if a date exists:
    =IF($N3="","",GETPIVOTDATA("Orders",$K$2,"Day",$N3,"Last Touch Marketing Channel",O$2))
  5. And, finally, just to be safe (and, this will come in handy if there’s a date where there is no data for the channel), let’s wrap the entire formula in an IFERROR() such that the cell will be blank if there is an error anywhere in the formula:
    =IFERROR(IF($N3="","",GETPIVOTDATA("Orders",$K$2,"Day",$N3,"Last Touch Marketing Channel",O$2)),"")
  6. Now, we’ve got a formula that we can simply extend to cover all four channel columns and all of the possible date rows:

Top Channels by Day

One More Set of Named Ranges

We're getting close to having everything we need for a dynamically updating visualization of this data. But, the last thing we need to do is define dynamic named ranges for the channel data itself.

First, we'll need to calculate how many rows of data are in the table we built in the last step. We can calculate this based on the start and end dates that were entered in our worksheet (if that's how it was set up), or we can use the same approach that we took to figure out the number of rows in our main query. For the latter, we can simply count the number of numeric cells in the Date column using the COUNT() function (COUNTA() will not work here, because it would count the cells that look blank but actually have a formula in them):

Calculating Trend Length
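
If the Date column is column N (consistent with the N3 references earlier), the cell feeding trendLength would contain something like this:

=COUNT($N:$N)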

Again, we could simply put this formula in as the definition for trendLength rather than putting the value in a cell, but it’s easier to trace it when it’s in a cell.

For the last set of named ranges, we want to define a named range for each of the four channels we’re including. Because the specific channel may vary as data refreshes, it makes sense to simply call these something like: channel1_trend, channel2_trend, channel3_trend, channel4_trend.

We again use the OFFSET() function — this time in conjunction with the trendLength value we just calculated. For each range, we know where the first cell will always be — we know the column and where the first row is — and then the OFFSET() function will let us define how tall the range is:

  1. Select Formulas >> Name Manager
  2. Click New
  3. Enter the name for the range (channel1_trend, channel2_trend, etc.)
  4. Enter a formula like the following:
    =OFFSET(Sheet1!$O$3,0,0,trendLength,1)

    Named Ranges for Trends
    The “1” at the end is the width of the range, which is only one column. This is a little different from the first range we created, which was 3 columns wide.

  5. Click OK
  6. Repeat steps 2 through 5 for each of the four channels, simply updating the cell reference in the OFFSET() function for each range ($P$3, $Q$3, etc.). (Named ranges can also be created with a macro; depending on how many I need and how involved they are, I sometimes write a macro rather than creating them one-by-one. But even creating them one-by-one is worth it, in my experience.)


Now, we’re ready to actually create our visualization of the data.

The Easy Part: Creating the Visualization

On a new worksheet, set up a basic structure (typically, I would actually have many width=1 columns, as described in this post, but, for the sake of keeping things simple here, I’m using variable-width columns).

Base Visualization

Then, it’s just a matter of filling in the rows:

  1. For the channel, enter a formula that references the first pivot table (similar to how we created the column headings for the last table we created on the background sheet)
  2. For the sparkline, select Insert >> Line and enter channel1_trend, channel2_trend, etc.
  3. For the total, use GETPIVOTDATA() to look up the total for the channel from the first pivot table — similar to what we did when looking up the daily detail for each channel:
    =GETPIVOTDATA("Orders",Sheet1!$H$2,"Last Touch Marketing Channel",B3)

    The B3 reference points to the cell with the channel name in it. Slick, right?

  4. For the maximum value, simply use the MAX() function with channel1_trend, channel2_trend, etc.:
    =MAX(channel1_trend)
  5. For the minimum value, simply use the MIN() function with channel1_trend, channel2_trend, etc.:
    =MIN(channel1_trend)

When you’re done, you should have a visual that looks something like this:

Final Visualization

Obviously, the MIN() and MAX() are just two possibilities; you could also use AVERAGE() or STDEV() or any of a range of other functions. And, there's no requirement that the trend be a sparkline. It could just as easily be a single chart with all channels on it, or individual charts for each channel.

More importantly, whenever you refresh the Report Builder query, a simple Data >> Refresh All (or a re-opening of the workbook) will refresh the visualization.

Some Parting Thoughts

Hopefully, this doesn't seem overwhelming. Once you're well-versed in the underlying mechanics, creating something like this — or something similar — can be done in less than 10 minutes. It's robust, and it is a one-time setup that not only lets the basic visualization (report, dashboard, etc.) be fully automated, but also provides an underlying structure that can be extended to quickly augment the initial report: for instance, adding an average for each channel, or even showing how the last point in the range compares to the average.

A consolidated list of the Excel functionality and concepts that were applied in this post:

  • Dynamic named ranges using COUNTA(), COUNT(), and OFFSET()
  • Using named ranges as the source for both pivot tables and sparklines
  • Using GETPIVOTDATA() with the “$” to quickly populate an entire table of data
  • Using IF() and IFERROR() to ensure values that should remain blank do remain blank

Each of these concepts is powerful in its own right. They become triply so when combined with each other!

Adobe Analytics, Featured

What Does 1,000 Success Events Really Mean?

In the last year, Adobe Analytics introduced the ability to have up to 1,000 Success Events, a pretty big jump from the previous limit of 100. As I work with clients, I see some who struggle with what having this many Success Events really means. Does it mean you can now track more stuff? At a more granular level? Should you track more? Etc… Therefore, in this post, I am going to share some of my thoughts and opinions on what having 1,000 Success Events means and doesn't mean for your Adobe Analytics implementation.

Knee Jerk Reaction

For most companies, I am seeing what I call a "knee jerk reaction" when it comes to all of the new Success Events. This reaction is to immediately track more things. But as my partner Tim Wilson blogged about, more is not necessarily better. Just because you can track something doesn't mean you should. But let's take a step back and consider why Adobe enabled more Success Events. While I cannot be 100% certain, since I am not an Adobe Analytics Product Manager, it is my hunch that the additional Success Events were added for the following reasons:

  1. It is easier to increase Success Events than other variables (like eVars) due to the processing that happens behind the scenes
  2. Success Events are key to Data Connector integrations and more clients are connecting more non web-analytics data into the Adobe Marketing Cloud
  3. Some of the Adobe product-to-product integrations use additional Success Events
  4. Having more Success Events allows you to push more metrics into the Adobe Marketing Cloud

I don’t think that Adobe was saying to itself, “our clients can now only track 100 metrics related to their websites/apps and they need to be able to track up to 1,000.”

This gets to my first big point related to the 1,000 Success Events. I don't think that companies should track additional things in Adobe Analytics just because they have more Success Events. If the data you want to collect has business benefit, then you should track it, but if you ever say to yourself, "we have 1,000 Success Events, so why not use them?" there is a good chance you are going down a bad path. For example, if you have thirty links on a page and you want to know how often each link gets clicked, I would not advocate assigning a Success Event to each of the thirty links. If you wouldn't do it when you had 100 Success Events, I would not suggest doing it just because you have more.

But there will be legitimate reasons to use these new Success Events. If your organization is doing many Data Connector integrations, the number of Success Events required can grow rapidly. If you have a global organization with 300 different sites and each wants a set of Success Events for its own purposes, you may decide to allocate a set of Success Events to each site (though you can also double up on Success Events and not send those to the Global report suite). In general, my advice is to not have a "knee jerk reaction" and change your implementation approach just because you have more Success Events.

Use Multiple Success Events vs. an eVar

Another thing that I have seen with my clients is the idea of replacing or augmenting eVars with multiple versions of Success Events. This is a bit complex, so let me try and illustrate this with an example. Imagine that one of your website KPI’s is Orders and that your stakeholder wants to see Orders by Product Category. In the past, you would set an Order (using the Purchase Event) and then use an eVar to break those Orders down by Product Category. But with 1,000 Success Events, it is theoretically possible for you to set a different Success Event for each Product Category. In this example, that would mean setting the Orders metric and at the same time setting a new custom Success Event named Electronics Orders (or Apparel Orders depending upon what is purchased). The latter would be a trend of all Electronics Orders and would not require using an eVar to generate a trend by Product Category. If you have fifty Product Categories, you could use fifty of your 1,000 Success Events to see fifty trends.
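
For illustration, the tagging for this approach might look something like the following sketch (the event number, its friendly name and the product values are hypothetical):

// Order containing an Electronics product: fire the standard purchase
// event plus a category-specific counter event (here, event101 is
// assumed to be named "Electronics Orders" in the Admin Console)
s.events = "purchase,event101";
s.products = ";tv-55in;1;499.99";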

This raises the question, is doing what I just described a good thing or a bad thing? I am sure many different people will have different opinions on that. Here are the pros and cons of this approach from my point of view:

Pros

  1. While I would not recommend removing the Product Category eVar, in theory, you could get rid of it since you have its core value represented in fifty separate Success Events. This could help companies that are running out of eVars, but it is still not something I would advocate, because you lose the great attribution benefits of eVars (especially across multiple visits).
  2. Today, it is only possible to view one metric in a trended report, so if you want to see more than just Orders for a specific Product Category (say Orders, Revenue and Units for Electronics), you can't do so using an eVar report. But if each of these metrics were tied to a separate Product Category event, you could use the Key Metrics report to get up to five metrics trended for a particular Product Category. Keep in mind, though, that you would need fifty Success Events for each metric you want to see together, which makes this approach hard to scale. Also keep in mind that you can trend as many metrics as you want using the Adobe ReportBuilder tool.
  3. You can create some cool Calculated Metrics if you have all of these additional Success Events, such as Electronics Orders divided by (Electronics Orders + Apparel Orders) that may be more difficult to produce in Adobe Analytics proper without using Derived Metrics or Adobe ReportBuilder.
  4. Having additional metrics allows you to have Participation enabled on each, which can provide more granular Participation analysis. For example, if you enable Participation on the Orders event, you can see which pages lead to Orders. But if you enable Participation on a new Electronics Orders event, you will be able to see which pages lead to orders of Electronics products. The latter is something that isn’t possible (easily) without having a separate Electronics Orders Success Event.
  5. If you want to pass Adobe Analytics data to another back-end system using a Data Feed, there could be some advantage to having a different metric (in this example for each Product Category) vs. one metric and an eVar in terms of mapping data to an external database.

Cons

  1. Setting so many Success Events can be a nightmare for your developers and a pain to maintain in the Administration Console. It may require extra time, logic, TMS data mappings and so on. In the preceding example, developers may have to write additional code to check for fifty product categories. In some cases, developers may only know the Product ID and not the category (which they had planned on being a SAINT Classification), but setting the additional Success Events forces them to write more code to get the product category. If visitors purchase products from multiple product categories, developers have to start defining rules, which makes things more complex than originally anticipated. And if product categories change (new ones are added or old ones are removed), that can mean more development work vs. simply passing different values to an eVar. While using a Tag Management System can make some of this easier, it still creates a lot more work for developers, who are normally already stretched to their limits!
  2. Having different versions of the same Success Events can be confusing to your end-users (i.e. Orders vs. Electronics Orders) and can make your entire implementation harder to navigate
  3. Employing this approach too often can force you to eventually run out of Success Events, even with the increased number available. For example, if you set Orders, Revenue, Units and Cart Additions for each Product Category, you are already looking at 200 Success Events. Setting these same events for a different dimension (eVar) would require another 200 Success Events!

While I can see some merit in the benefits listed above, my opinion is that blowing out different Success Events instead of using an eVar can have value in some targeted situations, but is not for everyone. Call me old-fashioned, but I like having a finite number of metrics and dimensions (eVars) that break them down. If there is a short list of metrics that are super critical to be seen together in the Key Metrics report and the number of times they would have to be duplicated by dimension is relatively small, then perhaps I would consider adding an extra 10-20 Success Events for each dimension value. But I see this as a bit of a slippery slope and wouldn't advocate going crazy with this concept. Perhaps my opinion will change in the future, but for now, this is where I land on the subject.

Derived Metrics

Tangentially related to this topic is the concept of Derived Metrics. Derived Metrics are the new version of Calculated Metrics in which you can add advanced formulas, functions and segments. The reason I bring these up is that Derived Metrics can be used to create multiple versions of metrics by segmenting on eVar or sProp values. For example, instead of creating fifty versions of the Orders metric as described above, you could have one Orders metric and then create fifty "Derived" metrics that use the Orders metric in combination with a segment based upon a Product Category eVar. This requires no extra development effort and can be done by any Adobe Analytics user. The end result would be similar to having fifty separate Success Events, as each can be trended and up to five can be added to the Key Metrics report. The downside of this approach is that these Derived Metrics cannot easily be fed into a Data Feed if you want to send data directly to an external database, and they won't have Participation metrics associated with them.

It is somewhat ironic that shortly after Adobe provided 1,000 Success Events, it also provided a great Derived Metrics tool that actually reduces the need for Success Events if used strategically with Segments! My advice would be to start with Derived Metrics, and if you later find that you need a native stream of data for each version of the event (i.e. for a Data Feed) or Participation, then you can hit up your developers and consider creating separate events.

Final Thoughts

So there you have some of my thoughts around the usage of 1,000 Success Events. While I think there can be some great use cases for taking advantage of this new functionality, I caution you to not let it change your approach to tracking what is valuable to your stakeholders. I am all for Adobe adding more variables (I wish the additional eVars didn’t cost more $$$!), but remember that everything should be driven by business requirements (to learn more about this check out my Adobe white paper: http://apps.enterprise.adobe.com/go/701a0000002IvLHAA0).

If you have a different opinion or approach, please leave a comment here. Thanks!

Adobe Analytics, Featured, Technical/Implementation

Engagement Scoring + Adobe Analytics Derived Metrics

Recently, I was listening to an episode of the Digital Analytics Power Hour that discussed analytics for sites that have no clear conversion goals. In this podcast, the guys brought up one of the most loaded topics in digital analytics – engagement scoring. Called by many different names (Visitor Engagement, Visitor Scoring, Engagement Scoring), the general idea is that you apply a weighted score to website/app visits by determining what you want your visitors to do and assigning a point value to each action. The goal is to see a trend over time of how your website/app is performing with these weights applied and/or to assign these scores to visitors to see how score impacts your KPI's (similar to Marketing Automation tools). I have always been interested in this topic, so I thought I'd delve into it a bit while it was fresh in my mind. And if you stick around until the end of this post, I will even show how you can do visitor scoring without doing any tagging at all using Adobe Analytics Derived Metrics!

Why Use Visitor Scoring?

If you have a website that is focused on selling things or lead generation, it is pretty easy to determine what your KPI’s should be. But if you don’t, driving engagement could actually be your main KPI. I would argue that even if you do have commerce or lead generation, engagement scoring can still be important and complement your other KPI’s. My rationale is simple. When you build a website/app, there are things you want people to do. If you are a B2B site, you want them to find your products, look at them, maybe watch videos about them, download PDF’s about them and fill out a lead form to talk to someone. Each of these actions is likely already tracked in your analytics tool, but what if you believe that some of these actions are more important than others? Is viewing a product detail page as valuable as watching a five minute product video? If you had two visitors and each did both of these actions, which would you prefer? Which do you think is more likely to be a qualified lead? Now mix in ALL of the actions you deem to be important and you can begin to see how all visitors are not created equal. And since all of these actions are taking place on the website/app, why would you NOT want to quantify and track this, regardless of what type of site you manage?

In my experience, most people do not undertake engagement scoring for one of the following reasons:

  • They don’t believe in the concept
  • They can’t (or don’t have the energy to) come up with the scoring model
  • They don’t know how to do it

In my opinion, these are bad reasons to not at least try visitor scoring. In this post, I’ll try to mitigate some of these. As always, I will show examples in Adobe Analytics (for those who don’t know me, this is why), but you should be able to leverage a lot of this in other tools as well.

The Concept

Since I am by no means the ultimate expert in visitor scoring, I am not in a position to extol all of its benefits. I have seen/heard arguments for it and against it over the years. If you Google the topic, you will find many great resources on the subject, so I encourage you to do that. For the sake of this post, my advice is to try it and see what you think. As I will show, there are some really easy ways to implement this in analytics tools, so there is not a huge risk in giving it a try.

The Model

I will admit right off the bat that there are many people out there much more advanced in statistics than I am. I am sure there are folks who can come up with many different visitor scoring models that will make mine look childish, but in the interest of trying to help, I will share a model that I have used with some success. The truth is that whatever model you create is fine, since it is for YOUR organization and not one to be compared to others. There is no universal formula that you will benchmark against. You can make yours as simple or complex as you want.

I like to use a Fibonacci-like approach when I do visitor scoring (while not truly Fibonacci, my goal is to use integers that are somewhat spaced out to draw out the differences between actions, as you will see below). I start by making a list of the actions visitors can take on my website/app and narrow it down to the ones that I truly care about and want to include in my model. Next I sort them from least valuable to most valuable. In this example, let’s assume that my sorted list is as follows:

  1. View Product Page
  2. View at least 50% of Product Video
  3. View Pricing Tab for Product
  4. Complete Lead Generation Form

Next, I will assign “1” point to the least important item on the list (in this case View Product Page). Then I will work with my team to determine how many Product Page Views they feel is equivalent to the next item on the list (in this case 50% view of Product Video). When I say equivalent, what I mean is that if we had two website visitors and one viewed at least 50% of a product video and the other just viewed a bunch of product detail pages, at what point would they consider them to be almost equal in terms of scoring? Is it four product page views or only two? Somehow, you need to get consensus on this and pick a number. If your team says that three product page views is about the same as one long product video view, then you would assign “3” points each time a product video view hits at least 50%. Next you would move on to the third item (Pricing Page in this example) and follow the same process (how many product video views equal one pricing page view?). Let’s say when we are done, the list looks like this:

  1. View Product Page (1 Point)
  2. View at least 50% of Product Video (3 Points)
  3. View Pricing Tab for Product (6 Points)
  4. Complete Lead Generation Form (15 Points)

Now you have a model that you can apply to your website/app visitors. Will it be perfect? No, but is it better than treating each action equally? If you believe in your scores, then it should be. For now, I wouldn’t over-think it. You can adjust it later if you want, but I would give it a go under the theory that “these are the main things we want people to do, and we agreed on which were more/less important than the others, so if the overall score rises, then we should be happy and if it declines, we should be concerned.”

How To Implement It

Implementing visitor scoring in Adobe Analytics is relatively painless. Once you have identified your actions and associated scores in the previous step, all you need to do is write some code or do some fancy manipulation of your Tag Management System. For example, if you are already setting success events 13, 14, 15, 16 for the actions listed above, all you need to do is pass the designated points to a numeric Success Event. This event will aggregate the scores from all visitors into one metric that you later divide by either Visits or Visitors to normalize (for varying amounts of Visits and Visitors to your site/app). This approach is well documented in this great blog post by Ben Gaines from Adobe.
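
As a minimal sketch, building on the example above (events 13–16 come from the post’s example; event20 as the numeric score event is just a placeholder I’ve chosen for illustration), the tagging might look like this:

	// lead gen form completed (worth 15 points in our model):
	// fire the existing action event plus the weighted score into the numeric event
	s.events = 'event16,event20=15';

	// product page viewed (worth 1 point)
	s.events = 'event13,event20=1';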

Here is what a Calculated Metric report might look like when you are done:

Website Engagement

Using Derived Metrics

If you don’t have development resources or you want to test out this concept before bugging your developers, I have come up with a new way that you can try this out without any development. This new approach uses the new Derived Metrics concept in Adobe Analytics. Derived Metrics are Calculated Metrics on steroids! You can do much more complex formulas than in the past and apply segments to some or all of your Calculated Metric formula. Using Derived Metrics, you can create a model like the one we discussed above, but without any tagging. Here’s how it might work:

First, we recall that we already have success events for the four key actions we care about:

Success events for the four key actions

Now we can create our new “Derived” Calculated Metric for Visitor Score. To do this, we create a formula that multiplies each action by its weight score and then sums them (it may take you some time to master the embedding of containers!). In this case, we want to multiply the number of Product Page Views by 1, the number of Video Views by 3, etc. Then we divide the sum by Visits so the entire formula looks like this:

Formula
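
In plain terms, using the example weights above, the math works out to:

	Visitor Score = ((Product Page Views × 1) + (Video Views × 3) +
	                 (Pricing Tab Views × 6) + (Lead Forms × 15)) / Visits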


Once you save this formula, you can view it in the Calculated Metrics area to see how your site is performing. The cool part of this approach is that this new Visitor Score Calculated Metric will work historically as long as you have data for the four events (in this case) that are used in the formula. The other cool part is that if you change the formula, it will change historically as well (which can also be a bad thing, so if you want to lock in your scores historically, use Ben’s approach of setting a new event). This allows you to play with the scores and see the impact of those changes.

But Wait…There’s More!

Here is one other bonus tip. Since you can now apply segments and advanced formulas to Derived Metrics, you can customize your Visitor Score metric even further. Let’s say that your team decides that if the visitor is a return visitor, all of the above scores should be multiplied by 1.5. You can use an advanced formula (in this case an IF Statement) and a Segment (1st Time Visits) to modify the formula above and make it more complex. In this case, we want to first check whether the visit is a 1st time visit and, if so, use our normal scores; if it isn’t, change the scores to be 1.5x the original scores. To do this, we add an IF statement and a segment such that when we are done, the formula might look like this (warning: this is for demo purposes only and I haven’t tested this!):

Advanced Formula
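
Conceptually (a sketch of the logic, not the exact builder syntax), the formula does something like:

	IF ( visit matches the "1st Time Visits" segment,
	     weighted sum of events / Visits,
	     (weighted sum of events × 1.5) / Visits )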

If you had more patience than I do, you could probably figure out a way to multiply the Visit Number by the static numbers to exponentially give credit if you so desired. The advanced formulas in the Derived Metric builder allow you to do almost anything you can do in Microsoft Excel, so the sky is pretty much the limit when it comes to making your Visitor Score Metric as complex as you want. Tim Elleston shows some much cooler engagement metric formulas in his post here: http://www.digitalbalance.com.au/our-blog/how-to-use-derived-metrics/

Final Thoughts

So there you have it. Some thoughts on why you may want to try visitor scoring, a few tips on how to create scores and some information on how to implement visitor scoring via tags or derived metrics. If you have any thoughts or comments, let me know at @adamgreco.

Adobe Analytics

Adobe Analytics Tips & Tricks (White Paper)

Analytics has the potential to be incredibly powerful for businesses. However, companies sometimes don’t know where to start, or how to take advantage of the capabilities of their digital analytics solutions.

From just getting started with the basics, through advanced segmentation, mobile, attribution, predictive analytics and data visualization, here are a few of my favorite tips for how to do more with your digital analytics program.


Click to download my free Analytics “Tips and Tricks” whitepaper (sponsored by Adobe.) 

Adobe Analytics, Featured

Using Adobe Analytics New ‘Calculated Metrics’ to Fix Data Inaccuracies

Few features were more hotly anticipated following Adobe Summit than the arrival of the newly expanded calculated metrics in Adobe Analytics. Within one week of its release, it is already paying off big time for one of my clients. I’m going to share a use case for how these advanced calculated metrics fixed some pretty broken revenue data.

Our example for this case study is KittenSweaters.com*, an ecommerce business struggling with their Adobe Analytics data. Over the past few months, KittenSweaters has dealt with a number of issues with their revenue data, including:

  • “Outlier” orders where the revenue recorded was grossly inflated or even negative
  • A duplicate purchase event firing prior to the order confirmation page, which double-counted revenue; and
  • Donations to their fundraiser counting as sweaters revenue, instead of in a separate event

For example, here you can see the huge outliers and negative revenue numbers they saw in their data:

calc-metrics-bad-revenue-chart

Historically, this would have required a segment to be layered onto all reports (and ensuring that all users knew to apply this segment before using the data!)

However, using the new Calculated Metrics in Adobe Analytics, KittenSweaters was able to create a corrected Revenue metric, and make it easily available to all users. Here’s how:

First, create a segment that is limited only to valid orders.

In the case of KittenSweaters, this segment only allows in orders where:

  1. The product category was “sweaters”; and
  2. The purchase was fired on the proper confirmation page; and
  3. The order was not one of the known “outlier” orders (identified by the Purchase ID)

calc-metrics-segment

You can test this segment by applying it on the current Revenue report and seeing if it fixes the issues. Historically, this would have been our only route to fix the revenue issues – layer our segment on top of the data. However, this requires all users to know about, and remember to apply, the segment.

So let’s go a step further and create our Calculated Metric (Components > Manage Calculated Metrics.)

Let’s call our new metric “Revenue (Corrected)”. To do so, drag your new segment of “Valid Sweater Orders” into the Definition of your metric, then drag the Revenue metric inside of the segment container. Now, the calculated metric will only report on Revenue where it matches that segment.

calc-metrics-calcmetricbuilder

Voila! A quick “Share” and this metric is available to all KittenSweaters.com employees.

You can use this new metric in any report by clicking “Show Metrics” and adding it to the metrics displayed:

calc-metrics-showmetrics

Now you’ll get to see the new Metrics Selector rather than the old, clunky pop-up. Select it from the list to populate your report. You can also select the default Revenue metric, to view the two side by side and see how your corrections have fixed the data.
calc-metrics-metricsselector

You can quickly see that our new Revenue metric removes the outliers and negative values we saw in the default one, by correcting the underlying data. (YAY!)

calc-metrics-revenuecomparisonchart

But why not make it even easier? Why make busy KittenSweaters employees have to manually add it to their reports? Under Admin Settings, we can update our corrected metric to be the default:
calc-metrics-setasdefault

You can even use these new calculated metrics in Report Builder! (Just be sure to download the newest version.)

It’s a happy day in the KittenSweaters office! While this doesn’t replace the need for IT to fix the underlying data, this definitely helps us more easily provide the necessary reporting to our executive team and make sure people are looking at the most accurate data.

Keep in mind one potential ‘gotcha’: If the segment underlying the calculated metric is edited, this will affect the calculated metric. This makes life easier while you’re busy building and testing your segment and calculated metric, but could have consequences if someone unknowingly edits the segment and affects the metric.

Share your cool uses of the new calculated metrics in the comments! If you haven’t had a chance to play around with them yet, check out this series of videos to learn more.

* Obviously KittenSweaters.com isn’t actually my client, but how puuuurrfect would it be if they were?!

Adobe Analytics, Featured, google analytics, Technical/Implementation

The Hard Truth About Measuring Page Load Time

Page load performance should be every company’s #1 priority with regard to its website – if your website is slow, it will affect all the KPIs that outrank it. Several years ago, I worked on a project at salesforce.com to improve page load time, starting with the homepage and all the lead capture forms you could reach from the homepage. Over the course of several months, we refactored our server-side code to run and respond faster, but my primary responsibility was to optimize the front-end JavaScript on our pages. This was in the early days of tag management, and we weren’t ready to invest in such a solution – so I began sifting through templates, compiling lists of all the 3rd-party tags that had been ignored for years, talking to marketers to find out which of those tags they still needed, and then breaking them down to their nitty-gritty details to consolidate them and move them into a single JavaScript library that would do everything we needed from a single place, but do it much faster. In essence, it was a non-productized, “mini” tag management system.

Within 24 hours of pushing the entire project live, we realized it had been a massive success. The difference was so noticeable that we could tell without having all the data to back it up – but the data eventually told us the exact same story. Our monitoring tool was telling us our homepage was loading nearly 50% faster than before, and even just looking in Adobe at our form completion rate (leads were our lifeblood), we could see a dramatic improvement. Our data proved everything we had told people – a faster website couldn’t help but get us more leads. We hadn’t added tags – we had removed them. We hadn’t engaged more vendors to help us generate traffic – we were working with exactly the same vendors as before. And in spite of some of the marketing folks being initially hesitant about taking on a project that didn’t seem to have a ton of business value, we probably did more to benefit the business than any single project during the 3 1/2 years I worked there.

Not every project will yield such dramatic results – our page load performance was poor enough that we had left ourselves a lot of low-hanging fruit. But the point is that every company should care about how their website performs. At some point, almost every client I work with asks me some variation of the following question: “How can I measure page load time with my analytics tool?” My response to this question – following a cringe – is almost always, “You really can’t – you should be using another tool for that type of analysis.” Before you stop reading because yet another tool is out of the question, note that later on in this post I’ll discuss how your analytics tool can help you with some of the basics. But I think it’s important to at least acknowledge that the basics are really all those tools are capable of.

Even after several years of hearing this question – and several enhancements both to browser technology and the analytics tools themselves – I still believe that additional tools are required for robust page load time measurement. Any company that relies on their website as a major source of revenue, leads, or even just brand awareness has to invest in the very best technologies to help that website be as efficient as possible. That means an investment not just in analytics and optimization tools, but performance and monitoring tools as well. At salesforce.com, we used Gomez – but there are plenty of other good services as well that can be used on a small or large scale. Gomez and Keynote both simulate traffic to your site using any several different test criteria like your users’ location, browser, and connection speed. Other tools like SOASTA actually involve real user testing along some of the same dimensions. Any of these tools are much more robust than some of the general insight you might glean from your web analytics tool – they provide waterfall breakdowns and allow you to isolate where your problems come from and not just that they exist. You may find that your page load troubles only occur at certain times of the day or in certain parts of the world, or that they are happening in a particular leg of the journey. Maybe it’s a specific third-party tag or a JavaScript error that you can easily fix. In any case, these are the types of problems your web analytics tool will struggle to help you solve. The data provided by these additional tools is just much more actionable and helpful in identifying and solving problems.

The biggest problem I’ve found in getting companies to adopt these types of tools is often more administrative than anything. Should marketing or IT manage the tool? Typically, IT is better positioned to make use of the data and act on it to make improvements, but marketing may have a larger budget. In a lot of ways, the struggles are similar to those many of my clients encounter when selecting and implementing a tag management system. So you might find that you can take the learnings you gleaned from similar “battles” to make it easier this time. Better yet, you might even find that one team within your company already has a license you can use, or that you can team up to share the cost. However, if your company isn’t quite ready yet to leverage a dedicated tool, or you’re sorting through red tape and business processes that are slowing things down, let’s discuss some things you can do to get some basic reporting on page load time using the tools you’re already familiar with.

Anything you do within your analytics tool will likely be based on the browser’s built-in “timing” object. I’m ashamed to admit that up until recently I didn’t even realize this existed – but most browsers provide a built-in object that provides timestamps of the key milestone events of just about every part of a page’s lifecycle. The object is simply called “performance.timing” and can be accessed from any browser’s console. Here are some of the useful milestones you can choose from:

  • redirectStart and redirectEnd: If your site uses a lot of redirects, it could definitely be useful to include that in your page load time calculation. I’ve only seen these values populated in rare cases – but they’re worth considering.
  • fetchStart: This marks the time when the browser first starts the process of loading the next page.
  • requestStart: This marks the time when the browser requests the next page, either from a remote server or from its local cache.
  • responseEnd: This marks the time when the browser downloads the last byte of the page, but before the page is actually loaded into the DOM for the user.
  • domLoading: This marks the time when the browser starts loading the page into the DOM.
  • domInteractive: This marks the time when enough of the page has loaded for the user to begin interacting with it.
  • domContentLoaded: This marks the time when all HTML and CSS are parsed into the DOM. If you’re familiar with jQuery, this is basically the same as jQuery’s “ready” event (“ready” does a bit more, but it’s close enough).
  • domComplete: This marks the time when all images, iframes, and other resources are loaded into the DOM.
  • loadEventStart and loadEventEnd: These mean that the window’s “onload” event has started (and completed), and indicate that the page is finally, officially loaded.

JavaScript timing object

There are many other timestamps available as part of the “performance” object – these are only the ones that you’re most likely to be interested in. But you can see how it’s important to know which of these timestamps correspond to the different reports you may have in your analytics tool, because they mean different things. If your page load time is measured by the “loadEventEnd” event, the data probably says your site loads at least a few hundred milliseconds slower than it actually appears to your users.
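
For example, here is a quick sketch you can paste into your browser’s console to compare a couple of these milestones (each one is a timestamp in milliseconds since the epoch, so subtracting two of them yields a duration):

	var t = performance.timing;
	// how long until the page became usable?
	console.log('Time to interactive:', t.domInteractive - t.navigationStart, 'ms');
	// how long until everything, images and all, finished loading?
	console.log('Fully loaded:', t.loadEventEnd - t.navigationStart, 'ms');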

The major limitation to using JavaScript timing is exactly what you’d expect: cross-browser compatibility. While IE8 is (finally!) a dying browser, it has not historically been the only one to lack support – mobile Safari has been a laggard as well. However, as of late 2015, iOS now supports this feature. Since concern for page load time is even more important for mobile web traffic, and since iOS is still the leader in mobile traffic for most websites, this closes what has historically been a pretty big gap. When you do encounter an older browser, the only way to fill the gap accurately is to have your development team write its own timestamp as soon as the server starts building the page. Then you can create a second timestamp when your tags fire, subtract the difference, and get pretty close to what you’re looking for. This gets a bit tricky, though, if the server timezone is different than the browser timezone – you’ll need to make sure that both timestamps are always in the same timezone.
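
A rough sketch of that fallback might look like this (assuming your developers can inject a server-side timestamp, and that both timestamps are normalized to UTC milliseconds):

	<!-- as early in the page as possible, value injected server-side -->
	<script>window.serverStartTime = 1446055423000; // UTC ms when the server began building the page</script>

	<!-- later, when your tags fire -->
	<script>
		var approxLoadTime = new Date().getTime() - window.serverStartTime; // also UTC ms
	</script>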

This functionality is actually the foundation of both Adobe Analytics’ getLoadTime plugin and Google Analytics’ Site Speed reports. Both have been available for years, and I’ve been suspicious of them since I first saw them. The data they provide is generally sound, but there are a few things to be aware of if you’re going to use them – beyond just the lack of browser support I described earlier.

Adobe’s getLoadTime Plugin

Adobe calculates the start time using the most accurate value available: either the browser’s “requestStart” time or a timestamp they ask you to add to the top of the page for older browsers. This fallback timestamp is unfortunately not very accurate – it doesn’t indicate server time, it’s just the time when the browser got to that point in loading the page. That’s likely to be at least a second or two later than when the whole process started, and is going to make your page load time look artificially fast. The end time is when the tag loads – not when the DOM is ready or the page is ready for user interaction.

When the visitor’s browser is a modern one supporting built-in performance timing, the data provided by Adobe is presented as a series of numbers (in milliseconds) that the page took to “load.” That number can be classified into high-level groups, and it can be correlated to your Pages report to see which pages load fastest (or slowest). Or you can put that number into a custom event that can be used in calculated metrics to measure the average time a given page takes to load.

Adobe Analytics page load time report

Google’s Site Speed Reports

Google’s reports, on the other hand, don’t have any suspect handling of older browsers – the documentation specifically states that the reports only work for browsers that support the native performance timing object. But Google’s reports are averages based on a sampling pool of only 1% of your visitors (which can be increased) – and you can see how a single visitor making it into that small sample from a far-flung part of the world could have a dramatic impact on the data Google reports back to you. Google’s reports do have the bonus of taking into account many other timing metrics the browser collects besides just the very generic interpretation of load time that Adobe’s plugin offers.

Google Analytics page load time report

As you can see, neither tool is without its flaws – and neither is very flexible in giving you control over which time metrics their data is based on. If you’re using Adobe’s plugin, you might have some misgivings about their method of calculation – and if you’re using Google’s standard reports, that sampling has likely led you to cast a suspicious eye on those reports when you’ve used them in the past. So what do you do if you need more than that? The only real answer is to take matters into your own hands. But don’t worry – the actual code is relatively simple and can be implemented with minimal development effort, and it can be done right in your tag management system of choice. Below is a quick little code snippet you can use as a jumping-off point to capture the page load time on each page of your website using built-in JavaScript timing.

	function getPageLoadTime() {
		if (typeof performance !== 'undefined' && typeof performance.timing === 'object') {
			var timing = performance.timing;

			// pick the first non-zero milestone from each list
			// (a zero means the browser never recorded that event)
			var startTime = timing.redirectStart ||
					timing.fetchStart ||
					timing.requestStart;
			var endTime = timing.domContentLoadedEventEnd ||
					timing.domInteractive ||
					timing.domComplete ||
					timing.loadEventEnd;

			if (startTime && endTime && (startTime < endTime)) {
				return (endTime - startTime); // duration in milliseconds
			}
		}

		return 'data not available';
	}

You don’t have to use this code exactly as I’ve written it – but hopefully it shows you that you have a lot of options to do some quick page load time analysis, and you can come up with a formula that works best for your own site. You (or your developers) can build on this code pretty quickly if you want to focus on different timing events or add in some basic support for browsers that don’t support this cool functionality. And it’s flexible enough to allow you to decide whether you’ll use dimensions/variables or metrics/events to collect this data (I’d recommend both).
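
For example, here is one hypothetical way to feed the result into Adobe Analytics using both approaches (prop10 and event10 are just example slots, not a prescribed setup):

	var loadTime = getPageLoadTime();
	if (typeof loadTime === 'number') {
		// dimension: bucket the load time into half-second ranges for reporting
		s.prop10 = (Math.ceil(loadTime / 500) / 2) + ' seconds or less';
		// metric: pass the raw milliseconds to a numeric event for averaging
		s.events = 'event10=' + loadTime;
	}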

In conclusion, there are some amazing things you can do with modern browsers’ built-in JavaScript timing functionality, and you should do all you can to take advantage of what it offers – but always keep in mind that there are limitations to this approach. Even though additional tools that offer dedicated monitoring services carry an additional cost, they are equipped to encompass the entire page request lifespan and can provide much more actionable data. Analytics tools allow you to scratch the surface and identify that problems exist with your page load time – but they will always have a difficult time identifying what those problems are and how to solve them. The benefit of such tools can often be felt across many different groups within your organization – and sometimes the extra cost can be shared the same way. Page load time is an important part of any company’s digital measurement strategy – and it should involve multiple tools and collaboration within your organization.

Photo Credit: cod_gabriel (Flickr)

Adobe Analytics

Adobe’s new Marketing Cloud Visitor ID: How Does it Work?

A few months ago, I wrote a series of posts about cookies – how they are used in web analytics, and how Google and Adobe (historically) identify your web visitors. Those two topics set the stage for a discussion on Adobe’s current best practices approach for visitor identification.

You’ll remember that Adobe has historically used a cookie called “s_vi” to identify visitors to your site. This cookie is set by Adobe’s servers – meaning that by default it is third-party. Many Adobe customers have gone through the somewhat tedious process of allowing Adobe to set that cookie from one of their own subdomains, making it first-party. This is done by having your network operations team update its DNS settings to assign that subdomain to Adobe, and by purchasing (and annually maintaining) an SSL certificate for Adobe to use. If that sounds like a pain to you, you’re not alone. I remember having to go through the process when I worked at salesforce.com – because companies rightly take their websites, networks, and security seriously, what is essentially 5 minutes of actual work took almost 3 months!

So a few years back, Adobe came up with another alternative I discussed a few months ago – the “s_fid” cookie. This is a fallback visitor ID cookie set purely in JavaScript, used when a browser rejects Adobe’s cookie on a site still relying on third-party cookies. That was nice, but it wasn’t a very publicized change, and most analysts may not even know it exists. That may be because, at the time it happened, Adobe already had something better in the works.

The next change Adobe introduced – and, though it happened well over a year ago, only now am I starting to see major traction – was built on top of the Demdex product they acquired a few years ago, now known as Adobe Audience Manager (AAM). AAM is the backbone for identifying visitors using its new “Marketing Cloud” suite, and the Marketing Cloud Visitor ID service (AMCV) is the new best-practice for identifying visitors to your website. Note that you don’t need to be using Audience Manager to take advantage of the Visitor ID service – the service is available to all Adobe customers.

The really great thing about this new approach is that it represents something that Adobe customers have been hoping for for years – a single point of visitor identification. The biggest advantage a company gains in switching to this new approach is a way to finally, truly integrate some of Adobe’s most popular products. Notice that I didn’t say all Adobe products – but things are finally moving in that direction. The idea here is that if you implement the Marketing Cloud Visitor ID Service, and then upgrade to the latest code versions for tools like Analytics and Target, they’ll all be using the same visitor ID, which makes for a much smoother integration of your data, your visitor segments, and so on. One caveat is that while the AMCV has been around for almost 2 years, it’s been a slow ramp-up for companies to implement it. It’s a bit more challenging than a simple change to your s_code.js or mbox.js files. And even if you get that far, it’s then an additional challenge to migrate to the latest version of Target that is compatible with AMCV – a few of my clients that have tried doing it have hit some bumps in the road along the way. The good news is that it’s a major focus of Adobe’s product roadmap, which means those bumps in the road are getting smoothed out pretty quickly.

So, where to begin? Let’s start with the new cookie containing your Marketing Cloud Visitor ID. Unlike Adobe’s “s_vi” cookie, this cookie value is set with JavaScript and will always be first-party to your site. However, unlike Google’s visitor ID cookie, it’s not set exclusively with logic in the tracking JavaScript. When your browser loads that JavaScript, Adobe sends off an additional request to its AAM servers. What comes back is a bit of JavaScript that contains the new ID, which the browser can then use to set its own first-party cookie. But there is an extra request added in (at least on the first page load) that page-load time and performance fanatics will want to be aware of.

The other thing this extra request does is allow Adobe to set an additional third-party cookie with the same value, which it will do if the browser allows. This cookie can then be used if your site spans multiple domains, allowing you to use the same ID on each one of your sites. Adobe’s own documentation says this approach will only work if you’ve set up a first-party cookie subdomain with them (that painful process I discussed earlier). One of the reasons I’ve waited to write this post is that it took a while for a large enough client, with enough different sites, to be ready to try this approach out. After a lot of testing, I can say that it does work – but since it is based on that initial third-party cookie, it’s a bit fragile. It works best for brand-new visitors that have no Adobe cookies on any of your websites. If you test it out, you’re likely to see most visits to your websites work just like you hoped – and a few where you still get a new ID instead of the one stored in that third-party cookie. There’s a pretty crazy flow chart that covers the whole process here if you’re more of a visual learner.

Adobe has a lot of information available to help you migrate through this process successfully, and I don’t want to re-hash it here. But the basics are as follows:

  1. Request from Adobe that they enable your account for the Marketing Cloud, and send you your new “Org ID.” This uniquely identifies your company and ensures your visitors get identified correctly.
  2. If you’re using (or want to use) first-party cookies via a CNAME, make sure your DNS records point to Adobe’s latest regional data center (RDC) collection servers. You can read about the details here.
  3. If your migration is going to take time (like if you’re not using tag management or can’t update all your different implementations or sites at the same time), work with Adobe to configure a “grace period” for the transition process.
  4. Update your Analytics JavaScript code. You can use either AppMeasurement or H code – as long as you’re using the latest version.
  5. Deploy the new VisitorAPI JavaScript library (see the sketch after this list). This can happen at the same time you deploy your Analytics code if you want.
  6. Test. And then test again. And just to be safe, test one more time – just to make sure the data being sent back to Adobe looks like you expect it to.
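
For reference, the heart of step 5 usually boils down to just a couple of lines (a sketch – the Org ID below is a placeholder):

	// after VisitorAPI.js has loaded, but before your Analytics code runs
	var visitor = Visitor.getInstance("1234567890ABCDEF12345678@AdobeOrg");

	// tie your AppMeasurement tracker to the Marketing Cloud visitor ID
	s.visitor = visitor;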

Once you finish, you’re going to see something like this in the Adobe Debugger:

adobe-debugger-amcv


There are two things to notice here. The first is a request for Adobe Audience Manager, and you should see your Marketing Cloud Org ID in it. The other is a new parameter in the Analytics request called “mid” that contains your new Marketing Cloud Visitor ID. Chances are, you’ll see both of those. Easy, right? Unfortunately, there’s one more thing to test. After helping a dozen or so of my clients through this transition, I’ve seen a few “gotchas” pop up more than once. The Adobe debugger won’t tell you if everything worked right, so try another tool like Charles Proxy or Firebug, and find the request to “dpm.demdex.net.” The response should look something like this if it worked correctly:

charles-amcv-good

However, you may see something like this:

charles-amcv-bad

If you get the error message “Partner ID is not provisioned in AAM correctly,” stop your testing (hopefully you didn’t test in production!). You’ll need to work with Adobe to make sure your Marketing Cloud Org ID is “provisioned” correctly. I have no idea how ClientCare does this, but I’ve seen this problem happen enough to know that not everyone at Adobe knows how to fix it, and it may take some time. But where my first 4-5 clients all had the problem the first time they tested, lately it’s been a much smoother process.

If you’ve made it this far, I’ve saved one little thing for last – because it has the potential to become a really big thing. One of the less-mentioned features that the new Marketing Cloud Visitor ID service offers you is the ability to set your own unique IDs. Here are a few examples:

  • The unique ID you give to customers in your loyalty program
  • The unique ID assigned by your lead generation system (like Eloqua, Marketo, or Salesforce)

You can read about how to implement these changes here, but they’re really simple. Right now, there’s not a ton you can do with this new functionality – Adobe doesn’t even store these IDs in its cookie yet, or do anything to link those IDs to its Marketing Cloud Visitor ID. But there’s a lot of potential for things it might do in the future. For example, very few tools I’ve worked with offer a great solution for visitor stitching – the idea that a visitor should look the same to the tool whether they’re visiting your full site, your mobile site, or using your mobile app. Tealium’s AudienceStream is a notable exception, but it has less reporting capability than Adobe or Google Analytics – and those tools still aren’t totally equipped to retroactively change a visitor’s unique ID. But creating an “ID exchange” is just one of many steps that would make visitor stitching a realistic possibility.
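
To give a flavor of it, here is a hedged sketch (the ID names and values are made up):

	// attach your own known IDs to the Marketing Cloud visitor
	var visitor = Visitor.getInstance("1234567890ABCDEF12345678@AdobeOrg");
	visitor.setCustomerIDs({
		"loyaltyId": { "id": "L-0042871" },    // your loyalty program's ID
		"crmId": { "id": "003B0000001234567" } // ID from your lead gen system
	});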

I’ve intentionally left out many technical details on this process. The code isn’t that hard to write, and the planning and coordination with deploying it is really where I’ve seen my clients tripped up. But the new Marketing Cloud Visitor ID service is pretty slick – and a lot of the new product integration Adobe is working on depends on it. So if you’re an Adobe customer and you’re not using it, you should investigate what it will take to migrate. And if you’ve already migrated, hopefully you’ve started taking advantage of some of the new features as well!

Adobe Analytics, General

The Right Use for Real Time Data

Vendors commonly pitch the need for “real-time” data and insights, without due consideration for the process, tools and support needed to act upon it. So when is real-time an advantage for an organization, and when does it serve as a distraction? And how should analysts respond to requests for real-time data and dashboards?

There are two main considerations in deciding when real-time data is of benefit to your organization.

1. The cadence at which you make changes

The frequency with which you look at data should depend on your organization’s ability to act upon it. (Keep in mind – this may differ across departments!)

For example, let’s say your website release schedule is every two weeks. If, no matter what your real-time data reveals, you can’t push out changes any faster than two weeks, then real-time data is likely to distract the organization.

Let’s say real-time data revealed an alarming downward trend. The organization is suddenly up in arms… but can’t fix it for another two weeks. And then… it rights itself naturally. It was a temporary blip. No action was taken, but the panic likely sidetracked strategic plans. In this case, real-time served as a distraction, not an asset.

However, your social media team may post content in the morning, and re-post in the afternoon. Since they are in a position to act quickly, and real-time data may impact their subsequent posts, it may provide a business advantage for that team.

When deciding whether real-time data is appropriate, discuss with stakeholders what changes would be made in response to observed shifts in the data, how quickly those changes could be made, and what infrastructure exists to make the changes.

2. The technology you have in place to leverage it

Businesses seldom have the human resources needed to act upon trends in real-time data. However, perhaps you have technologies in place to act quickly. Common examples include real-time optimization of advertising, testing and optimization of article headlines, triggered marketing messages (for example, shopping cart abandonment) and on-site (within-visit) personalization of content.

If you have technology in place that will actually leverage the real-time data, it will absolutely provide your organization an advantage. Technology can spot real-time trends and make tweaks far more quickly than a human being can, and can be a great use of real-time information.

But if you have no such technology in place, and real-time is only so executives can see “how many people are checking out right now”, this is unlikely to prove successful for the business, and will draw resources away from making more valuable use of your full data set.

Consider specific, appropriate use cases

Real-time data is not an “all” or “nothing.” There may be specific instances where it will be advantageous for your organization, even if it’s not appropriate for all uses.

A QA or Troubleshooting Report (Otherwise known as the “Is the sky falling?!” report) can be an excellent use of real-time data. Such a report should look for site outages or issues, or breaks in analytics tracking, to allow quick detection and fixes of major problems. This may allow you to spot errors far sooner than during monthly reporting.

The real-time data can also inform automated alerts, to ensure you are notified of alarming shifts as soon as possible.

Definitions matter

When receiving a request for “more real-time” data, dashboards or analysis, be sure to establish with stakeholders how they define “real-time.”

Real-time data can be defined as data appearing in your analytics tool within 1 minute of the event taking place. Vendors may consider within 15 minutes to be “real-time.” However, your business users may request “real-time” when all they really mean is “including today’s partial data.”

It’s also possible your stakeholders are looking for increased granularity of the data, rather than specifically real-time information. For example, perhaps the dashboards currently available to them are at a daily level, when they need access to hourly information for an upcoming launch.

Before you go down the rabbit hole of explaining where real-time is, and is not, valuable, make sure that you understand exactly the data they are looking for, as “real time” may not mean the same thing to them as it does to you.

Adobe Analytics, Analytics Strategy, General, google analytics

How Google and Adobe Identify Your Web Visitors

A few weeks ago I wrote about cookies and how they are used in web analytics. I also wrote about the browser feature called local storage, and why it’s unlikely to replace cookies as the primary way for identifying visitors among analytics tools. Those 2 concepts really set the stage for something that is likely to be far more interesting to the average analyst: how tools like Google Analytics and Adobe Analytics uniquely identify website visitors. So let’s take a look at each, starting with Google.

Google Analytics

Classic GA

The classic Google Analytics tool uses a series of cookies to identify visitors. Each of these cookies is set and maintained by GA’s JavaScript tracking library (ga.js), and has a name that starts with __utm (a remnant from the days before Google acquired Urchin and rebranded its product). GA also allows you to specify the scope of the cookie, but by default it will be for the top-level domain, meaning the same cookie will be used on all subdomains of your site as well.

  • __utma identifies a visitor and a visit. It has a 2-year expiration that will be updated on every request to GA.
  • __utmb determines new sessions and visits. It has a 30-minute expiration (same as the standard amount of time before a visit “times out” in GA) that will be updated on every request to GA.
  • __utmz stores all GA traffic source information (i.e. how the visitor found your site). If you look closely at its value, you’ll be able to spot campaign query parameters or search engine referring domains, or at the very least the identifier of a “direct” visit. It has an expiration of 6 months that is updated on every request to GA.
  • __utmv stores GA’s custom variable data (visitor-level only). It has an expiration of 2 years that is updated on every request to GA.

ga

That was a mouthful – you might want to read through it again to make sure you didn’t miss anything! There are even a few cookies I didn’t list because GA sets them but they don’t contribute at all to visitor identification. If that looks like a lot of data sitting in cookies to you, you’re exactly right – and it helps explain why classic GA offers a much smaller set of reports than some of the other tools on the market. While I’m sure GA does a lot of work on the back-end, with all those cookies storing traffic source and custom variable data, there’s definitely a lot more burden being placed on the browser to keep a visitor’s “profile” up-to-date than on other analytics tools I’ve used. Understanding how classic GA used cookies is important to understanding just what an advancement Google’s Universal Analytics product really is.

Universal Analytics

Of all the improvements Google Universal Analytics has introduced, perhaps none is as important as the way it identifies visitors to your website. Now, instead of using a set of 4 cookies to identify visitors, maintain visit state, and store traffic source and custom variable data, GA uses just one, called _ga, with a 2-year expiration, and the same default scope as with Classic GA (top-level domain). That single cookie is set by the Universal Analytics JavaScript library (analytics.js) and used to uniquely identify a visitor. It contains a value that is relatively short compared to everything Classic GA packed into its 4 cookies. Universal Analytics then uses that one ID to maintain both visitor and visit state inside its own system, rather than in the browser. This reduces the amount of cookies being stored on the visitor’s computer, and opens up all kinds of new possibilities in reporting.

ua
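
If you want to peek at this cookie yourself, here is a quick console sketch (the exact format can vary, but the value typically looks like “GA1.2.1234567890.1446055423” – a version, the cookie domain depth, then a random ID plus a first-visit timestamp):

	var match = document.cookie.match(/(?:^|;\s*)_ga=([^;]+)/);
	console.log(match ? match[1] : '_ga cookie not found');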

One final note about GA’s cookies – and this applies to both Classic and Universal – is that there is code that can be used to pass cookie values from one domain to another. This code passes GA’s cookie values through the query string onto the next page, for cases where your site spans multiple domains, allowing you to preserve your visitor identification across sites. I won’t get into the details of that code here, but it’s useful to know that feature exists.

Many of the new features introduced with Universal Analytics – including additional custom dimensions (formerly variables) and metrics, enhanced e-commerce tracking, attribution, etc. – are either dependent upon or made much easier by that simpler approach to cookies. And the ability to identify your own visitors with your own unique identifier – part of the new “Measurement Protocol” introduced with Universal Analytics – would have fallen somewhere between downright impossible and horribly painful with Classic GA.

This one change to visitor identification put GA on a much more level playing field with its competitors – one of whom we’re about to cover next.

Adobe Analytics

Over the 8 years or so that I’ve been implementing Adobe Analytics (and its Omniture SiteCatalyst predecessor), Adobe’s best-practices approach to visitor identification has changed many times. We’ll look at 4 different iterations – but note that with each one, Adobe has always used a single ID to identify visitors, and then maintained visitor and visit information on its servers (like GA now does with Universal Analytics).

Third-party cookie (s_vi)

Originally, all Adobe customers implemented a third-party cookie. This is because rather than creating its visitor identifier in JavaScript, Adobe has historically created this identifier on its own servers. Setting the cookie server-side allows them to offer additional security and a greater guarantee of uniqueness. Because the cookie is set on Adobe’s server, and not on your server or in the browser, it is scoped to an Adobe subdomain, usually something like companyname.112.2o7.net or companyname.dc1.omtrdc.net, and is third-party to your site.

This cookie, called s_vi, has an expiration of 2 years, and is made up of 2 hexadecimal values, surrounded by [CS] and [CE]. On Adobe’s servers, these 2 values are converted to a more common base-10 value. But using hexadecimal keeps the values in the cookie smaller.

First-party cookie (s_vi)

You may remember from an earlier post that third-party cookies have a less-than-glowing reputation, and almost all the reasons for this are valid. Because third-party cookies are much more likely to be blocked, several years ago, Adobe started offering customers the ability to create a first-party cookie instead. The cookie is still set on Adobe’s servers – but using this approach, you actually allow Adobe to manage a subdomain to your site (usually metrics.companyname.com) for you. All Adobe requests are sent to this subdomain, which looks like part of your site – but it actually still just belongs to Adobe. It’s a little sneaky, but it gets the job done, and allows your Adobe tracking cookie to be first-party.

s_vi

First-party cookie (s_fid)

In most cases, using the standard cookie (either first- or third-party) works just fine. But what if you’re using a third-party cookie and you find that a lot of your visitors have browser settings that reject it? Or what if you’re using a first-party cookie, but you have multiple websites on completely different domains? Do you have to set up subdomains for first-party cookies for every single one of them? What a hassle!

To solve for this problem where companies are worried about third-party cookies – but can’t set up a first-party cookie for all their different websites – a few years ago Adobe began offering yet another alternative. This approach uses the standard cookie, but offers a fallback method when that cookie gets rejected. This cookie is called s_fid, and it is set with JavaScript and has a 2-year expiration. Whenever the traditional s_vi cookie cannot be set (either because it’s the basic Adobe third-party cookie, or you have multiple domains and don’t have first-party cookies set up for all of them), Adobe will use s_fid to identify your visitors. Note that the value (2 hexadecimal values separated by a dash) looks very similar to the value you’d find in s_vi. It’s a nice approach for companies that just can’t set up first-party cookies for every website they own.

Adobe Marketing Cloud ID

The current iteration of Adobe’s visitor identification is a brand-new ID that allows for a single ID across Adobe’s entire suite of products (called the “Marketing Cloud”). That means if you use Adobe Analytics and Adobe Target, they can now both identify your visitors the exact same way. It must sound crazy that Adobe has owned both tools for over 6 years and that functionality is only now built right into the product – but it’s true!

amc

This new Marketing Cloud ID works a little differently than any approach we’ve looked at so far. A request will be made to Adobe’s server, but the cookie won’t be set there. Instead, an ID is created and returned to the page as a snippet of JavaScript code. That code can then be used to write the ID to a first-party cookie by Adobe’s JavaScript library. That cookie will have the name of AMCV_, followed by your company’s unique organization ID at Adobe, and it has an expiration of 2 years. The value is much more complex than with either s_vi or s_fid, but I’ll save more details about the Marketing Cloud ID until next time. It offers a lot of new functionality and has some unique quirks that probably deserve their own post. We’ve covered a lot of ground already – so check back soon and we’ll take a much more in-depth look at Adobe’s Marketing Cloud!

Adobe Analytics, Technical/Implementation

Profile Website Visitors via Campaign Codes and More

One of the things customers ask me about is the ability to profile website visitors. Unfortunately, most visitors to websites are anonymous, so you don’t know if they are young, old, rich, poor, etc. If you are lucky enough to have authentication or a login on your website, you may have some of this information, but for most of my clients the “known” percentage is relatively low. In this post, I’ll share some things you can do to increase your visitor profiling by using advertising campaigns and other tools.

Advertising Campaign Tracking Codes

If you have been using Adobe Analytics (or Google Analytics) for any length of time, you are probably already capturing campaign tracking codes when visitors reach your website. In Adobe Analytics, this is done via the s.campaign variable. While this data is valuable to see which campaign codes are working to get you conversions, it can also be used to profile your visitors if used strategically.

Let’s look at an example. Imagine that your advertising team is looking to reach 18-21 year old males. To do this, they can work with an agency to identify the most likely places to reach this audience through publishers like Facebook or display advertising targeted at sites geared towards this demographic. If you embed campaign tracking codes in those sites that have a high probability of targeting 18-21 males, you can assume that many visits to your website from these campaign codes will be from this demographic. Therefore, you can use SAINT Classifications to classify these codes into a segment profile. If the following tracking codes all came from this targeted campaign, you might classify it like this:
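
For illustration (these tracking codes are made up), the classified data might look like this:

	Tracking Code        Demographic Profile
	fb_m1821_spring      Males 18-21
	fb_m1821_video       Males 18-21
	disp_m1821_gaming    Males 18-21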

Once you have classified the codes by demographic, you can use segmentation to isolate Visits (and Visitors) who came from these codes. While this may not be a large population, you can segment the data and treat it as a sample size to see how that demographic is performing vs. your general population or other demographics. Keep in mind that you may get some false positives since ad targeting isn’t an exact science, but if your advertising is well targeted, you should have a decent amount of confidence in your segment. In fact, there may be cases in which the sole purpose of spending a small amount on advertising is to test out how a different target demographic uses your website.

Business to Business via Demandbase

If you work for a Business to Business (B2B) company, in addition to using campaign codes to profile visitors, you can also use tools like Demandbase to identify anonymous visitors (companies) to your website. I have used this in the past when I worked for Salesforce.com and in my current role at B2B clients. It is amazing how much information you can gather at the company level including Company, Industry, Size, etc. This information can be embedded into your web analytics implementation so that you can segment on it along with your other eVars and sProps:
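
As a rough sketch (the variable numbers are arbitrary, and the “db” object stands in for whatever the Demandbase service returns on your page):

	// copy company-level attributes from the Demandbase response
	// into Adobe Analytics variables so you can segment on them
	s.eVar40 = db.company_name;
	s.eVar41 = db.industry;
	s.eVar42 = db.employee_range;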

This allows you to build segments on this data:

And you can see reports like this:

Here is a brief video I did a few years back on this integration:

Summary

As you can see, whether you are a B2C or B2B company, there are some quick wins you can achieve by adding meta-data to campaign tracking codes and using other technologies to identify anonymous visitors. These short-term solutions can be augmented by more robust tools offered by Adobe, Google and others, but these ideas may be a way to get started and build a case for more advanced visitor profiling. If you have other techniques you have used, feel free to leave a comment here.

Adobe Analytics

Creating Conversion Funnels via Segmentation

Regardless of what type of website you manage, it is bound to have some sort of conversion funnel. If you are an online retailer, your funnel may consist of people looking at products, selecting products, and then buying products. If you are a B2B company, your funnel may be higher-level like acquisition, research, trial and then form completion. Many of my clients want to model their conversion funnels in Adobe Analytics (SiteCatalyst) so they can see where visitors fall, in what percentages and how these buckets change over time. Unfortunately, this isn’t one of Adobe Analytics’ strong suits. In this post, I will share why the out-of-box conversion funnels are not ideal and how you can use segmentation to help build your conversion funnels.

Conversion Funnel Report

As I described in my old blog post on Conversion Funnels, the Conversion Funnel report is merely a graphical representation of whatever Success Events you happen to add to the report. This works if you have discrete Success Events related to each of your conversion funnel steps, but it does not show you what percent of your population is currently at each step of the funnel. For example, if I visit an online retail website, view a product, then add a product to cart (Cart Add Success Event is set) and then order a product (Purchase Success Event is set), the conversion funnel would have a value of “1” for me in each of the rows of this conversion funnel report:

While this may be useful in the context of seeing what percent of visitors make it through each step of the funnel, what if my question is “What percent of my population reached a specific step in the overall conversion funnel this week or month versus last week or month?” In this situation, the out-of-the-box conversion funnel report can show you a time-based comparison, but as I will show later, this doesn’t give you the full picture:

In the next section, I will show you how segmentation can be used to improve upon this…

Using Segments to Create Funnel Populations

To address the aforementioned questions in Adobe Analytics, it is best to use the segmentation features of the product. Using segmentation, you can place each website visit (or visitor) into one of your high-level conversion funnel buckets and then create a different type of funnel in Excel using the ReportBuilder tool. First, you have to identify what criteria you are going to use to determine if a visit is in bucket #1, #2, etc. In this case, let’s imagine that you work for a B2B company and that your first bucket is “Awareness” and it is defined as people who have come to your website, but never seen a product, attempted to download a trial of it or purchased it. The second conversion funnel bucket is “Researchers” and this includes visits where people have looked at one or more products (or clicked on demos/videos and other product-related actions), but have not added a product to the cart or purchased (or filled out a lead form if online purchase is not possible). The third conversion funnel bucket is “Interested” and this includes visits in which people have either added to cart or filled out a lead form, but have not purchased (if available online). Our last conversion funnel bucket is our “Buyers” who have successfully purchased a product or committed to the product in some way (if purchase is not available online).

With these four conversion funnel buckets in mind, your next step is to subdivide all of your visits (or visitors) into one of these four buckets. While this may seem easy, it is actually a bit tricky, because you have to make sure that the same visit is not present in more than one bucket. Doing this requires some fancy Adobe Analytics segmentation skills. To create the first conversion funnel bucket, you would want to create a Visit segment that excluded any visitors who had viewed products, added products to the cart or purchased:

Next, we want to create our Researchers segment for visits that viewed products (you can also add other research events here with an “OR” clause), but excluding visits where a cart addition or order took place:

Next, we want to create our Interested segment for visits that added products to cart (you can also add things like lead form completions here), but excluding visits where an order took place:

Finally, we have our Buyers segment to see visits where visitors completed an order:
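
Spelled out in words, the four mutually exclusive definitions look roughly like this (a sketch; the exact success events used will vary by implementation):

    Awareness:   Visit where Product Views = 0 AND Cart Additions = 0 AND Orders = 0
    Researchers: Visit where Product Views >= 1 AND Cart Additions = 0 AND Orders = 0
    Interested:  Visit where Cart Additions >= 1 AND Orders = 0
    Buyers:      Visit where Orders >= 1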

If you add up the various Visit counts in the above segments, you can see that they are mutually exclusive and add up to the total 40,089,255 showing in the segment preview area. This is a quick way to verify that you have built your segments correctly.

Applying Conversion Funnel Segments

Now that you have your conversion funnel segments defined, there are many ways you can use them. First, you can apply each segment to see any report for visits at that stage of the conversion funnel. For example, you could look at what internal search phrases are used by Researchers vs. Awareness folks. You could view the different pathing behaviors by conversion funnel segment or see what campaign codes drove each type. But the most interesting thing you can do (in my opinion) is to create a conversion funnel report in Microsoft Excel using ReportBuilder. For example, if you were to build a Visits data block with the “Awareness” segment applied, you would be looking at Awareness visits for the specified date range. Then you could do the same thing for the other three segments and then trend the percentages over time. Once you have separate data blocks, you can use formulas to combine them into a percentage-based conversion funnel and see the progression over time like this:
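
To make the math concrete with some hypothetical numbers: if a given month had 24.0M Awareness visits, 12.0M Researcher visits, 2.1M Interested visits and 1.9M Buyer visits, the four data blocks would divide out to 60%, 30%, 5.25% and 4.75% of the ~40M total visits, and trending those four percentages month over month produces the funnel view.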

In the preceding example, there is not much of a spread between the last two funnel steps, so let’s use some different [fake] data to see how compelling this type of reporting might look:

What I like about this type of analysis is that it provides an opportunity to see where YOUR website problems lie. Every website is different. Some websites are great at getting top of funnel visitors to stage three or four of the funnel, but then struggle to get them across the finish line. Others are the opposite in that they don’t get many people to stage two or three, but when they do, they convert very well. Knowing where your website’s problems lie allows you to identify practical ways to improve your funnel. This can be done by focusing your testing and design efforts in the right places, instead of wasting time in areas where your website is doing well. As you can see, this is a different type of approach to conversion funnel analysis, but one that I think can help your organization better understand how visitors are flowing through your conversion path at a high level and provide benchmarks of this over time. If you already have most of your key conversion funnel KPI’s set, then this solution requires no tagging, just the creation of some new segments, so there is no reason not to give it a try!

Adobe Analytics

When to Use Variables vs SAINT in Adobe Analytics

In one of my recent Adobe SiteCatalyst (Analytics) “Top Gun” training classes, a student asked me the following question:

When should you use a variable (i.e. eVar or sProp) vs. using SAINT Classifications?

This is an interesting question that comes up often, so I thought I would share my thoughts on this and my rules of thumb on the topic.

Background Information

As a refresher, SiteCatalyst variables like eVars and sProps are used to store values that break down Success Events and Traffic Metrics respectively. For example, if you have a metric for onsite searches, you should be setting a Success Event and if you want to see that Success Event broken down by onsite search phrase, you might use an eVar to see the number of onsite searches by search phrase. SAINT Classifications allow you to apply meta-data to eVars and sProps so you can collect additional data or group data values into buckets. For example, you might use SAINT Classifications to group onsite search phrases into buckets like “Product-related terms” or “SKU # terms,” etc…

However, there are many cases in which you have a choice between capturing data in a variable (eVar or sProp) and using a SAINT Classification. Let’s look at an example to illustrate this. Imagine that you have a website and many of your customers have a Login ID that they use prior to ordering products. You are passing the Login ID value to an eVar so you can see all of your Success Events (i.e. Searches, Orders, Revenue) by Login ID in your SiteCatalyst reports. One day your boss approaches you and says that she wants to see your website KPI’s by the City visitors live in, and City is one of the attributes your back-end folks have related to each Login ID. At this point, you have two choices: one is to have your IT folks pass the City to a new eVar using the Login ID value (if they can’t do this in real-time, you could also pass this to SiteCatalyst via DB VISTA). The other option is to upload the City value for each Login ID as a SAINT Classification of the existing Login ID eVar. Both of these options would meet the objective of your boss, but which one is the right approach?

If I were a betting man, I would guess that most of you mentally chose option #2, which treats City as a SAINT attribute of the Login ID eVar. Does that sound right? Why wouldn’t it? It saves you tagging work and helps you avoid working with IT, which usually has delays associated with it. However, would it surprise you to know that I would NOT choose option #2 in this case, and instead would pass the City to a new eVar? Before I tell you why, let me review some of the things I consider when making a decision like this:

Advantages of SAINT Classifications

  • Conserves Variables – One of the key advantages of using SAINT Classifications is that they allow you to conserve variables, especially eVars, which tend to run out before any others
  • No Tagging Required – SAINT Classifications don’t require additional tagging
  • Retroactive – SAINT Classifications are retroactive so if you mess up when assigning a value, you can always fix it later by simply updating the SAINT data or fixing your rules if using the SAINT Rule Builder. For example, if you incorrectly assign a campaign tracking code to a Campaign Name, you can easily update this after the fact. If you had passed the campaign name to an eVar, there wouldn’t be much you could do to fix historical data. However, the retroactive nature of SAINT Classifications can also be a negative at times (more on this later)

Advantages of Variables

  • Data Stored Forever – Once you pass data into a variable (eVar or sProp), it is there forever (for better or worse). This is useful if you want to forever document the value at the time a KPI took place
  • sProp Pathing – If you are passing data to an sProp, you can enable Pathing on the variable to see the sequence in which values were collected. Unfortunately, Pathing is not available on SAINT Classifications in Adobe Analytics (though it is in Discover, now known as Ad Hoc Analysis)
  • Data Feeds – Many companies use Data Feeds to export Adobe SiteCatalyst data to other data warehouses and Data Feeds only contain data that is organically passed into SiteCatalyst, which excludes SAINT data

As you can see, there is more than meets the eye when it comes to deciding which approach you should use when collecting data. Do you need data in a Data Feed? Do you need Pathing? Do you need to be able to update values after the fact? For each situation, I find the preceding items to be a useful checklist to keep handy.

And Now Back To Our Story…

So now that you have seen my list of considerations, can you see why I suggested using a new eVar for City in our scenario? In this case, the item I focused on was the retroactive nature of SAINT Classifications. Here, if you were to treat City as a SAINT Classification of Login ID, things would probably work out OK initially, but might have issues in the long run. Let’s say that Adam Greco visits your site, logs in using ID #12345 and then completes an order for $200. At some point you have uploaded a SAINT file that correctly associates Adam’s Login ID with the city of Chicago. At this point, you can use the SAINT Classification “City” report to pivot the data and see an order of $200 for the city of Chicago. However, now let’s imagine that Adam decides to move to San Francisco (something I have done twice in my life!). Your back-end data would at some point learn that Adam has changed cities, and the next time you upload your SAINT file, Adam’s Login ID will be associated with San Francisco. Since SAINT Classifications are retroactive, this will have the impact of changing all activity associated with Adam’s Login ID to look like Adam has always lived in San Francisco, even though all of his KPI’s to date were done in Chicago. This means that your “City” report is inaccurate since it is inflating metrics for San Francisco and deflating metrics for Chicago (and for those who say that the answer is to use Date-Enabled SAINT Classifications, I wish you luck as I have never seen a company have the time to keep those updated!).

This scenario shows why it is so important to review my list of considerations above. While it is a shame to have to waste an eVar on City when you could simply classify the Login ID, using a new variable may be the right thing to do if you want to see which City the Login ID was associated with at the time the KPI took place and lock that value in forever. In my experience, the retroactive issue is the one that I see companies make the most mistakes with, and many don’t even know that they have made a mistake until I point it out to them. Therefore, I will share another rule of thumb I have learned over the years:

Consider whether the data attribute is inherent to the eVar/sProp value or whether it can change. If the meta-data is inherent to the value being classified, or it can change without disrupting your data, use SAINT Classifications. Otherwise, use a new variable. When I say “inherent,” I mean that it will most likely not change. For example, if one attribute you have for Login ID is “Gender,” there is a strong likelihood that this can be a SAINT Classification, since it is unlikely that this value will change for each Login ID (outside of a very complicated surgical procedure!). Another example might be birth date, which will never change for each Login ID. However, if you have a loyalty program and treat different Login ID’s as Basic, Gold or Silver members, that can easily change over time, so that would be a candidate for a new variable so you are documenting their status at the time that the KPI took place.

As you think about how many attributes you may currently be incorrectly storing via SAINT (it happens to the best of us), you may wonder how you will have enough variables to capture all of these attributes. Keep in mind that just because I am suggesting that you set variables instead of using SAINT for data that is affected by retroactivity, it doesn’t mean that you need to store each of these data points in their own variable. For example, if you decide to capture Member Status, City and Zip Code as variables instead of SAINT Classifications of Login ID, if they are all available on the same page (server call), you can concatenate them into one eVar (i.e. Gold Member|Chicago|60603) and then apply SAINT Classifications to that eVar. In this case, you are still capturing the actual value you need to make sure you are not burned by the retroactive nature of SAINT Classifications, but you can conserve eVars by capturing multiple values in one eVar and splitting out the data using SAINT later. In fact, if you capture the data in a methodical manner, you can even use RegEx in the SAINT Classification Rule Builder to do this automatically.
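
As a minimal sketch (the eVar number and the variable names are hypothetical), the concatenation might look like this:

    // Concatenate attributes known at login into one eVar, then split them
    // back out later via SAINT Classifications
    var memberStatus = "Gold Member"; // looked up from your back-end at login
    var city = "Chicago";
    var zipCode = "60603";
    s.eVar15 = memberStatus + "|" + city + "|" + zipCode; // "Gold Member|Chicago|60603"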

Final Thoughts

So there you have it. Some things that you should consider when deciding whether you should use a new variable or SAINT Classifications when collecting new data attributes in your Adobe SiteCatalyst (Analytics) implementation. If you would like to learn more tips like this about Adobe SiteCatalyst, consider attending my Adobe SiteCatalyst “Top Gun” training class. Thanks!

Adobe Analytics

Advanced Conversion Syntax Merchandising

As I have mentioned in the past, one of the Adobe SiteCatalyst (Analytics) topics I loathe talking about is Product Merchandising. Product Merchandising is complicated and often leaves people scratching their heads in my “Top Gun” training classes. However, many people have mentioned to me that my previous post on Product Merchandising eVars helped them a lot so I am going to continue sharing information on this topic. In this post, I will delve into some more advanced concepts related to Product Merchandising. If you have not read my other Product Merchandising post, I suggest you do that before attempting to digest this one!

eVar Allocation

When it comes to Conversion Syntax Merchandising eVars, I see many clients make mistakes with allocation. As a refresher, allocation is an Admin Console setting in which you tell SiteCatalyst if the eVar should use the first value it receives or the most recent value it receives, if multiple values are present prior to a success event taking place. For traditional eVars, it is common to use “Most Recent” allocation as a way to ensure that the most recent value passed gets credit for all future success. However, Conversion Syntax Merchandising eVars are a bit different in that this allocation is set at the product level when the Merchandising eVar value is “bound” to the product at the specified binding event(s) dictated in the Admin Console. This means that the Allocation setting is not actually for the current eVar value, but rather, for the eVar value and product combination.

Since that can be confusing, let’s look at an example. Suppose that a visitor comes to your website and conducts an internal search for “books.” You have an internal search phrase Merchandising eVar so you can see which phrases lead to each product being purchased. So in this scenario, the visitor has searched for “books” and adds Product #100 to the cart. Now, if the same visitor searches for “novels” and adds a different product to the cart (say Product #200), it doesn’t really matter if you use “Original Value (First)” allocation or “Most Recent (Last)” allocation for the Conversion Syntax Merchandising eVar since there are two different products involved and allocation is tied to the binding event of products and eVar values. However, in the unique case in which the same visitor searches for “novels” and finds the same product #100 and decides to add it to the cart a second time, you have to tell SiteCatalyst which eVar value (“books” or “novels”) should be “bound” to Product #100. In this scenario (which admittedly may not happen too often), most clients have indicated that they would like to attribute success to the first search term for product #100 vs. the second search term that led to the same product, since it was the original way they discovered the product. The allocation setting you make (Original or Most Recent) will determine which eVar value gets credit if the same product is involved more than once (product #100 in this example). Therefore, most people decide to use “Original Value (First)” as the allocation method for Conversion Syntax Merchandising eVars.

Fake Products

The next tricky thing about Conversion Syntax Merchandising eVars has to do with non-Order/Revenue success events. As you would expect, since it is their primary purpose, Conversion Syntax Merchandising eVars do a great job of making sure that each product has its own eVar value when it comes time for the purchase event such that each eVar value is correctly associated with the right product. However, there are cases in which you will want to use eVars for more than just the purchase event (Orders, Revenue, Units). For example, if you think back to the preceding example of internal search, besides storing the internal search phrases to associate with products upon purchase, you may also want to see something more basic, like how many internal searches took place for each search phrase. In that case, you would set a success event each time an internal search takes place, and you would already be setting the Conversion Syntax Merchandising eVar with the search phrase (i.e. “books”). Naturally, you would expect that if you add the internal searches success event to the internal search phrase Merchandising eVar report, you would see the number of searches taking place by phrase. Unfortunately, you would be wrong. What you may not know is that Conversion Syntax Merchandising eVars only associate values with success events when the Products Variable is set or when binding has already occurred. Of course, you can set the Merchandising eVar anytime you want, and it will store a value, but it will not associate that value with success events unless a product value is passed to the Products Variable. I believe the reasoning here was that Merchandising was meant for products, so the two go hand-in-hand.

So what do you do if you want to use the same Conversion Syntax Merchandising eVar to both associate eVar values to products and as a way to breakdown custom success events by its values (like a traditional eVar)? You have two choices. The first option is to set two eVars – one with Merchandising and one without. In this example, you would have two internal search phrase eVars and just have to label them correctly (i.e. Internal Search Phrases-Merchandising & Internal Search Phrases). The other option is to set what I call a “fake” product. By passing in a “fake” product when setting a custom success event, you can trick Adobe SiteCatalyst into associating an eVar value with the custom success event. The process of setting a “fake” product is not very difficult and can be automated using some basic JavaScript code. The key is to increment the fake product by one each time it is set, so that SiteCatalyst doesn’t see the same product twice for the same visitor.

This is best illustrated via an example. Let’s continue with our internal search example, only this time, in addition to seeing how many times each internal search phrase leads to orders & revenue, you want to have a custom internal searches success event and be able to break it down by internal search phrase. The way most companies attempt to accomplish this is by using a success event and eVar code like this:
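
Here is a minimal sketch of that code (event10 and eVar5 match the report described below; the search phrase is illustrative):

    // Internal search page: fire the custom success event and set the
    // Conversion Syntax Merchandising eVar, with no Products variable set
    s.events = "event10"; // Internal Searches
    s.eVar5 = "books";    // internal search phrase (Merchandising eVar)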

However, doing this will yield some undesirable results. Here is what a report of this eVar might look like in SiteCatalyst:

You will notice an abnormally high “None” percent in this report, which represents cases in which there was no association between the eVar value and the Internal Searches success event. Since it should be impossible to have an internal search event with no internal search phrase, you would expect to have no values in the “None” row for the internal searches success event (since most companies will still populate a value of [blank search] or something similar if users search with no phrase). The “None” value for Orders is fine, since that represents cases in which no search phrase was used prior to the order.

To rectify this, you would add the “fake” product to your code so it looks like this:
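
Here is a sketch of the corrected code, again using event10 and eVar5 (the session-scoped cookie counter is an illustrative assumption for incrementing the “fake” product):

    // Increment a counter so each "fake" product value is unique
    // (s.c_r/s.c_w are AppMeasurement's cookie read/write utilities)
    var searchCount = parseInt(s.c_r("intsearch_ct") || "0", 10) + 1;
    s.c_w("intsearch_ct", searchCount);
    s.events = "event10";                    // Internal Searches
    s.products = ";intsearch" + searchCount; // "fake" product, e.g. ";intsearch1"
    s.eVar5 = "books";                       // phrase now associates with event10

Note that this simple counter does not re-use the same “fake” value when a visitor repeats the same search term, which (as discussed below) is the one case where you would want to re-use it.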


Setting this “fake” product allows SiteCatalyst to set the Conversion Syntax Merchandising eVar value at the same time that event 10 (Internal Searches) is fired, so you can see one internal search for “books,” while still keeping the Merchandising eVar5 value ready to bind to a “real” product at the time of your selected binding events (normally Cart Addition and Product View). Using this code results in a more accurate report when viewed with the custom success event, which in this case is the internal searches success event:

You may also notice that the “fake” product used is a value and then a number. You can make the “fake” product any value you’d like, but most people tend to label it in a way that indicates what event was taking place. In this case, I named it “intsearch1” since the “fake” product had to do with internal search. If the “fake” product had been set for an internal campaign eVar, I might have named it “intcampaign1” instead. However, it is important to note that you need to increment the “fake” product value (i.e. intsearch2, intsearch3, etc…) so that the same value is not used more than once by the same visitor. Using the same “fake” product value for all cases (every search term in this example) would negate the power of Merchandising, which is designed to attribute different values to different products. The only exception is a scenario in which the visitor intentionally uses the same value (i.e. searches on the same keyword in this example); in that case you would want to re-use the same “fake” product value, whether the duplicate value happened sequentially or after another “fake” value had been passed. It is also important to remember to add the success event that you want to use this eVar with to the list of “Binding Events” in the Administration Console. In this case, you would add the Internal Search success event to the previous list of Binding Events (i.e. Cart Addition and Product View).

Note that this “fake” product workaround only has to be used when all of the following conditions are true:

  1. You are using a Conversion Syntax Merchandising eVar
  2. You want to see that Merchandising eVar’s value associated with a success event other than Orders, Revenue, Units
  3. You are not setting the Products variable with a value at the time the success event is being set (this is why none of this applies to Product Syntax Merchandising eVars)

This means that you only really need to worry about this in cases where you want the Conversion Syntax eVar to do double-duty. I have found that the following situations are the main times I need this work-around:

  • Internal Search Phrase eVar and Internal Searches success event
  • Navigation Element Clicked eVar and Navigation Link Clicks success event
  • Internal Campaign eVar and Internal Campaign Clicks success event
  • Product Filter Element eVar and Product Filter Clicks success event

Final Thoughts

As I mentioned at the outset, Product Merchandising is a bit tricky and the detailed items here around Conversion Syntax can be even trickier. I have learned that there are some things that you just have to memorize when it comes to Adobe SiteCatalyst and this post covers a few of them.

Do you want Adam Greco to review your company’s Adobe Analytics implementation and show you how to get the most out of the product? Many companies are only using a fraction of the functionality offered by Adobe and/or have major flaws in their implementation. Click here to learn more about having Adam audit your Adobe Analytics implementation.

Adobe Analytics, Featured

SiteCatalyst Unannounced Features

Lately, Adobe has been sneaking some cool new features into the SiteCatalyst product without much fanfare. While I am sure these are buried somewhere in release notes, I thought I’d call out a few of them that I really like, so you know that they are there.

Search Within Add Metrics Dialog Window

You can now use a search filter within the Add Metrics window to easily find the metrics you want to add to a conversion or traffic report. Simply click into the search area and begin typing:

Weekdays & Weekends in Metric Reports

A few years ago, Adobe added the ability to filter metric reports by Mondays, Tuesdays, etc. This allowed you to look at the same day (i.e. Monday) over the last few months to see how a metric changed on that day from week to week. However, one gap that remained was the ability to filter by weekdays or weekends. I am pleased to report that Adobe has now added these as valid filters in metric reports as shown here:


Create Segment From Fallout Report

When Adobe added sequential segmentation to the Analytics product, another “unannounced” feature emerged related to the Fallout report. Now when you launch a Fallout report, you have the option (shown in red below) to generate a new sequential segment using the items currently in the Fallout report.

When you click on the link shown above, you will be taken to a screen that looks like this:

From here, all you need to do is make tweaks or save the segment.

I am guessing that there are a few more unannounced features, so if you spot one, please leave a comment here so we can all enjoy! Thanks!

Adobe Analytics

Competitor Pricing Analysis

One of my newest clients is in a highly competitive business in which they sell products similar to those of other retailers. These days, many online retailers have a hunch that they are being “Amazon-ed,” which they define as visitors finding products on their website and then going to see if they can get them cheaper/faster on Amazon.com. This client was attempting to use time spent on page as a way to tell if/when visitors were leaving their site to go price shopping. Unfortunately, I am not a huge fan of time spent on page, since time spent on a page can vary widely for many reasons other than price shopping (i.e. working, going to the bathroom or, in my case, yelling at kids). Because of this, I wanted to come up with an alternative way to see if price was a potential reason for lost business. However, before I share my idea, I want to add a disclaimer that there is no [legal] way to really know if people are leaving your site to buy something elsewhere due to price, but the technique I will show may shed some light on how pricing impacts your conversion rates.

Competitor Pricing – Step 1

The first part of my competitive pricing solution requires that for some or all of your products (SKU’s), you have detailed competitor pricing. Many of my clients have teams that are constantly monitoring competitive websites and documenting the current prices for some or all of their products. If your organization doesn’t have this, my solution will not work (so you can stop reading now!). If you do have this information, you will need to create a spreadsheet that has your product ID’s (values passed to the Products Variable) and your competitors’ price in the next column. If you have multiple competitors, you can add a new column for each one:
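
For example, the layout might look something like this (the SKUs and prices are made up for illustration):

    Product ID    Competitor A Price    Competitor B Price
    10010100      $29.50                $31.00
    10010101      $45.00                $44.25
    10010102      $21.99                $20.49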

Next, you will have to talk with your Adobe Account Manager to create a new DB Vista Rule. As a refresher, a DB Vista Rule allows you to populate SiteCatalyst variables with values from a database lookup table stored on Adobe’s secure servers. This will allow you to pass in the competitor price for each product viewed and added to cart on your website via a server-side lookup. The Adobe Engineering Services team can walk you through how to upload the competitor prices to DB Vista and how to update it over time. Keep in mind that you will need to have a process in place that updates competitors’ prices as they change, preferably within the hour so your data is accurate. This is often done by FTP’ing changes on an hourly basis. Creating a DB Vista Rule will cost you a one-time fee of a few thousand dollars, but you can maintain it yourself thereafter. If you want to save some money, you can ask your internal developers if they can ping a similar competitor cost table in real-time as visitors are on your site, but in my experience, the work effort around that is much more than the cost of the DB Vista Rule.

Competitor Pricing – Step 2

Once you have a way to send competitor prices (by Product ID) into SiteCatalyst, where should that data go? What I propose is that you pass the Product ID, your price and your competitors’ price, concatenated in a string, to a new Conversion Variable (eVar). Since your visitors may view multiple products, you will also want to make this a Merchandising eVar using Product Syntax. I recommend that the data be passed when visitors view the product detail page or add a product to the shopping cart. For example, if a visitor views SKU # 10010100 and your price is $30.00 and your competitors’ price is $29.50, you would pass this:
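
Here is a minimal sketch of the tagging (the eVar number is a hypothetical placeholder; the competitor price segment is appended server-side by the DB Vista Rule):

    // Product detail page: pass the product ID and your price via Product
    // Syntax merchandising; the DB Vista Rule appends the competitor price
    // from the lookup table, yielding a final value of "10010100|30.00|29.50"
    s.events = "prodView";
    s.products = ";10010100;;;;eVar25=10010100|30.00";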


In this case, the product ID is available on the page, as is your current price. The only data point you don’t have is your competitors’ price, which can be added to the string via the DB Vista Rule. This allows you to capture all of the key elements needed to do analysis. For example, if you add the Product Views success event to this new eVar report and filter for the above product ID, you will see all of the different pricing permutations between you and your competitor for the selected date range:

Next, you can add Cart Additions or Orders to the report to see how often each product converted with the given pricing spread:

In this fictitious example, you can see that Orders per Product View was up significantly when pricing was the same or better than the competitor for the product in question.

But there is even more information you can glean when you apply SAINT Classifications. For example, you can classify the product with just the pricing range difference to boil this data down to a finite number of rows in a way that is a tad easier to interpret:

Taking this concept one step further, you can apply another SAINT Classification that takes the Product ID out of the equation to see how the pricing spread impacts all products:

For those that really need things spelled out for them, you can use SAINT to create the highest level view of your pricing by boiling the data down to cases where you were higher, lower or the same with respect to pricing:
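
Pulling these ideas together, a hypothetical SAINT file for this eVar might contain columns like these (the keys and bands are made up):

    Key                       Price Difference        Price Position
    10010100|30.00|29.50      $0.01-$0.99 Higher      Higher Than Competitor
    10010100|29.50|29.50      Same Price              Same As Competitor
    10010100|28.00|29.50      $1.00-$1.99 Lower       Lower Than Competitor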

Obviously, the last few reports can still be viewed by Product by simply using the Products variable breakdown, but I think they show a good high-level view of pricing impact. Keep in mind that each of these rows can be trended over time in SiteCatalyst or ReportBuilder to see a long-term effect.

Product Margin

For those of you who like to kick things up a notch, you can also use the same DB Vista Rule to incorporate your product margin into the new eVar. If you upload your product costs to the DB Vista table, you can have the rule calculate the difference between your price and your cost and add the result as another parameter to the eVar. Then, via SAINT Classifications, you can split this out and see cases where your price is higher than your competitor broken down by your margin:
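
Continuing the earlier tagging sketch, the DB Vista Rule would now append both the competitor price and the computed margin (your price minus your cost) to the eVar string, so the final value might look like this (hypothetical values):

    10010100|30.00|29.50|4.00
    (product ID | your price | competitor price | margin of $30.00 - $26.00)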

In this case, the product in question has a cost of $26.00, so the difference is passed as the last parameter to the eVar and we can include it in our analysis. This allows us to create a new SAINT Classification where we can see Orders/Product View (or Cart Addition) for all products by the product margin amount:

Since all SAINT Classifications can be broken down by each other, this also allows us to see our conversion rates by price difference broken down by product margin amount:

Keep in mind that all SAINT Classifications are eligible for use in Segmentation, which means that you can now build a segment using pricing differential to competitors and product margin as criteria when doing web analysis! Also, if you want to learn how to add product costs as a new metric with which you can calculate product margin as a KPI, check out my old blog post from 2008 on how to do that.

Final Thoughts

As I stated early on, there is no way to make a direct connection between people looking at your site and then price shopping on another site, but my theory is that if you consistently under-perform when you are priced higher than your known competitor(s), this approach may give you some data to validate your theories. Obviously, there are other factors such as shipping, taxes, etc. that can have a major impact, but some of those can be included in this solution as well by simply adding additional parameters to the eVar shown above. Other ways to do similar competitive analysis include using Voice of Customer surveys to ask your visitors if they are price shopping, or moving all SiteCatalyst and competitive data into Adobe’s Data Workbench product. Either way, if you like the concept, you can give it a try or contact me if you want some assistance. If you have other ways to do this, feel free to leave a comment here. Thanks!


Adobe Analytics

Product Cart Addition Sequence

In working with a client recently, an interesting question arose around cart additions. This client wanted to know the order in which visitors were adding products to the shopping cart. Which products tended to be added first, second, third, etc.? They also wanted to know which products were added after a specific product was added to the cart (i.e. if a visitor adds product A, what is the next product they tend to add?). Finally, they wondered which cart add product combinations most often lead to orders.

I had to admit that I was surprised that no one had asked me these questions in the past (a rarity for an old-timer like me!). However, I love getting new questions since it allows me to come up with cool ways to answer them. Therefore, in this post, I will share some of the ideas that I am proposing to this client in case your organization has similar questions.

Product Cart Order Sequence

To tackle the question of which products are added to the cart first, second, third, my first instinct was to try out the cool new sequential segmentation in Adobe Reports & Analytics (SiteCatalyst). This feature has been around in Ad Hoc Analysis (Discover) for a while, but is new to Adobe Reports & Analytics. However, the more I thought about this, the more I realized that sequential segmentation wouldn’t help very much. The only scenario in which I think it might help is if you want to know exactly how often Product A was followed by Product B and then Product C, with an order taking place thereafter. If you know the sequence you are looking for, you can isolate it and look at any report (i.e. Visits, Orders) using sequential segmentation.

But my client is looking to do more exploration and find out which products are added first, second, third, etc. Therefore, my thoughts turned to my old friend Pathing. Pathing is a great way to see a sequence of anything happening on a website/app. In this case, the sequence I am looking to see is products added to cart. Therefore, a cool way to answer this question would be to create a new Traffic Variable (sProp) and pass the Product ID’s (or Names) of each product added to the shopping cart to the variable when a Cart Addition takes place. Once this is done, you can enable Pathing on this new “Products Added to Cart” sProp so you can see all of the available pathing reports. For example, you can open the Full Paths report to see the most popular product combinations added to the shopping cart. Obviously, the first batch of entries in this report will be cases with just one product added:

However, when you get deeper into the results, you will start to see multi-product combinations:

Of course, you can narrow these paths to a specific product in this report using the “Showing Paths containing” feature:

Or you could also use the next page flow report to see products added after a specific product (in this case an Exit means that no other products were added to the cart in the same visit):


Or you could see similar information using Pathfinder:


As you can see, by simply passing product ID’s (or names) to a new sProp, you can gain insight into which products are added the most and in which combinations.
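
The tagging itself is minimal; here is a sketch (the sProp number is a hypothetical placeholder):

    // On each cart addition, pass the product ID to the new pathing sProp
    s.events = "scAdd";
    s.products = ";10010100";
    s.prop20 = "10010100"; // "Products Added to Cart" pathing sProp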

If you have a Product Category SAINT Classification for your Products variable, you can also see all of the above reports by Product Category in Discover (Ad Hoc Analysis) by using pathing on classifications. Or you could always pass the Product Category to another sProp if it is known at the time, as suggested in the comments by Jan Exner.

But What About Orders?

While the preceding concept may be interesting, it falls short of the original goal because it doesn’t show which of these cart addition sequences leads to orders. While you could segment on visits with an order and then look at the remaining paths, I prefer to visualize the actual paths and see exactly when the order took place. Therefore, to add this component, I suggest that you pass the phrase “order” to the same new traffic variable on the order confirmation page. By including this one new value, it will be included in the pathing reports and can be used in any of the reports above or the fall-out report. You can also use the previous page flow report beginning with the “order” value to see the most common cart addition product sequences (paths) that lead to success:

This is probably best done in Ad Hoc Analysis (Discover) where you can have unlimited branches in the report, but you can still extract value from this in Adobe Reports & Analytics.

Other Pathing Reports

While I haven’t had much time to play with this concept, I would imagine that you could also extract some useful information from the additional pathing reports that are enabled when you turn on pathing for this new “Products Added to Cart” sProp. For example, if you want the “411” on a particular product being added to the cart, you can open the Summary report:

You could also see how often each product was the only product added to the cart or abandoned in the cart by using an Exit Rate formula (Exits/Visits). Keep in mind that if a visitor adds another product to the cart, the product in question will no longer be an “exit” as far as this report is concerned, so the exit rate below is the combination of single carts + abandons per visit:

You may even be able to use the “Page Depth” report (even though they really aren’t pages!) to see how often a particular product was the first one added to cart, second, etc… I say may because this is what I think this report is showing, but I need Ben Gaines to verify this for me!

Lastly, if you care about Cart Removals (which is not something I normally care about since many people simply exit instead of removing products), you could also include them in this approach. To do this, you’d have to change the values you pass to the sProp to be “Add:[Product ID or Name]” and then use “Remove:[Product ID or Name]” instead of just passing in the product ID or name.

Final Thoughts

As those of you who have read my posts in the past know, sometimes, I come up with crazy ideas like this and they work out, but other times they don’t. If you think this concept is interesting, feel free to give it a try, but keep in mind that this is just a concept for now until I get some clients to do more experimentation…Enjoy!

Adobe Analytics, Technical/Implementation

New or Old Report Suite When Re-implementing?

In the recent white paper I wrote in partnership with Adobe, I discuss ways to re-energize your web analytics implementation. Oftentimes, this involves re-assessing your business requirements and rolling out an updated web analytics implementation. However, if you decide to make changes to your implementation in a tool like Adobe Analytics (SiteCatalyst), at some point you will have to make a decision as to whether you should pass new data into the existing report suite or begin fresh with a new report suite. This can be a tough decision, and I thought I would use this blog post to share some things to consider to help you make the best choice for your organization.

Advantages of Using The Existing Report Suite

To begin, let’s look at the benefits of using the same report suite when you re-implement. The main one that comes to mind is the ability to see historical trends of your data. In web analytics, this is important, since seeing a trend of Visits or Orders gives you a better context from which to analyze your data. In SiteCatalyst, you get the added benefit of seeing monthly and yearly trend lines in reports to show you month over month and year over year activity. Obviously, if you decide to start fresh with a new report suite, your users will only see data from the date you re-implement in the SiteCatalyst interface.

Another benefit of continuing with your existing report suite is that you will retain unique visitors for those that have visited your site in the past and have not deleted their cookies. When you begin with a new report suite, all visitors will be new unique visitors so you will be starting your unique visitor counts over from the day you re-implement. Starting with a new report suite will also result in some recency reports (i.e. Visit Number, Returning Visitors and Customer Loyalty) being negatively impacted. Additionally, using an existing report suite allows you to retain any values currently persisting in Conversion Variables (eVars). Oftentimes you have eVar values that are meant to persist until a KPI takes place or until a specific timeframe occurs. If you create a new report suite, all eVars will start over since they are tied to the SiteCatalyst cookie ID.

Another area to consider is Segmentation. It is common to use a Visitor container within a SiteCatalyst segment to look for visitors who have performed an action at some point in the past. This segment will rely on the cookie ID so if you begin with a new report suite, you will lose visitors in your desired segment. For example, let’s say you have a segment that looks for visitors who have come from an e-mail at some point in the past and ordered in today’s visit. If you create a new report suite, you will lose all data from people who may have come from an e-mail prior to the new report suite being created.

If your end-users have dashboards, bookmarks and alerts set up, using the existing report suite will avoid the need to re-create them in the new report suite for variables that remain unchanged. Depending upon how active your users are, this can have a significant impact, as re-creating these can result in a lot of re-work.

There are many other items to consider, but these are the ones that I have seen come up most often as advantages of keeping the existing report suite when re-implementing.

Advantages of Using A New Report Suite

So now that I have scared you off of using a new report suite when re-implementing, let me take the counter-argument. Despite all of the advantages listed above, there are many cases in which I recommend starting with a brand new report suite. The most obvious is when the current implementation is proven to be grossly incorrect or misaligned. I often encounter situations in which the current implementation hasn’t been updated for years and is not at all related to what is currently on the website (or mobile app). If what you have doesn’t answer the relevant business questions, all of the advantages listed above become obsolete. In this situation, seeing historical trends of irrelevant data points, losing eVar values or report bookmarks isn’t a big deal. You may still lose out on your historical unique visitor counts since that is out-of-the-box functionality, but I don’t think this justifies not starting with a clean slate. If you are not sure if your current implementation is aligned with your latest business goals, I highly recommend that you perform an implementation audit. This will help you understand how good or bad your implementation is, which is a key component of making the new vs. existing report suite decision.

The next situation is one in which the current implementation is using many of the allotted SiteCatalyst variables, but the new implementation has so much data to collect that it has to re-use the same variables going forward. This gets messy since it is easy to re-name existing variables, but you cannot remove historical data from them. Therefore, if you convert event 1 from “Internal Searches” to “Leads,” because you no longer have a search function and are out of success events, you can get into trouble when your end-users view a trend of leads for this month and see that they are a fraction of what they were last year! Your users may not understand that the data they are seeing from last year is “Internal Searches” and not “Leads,” and may sound off alarms indicating that the website is broken and conversion has fallen off the cliff! While you can do your best to annotate SiteCatalyst reports and educate people, the re-use of existing variables is always a risk, whereas using a new report suite does not require the re-use of existing variables and can avoid this confusion. Where possible, I suggest that you use previously unused variables for your new implementation so this historical data issue doesn’t affect you. Obviously, this requires that your existing implementation isn’t using most or all of your available SiteCatalyst variables. Hence, one key factor when deciding whether to use an existing report suite or create a new one is counting the number of new variables you will need slots for and determining whether you have enough unused slots to avoid re-using old variables for new data. If you have enough, that may tip the scale toward re-use; if you don’t, it may make you lean towards a new report suite.

When it comes to historical trends, one thing to keep in mind is that even if you choose to create a new report suite, it is still possible to see historical trends for data that the new and old report suites have in common. This can be done by importing data into the new suite using Data Sources. This is most effective when the data you are uploading are success events (numbers) and a bit more difficult for eVar and sProp data. The main benefit of this approach is that it allows your SiteCatalyst users to see the data from within the SiteCatalyst interface. Another option is to use Adobe ReportBuilder. Within Excel, you can build a data block for the data in the old report suite and then another data block for the same data in the new report suite and then merge the two together in a graph using two data ranges. Doing this allows you to create charts and graphs that span the old and the new, but these are only available in Excel and not in the SiteCatalyst interface.

Another justification for starting with a new report suite is that your current suite has data that is untrustworthy. I often talk to companies who say that they simply do not trust that the data in SiteCatalyst is correct. As I mention in the white paper, trust is an easy thing to lose and a hard thing to earn back. Your SiteCatalyst reports can be correct nine times out of ten, but people will focus on the one time it was wrong. When this happens too often, it may be time to start with a new report suite and make sure that anything added to this new suite is validated and trusted. This can help you create a new perception and help you re-build the trust that is so essential to web analytics.

Final Thoughts

As you can see, there are many things to consider when it comes to re-implementation and report suites. The current state of your implementation and its data will be the biggest decision points, but every situation is different. Hopefully this helps provide a framework for making the decision and allows you to weigh the pros and cons of each approach.

Adobe Analytics, Analytics Strategy, Conferences/Community, General

The Reinvention of Your Analytics Skills!

Last week, 7,000+ of my friends and I attended Adobe’s Summit 2014 in Salt Lake City. The overarching theme of the event was “the reinvention of marketing”, which got me thinking about how digital analytics professionals can continue to reinvent themselves and their skills.

Digital analytics is a rapidly evolving field, progressing swiftly from log files, to basic page tagging, to cross-device tracking. The “web analysts” of just a few years ago have progressed from pulling basic reports to advanced segmentation, optimisation, personalisation and modeling in R.

So as technology continues to develop, how can analysts and marketers stay up to date on their skills?

1. Attend trainings and conferences like Adobe Summit. These events are a great opportunity to learn how other companies are leveraging technologies and to spark creative ideas. If you struggle to justify the budget, propose attending low-cost events like DAA Symposiums or our ACCELERATE conference, or consider submitting a speaking proposal to share your own insights (speaking normally earns you a free conference pass).

2. Read up! There is no shortage of blogs and articles that discuss new trends in digital. Try to carve out a small amount of time each day or week to read a few.

3. Network and discuss. Local events like DAA Symposiums, Web Analytics Wednesdays and Meet Ups are great places to meet people and discuss trends and challenges.

4. Join the social conversation. If you can’t attend local events (or not as often as you would like), use social media as another source of inspiration and conversation. Twitter, LinkedIn groups or the new DAA forums are great places to start.

5. Online courses. Lots of vendors offer free webinars that can help you stay up to date with your skills. Or, consider taking a Coursera, Khan Academy or similar online course to learn something new.

6. Experiment. Playing can be learning! If you hear of a new tool, social channel or technology, try getting your hands on it to see how it works.

What other tips do you have for keeping skills fresh? Share them in the comments!

Adobe Analytics, Conferences/Community

Adobe Summit Bound (2014)

It seems impossible to believe that twelve months has passed already. But here I am, Salt Lake City-bound for another Adobe Digital Marketing Summit.

For the past couple of years, I have been lucky enough to be invited to Adobe Summit as a “Summit Insider.” Being a Summit Insider gives me a chance to not only enjoy the education, networking and entertainment at Summit, but also an opportunity to share the experience with those who might not be able to make it. I’m super excited to be back, so thanks to the Adobe team for inviting me!

What am I looking forward to?

Like a kid in a candy store, I eagerly perused the Summit Agenda and have carefully selected breakout sessions on topics like predictive analytics, social analytics, data communication and storytelling, and building cross-department co-operation and a culture of analytics.

And even though I am the totally clueless person who never knows the bands, I’m definitely looking forward to the Summit Bash and musical acts Vampire Weekend and Walk The Moon. (Don’t worry, I created a Spotify playlist to brush up on my “new cool music” knowledge.)

Come say hi!

Are you planning on attending Summit? Come say hi! I’ll be there with my fellow Summit Insiders, Travis Wright, Toby Bloomberg and Elisabeth Osmeloski, as well as my partners at Analytics Demystified.

Keep up to date

Don’t forget to follow #AdobeSummit on Twitter via the official Twitter account (@AdobeSummit) and your Summit Insiders.

In town a little early?

Come check out Un-Summit on Monday afternoon. Un-Summit is a great chance to catch up with friends before the conference craziness kicks off, and hear from some great speakers.

Adobe Analytics

Current Order Value [Adobe SiteCatalyst]

I recently had a client pose an interesting question related to their shopping cart. They wanted to know the distribution of money their visitors were bringing with them to each step of the shopping cart funnel. For example, what percent of visitors have between $25 and $50 in their cart when they reach the “Billing” step of the conversion funnel? Does this percentage remain constant throughout the funnel or are there significant drop-offs? Unfortunately, this is not something that can be easily derived in SiteCatalyst, but with a bit of creativity, I will show you how you can add this data to your implementation.

Calculating Current Order Value

The first step in this process is to work with your developers to create a new Counter eVar that will hold the current order value. As soon as a visitor adds an item to the cart, pass the dollar amount associated with that cart addition to the Counter eVar (in addition to passing it to a currency event as prescribed in my “Money Left On Table” blog post). This value will be bound to the Cart Addition success event and future cart events unless it is modified. If the visitor adds more products to the cart, pass in those amounts, and if the visitor removes an item from the cart, subtract it from the Counter eVar value (remember you pass values to Counter eVars using the “+” or “-” sign). I would expire the Counter eVar on Purchase, or at the end of the Visit if your site doesn’t have a persistent cart.
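
Here is a minimal sketch of the tagging, assuming event20 is the currency event from the “Money Left On Table” approach and eVar30 is the new Counter eVar (both numbers are hypothetical):

    // Cart addition: pass the item amount to the currency event and
    // increment the Counter eVar by that same amount
    s.events = "scAdd,event20";
    s.products = ";10010100;;;event20=29.99"; // numeric event via Products syntax
    s.eVar30 = "+29.99";                      // running "current order value"

    // Cart removal: subtract the removed item's amount
    s.events = "scRemove";
    s.products = ";10010100";
    s.eVar30 = "-29.99";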

By having these values in the Counter eVar, you will end up with many different dollar amounts when you open the eVar report with one of your cart events. Here is an example of what the eVar report might look like:

Obviously, this report is not that readable, so the next step is to classify it into meaningful groupings, such as Under $20, $21-$35, $36-$50, etc… This will allow you to analyze the data in buckets and look for insights. Which groupings you choose are up to you and you can use SAINT to have multiple groups, such as every five dollars, every ten dollars, etc… Here is what it might look like after the SAINT Classification:

This general concept is similar to one that I described in my Revenue Bands post, but in that scenario, we were just passing the final order amount to a regular text eVar. The difference here is that we are using the Counter eVar to adjust the order value up or down as it progresses through the cart process.

Viewing Distribution

Once we have the current order values tied to each stage of the cart funnel and have grouped them accordingly using SAINT, our next challenge is to compare the distributions. There are a few different comparisons you can make with this data, so I will touch upon each of them. The first one you might want to see is whether the various percent distributions are steady or going up/down over time. In this case, you may not care about the actual raw numbers that are associated with each order value range, but rather, are most likely more interested in the percent of the total. For example, it may not be that interesting that 2,500 checkouts fell into the range of $15-$25, but it may be interesting to know that this dollar range represented 15% of all visits to the checkout step of the funnel. If you could see this percentage, then you could trend it over time and see if that $15-$25 bucket is increasing, decreasing or steady over time.

To see these percentages, you have two options. The first is to download the data to Excel and create formulas to calculate the percentages and trend them over time. If you want to use the SiteCatalyst interface, the best way to do this is to employ the “Total Metrics” feature. This feature allows you to create a calculated metric that divides the row value by the total at the bottom of the report. For example, if you wanted to calculate the percent of each dollar band while at the Checkout step, you would divide Checkouts by Total Checkouts using a formula like the one shown here:
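Assuming “Checkouts” is your checkout success event, the calculated metric definition (formatted as a percent) might look something like this:

Checkout % of Total = [Checkouts] / [Total Checkouts]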

This formula moves the percent shown in the regular eVar report front and center so it is the actual metric of the report. To visualize this better, let’s look at the previously shown report with this new metric column added:

As you can see, the percentages that were previously on the right side of the column (more as an FYI) are now present by themselves as a real metric in SiteCatalyst. Now you can use this percentage as a true metric, meaning that you can trend it over time and see its historical performance:

This allows you to see how each dollar-amount band performs and do some hard-core web analysis!

Another analysis you may want to do with this data is to see the drop-off between the dollar-amount percentages at the cart addition step, the percentages making it to checkout, etc… This is a bit more complex because you are looking at one dollar amount grouping, but seeing how it changes as visitors get further in the cart process. Unfortunately, there is no great SiteCatalyst report for comparing different percentages over time, so this analysis will have to be done in Excel.

To begin, you will want to create additional “Total” metrics like the one shown above for the other cart steps that you care about. In SiteCatalyst, this is what a report might look like, though it is limited in its use. In this case, the client has a customization step in the funnel, a billing page step and then a checkout step. Using the “Total” metrics, you can compare the changes in dollar amounts at the various steps of the funnel:

In this case, we are looking to see how consistent the percentages are across each row and seeing if we can identify any problem areas. However, to do analysis on this, Excel might be a better tool since it is easier to compare the percentages between different columns. Also keep in mind that you can break this report down by Product or Product Category to see how these percentages change by Product.

Final Thoughts

If your website has discrete steps in its funnel and if you are curious to see how much money visitors have at each step of the cart, the preceding is one way to do this. In addition to what I have shown here, having this information can be useful in other ways. For example, if you want to build a segment of all cases in which a visitor had more than $100 at the checkout step, but did not purchase, the eVar described here can be used as part of your segment criteria. I am sure there are many other ways to use this data as well, but hopefully this gives you some food for thought.

 

Adobe Analytics

Currencies & Exchange Rates [Adobe SiteCatalyst]

If your web analytics work covers websites or apps that span different countries, there are some important aspects of Adobe SiteCatalyst (Analytics) that you must know. In this post, I will share some of the things I have learned over the years related to currencies and exchange rates in SiteCatalyst.

Implementation

When you work for a multi-national organization, the first decision you have to make is whether you plan to have a different report suite for each country website or whether you will combine all data into one report suite and use segmentation for day-to-day analysis. For the pros and cons of this decision, I suggest you refer to this old post that covers multi-suite tagging vs. segmentation. As noted in that post, one of the downsides of using one report suite and segmentation is that you cannot have a different currency for each country. I find this very limiting, so let’s assume that you have a different report suite for each country site in your organization. When implementing each report suite, you will assign a currency that the report suite will use. For example, if the report suite is for Japan, in the Administration Console, you will make the currency Japanese Yen:

Once you do this, you just need to make sure that when you pass Revenue and currency success events, you set the s.currencyCode variable to the appropriate currency code for that country (i.e. JPY). This will tell SiteCatalyst that the numbers you are passing should be stored as Japanese Yen. If you are using multi-suite tagging and sending a second copy of data to a global report suite, then Revenue and currency success events will be translated into the currency of the global report suite (i.e. US Dollars) using the currency exchange rates found on xe.com. This allows your users in one country to see data in their own local currency, while letting executives see data rolled-up in a master suite in one unified currency.
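For example, a purchase on the Japanese site might be tagged something like this (the product ID and amount are hypothetical):

s.currencyCode="JPY"; // store this order as Japanese Yen
s.events="purchase";
s.products=";111;1;5000"; // 5,000 JPY; translated to USD in the global report suite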

One-Report Suite Only

As mentioned above, if you don’t have a separate report suite for each country site, either having just one report suite for the entire organization or a report suite for a region that contains multiple currencies, you cannot take advantage of the preceding currency translation feature. In this case, you have two choices. Your first choice is to use the same currency for all countries and pass data in that currency at the time of data collection. For example, if you have a European report suite, you may choose to use Euro as the primary currency and translate British Pounds and other non-Euro currencies into Euros at the time data is passed into SiteCatalyst.

The second option is to pass currency amounts into a Numeric Success Event in a way that is currency agnostic. In this approach, you would not use the out-of-box Revenue event and instead would create a custom Numeric success event and pass in the raw numbers in the currency of that country. For example, if a 200 Euro order takes place in Germany, you would pass in a value of 200 and if a 300 British Pound order takes place, you would pass in a value of 300 to the Numeric success event. At the same time, you should pass in the currency the order took place in to an eVar. Once you have the raw transaction amount and the currency type, you can download the data to Excel using Adobe ReportBuilder and translate the raw Numeric success event numbers into the appropriate currency using a lookup table and referencing the eVar that indicates the currency. While this will not provide a way to see local currencies within the native SiteCatalyst interface, you can at least have your Excel dashboards show local currencies. Obviously you can use both of these approaches concurrently, using a master currency for the region and then providing local currencies in an Excel dashboard.
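As a sketch of the second option, assuming event20 is the custom Numeric success event and eVar15 holds the currency (both hypothetical variable numbers), a 200 Euro order in Germany might be tagged like this:

s.events="purchase,event20";
s.products=";111;1;;event20=200"; // raw local-currency amount; no Revenue passed
s.eVar15="EUR"; // currency code for the ReportBuilder/Excel lookup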

Pegged Exchange Rates

Over the years, I have worked with several clients that use “pegged” exchange rates. In this scenario, their organization uses one set of currency exchange rates for the entire fiscal year instead of using the daily exchange rates. This causes a problem for Adobe SiteCatalyst, since its default behavior is to use the daily exchange rates found on xe.com. Keep in mind that the local currencies in country-specific report suites will be fine since they are not being translated into a master currency. In this scenario, the only figure that is negatively affected is the currency amount in your global report suite, since that is when currency translation occurs. For example, if you collect an order for 300 Euro in Germany and the German report suite is set to Euros, everything will be fine. However, when that 300 Euro order is sent to the global report suite (let’s assume it is a US-based organization), it will be translated into US Dollars by default using today’s exchange rate instead of your pegged exchange rate (which can be quite different).

Unfortunately, there isn’t a way to override this default behavior, so I recommend using a DB VISTA rule to have SiteCatalyst look up the pegged exchange rates published by your organization. As currency data is collected, you can use DB VISTA to bypass or overwrite the exchange rate translation done by SiteCatalyst with the rates approved by your organization. DB VISTA rules do cost a few thousand dollars, but in this case, it is probably worth it to have your global currency figures reflected correctly.

Interface Currency Setting

The last area related to currencies I want to cover is the currency setting found within the SiteCatalyst interface itself. I call this out because it can be very dangerous if you do not understand it. In the Report Settings area of the left navigation, there is a way to change the currency that you see when using SiteCatalyst. Here is what it looks like:

From this screen you can change the currency setting you use. Here is an example of me changing it from US Dollars to Euros:

Doing this will now show currency reports in Euros:

The dangerous part of this feature is that it seems like it does more than it actually does. How awesome is it that we instantaneously converted all of our data from US Dollars to Euros? Unfortunately, this is a mirage. Using this feature simply translates all historical data into the new currency (Euros in this case) using the current exchange rate. This means that historical data is not converted using the exchange rate that was present at the time the data was collected. Therefore, if the exchange rate has changed significantly, our data will be off. This is why it is important that you educate your users about this feature before they start using it and presenting inaccurate data to people in your organization. Once you understand how this feature works, you may re-think it and proactively discourage its use!

 

Adobe Analytics

Linking Authenticated Visitors Across Devices [Adobe SiteCatalyst]

In the last few years, people have become accustomed to using multiple digital devices simultaneously. While watching the recent winter Olympics, consumers might be on the Olympics website, while also using native mobile or tablet apps. As a result, some of my clients have asked me whether it is possible to link visits and paths across these devices so they can see cross-device paths and other behaviors. This type of linking has long been available using advanced tools like Adobe Data Workbench (formerly known as Visual Sciences or Discover on Premise or Adobe Insight), but in this post I wanted to share some things you can do in SiteCatalyst (Adobe Reports & Analytics) as well.

Authenticated Visitors Only

The first thing to note is that it is not easy to link visitors hitting your website and native apps if they are not authenticated via some sort of identifier (ID). As you probably know, visitors have different SiteCatalyst Visitor ID’s on each device, so it is hard to know which ID’s represent the same person. However, if your website/native app allows visitors to log in using a customer ID or loyalty ID, you have more options available to you. Adobe has one approach called “visitor stitching” that you can read about here, but I have not seen many clients use that successfully. The reason I don’t see this working is that it requires a large change to your SiteCatalyst implementation involving replacing the out-of-the-box Visitor ID with your own Visitor ID. This can have some major ramifications (i.e. distorted unique visitor counts) and many of my clients aren’t ready for that type of risk.

However, this solution can be modified to be a bit more useful and that is what I am going to discuss here. Instead of replacing the Visitor ID on your main report suite, I advocate creating a new “Authenticated Only” report suite in SiteCatalyst to which you send only authenticated traffic. This new report suite would most likely be populated using a VISTA Rule. When data is sent to this new report suite, the s.visitorID would be set to your own authenticated ID so it matches up across any device.

Let’s look at an example. Imagine that Joe Smith visits your website and views a few pages as an anonymous (non logged-in) visitor. Then on the fifth page of the visit, he authenticates. From that point on, you can send all of his traffic to a multi-suite tagged Authenticated Only report suite as a secondary server call. Next, let’s assume that Joe takes a break from your website and begins using one of your native mobile apps. Upon using the native app, he authenticates so all of his data also goes to the Authenticated Only report suite as a secondary server call. Since both of these actions are time-stamped, in the Authenticated Only report suite, you would see activity from both of Joe’s sessions, and in the order they took place. For example, in a path report you might see a series of pages Joe viewed while on the website and then in the next page flow report, see a path from a page on the website to a page found in the native app. Obviously, there is no direct link between these pages, but by passing both data elements to the same report suite, SiteCatalyst sees them as being consecutive. This can provide insights into when people are moving between different devices and what content they view on each. When it comes to pathing, you might want to consider setting a new version of the page name variable in which you pre-pend the digital channel to the page name (i.e. web:home page vs. app:home page) so when you see pages in pathing reports, you know which device the visitor was on when they viewed the page:
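Conceptually, the tagging might look like the sketch below. In practice, the VISTA rule would set the Visitor ID server-side for the Authenticated Only suite; customerID and the sProp number here are hypothetical:

// Once the visitor authenticates (same logic on the website and in the native app)
s.visitorID=customerID; // your own cross-device ID (e.g. customer or loyalty ID)
s.prop10="web:" + s.pageName; // channel-prefixed page name (use "app:" in the native app)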

Seeing paths across multiple devices is cool, but that is only the tip of the iceberg! Since data coming from both platforms has the same Visitor ID, all eVars that you set will retain their values across devices since eVar values are tied to the Visitor ID. For example, if a visitor’s City is passed to eVar4 on the website, its value will persist for all actions taking place on the native app. This means that any Success Events set on the native app can be attributed to the visitor’s City, even though they never input the city within the mobile app! The same is true in reverse, as eVars captured in the native app will be available to the website. This eVar persistence can be very powerful when you think about things like Campaign Tracking Codes, which can be collected in either platform and extended to the other.

Another cool aspect of this solution is that you can do cross-device Segmentation. For example, you might want to build a “Visit” segment in which visitors used more than one device and XYZ actions took place. This would now be possible using the Authenticated Only report suite since the behavior on both devices looks to SiteCatalyst as if it all took place in one session (which it did!).

Another bonus of this solution is the ability to use the SiteCatalyst feature of Participation. Participation is only visit-based, so you can only see which pages within the visit led to Success Events (i.e. Orders). But when visitors switch devices, they are creating a new visit, which breaks Participation. But with this solution, any pages viewed on one device would show as having contributed (Participated) to success taking place on the other device, since both devices are included in the same visit!

Caveats

As always, there are a few caveats to any solution. The following are a few things to keep in mind before getting overly excited:

  • Sending traffic to another report suite will use additional secondary server calls, which have a cost implication. To estimate how much this solution would cost, look at how many page views you have for authenticated pages (pages viewed by people who are logged in) and you will get a good approximation of how many additional server calls you will have to pay for
  • VISTA Rules also cost a few thousand dollars so that is another cost you would have to incur
  • Using this solution assumes that all of your report suites are set up consistently, meaning that the same Success Events, eVars and sProps are used for your website and native app suites. You should be doing this as a best practice anyway for creating global report suites, but in case you are not, be sure to only pass in the variables that are common to both via the VISTA rule or you will get different numbers or values in the Authenticated Only report suite. Keep in mind that you can use VISTA rules to change the Event/eVar/sProp numbers on the fly if it is too difficult for you to sync up your implementations right away (though I don’t recommend doing this in VISTA as a long-term solution since it can be easily broken)
  • Any pages viewed before visitors authenticate will not be captured in the new Authenticated Only report suite. This means that you will not be getting the full picture of the visit in this new report suite. For example, you may not get the accurate entry page or may miss the passing of some campaign tracking codes, but there are some more advanced techniques you can use to store this information and pass it on the first authenticated page
  • Using this approach with native mobile apps that store offline data is not recommended since the offline data timestamps can mess up your data collection and eVar attributions

Outside of these few caveats, if you want to see cross-device behavior, this is a relatively simple (and cheap) solution. Obviously, if you want to get much deeper into cross-channel behavior, you may want to investigate Adobe’s Data Workbench or tools like Causata (now owned by NICE). If you are interested in seeing how your visitors float between your digital channels/devices, feel free to give this approach a try…

Adobe Analytics

Onsite Search Term Exit Rates [Adobe SiteCatalyst]

Recently, I have had a few clients ask me the following question:

How can I determine which onsite search terms have the highest exit rate on the search results page?

This question also appeared on the Adobe Analytics message board. While it is easy to see how often visitors exit from your search results page, that analysis won’t show you which specific onsite search terms had higher or lower exit rates. Of course, you could pick one specific onsite search term and segment on that to see exit rates from the search results page for that term, but that is a non-scalable approach if you want to see this for multiple search terms or for all of them in descending order. So I thought I’d share some ideas on how you can tackle this type of analysis in Adobe SiteCatalyst.

Option #1 – Search Term Exit sProp

The first thing that comes to my mind to solve this problem is Pathing. Once pathing is enabled on an sProp, you can see entries and exits. In this case, you can pass in the onsite search term to a new Traffic Variable (sProp) on the search results page. You are probably already storing onsite search terms in an sProp so you can see search term pathing (seeing search terms used before and after other search terms). However, this sProp will be a bit different. For this sProp, all you want to know is whether they exited or not. To accomplish this, have your developers pass a value of “[did not bounce]” to the sProp if the visitor reaches any page on your website after the search results page. By passing this “dummy” value, you are ensuring that SiteCatalyst won’t see an exit if they reached a page beyond the search results page.
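Assuming the new Traffic Variable is prop25 (a hypothetical number), the tagging might look like this:

// On the search results page
s.prop25="boots"; // the onsite search term

// On any page viewed after the search results page
s.prop25="[did not bounce]"; // dummy value so the exit is not attributed to the search term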

Once this is done, you can open this new sProp and add the Exits metric and see a list of search terms with the most exits in descending order:

If you see the “[did not bounce]” item, you can simply exclude that from the report using a search filter.

Option #2 – Pages After Search Terms

There may be cases in which you also want to see where visitors went after seeing search results for a specific onsite search term if they did continue their path. There are a few ways to do this. One way is to build a segment that isolates visits in which the onsite search term you care about was used and then look at the path reports for that segment. A downside of this approach is that it will include paths taking place before and after the search term was used unless you use Discover (Ad Hoc Analysis). Therefore, the way I would approach this is to continue building upon the concept above, but tweak it a bit.

The tweak you will make is to not pass the “[did not bounce]” value on the page after the search results page, but rather, to pass the s.pagename value to the new Search Term Exit sProp described above. Since this can be confusing, here is a recap of the tagging steps you’d want to tell your developer. On each page of the site except for the search results page, pass the value being passed to s.pagename to your new Search Term Exit sProp. When visitors are on the search results page, have your developer pass the onsite search term to the new sProp (I recommend inserting the phrase “term:” to make it clear which items in the new sProp are search terms and which are page names). Believe it or not, that is it! For example, if a visitor is on the Greco Inc. home page and then searches for “boots” and then goes back to the home page, here are the three values that you would pass to the new Search Term Exit sProp respectively:

grecoinc:home:homepage
term:boots
grecoinc:home:homepage
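Putting it together, a sketch of the page-level logic might look like this, where isSearchResultsPage and searchTerm are hypothetical placeholders for however your site identifies the search results page and the term used:

if (isSearchResultsPage) {
  s.prop25="term:" + searchTerm; // e.g. "term:boots"
} else {
  s.prop25=s.pageName; // e.g. "grecoinc:home:homepage"
}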

By doing this, your end-users can open the Next Page Flow report for this new sProp, choose the onsite search term (“term:boots” in this case) and then see the path flows after the search term. The way pathing works, it will only show paths taking place after “term:boots” and the exit percent will be people who exited right from the search results page. Here is what the report might look like:

In SiteCatalyst, you can only see two levels of paths from the onsite search term, but if you have access to Discover (Ad Hoc Analysis), you can see an unlimited number of paths emanating from the onsite search term. As you can see, this version of the solution provides everything that option one provided, but also shows you the specific pages visitors viewed after each onsite search term. It is up to you to decide how much analysis you want to do and what questions you want to answer.

Another side-benefit of this approach is that you can take advantage of fall-out pathing reports. Let’s say that you want to know how often visitors searching for “boots” make it to the shopping cart or to the order confirmation page. To do this, you can create a fall-out report that starts with “boots” and then add your cart and order confirmation pages to the fall-out report as checkpoints.

In Discover, you can even group onsite search terms into buckets using the grouping feature and do a similar fall-out report from a group of terms leading to carts or orders!

I am sure there are many more ways to answer these types of questions, but for those focusing mainly on SiteCatalyst, I hope that this is helpful.

Adobe Analytics, Analytics Strategy

Black Friday Analytics

So it’s that time of year again when commercialism runs rampant, people spend with reckless abandon, and at any moment there could be fisticuffs at your local Wal-Mart. But alas, this is Holiday Season in America, so be joyous about it!

I’ve been watching online spending trends for the past decade and most recently trying to discern what impact mobile and social media play in all that glitters online. All signs indicate that 2013 is door-busting records with all-time highs for online sales, yet depending on which data you believe, there are different stories to be told.

Two analytics leaders, IBM and Adobe, routinely benchmark holiday shopping. And while their methodologies differ, so too does their data. Here’s a snapshot of some of their published findings thus far:

Show me the Money

IBM’s Digital Analytics Benchmark reports a +18.9% increase from 2012 in Black Friday sales during this year’s holiday season. Average Order Value (AOV) was $135 with on average 3.8 items per order.

Adobe’s Digital Index reported even stronger growth, with a 39% increase from 2012 for a whopping $1.93 Billion in online sales. Adobe reported a similar AOV at $139 and also revealed that the peak shopping time on Black Friday was between 11AM and noon ET, when retailers accrued $150 Million during this single profitable hour.

While both companies reported lift in 2013 online sales during these two days of shopping, each indicates substantial lift in Thanksgiving Day sales, which may have cannibalized some of Friday’s profits. And while Cyber Monday numbers are still being tallied, all signs point to the biggest online shopping day yet, which likely has retailers grinning from ear to ear early on in this short 2013 holiday shopping season.

Mobile Madness

Both indices show mobile as a significant driver in online sales. Adobe reported that on Thanksgiving and Black Friday, nearly one out of every four sales was made via mobile device. iOS devices, and in particular iPads, were the devices of choice in both companies’ findings. Adobe reported that a total of $417 Million was recognized in just two days (Thanksgiving and Black Friday) via iPad sales by businesses within their index.

This should come as no surprise to those of us following the data, but mobile now represents nearly 40% of all Black Friday traffic. That’s a trend that retailers just cannot ignore. And as a consumer, you probably can’t ignore it either. Tactics reported by IBM indicate that retailers sent 37% more push notifications via alerts and popup messages on installed apps during these two heavy online shopping days.

Where in the World?

The biggest discrepancy between the two online shopping benchmarks comes from the geographic perspective. Keep in mind here that IBM’s Digital Analytics Benchmark is composed of data from 800 US retail websites, while the Adobe Digital Index data represents a wholly different set of US retailers that accrued 3 billion online visits during the Thanksgiving to Cyber Monday shopping spree. (Note that exact comparable data isn’t provided in publicly available information.)

Yet, Adobe’s data reflects the majority of online shopping on Black Friday coming from 1) Vermont, 2) Wyoming, 3) South Dakota, 4) North Dakota, and 5) Alaska. They cite weather and rural locations as the rationale for these states topping the list. IBM, on the other hand, indicates that on Black Friday 2013, the highest-spending states from their benchmark were: 1) New York, 2) California, 3) Texas, 4) Florida, and 5) Georgia. It’s not atypical to see variances in data sets, but keep in mind when interpreting results for yourself that it’s all about the data collection method. Results will vary based on who is in your benchmark and how you’re slicing the data.

Social Influence

While IBM’s early data cited in an article by All Things Digital made the outlook for social appear dreary, Adobe weighed in with a contradictory and uplifting perspective on social. IBM did not report on social sales for Black Friday in 2013, apparently because the findings weren’t “interesting”, but their report from 2012 showed that directly attributable revenue from social media (last click) was a dismal 0.34% of Black Friday sales. By my math, that equates to a paltry $3.5 Million total online dollars via social media sales for Black Friday. The AllThingsD reporter managed to eke out of Jay Henderson, IBM’s Strategy Director, that social sales were flat again this year. Moreover, the article quotes Henderson as saying “I don’t think the implication is that social isn’t important, but so far it hasn’t proven effective to driving traffic to the site or directly causing people to convert.” Hmm…

However, this year Adobe is telling a slightly different story. According to their Cyber Monday blog post, social media has referred a whopping $150 million in sales in just five days from Thanksgiving to Cyber Monday. While it’s not clear whether they’re tracking using a last- or first-click perspective, this data indicates that social is pulling its share of the holiday sled this 2013 season. Well, at least social is pulling about 2% of the sled based on a total of $7.4 billion in total online sales from Thanksgiving through Cyber Monday.

Whichever metrics you choose to believe, counting dollars in social media ROI is never an easy task and it usually doesn’t lead to riches. I’m about to publish a white paper on this very topic, so if you’d like to learn more about quantifying the impact of social, email me for more info.

The Bottom Line

This holiday season is shaping up to be the biggest yet for retailers of all sizes. Remember when just a few years ago people were afraid to buy anything online? Well, it certainly appears that those days are gone. So, as the days before Christmas (or whichever holiday you celebrate) wind down, and the free shipping deals get sweeter, and the door-busters swing closed until next year, take a close look at your data to see what the digital data trends leave for you.

Adobe Analytics, Conferences/Community

Advanced Analytics Education Dates Announced

Based on the very successful roll-out of our Advanced Analytics Education offering at ACCELERATE 2013, Analytics Demystified is delighted to announce our “Adobe Intensive” sessions in Portland, Oregon, April 23rd and 24th, 2014. We will be packing decades of knowledge into two days of Adobe-centric training and covering Adobe SiteCatalyst, Adobe ReportBuilder, Adobe Discover, and Adobe Target, all for one low price.

Instructors include Adam Greco, Senior Partner at Analytics Demystified and the author of The Adobe SiteCatalyst Handbook: An Insider’s Guide, and Demystified Partners Kevin Willeitner and Brian Hawkins. Class sizes will be small by design, and so we believe our Adobe Intensive provides an incredible opportunity to learn these technologies directly from the masters themselves.

Learn more about our Adobe Intensive and register today!

Adobe Analytics, Analytics Strategy, General

The problem with "Big Data" …

A lot has been written about “big data” in the past two or three years — some say too much — and it is clear that the idea has taken hold in the corner offices and boardrooms of corporate America. Unfortunately, in far too many cases, “big data” projects are failing to meet expectations due to the sheer complexity of the challenge, lack of over-arching strategy, and a failure to “start small” and expand based on demonstrated results.

At Analytics Demystified we have been counseling our clients to think differently about this opportunity, encouraging the expanding use of integrated data and increasingly complex systems via an incremental approach based initially on digitally collected information. We refer to the approach, somewhat tongue-in-cheek, as “little big data” and recently had an opportunity to write a full-length white paper on the subject (sponsored by Tealium).

You can download the white paper freely from Tealium:

Free White Paper: Digital Data Distribution Platforms in Action

The central thesis of the paper is that through careful and considered digital data integration — in this case powered by emerging Digital Data Distribution Platforms (D3P) like Tealium’s AudienceStream — the Enterprise is able to develop the skills and processes necessary for true “big data” projects on reasonably sized and integrated data sets (hence, “little” big data). The same types of complex, integrated analyses are possible using the same systems and data storage platforms, but by simplifying the process of collection and integration via D3Ps, companies can focus on generating results and proving value … rather than spinning their wheels creating massive data sets.

I will be delivering a webcast with Tealium on this white paper and subject on Wednesday, October 16th at 10 AM Pacific / 1 PM Eastern if you’re interested in learning more:

Free Webinar: Digital Data Distribution Platforms in Action

If you are struggling with “big data” or are interested in how D3P might help your business better understand the integrated, multi-channel consumer, please join us.

Adobe Analytics

Shipping, Discounts & Taxes [SiteCatalyst]

If you are an online retailer, it is likely that your orders contain shipping, discounts and/or taxes. Over the years I have seen some good and bad ways to track shipping, discounts and taxes in SiteCatalyst so I thought I would share some tips that I have found helpful.

The Basics

To start, let’s talk about why you might want to track shipping, discounts and taxes in SiteCatalyst. If you sell products in retail stores and online, your customers have an opportunity cost associated with shipping. Customers can often save money by coming to your physical store to get a product, but there may be a convenience factor associated with having products shipped. By tracking the total shipping dollars associated with each product, it’s possible to see which products are commonly shipped and the associated dollars. You can also look at shipping dollars as a standalone metric to see if shipping dollars per order are going up or down over time. The same concept applies to product discounts. You may have co-workers who want to see which products have been discounted and the amounts of the discounts. Tax amounts tend to be less meaningful from a web analytics perspective, but I will demonstrate how to track them in case your organization needs to track them for some reason.

The general method of tracking shipping is to use a Currency Success Event to store the amount the visitor spends on Shipping and the Products variable to connect that shipping amount to the Product ID. This is done through the product string syntax and might look like this:

s.events="purchase,event30";
s.products=";111;1;400;event30=5"

In this case, you have a scenario in which a visitor has purchased one unit of product ID#111 for $400 and is paying $5 in shipping. The latter is stored in success event 30 and can be viewed, trended, or broken down by Product.

If you also want to track discounts associated with the purchase, you can dedicate another currency success event to discounts. Discounts would work the same way as shipping in that they would be set in the products string using another currency success event. If there was a $10 discount for the product shown above, the syntax might look like this:

s.events="purchase,event30,event31";
s.products=";111;1;400;event30=5|event31=10"

Tax amounts are tracked in a similar manner. The syntax you might use if the preceding order had a tax amount of $32 is as follows:

s.events="purchase,event30,event31,event32";
s.products=";111;1;400;event30=5|event31=10|event32=32"

When this is done, if the preceding order were the only one to take place on your website, you would end up with a report that looks like this:

As you can see, tracking shipping, discounts and taxes is not that difficult and only involves using three new currency success events and the products string. However, things can get a bit trickier as I will show in the next section.

Fake Products?

One strange thing I have seen over the years related to tracking shipping, discounts and taxes is treating these as separate products. I am not quite sure why companies do this, but I am not a fan of this approach. This method adds a fake product called “shipping” or “taxes” to each applicable order and attributes the full shipping or tax amount to these fake products. Here is what the syntax might look like:

s.events="purchase,event30";
s.products=";111;1;400,;shipping;;event30=5.5"

This results in a Products report that looks like this:

As you can see, all shipping dollars are associated with the fake product of “shipping” instead of the products that drove shipping.

This approach can also wreak havoc on reports that use the Orders metric, like Merchandising reports, which will often show greater than 100% due to these fake products. Here is an example:

If you break down the above report by Product, you can see that the culprits are these fake products:

For all of the above reasons, I am not a fan of this “fake product” approach.

Multiple Products

Tracking shipping, discounts and taxes gets more difficult when visitors purchase multiple products concurrently. For example, there may be cases in which a visitor purchases three products and two have a shipping cost or discount, but the third product does not. I have a feeling that the multiple product scenario is what causes people to implement the preceding “fake product” method, but I think this is a lazy approach.

The more precise way to track shipping and discounts for multiple products is to associate the exact dollar amounts with each of the products being purchased, as shown in the first examples above. If multiple products are purchased and we care about tracking shipping and discounts, the resulting syntax might look like this:

s.events="purchase,event30,event31";
s.products=";111;1;400;event30=5|event31=10,;222;1;200;event30=2|event31=5"

In this example, two products were purchased and each has its own shipping and discount amount, correctly aligned with the product that drove these amounts.

However, there may be cases in which you cannot identify the exact amounts by product. In this case, you have a few options. The first (and preferred) option is to proportionally allocate shipping/discount amount based upon the purchase prices. For example, if someone purchases three products of amounts equal to $250, $100 and $50 and the shipping is $40, you could assign shipping amounts of $25, $10, $5 respectively. The same proportional approach would apply to discounts and taxes. While this isn’t perfect, it may be the best that you can do without working with IT to get the exact amounts per product. It is also something that can be done with some fancy JavaScript so you don’t have to get time with your IT folks. Another approach I have seen used is to simply put all shipping into the product with the largest revenue amount, but this will make your shipping data pretty inaccurate.
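Here is a minimal sketch of that proportional allocation in JavaScript, assuming event30 is the shipping success event and using hypothetical product ID’s and prices:

var items=[
  {id:"111", price:250},
  {id:"222", price:100},
  {id:"333", price:50}
];
var shipping=40;
var subtotal=0;
for (var i=0; i<items.length; i++) { subtotal+=items[i].price; }

var productStrings=[];
for (var j=0; j<items.length; j++) {
  // allocate shipping in proportion to each product's share of the subtotal
  var share=(shipping*items[j].price/subtotal).toFixed(2);
  productStrings.push(";" + items[j].id + ";1;" + items[j].price + ";event30=" + share);
}
s.events="purchase,event30";
s.products=productStrings.join(","); // ";111;1;250;event30=25.00,;222;1;100;event30=10.00,;333;1;50;event30=5.00"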

To summarize, tracking shipping, discounts and taxes is something you should consider for your SiteCatalyst implementation if you sell products online. However, the approach you take may depend upon what data you can get from your IT folks. Hopefully this post helps outline some of the choices you have so you can determine which approach is the best for you.

Adobe Analytics

Campaigns & The None Row [SiteCatalyst]

The “None” row in SiteCatalyst. You either love it or hate it. It is amazing how far I have seen some companies go to avoid it and banish it from their reports. Personally, I love the “None” row and often try to explain to people its uses. In this post, I will review what the “None” row is and explain why not using it for Campaign Tracking can hurt your SiteCatalyst implementation.

The “None” Row Re-Visited

Way back in 2008, I explained the “None” row (apparently before images were allowed in blog posts ;-)) as part of my explanation of Conversion Variables (eVars). For those unfamiliar, eVars store values that are collected along the way and when a Success Event takes place, the current value of each eVar gets credit for the Success Event. For example, if eVar 1 captures the zip code of 60035 and a form completion Success Event takes place, that form completion would be attributed to the zip code 60035. But what if no zip code had been passed to eVar 1? In that case, the Success Event would be attributed to the “None” row so that the total of the rows in the eVar report matches the total of form completions for the same time period. That is really all the “None” row is used for in SiteCatalyst. However, in the next section, I will show you the most common “None” row mistake I see and how to avoid it.

“None” Rows Gone Bad – Campaigns

The tracking of marketing campaigns is one of the most important uses of the “None” row. When visitors come to your website, it is customary to track their arrival with a marketing campaign tracking code. This might be a paid search keyword identifier, a friendly URL name or a tracking code associated with a social media campaign. More advanced companies (a.k.a. my clients) go even further and assign tracking codes to unpaid referrals for things like SEO or external websites. Therefore, in the Campaigns (s.campaign) SiteCatalyst report, the “None” row either represents the unpaid visits to your site or, if you are tracking paid and unpaid referrals, it represents your “Typed/Bookmarked” traffic.

So let’s imagine that in your implementation, you have tracking codes for all paid and unpaid referrals to your website and that the “None” row truly represents traffic that is typed/bookmarked. Let’s also suppose that you decide to have two versions of your Campaigns report in which the Campaigns variable (s.campaign) expires at the Visit and another custom Campaign eVar expires after 30 days. The latter is common as many marketers want to see if the same visitor who came to the website from a specific tracking code comes back in the next 30 days and if so, to attribute success to that tracking code.

However, now let’s say that your new mean boss tells you that he/she doesn’t like seeing the “None” row in the two Campaign reports. They say to you: “If it represents Typed/Bookmarked, why don’t we just pass that into SiteCatalyst so it is easily understood by everyone?” (since executives are great at simplifying things, right?). So you have your developer write some code that passes “Typed/Bookmarked” into the s.campaign variable and the custom eVar if no known referrer is found. There is no more “None” row and everybody is happy.

Unfortunately, I have seen this scenario play out too many times. If you do what I just described, you have just ruined your Campaign eVar that has a 30 day expiration. By passing in a catch-all value of “Typed/Bookmarked” in the 30 day expiration eVar, you have forced SiteCatalyst to replace its current value with a new value of “Typed/Bookmarked.” In the previous example, if the visitor who came from the paid search keyword comes back a week later and types in your company’s URL, the paid search keyword will be overwritten. This means that you are taking away credit from paid campaigns and punishing them in cases where visitors actually remember your brand and come back to you a second time (and decide not to cost you money both times!). Passing in a catch-all value of “Typed/Bookmarked” turns your 30 day expiration into a Visit expiration. In this scenario, we already had a Campaign variable that had Visit expiration, but thanks to your boss, who doesn’t understand SiteCatalyst, you now have two of them!

This example illustrates the magic of the “None” row. It provides a way to see what percent of your success can be attributed to a specific value and what percent cannot. In the case of marketing campaigns, the “None” row represents the Typed/Bookmarked segment, and since no value is being passed, it has the added benefit of allowing your Campaign eVars that expire beyond the Visit to attribute success as intended. The same principle applies to all other eVars, but I find that Campaigns is the area in which my clients make this mistake most often. Therefore, my advice is to not be afraid of the “None” row, but rather, to embrace it and bask in its glory!

If Your Boss Really Hates The “None” Row

Lastly, if for some reason you cannot convince your boss to live with the “None” row, there is one more trick I can show you to appease them. Unbeknownst to many SiteCatalyst users, it is possible to classify the “None” row. When building a SAINT file, if you use the value “~none~” as shown here, you can put whatever value you’d like for the “None” row in the classification report. Here I am renaming the “None” row to “Typed-Bookmarked” in a Marketing Channel classification of the Campaign variable.
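A hypothetical slice of such a SAINT file might look like this (tab-separated and simplified):

Key	Marketing Channel
~none~	Typed-Bookmarked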

However, if you really wanted to, you could create a new “Cleaned Campaign Code” classification of the Campaigns report and assign a different value to the “None” row. Personally, I think this is cumbersome and would never do it, but it is technically possible if you really need to have a version of your low-level campaign codes and don’t want to see a “None” row.

Hopefully this post will help those who may have inadvertently fallen into the trap described above, or at the very least, help others avoid it in the future. If you have any questions or comments, feel free to leave a comment below. Thanks!

Adobe Analytics, Tag Management

Adobe acquires Satellite – what next?

The Twitterverse was buzzing this morning about Adobe acquiring Satellite from SearchDiscovery, and while neither side has made a public announcement yet, I have gotten confirmation of the big news. Suddenly, my post from a few days ago feels a bit dated! Time will tell how large the ripple effect from this acquisition turns out to be, but it definitely has the potential to shake up a young, dynamic market. I haven’t had as much experience using Satellite as I’d like, but I expect that to change quickly. I’m really intrigued by this move, because I feel like both companies bring exactly what the other needs:

  • Adobe provides tag management at the best possible price (free to clients) but has struggled to gain much traction with its own tag manager, largely due to the heavy focus on Adobe technology.
  • Satellite has a fresh UI and a really innovative approach to tagging – but so far, heavily funded start-ups like BrightTag, Ensighten, and Tealium have been playing with a bit of an advantage. This move can really help Satellite expand its reach.

Clearly with this acquisition the team at Satellite will greatly benefit from Adobe’s position with the enterprise. Adobe now has a unique opportunity to take Satellite’s core offering and add real value to it. And it definitely fits with Adobe’s strategy of acquiring smaller, up-and-coming firms (Neolane is the most recent example) and giving them a bit of extra “juice” to take that next step.

Congrats to both companies! I look forward to seeing your growth together, and also to see where this takes the rest of the industry.

Adobe Analytics

Product Cross-Sell [SiteCatalyst]

Editor’s Note: Despite the fact that Adobe is retiring the name “SiteCatalyst,” it will take me a while to adjust to that change so I will continue to refer to the product as such.

If you sell products on your website, there is a good chance that you try to cross-sell products. Made famous by Amazon.com, the concept of “People who like this product also like these products…” is forever ingrained in our heads. While SiteCatalyst isn’t a merchandising or recommendations tool in and of itself (Adobe and others have products that specialize in that), it can be used to see how well each product is cross-selling other products. This type of cross-sell reporting can be useful from a web analysis perspective to answer the following questions:

  • How often does cross-sell occur (in general)?
  • Which products are added to the cart via cross-sell?
  • Which products cross-sell each other?
  • Which product categories cross-sell each other?

In this post, I will share some ways you can answer these questions using SiteCatalyst.

Tracking Cross-Sell During Cart Addition

The first step in tracking product cross-sell is to set up your implementation in a way that can report upon Cross-Sell Cart Additions and capture which products are being cross-sold. To do this, let’s go through an example. Let’s imagine you work for AVG. As shown below, a visitor has just added the AVG Security 2013 product to the shopping cart. While there, the visitor sees a cross-sell for the Backup DVD product. If the visitor clicks the Add to Cart button for the Backup DVD product, it should be counted as a Cross-Sell Cart Addition. We would also want to capture which product drove the Backup DVD Cross-Sell Cart Addition.

To capture this in SiteCatalyst, when the visitor clicks on the blue “Add to Cart” button for the Backup DVD product, in addition to the normal Cart Addition success event for the Backup DVD product, you can pass the product ID of the referring product to a Merchandising eVar. The syntax might look something like this:
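(A sketch; friendly product names are used for readability as noted below, and event30 is the Cross-Sell Cart Additions success event discussed later in this post.)

s.events="scAdd,event30"; // Cart Addition plus Cross-Sell Cart Addition
s.products=";Backup DVD;;;;eVar10=AVG Security 2013"; // Merchandising eVar binds the referring product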

Using this syntax, we are telling SiteCatalyst that a Cart Addition took place for the Backup DVD product, and that it was driven by the Security 2013 product through eVar 10, which might be named in the Administration Console something like “Cross-Sell Product.” Keep in mind that in this sample code I am using actual product names only because it is easier to explain, but that in reality, you would want to pass product ID’s to the Products variable and eVar 10 instead and then use SAINT Classifications to add the friendly product names, category, etc…

So now let’s see what this ends up looking like in SiteCatalyst reports. If we were to open the Products report and add Orders and Revenue, we can see how often each product was purchased. But if we break this report down by our new Cross-Sell Product Merchandising eVar, we can see how often each product was purchased as a result of a Cross-Sell and even which product cross-sold it:

In the report above, we can see that the Backup DVD was sold without cross-sell approximately 95% of the time. For the remaining 5%, we can see which products drove its addition to the cart and ultimately its purchase. Here we can see that the AVG Security 2013 product is the top cross-seller of the Backup DVD product. Obviously, we can view the converse of this report by opening the Cross-Sell Product report and breaking it down by product to see what other products the AVG Security 2013 product cross-sold.

Another thing you may notice is that I set an additional success event (event30) in the above syntax. I did this so that I can have a metric that captures how often Cross-Sell Cart Additions took place. The scAdd success event captures all Cart Additions, but you would only set event 30 when the Cart Addition is the result of a Cross-Sell. This event 30 allows you to trend Cross-Sell Cart Additions and you can add it to the Cross-Sell Product eVar report to see how often each product drove visitors to click the Cross-Sell button. This can then be compared to Orders to see Cross-Sell conversion by product.

You can also use this additional Cross-Sell Cart Additions success event to create a Calculated Metric to quantify what percent of all Cart Additions are Cross-Sell Cart Additions (Cross-Sell Cart Additions/Cart Additions). This is easily trended and you might have merchandisers set internal targets or goals to increase this via Test&Target or other tools.

You can also add both the Cart Additions and Cross-Sell Cart Additions success event to the Products report to see Cross-Sell Cart Addition % by Product:

If desired, you can also see cross-sell of product categories. If you are a good SiteCatalyst administrator, you should already be using SAINT Classifications to group products into product categories. If you are doing this, then you can view the above product cross-sell report by product category to see how well one product category is doing at cross-selling another product category. Using the example above, if we classified the AVG Security 2013 product into the Security product category and the Backup DVD product was classified into the Backup product category, we could see how often the Security Category cross-sells the Backup Category.

As an aside, if you are using a Merchandising variable to capture “Finding Methods” (capturing the method that visitors used to find products they ultimately purchase), you want to be sure that when the Cross-Sell Cart Addition success event is set, you also pass a value of “Cross-Sell” to the Finding Methods eVar. This will allow you to bind each product driven by cross-sell appropriately.

So there you have it. Some ideas for you to ponder as you think about product cross-sell on your website. If you have any questions or additions, feel free to leave a comment here. Thanks!

Adobe Analytics

Missing SAINT Classification Data [SiteCatalyst]

Recently, the Adobe SiteCatalyst product team “hit it out of the park” (to follow Ben’s analogy) with the latest SiteCatalyst point release! There are many awesome features that people like me have been waiting for. This release has things like segment comparisons, increased self-service capabilities via the Admin Console, Classifications on List Variables, Hourly Trending and improved text search filtering. Probably the biggest point release I have seen going back to version 9! Kudos to all involved!

However, the biggest feature enhancement was the SAINT Classification Rule Builder. This has been a long time coming and I am excited to start using it. I highly recommend you read more about this in the SiteCatalyst help section (login required). This new feature will go a long way towards helping clients maintain and clean up their SAINT Classifications. While I was giddy about the concept of SiteCatalyst customers having updated SAINT Classifications, I decided to share some other tips I have used to help clients minimize their missing SAINT data. When I work with clients to audit their Adobe SiteCatalyst implementations, one thing I review is how many of their eVars and sProps are missing SAINT classification data. Hopefully, these tips, combined with the new SAINT Classification Rule Builder, will lead you into SAINT Classification bliss!

The “None” Row in SAINT

In the past, I have explained how the “None” row in SiteCatalyst is annoying (at times), but actually a good thing, and not something to be feared. The “None” row can be extremely useful in Campaign reports and many others. If you see a “None” row in any eVar report, it simply means that when the chosen Success Events took place, there was no value for the current eVar. After a while, most SiteCatalyst users begin to understand this. Traffic variable (sProp) reports don’t have “None” rows; if there is no data, they simply omit it instead of lumping the remainder into a “None” row.

However, when it comes to SAINT Classifications, for the most part, the “None” row tends to be a bad thing. The reason is that when you see a “None” row, it can mean one of two things:

  1. The root eVar variable that you are classifying did not have a value
  2. You are missing SAINT Classification data, causing unclassified data to appear in the “None” row for the eVar (or sProp) classification

To better illustrate this, let’s look at an example. Let’s say you work for a company that sells video games. You are passing Product ID’s to the Products variable and also have a few SAINT classifications of the Products variable including the one shown here (Game Genre):

As you can see, there is a significant percentage of Orders and Revenue appearing in the “None” row of this classification report. But how do you know if the cause is #1 or #2 above or a mixture of both? Did someone launch new products and forget to pass in a Product ID to the products variable and is that why there is no assigned Game Genre? Or do we have all of the Product ID’s correctly assigned to the Products variable, but forgot to add the Game Genre meta-data via SAINT? Unfortunately, it is difficult to know the answer to this question without doing some research.

Isolating the True “None” Row

If you are a SiteCatalyst guru, you probably know that the fastest way to figure this out is to do what I call the “breakdown by the root” trick. What I do is to click the breakdown icon next to the “None” row and choose to break that row down by the variable that it is a classification of (its root). In this case, you would break down the Game Genre “None” row by the Products variable to see if there are any product ID’s that show up. If you see Product ID’s in the breakdown report, you know that you are missing SAINT classification data. If you only see a “None” value, then you have done all that you can do via SAINT and have to figure out why such a high percentage of Orders and Revenue are not being associated with a Product ID. The latter is often a tagging issue.

In this example, when you create this breakdown, you can see that both problems exist. About 4% of the Orders taking place are missing a Product ID in the Products variable, which means that we have no way of knowing which Game Genre they would fall into. However, the rest of the items appearing in the breakdown report have Product ID’s. This means that they are simply unclassified. Therefore, if we were to successfully classify all of these Product ID’s, we could bring our overall percent of unclassified Orders down from 22.1% to 0.8% (1,095/128,916 Orders), which makes a huge difference! I have found that having large “None” rows for classifications can confuse your users and lead to the perception that your data isn’t sound. To stay on top of this, another trick I suggest is that you schedule the preceding breakdown report to be mailed to you weekly for your most important variable classifications.

Using a “Dummy Value”

Next is what I call the “dummy value” trick. There are sometimes cases in which you know that you will be missing meta-data. For example, in the gaming scenario above, there could be a case in which you know the Product ID, but for some reason don’t have the Game Genre right away. Looking at the second report above, there may be a legitimate reason why Product ID 7777 and 7767 don’t yet have a Genre assigned. If that is the case, my suggestion is that you set a “dummy value” in your SAINT file to act as a placeholder for the actual value that will be coming later. To do this, simply add the “dummy value” in any blank spots of your SAINT file. For example, let’s say that you download your products SAINT file and it looks like this:

All you have to do is fill in the blanks with a “dummy value.” I like to put “dummy” values in all caps and/or brackets so they are easy to identify and filter out of reports if needed. The preceding SAINT file would now look like this:
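As a rough sketch, the filled-in, tab-delimited SAINT file might then look something like this (the keys 1234 and 5678 and the genre values are hypothetical; 7777 and 7767 are the products mentioned above):

Key	Game Genre
1234	Action
5678	Sports
7767	[GAME GENRE COMING SOON]
7777	[GAME GENRE COMING SOON]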

Once this file has been uploaded and processed, you can re-open the first report shown above and see this:

Obviously, not much has changed since all we did was move most of the “None” values to a new “dummy” row. However, we now can see that the actual “None” row is only about 0.8% and more importantly, this report communicates to SiteCatalyst users that it is known that 21.3% of the Game Genre’s are currently missing (so don’t call and pester us!). You can put any message you want in the brackets such as “[GAME GENRE COMING SOON…]” or whatever you think makes sense to your users. Additionally, it is easy to see this report without the “dummy value” by simply using a search filter to remove anything with a “[” or “]” symbol, which is easier than removing the “None” row from reports.

Final Thoughts

If you have to deal with SAINT classifications on a regular basis, knowing how to do the following can make your life a lot easier:

  • Isolate the true “None” values from those missing SAINT classifications
  • Get a report of those SAINT items that are missing meta-data through scheduled reports
  • Communicate which SAINT values are known to be missing vs. ones that are true “None” values through a “dummy value”

Together these tips should save you some time and headaches when it comes to SAINT. If you have any questions on these tips or additional ones, feel free to leave a comment here.

Adobe Analytics, General, Technical/Implementation

Big vs. Little Implementations [SiteCatalyst]

Over the years, I have worked on Adobe SiteCatalyst implementations for the largest of companies and the smallest of companies. In that time, I have learned that you have to have a different mindset when it comes to each type of implementation. Implementing both the same way can lead to issues. Big implementations (which can be either large due to complexity or traffic volume) are not inherently better or worse, just different. For example, an implementation at a company like Expedia is going to be very different than an implementation at a small retail website. Personally, I find things that excite me about both types. When working with a large website, the volume of traffic can be amazing and your opportunities to improve conversion are enormous. One cool insight that improves conversion by a small percentage can mean millions of dollars! Conversely, when working with a smaller website, you usually have a smaller development team, which means that you can be very agile and implement things almost immediately.

Hence, there are pros and cons with each type of website and these are important things to consider when approaching an implementation or possibly when considering what type of company you want to work for as a web analyst. The following will outline some of the distinctions I have found over the years in case you find them to be helpful.

Implementation Differences

The following are some of the SiteCatalyst areas that I have found to be most impacted by the size of the implementation:

 

Multi-suite Tagging
Most large websites have multiple locations, sites or brands and use multi-suite tagging. When you bring together data from multiple websites into one “global” suite, you have to be sure that all of the variables line up amongst the different child report suites. Failure to do this will result in data collisions that will taint Success Event metrics or combine disparate eVar/sProp values. If you have 10+ report suites, it almost becomes a full-time job to manage these, making sure that renegade developers don’t start populating variables without your knowledge. If you use multi-suite tagging and have a global report suite, my suggestion is to keep every report suite as standardized as possible. This may sound draconian, but it works.

For example, let’s say you have five report suites that are using eVars 1-45 and a few other report suites that require some new eVars. Even if the latter report suites don’t intend to use eVars 1-45 (which I doubt), I would still recommend that you use eVar 46 onward for the new eVars in the additional report suites. This will ensure that you don’t encounter data conflicts. Taking this a step further, I would label eVars 1-45 as they are in the initial report suites using the Administration Console. I would also label eVar 46 onward with the new variable names in the original set of report suites. At the end of the day, when you highlight all report suites in the Admin Console and choose to see your eVars, you should strive to see no “Multiple” values. That means you have a clean implementation and no variable conflicts. Otherwise, you will encounter what I call “Multiple Madness” (shown here).

If you really have a need for each website to track its own site-specific data points, one best practice is to save the last few Success Events, eVars and sProps for site-specific variables. For example, you may reserve Success Events 95-100 and eVars 70-75 to be different in each report suite. That will provide some flexibility to site owners. You just have to recognize that those Success Events and eVars should be hidden (or disabled) in the global report suite so there is no confusion.

Another exception to the rule might be sites that are dramatically different than the core websites. For example, you may have a mobile app or intranet site that you are tracking with SiteCatalyst. This mobile app or intranet site may be so drastically different from your other sites that you want to have it in its own separate report suite that will never merge with your other report suites. In this case, you can either create a separate Company Login or just keep that one report suite separate from the others and use any variables you want for it.

Keep in mind that the Administration Console allows you to create “groups” of report suites so you can group common ones together and use that group to make sure you don’t have any “multiple” issues. You can also use the Menu Customization feature to hide variables in report suites where they are not applicable. Even if you don’t currently have a global report suite, I still recommend following the preceding approach. You never know when you might later decide to bring multiple report suites together, and using my approach makes doing so a breeze (simply changing the s_account variable) versus having to re-implement variables and move them to open slots at a later date. The latter will cause you to lose historical trends, modify reports and dashboards and confuse your end-users.
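To make the mechanics concrete, here is a minimal sketch of multi-suite tagging in the legacy JavaScript H-code (the report suite IDs are hypothetical):

// Hypothetical multi-suite tagging: every hit is sent to the site's own
// child report suite and to the global report suite in one request
var s_account="mycobrandsuite,mycoglobalsuite";
var s=s_gi(s_account);

Bringing a new site into the global suite later really can be as simple as adding the global suite ID to this comma-delimited list, which is why keeping variables standardized up front pays off.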

When you have a smaller implementation, it is common to have just one production report suite. This avoids the preceding multi-suite tagging issues and makes your life a lot easier!

Variable Conservation
As if coordinating variables across multiple report suites isn’t hard enough, this issue is compounded by the fact that multi-suite tagging means that you only have ~110 success events, ~78 eVars and ~78 sProps to use for all sites together vs. being able to use ~250 variables differently for each website. This means that most large implementations inevitably run out of variables (eVars are usually the first type of variable to run out). Therefore, large implementations have to be very aggressive on conserving variables, which can handcuff them at times. As a web analyst, you can often make a case for tracking almost anything, since the more data you have the more analyses you can produce and the more items you can add to your segments. Unfortunately, when dealing with a large implementation, for the reasons cited above, you may need to prioritize which data elements are the most important to track lest you run out of variables. This isn’t necessarily a bad thing as it helps your organization focus on what is really important across the entire business and tracking more isn’t always better.

If you contrast this with a smaller implementation that has no multi-suite tagging and no global report suite, the smaller implementation is free to use all variables for the one site being tracked. This provides ~250 variables to use as you desire. That should be plenty for any smaller site, so variable conservation isn’t as high of a priority. A few times, in my SiteCatalyst training classes, I have had both large and small companies sitting next to each other, and have witnessed the big company drooling over the fact that the smaller company was only using 20 of their eVars (wishing they could borrow some)! While it may sound strange, there are many cases in which I would tell a smaller organization to set success events and eVars that I would conversely tell a large organization not to set. For example, if I were working with a small organization that had only one workflow process (i.e. credit card application) and they wanted to track all six steps with success events, I might say “go for it!” But if that same scenario arose for a large website (i.e. American Express), I would encourage them to only set success events for the key milestone workflow steps to conserve success events. This is just one example of why I tend to approach large and small implementations differently.

One final note related to variable conservation. Keep in mind that you can use concatenation combined with SAINT Classifications to conserve variables. For example, instead of storing Time of Day, Day of Week and Weekday/Weekend in three separate eVars, you can concatenate those together into one and apply SAINT Classifications. This will save a few eVars and a similar process can be replicated for things like e-mail attributes, product attributes, etc.
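As a minimal sketch, the tagging for that concatenation might look like this (the eVar number, delimiter and value formats are assumptions; each piece would later become its own SAINT classification column):

// Hypothetical: pack Time of Day, Day of Week and Weekday/Weekend into one delimited eVar
var now=new Date();
var days=["sunday","monday","tuesday","wednesday","thursday","friday","saturday"];
var daypart=now.getHours()<12 ? "morning" : (now.getHours()<17 ? "afternoon" : "evening");
var dayType=(now.getDay()===0 || now.getDay()===6) ? "weekend" : "weekday";
s.eVar25=daypart+"|"+days[now.getDay()]+"|"+dayType; // e.g. "morning|tuesday|weekday"

One upload of a three-column SAINT file then splits this single value back into three separate reports.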

Uniques Issues
If you have a large website, there is an increased chance you will have issues with “uniques.” Most eVar and sProp reports have a limit of 500,000 unique values per month. I have many large clients that try to track onsite search phrases or external search keywords and exceed the unique threshold by the 10th day of the month. This makes some key reports less useful and often results in data being exported via a data feed or DataWarehouse report to back-end tools for more robust analysis. For some large implementations, since the data points can’t be used regularly in the SiteCatalyst user interface due to unique limits, I sometimes have clients pass data to an sProp to conserve eVars, since in DataWarehouse, Discover and Segmentation, having values in an sProp is similar to having it in an eVar.

Smaller implementations normally only hit uniques issues if they are storing session ID’s (i.e. ClickTale, Tealeaf) or customer ID’s.

Large # of Page & Product Names
Many large websites have so many pages on their site (i.e. one page per product and over 100,000 products) that having an individual page name for each page is virtually impossible. In these cases, you often have to take page names up a level and start at a page category level. The same concept can apply to individual product names or ID’s as well.

Smaller implementations rarely have these issues since they tend to have fewer pages and numbers of products.

Page Naming Conventions
Another area where I see those running large implementations make mistakes is related to page naming across multiple websites. If you are managing a smaller implementation, you can name your pages anything you’d like. For example, while I don’t recommend it, if you want to call your website home page, “Home Page,” you will be ok. However, this approach won’t always work with a large implementation. If you have five report suites and one global report suite and you named the home page of each “Home Page,” in the global report suite, you would see data from all five report suites merged into one page name called “Home Page.” While there may be reasons to do this, you will probably also want to have a way to see things like Pathing and Participation for each of the home pages from each site individually in the global report suite. In this post, I show how you can have both (“have your cake and eat it too!”), but this example highlights the complexity that can arise when dealing with larger implementations.

SAINT Classifications
Large websites can often have a variable with more than a million SAINT classification values. Updating SAINT tables can take days or weeks unless you are methodical about your approach. Smaller sites with lower numbers of SAINT values can often re-upload their entire SAINT file daily or weekly to make sure all values are classified. Large implementations don’t have this luxury. They have to monitor which values are new or missing SAINT values so they can only upload the new or changed items so it doesn’t take weeks for SAINT tables to be updated. If you work with a large implementation, keep in mind that you can update SAINT Classifications for multiple report suites with one upload if you use the FTP method vs. browser uploads.

Time to Implement
In general, large implementations tend to move slower than smaller ones. While tag management systems are helping to remedy this, I still find that adding new variables or fixing broken variables takes much longer with large implementations (often due to corporate politics!). This means that you have to be sure that your tagging specifications are right the first time, since getting changes in after a release may be difficult.

Conversely, with smaller websites, you can be much more nimble and update SiteCatalyst tagging on the fly. For example, you may be doing a specific analysis and realize that it would be helpful for you to have the Zip Code associated with a form. If you work with a smaller site, you may be able to use a SiteCatalyst Processing Rule or call your developer and have them add Zip Code to eVar30 and have data the same day!

Globally Shared Metrics, Dashboards, Reports, etc.
When you work with a small implementation, you may have a few calculated metrics, dashboards or reports that you share out to your users. This is a great way to collaborate and enforce some standards or consistency related to your implementation. However, when you have a large implementation, sometimes with 300+ SiteCatalyst users having logins, this type of sharing can easily get out of control. Imagine each SiteCatalyst user sharing five reports or dashboards. The shared area of the interface becomes a mess and you are not sure which reports/dashboards you should be using. Therefore, when you are working with a large implementation, it is common to have to implement some processes in which reports and dashboards are sent to the core web analytics team who can then share them out to others. This allows the SiteCatalyst user community to know which reports/dashboards are “approved” by the organization. You can learn more about centralizing reports and dashboards by reading this blog post.

Final Thoughts

As I mentioned in the beginning of this post, bigger isn’t always better. As the items above show, I often find that bigger implementations lead to more headaches and more limitations. However, keep in mind that with great volume come conversion improvement opportunities that often dwarf those of smaller sites.

One over-arching piece of advice I would give you, regardless of whether you work with a large or small implementation, is to review your implementation every six months (or at least yearly) and determine if you are still using all of your variables. It is better to get rid of what you no longer need periodically than to have to do a massive overhaul one day in the future.

While this post covers just a few of the differences between large and small implementations, they are the ones that I tend to see people mess up the most. If you have other tips for readers, feel free to leave a comment here. Thanks!

Adobe Analytics, Google Analytics

Handy Google Analytics Advanced Segments for any website

Advanced Segments are an incredibly useful feature of Google Analytics. They allow you to analyse subsets of your users, and compare and contrast behaviour. Google Analytics comes with a number of standard segments built in. (For example, New Visitors, Search Traffic, Direct Traffic.)

However, the real power comes from leveraging your unique data to create custom segments. Better yet, if you create a handy segment, it is easily shared with other users.

Sharing segments

To share segments, go to Admin and choose the profile you wish to view your segments for.

Choose Advanced Segments. (Note: You can also choose “Share Assets” at the bottom. That will allow you to share any asset, including segments, custom reports and more.)

Find the segment you are interested in sharing, and click Share. This will give you a URL that you can use to share the segment.

Send this URL to the user you wish to share the segment with. They simply paste it into their browser.

It will ask them which profile(s) they would like to add the segment to.

Sharing segments does not share any data or permissions, so it’s safe to share with anyone.

Once a user adds a shared segment to their profile/s, it becomes theirs. (This means: If you make subsequent changes to the segment, they will not update for another user. But it also means the user can customise to their liking, if needed.)

Something to keep in mind

Sharing segments of course requires those segments to be applicable to the profile a user is adding them to. (For example, if you create an Advanced Segment where Custom Variable 1 is “Customer” and the segment is applied to a profile where no Custom Variables are configured, it won’t work.)

The good news: Free stuff!

The good news is there are a few super-handy segments you can apply to your profiles today that should apply to any Google Analytics account. (Unless you’ve made some super wacky modifications of standard dimensions!)

Here are a few segments I have found helpful across many Google Analytics accounts. Simply click the link and follow the process above to add to your own Google Analytics account.

Organic Search (not provided) traffic: Download segment

I find this a pretty helpful segment to monitor the percentage of (not provided) traffic for different clients.

Definition:

  • Include Medium contains “organic” and
  • Include Keyword contains “(not provided)”

Mobile (excluding Tablet): Download segment

The default Google Analytics Mobile segment includes tablets. However, since ease of use of a non-optimised website is much better on tablet than smartphone, it can be really helpful to parse non-tablet mobile traffic out and see how users on a smaller screen are behaving.

Definition:

  • Include Mobile (Including Tablet) containing “Yes” and
  • Exclude Tablet containing “Yes”

Desktop Traffic: Download segment

Definition:

  • Include Operating System matching Regular Expression “windows|macintosh|linux|chrome.os|unix” and
  • Exclude Mobile (Including Tablet) containing “Yes”
  • Note: Why didn’t I just create the segment to exclude Mobile = Yes? Depending on your site, you may get traffic from non-mobile, non-desktop sources like gaming devices. This segment adds a little extra specificity, to try to narrow down to just computer traffic.

Major Social Networks Traffic: Download segment

Definition:

  • Include Source matching Regular Expression “facebook|twitter|t.co|tweet|hootsuite|youtube|linkedin|pinterest|insta.*gram|plus.*.google”

Social Traffic: Download segment

Definition:

  • Include Source matching Regular Expression “facebook|twitter|t.co|tweet|hootsuite|youtube|linkedin|pinterest|insta.*gram|plus.*.google|bit.*ly|buffer|groups.(yahoo|google)|paper.li|digg|disqus|flickr|foursquare|glassdoor|meetup|myspace|quora|reddit|slideshare|stumbleupon|tiny.*url|tumblr|yammer|yelp|posterous|get.*glue|ow.*ly”
  • Include Medium containing “social”
    • Note: Medium containing “social” will capture any additional social networks that might be relevant to your business, assuming you tag your links with utm_ parameters and set the medium to “social”.
  • Note: Is there a social network relevant to your business that’s missing? Once you’ve added the segment, it’s yours to modify!

Apple Users (Desktop & Mobile): Download segment

Definition:

  • Include Operating System matching Regular Expression “Macintosh|iOS”

They’re all yours now

Remember, once you add a shared segment, it becomes your personal Google Analytics asset. Therefore, if there are tweaks you want to make to any of these segments (for example, adding another social network that applies to your business) you can edit and tailor to what you need.

Let’s hear your favourites!

Do you have any favourite Advanced Segments you use across different sites? Share yours in the comments!

Adobe Analytics

Revenue Bands [SiteCatalyst]

When it comes to tracking online purchases in SiteCatalyst, there are many different ways to report on Orders, Units and Revenue. There are the standard shopping cart metrics and an easy way to create calculated metrics using those cart metrics, such as Average Order Value (AOV). However, a question I get from time to time is related to looking at website data by how much money visitors spend in an Order. In this post, I will share some thoughts on how to add Revenue Bands to your SiteCatalyst implementation.

Revenue Bands

So what do I mean by Revenue Bands? I think of Revenue Bands as groupings of revenue amounts by which you can view any of your SiteCatalyst Success Events. For example, let’s say that your boss comes to you and wants to know what percent of Orders taking place last week were between $200 and $300. That seems like an easy question for SiteCatalyst to answer, right? But how would you actually answer it? In the past, I have shown how you could use a Counter eVar to store and accrue Revenue to Date, but that answers a related, but different question than the one at hand.

One way to answer this question would be to use Segmentation. You could create a segment in which Orders were greater than $200 and less than $300 and then apply this to any SiteCatalyst report. However, you may get future questions asking for different amounts, such as Orders greater than $400 or greater than $500, etc. This would necessitate creating multiple different segments, which might be annoying after a while.

Another approach would be to classify your Order ID eVar report. As a best practice, you should be storing each unique Order ID in a custom eVar as described in this blog post. Once you are doing this, you could classify all Orders into buckets so items in each of the rows shown here would be grouped into the correct Revenue Band using SAINT Classifications:

However, this would be a pain to keep updated so I would steer away from this option.

So what would be the easiest way to see SiteCatalyst data by Revenue Bands? My advice is to simply identify the Revenue Bands that you care about, and use some tagging (or a processing rule) to pass these Revenue Bands to an eVar on the order confirmation page. For example, let’s say you want one Revenue Band for “Under $50,” another for $51-$100 and then after that for each one hundred dollar range. You can work with your developers to map this out and then set the appropriate value to an eVar on the order confirmation page. Regardless of how it is set, the end result is an eVar with various Revenue Bands such that you have a report like this:
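As for the tagging itself, a minimal sketch might look like this (the eVar number, the band boundaries and the orderTotal variable are assumptions; orderTotal would come from your page’s data layer):

// Hypothetical: bucket the order total into a Revenue Band on the order confirmation page
var total=orderTotal; // hypothetical data layer variable holding the order total
var band;
if (total <= 50) {
  band="Under $50";
} else if (total <= 100) {
  band="$51-$100";
} else {
  // each subsequent band covers a one hundred dollar range, e.g. "$301-$400"
  band="$"+(Math.floor((total-1)/100)*100+1)+"-$"+(Math.ceil(total/100)*100);
}
s.eVar20=band;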

Obviously, you can also capture the raw revenue amounts in an eVar and use SAINT Classifications to group into Revenue Bands. This would provide more flexibility, but also adds a bit more work. If you are set with your Revenue Bands, I would use the preceding approach, otherwise just pass in the raw Revenue Amounts. However, if passing in raw Revenue amounts, I highly suggest you remove the “cents” portion of the revenue amount so your SAINT Classifications are much easier!

Regardless of which approach you choose, by simply adding the Orders metric to the resulting report, you can see Order percentages for each Revenue Band. Since this is an eVar, we can also break this report down by any other eVar such as Visit number, Product or Marketing Channel. Conversely, we might want to take a report like Marketing Channel and break it down by this new Revenue Band eVar to see a report like this:

This new eVar can also be used for segmentation purposes and actually makes the building of segments a bit easier (in my opinion).

So there you have it. A simple way to add Revenue Bands to your SiteCatalyst reporting…Enjoy!

Adobe Analytics

How to Build a Cohort Analysis in Adobe ReportBuilder

As a follow-up to Adam’s Cohort Analysis post for SiteCatalyst, I wanted to provide an example of how you can easily translate a standard output from Adobe ReportBuilder into the cohort view. I have seen some other posts on how to create a cohort analysis in Adobe ReportBuilder but they all seem to require a lot more work than you should have to put into a dashboard if you use a few more Excel tricks. The following dashboard shows you how you could create a cohort view without having to create a gazillion segments or a bunch of different ReportBuilder requests in the same workbook. Keep in mind you may still have to do some of that extra work if your implementation isn’t set up correctly, but hopefully you have implemented in such a way that doing important analysis like this is easy for you.

What This Report Gives You

I think the coolest thing about this example, and the real value that the data provides, is that you can see the average attrition for each cohort over time. The cohort table below gives you the revenue attrition for each cohort for every month that cohort has been alive. However, I like to end it all with a simple output that is easy to understand. So you’ll notice that I stuck an Average Attrition column at the end which gives a single number representing the cohort’s performance over time. You can see in this example that the Feb-2012 cohort has had the most attrition.

Once you have identified a bad or good cohort, you can then investigate what kind of promotions or programs may have been in place for that group. Any of those may have contributed to the poor (or strong) repeat business.

How to Make This Report

Before starting, keep in mind that there is a lot of date recognition going on in this example using custom American dates. The way Excel recognizes dates varies by locale, so you may have to adjust your classifications to work better for your region if it gives you trouble.

First, insert your ReportBuilder request. In Step 1 of ReportBuilder pick the Original Purchase Month classification and ensure that the time range encompasses all the data you want to look at.

In Step 2, add the Month dimension from the “Dimensions” tab and include Revenue from the metrics tab. Insert the request into cell A5 of the worksheet. Notice that I also adjusted the report to include the “Top 1-10000” values. This is much more than I need but shouldn’t hurt if you have your date ranges correct.

With the ReportBuilder request inserted in the workbook, if you are using the same sort of data shown in the example, that may be all you need to do. Continue reading, though, if you want to learn about the rest of the formulas.

  1. Start creating the table by setting up your start date in cell G6. This formula looks at all the dates under Original Purchase Month and takes the minimum date (the oldest date). This will establish the starting point of our table which will update automatically as you pull in different dates. Note that this is an array function which you have to press control+shift+enter to input. I’m using an array function here to evaluate every date individually otherwise the MIN function doesn’t work. If you are using a more standard date format for your classification you might not need the DATEVALUE in the array function.
  2. In cell G7 I use this formula to increment the month up for each row as it is copied downward.
  3. In cell H6 is where the real magic happens (a sketch of this formula appears after this list). This is another array function (remember to use control+shift+return) and it will match the Original Purchase Month on the same row with the Month that is X number of months ahead. X is determined by taking the column number that the cell is in and subtracting the column number at the beginning of the table. This is a good trick for making an auto-incrementor right in the formula. It will count up the months as you drag the formula over. The thing that really makes this an array function is the two MATCH criteria we have since we need to look for the right Original Purchase Month and Month.
  4. I hate doing manual work so I dragged the formula from cell H6 across the whole table. Then, to account for any cells that generate an error (because there is no data for that month) I applied conditional formatting to make the “#N/A” a super light gray so you know it is there but it isn’t in the way.
  5. The last part is the easiest part. You now make a similar table below (cell H22), calculate the change from month to month (see cell I22), stick an average on the end (column T), and apply some quick conditional formatting. As you apply the formatting be sure to apply separately to the body of the table and the averages since those are really different sets of data to evaluate.
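For reference, here is a rough sketch of what those three formulas might look like, assuming the ReportBuilder request lands in columns A–C starting at row 6 (Original Purchase Month, Month, Revenue) and the cohort table starts at cell G6; the exact ranges, the 10000-row limits and the DATEVALUE handling are assumptions that depend on your layout and locale:

G6 (array formula; confirm with control+shift+enter): =MIN(DATEVALUE($A$6:$A$10000))
G7 (copied downward, one cohort month per row): =EDATE(G6,1)
H6 (array formula; copied across and down the table):
=INDEX($C$6:$C$10000,MATCH(1,(DATEVALUE($A$6:$A$10000)=$G6)*(DATEVALUE($B$6:$B$10000)=EDATE($G6,COLUMN()-COLUMN($H$6))),0))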

Final Thoughts

This was an example around monthly time ranges. Keep in mind that you could use weekly or other granularities. Just make sure you have a classification in place that matches that granularity.

Another thing I would only do for this example is to include the final table on the same sheet as the source data. For a real dashboard, I would move the data and intermediate steps to a different tab and just show the final report on the first tab.

Well, there you have it…a workbook that easily translates a typical ReportBuilder output into a cohort table. Enjoy!

Adobe Analytics

Segmenting on Key Dates [SiteCatalyst]

Recently, while working with a client, I got into an interesting discussion about doing web analysis around key dates in their marketing program. There are many cases in which milestone marketing events take place on specific dates and clients ask me if there is an easy way in SiteCatalyst to slice and dice data by those key dates. What web analyst hasn’t had a situation where metrics spike up or down on a date or date range and you have no idea why! This is a topic I have dabbled with over the years, but this situation forced me to think about it a bit more deeply. The following will share some ideas I had related to this in case this is a question your organization has as well.

Key Date Reporting

As I thought about this scenario, it dawned on me that there is not a great way to report on key dates in SiteCatalyst. Obviously, you can look at any metric report and see spikes in website activity by date. For example, when I worked at Salesforce.com, around the time of our Dreamforce conference, we would see a tremendous spike in Form Completions around the conference dates that might look like this:

From this report, we can surmise that something happened around these dates. If you work in Marketing at Salesforce.com, I guarantee that you would know that these dates coincide with the Dreamforce conference, but what if the marketing event is something much smaller, like a targeted e-mail blast or a social media campaign? What if there were only a modest increase in traffic and metrics on a specific date? I think back to how many times I was called into some executive’s office asking why a particular metric or conversion rate changed on a specific date. I also remember how many times we had to copy a SiteCatalyst chart to a PowerPoint presentation and annotate it with a bubble indicating why there was an increase or decrease. Eventually, Adobe added a way to annotate charts in SiteCatalyst using the Calendar Events feature. This feature allows you to specify a date or date range and add a note to reports in that time period as shown here:

However, adding notes to reports doesn’t allow you to do much in terms of reporting data. Let’s say that you wanted to see the Average Order Value (AOV) for your website during the Black Friday period and compare it to the AOV during other key shopping periods (i.e. Valentine’s Day). Unfortunately, Calendar Events won’t help you very much. It isn’t even easy to compare conversion rates for two date ranges using Segmentation since it is difficult to create a segment on date ranges in SiteCatalyst (unlike Adobe Discover) and even if you could, there is no easy way to compare segments or compare date ranges for Success Events or Calculated Metrics in SiteCatalyst (can only be done for eVars and sProps). Your best bet would be to use Adobe ReportBuilder and pull a data block for the Valentine’s date range and a separate one for the Black Friday date range and compare the two. But what if you want to do this type of comparison natively within SiteCatalyst? Are you out of luck? Have no fear, Omni-Man is here to show you how to do this!

Key Date Segmentation

Back in 2011, I wrote a blog post recommending that each SiteCatalyst implementation have a Date Stamp eVar. The purpose of this eVar was to record the date that Success Events and eVars were set and its primary use was for segmenting on dates. At the time, I was using this eVar to look for actions that took place in the past within SiteCatalyst since only Discover provided the way to segment on dates natively. As I thought about the preceding key dates issue, the idea struck me that my client could leverage this Date eVar to enable additional web analysis for key dates. To do this, you can apply SAINT Classifications to the Date eVar and denote key marketing dates for items normally found in a marketing campaign calendar. Once these items have been uploaded to SAINT, you have an eVar value that can be used to segment data by date ranges of your choosing.
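As a refresher, a minimal sketch of that date stamp tagging might look like this (the eVar number and the date format are assumptions; just be consistent, since your SAINT keys must match these values exactly):

// Hypothetical: stamp every hit with the current date, e.g. "2013-11-29"
var d=new Date();
var mm=("0"+(d.getMonth()+1)).slice(-2);
var dd=("0"+d.getDate()).slice(-2);
s.eVar35=d.getFullYear()+"-"+mm+"-"+dd;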

Let’s walk through the creation of this solution. First, you would set the current date to an eVar in each website visit as described in this post. Next, you would use the Administration Console to apply a SAINT Classification to this Date eVar. In this case, we will do just one classification and call it “Key Marketing Dates.” Next we will fill out the SAINT Classification file with some of our key marketing dates. Note that you can leave non-key dates blank or set a dummy value of “No Key Events” on dates having no key marketing events. Here is a sample SAINT file:
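A rough sketch of such a file (tab-delimited; the dates and event labels are illustrative, and the Key column must match the dates exactly as they appear in the eVar):

Key	Key Marketing Dates
2013-11-25	No Key Events
2013-11-29	Black Friday 2013
2013-12-02	Cyber Monday 2013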

Once this SAINT file has been uploaded and propagated to the SiteCatalyst servers, you can open the classified report:

In this report, we now can see a row for each “Key Marketing Date” which is an aggregation of the specific dates associated with that key marketing date label. From here, we can add any metrics we’d like and can compare metrics for those dates. Keep in mind that these rows can contain one or multiple dates depending upon how you have classified the Date Stamp eVar. In addition to the above “ranked” report, you could switch to the trended view to see one metric trended by up to five Key Marketing Date values. It is also possible to break this report down by any other eVar report using Subrelations. For example, you might like to see the above report broken down by Products.

Another powerful use of this concept is the ability to filter Conversion Funnel reports for these key date ranges since it is now treated like any other eVar:

Finally, you can use these Key Date ranges as segmentation criteria since all SAINT Classifications can be used as segmentation criteria:

A Few Gotchas

As is often the case, no solution is perfect. If you have marketing campaigns or key dates that overlap, things get tricky. One way to address key date overlaps is to list both values in the classification value. Alternatively, you could also create more than one SAINT classification and have each SAINT column designated for a specific type of campaign. For example, the first column might be reserved for e-mail campaigns, the next column might be reserved for social media campaigns, etc. That would allow you to have multiple “Key Dates” for the same date stamp value. However, my hunch is that the above solution will work for most companies.

Another potential issue is that you will only see data in the Key Marketing Date report if the date range you have selected includes the dates that were classified using SAINT. Therefore, when running these types of reports, it would be advantageous to use a longer timeframe (i.e. year).

Well, there you have it. What do you think? Have you done something similar? If so, please share your ideas here as a comment…

Adobe Analytics

Alternative Conversion Flows

Many online marketers have a desire to test out different conversion flows on their website. Whether those flows are for an alternative checkout process or a new application process, the overall desire is the same. By testing out an alternative conversion flow, you can see how website conversion differs and find opportunities to optimize your website and boost conversion. In this post, I will share how you can track these alternative conversion flows in Adobe SiteCatalyst.

Conversion Flow eVar

Luckily, tracking alternative conversion flows is easy in SiteCatalyst. As you probably already know, SiteCatalyst provides Conversion Variables (eVars) that are meant to be set and used to break down various website conversion events (Success Events). Therefore, eVars can be used to store the names of your various conversion flows. For example, let’s imagine that you work for a credit card company and have a standard 4 step application process, but want to test out a streamlined 3 step process. To do this, all you need to do is create a new “Conversion Flow” eVar and pass the appropriate value to it at the start of each process flow. If the current website visitor has been shown the 4 step process, you would pass in a value of “credit-card:4-step” and if the visitor was shown the 3 step process, you would pass a value of “credit-card:3-step” to the eVar. This simple action allows you to segment your website success events into two buckets and see how each conversion flow plays out with respect to conversion:
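The tagging itself can be as simple as this sketch (the eVar number and the flowVersion variable are assumptions; the actual flow assignment would come from your testing tool or server-side logic):

// Hypothetical: record which conversion flow this visitor was shown
if (flowVersion === "3-step") {
  s.eVar15="credit-card:3-step";
} else {
  s.eVar15="credit-card:4-step";
}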

In this example, we can see that the 3-step process looks to be converting better than our default 4 step process. As always, this new conversion flow eVar can be broken down by other eVars (i.e. Campaigns) and can be used as part of a segment in SiteCatalyst. If you want the results of the test to be limited to one visit, you would set the eVar expiration to “Visit” but if you have cases where you want to retain which flow they were in beyond the visit, set the eVar expiration accordingly (i.e. Month).

Another thing to keep in mind when using this conversion flow eVar is that it can be used over and over again. Once you are done with the preceding conversion flow test, you can re-use the same eVar for other conversion flow tests. When re-using this eVar, you will just want to make sure that preceding tests are completed. I have seen some clients who try to cram too much into a conversion flow eVar and forget that subsequent values will overwrite preceding ones if values are passed to the same eVar.

Concurrent Flows or Tests

So what do you do if you have multiple conversion flow tests taking place simultaneously? For example, let’s say that in addition to the 3 vs. 4 step conversion flow test above, you are also testing landing page A vs. landing page B? This presents a real quandary, since SiteCatalyst does not have a great way to deal with this.

The easiest way to track multiple conversion flows or tests is to use multiple eVars. I suggest that you identify the general types of flows or tests you will have and assign an eVar to each. For example, if your website routinely does landing page tests and conversion flow tests, you might reserve one eVar for each. Each visitor would be assigned a value in both eVars and you can break one down by the other. For example, in the preceding example, if visitors were assigned a landing page value in an additional eVar, the above report might look like this when broken down:

Obviously, this approach has some limitations since, if you do a lot of different types of tests, you will use up many eVars, but this is probably the most straightforward approach.

The other approach, albeit one that I have not yet tried with a client, is using a List Var to store the various test values. As you may recall, SiteCatalyst provides three List Vars that allow you to store multiple values in one eVar. I don’t see why you could not use a comma-separated list of values and put all of the various tests that a visitor is part of in that eVar. However, since I have not yet tried this, there may be some unforeseen downsides to doing this. For example, there may be cases in which you need to remember which flows/tests visitors have been in and persist those values to the List Var to avoid a string of two or three test values being overwritten by a single test value deep within your website. If you are going to try this approach, I suggest you pre-pend each value with the type of test it relates to such as “landing:control” and “app-flow:4-step” so you can differentiate each in the List Var report. However, for now, I suggest that you begin with the multiple eVar approach.
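For completeness, a minimal sketch of that List Var idea might look like this (list1 and the test names are assumptions; the List Var’s delimiter is configured in the Admin Console):

// Hypothetical: one List Var holds every test the visitor is part of,
// each value pre-pended with the type of test it relates to
s.list1="landing:control,app-flow:4-step";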

 

Adobe Analytics

Products & SKU’s

When I work with retailers who use Adobe SiteCatalyst, one topic that often emerges is the best way to handle tracking Product ID’s and SKU’s. In this post, I will outline the challenges that exist and share some ways to handle product and SKU tracking in SiteCatalyst.

The Product vs. SKU Dilemma

The primary challenge that arises when it comes to Products and SKU’s is that there are often cases in which you have to set conversion Success Events at the point you know only the Product ID and other cases in which you know the Product ID and the detailed SKU. This is best illustrated by an example. Imagine that you are a retailer and one of the products you sell is a sweater. At the point that a website visitor views the product page for the sweater, you would want to set a Product View Success Event and the Products Variable (s.products). In this case, you most likely have a Product ID for the sweater being looked at by the visitor so you might pass that to the Products Variable such that your tagging looks like this:

s.events="prodView,event1";
s.products=";ProductID-111";

So far so good. However, now let’s assume the website visitor chooses a color for the sweater (i.e. blue) and adds it to the shopping cart. In this case, you probably still know the Product ID, but also have a more detailed SKU that represents the sweater with the color being “blue.” Now you have two tagging choices. During the Cart Addition (scAdd) Success Event, should you pass the Product ID # (ProductID-111) or the more detailed SKU as shown below?

s.events="scAdd";
s.products=";SKU-111_2";

The issue with the preceding code is that your Products report will be disjointed since Product Views will be tied to Product ID’s and Cart Additions (and presumably Orders and Revenue) will be tied to the SKU ID. Here is what a sample Products report would look like if the above visitor were the only visitor to the website:

This is clearly not ideal since you’d like to see a full funnel report for each Product or SKU. While we have the option of cleaning up this report by applying SAINT Classifications to roll it up by Product ID, this can be time consuming. Therefore, let’s look at a few ways to improve upon this reporting.

Solution #1 – Product ID Only

If our goal is to produce a clean Products report such that metrics are consistent for each Product ID, one approach is to only pass the higher-level Product ID to the Products variable for all shopping cart Success Events. In the preceding example, this would mean passing a value of “ProductID-111” with the Product View event and all other shopping cart events. This will allow you to see drop-off between these shopping cart Success Events by Product ID as shown here:

This is the most basic solution, but has one major drawback – it is not possible to see detail below the Product ID. Since you are only setting Product ID’s, there is no way for SiteCatalyst to magically allow you to breakdown the shopping cart Success Events by SKU since you haven’t provided the SKU. This approach works if your products don’t have detailed SKU’s, but if they do, you might find this option limiting and consider moving onto the next approach.

Solution #2 – SKU Merchandising

If your organization subdivides Products into SKU’s at the Cart Addition step, I suggest you use a different approach. In the past I have discussed Product Merchandising, which is a way in SiteCatalyst to associate an eVar value with a specific Products Variable value. Product Merchandising can be used in this situation to bind a SKU to each Product ID using a new SKU Merchandising eVar. In this case we will ask ClientCare to enable a new Merchandising eVar using the “Product Syntax” approach. Once this is done, we can pass the Product ID to the Products Variable as shown in the above solution, but additionally pass the SKU to a new Merchandising eVar whenever it is present:

During the Product View:

s.events="prodView, event1";
s.products=";ProductID-111";

During the Cart Addition:

s.events="scAdd";
s.products=";ProductID-111;;;;evar10=SKU-111_2";

Keep in mind that if we wanted, we could pass in the actual SKU value (i.e. “Blue” as the color) instead of using the SKU #, but passing the SKU# is ok since we can use SAINT to classify it later.

So far this may not seem to get us much further than we were previously, but as I will show, this set-up does make a big difference. First, you can see a complete funnel by Product ID as shown above by using the Products Variable. But now we have an additional eVar that can be used to see the conversion funnel by SKU. To do this, simply add shopping cart metrics to this new SKU eVar report and you can see everything except Product Views (since those don’t have a SKU):

Since you have two different variables, you can also use Conversion Subrelations to break the Product ID down by the new SKU eVar to see the Product ID metrics broken down by SKU for all shopping cart metrics except Product Views:

Final Thoughts

As always, there are many different approaches to things in SiteCatalyst, but hopefully the preceding gives you some things to consider when dealing with Products and SKU’s. If you have other cool approaches you have used, please leave them as a comment here. Thanks!

Adobe Analytics, Reporting, Technical/Implementation

SiteCatalyst Tip: Corporate Logins & Labels

As you use Adobe SiteCatalyst, you will begin creating a vast array of bookmarked reports, dashboards, calculated metrics and so on. The good news is that SiteCatalyst makes it easy for you to publicly share these report bookmarks and dashboards amongst your user base. However, the bad news is that SiteCatalyst makes it easy for you to publicly share these report bookmarks and dashboards amongst your user base! What do I mean by this? It is very easy for your list of shared bookmarks, dashboards, targets and other items to get out of control. Eventually, you may not know which reports you can trust and trust is a huge part of success when it comes to web analytics. Therefore, in this post, I will share some tips on how you can increase trust by putting on your corporate hat…

Using a Corporate Login

One of the easiest ways to make sense of shared SiteCatalyst items at your organization is through the use of what I call a corporate login. I recommend that you create a new SiteCatalyst login that is owned by an administrator and use that login when sharing items that are sanctioned by the company. For example, if I owned SiteCatalyst at Greco, Inc., I might create the following login ID:

Once this new user ID is created, when you have bookmarks, dashboards or targets that are “blessed” by the company, you can create and share them using this ID. For example, here is what users might see when they look at shared bookmarks:

As you can see, in this case, there is a shared bookmark by “Adam Greco” and a shared bookmark by “Greco Inc.” While, based upon his supreme prowess with SiteCatalyst, you might assume that Adam Greco’s bookmark is credible, that might not always be the case! Adam may have shared this bookmark a few years ago and it might no longer be valid. But if your administrator shares the second bookmark above while logged in as “Greco Inc.,” it can be used as a way to show users that the “Onsite Search Trend” report is sanctioned at the corporate level.

The same can be done for shared Dashboards:

In this case, Adam and David both have shared dashboards out there, but it is clear that the Key KPI’s dashboard is owned by Greco, Inc. as a whole. You can also apply the same concept to SiteCatalyst Targets:

If you have a large organization, you could even make a case for never letting anyone share bookmarks, dashboards or targets and only having this done via a corporate login. One process I work on with clients is to have end-users suggest to the web analytics team any reports and dashboards that they feel would benefit the entire company. If the corporate web analytics team likes the report/dashboard, they can log in with the corporate ID and share it publicly. While this creates a bit of a bottleneck, I have seen that sometimes large organizations using SiteCatalyst require a bit of process to avoid chaos from breaking out!

Using a “CORP” Label

Another related technique that I have used is adjusting the naming of SiteCatalyst elements to communicate that an item is sanctioned by corporate. In the examples above, you may have noticed that I added the phrase “(CORP)” to the name of a Dashboard and a Target. While this may seem like a minor thing, when you are looking at many dashboards, bookmarks or targets, seeing an indicator of which items are approved by the core web analytics team can be invaluable. This can be redundant if you are using a corporate login as described above, but it doesn’t hurt to over communicate.

This concept becomes even more important when it comes to Calculated Metrics. It is not currently possible to manage calculated metrics and the sharing of them in the same manner as you can for bookmarks, dashboards and targets. The sharing of calculated metrics takes place in the Administration Console so there is no way to see which calculated metrics are sanctioned by the company using my corporate login method described above.

To make matters worse, it is possible for end users to create their own calculated metrics and name them anything they want. This can create some real issues. Look at the following screenshot from the Add Metrics window in SiteCatalyst:

In this case, there are two identical calculated metrics and there is no way to determine which one is the corporate version and which is the version the current logged in user had created. If both formulas are identical then there should be no issues, but what if they are not? This can also be very confusing to your end users. However, the simple act of adding a more descriptive name to the corporate metric (like “CORP” at the end of the name) can create a view like this:

This makes things much more clear and is an easy workaround for a shortcoming in the SiteCatalyst product.

Final Thoughts

Using a corporate login and corporate labels is not a significant undertaking, but these tips can save you a lot of time and heartache in the long run if used correctly. You will be amazed at how quickly SiteCatalyst implementations can get out of hand and these techniques will hopefully help you control the madness! If you have similar techniques, feel free to leave them as comments here…

Adobe Analytics

De-Duped Success Metrics

When working with SiteCatalyst clients, I often see them ask questions related to how often a particular Success Event takes place at least once during a visit. Examples of this might include the following questions:

  • In what percent of visits do visitors add an item to the shopping cart?
  • How often do visitors who add items to the cart reach checkout?
  • In what percent of visits do visitors conduct an onsite search?

At first glance, these seem like easy questions to answer, but I see clients making mistakes with them. For example, let’s say that you want to answer the first question above and see the percent of all visits that add items to the shopping cart. Most clients would approach this question by creating a calculated metric that divides Cart Additions (scAdd) by Visits. While this seems logical, it will not give you the correct answer, since visitors can add multiple items to the shopping cart within the visit. If Visitor X adds three items to the cart in one visit, the formula in our calculated metric would be: 3 Cart Additions ÷ 1 Visit = 300%.

The issue is that since most people look at this metric for all visits, the individual Cart Addition numbers are obfuscated and you are often seeing an inflated percentage for Cart Add/Visit %. In fact, the same issue applies to all of the questions listed above. If you are looking to compare Cart Additions to Checkouts, multiple Cart Additions or Checkouts taking place in a visit could inflate your ratio.

So how would you resolve this issue? There are several ways in SiteCatalyst to accurately report on the preceding questions so I will share the various methods at your disposal.

Using De-Duped Success Metrics

The easiest way to resolve the preceding dilemma is to set an additional “de-duped” version of metrics that you want to see in calculated metrics like the ones above. Personally, I wish Adobe provided an easy way in SiteCatalyst to see a de-duped version of every Success Event, but that is not currently available. Therefore, you will have to create a second Success Event for those metrics that you want to use in these types of Calculated Metrics. Keep in mind that you are limited to around one hundred Success Events, so you won’t want to do this for every Success Event; use your best judgment.

In this case, let’s assume that you are interested in seeing an accurate percent of visits in which a Cart Addition took place. To do this, every time you set the normal Cart Addition Success Event (scAdd), you should set a second, custom Success Event and call it something like “Cart Adds (De-Duped).” For this second Success Event, you will want to apply Success Event serialization to prevent the event from being counted more than once in a visit. I would recommend using “Once per Visit” serialization since it requires less tagging and can be enabled by ClientCare. By setting this new Success Event, you will have a count of how often visitors add items to the cart, but it will only be counted once, regardless of how many times the visitor adds items to the shopping cart within the visit. When this is complete, you can create a calculated metric that divides this “Cart Adds (De-Duped)” metric by the Visits metric to see an accurate ratio for visits in which at least one Cart Addition took place:
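The tagging for the new event is a one-line addition (event5 is an assumption; the “Once per Visit” serialization itself is a report suite setting rather than code):

// Hypothetical: fire the standard cart add plus its de-duped twin; with
// "Once per Visit" serialization, repeat event5 hits in the same visit are not counted
s.events="scAdd,event5";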

To see the impact of this, let’s imagine that you had five website visitors that performed the following actions:

In this scenario, if we used a calculated metric that used Cart Additions and Visits, our ratio would be 120% for these five visitors. Obviously, this isn’t representative of what really happened. However, if we use our new “Cart Additions De-Duped” Success Event, we will only count one Cart Addition per Visit and see the following data:

Doing this provides a more accurate representation that 60% of visits contained at least one Cart Addition. And since you now have a Calculated Metric that is trustworthy, you can see the answer to this question trended over time using a report like the one shown here:

This Calculated Metric can be easily added to a SiteCatalyst Dashboard and can be used like any other Calculated Metric.

Note: Some companies implement the Carts (scOpen) Success Event at the first shopping cart addition and de-dup it using “Once per Visit” serialization. This is a similar approach, so if you are doing this, you can use the Carts Success Event divided by Visits to see the same cart rate.

As you can see, the addition of one more Success Event allows us to greatly improve our reporting for cases in which you want to see if something happened at least once in a website visit. If you look at the other questions posed above, you will see that the same concept can be applied. For example, if you want to see an accurate ratio of times that visitors do at least one Cart Addition and one Checkout, you might create a “De-Duplicated” version of Cart Additions and Checkouts and use those versions in your Calculated Metric.

Keep in mind that these new “De-Duplicated” metrics will not be accurate when used in Conversion Variable reports (i.e. Products Report, eVars, etc…) since they will only be counted the first time the Success Event takes place. This means that if a visitor adds three products to the shopping cart, only the first product will be associated with a value in the conversion variable (i.e. Product XYZ). These new “De-duplicated” metrics should only be used in global website calculated metrics and the normal metrics (i.e. Cart Additions) should be used in detailed Conversion Variable reports.

Segmentation Approach

If you are averse to using more Success Events to answer the questions above, it is possible to answer them using Segmentation. I think this method is more cumbersome, but will describe how to do it for educational purposes.

To use Segmentation to answer any “how often did X happen in a visit” question, you will have to create a Visit-based segment that isolates visits in which the Success Event in question took place. Using the preceding example, if you wanted to see how often visits contained a Cart Addition, you would create a Visit segment and add the Cart Addition Success Event to the segment as shown here:

Once you have this segment, you can open the Visits report and see how many Visits took place in the desired timeframe. For this example, let’s use the data for the five visitors we described above. In this case, three of the five visits would qualify to be included in the segment so our Visits report would show a total of three. Now you can write that number down and then remove the segment (go back to “All Visits”) and look at the same Visits report for the entire population. In this case, you would see a total of five visits, so you can divide the three Cart Addition visits by the total visits to get the same 60% we saw above. If this is something you will be doing on a recurring basis, you can automate this process using Adobe ReportBuilder. To do this, you would create two different data blocks in Excel – one for all Visits and one for Visits with the above segment applied. Then you can create a formula that divides the totals of these two data blocks and trend it over time using a custom graph.

As I mentioned previously, I think this approach is more time consuming, but it does save Success Events if that is a concern.

Page Name Approach

In theory, there is another way to answer these types of questions, though I don’t recommend it. This approach involves using Page Names. To do this, you can use Adobe ReportBuilder to isolate the specific page on which a Success Event takes place (i.e. Cart Addition page), look at that page’s Visit count and divide it by total Visits. However, since page names can be unreliable and it still requires work in Adobe ReportBuilder, I don’t recommend this approach.

Final Thoughts

If you ever have questions in which people ask you how often something took place at least once in a website visit, I hope that you will think about these concepts and make sure that you are accurately answering them for your organization. While some of these concepts are a bit complex, they can save you the embarrassment of reporting inflated conversion metrics to your organization.

Adobe Analytics, Technical/Implementation

SiteCatalyst Variable Naming Tips

One of the parts of Adobe SiteCatalyst implementations that is often overlooked is the actual naming of SiteCatalyst variables in the Administration Console. In this post, I’d like to share some tips that have helped me over the years in hopes that it will make your lives easier. If you are an administrator you can use these tips directly in the Administration Console. If you are an end-user, you can suggest these to your local SiteCatalyst administrator.

Use ALL CAPS For Impending Variables

There are often cases in which you will define SiteCatalyst variables with a name, but not yet have data contained within them. This may be due to an impending code release, or you may have data being passed to the new variable that hasn’t yet been fully QA’d to the point that you are willing to let people use it. Of course, you always have the option to use the menu customization tool to hide new variable reports until they are ready, but sometimes it is fun to let your users know what types of data are planned and coming soon. Another reason to enter names into variable slots ahead of time is to make sure that your co-workers don’t re-use a specific variable slot for a different piece of data, which can mess up your multi-suite tagging architecture.

So now, let’s get to the first tip. When I have variables that are coming soon, I use the Administration Console to name these variables in ALL CAPS. This is an easy way to communicate to your users that these variables are coming soon, but not ready to be used. All you have to do is explain to your SiteCatalyst users what the ALL CAPS naming convention means. Below is an example of what this might look like in real life:

 

I have found that this simple trick can prevent many implementation issues. For example, I have seen many cases where SiteCatalyst clients open a variable report and either see no data or faulty data. This diminishes the credibility of your web analytics program and over time can turn people off with respect to using SiteCatalyst. By making sure that reports that are not in ALL CAPS (proper case) are dependable, you can build trust with your users. When you are sure that one of your new variables is ready for prime time, simply go to the Administration Console and rename the variable to remove the ALL CAPS and you will have let your end-users know that you have a new variable/report that they can dig into.

Some of my customers ask me why I wouldn’t simply use the user security feature of SiteCatalyst to only let administrators and testers see these soon-to-be-deployed variables. That is a good question. It is possible to hand-pick which variables each SiteCatalyst user has access to using the Administration area. Unfortunately, you can only limit access to Success Events and Traffic Variables (sProps). For reasons unbeknownst to me, you cannot limit access to Conversion Variables (eVars), which are often the most important variables (I have requested the ability to limit access to eVars in the Idea Exchange if you want to vote for it!). But you can certainly use this approach to limit access to two out of the three variable types if desired. Another approach I have seen used is to move all of these impending ALL CAPS variables to an “Admin” folder using the menu customizer.

Add Variable Identifiers to Variable Names

As you learn more about SiteCatalyst, you will eventually learn the differences between the different variable types (Success Events, eVars and sProps). I have even seen that some power users end up learning the numbers of the specific variables they use for a specific analysis, such as eVar10 or sProp12. While normally, only administrators and developers care about which specific variable numbers are used for each data element, I have found that there are benefits to sharing this information with end-users in a non-obtrusive manner.

For example, let’s say that you want to capture which onsite (internal) search terms are used by website visitors. You would want to capture that in a Conversion Variable (eVar) to see KPI success taking place after that search term is used, but you also might want to capture the phrases in a Traffic Variable (sProp) so you can enable Pathing and see the order in which terms are used. In this case, if you create an eVar and an sProp for “Internal Search Terms” and label them as such, it can be difficult for your SiteCatalyst users to distinguish between the eVar version of the variable and the sProp version of the variable (which is even more difficult if you customize your menus).

 

Therefore, my second variable naming tip is to add an identifier to the end of each variable so smart end-users know which variable they are looking at in the interface. As you can see in the screenshot above, I have added a “(v24)” to the Internal Search Terms eVar and a “(c6)” to the Internal Search Term sProp, as well as identifiers for all other variables. This identifier doesn’t get in the way of end-users, but it adds some clarity for power users who now know that internal search phrases are contained within eVar24 and sProp6. Being a bit “old school” when it comes to SiteCatalyst, I use the old-fashioned labels from older versions of the JavaScript Debugger as follows:

  • Success Events = (scAdd), (scCheckout), (e1), (e2), (e3), etc…
  • Conversion Variables = (v0) for s.campaigns, (v1), (v2), etc…
  • Traffic Variables = (s.channel), (c1), (c2), (c3), etc…

Obviously, you can choose any identifier that you’d like, but these have worked for me since they are short and make sense to those who have used SiteCatalyst for a while. Another side benefit of this approach is that if you ever need to find a report in a hurry and you know its variable number, you can simply enter this identifier in the report search box to access the report without having to figure out where it has been placed in the menu structure. Here is an example of this:

 

Front-Load Success Event Names

When you are naming SiteCatalyst variables, you should do your best to be as succinct as possible, since long variable names can have adverse effects on your menus and report column headings. However, there is one naming issue unique to Success Events that I wanted to highlight. Let’s imagine that you have a multi-step credit card application process and you want to track a few of the steps in different Success Events. In this case, you might use the Administration Console and set up variables as shown here:

 

In this case, the variable name is a bit lengthy, but more importantly, the key differentiator of the variable name occurs at the end of the name. So why does this matter? Well, let’s take a look at how these Success Event names will look when we go to add them to a report in SiteCatalyst:

 

Uh oh! Since the key aspects of these variable names are at the end, they are not visible when it comes to adding metrics to reports. This makes it difficult to know which Success Event is for step 1, 2, 3, etc… You can hover over the variable name to see its full description, but this is much more time consuming. I have asked Adobe repeatedly to make the “Add Metrics” dialog box horizontal instead of vertical but have not had any success with this (you can vote for this!). In this case, I would suggest you change the names of these Success Events to something like this:

 

Which would then look like this when selecting metrics:

 

Keep in mind that there is no correlation between the length of the variable definition box in the Admin Console and when the Success Event name will get cut off in the Add Metrics dialog box, so don’t get tricked into believing that if it fits in the box you will be OK!

Final Thoughts

These are just a few variable naming tips that I would suggest you consider to make your life a bit easier. If you have other suggestions or ideas, please leave them here as comments so others can benefit from them. Thanks!

Adobe Analytics

New Calculated Metrics in Adobe Discover

You have always been able to use segments and calculated metrics in Adobe Discover, but now you can include segments WITHIN your calculated metrics! This greatly increases the flexibility of your metrics and will enable you to do more comparison work within Discover, which historically has been very difficult.

As we walk through this feature, let’s use an example. Assume that you are interested in understanding the mobile vs. non-mobile breakdown of your campaigns. Previously, you could apply a segment to get the same data, but now we can build out metrics that make this easier and help to differentiate mobile from everything else. This is useful since, by default, there is only one mobile-specific metric in Discover: mobile views.

To start, access the new metric builder by going to the Metrics pane on the left-hand side, selecting the options icon, and then selecting “Calculated Metric Builder”:

You will then see the Metric Builder which allows you to drag metrics and operators over to the formula field. Below is how you would build a simple Order Conversion metric:

Adding Segments to Your Discover Metrics

Now we can make it really fun by adding segments to the mix. The segments are hiding behind the metric tab on the top left. For our mobile example, let’s say that we want to build a metric that gives us the percentage of visits that were from a mobile device. To do this you would drag over and divide two visits metrics, apply a “Visits from Mobile Devices” segment to the numerator (as shown in the screenshot below), and adjust your metric name and formatting as needed:

After you save this metric you can then include it in your campaigns report to see the percentage of the campaign that came from mobile. You can also sort by this metric to see what campaign has the highest percentage of mobile usage.

Include Calculated Metrics in Other Calculated Metrics

After you start building your calculated metrics, you may want to include an existing calculation in another metric. The new builder lets you do that as well. Once you create a metric, as we did with our “% Visits from Mobile” metric, it will appear in your metric list with a small chart-looking icon next to it. We will build on this to get the percentage of traffic NOT from mobile. We do this by entering a number field of “1” (red arrow in screenshot below) and then subtracting the previously created “% Visits from Mobile” metric as shown here.

Other metrics you could build for our mobile report may include:

  • Mobile Conversion
  • Non-Mobile Conversion (you have to make the Non-Mobile segment first)
  • Tablet Visits as a percent of all Mobile Devices (you have to make the tablet segment first)
  • Return Mobile Visits as a percent of all Mobile Devices (you have to make the return mobile segment first)

You can go on and on, but hopefully that gives you an idea of what you could do.

Comparisons using Metrics

If you think about comparisons, they are just an extension of the new formulas that we can now make. All you have to do is create a metric that compares the data points you are interested in. To make this easier, Discover lets you select two columns that are already in your report, and you can right click on the column header to select some of the quick calculation options. I wish it had an (A-B)/B option in the list, but for now we will use an A/B Percent comparison to quickly see the percentage change between our Mobile and Non-Mobile Conversion metrics. Here is where you select the option:

This will then give you a new column with the comparison as shown here:

That makes for an easy comparison. If you would like to tweak the comparison you can right click on the column header and select edit. I would then modify the comparison as follows to get an (A-B)/B comparison instead of just A/B.

Be careful to keep track of what is in your comparison and use meaningful names since the metric doesn’t dynamically reference the columns that it was built from. If you were to switch out one of the original metrics the comparison would not automatically update. That would be a cool feature, though.

Final Thoughts

While this functionality has been in tools like Adobe Insight for a long time I am happy to see it available in Discover. It provides much more flexibility in creating metrics and comparisons. I had a client once in the theme park business that liked to segment their orders by the many different checkout types they had. They could use this to create specific metrics for each type without having to burn up a lot of events. Hopefully this makes its way into SiteCatalyst.

Adobe Analytics, Analytics Strategy

eMetrics Chicago – Wrapup

Before too much time passes during these dog days of summer, I thought that I’d offer a recap of the eMetrics Marketing Optimization Summit that took place in Chicago recently. First of all, Chicago really digs analytics. Despite a smallish eMetrics crowd of around 100 people, there was lots of energy, young talent and academic interest.

I had the privilege of sharing a few minutes of the opening keynote with Jim Sterne where I made a few announcements about the newly rebranded DAA (Digital Analytics Association). I proudly announced that we transitioned 25% of our Board of Directors by adding new members Eric Feinberg, Peter Fader and Terry Cohen to our diverse assembly of directors. I also took the stage in my new role as President of the DAA and shared my thoughts about the epic journey we’ve collectively embarked on in this industry that we call digital analytics. This is a theme that I reiterated during my closing presentation on The Evolution of Analytics, whereby I concluded that the future state of evolution is up to each of us to determine.

But speaking of future success, I commend the local DAA Chicago Chapter for the great strides they’ve made in not only organizing our open industry meeting, but also in championing the cause for digital analytics in the Windy City. The DAA has much better brand recognition and awareness in Chicago than I thought. But I suppose I shouldn’t be too surprised because, after all, according to the DAA Compensation scan, Chicago is the second best place to live if you’re seeking a job in analytics.

Moving on to more details about the conference, Jim Sterne always encourages attendees to measure the value of eMetrics not just in the content, but also in the hallway conversations and the key tidbits that you take back to your desk when all the sessions and lobby bar fun is over. In Chicago, for me the hallway conversations focused on several hot topics in analytics including: tag management, privacy and, of course, the perennial analytics issues of people, process and technology.

On the privacy front, the controversial WSJ article about Orbitz’s targeting was a hot topic of conversation for me (and Scot Wheeler) during the conference. Despite the fact that the WSJ got the headline wrong…it reiterated how very little the average consumer knows about what we all do…

I also learned (privately) that Amazon is doing some crazy brilliant stuff, but it’s so good that they can’t even talk about it. The senior brass at the really good companies are very protective, but web analysts can still be plied (at least a little) with alcohol at a Web Analytics Wednesday.

And finally, people who do know what we do are struggling to pull together the pieces for making an analytics program work…finding staff, selecting tools, building process. These are perennial issues in digital analytics and why we’ve built our consulting practice here at Analytics Demystified to help solve these problems.

But as always at eMetrics, I was invigorated to speak with new entrants to digital analytics and the usual suspects as well. For me, I’ll be taking from this eMetrics something back to my desk and to my clients…and that is a fresh perspective.

Anyone who has been in this game for any length of time should recognize that it’s easy to become steeped in your own myopic view of digital analytics and continue to rehash the same perennial issues with the same examples over and over again. Yet any good analysis – or method of teaching – needs to evolve to remain relevant. And thus, this eMetrics taught me that experience needs to be tempered with the fresh eyes of unbridled passion and enthusiasm. While we may hold the frameworks and fundamentals, it is they who hold the spark. I, for one, appreciate what the next generation of digital analysts is bringing to this industry and hope to learn as much from them as I can offer.

What do you think?

Adobe Analytics

Cart Persistence and Duration [SiteCatalyst]

I was recently working with a client who had some interesting questions. In general, he wanted to see different derivations of how long products had been in the shopping cart prior to being purchased. Some of his detailed questions included:

  1. Of all visitors hitting my website today, how many already have items in their Shopping Cart (which is persistent on this website)?
  2. For those visiting the site today, how long have they had items in their Shopping Cart (i.e. 1 Day, 10 Days, etc…)?
  3. At the time visitors purchase items, for how many days had they had items in the Shopping Cart?
  4. Is it possible to see cart duration by product?

While it is easy to see why this might be interesting to know, after some reflection, it turned out not to be a very straightforward thing to understand/report upon using Adobe SiteCatalyst. I wrestled with a few different ways to answer these questions, but ran into a few roadblocks. In the end (and after bouncing some ideas off some friends), I settled on an approach that seemed to work (by no means the only one), so I thought I would share it in case it is helpful to others out there with the same questions. If you have the Adobe Insight product, solving this question is much easier, but this post will deal with answering it for those of us who only have SiteCatalyst.

Establishing Cart Addition Date

The first challenge is to identify the date on which each visitor added items to the Shopping Cart. This is similar to an earlier post I had about Date Stamping, but with a twist. In the Date Stamping post, we just set the current date of each visit, but in this case, we want to set the date that a Product was added to the Shopping Cart (I suggest you use an eVar with Original Value allocation, expiring at the Purchase event). Once you have done this and have let the data process for a while, you can open the new Persistent Cart Date eVar report, add the Visits metric and see a report like this (in this example using the current date of 3/3/12):

Here we can see that we have answered our first question. By looking at the “None” row, we can see that approximately 92% of the time, Visits are from people who have not previously added items to their Shopping Cart (this does not include those pesky cookie deleters!). If you broke this report down by the Products variable, you would be able to see the actual products that were associated with each date:
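
Before moving on to durations, here is a minimal sketch of the tagging side, assuming eVar30 is your “Persistent Cart Date” eVar (configured with Original Value allocation and expiration on the purchase event); the variable number is an assumption for illustration:

// Stamp the cart-add date into an assumed eVar30 whenever an item is carted.
// With Original Value allocation, only the first date a visitor carts an item
// will persist, even though we set the eVar on every Cart Addition.
var d = new Date();
s.events = "scAdd";
s.eVar30 = (d.getMonth() + 1) + "/" + d.getDate() + "/" + d.getFullYear(); // e.g. "3/3/2012"
s.t();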

Identifying Duration in Cart

Our next challenge is to determine exactly how many days products have been in the shopping cart. As mentioned above, there are actually two flavors of this question. The first is to see how long ago products were added to the shopping cart at the time the current visit takes place, and the other is to see how long ago products were added to the shopping cart at the time a purchase takes place. We’ll start with the former.

With the preceding report and its breakdown by the Products variable, we have all of the key elements needed to figure out how long items have been in the shopping cart. However, to calculate this, it’s easier to use Microsoft Excel, so I suggest you move the data to a spreadsheet using ReportBuilder or Data Extracts and then add some formulas to break out the data as shown here (I have replaced the None row with the value “NO CART” in Excel):

Once this is done, you can create a pivot table to group like items together and build a report like this (for illustrative purposes, I only created a few rows but in reality there would be many more rows of dates in this report):

In this pivot table, we can still see our same percentage of Visits with no items in the shopping cart, but now we can see that, among visits with carted items, the largest percentage had items in the cart for 6 days. If you had more data, the next logical step would be to group the number of days into meaningful buckets using SAINT Classifications or directly in Excel. Also, note that instead of moving data to Excel, another way to create a report like the one shown here would be to create a SAINT Classification file that maps the current date to the number of days in the past (i.e. 3/2/12 = 1 Day), but we’d have to update the SAINT file each time to adjust for the current date, which would be a pain!

Next, since we have the report data by product ID, we can also break down the above pivot table by product to see which products are associated with each # of Days in the cart:

Conversely, if your organization is more product-focused, you can flip the pivot table and look at Product ID’s by days in shopping cart like this (which will have more values per product ID when the data is real!):

You will note that these reports help us answer the first cart duration question which is how long products were in the shopping cart at the time a Visit took place, but the same process can be used to answer the second question which is how long products have been in the shopping cart at the time of purchase. To do this, all we need to do is modify our original SiteCatalyst report to show Orders instead of Visits like this:

Note that in this case, we should no longer see a “None” row since, to complete an Order, something must have been added to the shopping cart prior to purchase. It’s likely that you will also see that the majority of the rows are for the current date (which in this case is 3/3/2012). Once you create this report, you can export it to Excel and create the table and pivot table in the same manner described above. This might result in a report that looks something like this:

Product-Specific Cart Duration

The last question to be answered is related to the duration in cart of each product. In the examples above, we have set a date when products were added to cart, but this date was a general one or the date that the first product was added to the shopping cart. There will be cases when you want to get more granular and know the date for each product since visitors can add a product to the cart on 2/28/12 and then add different products to the cart on 3/1/12. If you desire this level of detail, in addition to setting the Persistent Cart eVar described above, you can set an additional Merchandising eVar (with “Original Value” allocation and expire at the “Purchase” event). This will “bind” the date to the specific product that is being added to the shopping cart. Since this is more complex, I won’t go into all of the intricacies here, but if you have questions, feel free to contact me.
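
For illustration, a hypothetical sketch of that Merchandising eVar is shown below, using an assumed eVar31 and product-syntax binding (the variable number and product names are placeholders):

// Product syntax: category;product;quantity;price;events;merchandising eVars.
// Setting eVar31 (an assumption) inside each product entry "binds" the
// add-to-cart date to that specific product.
s.events = "scAdd";
s.products = ";Gas Grill;;;;eVar31=2/28/2012,;Patio Cover;;;;eVar31=2/28/2012";
s.t();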

Final Thoughts

As you can see, this is a somewhat complex solution, but should get you the answers you need. There may be other ways to answer these questions, so if you have tackled this, feel free to leave a comment here. Thanks!

Adobe Analytics

ACCELERATE Chicago Debrief

I’m on the plane returning home from the second-ever Analytics Demystified ACCELERATE Conference and I can’t help but smile as I think about what an incredible event this was. For starters, demand for this event maxed out the ~200-person capacity of our Chicago venue at the Gleacher Center, but we managed to comfortably squeeze in all of our registered guests as well as everyone who showed up on the waiting list. Of course, Chicago was well represented, but there was also a preponderance of Ohio Analysts in the house as well. The OHiO solidarity was reiterated with incessant demands for a Columbus ACCELERATE sometime in the not too distant future…to which we say, anything’s possible 😉

Once we kicked off, the room was electrified by Eric Peterson’s inspiring opening comments and you could definitely feel the energy in the air. We promised our attendees a fire hose of content and delivered by honing our “10 Tips in 20 Minutes” format to keep things going at a frenetic but well-managed pace. Based on comments and feedback we received, I think it’s safe to say that anyone who was there will tell you that we over-delivered. You can check out the recent Tweets on #ACCELERATE yourself, but I’ll offer up a few notable comments:

 

medmonds: Very impressed with the #ACCELERATE conference – insightful tips & strategies for optimizing digital channels from industry leaders #MEASURE

Jonghee: Completely satisfied with #ACCELERATE. It’s quality is better than some of the expensive ones. Great job @erictpeterson and the team!

Ableds2: Few industries/professions strive for excellence like this group. I am honored to be surrounded by amazing people #ACCELERATE #measure

 

#ACCELERATE by the Numbers (April 4, 2012)

One of my responsibilities during ACCELERATE, beyond delivering my 10 Tips on Using a Social Media Measurement Framework, was to track the Twitter stream to see what was coming in throughout the day of the conference and who the BIG Tweeters were. I thank TweetReach for providing access to their monitoring tool, which allowed me to conduct my analysis in near-real time as Tweets tagged with #ACCELERATE were flying across the Interwebs.


***Note: My TweetReach Tracker is set up for East Coast time, so this reflects a -1hr Time Zone delay.***

Exposure: (measured in Top Contributors by impressions) We did a pretty good job overall of sharing the love emanating from ACCELERATE on Twitter with 3.23 million impressions reaching an estimated 240k people on April 4, 2012. The 6 top contributors delivered 69% of the total impressions and they included: @EricTPeterson, @EndressAnalytic, @johnlovett, @jennyweigle, @monishd, and @MicheleJKiss (who wasn’t even there!). If you’re looking for folks to get the word out on Twitter, consider this your shortlist.

Velocity: (measured in ReTweets and total impressions) Overall the most re-Tweeted tweet for the 24-hr period was by Erica Chain, who garnered 10 RT’s on her 140 character missive about Joan King’s Crate & Barrel presentation. Note to the velocity Tweeters: pictures get more RT’s! I had a chance to talk with Erica and learned of her amazing story which was an added bonus. But, Monish Datta won our cash money prize for the most Retweeted Tweet as of 3PM. He attained 7 RT’s and over 16k impressions. Monish and team from Victoria’s Secret were well represented at ACCELERATE and they all added great value and velocity to the Tweet stream.

Penetration: (measured as the percentage of #Measure Tweets containing the #ACCELERATE hashtag) Over the course of the day, #ACCELERATE occupied 71.2% of all Tweets on the #Measure hashtag. Since we were delivering a fire hose of information during ACCELERATE, we encouraged attendees to Tweet out over our hashtag as well as the #Measure hashtag throughout the day. Apparently they listened, because we dominated #Measure by sharing the free content delivered at ACCELERATE with anyone who cared to listen in, one tip at a time. One UK onlooker even commented that either it was lunchtime or Twitter had crashed as our activity came to an abrupt slowdown during our noshing hour.

Impact: (measured as the perceived value generated by ACCELERATE) The true impact of this event is best measured by the actions that attendees will take when they arrive back at their desks and apply their newfound insights into their daily work. While this is a real tough one to quantify, measuring impact on these types of things always is. For me and my Partners at Demystified, we gauge our success by the speaker feedback we receive, the generous donations to our Analysis Exchange scholarship fund, and through the comments that we get from individual attendees. By all measures, this was a smashing success.

In closing, I’d like to issue one last word of thanks to our generous sponsors: Ensighten, ObservePoint, OpinionLab and Tealeaf who made this event possible. And if you missed ACCELERATE Chicago, try to make it to Boston. We’ll be doing it again on October 24th, and we hope to see you there.

Adobe Analytics

Are Your Employees Wasting Your Marketing Budgets?

Every once in a while, especially when I am working with large clients, I ask them a simple question that befuddles them. The question I ask is this:

“Do you know how much money you are spending each month on paid advertising that is being used by your own internal employees?”

After they pause for a moment, and realize that they don’t in fact know the answer to this, I sometimes see a spark of panic in their eyes. If I could read their minds, it might go something like this:

“Why is he asking this? Should I know that? I’m sure our employees are smart enough to not use paid advertising like paid Google keywords or display ads to get to our own website right? I hope so… But what if 10% of our ad spend is on lazy employees coming to our website through paid search? Urghhh!!!”

At this point, I assure them that they are probably not wasting a huge amount of their marketing budget on their own employees, but the reason I ask is that it is very easy to know and make sure that you don’t have an issue. Therefore, in this post, I thought I’d show you a simple way to quantify this.

Excluding Employee Traffic

The first step in seeing how much you are spending in advertising on your own employees is to isolate your internal employee traffic. The good news is that this is something you should already be doing today; if you aren’t, you should start right away. The easiest way to exclude employee traffic is to identify the corporate IP address ranges that your company uses. While this is not perfect, it should be close enough. Even if you have remote employees, hopefully they are using a secure VPN, which will route their traffic through your corporate IP ranges (one hint for Adobe SiteCatalyst customers: if you have IP ranges that change frequently, I would suggest implementing a DB VISTA rule that holds all of your IP address ranges, since that will allow you to add/remove them as needed). Once you know these IP ranges, you can either move that traffic to a different data set (i.e. report suite for Adobe SiteCatalyst customers) or tag them as “employees” in a web analytics variable and build a segment to isolate this traffic. Regardless of how you do this, the ultimate goal is to have a data set or segment that you are pretty sure represents your employees.
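
As a simple sketch of the “tag them as employees” route, assuming your server (or a VISTA rule) can detect corporate IP ranges and expose that flag to the page, and that eVar45 is a free slot (both are assumptions for illustration):

// isCorporateIP would be populated server-side based on your known IP ranges.
var isCorporateIP = false; // placeholder; set dynamically in your page code
s.eVar45 = isCorporateIP ? "Employee" : "External"; // build a segment on this value later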

External Campaign Reports

Once you have your employee traffic, the next step is to look at your campaigns report for this employee segment. If you are doing a good job with your external campaign reporting, you should have a way to see how many visits you are getting from each paid advertising element or from each marketing channel as a whole. For example, here is a sample SiteCatalyst report showing how many Paid Search (SEM) Visits took place as seen in a report suite that contains only employee traffic:

As we can see here, we’re getting a few hundred visits each week. Next we can compute an average cost for Paid Search advertising and get a rough estimate of how much money we are spending on employees for Paid Search. In this example, for the week of September 25th, we had 372 visits from Paid Search, and if our average cost per ad was $3.50, our estimated cost would be $1,302, which annualized would be around $68,000. Depending upon the size of your advertising budgets, that could be good or bad. But let’s say that you work for American Express and have over 60,000 employees. It could be the case that you have 20,000 paid search visits from employees in a given month. If the average cost of paid search keywords were $.50, you could be spending $120,000 of your marketing budget a year on employees! That could get you a few extra web analysts on your team!

Also keep in mind that the same principle applies to display advertising and other marketing channels that cost you money. Finally, if you need to isolate the specific advertisements that are being used, you can do this by looking at the detailed tracking codes in your campaigns report. I have found that it is normally the branded advertisements that are the top culprits.

Next Steps

Hopefully, after doing this quick analysis, you’ll find out that you don’t have any major issues. But if you do find that your employees are being a bit lazy and eating up large portions of your marketing dollars, there are some simple ways to rectify this. I have found that at large companies, most of the time, employees aren’t aware that they are actually costing their employer money. While people like us live and breathe online marketing, your employees may have no concept of how online advertising works. By using the data you create in your analysis, you can spread the word through company newsletters or intranets, educate the company on how much money is being wasted, and ask employees to use free alternatives like organic (SEO) links if they need to get to the website.

Besides saving your company a little bit (or a lot!) of money, this is a fun way to show executives at your company the power of web analytics. If the amount of money you save by doing this analysis is significant, feel free to use the data to get you some additional headcount or tools that you have been longing for!

Adobe Analytics

My 2012 Summit SiteCatalyst Feature Wishlist [SiteCatalyst]

Each year at the Adobe (Omniture) Marketing Summit, customers are given an opportunity to “vote” for new product features while Brett Error reviews them (and cracks a few jokes!). From the rumor mill, it sounds like Brett may no longer be around (??), but even if he isn’t, hopefully the session will live on. Each year around the time of Summit, I like to look back at the past year and think about which SiteCatalyst features are not available that would have helped me and my clients the most. The SiteCatalyst product team is always swamped with great ideas from the Idea Exchange and has been doing a great job of pounding them out. Therefore, this list is not meant to imply that my requests are more important than others they are working on; these are just the ones whose absence I have personally felt the most pain from (some of them were on last year’s list as well, which you can read here). All of these ideas are in the Idea Exchange, so if you have experienced similar cases where they would help you out, please vote for them there (shortcut links to items in the Idea Exchange are provided for each) and possibly in the Summit session should they arise…

Segmentation Enhancements

In a recent post, I went through some of the reasons why companies might decide to abandon multi-suite tagging and just rely on v15 Segments instead. Currently, there are a few features holding me back from going “all in” on v15 Segmentation. These are:

  1. The ability to compare segments in reports (http://bit.ly/yERLbr). As I mentioned in my previous post, it is easy to compare the same report for two report suites or ASI slots, but it is not yet possible to do the same for two (or more) segments without using Discover.
  2. The ability to have security for segments so you can assign who can see data for a segment (http://bit.ly/yh0djM). As Ben Gaines astutely pointed out, many people use report suites for security reasons to determine who at an organization can see which data. It would be great if this could somehow be duplicated using segments. However, I think that this is not possible until my next feature request is addressed.
  3. The ability to lock down eVars like you can Success Events and sProps (http://bit.ly/wSMLU3). For years, I have asked the SiteCatalyst team to provide the ability to add User/Group security to eVars. Currently, it is possible to prevent a user or group from seeing a specific sProp or Success Event, but for some strange reason, you cannot do this with eVars. Once you can lock down eVars and you can lock down segments, you can truly secure your data set and cease to rely on multi-suite tagging or additional company logins to enforce security.
  4. One item that is unrelated to v15 segmentation and multi-suite tagging, but still related to segmentation, is the ability to segment on a path (http://bit.ly/xzkeTr). There are many cases in which you would want to isolate visitors or visits where visitors navigated in a certain way. Hence, it would be great if you could add a 3 or 4 step flow as a valid way to segment. Since Pathing is available on all sProps, my hope is that this functionality would work for any sProp that has Pathing enabled, not just Pages and Sections.

Multi-Session Enhancements

One of the limitations of SiteCatalyst is that there are many aspects that are only session based. In the future, I would like to see this restriction lifted. Here are a few examples of what I’d like to see:

  1. Ability to see multi-session Paths (http://bit.ly/zimT2O) so you can see how visitors navigated the website across multiple sessions.
  2. Ability to see multi-visit campaign code attribution (http://bit.ly/yzBTUE) in a way that is better than just first touch and last touch or the Cross Visit Participation plug-in. Even if the current option for “Linear” allocation worked cross-session, that would be a great step forward.

Report Sorting Enhancements

If you spend a lot of time in SiteCatalyst reports, you are familiar with the fact that you can only sort by metric columns. I would like to have the following sorting enhancements:

  1. The ability to do a weighted sort so you can easily filter only the top X number of rows before sorting (http://bit.ly/zsJNps). I am sure the following scenario has happened to you at some point. You add a calculated metric like Bounce Rate to a report and then you choose to sort. You end up getting items with a 100% bounce rate, but when you dig deeper you see they have only a few values. What you really want is the ability to filter the report for only the top 50 values and then to apply a sort (like Google Analytics provides). Currently, this has to be done in Excel, but should be native to SiteCatalyst.
  2. The ability to sort by the value column (http://bit.ly/ylULpS). There are some cases in which you would like to sort by the actual values passed into SiteCatalyst instead of by a metric column. Currently, you can work around this by using search filters, so this isn’t a super-high priority, but it would be nice to simply have the ability to sort by the value column for times it is advantageous.

Final Thoughts

Obviously, I could list many more, but the above list of items are the ones that I have run into the most. If you have others that you would like to see elevated in priority, feel free to list them here. Thanks!

Adobe Analytics

Uber Success Events [SiteCatalyst]

Every now and then, I run into a unique situation with a client that requires what I call an “Über” Success Event. It isn’t possible to define this easily, so in this post, I will illustrate what it is and when you might want to use it…

eVar Expiration Limitation

For those who faithfully read my SiteCatalyst blog posts, you will have heard me lament two major eVar expiration limitations. The first limitation is that you cannot expire an eVar at either a Success Event taking place OR a time frame (whichever comes first). This limitation can be rough, since there are some cases in which you’d like to expire an eVar when Success Event X takes place, but if it doesn’t take place after three months, you might want to clear out the existing eVar value. Not cleaning out this value could result in that eVar value receiving credit for a Success Event that takes place a year later when it really shouldn’t. I have suggested this change in the Idea Exchange (http://bit.ly/yXqtqS) so feel free to vote for it there.

However, this post is focused on the second eVar expiration limitation, which is that you cannot expire an eVar at one Success Event OR another Success Event. In this case, you basically want to tell SiteCatalyst to expire the eVar when Event X or Event Y or Event Z takes place. Unfortunately, this isn’t possible in the Admin Console, since you can only pick one expiration item (Event or Time Period) from the list. This may not sound like too much of a limitation, but the following example will illustrate how it can cause problems.

Let’s imagine that you are a B2B Lead Generation website that sells its products online or allows its visitors to fill out a form and work with a sales rep to complete the purchase. You have a standard conversion flow with three steps, each with an associated Success Event (Event 1, Event 2, Event 3). So far, so good. However, when visitors reach the third step of the process, they can proceed to purchase online (scCheckout, purchase) or view and submit a form (Event 4, Event 5) to have a sales rep call them and finish the sale.

In this situation, a website visitor can be viewed as successfully completing the conversion funnel two different ways. One way is to purchase online and the other is to submit a form. It’s as if there is a fork in the road, but both paths can lead to a successful conversion. Obviously we can track each of these steps using Success Events, but the following quirky situations arise as a result of this:

  1. It is easy to combine both of the final Success Events (Orders and Event 5 in this case) in a regular eVar report by creating a Calculated Metric that adds Orders to Lead Form Submissions (Event 5).
  2. However, it is not possible to use a standard SiteCatalyst Conversion Funnel report since you cannot include Calculated Metrics in Funnel reports (to help me change this, vote for this: http://bit.ly/zl3bUs). There are also a host of other issues with Calculated Metrics that you can read about in the Idea Exchange (i.e. Not available in DW, Can’t segment on them, No Participation, Can’t see totals in reports, etc…) so they are not really meant for “heavy lifting,” so to speak.
  3. But the biggest issue is the one I raised earlier. What if we want to expire a bunch of eVars when the visitor Orders OR they submit a Lead Generation Form to a sales rep? We are pretty much out of luck since we can only pick one Event in the Admin Console to use for eVar expiration purposes. Bummer!

As I stated previously, this isn’t an everyday occurrence, but I have seen it wreak havoc on some clients so I wanted to share an easy workaround to solve this last point.

The “Über” Success Event

So now that we have framed the problem, here’s how you can solve it. In the scenario above, what we would want to do is set a new Success Event at the same time that we set both the Order (purchase event) and the Form Submission (Event 5). This new Success Event (let’s say that it is Event 20) would be set with every Order and Form submission, so it should add up to the total of both (see the tagging sketch after the list below). Doing this one simple thing has some wonderful consequences:

  1. There is no need to create the Calculated Metric described previously since this new “Event 20” would add up to the same figure of Orders + Lead Generation Form Submissions
  2. Unlike the Calculated Metric, you would be able to use this new “Event 20” in a Conversion Funnel, so you can have a funnel of Event 1, Event 2, Event 3 and then Event 20, which would represent ALL success (obviously we don’t know if the people filling out forms were truly successful, but for this scenario, let’s not worry about that and assume that they were!). This also removes all of the shortcomings of Calculated Metrics I mentioned earlier.
  3. But most importantly, if we wanted to expire any eVars when one of these two Success Events takes place, we now have a way to do that. All we have to do is to go to the Admin Console and set the eVars to expire at Event 20!
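
Here is the tagging sketch promised above, with event20 standing in for the “Über” slot (all event numbers and the product string are placeholders from this scenario):

// On the online order confirmation page, event20 fires with every Order:
s.events = "purchase,event20";
s.products = ";Product XYZ;1;99.99"; // example product string
// On the lead form thank-you page, event20 also fires with every submission:
s.events = "event5,event20";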

Hence, setting this extra Success Event that sits on top of the other two Success Events is what I affectionately call my “Über” Success Event! This is just one example of how you can use this concept, but I have seen many more. Enjoy!

Adobe Analytics

Product Page Tab Usage [SiteCatalyst]

If you sell products on your website, you will often find the need to provide detailed information to those browsing your products. For example, below you can see a product detail page for a Gas Grill. As you can see, there are tabs for Specifications, Ratings & Reviews, etc…

One of the things I have been asked by clients is to provide a way for them to see how often each of these pieces of information (usually in the form of tabs) is used. Specifically, I am usually asked the following questions:

  • Which tabs are used the most?
  • Which tabs are used the most for each product?
  • In what order are these tabs clicked in general and for each product?
  • Are there certain tabs that lead to conversion more than others?

Therefore, in this post, I will share some tips on how to answer these questions…

Tracking Tab Usage

To start, let’s focus on the easiest question – which tabs are being used most often. To do this, we need to capture the name of the tab each time it is accessed. While I normally favor eVar variables over Traffic (sProp) variables, this is one case in which I prefer to use sProps, for reasons you will see below. Therefore, when a visitor clicks on one of the tabs on a Product Detail Page (PDP), I like to pass the name of the tab, along with the product it relates to, into an sProp. For example, on the Product Detail Page shown above, if the visitor clicks on the “Ratings & Reviews” tab while on the “Kenmore 4-Burner LP Gas Grill,” I would pass the following:

s.prop60=”Kenmore 4-Burner Gas Grill:Ratings & Reviews”

By concatenating these values, I know that I had one instance of the combination of this particular product and the specific tab that was clicked. If the page doesn’t reload when the tab is clicked, you may have to use Custom Link tagging to set this sProp. In addition, it is important that you also capture the default tab item, which is normally some variation of “Overview.” In this case, when the Product Detail Page first loads, the value passed to the sProp would be “Kenmore 4-Burner Gas Grill:Overview.” By setting these values to an sProp, you can easily see how often each Product/tab combination is viewed and if you have unique visitors enabled for the sProp, you can see uniques for each combination as well:
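
If you do need Custom Link tagging for tabs that don’t reload the page, a hypothetical handler might look like the sketch below (prop60 matches the example above; the function name and use of s_gi/s_account are illustrative assumptions):

// Fire a custom link call carrying only prop60 when a tab is clicked.
function trackTabClick(productName, tabName) {
  var s = s_gi(s_account);              // get the SiteCatalyst tracking object
  s.linkTrackVars = "prop60";           // send only prop60 with this call
  s.prop60 = productName + ":" + tabName;
  s.tl(true, "o", "Product Tab Click"); // "o" = custom (other) link
}
// e.g. trackTabClick("Kenmore 4-Burner Gas Grill", "Ratings & Reviews");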

Next, you can use SAINT Classifications to group all similar tabs together to see a rollup of use across all products. In the preceding example, we might want to group all cases of “Ratings & Reviews” across all products to see which types of tabs are getting the most action:

Product Tab Pathing

Now that we can see a general idea of which tabs are being used and which tabs are used for each product, the next question we want to answer is in which order are tabs being used. Whenever you want to see sequence in SiteCatalyst, you will want to use Pathing reports. This is the reason why I chose to use an sProp instead of an eVar for this setup since Pathing only works on sProps. In this case, once you have implemented the sProp described above, you can enable Pathing and you will be able to see the order in which tabs are used for each product like this:

However, this sProp and its Pathing capabilities will only allow you to see how visitors used tabs at the product level. What if you want to see a Pathing report that shows how tabs were used regardless of product? Unfortunately, this isn’t as easy as it should be. If you have the Discover product, you can see Pathing on the SAINT Classification we created above, but if you don’t have Discover, you will have to create a second sProp that captures only the tab name and also has Pathing enabled.

Product Tab Influence

Another question I get from clients related to Product Tabs has to do with the impact they have on conversion. For example, they want to know if visitors who view the Specifications tab are more likely to convert than those who do not. In SiteCatalyst, there are a few ways to accomplish this. First, once you have implemented the items above, you can create a Segment to filter sessions or people using specific tabs and see how that segment of visits/visitors compares to those who did not use the tabs. Keep in mind that you can segment on both detailed values (product + tab) or the classified value (tab only) in the segment builder or Discover.

Another way to see the influence of tabs on KPI’s is to use Success Event Participation. By enabling Participation on the sProp described above for your key Success Events, you can see which ones have the most influence over time. For example, if we turn on Participation for the sProp shown above related to Orders, we can see a report like the one shown here:

In this report, we can see how many Orders each product/tab combination participated in across weeks or months of visits. Then we can create a calculated metric which divides this Order Participation by the number of times each product/tab combination took place to see how influential it was compared to other product/tab combinations (since the numbers are small, in this example, I multiplied by 100 to make the differences easier to see). Obviously, the same principle can be applied to the sProp that does not contain the product, as long as you are passing the values natively to an sProp and not creating it via a SAINT Classification. Finally, you could also pass the tab names to an eVar and set the allocation to Linear to spread credit across all tabs that are used, but since you may already be setting the sProp described above for pathing purposes, Participation may be the logical way to go.

Final Thoughts

Keep in mind that the same principles described here can be applied to other items related to products – not just product detail page tabs. For example, you might have 360 degree views of products, product images, etc. that can all have an influence on conversion. You can treat these items the same as product tabs and capture them as shown above. Therefore, if you are curious about how website visitors are using tabs on your product detail pages or any other supplemental product information you provide, give the techniques shown here a try. If you have other tips on tracking this type of product content, leave a comment here.

Adobe Analytics

Internal Search Position Placement [SiteCatalyst]

When it comes to searching on the Internet, where a particular search result appears in the list of results can make an enormous difference. Companies pay big bucks to SEM and SEO experts to tell them how they can be ranked higher for specific search keywords. While you cannot control all that happens to you on Google or Bing, when it comes to your own website, you have more control over which internal search results you choose to show to your visitors. In the past, I have shown several ways to track what is happening with your internal search, but in this post, I will explore a new internal search topic – how to see if placement matters. After reading this post you will be able to see how each search result placement performs and even be able to break it down by internal search term.

Conversion By Placement

Let’s begin with some basic stuff. Imagine you have a website and internal search is a heavily used function. You should already be setting a Success Event for every internal search and capturing the internal search term used in an eVar (for more advanced internal search tips click here). Doing this might result in something like this:

However, as you can see, in this setup, it would be difficult to tell whether the visitor clicked on the first item in the list, the second, the third, etc. Some of my customers want to know if it is worthwhile to have more than three or four search results at all. As you can see here, the visitor was presented with almost 38,000 search results, but how many went beyond the first five? Is less more?

To answer this question, we need to tell SiteCatalyst which position the link that is clicked was in. For example, if this visitor clicked on the second search result above (the one that goes to “www.salesforce.com/chatter”), that would be considered the second spot. What would be cool is if we could see how many Internal Searches contained a “Spot #2” and how many Internal Search Clicks took place for “Spot #2.” If we had that, we could use a Calculated Metric to see the conversion rate of each internal search result placement.

So here is how you would do this. First, you would set the Products Variable (or if you are using v15, possibly use a List eVar with expiration set to Page View or Internal Searches Success Event) value for all “spots” that took place on the search results page. For example, if there were ten internal search results shown, the Products Variable (or List eVar) would have ten values (spot1, spot2, spot3, etc…) and each would be associated with the Internal Search Success Event. Next, when a visitor clicks on a specific item in the internal search results list, you would pass the spot # to the Products variable (or List eVar) and set an Internal Search Results Clicks Success Event. Once you have done this, you now have a list of spot values and two Success Events that are associated with each. Then you create a Calculated Metric for the Click-Through Rate (Internal Search Clicks/Internal Searches) and add it to the List eVar like this:
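
To make this concrete, here is a rough sketch of what the tagging might look like. Everything here is hypothetical: it assumes eVar5 has been configured as a comma-delimited List eVar and that event10 and event11 are mapped to Internal Searches and Internal Search Clicks; your variable numbers and event mappings will differ:

// On the internal search results page (ten results displayed)
s.events="event10"
s.eVar5="spot1,spot2,spot3,spot4,spot5,spot6,spot7,spot8,spot9,spot10"

// When the visitor clicks the second search result
s.events="event11"
s.eVar5="spot2"

The Click-Through Rate Calculated Metric would then simply be event11 divided by event10 for each spot value.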

In this fictitious example, we can see that the items with the top-most placement spot get clicked the most. However, the most interesting aspect of this report is that the first five internal search placement slots account for almost 60% of all search result clicks! If we use the 80/20 rule, we could probably get almost the same number of internal search result clicks by having seven results as if we had hundreds.

Also, keep in mind that you can add other Success Events to the above report such as Orders or Lead Forms Completed to see how internal search spot # impacts website success. For example, if you add Orders to the above report, you will be able to see how each internal search spot # converts by dividing Orders by Internal Search Clicks as shown in this mocked-up report:

Spots & Keywords

The next questions I get from clients when I show them this are related to the combinations of internal search keywords and search placements. For example, they want to know if a specific search phrase does better or worse based upon where it is in the internal search result list (which is often determined by algorithms). The good news is that seeing this is easy using an eVar Subrelations report (keep in mind that in SiteCatalyst v15 all eVars have full subrelations!). You can break down the report above by internal search phrase or perform the converse by first opening the internal search phrase eVar report and breaking it down by the Internal Search placement eVar as shown here:

If you are not using SiteCatalyst v15 yet and don’t have any eVars left for which you can add Full Subrelations, you can also concatenate the search term and the spot # into an eVar to see similar information as long as you don’t have too many internal search terms.

Product List/Collection Pages

Keep in mind that this same principle can also be applied to product collection pages where you highlight a few key products on a landing page:

For example, you might see a page like the one above and want to know if items in the top-left perform better than those in the middle. Doing this is easy if you leverage the concepts above. In this case, the “spots” we discussed are not vertical, but rather go left to right and row by row. You can come up with any spot labeling system that makes sense to your organization (i.e. row1-spot1, row1-spot2, etc…), as shown in the sketch below.
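
The tagging would mirror the internal search example. As a purely illustrative sketch (using the same hypothetical variable and event names as before):

// On the product collection page
s.events="event10"
s.eVar5="row1-spot1,row1-spot2,row1-spot3,row2-spot1,row2-spot2,row2-spot3"

// When a visitor clicks the product in row 2, spot 1
s.events="event11"
s.eVar5="row2-spot1"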

In this case, instead of breaking down the preceding report by internal search term, you could break it down by the Products Variable to see this:

Final Thoughts

If internal search and/or product lists are important to your business, you might want to try this out and see if you can learn some good tidbits about how placement affects your conversion. If you have any questions, please leave a comment here…Thanks!

Adobe Analytics, Analytics Strategy, Technical/Implementation

Integrating SiteCatalyst & Tealeaf

In the past, I have written about ways to integrate SiteCatalyst with other tools including Voice of Customer, CRM, etc… In this post, I will discuss how SiteCatalyst can be integrated with Tealeaf and how to implement the integration. This post was inspired and co-written by my friend Ryan Ekins who used to work at Omniture and now works at Tealeaf.

About Tealeaf

For those of you unfamiliar with Tealeaf, it is a software product in the Customer Experience Management space. One key feature that I will highlight in this post is that Tealeaf customers can use their set of products to record every minute detail that happens on the website and are then able to “replay” sessions at a later time to see how website visitors interacted with the website. While this “session replay” feature is just a portion of what you can do in Tealeaf, for the purposes of this post, that is the only feature I will focus on. In general, Tealeaf collects all data that is passed between the browser and the web/application servers, so when someone says, “Tealeaf collects everything,” that is just about right. While there is some third-party data that may need to be passed over in another way, for the most part, out of the box you get all communications between browser and server. Tealeaf clients use their products to improve the user experience, identify fraud, or simply learn how visitors use the website. Whereas tools like SiteCatalyst are primarily meant to look at aggregated trends in website data, Tealeaf is built to analyze data at the lowest possible level – the session. However, one of the challenges with having this much data is that sometimes finding exactly what you are looking for is like looking for a needle in a haystack if you have an earlier version of Tealeaf (i.e. earlier than 8.x). While the Tealeaf UI has gotten better over the years and is used by business and technical users, it was not built to replace the need for a web analytical package. It is for this reason that an integration with web analytical packages such as SiteCatalyst makes so much sense.

SiteCatalyst Integration

Since SiteCatalyst is a tool that can be used by many folks at an organization, years ago, the folks at Omniture and Tealeaf decided to partner to create a Genesis integration that leverages the strengths of both products. The philosophy of the integration was as follows:

  • SiteCatalyst is an easy tool to use to segment website visits, but it doesn’t have a lot of granular data
  • Tealeaf has tons of granular data, but isn’t built for many end-users to access it and build segments of visits on the fly
  • Establishing a “key” between the SiteCatalyst visit and the Tealeaf session identifier could bridge the gap between the two tools

Based upon this philosophy, the two companies were able to create a Genesis integration that is easy to implement and provides some very exciting benefits. When you sign up for the Tealeaf/SiteCatalyst Genesis integration, a piece of JavaScript is added to your SiteCatalyst code. This JavaScript merely takes the Tealeaf session identifier and places it into an sProp or eVar. That sProp or eVar then becomes the key across both products. Once the Tealeaf session identifier is passed into SiteCatalyst, it acts like any other value. This means that you can associate SiteCatalyst Success Events to Tealeaf ID’s, segment on them or even export these ID’s. However, if you go back to the original philosophy of the integration, you will recall that the primary objective of the integration is to combine SiteCatalyst’s segmentation capability with Tealeaf’s granular session replay capability. This is where you will find the most value as demonstrated in the following example.
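
For illustration, the JavaScript might look something like the sketch below. This is a rough, hypothetical sketch only, since the Genesis integration provides its own code: it assumes the Tealeaf session ID lives in a cookie named “TLTSID” (a common Tealeaf default, but confirm with your Tealeaf team) and that eVar30 has been reserved as the key variable:

// Hypothetical sketch: copy the Tealeaf session ID into a SiteCatalyst variable
function getTealeafId(name) {
  // Look up a cookie value by name from document.cookie
  var match = document.cookie.match(new RegExp("(^| )" + name + "=([^;]+)"))
  return match ? match[2] : ""
}
s.eVar30 = getTealeafId("TLTSID")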

Let’s say that you have an eCommerce website and that you have a high cart abandonment rate. In SiteCatalyst, it is easy to build a segment of website visits where a Cart Checkout Success Event took place, but no Purchase Success Event occurred:

Once you create this segment, you can use SiteCatalyst or Discover to see anything you want including Visit Number, Paths, Items in the Cart, Browser, etc… However, the one thing that is difficult to see in SiteCatalyst is the actual pages the visitor saw, how these pages looked, where the user entered data, the exact messages they saw, etc… As the old saying goes, “a picture is worth a thousand words” and sometimes simply “seeing” visitors use your site can open your eyes to ways you can improve the experience and make more money! However, watching every shopping cart session would be tedious. But by using the SiteCatalyst-Tealeaf integration, once you have built the segment shown above, you could isolate the exact Tealeaf session ID’s that match the criteria of the segment, which in this case are visits where a checkout event took place, but there was no purchase. To do this, simply apply this segment in SiteCatalyst v15, Discover or DataWarehouse and you can get a list of the exact Tealeaf session ID’s that are now stored in an sProp or eVar:

Once you have these Tealeaf ID’s, you can open Tealeaf and view session replays to see if you can find an issue that is common to many visits, such as a data validation error, a type of credit card that is causing issues, etc… Here is a screenshot of what you might see in Tealeaf:

It is easy to see how simply passing a unique Tealeaf session ID to a SiteCatalyst variable can establish a powerful connection between the two tools that can be exploited in many interesting ways. The above example is the primary method of leveraging the integration, but you could also upload meta-data from Tealeaf into SiteCatalyst using SAINT Classifications, among many, many other possibilities.

One additional point to keep in mind is that for many clients, the number of unique Tealeaf session ID’s stored in SiteCatalyst will exceed the 500,000 monthly limit. As shown in the screenshot above, 96% of the values exceeded the monthly limit. This means that you may have to rely heavily on DataWarehouse, which can sometimes take a day or two to get data back. It also means that you may want to consider using an sProp instead of an eVar if you have a heavily trafficked site.

The Future

In the future, we’d like to see Adobe and Tealeaf build a deeper integration that allows SiteCatalyst users to simply click on a segment and automatically be taken into Tealeaf where they could have the same segment created in Tealeaf and begin replaying sessions. This functionality exists for OpinionLab, Google Analytics and others already. It would also be interesting if one day joint customers could use Tealeaf to assist with SiteCatalyst tagging itself. Since Tealeaf has all of the data anyway, why not use this, combined with SiteCatalyst API’s, to populate data in SiteCatalyst instead of using lots of complex JavaScript? Currently, the cost of API tokens makes this cost-prohibitive, but technically, there is no reason this cannot be done.

Final Thoughts

So there you have it. If you have both SiteCatalyst and Tealeaf, I recommend that you check out this integration and think about the use cases that might make sense for you. Also keep in mind that similar integrations exist with other vendors that offer “session replay” features like ClickTale and RobotReplay (now part of Foresee). If you have any detailed questions about the Tealeaf integration, feel free to reach out to @solanalytics.

Adobe Analytics, Social Media

Google’s New Social Data Hub

Google’s Eric Schmidt appeared today at LeWeb 2011 and dropped some notable quotes during his interview with conference organizer Loic Le Meur (@loic), including this prescient perspective: “It’s reasonable to say that in the future, the majority of cars will be driverless or driving-assisted.” Foreshadowing perhaps? Could be…but closer to reality:

Google’s Executive Chairman also quipped, “It’s easier to start a revolution and more difficult to finish it.” Google should know. They’ve been revolutionizing the way in which consumers interact on the Web since their inception and news posted today following the LeWeb chat follows suit.

The news reveals a new initiative launching today called the Social Data Hub. What’s even more exciting is the Google Analytics Social Analytics reporting slated to appear sometime next year. While the details were somewhat vague, I got the inside scoop, and what was published should be enough to incite a minor frenzy in Social Analytics circles.

The “Social Data Hub” is a data platform that is based on open standards allowing Google to aggregate public social media posts, comments, tags, and a plethora of other activities using the ActivityStreams protocol and PubSubHubbub hooks. (Yea, that’s a real thing…I had to look it up too.) Early partners in the initiative include social platforms such as Digg, Delicious, Reddit, Slashdot, TypePad, Vkontakte, and Gigya among others. Of course Google’s own social platforms, Google+, Blogger, and Google Groups, are included as well. Noticeably absent from the list are social media moguls like Facebook, Twitter, and LinkedIn, who have yet to buy into the new Googley idea of a Social Data Hub.

So What…?

If you’re scratching your head wondering how this is different than Google just trying to get more of the world’s data, you’re not alone. At first glance this may seem like yet another big enterprise ploy to get more data (and oh yeah, Don’t be evil). Well, I see this as a huge win for marketers, bloggers, publishers and anyone else trying to discern the impact of social media marketing across the multitude of channels and platforms available today. Currently, most marketers are forced to evaluate their social media activities through the lens that the platform (or their social monitoring tool) offers. Typically this yields low-hanging counting metrics which can be of some value, but more often than not end up as isolated bits of information that don’t provide business value.

Getting at this all-important business value in many cases requires wrangling the metrics into another system, processing data and just generally working hard to gain some incremental insight. This is laborious work for the average marketer, so it’s no wonder that eConsultancy just reported that 41% of marketers surveyed had no idea what their return on investment was for social media spending in 2011. Yikes!

Google’s new Social Data Hub – coupled with Google’s Social Analytics reporting – has the potential to knock the socks off these unknowing marketers. By aggregating data from multiple social platforms into the Social Data Hub, they have the ability to make comparisons across platforms to show which channels are driving referrals, which are generating the most interactions, and which are potentially not worth investing in. It’s not that big of a stretch to imagine Google linking this information to data within their Google Analytics product such as Adwords, Goal completion rates and cool new flow visualizations. If/when Google applies the lens of their analytics tool to this new aggregated data set, look out marketers — you just hit the jackpot! Of course, I’m speculating here, but the possibilities are intriguing for a Social Analytics geek like me. That is of course, if platforms open their APIs to the Social Data Hub. A big if…

So Why Would a Platform Buy Into the Social Data Hub?

Well, it’s questionable if Facebook will ever opt in to this system, so I wouldn’t hold your breath on that one. However, for other social platforms, being part of the hub has some distinct advantages. They get to prove their value by partnering up with one of the only solutions on the Web that is capable of providing real comparative data on the performance of social channels.

This is a no-brainer for fledgling platforms that want to increase their visibility, and even for established players, opting into the Google Social Data Hub could mean the difference in gaining advertising dollars from skeptical marketers. While the big dogs in social media may take a while to come around, I see this new Hub as a potentially great equalizer for understanding the impact of social media as it relates to referrals for on-site activities, which can ultimately lead to conversions and bottom-line impact.

While today’s announcement may be just a small ripple in the social media pond, I see big waves building for Marketers. But that’s just my take on the disruptive and revolutionary force that is Google…

If you want in on the action, here’s a link to request access to the private beta for Google’s Social Analytics Reporting: https://services.google.com/fb/forms/socialpilot/

And here’s one for platforms to join the Social Data Hub: http://code.google.com/apis/analytics/docs/socialData/socialOverview.html

Adobe Analytics

Date Stamp Variable [SiteCatalyst]

I was recently working with a client that had a unique situation arise. This client is well-versed in the usage of the Adobe Discover product and frequently takes advantage of its ability to segment by date. For those unfamiliar with this feature, you might use it to address the following scenario: “I’d like to build a segment of people who filled out a form in the third week of January 2011, but I want to see their behavior for the months of February, March and April.” Here is how this segment could be built using Discover:

This functionality is cool since you can use it to limit your population to folks who took some action in a specific time period and then observe their subsequent behavior across a future time period. Another example might be the desire to see purchase behavior of people in Q4 who looked at products in Q3.

However, the challenge facing this client is that very few people in the organization had access to Discover so they wanted to have the ability to apply this date-based segmentation to their SiteCatalyst reports to which everyone had access (and take advantage of the new v15 segmentation capabilities). I hadn’t thought about doing this in SiteCatalyst due to its segmentation limitations (see below), but after contemplating a bit, I came up with a cool trick that should allow SiteCatalyst users to take advantage of this Discover functionality. If this is of interest to you, please read on…

Date Stamp Variable

In order to build a segment that crosses multiple visits, the obvious starting point is the Visitor container within SiteCatalyst’s Segmentation tool. If you want to select a Visit in one time frame, but look at data for another time frame, you will need to use a Visitor container and nest a Visit container and/or Success Event container within it. In the preceding example, we would want to create a Visitor container and nest within it a Visit container in which a Form Completion took place during a specific week of January. Sounds easy, right?

Unfortunately, it isn’t as easy as you’d think, because there is no way to segment on a date or month within SiteCatalyst like you can in Discover. Therefore, the trick is to pass the date to a SiteCatalyst variable within each Visit. I suggest you add one new eVar and one new sProp and set the date on every page. In addition, you can easily create a SAINT Classification for each date which rolls these dates up into weeks, months or years as needed.
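
Here is a minimal sketch of what that might look like, assuming prop35 and eVar35 are the new Date Stamp variables (the numbers are hypothetical); placing this in s_doPlugins ensures it fires on every image request:

// Hypothetical sketch: stamp every page with the current date (YYYY-MM-DD)
function getDateStamp() {
  var d = new Date()
  var m = d.getMonth() + 1
  var day = d.getDate()
  return d.getFullYear() + "-" + (m < 10 ? "0" + m : m) + "-" + (day < 10 ? "0" + day : day)
}
s.prop35 = s.eVar35 = getDateStamp()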

Once we have set the date to a variable, let’s see an example of how we would create the aforementioned segment from within SiteCatalyst. First, we grab the Visitor container, then we nest a Visit container and within that Visit, we nest a Form Completion Success Event. To narrow down the Form Completion to a specific week in January, we can use our new Date Stamp variable (eVar or sProp version):

Of course, as I mentioned earlier, it may be easier to classify these variables and segment on them by week or month. This process would be identical to the segment shown above, but instead, would use a Classification of the Date Stamp variable. Here is an example of a SAINT Classification of the Date Stamp variable:
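
Purely as an illustration (your classification columns are up to you), the SAINT file might contain rows like these:

Key          Week                 Month          Year
2011-01-17   Week of 01/17/2011   January 2011   2011
2011-01-18   Week of 01/17/2011   January 2011   2011
2011-01-24   Week of 01/24/2011   January 2011   2011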

If you’ve read my past blog posts, you will soon realize that this trick is similar to the Time-Parting plug-in I described years ago. In fact, it is really just a variation on that, but without the time of the day. However, limiting the values to just the date makes the data much more manageable and more easily classified. The use of this, plus segmentation, allows you to mimic what has been possible in Discover for a while, so if you have lots of SiteCatalyst users, give this workaround a whirl…Enjoy!

Adobe Analytics, Reporting

v15 Segmentation vs. Multi-Suite Tagging [SiteCatalyst]

With the arrival of SiteCatalyst v15, one of the most intriguing questions is whether or not clients should take advantage of segmentation and replace the historic usage of multi-suite tagging. This is an interesting question so I thought I’d share some of the things to think about…

Multi-Suite Tagging Review

As a quick refresher, if you have multiple websites, it has traditionally been common to send data to more than one SiteCatalyst data set (known as report suites). The benefits of this multi-suite tagging were as follows:

  1. You could have different suites for each data set (i.e. see Spain data separately from Italy data)
  2. If you sent data to many sub-suites and one global (master) report suite, you could see de-duplicated unique visitors from all suites in the global report suite
  3. If you wanted to, you could see Pathing data across multiple sites in the global report suite to see how people navigate from one website to another
  4. You could create one dashboard and easily see the same dashboard for different data sets in SiteCatalyst or in Excel
  5. You could see metrics at a sub-site level, but also roll them up to see company totals in the global report suite

As you can see, there are quite a few benefits of multi-suite tagging and most large websites tend to do this as a best practice. Of course, where there is value, there is usually a cost! Since you are storing twice as much data in SiteCatalyst, our friends at Omniture (Adobe) have always charged extra for doing this, but normally these “secondary server calls” are charged at a dramatically reduced rate.

Along Comes Instant Segmentation

However, once SiteCatalyst v15 came out, it brought with it the ability to instantly segment your data. Suddenly, you have the capability to narrow down your focus to a specific group of visitors. Therefore, many smart people started asking themselves the following question:

“If I track the website name on every page of every one of my websites, why can’t I just send all data to one global report suite and build a segment for each website instead of paying Omniture extra money to collect my data twice through multi-suite tagging?”

If you look at the list of multi-suite tagging benefits above, you can see that you can accomplish pretty much all of them by simply creating a website segment. For example, if you currently pass data to a global report suite and an Italy report suite, you could simply pass the phrase “Italy” or “it” on every page and build the following Italy segment:
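
The tagging side of this is trivial. Here is a hypothetical sketch assuming eVar10 has been reserved as the website identifier (your variable number will differ):

// Identify the website on every page
s.eVar10="Italy"

// Alternatively, derive it from the hostname if you use country sub-domains (e.g., it.example.com):
// s.eVar10 = window.location.hostname.split(".")[0]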

Doing this would narrow the data to just Italy traffic and you don’t have to pay Omniture any extra money! Most clients I have spoken to are very interested in this concept since it will allow them to move some budget to other things they might need (like more analysts or A/B Testing). I think many companies are taking a “wait and see” attitude to this while they get comfortable with SiteCatalyst v15. However, I expect that in the next twelve months, many large enterprises will decide to go this route in order to save a little money and simplify their implementations (one can only dream about not having to keep 50-100 report suites consistent in the Admin Console!). To date, I have not heard Omniture’s stance on this, but I expect that they are not opposed to companies doing this, but will probably not broadcast this concept too loudly since they will lose some recurring revenue as a result.

Any Downsides?

While it is still early days for SiteCatalyst v15, I have tried to think about what, if any, the downsides might be from throwing away multi-suite tagging in favor of an instant segmentation approach. While I hate to rain on the parade of those who want to move forward with this, I have found a few potential downsides that I think you should consider. I don’t think any of these will dissuade you, but I like to present both sides of the story so you can make an informed decision!

The first downside I can see is that moving to one global report suite will make the creation and usage of segments inherently more difficult. For example, let’s say that you create an Italy segment as shown above. That works well if you are in Italy and want to see all Italy traffic. But what if you are in Italy and want to see all first-time visitors from a specific list of keywords who have abandoned the shopping cart? That is a semi-complex segment and you have to be careful to include the Italy part of the segment at the same time! Creating segments is tricky enough, but if you use segments to split out countries (or brands), you have to build even more complex segments to take these into account. Should you use an AND clause, an OR clause, combine Visit containers, use a Visitor container, etc.? These are tricky questions for everyday end-users, while having a separate report suite (data set) for each country allows you to simplify your segments and just segment within that report suite without worrying about the additional country container. For advanced SiteCatalyst users, this nuance shouldn’t be a showstopper, but it can definitely trip up novice users and is something that should be considered.

Another downside is a lack of security around your data. While you can add security controls to report suites, you cannot do the same when it comes to segments within one master report suite. This means that if you use the one-suite approach, anyone who has access to that suite can see any data within it. You can lock down success events and sProps in the Admin Console, but that is the limit of what you can do. Security remains one of the key reasons why companies continue to use multiple report suites.

Lastly, if you work for a multi-national company, individual report suites allow you to use a different currency type for each suite. This means that a German site can use Euros, while a British site can use Pounds. When you send data to a global report suite, these currencies are translated into the one used for the global report suite (i.e. US Dollars). However, if you use only one suite and segmentation, you lose the ability to see data in different currencies. You can use the report settings feature to translate what you see in the interface into your own native currency, but this is much different than seeing the data collected in a native currency. The former simply translates historical data using today’s exchange rate, while the latter uses the currency rates associated with the date the currency was collected. Obviously, the latter is the more accurate approach.

Final Thoughts

So there you have it. Some of my thoughts on this monumental decision that many large SiteCatalyst customers will have to make over the next year. What do you think? Will you take the plunge? Have you thought of any other benefits and/or downsides of making the switch? If so, leave a comment here…

Adobe Analytics

Purchases to Date – Part II [SiteCatalyst]

Last week I described a new way to track how much money visitors had spent on your site prior to their current visit. This week, I am going to expand on this topic and provide some other cool uses of this concept. If you haven’t read my last post, I suggest you do that before reading this one.

Revenue by Product Category

In the last post, you may recall that we were able to quantify how much money the visitor had spent in the past and break down current reports by those amounts. In the scenario I described previously, we could only see the total revenue amount across all product categories (in the previous scenario the product categories we discussed were Electronics, Clothing and Furniture). However, there is no reason that you cannot create a separate Counter eVar for each product category (or each of your major product categories if you have too many!). Doing this will allow you to see how much visitors had spent on just Electronics, for example, prior to future Success Events like Cart Adds or Orders. This might be good for companies that have distinct teams focused on each product category. To do this, the code might look like this:

s.events="purchase"
s.products=";SKU111;1;300.00;;eVar1=Electronics,;SKU222;1;400.00;;eVar1=Clothing,;SKU333;1;200.00;;eVar1=Furniture"
s.eVar40="+900" // total revenue across all categories
s.eVar41="+300" // Electronics revenue
s.eVar42="+400" // Clothing revenue
s.eVar43="+200" // Furniture revenue

By doing this, there would be one Counter eVar which shows that the visitor in our example above had spent $300 (row five) in Electronics prior to his/her second visit, which might result in a report like this:

You would then see a report like this for each product category, though I would still recommend keeping one Counter eVar like the one first described, which combines revenue across all product categories. Keep in mind that you could also use Product Merchandising to see total previous revenue (eVar40 in our example) by product category, but since you only get two levels of breakdown in SiteCatalyst reports, splitting out each product category into its own Counter eVar provides one more level of breakdown…

Orders to Date

As long as you are going to go through the effort to see how much money the current visitor had spent on your site, why not also track how many Orders they had completed? Doing this is very similar, though it will use up more eVars. Here is how you would do it. First, create a new eVar in the Admin Console and set it to be a Counter eVar with an expiration of “Never” or possibly “1 Year” depending upon how long you want to keep the data. Once this is done, on the purchase thank you page, simply set the Order Counter eVar to “+1” as you normally would, like this:

s.events="purchase"
s.products=";SKU111;1;300.00,;SKU222;1;400.00,;SKU333;1;200.00"
s.eVar44="+1" // new "Previous Orders" Counter eVar (eVar41-43 were used above for category revenue)

Kind of anticlimactic, huh? By doing this on every purchase thank you page, you can track how many orders each website visitor completed and can then use this in analysis efforts. Next time you want to see how many times people who have added items to the shopping cart today have ordered in the past, simply open this new “Previous Orders” Counter eVar and add the appropriate metric(s):

Here we can see that 21.13% of the Cart Additions that took place today were from visitors who had not ordered on our site in the past (ignoring those pesky cookie deleters!). If we wanted, we could also break this report down by Product to see which Products they had purchased. Also, keep in mind that this example shows Cart Additions, but that we could have just as easily added Orders, Revenue, Internal Searches or any other website metric we wanted to this report to see how many orders had taken place prior to that Success Event. If desired, we could also use SAINT Classifications to group this “Previous Orders” Counter eVar into logical buckets of say “1-2 Orders,” “3-5 Orders,” “6-10 Orders,” etc…

Final Thoughts

So there you have it! Between this post and the last one, hopefully you have some new ideas to try out on your website so you can leverage past purchase behavior when doing your web analyses. If you have any questions/comments, feel free to leave them here. Thanks!

Adobe Analytics

Purchases to Date – Part I [SiteCatalyst]

Website visits don’t occur in a vacuum. People who are on your site today may or may not have been there in the past and if they have been there, some have purchased items and some have not. But how do you know if the current reports you are looking at in SiteCatalyst reflect those who have purchased in the past or not? How do you break down SiteCatalyst reports by how much visitors have purchased in the past? Having this context can greatly improve the analysis you are doing, so in this post, I will share some techniques which allow you to easily segment your visitors by how much they have spent in the past…

Why Do This?

Before diving into how to do this, let’s explore the rationale. Imagine that you are a retailer selling Electronics, Clothing and Furniture. One question you might ask is “I wonder how much money all of the people who are on my site today have spent in the past?” Wouldn’t it be cool to see that 25% of the people who bought something today had purchased $500 or more in prior visits? Do people who have purchased more than $700 in the past convert at higher rates than those who have only purchased $300? Do people who have bought $400 or more in Electronics tend to only buy and look at Electronics products? As you can see, there are an endless number of analytics questions that can be studied once you know how much money current visitors have previously spent.

Surprisingly, however, there is no easy way to see this in SiteCatalyst. One way to do this is to create Segments. However, since there are so many segments that could be built, this is not always an easy option. To answer the questions above, you’d have to create different segments for each dollar amount and product category (i.e. people who have spent $100, $200, $500, etc…). Plus, you’d have to pull the data using DataWarehouse or ASI. Of course, this becomes much easier in SiteCatalyst v15 (if you are lucky enough to have access to it!), but it still requires a lot of segments to be built. Therefore, I will share a different approach that you can consider to accomplish this using a Counter eVar. As a quick refresher, a Counter eVar is a type of eVar that you increment as needed and that retains a running numeric value for each website visitor. This counter can be incremented by “1” each time it is set, or it can be incremented by any other number as needed. In past posts, I have described using Counter eVars to track # of Pages Viewed and Ben Gaines described how to use Counter eVars to score visitors. If you want to learn more about Counter eVars, please review this old blog post.

The Solution

With the set-up and refresher out of the way, let’s dig in. As mentioned above, in this scenario, we are a retailer selling three main product categories and want to see how much money each visitor has spent prior to the current visit. To do this, in addition to setting the Products string during the purchase event, we would set a Counter eVar equal to the amount that is being purchased like this:

s.events="purchase"
s.products=";SKU111;1;300.00,;SKU222;1;400.00,;SKU333;1;200.00"
s.eVar40="+900" // increment the Counter eVar by the order total

Notice that we have added up the purchase amount and passed it to a new Counter eVar40. In the above example, if the current visitor hadn’t previously visited the site, the value in his/her Counter eVar after this purchase would be $900. Since Counter eVars don’t have a notion of currency, the value that will be stored in the Counter eVar report in this case would be “900.00” (I would suggest that you round numbers to the nearest dollar since having decimals will make applying SAINT Classifications difficult). Keep in mind that you should set the Counter eVar to be Most Recent (Last) Allocation and set expiration to “Never” (or something like 90 days) in the Admin Console. That is all we have to do from an implementation standpoint.
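
If you would rather not hard-code the “+900,” the increment can be computed from the order itself. Here is a rough sketch assuming the page exposes a hypothetical orderItems array; your data layer will look different:

// Hypothetical sketch: sum the order total and increment the Counter eVar
var orderItems = [
  { sku: "SKU111", qty: 1, price: 300.00 },
  { sku: "SKU222", qty: 1, price: 400.00 },
  { sku: "SKU333", qty: 1, price: 200.00 }
]
var total = 0
for (var i = 0; i < orderItems.length; i++) {
  total += orderItems[i].qty * orderItems[i].price
}
s.eVar40 = "+" + Math.round(total) // round to whole dollars to keep SAINT Classifications manageable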

So now let’s see how we use this. If the above visitor comes back to the website next week and adds a few products to the shopping cart, and we pause time for a second to look at the resulting SiteCatalyst report, we would see something like this:

As shown here, we can now answer the question of how much money visitors had spent in the past at the time they added items to the shopping cart today. In this case, it looks like about half (49%) of people adding items to the cart today had not purchased previously. The visitor mentioned above would fall into row five in this report as part of the 1.38% of people who had purchased $900 in a previous visit. The same principle would apply to Orders and Revenue, so you could see a report like this:

When you extrapolate this principle by thousands of website visitors, you can see some interesting trends about what percent of website visitors transacting today had purchased in the past and how much they had spent. Next we can make this report more readable by applying SAINT Classifications to the Counter eVar to bucket the dollar amounts spent into logical groupings:
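
For illustration only, the SAINT file behind those groupings might contain rows like these (the bucket boundaries are entirely up to you):

Key       Previous Spend Bucket
0.00      No Previous Purchases
300.00    $1 - $499
900.00    $500 - $999
1500.00   $1,000+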

Now we have a new report that was previously unavailable! Pretty cool, huh?

In addition, if we wanted to take things to the next level, we could break this report down by Products to see which Products made up the Revenue in past visits:

Final Thoughts

So that is one way to see how much visitors on your site have purchased previously so you can add that to your existing web analyses. Next week, I will continue with “Part II” of this topic and go into some additional ways you can apply this concept, so stay tuned…Thanks!

Adobe Analytics

Merchandising eVars [SiteCatalyst]

After blogging about Omniture SiteCatalyst for a few years now, one of the topics I have always avoided discussing is Merchandising eVars (not to be confused with the separate Omniture Merchandising product). The reason for this is that I find them to be very confusing and was sure that no matter how hard I tried to explain them, I would probably mess it up. For years, I have waited for someone to write about them, but seeing as no one has written extensively about them (at least according to a quick Google search!) and having been inspired by some other great blog posts I have read lately in which people have said that it is ok to not have all of the answers, I have decided to face my fears and do my best to describe Merchandising eVars. My hope is that this post will serve as a first step in getting the SiteCatalyst community to understand these nuanced eVars and that it might spawn some good discussion and other blog posts by others who have spent a lot more time with them (like Kevin W.) so that one way or another, the topic will be adequately covered.

Why Merchandising eVars?

So why did Omniture make a special type of Merchandising eVar and why are they so complicated? If we go back in time to when I started using SiteCatalyst (version 9.x) and there were no Merchandising eVars, there were a few problems that existed. First was the Category parameter in the Products string. If you have been using SiteCatalyst for a while, someone has probably told you to NEVER use the first parameter (Category) in the Products string. They often don’t tell you why, but the reason is that if you do, the Product you pass will be forever tied to the Category in that string. That means that if you later decide to put the same product in a different product category, SiteCatalyst will ignore it and always use the first one it saw. If each of your products has only one product category and it will be that way forever, you can go ahead and use the Category parameter (or simply classify products using SAINT Classifications). But since most clients like to have products in more than one category, they asked for a way to assign the same product to different merchandising categories, hence, Merchandising eVars!

Let’s look at an example. Say that you have a retail site and that you sell ceiling fans, but those fans can be found by people going through “Lighting” or “Bedroom” product categories. Now let’s say that you would like to know how many Cart Adds or Purchases take place when people found ceiling fans through one of these product categories, but not the other. Sounds simple enough right? But it wasn’t in the past. If you had used the Products string to assign a specific ceiling fan to “Lighting,” it would always be bound to that product category. Instead, you would need a way to dynamically assign the specific product category for each product in each specific instance to get the data you were looking for. By doing this, you could see how often the ceiling fan was purchased via “Lighting” and how often it was purchased via “Bedroom.” Since then, there have been many different uses for Merchandising eVars, but I think it is important to understand the underlying problem that they were created to solve, as I find this helps to understand how they work and why they are different from traditional eVars. So when you think of Merchandising eVars just remember that their purpose is to assign a different eVar value to each product at the time Success Events take place.

Using Merchandising eVars

So now that we know a bit about how Merchandising eVars originated, let’s discuss how they are used. As you can imagine, connecting a different eVar value to each product is not a simple task. That is a lot of information for SiteCatalyst to keep straight! There would have to be some specific ways for you to implement this such that SiteCatalyst knows when you want each product to be tied to each Merchandising eVar value. Fortunately (or unfortunately!), SiteCatalyst has not one, but two methods of binding eVar values to products. One method is called Product Syntax and the other is called Conversion Variable Syntax.

Product Syntax
I find the Product Syntax method to be the most straightforward, and what I recommend most often, so I will start with that one. In this method, you use a special parameter slot within the Products string to declare which Merchandising Category you want to assign to each product. To do this, let’s re-visit the syntax for the Products string:

s.products="category;product;quantity;price;event_incrementer;merch_eVar1=value1|merch_eVar2=value2"

As you can see, towards the end of the Products string, there is a slot reserved for setting Merchandising eVars. In fact, you can set more than one by using a “|” separator. Using this syntax, if a Cart Addition occurs, you can set your Cart Add Success Event and Merchandising eVars as shown in this example:

s.events="scAdd"
s.products=";Fan-11980;;;;eVar1=Lighting"

Here we can see that we are manually assigning the product category of “Lighting” to the product “Fan-11980” at the time of Cart Addition. However, there are some back-end settings that also need to be made to allow for this to function properly. First, we need to call Omniture Client Care and ask that Merchandising be enabled for the appropriate eVar (eVar1 in this case). Once Merchandising has been enabled, we need to go to the Admin Console and select the Product Syntax option under the new Merchandising setting that will now be visible. When using Product Syntax, the second Merchandising setting (called Merchandising Binding Event) is disabled (but for some reason looks like you can use it!) so my advice is to just ignore that setting altogether. Here is what the settings should look like when you are done:

As with other eVars, you still have to decide what Allocation you’d like (First or Last) and how long the eVar should retain its value before it expires. But beyond that, you are good to go and the hardest part is making sure your developers are keeping track of which product categories should be associated with each product. If you know the value that you want to pass to the eVar for each product on the page (product category in the preceding example), I recommend you use the Product Syntax approach.

Conversion Syntax
The second approach to setting Merchandising eVars is the Conversion Variable Syntax. This approach is a bit more confusing and is normally used when the value you want to bind to each product is set on a page prior to the Success Event, rather than on the same page as the event. The only way I can think of to explain this is through an example. Let’s imagine that your boss wants to know which internal search phrases were used prior to each product being purchased. Now, let’s pretend that a visitor comes to the website and searches on “ceiling fans,” finds Product 123 in the list and adds it to the cart. Next, the visitor searches for “bathroom vanities,” again scans the list, finds Product 789 and adds it to the cart. Then the visitor purchases both items a few pages later. In this example, if we were to use a traditional eVar (with Most Recent allocation), each Cart Addition would be associated with the correct search phrase – “ceiling fans” = Product 123 and “bathroom vanities” = Product 789. So far so good. But when the visitor purchases both products, guess which internal search phrase would get the credit? If you said “bathroom vanities,” you are correct! Since that was the last search phrase SiteCatalyst saw, it would get credit for both products. This is because a traditional eVar cannot associate a different value with each product.

However, by using the Conversion Syntax and Merchandising, in this scenario, each product would be associated with the specific search phrase that was used to find it for both the Cart Add and Purchase Success Events. So how do we configure this? First, we would work with Client Care to declare eVar1 to be a Merchandising eVar. Next, we would decide when we would like to have Omniture bind the internal search phrase to the eVar value. For most clients, the default is to bind at the Product View (prodView) event and the Cart Add (scAdd) event (though you can choose from any Success Events you’d like). By binding to the Product View and Cart Add, you are telling Omniture that if one of those two events happens, you want Omniture to bind the last value passed to the Merchandising eVar (internal search phrase in our example) with the product being viewed or added to cart. This is how these settings would look in the Admin Console:
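
To tie the scenario together, here is a hypothetical page-by-page sketch of the tagging. Note that nothing special is set at the Success Events themselves; once eVar1 is merchandising-enabled with binding on prodView and scAdd, SiteCatalyst handles the product-to-phrase association on the back end:

// Internal search results page for "ceiling fans"
s.eVar1="ceiling fans"

// Product page: prodView binds "ceiling fans" to Product 123
s.events="prodView"
s.products=";Product 123"

// Later, a new search sets a new value to be bound
s.eVar1="bathroom vanities"
s.events="prodView"
s.products=";Product 789"

// At purchase, each product retains its own bound search phrase
s.events="purchase"
s.products=";Product 123;1;99.00,;Product 789;1;149.00"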

Well…there you have it. My first attempt at facing my fears and explaining Merchandising eVars. I have also written a more advanced post on Merchandising you can check out. Please comment here and I will do my best to get any questions answered. Thanks!

Adobe Analytics, Analytics Strategy, Conferences/Community

More seats opening for ACCELERATE 2011!

As I have mentioned a few times before, the initial response to our ACCELERATE event announcement caught us off guard — we honestly didn’t plan to be full after a single day of registrations. Because we hate to disappoint folks, we set about figuring out how to increase our room capacity, and thanks to the generosity of our sponsors Tealeaf, Ensighten, and OpinionLab, I’m happy to announce we have succeeded!

Between today and October 1st we will be accepting more registrations for the event on Friday, November 18th in San Francisco. These registrations will still be provisional (i.e., on the “wait list”) but we are committed to having a final list by the first week in October so that folks can make travel plans, etc. If you are interested in joining us, I strongly recommend you go to the ACCELERATE site and register today.

Speaking of the ACCELERATE site, we have added information about many of the fine folks who will be presenting “Ten Tips in Twenty Minutes.” We are extremely honored to have great speakers including Bill Macaitis, VP of Online Marketing at Salesforce.com, Michael Gulmann, VP of Global Site Conversion at Expedia, and a half-dozen other brilliant analysts, practitioners, and vendors representing great companies like Sony Entertainment, AutoDesk, Symantec, and many more.

What’s more, we are honored to have ESPN’s Ben Gaines, formerly of Omniture/Adobe fame and the creator of the @OmnitureCare twitter account. Ben will be sharing tips on managing expectations in vendor relationships and I have to say we’re pretty excited to be hosting Ben’s first “non-vendor” appearance in the web analytics world.

We have also put up a registration for the big Web Analytics Wednesday event we will be holding on Thursday, November 17th, generously sponsored by Causata, Coremetrics/IBM, iJento, and ObservePoint. The location is still TBD but is looking like Roe in downtown San Francisco.

So, if you’re interested in joining us at ACCELERATE, your action items today are:

  1. Register on the expanded wait list at the ACCELERATE web site
  2. Register for the Web Analytics Wednesday event
  3. Tweet something like “I want to attend #ACCELERATE 2011! http://j.mp/accelerate2011 #measure”

(Okay, the last action item is more of a wish-list thing for us … 😉)

Adobe Analytics, Conferences/Community, Social Media

Are you a Super Accelerator?

When John, Adam, and I announced the ACCELERATE conference last week we really didn’t expect the response we got, much less that the seats we had planned for would fill in just over a day. Once we got over the initial shock we set about trying to figure out how to accommodate more of the over 300 people who have already registered for the event … and we’re getting closer every day to solving that problem.

We are continuing to take provisional registrations, and being on this list is the surest way to be able to join us in November. If you’re interested, please sign up for the ACCELERATE 2011 wait list.

In the interim we wanted to call your collective attention to our “Super Accelerator” session at the end of the day. Unlike our main speaking slots where brilliant practitioners from companies including Sony, Nike, Expedia, Autodesk, Symantec, Salesforce.com and more will be sharing “Ten Tips in Twenty Minutes”, the Super Accelerator is designed to allow up-and-comers in our community to share a single idea in five minutes or less.

Five minutes! How easy is that?

Just think about the amazing things you could share with ACCELERATE attendees in five minutes! Off the top of my head:

  • The Number One Reason You Should Join the Web Analytics Association
  • The Best Way to Get Your Manager to Think About Web Analytics Data
  • How to Make I.T. Your Friend (and How That Will Help You as an Analyst)
  • How to Take Advantage of Web Analytics Wednesday for Social Networking
  • The Most Important Hashtags Analysts Should Follow in Twitter
  • Why Strategy is Important to your Company’s Investment in Web Analytics

That list goes on and on and on, and I’m sure the best ideas are those that I’m not even thinking of!

We already have five people signed up for the dozen slots we have but we are looking for seven more folks who meet the following criteria:

  • Really want to attend ACCELERATE 2011 (since if you’re presenting, you have to be there)
  • Are willing to commit to creating and presenting a three-slide, five minute talk
  • Have a true passion for digital measurement, analysis, and optimization
  • Love to present, or want to learn to love presenting
  • Love awesome technology …

If the last criterion seems out of place, you need to know that the audience will be providing real-time feedback on each Super Accelerator session (thanks to our friends at OpinionLab) and the presenter who earns the best overall score will get a $500 gift card from Best Buy!

How cool is that? I know!

If you’re interested in joining us at ACCELERATE 2011 and being part of the Super Accelerator session I would encourage you to do the following RIGHT AWAY since we expect this session to fill up fast:

  1. Go to the ACCELERATE 2011 web site and REGISTER (you’ll be put on the wait list)
  2. Go to Twitter and tweet “I want to present at #ACCELERATE 2011 as a Super Accelerator! http://j.mp/accelerate2011 #measure”

We are watching the #ACCELERATE tag and will get back to you ASAP. These slots are filled on a first-come basis so DON’T DELAY and sign up today!

Adobe Analytics, Analytics Strategy, Conferences/Community

ACCELERATE 2011 is SOLD OUT

Yesterday we announced that Analytics Demystified was bringing an entirely new type of event to San Francisco in November: ACCELERATE!

Today I am chagrined to announce that ACCELERATE 2011 in San Francisco is SOLD OUT!

Suffice to say, we didn’t expect to sell out overnight, nor did we expect to have so many people traveling to the event from around the globe. We have registrations from as far away as London, Spain, Shanghai, and India; we have registrations from New York, Boston, Seattle, Portland, Phoenix, Boulder, and more!

We are still accepting provisional (“wait listed”) registrations but will likely stop doing that by the end of the week. If you want to join us I strongly recommend registering for the ACCELERATE 2011 wait list IMMEDIATELY.

Also, if you’re already on the list, you will help ensure your seat at the table by joining our “Super Accelerator” session at the end of the day. More details are available at the ACCELERATE mini-site under the “LEARN MORE” link.

As our clients, prospects, and friends complete their registrations we will develop a better sense of exactly how many we can accommodate. At that point we will email registrants directly and provide confirmation.

On behalf of John, Adam, our sponsors at Tealeaf, OpinionLab, and Ensighten, and especially myself, we are grateful for the community’s response to ACCELERATE and will do everything possible to get as many folks to the table as we can.

 

Adobe Analytics

Thoughts On Our 1st G+ Foray

This week the Demystified Partners forfeited our weekly meeting to hang out with our fellow #measure peeps on Google+. We felt that the thread about web analytics technologies and where innovation would surface in our industry needed a deeper discussion. If you haven’t seen the original thread yet, make sure you go check it out. Also, if you need a G+ invite, let us know; we’ve got plenty to share. But our exercise was as much about continuing the conversation as it was testing out a new social medium.

Before going live on G+, we practiced for about a half hour, during which all of our browsers crashed and we experienced various video connection ins and outs as we tinkered with and tuned our machines. By showtime, we had a few stalwart veterans join, including Tim Wilson, who was dialed in on a 4G connection while driving home with his wife from camping. As our discussion grew, we added up to nine people, which didn’t quite push the limits of G+ as we had hoped, but it was an all-star #measure cast including: @erictpeterson @adamgreco @mymo @Exxx @tgwilson @joestanhope @OMlee @keithburris and yours truly.

Our conversation began with quips from each of the participants about how we’re still grappling with digital measurement technologies, despite most of us being in the web analytics industry for years, and in some cases decades. Time passed quickly as we debated from all sides of the vendor/consultant/practitioner perspective. After a brief privacy sidebar, we asked each other where innovation would emerge from in analytics and, really, why we were pursuing this digital data anyway. I think we nudged the needle just a little bit by agreeing that what we do matters because we’re educating our employers and clients on the power of data, and just possibly making the Internet a slightly better place. Ok, when I chatted this in the G+ hangout mid-discussion, I warned everyone not to throw up on their keyboards, so I’ll do the same for you. But as cheesy as that sounds, our quorum agreed that wasn’t such a bad goal. What do you think?

So, the technology of G+ did prove that it was up to the task of handling this type of group discussion. People could talk and share their ideas on video or contribute to the conversation using the chat functionality. But we’re curious to know if you’re interested in joining us for a future G+ hangout?

If you are, and are willing to hang out with us to discuss the hottest topics in digital analytics, let us know because we’ll plan another one soon. Heck, we’ll even make this a regular event if you’re interested. What do you all say?

Adobe Analytics, Analytics Strategy, Conferences/Community

Announcing ACCELERATE 2011!

We are incredibly excited to announce that registrations are open for our newest community initiative designed for digital measurement, analysis, and optimization professionals, ACCELERATE!

The first event will be held this year in San Francisco on Friday, November 18th at the Mission Bay Conference Center at UCSF, and thanks to generous support from Tealeaf, OpinionLab, and Ensighten, ACCELERATE 2011 is completely free.

Our agenda is still being finalized, but we will have thought- and practice-leaders from amazing companies including Nike, Symantec, AutoDesk, Salesforce.com and, of course, Analytics Demystified. Also, since we recognize that some of the brightest talent in our field works for solution providers, we’ve invited a few practice leaders from the vendor community to present as well, thus ensuring great content across the board.

The format at ACCELERATE is completely new, and we believe our “Ten Tips in Twenty Minutes” style will create the maximum number of insights possible for attendees of all backgrounds. What’s more, we have ten open slots for new speakers in our “Super Accelerator” session to showcase up-and-coming talent — and we’re having those folks compete for a $500 gift card from Best Buy based on audience votes.

Did I mention that ACCELERATE 2011 is completely free?

If you’re interested in joining us we encourage you to visit the ACCELERATE 2011 mini-site and register today. Space is limited to the first hundred or so folks who sign up and we’ve already had registrations from New York, Boston, Seattle, Portland, Columbus, and San Francisco.

Go to the ACCELERATE 2011 site and register to attend right now!

EVENT DETAILS:

Location: Mission Bay Conference Center, San Francisco
Date: Friday, November 18, 2011 from 9:00 AM to 4:30 PM
Registration: Open now, limited to the first hundred or so folks who sign up

If you have any questions about ACCELERATE 2011 please leave comments below or email us directly.

Adobe Analytics, Technical/Implementation

SiteCatalyst Implementation Pet Peeves – Follow-up [SiteCatalyst]

I recently blogged a list of my top Omniture SiteCatalyst implementation “Pet Peeves.” While the response to my post was very positive, one reader agreed with most of what I said, but disagreed with a few of my assertions or felt I had made some omissions. First, let me state that I always encourage feedback and comments to my blog posts since that helps everyone in the community learn. In general, the reader was making the point that my post only took into account an implementer’s perspective vs. the perspective of the web analyst. Personally, I don’t like to divide the world into implementers and analysts, since some of the best implementers I know also have a deep understanding of web analysis and vice-versa. Having been a web analytics practitioner using SiteCatalyst at two different organizations, I feel that I am in a good position to know if items I suggest (or discourage) will lead to fruitful analysis. I always try to write my blog from the perspective of the in-house web analyst who has to deal with things that I dealt with in the past, such as adoption, enterprise scalability, training, variable documentation, etc… In fact, I attribute much of my consulting success to the fact that I have been in the shoes of my clients and that they appreciate that my recommendations are based upon actual pains that I have experienced.

Since my original post was a very quick “Top-10” list and didn’t provide an enormous amount of detail, and given the interest that it generated, I thought it would be worthwhile to write this follow-up post to address the concerns raised related to my post and to elaborate on the rationale behind some of my original assertions. In the process, it will become clear that I don’t necessarily agree with the concerns raised about my original post, but I am always cognizant of the fact that every client situation is different and every SiteCatalyst implementer has experiences that color their own implementation preferences. I don’t see it as my place to say which techniques are right and which are wrong, but rather to do my best to state what I think is/is not “best practice” and why, based upon what I have seen and experienced over the past ten years, and let my readers decide how to proceed from there…

Tracking Every eVar as an sProp

The first pet peeve I mentioned is when I find clients that have duplicated every eVar with a similar sProp. I stated that there are only specific cases in which an sProp should be used, including a need for Unique Visitor counts, Pathing, Correlations, or to store data that exceeds unique value limits for access in DataWarehouse. The reader seemed to think I was being hard on the poor sProp and listed a few other cases where they felt duplicating an eVar with an identical sProp or adding additional sProps was justified, including:

  1. Using List sProps – The reader suggested that I had made an omission by not mentioning List sProps as another reason to consider using an sProp in an implementation. I maintain that the use of List sProps was justifiably covered in my statement of other sProp uses that are “few and far between.” I don’t use List sProps very often because I feel that there are better ways to achieve the same goals. As the reader stated, List sProps have severe limitations and there is a reason that they are rarely used (maybe 2% of the implementations I have seen use them). I have found that you can achieve almost any goal you want to use List sProps for by re-using the Products variable and its multi-value capabilities instead. By using the Products variable, you can associate list items to KPI’s (Success Events) rather than just Traffic metrics. Using the reader’s own example of tracking impressions illustrates the differences perfectly. You can store impressions and clicks of internal ads and calculate a CTR using the Products variable and two success events (see the sketch after this list). This also gives you charts for impressions, clicks and the ratio of the two, which can be easily added to SiteCatalyst dashboards. I have found that doing this with a List sProp is difficult, if not impossible, and reporting on it is tedious. For more information on my approach, please check out my blog post on the subject.
  2. Page-Based Containers & Segmentation – Here the reader suggested that the need to isolate specific pages using a Page View-based container is important to the life of the web analyst. Ben Gaines from Omniture also commented about this on my original post, and I do agree that this can be useful for some advanced segmentation cases. I did not include this in my original list because I find it to be a much more advanced topic than I intended to cover for this quick “Top 10” post. While there may be cases in which you want to set an sProp to filter out specific items using a Page View-based segment container, I find that I often do this using the Page Name sProp, which is already present. I do not see too many cases where a client is storing an eVar (let’s say Zip Code) and will say, “I am going to duplicate it as an sProp for the sole purpose of building a Page-Based container segment to include or exclude page views where a Zip Code equaled 123456.” Maybe that happens sometimes, but I still think it falls outside the scope of the primary things you should be considering when deciding whether to duplicate an eVar, and I think it is a stretch to say that this functionality establishes the line between those who care about implementation and those who care about web analysis.
  3. Correlations – With respect to Correlations, the reader suggested that users correlate as often as they can since cross-tabulation is so essential to the web analyst. This is exactly why I included Correlations in my list! I also mentioned that this justification for using an sProp may go away in SiteCatalyst v15, where all eVars have Full Subrelations. Also, one of the reasons I prefer Subrelations to Correlations is that Correlations only show intersections (Page Views) and do not show any cross-tabulation of KPI’s (Success Events). Personally, I would disagree with the reader about over-doing Correlations, since in my experience, implementing too many Correlations (especially 5-item or 20-item Correlations) with too many unique values can cost a lot of $$$ and lead to data corruption and latency.
  4. Pathing – In the area of Pathing, I think the reader and I are on the same page about its importance which is why I have published so many posts related to Pathing such as KPI (Success Event) Pathing, Product Pathing, Page Type Pathing, etc… Again, I might differ with the reader in that I don’t think enabling Pathing on too many sProps is a good idea since it can cost $$$ and produce report suite latency, which is why I prefer to use Pathing only when it adds value.
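
To make the Products-variable approach from the first point concrete, here is a minimal JavaScript sketch. The event numbers (event10 = Ad Impressions, event11 = Ad Clicks) and ad names are hypothetical, and it assumes a standard s_code “s” object; your setup will differ:

    // On page load, credit the impression event to each internal ad shown
    // (the leading ";" means no product category; the ad name acts as the "product")
    s.events = "event10";
    s.products = ";hero banner:spring sale,;footer promo:free shipping";

    // On click, send a custom link call crediting the click event to that ad;
    // a calculated metric of event11/event10 then gives you CTR
    s.linkTrackVars = "products,events";
    s.linkTrackEvents = "event11";
    s.events = "event11";
    s.products = ";hero banner:spring sale";
    s.tl(this, 'o', 'internal ad click');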

At the end of the sProp duplication section, the reader stated that there was no downside to duplicating every eVar as an sProp since it has no additional cost. To this, I would reiterate that my post was not advocating abandoning the use of sProps, but instead attempting to help readers determine when they might want to use sProps so as to avoid over-using them when they will not add additional value. Even after years of education, I still find that many clients get confused as to whether they should use an eVar or an sProp in various situations, and most people I speak to welcome advice on how to decide if each is necessary.

However, I disagree with the reader’s assertion that duplicating every eVar as an sProp has no costs. Maybe it is due to the fact that I have “been in the trenches,” but in my experience I have seen the following potential negative ramifications:

  • Over-implementing variables and enabling features unnecessarily can cause report suite latency
  • Over-implementing variables can increase page load time, which can negatively impact conversion
  • Over-implementing variables and features can cost additional $$$ as described above (e.g. Pathing, Correlations)
  • When you implement SiteCatalyst on a global scale, you often need to conserve variables for different departments or countries to track their own unique data points. This means that variables (even 75 of them!) are at a premium. Therefore, duplicating variables has, at times, caused issues in which clients run out of usable variables.
  • Most important, however, is the impact on adoption. Again, I may be biased due to my in-house experience, but here is a real-life example: Let’s say that you have duplicated all eVars as sProps. Now you get a phone call from a new SiteCatalyst user (whom you have begged and pleaded with for six months just to log in!). The end-user says they are trying to see Form Completions broken down by City. They opened the City report, but were only able to see Page Views or Visits as metrics. Why can’t they find the Form Completions metric? Is SiteCatalyst broken? Of course not! The issue is that they have chosen to view the sProp version of the report instead of the eVar version. That makes sense to a SiteCatalyst expert, but I have seen the puzzled look on the faces of people who don’t have any desire to understand the difference between an sProp and an eVar! In fact, if you try to explain it to them, you will win the battle, but lose the war. In their minds, you just implemented something that is way too complicated. You’ve just lost one advocate for your web analytics program – all so that you can track City in an sProp when you may not have needed to in the first place. In my experience, adoption is a huge problem for web analytics and is a valid reason to think twice about whether duplicating an sProp is worthwhile. While I’ll admit that duplicating all variables certainly helps “cover your butt,” I worry about the people who are left at the client to navigate a bloated, confusing implementation…

Therefore, for the reasons listed above, I remain steadfast in my assertion that there are cases where sProps add value and cases where they just create noise. While there will always be edge cases, I think that the justifications I laid out in my original post are the big ones that the majority of SiteCatalyst clients should think about when deciding if they want to duplicate an eVar as an sProp or use an sProp in general.

As an aside, while we are revisiting my original post, I thought of a few more items I wish I would have included so I will list them here:

  1. One other justification for setting an sProp I should have mentioned is Participation. There are some fun uses of Participation that can improve analysis, and I find that sProp Participation is easier for most people to understand than eVar Participation, so I would add that to my original list.
  2. If you do find a need to duplicate an eVar as an sProp, but it is only for “power users,” keep in mind that you can hide the sProp variable from your novice end-users through the security settings under Groups.
  3. Finally, I see Omniture ultimately moving to a world where there will only be one variable so if you want to be part of that world, please vote for my suggestion of doing this in the Ideas Exchange here.

VISTA Rules

Another pet peeve I mentioned is that I often find clients who are using VISTA rules too often or as band-aids. The reader stated that VISTA rules are a good alternative to JavaScript tagging since they can speed up page load times. I think this is another situation where my time working at Omniture and in-house managing SiteCatalyst implementations may bias my recommendations. While I agree that page load time is important, most Omniture clients I saw never mentioned using VISTA rules as a way to decrease page load time, but rather as a way to avoid working with IT! Usually, when I find a client that has many VISTA rules, it is because they have a bad relationship with an IT team that doesn’t want to do additional tagging, rather than a desire to save page load time. However, to address the reader’s point about page load speed, I would agree that there are cases where using VISTA rules over JavaScript can decrease page load time, but I certainly do not think this should be the primary deciding factor. Great strides have been made in tagging, including things like dynamic variable tagging and tag management tools, which have greatly reduced page load times. I suggest readers check out Ben Robison’s excellent post on VISTA vs. JavaScript, which discusses not only page load speed, but also the many other important factors to consider before jumping into VISTA rules.

Another point I’d like to make about VISTA rules is that, in my experience, they have a high likelihood of breaking and leading to periods of bad data. VISTA rules are like Excel macros. They do what you tell them to do, but if something changes, it can easily throw off a VISTA rule and cause incomplete or inaccurate data to be reported in SiteCatalyst. On this point, perhaps I am a bit jaded because I saw so many different VISTA implementations go awry while I was at Omniture. In fact, it is rare that I find clients that have a VISTA rule that has worked for several years without ever having an issue. And if you do encounter an issue, you will have to pay Omniture around $2,000 to update it – every time. Want to make an update to the VISTA rule? $2,000. Want to turn off the VISTA rule or move it to a different report suite? $2,000! Consultants don’t have to write these checks, but guess who does – the in-house people do! This is why people are so excited about the new v15 processing rules and emerging tag management vendors. It is this tendency to break and the risk of bad data that makes me a bit gun-shy about using VISTA rules simply as a replacement for JavaScript tagging. Moreover, since the reader’s overall premise was that one must keep the web analyst in mind during implementation, I would be cautious about being overly reliant on a solution like VISTA that is so prone to causing data issues that could thwart the analyst’s ability to do web analysis. I have seen companies that have 20+ VISTA rules and I promise you that they are not huge fans of VISTA right now (though they should really blame themselves, not the tool!). If you do pursue VISTA rules, my advice is that you consider using DB VISTA over VISTA. DB VISTA rules cost a bit more, but do offer more flexibility since you can at least make updates to the data portion of your rules without having to pay Omniture additional $$$.

One additional point to think about when it comes to VISTA rules is the impact they can have on report suite latency. Having too many VISTA rules can slow down your ability to get timely data in SiteCatalyst and I have seen many large organizations have severe (several days) report suite latency due to multiple VISTA rules acting on each server call. This impacts the web analyst’s ability to get the data they need and should be factored into decisions about VISTA rules.

As I stated in my original post, I have nothing against VISTA rules, but do find the overuse of them to be a potential red flag when I look at a new implementation. I often find that excessive use of VISTA rules can be a symptom of bigger problems which merit investigation. Just like I don’t advocate duplicating sProps or enabling Pathing when not necessary, I don’t advocate the use of too many VISTA rules, since they can be great in the short term, but bad in the long term. Now that I am a consultant again, it would be easy for me to recommend VISTA rules left and right, but since I like to have long-term relationships with my clients, I don’t do this; I know what it is like to be around later if/when issues arise!

Final Thoughts
I hope this post provides some good food for thought and more in-depth information about some of the items I listed in my original post. If you would like to discuss any of the above topics in more detail, feel free to leave comments here or e-mail me. Thanks!

Adobe Analytics, Conferences/Community, General

Great jobs and a great gathering in Atlanta next week

Just a quick note from my vacation getaway to call readers’ attention to two great jobs at The Home Depot and to let Atlanta-area readers know that I will be in town next week for a special “Web Analytics Wednesday on Tuesday” put together by Keystone Solutions’ Rudi Shumpert and HD’s own Wesley “Big Wes” Hall. The event will be at the Gordon Biersch in Buckhead and I’m hoping that Rudi and Wes will allow an informal Q&A session about some of the great things that have been happening in our industry lately.

>>> Register to join us at Web Analytics Wednesday, Atlanta, on Tuesday, July 19th

Regarding the jobs, our client at Home Depot is aggressively putting together a team of digital measurement specialists to help lead the company’s digital efforts forward. We have been helping the company with their digital measurement strategy now for about six months and the effort is really beginning to pay off in terms of their use of technology, the talent they are getting in the door, and the value web analytics brings to the company both online and off.

Have a look at the Senior Analyst and Manager, Web Analytics jobs on our web site and come see me next week at Web Analytics Wednesday if you’d like a personal introduction or have any questions:

>>> Job description, Senior Web Business Analyst at The Home Depot

>>> Job description, Web Analytics Manager at The Home Depot

I hope you are all having a great, relaxing summer and look forward to seeing you at a conference, event, or Web Analytics Wednesday sometime in the near future.

Adobe Analytics

Some SiteCatalyst Implementation Pet Peeves [SiteCatalyst]

Over the years, when I have consulted for clients who use the SiteCatalyst product, I have encountered some strange implementation items that made me scratch my head. In the beginning, when I saw these odd implementation quirks, I was mildly entertained, but as I saw them more and more, they were soon elevated to “pet peeve” status. Therefore, I thought I’d share some of these items with you to make sure that you are not doing any of them, and also because I am curious to see what other “pet peeves” you may have. Please check out my list (which is by no means exhaustive!) and if you have seen items that bug you, please leave them here as a comment!

Tracking Every eVar as an sProp

I would say that my biggest pet peeve is when clients have an sProp for every eVar they have set (or vice versa). When I see this, it is an early warning sign that the client doesn’t fully understand the fundamentals of SiteCatalyst. While there are definitely cases where you would capture the same data in both an eVar and an sProp, they are usually few and far between. As a rule of thumb, I only set an sProp if:

  • There is a need to see Unique Visitor counts for the values stored in the sProp
  • There is a need for Pathing
  • You have run out of eVar Subrelations and need to break one variable down by another through the use of a Correlation (which will go away in SiteCatalyst v15)
  • There will be many values (exceeding the unique value limits) and you just want the data stored so you can get to it in DataWarehouse or Adobe Insight

For the most part, that is it… Beyond that, I tend to use eVars and Success Events for most of my implementation items.

This is why I shudder when I see 40 eVars set and the same 40 sProps set. I find that this only confuses users since most don’t really understand the difference between the two variable types to begin with! Therefore, my advice is to make sure you understand the difference between eVars and sProps and make sure you use the right variable for the right purpose.

Pathing Enabled Unnecessarily

Another item I have seen a lot is when a customer has Pathing enabled on an sProp that doesn’t change within a session. For example, let’s say you have people log into your website and you store the Customer ID in an sProp. That Customer ID is designed to be the same for each visitor during the entire visit. However, I often see clients who enable Pathing on this Customer ID sProp. My hunch is that they think this will show them the paths of that Customer ID, but the truth is that it will show no paths at all, so it is a complete waste of time. Keep in mind that Pathing is only useful if values change within the same session. If you pass the same value in on every page of the session, SiteCatalyst will see that as a 100% bounce for every Customer ID! Since Adobe (Omniture) will only let you have so many variables with Pathing enabled, you need to make sure you are using them wisely!
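
To illustrate the waste (prop10 is a hypothetical variable), this is the pattern in question:

    // Set identically on every page of the visit...
    s.prop10 = "customer-12345"; // hypothetical Customer ID sProp
    // ...so the prop10 Pathing report shows no paths at all, just 100% "bounces"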

No Friendly Page Names

The next pet peeve is when clients don’t pass any values to the Page Name variable and use the default of the URL. This really makes my blood boil! There are so many downsides to doing this when it comes to the Pages report since it impacts Page Views, Unique Visitors and Pathing. For better or worse, the Pages report tends to be a very popular one and I feel that, even if just for the perception of the integrity of your web analytics implementation, you need to take the time to make sure this report is accurate and understandable. For more information on this topic, please refer to my Page Naming Best Practices post by clicking here.

Passing Query Strings to Page Name Variable

On a related note, I have another gripe related to the Page Name variable and it has to do with query string parameters. Many times I find that companies are including query string parameters in the Page Name variable. This is a really bad idea. Here are two common things I see:

  • When a visitor arrives at the website from a campaign, the URL will have a campaign code in the query string, and that code gets passed into the Page Name variable (e.g. zyz corp:home:homepage:cid-12345)
  • A company will have a search results page and include the keyword/phrase that the user searched on to get to that page in the Page Name (e.g. zyz corp:search:searchresults:user manual)

Both of these examples involve one page essentially being split into hundreds (or thousands) of versions of the Page Name due to the query string parameter. Creating many versions of the same page has the effect of losing Visits, Unique Visitors and Pathing for the true Page Name. Most of the time this situation can be solved by using one Page Name, passing the query string parameter to another variable, and using a Correlation. If you really need to have these extra query string parameters associated with pages, I recommend using another sProp instead of the Page Name variable…
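
As a minimal sketch of the fix (variable numbers are hypothetical), the search results example above would become:

    // Bad: the search phrase splits one page into thousands of page names
    // s.pageName = "zyz corp:search:searchresults:user manual";

    // Better: one stable page name, with the search phrase in its own variable
    s.pageName = "zyz corp:search:searchresults";
    s.prop4 = "user manual"; // hypothetical sProp for the internal search phrase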

Reports with No Data

Another thing I see quite often is implementations that have tons of variables labeled, but that have no data. As a rule of thumb, I recommend you disable any variables that have no data or, at a minimum, hide them from the menus using the Admin Console. There is nothing more frustrating to an end-user than opening up a report, getting excited to see the data and then realizing that there is none! Besides being annoying, it hurts the credibility of your web analytics program. When I am in the midst of a new implementation and things are in flux, one thing I do is put all reports that are coming but have no data yet in ALL CAPS, or I add the phrase “(COMING SOON)” after the variable name. This helps me see which variables are left to do and which ones I can begin to QA. However, once the implementation is semi-stable, I urge you to hide variables that are not coming for a while so you don’t annoy people unnecessarily!

No Menu Customization

On a related note, how many SiteCatalyst implementations have you seen where they use the default menu structure? Why would you want to tell users to look in “Customer Conversion 1-10” to find the report they are looking for? Not very helpful is it?

Instead, you should customize your menus so they make sense for your users. This will help in your adoption and make training much easier. For some great tips on how to customize your menus, check out Brent Dykes’ post by clicking here.

No Variable Standardization

The next one is when you have multiple report suites that are really the same website, just for different business units and/or locations, and none of them are set up consistently. I see many clients who are tracking some things in the US, but not in the UK or Japan, even though the websites are identical. When this happens and you select multiple report suites in the Admin Console, here is what you see in the variable screen:

I call this “Multiple Madness” due to what you see in the Admin Console, and it is not a good thing! You should make sure that your report suites are as consistent as possible so you can minimize your development time and roll up data into higher-level report suites.

Wasting of Variables

This next one is minor, but it is related to wasting variables. Even though there are more variables available now, it doesn’t mean that you should track everything or that every piece of data requires its own variable. For example, I recently ran into a client that was tracking Salutation (Mr., Mrs., Dr., etc…) in an eVar. This makes very little sense. How are you going to do cutting-edge analysis on that? Gender, maybe, but I don’t think Salutation is worthwhile. Just because you know it doesn’t mean you need to track it.

This leads to the other type of waste I see – not using SAINT Classifications to save variables. There are many cases where you can accomplish the same analysis objectives by using SAINT Classifications and save variables along the way. Using the prior example, instead of storing Salutation as an eVar, if you really need it, why not store a Customer ID value and then add Salutation as a classification value of that Customer ID? That saves you one eVar and if you happen to have Full Subrelations on that eVar, you get them on the classification of that eVar as well (which will be less of an advantage when using SiteCatalyst v15 since all eVars will have Full Subrelations).

But here is my favorite example since I see this all of the time! One of Omniture’s common JavaScript Plug-ins is the Time Parting plug-in. This allows you to see data segmented by Day of Week and Hour of Day. However, many clients also store an sProp and/or eVar for Weekday/Weekend through this plug-in. It makes sense that you might want to segment data by Weekday/Weekend, but why use an entirely new variable just to track the binary values of Weekday vs. Weekend? You can easily do a one-time classification of Day of Week and lump Mon-Fri into “Weekday” and Sat-Sun into “Weekend.” That will allow you to achieve the same goal, but saves a variable. Again, this is a minor annoyance, but it is the principle that counts. You can extrapolate this concept by thinking back to the Customer ID example I mentioned above. What if there were ten data points related to a customer that you chose to store in ten separate eVars? You might be able to make these classifications and save ten eVars!
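
As a sketch, assuming the plug-in stores full day names, the one-time SAINT file for this would be as simple as:

    Key         Weekday/Weekend
    Monday      Weekday
    Tuesday     Weekday
    Wednesday   Weekday
    Thursday    Weekday
    Friday      Weekday
    Saturday    Weekend
    Sunday      Weekend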

My advice here is to just be thoughtful when assigning variables and if you have cases where there is a direct relationship between two variables that won’t change very often, consider using a SAINT classification and also think about whether you will ever use that data point for an analysis before tracking it in the first place.

VISTA Rule Chaos

The final pet peeve I will mention is related to VISTA rules. Let me start by saying that VISTA and DB VISTA rules are not bad. They can be very powerful, but it is also true that they can be easily misused and wreak havoc on a SiteCatalyst implementation. When using VISTA rules, it is critical that you and your entire team understand WHEN the rules are being used and WHAT they do in terms of setting variables. I have seen many cases where a developer will change a variable not knowing that there are VISTA rules impacting it. You need to make sure VISTA rules are heavily documented, and as you change your site or implementation, they need to be factored into the equation. One suggestion I have is to add the phrase (SET VIA VISTA) to the name of any variable that is set via a VISTA rule in your documentation so there is no missing it!

The other pet peeve I have related to VISTA rules is when they are used as a “band-aid” to avoid doing real tagging. In the long run, this always comes back to haunt you. I see many clients creating band-aids on top of band-aids until things fall apart. I am ok with companies using VISTA rules to get things done quickly, but I recommend that, over time, you phase out as many VISTA rules as you can and move their logic into your regular tagging so you have all of your logic in one place.

Final Thoughts
Well, there you have it. Not all of my implementation pet peeves, but a bunch of them that popped into my head. I am sure you have seen some fun ones out there and I’d love to hear about them…Please leave them as comments here!

NOTE: For more details on these points, check out my follow-up post here.

Adobe Analytics, General

SiteCatalyst Advanced Search Filters [SiteCatalyst]

One of the features that I find deceptively difficult at times in SiteCatalyst is the Search feature. I feel like there are many times I use it and end up messing it up. Therefore, I decided to do my best to share what I have learned about what works and doesn’t work in the hopes that it will save you aggravation and time! I also hope that many of you can add a comment to this post with your tips and tricks so we can all learn something…

The Basics

First, let’s start out with the basics. Hopefully if you are a SiteCatalyst user you know that the search function is used to filter results in eVar and sProp reports. You simply enter a value and SiteCatalyst will look for those values in the active report and return those rows. This is handy because you can bookmark reports, make custom reports or add reports to dashboards after you have created the filter so that you never have to apply it again.

For example, let’s start with a Pages report like this:

Obviously we have pages from all sorts of countries, but if we only wanted to look at pages from England, all we would have to do is enter “SFDC:uk:” in the search box (top-right) and we would then see a report like this:

But what if we wanted to see pages from England or France? At this point we have two options. You can either enter “SFDC:uk: OR SFDC:fr” in the search box or use the advanced search editor. Here is what it would look like with the OR statement in the regular search box (look at the top-right portion):

However, believe it or not, if you change the “OR” to be a lower case “or” you will get no results! I kid you not! I call that an “Omniture-ism” and you just have to remember it…

The other way to get to the same report is to use the Advanced Search tool. You get there by clicking on the Advanced link to the right of the search box. Once there, you would enter the appropriate phrase in the first box, click the “+” sign to add another search criteria and then enter the second phrase so it looks like this:

However, it is important that you change the top drop-down box from the default of “if all criteria are met” to “if any criteria are met” or you will get no results.

If you wanted to look for cases where there were pages on the UK website that had the phrase “form” in the page name, that would be a case where you would use the “if all criteria are met” option, and your query should look like this:

This would result in a report like this:

Finally, we can come full-circle and get more advanced and use an “AND” statement in the standard box to get the same result. Here is what the search box would look like:

Again, keep in mind that the “AND” is case-sensitive…

More Difficult Searches

So now that we have covered the basics, let’s get a bit more advanced. First, let’s keep going with our example and say that we need to find all pages in the UK or France that have the word “form” in them. This gets a bit tricky because we are mixing OR and AND statements. Using the Advanced Search query builder, here is how you would enter it:

Conversely, if for some reason we wanted to see any UK Pages that had the phrase “form” in them and all France pages (not sure why, but this is just an example), we would enter this:

Which would result in a report like this:

Note that in this case we had to change the drop-down box back to the “any criteria” option since we did the AND statement within one of the criteria (hey…I told you this was the difficult part!).

The trick here is to combine any AND logic within a single criteria row, since the rows themselves can only be joined together one way: by the “all criteria are met” (AND) or “any criteria are met” (OR) setting.
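
In other words, for the “UK form pages plus all France pages” example above, the builder would be set up roughly like this (a text rendering of the screenshot):

    Show results:  if any criteria are met
    Criteria 1:    SFDC:uk: AND form
    Criteria 2:    SFDC:fr: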

On a separate note, in the advanced search area, you can change the drop-down which defaults to “Contains” to “Does Not Contain” so if, for example, you wanted to see all UK pages, but exclude those that had “login” in the name you would enter the following criteria:

Note that for this instance, we need the “all criteria are met” option…

Finally, just for fun I entered the following phrase in the “simple” search box…

…and miraculously it produced the same results!! I decided to stop here before I broke anything, but you can feel free to see how far you can push this!!

But wait…There’s more! I have been amazed by how few people I meet know this next one… Imagine that you are looking at an eVar report and you have broken it down by another eVar via Subrelations. Here is an example where I have taken the Site Locale eVar and broken it down by Internal Search Term:

Now, let’s say that you wanted to do a search filter to only see items that mention “Outlook.” The easy way to do this is to just enter the phrase “Outlook” in the search box and SiteCatalyst will show any rows that have that phrase. But what if you wanted to see the phrase “Outlook” in just United States or Japan? No matter what you put in the search box, you will not get the results you are looking for (i.e. outlook AND “united states” OR japan). Would you know how to do this? Most people I meet don’t. Here is how…

When you are using a Subrelation report, you have to keep in mind that SiteCatalyst is running two reports and it doesn’t know which report you want to filter on. Therefore, we need to tell SiteCatalyst which report we want the search term to be associated with. You can do this in the Advanced Search area. When you have a Subrelation report, and you click on the Advanced Search area, you will see a new option that allows you to select one of the two reports being subrelated like this:

Most people haven’t ever noticed this option, so now that we know it is there, all we have to do is select the right report and enter each search term next to the correct report to get our results. For the example above, we would enter “Outlook” in the search box next to Internal Search Term and “United States OR Japan” in the search box next to Site Locale, like this:
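
Since the screenshot does not reproduce here, a rough text rendering of the two search boxes:

    Site Locale:            United States OR Japan
    Internal Search Term:   Outlook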

Now, since we have been a bit more specific, we can get a nice, clean report like this:

Just keep this handy feature in mind the next time you are trying to search in a Subrelations report and pulling your hair out because you can’t get the results you think you should!

Even More Difficult Stuff

Phew! If you’ve made it this far, you are really devoted to your craft. We’re almost there so hang on…

The next thing that is important to know is that you can use wildcards in your searches. To do this, you use the “*” symbol in the search query. For example, if we wanted to find any pages in the UK that have the phrase “landing” somewhere in the name, we could simply do a search like this:

The next thing to know is that Omniture can be a bit quirky when it comes to the [SPACE] separator in the search box. Let me illustrate. If I enter the phrase “home page” in the search box, here are the results I get:

This seems strange to me since none of these pages have a space in them. That would make you think that a [SPACE] is a valid separator and that this query is the same as “home OR page,” right? But if I use that logic and enter the phrase “SFDC:uk: SFDC:fr:”, which is really just two phrases separated by a space (just with colons in the phrases), I get no results. I am sure there is a logical reason for this, but I am not sure what it is. Maybe if SiteCatalyst sees a “:” or a “|” it acts differently (maybe Jorgen can enlighten us on this)?

To be safe, I use the next feature – using quotes – whenever possible. My advice is that if you ever have phrases with spaces in them, enclose them in quotes and stick to using OR statements. In the preceding example, if I change my home page query to be “home page” in quotes, I get the expected result, which is no results. Another lesson to be learned here is that you should, whenever possible, avoid putting spaces in values that you think you will search upon. I do my best to remove all spaces from page names since that is the variable I search on the most!

Finally, you can use the “-” sign to remove things from search results. This produces the same effect as using the “Does Not Contain” feature in the advanced search area. As in the previous example, if I want to see all UK pages, but not ones that have the phrase “login”, I can enter the following in the search box:

To see UK pages that do have login in the name, you can also enter this phrase:

But when the results come back, it will mysteriously remove the “+” sign and just use a space as the separator, producing the same results.
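
To recap the simple search box syntax covered above in one place (wildcard placement is approximate; remember the caveats about case and spaces):

    SFDC:uk:                 contains the phrase
    SFDC:uk: OR SFDC:fr:     contains either phrase ("OR" must be upper-case)
    SFDC:uk: AND form        contains both phrases ("AND" must be upper-case)
    SFDC:uk:*landing*        wildcard match within the phrase
    "home page"              exact phrase containing a space (use quotes)
    SFDC:uk: -login          contains the first phrase, excludes the second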

Final Thoughts…
So there you have it! Pretty much everything I know about using search and advanced search in SiteCatalyst. Do you have any additional tips or tricks? If so, leave a comment here…Thanks!

Adobe Analytics

My Latest SiteCatalyst Wishlist Items [SiteCatalyst]

A few weeks ago I was at the European Adobe (Omniture) Summit in the UK and had the pleasure of being in another one of Brett Error’s “what features are we missing” sessions. I find these sessions to be good and bad at the same time. The good part is that people are expressing what they need and others can validate or invalidate ideas in real-time. The bad part is that I often feel that the features that get voted up are the ones that are easy to understand (like Bounce Rate as a standard metric!), but that there are many features that people SHOULD want and don’t know it yet. I don’t mean for that to sound pretentious, but the fact is that many people have been using the product for only a few years, and it is natural that those who have been using the product for many more years will have some more advanced feature requests. Unfortunately, many of these advanced features, no matter how important, will be trumped by more basic, globally understood feature requests.

The creation of the Ideas Exchange has been a great help in getting ideas big and small into the product, and I am pleased to see that many of the ideas in there have already been implemented; for that I commend Adobe (Omniture). I think the positive feedback around SiteCatalyst v15 is a direct result of people seeing their ideas manifested in the release.

In this post, I wanted to highlight a few ideas that are in the exchange that might not get as much “play” as they should and why I think they should be undertaken. If you agree and have a Login ID to SiteCatalyst, please feel free to login and vote for them!

SAINT Auto-Classifications
One of the ideas that came up in the UK session I mentioned earlier (and received the most votes!) was the notion of SAINT Auto-Classifications. This idea was submitted by Ben Gaines (probably as an initial test of the Idea Exchange!) the day the exchange came online. As most users know, SAINT Classifications are a way to add meta-data to values you have already captured in SiteCatalyst, similar to a pivot table in Microsoft Excel. However, SAINT Classifications have to be uploaded manually, which becomes very tedious over time. The feature request is to provide a way for administrators to set up rules that auto-classify items or classify them on the fly (as reports open up). For example, if I have a report of campaign tracking codes and a bunch of them start with “seo|,” I could set something up where these would all be automatically classified as “SEO” in the Marketing Channel classification I have set up. This is just one example, and the possibilities are endless.

The great news is that this idea has recently been changed to “Under Review” and geniuses like Sean Gubler have started playing around with tools to do this so I feel like it is only a matter of time before we see this. Keep your fingers crossed and vote for the idea by clicking here.

Multi-Session Attribution (Allocation)
The next idea is related to eVar attribution. Currently, you can attribute success to eVar values using First Touch or Last Touch. There is an option for Linear allocation, but that only works within one session, so it is rarely used. The closest thing available for multi-session attribution is the Cross-Visit Participation plug-in, which is really just a “hack” that concatenates eVar values into one string. This plug-in can be useful at times, but has some serious drawbacks.

In today’s world of people bouncing between websites and social media, you cannot count on the visit in which people convert being the same one in which they arrived from a marketing campaign. Therefore, you often have cases where a visitor comes from an SEO keyword, does some product research, leaves the site, comes back the next day from a paid search ad, leaves the site and then comes back a third time by typing in the URL and then converts. This string of traffic sources is difficult to track and analyze using the eVar allocation feature set available today. What I feel is needed is a way for SiteCatalyst to simply extend its Linear Allocation feature to include multiple visits and make that a legitimate setting in the Admin Console. I’d even pay more for it if needed, since not everyone will need that level of sophistication. I personally think that attribution will become a bigger issue in the future as the current browser model fractures, so I think this will be an important feature for all web analytics vendors going forward. You can read some of my partner Eric Peterson’s thoughts on appropriate attribution in this white paper. If you’d like to see SiteCatalyst go deeper with attribution, please vote for this idea by clicking here.

Multi-Session Pathing
Along the same lines, the next idea I’d like to suggest is the notion of multi-session Pathing. I submitted this to the Ideas Exchange over a year ago and was surprised to see that it only has 7 votes! Currently, Pathing reports are limited to one session. However, it is often the case that visitors come to your website multiple times before they convert. Wouldn’t you want to see paths that span multiple visits for the same person? I realize that this can be data intensive, but even if it is limited to a subset of data, I think it would be interesting to pick a subset of visitors and see what they do over multiple visits. Currently, you can’t even do this in Discover. While I am not sure of the exact way the feature should be implemented, I feel that having some insight into multi-session Pathing is important and should be somewhere on the roadmap. If you agree, you can vote for this idea by clicking here.

Expire eVars Based Upon Event or Time
The last feature request I’ll mention has to do with expiring eVars. Currently, you can expire an eVar based upon a time period (like Visit or 30 Days) or a Success Event, but not both. So why is this important? Imagine that you have an eVar set to expire at the Purchase event. A person could come to your website from a specific campaign code, not return for an entire year, and then convert. In that scenario, the campaign code they came from a year ago would get credit for the conversion. However, there are cases in which you would not want that to happen, so it would be great if it were possible to have SiteCatalyst expire the eVar at the Purchase event or after 30 days – whichever comes first. That would offer much more flexibility and tighten up eVar attribution across the board. Someone also commented on this idea suggesting that an eVar be allowed to expire at Success Event X or Success Event Y. That would also be helpful. If you’d like to see this implemented, please click here to vote for it.

Final Thoughts
As I mentioned at the beginning of this post, there are some features that could have a big impact if added to SiteCatalyst, but they are ones that only those who have been through some big battles would know are needed. My hope is that you will think about these features and support them with your votes so we can all benefit. Thanks!

If you have any questions or want to learn more, feel free to contact me for more information.

Adobe Analytics

5 Social Media Secrets – M.Tech 2011

The folks over at Thoughtlead have put together what they’re calling a Digital Influence Collaborative. It’s an innovative and exciting new way to consume content in microbursts. If you haven’t gotten wind of these events yet, you’re missing out.

They typically feature 60 influencers on 60 topics in 60 seconds. Topics vary from Social Media to Enterprise Marketing Management.

Here’s a mashed version of the one I delivered for M.Tech 2011:

5 Secrets for LEARNING from Social Media

Adobe Analytics

Time Zone Trick [SiteCatalyst]

EDITOR’S NOTE:
Since joining Analytics Demystified, the most common email/comment I have received goes something like this:

“When are you going to get back to blogging about cool, advanced stuff you can do in SiteCatalyst?”

While I am vendor-agnostic in my new role, I will do my best to keep sharing the SiteCatalyst tips & tricks I used to on my old blog. My hope is that as I work with clients using all web analytics vendors, I will branch out and share tips & tricks for all technologies. However, as I always tell people, the goal of my blog posts is to introduce concepts that can be applied to all web analytics tools…

Now on with a new tip/trick…

Dealing With Time of Day (Time Parting)

One of the analyses that I have done from time to time is Time Parting Analysis. Time Parting Analysis consists of looking at the time of the day (or day of week) that website success takes place in order to better understand its importance. While I don’t usually put a whole lot of stock into the time of day, there can be times where websites do much better/worse in the morning vs. evening. Knowing this can be used when planning advertising so you can “strike while the iron is hot,” so to speak.

If you think Time Parting might be important to your business, you should capture the time of day in some manner into variables in your web analytics tool. For example, if you use Omniture SiteCatalyst, you might use the Time Parting Plug-in to pass the time of day (in half-hour increments) to an eVar or sProp. Doing this allows you to look at a report that might resemble the one shown here:

As you can see, this report allows us to see when the action is taking place on our website, down to the half-hour increment. If you are not already doing this type of analysis, it may be worthwhile since you can glean some new insights and use this data point for visitor segmentation.
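
If you are curious what the tagging looks like, here is a minimal hand-rolled JavaScript sketch of the same idea. The real Time Parting plug-in is more robust and lets you pin the output to a reference time zone; this version simply uses the visitor’s local clock, and prop5 is a hypothetical variable:

    // Bucket the current time into half-hour increments, e.g. "8:30pm"
    var now = new Date();
    var h = now.getHours();
    var h12 = (h % 12 === 0) ? 12 : (h % 12);
    var half = (now.getMinutes() < 30) ? ":00" : ":30";
    var ampm = (h >= 12) ? "pm" : "am";
    s.prop5 = h12 + half + ampm; // hypothetical "Hour of Day" sProp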

Time Zone Hell!

However, inevitably you will run into a few problems with the above report. First, someone at your organization will ask you which time zone the above report reflects. Therefore, the first thing I recommend is that you clearly label your Time Parting reports with the time zone that the JavaScript file is using to capture the data. In the example above, the “Hour of Day” report was labeled as PST (Pacific Standard Time) so it can be easily interpreted by everyone using it.

The next problem you will encounter is that of multiple time zones. If you work at a global organization and have people focusing on business in various locales, the above report is pretty much useless to many of your internal customers. If they happen to be good at math and can calculate time zone differences in their head, then you’ll be ok, but most people have enough trouble interpreting web analytics reports without the added labor of doing on-the-fly time zone translation!

Want to see this problem in action? Take a closer look at the report above. Do you notice anything strange? If you look closely, you will see that most of the Visits and Form activity took place in the evening. People might like your product(s), but not so much that they are willing to spend their evenings looking at them! The reason the above report looks strange is that it is for an Australian website, but the time zone is Pacific Standard Time. If you are a web analyst in Australia, seeing your website success events in the Pacific Time Zone is not super-helpful!

So how do we fix this? All it takes is a bit of creativity and meta-data. Keeping in mind that there is a direct relationship between time zones, you can take the above report and apply meta-data to it to adjust for alternative time zones. If you are using Omniture SiteCatalyst as in the example above, this means using SAINT Classifications. By applying a different SAINT Classification for each time zone you care about, you can create new reports for each time zone. Here is an example of what the SAINT file might look like for a few additional cities:
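
Since the screenshot does not reproduce here, a hypothetical sketch of a few rows (the four city columns and their offsets are illustrative, ignoring daylight saving time):

    Key                New York Time    London Time      Tokyo Time       Sydney Time
    12:00am-12:30am    3:00am-3:30am    8:00am-8:30am    5:00pm-5:30pm    6:00pm-6:30pm
    12:30am-1:00am     3:30am-4:00am    8:30am-9:00am    5:30pm-6:00pm    6:30pm-7:00pm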

As you can see here, we took the data that was already being collected (the Key column, which in this case is PST) and added meta-data for four additional cities. You can add as many cities as you want and each column you add will create a new report for that city time zone. Once you have done this, you can see a new version of the report above adjusted for each time zone. Now if we look at the same report above, but use the Sydney Time Zone classification report, we see a report like this:

You will notice that we are seeing the same exact data as in the first report, but now the times of the website successes are adjusted for the Sydney time zone. This makes the report look a bit more normal for the Australian web analyst, as the success events are now shown as taking place during more realistic business hours. The best part of this solution is that anyone using the standard Time Parting plug-in Omniture provides can use the same SAINT Classification file. It just needs to be adjusted so the “Key” column is the time zone in which you are collecting the data. If you are using the PST time zone, you can download the file I showed above. If you are in a different time zone, you can still download the file and adjust it as necessary.

Caveats

As always, there are a few caveats with any “hack,” so here are mine:

  • I take no responsibility for daylight saving time, which can wreak havoc on time zone translations, but even in that worst case, your data will only be an hour off…
  • Time Parting reports can also be used to track Day of Week. Day of Week is harder to adjust for time zone differences than Hour of Day is, unless you are time-stamping using the actual date and are willing to have a massive, multi-year SAINT Classification file. That is not a bad approach, but it is much more involved. Contact me if you’d like to explore this.
  • It is possible to collect time zone data using a different time zone for each report suite. For example, it may be better for you in the long run to have your Sydney data collected in the Sydney time zone and your London data in the London time zone, but I have often seen clients have issues with this, and if you don’t do it from the outset, you can have trouble switching to it later. Please consult your account manager for more details.

Final Thoughts

So there you have it, a few thoughts on Time Parting and a fun trick to make it more useful if you do business in multiple time zones. Give it a whirl and let me know what you think…

If you have any questions or want to learn more, feel free to contact me for more information.

Adobe Analytics

Welcome to SiteCatalyst v15

Among the many announcements Adobe made at the 2011 Omniture Summit (#omtrsummit), probably the most anticipated was the release of version 15 of the flagship SiteCatalyst product. Those of us who follow SiteCatalyst regularly know that this release has been a long time in the making. Unfortunately, Omniture didn’t provide much detail in the keynote about specific enhancements, so in this post I will try to highlight some of the key things that I have heard about this new release (but in the interest of sharing info in semi-real-time, forgive me if I am not 100% correct and keep in mind I am writing this between sessions!). Since version 15 isn’t scheduled to be released right away (April?), not all features listed here are set in stone, and as more details emerge about the release, I will follow up with additional information/corrections…

Instant Segmentation
The ability to segment data has always been a two-step process in SiteCatalyst. You could segment your data by passing values into eVars and sProps or by utilizing DataWarehouse/ASI, but if you wanted real-time segmentation you had to pay additional $$ for Discover or Insight. Unfortunately, most of the available options required you to wait for your segmented data, which is not ideal from a web analytics perspective. However, when Google’s free analytics product released the ability to segment data in real-time, it became apparent that SiteCatalyst’s segmentation capabilities needed to be improved. The masses asked why they were getting less functionality than a free product.

With version 15, Omniture will now provide the ability to segment data in real-time. This will go a long way toward appeasing those who realize that segmenting data is almost as critical as collecting it. Instant segmentation will allow casual users to slice and dice website data without having to go through power users and then wait for the data to process. As you might expect, when business users have questions, they usually want the answer NOW! Forcing them to wait causes you to lose momentum and hinders adoption, so I think this feature will really help create more data-driven cultures. While this feature will be a big hit with the SiteCatalyst community, I expect that other web analytics vendors will position this as Omniture gaining parity with what they have already had.

One outstanding question I have is what this release means for ASI. Does this product/feature go away? Do customers who have paid for it get some $$$ back?

New Architecture
So why did it take so long to introduce instant segmentation? Well, the answer lies in the next big item Omniture discussed – a next-generation architecture. While I am not privy to all of the details, Omniture has stated that they redesigned the entire back-end of SiteCatalyst so that it could scale better and provide additional functionality like instant segmentation. Unfortunately, since most end-users won’t ever see the “back-end” of SiteCatalyst, it will be hard to appreciate what went into it, but if this new architecture is as described, it should allow for more features and faster product improvements in the coming months/years.

However, there is one important catch to this v15 architecture. End-users will not be able to upgrade to v15 by themselves, but instead will need to work with Omniture to upgrade. This is due to the fact that once you upgrade, there is no going back, because v15 processes data differently than its predecessor. In general, v15 will process data in a manner that is more similar to Discover, so users of both products should find that their data between SiteCatalyst and Discover matches much more closely going forward. However, this also means that SiteCatalyst v15 will approach things in a slightly different manner, which could result in key metrics like Visits being slightly different than they were in previous versions. Looking at YoY data could therefore show some variances, but SiteCatalyst will have an alert that tells you when you are comparing pre-v15 data to v15 data so you are at least aware of this potential anomaly.

While it remains to be seen how the SiteCatalyst community reacts to this, my hunch is that most clients will bite the bullet, upgrade to v15, deal with this one-time data discrepancy and take the functionality benefits that v15 provides.

More eVar Subrelations
Power SiteCatalyst users will rejoice in the fact that they can now have full subrelations on more (all?) eVars. This means that you can break down more eVars by other eVars. In the past, you could only select a few conversion variables for which you wanted to see breakdowns, but this limitation will be reduced (abolished?) in v15. This is huge news and is another example of why the new back-end architecture is so vital.

UPDATE: Brett’s closing session suggested that ALL eVars and sProps could be broken down by each other. I have heard conflicting things on this so stay tuned!

Trend Multiple Metrics!
While it may not sound super-sexy, one new feature of the v15 release is the ability to trend more than one metric at the same time. To date, you can view multiple metrics in a “Ranked” report, but as soon as you switch to the trended view, only the first metric is trended. This has been a real bottleneck, forcing people like me to create additional reports in ReportBuilder to get this functionality. Version 15 solves this and I am told that it will continue to improve over time.

Visits & Visitors in all Reports
Another sticking point for SiteCatalyst users was that you could not see Visit/Visitor metrics in all reports. There were numerous workarounds, but most had an additional cost associated with them. In v15 you can see both metrics in most reports. This will be a welcome addition, especially in conversion reports where they are needed the most. I have not confirmed whether Visits and Visitors will have full subrelations or not.

Ad-Hoc Unique Visitors
Currently in SiteCatalyst you can see unique visitors for set time periods, such as day, week or month, but if you choose an ad-hoc date range, you cannot see an accurate unique visitor count. In version 15, Omniture has rectified this, as it did previously in the Discover product. This feature will bring SiteCatalyst closer to parity with other web analytics vendors who have been providing unique visitor counts for any timeframe.

UPDATE: Brett’s closing session mentioned that you can also see ad-hoc unique visitors for Pages. That might mean that arbitrary-timeframe unique visitors will be available for all sProps?

Bounce Rate
After many requests from customers, Bounce Rate will finally be a standard metric in SiteCatalyst. Initially it sounds like it will be limited to a few reports, but it looks like it will become more pervasive in the future. While it has been possible to create workarounds to compensate for not having Bounce Rate as a default metric, I think the general population will be happy to have this baked into the product.

Video Enhancements
Previously, video data was relegated to video-specific reports only. Those clever enough would add custom metrics and eVars to get around this, but now it appears that you no longer have to do this to see video data in all SiteCatalyst reports.

New iPad App
In v15, the SiteCatalyst iPad app is getting a huge overhaul and will allow for more advanced web analysis on-the-go:

General UI Enhancements
Mixed into this release are a bunch of UI enhancements that people will probably like. These include searchable menus, report-specific default metrics, some new dashboard features and more hand-offs between multiple Omniture products (like sharing segments with Test&Target). I think most will notice that Omniture spent some cycles thinking about how an analyst uses the tool on a daily basis.

Long Live the Idea Exchange!
Lastly, I wanted to take a moment to thank Omniture for listening to its customers via the Idea Exchange. Many of the items above were highly voted upon by the Omniture community through the Idea Exchange. Omniture has done a great job of listening to its clients throughout the year (in addition to Brett’s fun Summit session!) so that it can focus its development efforts on what the majority of people are saying they want. It takes real courage as an organization to open up and ask customers what they want, interact with them, let all customers see this and then deliver the top items. While it sounds like common sense, there are very few vendors doing this today and I applaud Omniture for being forward-thinking about it. A special shout-out goes to Bill, JD and Ben who worked hard to champion this effort and I hope that v15 and beyond are the better for it…

UPDATE – ADDITIONAL FEATURES MENTIONED AT BRETT’S CLOSING SESSION

Dashboard Segmentation
In v15 it will be possible to apply real-time segments to SiteCatalyst Dashboards, which will change all reportlets on the dashboard.

Default Metrics by Report
In current versions of SiteCatalyst, you can set default success event metrics for conversion reports, but it is an all or nothing proposition. In v15, it sounds like you will be able to assign different default metrics for different conversion reports.

Data Warehouse Improvements
It sounds like v15 will provide more information about pending DataWarehouse requests and possibly allow for re-running (or “Save As”) of DataWarehouse requests. The latter will be a huge time-saver since, today, you have to re-create each request from scratch to make any changes…

Adobe SocialAnalytics
In addition to the new version of SiteCatalyst, another related product release is the Adobe SocialAnalytics product. This new product will complement SiteCatalyst and will allow companies to monitor all social media activity. It will be positioned as a competitor to Radian 6 and others in the social media measurement space. Key parts of this product are measurement for Twitter, Facebook and YouTube. Personally, I am excited that Omniture has formalized some of the cool social media tracking things I spoke about a few years ago and is delivering on past promised features like viral video measurement. This new product will allow you to see Social Media metrics side by side and filter on specific influential users, but it doesn’t appear to show whether it is the same people who are coming from Social Media sites and then converting (if that is even possible!). Unfortunately, I believe this new product won’t be available to everyone until Q3, but it is interesting to see it, especially in the context of what was announced by Webtrends last week around social dashboards.

Final Thoughts
While I will defer final judgment until I learn more about all of the new v15 features, at a high level I think that version 15 is a big step forward for Omniture. While there are not hundreds of new features, they have hit some really big ones that will have a real impact for power users. I predict that Omniture’s competitors will discount this release by saying that SiteCatalyst is now providing functionality they have had for years. While I could see that argument (and don’t disagree with it), I will offer the following perspective. SiteCatalyst experienced tremendous growth over a very short time frame as Omniture went from a vendor that no one had heard of ten years ago to one of the most popular web analytics tools used in the enterprise. With that growth, it was likely hard to change SiteCatalyst’s back-end architecture, whereas other tools have either been around longer (and had more time) or came around afterwards (and had the ability to start with a clean slate). It is in that context that I still believe that Omniture is taking a big step forward and that the move to a new architecture is probably the right move for the SiteCatalyst product. I will be curious to hear your thoughts as you start seeing more about the product this week and beyond…

So those are a few of my favorite new features of SiteCatalyst v15 and some thoughts on SocialAnalytics and the new platform. What are your favorites? Have you heard of others? Are there any I listed that are not true? Which features were you hoping for that didn’t make the release?

As always, if you have any questions about SiteCatalyst or migrating to v15, feel free to contact me to learn more. Thanks!

Adobe Analytics, Analytics Strategy, Conferences/Community, General

Conference Season is Upon Us

Wow, I just got done looking more closely at the Analytics Demystified team calendar for the next few months and it is a doozy! Chances are if you live in the U.S. and do any type of digital measurement, analysis, or optimization professionally we are going to see you between now and the end of March.

If that is the case, we’d like to buy you a drink!

Despite each of us presenting, often multiple times, we are always happy to make time for our clients and potential clients when we are out and about. If you realize you’re going to be at one of the following events, why not drop us a line and we’ll see if we can connect? Who knows, maybe we’re planning a great party or something …

After all that, the three of us are going to slink home to our loved ones and try to convince them we are in fact their fathers, husbands, and sons.

Seriously, though, we never get enough opportunities to meet with partners, friends, and prospects at these events so if you’d like to meet with any or all of us please drop us a line sooner than later so that we can block time and make plans.

Adobe Analytics, Analytics Strategy, General

Free webcast on Tag Management Systems on Jan 25th

Given the considerable buzz in the marketplace regarding Tag Management Systems and vendors like Ensighten, TagMan, and BrightTag, I wanted to call your collective attention to a free webcast I am participating in next week on “The Myth of the Universal Tag.” On Tuesday, January 25th at 1:00 PM Pacific time I will be presenting with Josh Manion, CEO of Ensighten, and Brandon Bunker, Senior Manager of Analytics at Sony, detailing some of the advantages I see in the adoption of a tag management platform.

What’s more, the nice folks at Ensighten have taken the registration form off of my white paper on tag management systems and so everyone is free to read all of my thoughts on Tag Management without prompting a sales call.  How cool is that?

Spread the word:

“The Myth of the Universal Tag” free webcast sponsored by Ensighten
Tuesday, January 25th, 1:00 PM Pacific / 4:00 PM Eastern
Register online now at GoTo Meeting!

Don’t forget to download that free copy of my white paper on tag management systems!

Adobe Analytics, Analytics Strategy, Conferences/Community, General

Want to meet Adam Greco? Go to OMS 2011 in San Diego!

By now I hope you have heard that Adam Greco is joining John and me as a Senior Partner in Analytics Demystified. While his official start date is still a few weeks away, he’s already on the road as part of the Demystified team. If you’d like to meet Adam in person and talk with him about the practice he is building, there are a few places I happen to know he will be in the coming months:

  • Adam will be participating in the Web Analytics Association (WAA) Symposium in Austin, Texas on Monday, January 24th. Adam will be talking about integrating web analytics and CRM, which is core to his practice area given his past work at Salesforce.com and Omniture.
  • Adam will also be presenting at the Online Marketing Summit in San Diego, California on Tuesday, February 8th. He’ll be giving the same presentation on web analytics and CRM, discussing how to move marketing analytics from the server room to the board room.
  • Adam will also be joining me in Minneapolis on Wednesday, February 16th for a special Web Analytics Wednesday sponsored by our good friends at SiteSpect and with generous help from our friends at Stratigent. We don’t have the details on the site yet, but the event will be in downtown Minneapolis, and Adam and I will be doing some prognostication and fielding questions from Twin Cities locals.

Adam will also be at Webtrends Engage, Adobe’s Omniture Summit, and the Emetrics Marketing Optimization Summit but we’ll post more on that when additional details emerge.  Suffice to say Adam will be busy in his first few months on the job.

If you haven’t met Adam I would encourage you to head out to one of these events and introduce yourself. Especially if you’re a marketer and are considering the Online Marketing Summit — if you haven’t been to OMS you really need to go.  Every year I am absolutely blown away by the job that Aaron Kahlow and the OMS team do bringing that conference together.  OMS draws amazing speakers, amazing sponsors, and most importantly amazing conference participants and delivers an absolute fire-hose of information.

I’m sincerely bummed that Adam is taking my place at OMS this year — I haven’t actually missed a big OMS event in California ever — but I am confident that the audience will benefit greatly from Adam’s message about CRM integration, his direct experience at Salesforce.com, and his distinct presentation style.

Adobe Analytics, General

Form Submit Button Clicks

At the end of last year, I spent a bunch of time showing how you could dissect your website forms to see which were performing well and not so well. While this post will be different from those, it is still related to website forms. In this post, I am going to share a concept that will let you determine which of those visitors seeing your forms have the intention to complete them and which do not. This information can be very valuable as I hope to show.

Which Forms Get Visitors to Take Action?
If you have forms on your website, I hope that you are at least doing the basics and tracking how many people View each Form and how many Complete each Form like this:

This will allow you to have a rudimentary view of how each website form is performing. However, one shortcoming of this is that you only have two points of comparison. As a web analyst, I always like to have more data points to slice, dice and analyze. The report above answers the question: “How many people who see each form decide to complete it?” What if you wanted to know how many people who see each form try to complete it? That might be an interesting data point, since sometimes when you do a lot of Paid Search or Display Advertising you could be driving less qualified traffic to your website. Therefore, what I like to do is create a new metric that I call Form [Submit] Button Clicks. This Success Event is set when website visitors click the button that you place on your form (duh!). By doing this, you have essentially created a wedge between the Form Views and Form Completes metrics shown above such that you can create a report that looks like this:

As you can see here, in the first report above we knew that only 786 of the 2,246 Form Views turned into Form Completions. However, with the second report, we now know that visitors to that specific form clicked the Form Submit button 830 times. That means that 44 times they tried to complete the Form, but were unable to for one reason or another (maybe Form Errors).
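For those curious about the implementation side, here is a minimal sketch of how the Form [Submit] Button Clicks Success Event might be set when the button is clicked. The event number, eVar number and element ID below are all hypothetical; your implementation will differ:

  // Hypothetical: event3 = Form [Submit] Button Clicks, eVar1 = Form ID
  var btn = document.getElementById("form-submit-button"); // hypothetical element ID
  btn.onclick = function() {
    s.linkTrackVars = "events,eVar1";
    s.linkTrackEvents = "event3";
    s.events = "event3";                  // Form [Submit] Button Clicks
    s.eVar1 = "demo-form";                // same Form ID used for Form Views/Completes
    s.tl(this, "o", "Form Button Click"); // custom link call so no extra page view is counted
  };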

Dig Deeper With Calculated Metrics
Once you have this cool new Form Button Clicks metric, you can then create some fun new Calculated Metrics that let you dig even deeper. Here are two that I suggest: Form Button Click Rate & Form Button Click Fail Rate. The Form Button Click Rate is the number of Form Button Clicks divided by the number of Form Views. This metric shows you what percent of people viewing the Form actually click the button as shown here:

In this report you can see which forms on your website are doing a good job at getting visitors to click the button. Forms with low percentages might indicate that there are too many fields, poor content or a bad offer. You can use this report to zero in on which forms represent the biggest opportunity for improvement. I like to bubble-chart this data such that the forms with the most Form Views and the lowest Button Click Rate move to the “magic quadrant.”

The next Calculated Metric is the Form Button Click Fail Rate. This represents the percentage of times visitors click the Form Submit button but fail to have a Form Complete. These people represent your “lowest-hanging fruit” since, by clicking the button, they have implicitly told you they are somewhat interested in you! You create this metric by dividing the difference between Form Button Clicks and Form Completes by the number of Form Button Clicks as shown here:

In this case, for the first form, about 5% of people who click the button don’t make it to a Form Complete, but the last form shown in the report seems to have some issues since 62% of Form Button Clicks don’t make it to a Form Complete. You may want to start doing some testing on that form!
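Both of these Calculated Metrics are defined in the SiteCatalyst admin interface rather than in code, but just to make the math concrete using the example numbers from the earlier report:

  // Illustration only – the metrics themselves are built in the Calculated Metrics UI
  var formViews = 2246, buttonClicks = 830, formCompletes = 786;
  var buttonClickRate = buttonClicks / formViews;                          // ~0.37 (37%)
  var buttonClickFailRate = (buttonClicks - formCompletes) / buttonClicks; // ~0.053 (5%)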

As is always the case, whenever you create new Calculated Metrics you can see them as general metrics in addition to using them in eVar reports. Therefore you can set Alerts and see trends for both of the metrics described above:

What I like about these two metrics is that one shows you how good you are at getting people to click the button on the form (how good your offer/content is) and the other tells you how good you are at closing the deal once a visitor has decided to give you a chance. Those who have managed websites realize that there are very different tactics used to solve these two very different questions so having these metrics can really help you focus and use your precious website resources as efficiently as possible.

Don’t Forget Your Other Reports!
While the above reports hopefully get you excited, don’t forget that you already have many reports that can be combined with the information above to get even more value. For example, one of the reports I use a lot is the Traffic Driver (Unified Sources) report which shows me how each visitor got to my website. Wouldn’t it be cool if I could see Form Button Clicks and the above two new Calculated Metrics by Traffic Source? Well…you can! All you have to do is add these metrics to your existing Traffic Sources report like this:

Now you can see how each channel is doing! It looks like Paid Search (SEM) is generating lots of Form Views, but only gets 12% of them to turn into Form Button Clicks, and when someone does click the button, 55% don’t end up successfully making it to a Form Complete. Contrast this with SEO, which fares a bit better: 30% of its Form Viewers click the button and, of those, 75% make it through to Form Completion. You can imagine how powerful this data could be and how you could use a product like Test&Target to come up with ways to improve these conversion rates by traffic source.

If you want to get even more granular, you can break this report down by the root traffic driver so you can take specific actions. In the following report, I can see the Paid Search ID’s that make up the Form Views and the other metrics and see how each performs individually:

Here we can see that there are some Paid Search keywords that are doing well (getting people to click the submit button over 20% of the time) and others that are under-performing (less than 15%). You can use these metrics to help drive your Paid Search strategy or possibly automate this using SearchCenter. Finally, in this fictitious example, I have made row three have zero Form Completes, but a 32% Form Button Click Rate, which would indicate a major issue with the form that should be addressed.

One last example of leveraging an existing report would be the Visit Number report:

Here we can see that the Form Button Click Rate is pretty consistent (up a bit in the 3rd visit), but interestingly, our Form Button Click Fail Rate appears to decrease over time. Perhaps the more time visitors take to get to know us, the more willing they are to deal with all of the information we ask for on our forms!

Final Thoughts
Well there you have it. I always find it so amazing that adding one simple Success Event in the right place can open up so many new web analysis opportunities. If you have forms on your website, I hope this will help you learn more about your users and how they are interacting with your forms. Let me know if you have any questions…

Adobe Analytics, General

Tracking Form Errors (Part 3)

(Estimated Time to Read this Post = 4 Minutes)

In this series of blog posts, I have been talking about how to see what types of Form Errors your website visitors are receiving so you can improve conversion. So far, we have learned how to see how many Form Errors your website is generating, which fields are causing them and how many Form Errors you get per Form and per Visit. As my regular readers know, I like to go beyond the basics, so now we are going to kick it up a notch and get into some real fun stuff. Fasten your seat belts!

Which Fields on Which Forms?
In my first post of this series I shared a simplistic way to learn which form fields caused errors using a List sProp. However, correlating this to specific forms was a bit trickier. Here I will show how to do this, even if you don’t have Discover. The trick here is to set a Form Errors eVar that stores all of the fields which had an error when the Form Errors Success Event (described in the previous posts) is set. Since eVars have a longer character limit, this should be possible for most forms that aren’t too long (which they shouldn’t be anyway!). I like to do this by concatenating the field values into one long string with a separator between each field. Here is an example of the report you want to have:

This report will look a bit like the one I described in the previous post, but as you will see, it is much more powerful since it is in the conversion area and can take advantage of Conversion Subrelations. Besides being able to see which combinations of field errors are troubling users, you can open your Form ID reports, find a specific form and then break it down by this new Form Error eVar to see the specific form fields causing problems by form, as shown here:

Using this report, we can see that for the first form shown above, 66% of the time visitors get a Form Error, they have errors on eight form fields (or left them blank). This data, when coupled with observational data from a tool like ClickTale, can be invaluable in driving increased form conversions!
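As a rough sketch of the tagging involved (the variable numbers here are hypothetical), your validation code would gather the failing fields and set them in one delimited string alongside the Form Errors Success Event:

  // Hypothetical: eVar2 = Form Errors eVar, event4 = Form Errors Success Event
  var errorFields = ["job title", "e-mail address", "phone #"]; // built by your validation logic
  s.eVar2 = errorFields.join("|"); // one long string: "job title|e-mail address|phone #"
  s.events = "event4";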

What % of Required Form Fields Have Errors?

While the above report, which shows Form Field Errors by Form, is powerful, one question it doesn’t answer is: How many of the required fields on my forms are not being filled out by users? The answer to this question can help you figure out which fields should/shouldn’t be required. So to answer this question, what you want to do is to look at each form that loads on your website and calculate how many fields the user received an error for and then divide that number by the total number of required form fields. For example, if you have a form with eight required fields, and the current user received two errors on that form, the calculation would be 2/8 or 25%. You should then pass this 25% value to an eVar when you are setting the Form Errors Success Event. Once you do this for all forms, you will have a report that looks like the one shown here. Using this report we can see that the highest number of Form Errors are cases where users are getting errors on every field (which is most likely people leaving all fields blank). Maybe our users don’t realize that these fields are required and we can do some testing to create a better experience or reduce the number of required fields?
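As a minimal sketch of that calculation (again with a hypothetical eVar number, set at the same time as the Form Errors Success Event):

  // e.g. 2 errors on a form with 8 required fields = "25%"
  var requiredFieldCount = 8;
  var fieldsWithErrors = 2;
  s.eVar3 = Math.round((fieldsWithErrors / requiredFieldCount) * 100) + "%";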

If we want to see which forms are the ones that have the highest 100% Form Field Error Rate, all we need to do is break the above report down by Form ID:

Finally, if you are doing a good job of grouping your website forms using SAINT Classifications, you can see some super-cool reports. In the following report, I have grouped all of my website forms into high-level buckets of Demo and Free Trial. Then I broke this report down by the percentage of required fields that result in Form Errors.

You can see here that most website visitors on Demo forms are getting errors for 100% of the fields (probably leaving them blank!), while for the Free Trial, the largest percentage of required fields with errors is 10%. Interesting data indeed!

Final Thoughts
In this post, we have covered some advanced ways to see which fields produce errors on each form, see this by form and seen how to know which forms have the highest total required field error rates. These reports can provide an enormous amount of insight into what is happening on your forms with respect to errors and once you understand your visitor’s form behavior, you can apply these learnings to all forms on your site. In my next post, I will cover a tangentially related item (related to Forms, but not as much about Form Errors) that I think is super-cool.

Between this post and the last post, hopefully you have some food for thought when it comes to tracking how your website forms are doing so you can improve your conversion rates…

Adobe Analytics

Phew…!

Phew. It’s been a crazy few weeks for me lately. At the moment, we just put up the tree, the kids are all quiet and I’m drinking a glass of red. It’s one of the rare moments of solace I get these days…and it’s gone…the littlest one is squirmy with hiccups.

Okay, I’m back. Made a bottle and made the hand-off to Mommy. I haven’t blogged in a long while and there’s so much to say, but I just haven’t had time. So here’s the johnlovett highlight reel for Fall 2010:

  • We welcomed a new baby into our home. And that makes three. Three boys that is. I always thought the jump from one kid to two was really no problem. But, I can tell you that increasing the number of kids another 33% 50% is a big jump indeed. [The 33% designates the percentage of quantitative reasoning skills I’ve lost in the past month.] Our house is busier than ever with an 18-month old climbing the walls and an eldest brother at five running the show. Everybody is happy and healthy so I’m immensely grateful for the lack of sleep and craziness.
  • I’m writing a book for Wiley on Social Media Metrics. And it’s one of the hardest things I’ve ever done. I’ve got the story in my head and know what I want to write, yet cranking out 40-page chapters every other week is really tough. I’m nearly halfway through my manuscript and I love the way it’s coming together. That said, if you’ve got a social analytics story of smashing success, miserable failure or sheer brilliance, I’d love to talk with you. I could always use more.
  • My business is off-the-charts busy. Looking back on twelve months since joining Demystified, I couldn’t be happier. It’s been a great year and the work I’m doing is motivating me to maintain workaholic proportions. Since Labor Day I’ve spent eight weeks on the road visiting clients, working on changing our industry and speaking at events from coast to coast, with a business trip to Italy as a big November finale. I made it home with four days to spare before the baby was born. Whew.
  • And I’m happier than I’ve ever been. Who knew that chaos could be so rewarding? I always knew this was the case, but I love my job and I truly love the #measure industry. As measurers of digital media, our roles are about to become indispensable. We’re on the precipice of a big data explosion and we’ll have the skills to float to the top. Big data is going to rush like a flood over enterprises and marketers alike and we measurers will be ready to slice and dice our way to sensibility. I like our chances.

More to follow on all these topics as I’m working three concurrent projects, writing two white papers and working through book chapters at present… Oh, and it’s my turn to change diapers, so I’m out.

Talk to y’all soon.
John

Adobe Analytics

Tracking Form Errors (Part 2)

In my last post, I started the process of identifying which form fields were producing the most errors. In this post, I will cover some related topics that will allow you to quantify how often you are getting Form Errors and how effective, in general, your forms are at converting website visitors.

How Many Form Errors Are You Producing?
While the solution I identified in my last post showed which form fields had more errors than others, in the web analytics space, we like hard, concrete numbers! Therefore, I would recommend that you set a Success Event each time website visitors encounter at least one form error (assuming you do validation when the Form Submit button is clicked). By setting a Success Event, you will have a nice chart that shows you the overall trend of Form Errors as shown here:

If you are passing a Name or ID for each form you have on your website, you can also use this Success Event to see which forms are getting the greatest number of errors, like this:

In addition, you can set an Alert for the overall Form Error metric or for a specific Form Name/ID:
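On the implementation side, a minimal sketch of firing the Success Event (and the form Name/ID eVar) when validation fails; the variable numbers here are hypothetical:

  // Hypothetical: event5 = Form Errors, eVar1 = Form Name/ID
  function trackFormError(formId) {
    s.linkTrackVars = "events,eVar1";
    s.linkTrackEvents = "event5";
    s.events = "event5";           // Form Errors
    s.eVar1 = formId;              // e.g. "demo-form"
    s.tl(true, "o", "Form Error"); // custom link call
  }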


How Is Each Form Doing?
While knowing how many Errors a form gets is cool, as is often the case, we in the web analytics field care more about ratios! In the report above, it is alarming to see that the first form had 85 Form Errors, but how do we know if that is good or bad? If we create a Calculated Metric that compares Form Errors to Form Views, we can see how many Form Errors visitors had relative to each time the same Form was viewed. Based upon the data below, we can see a wide range of Form Error percentages depending upon the form:


Some of these percentages are quite high and represent amazing opportunities to do testing to see if they can be improved! In addition, when you create a Calculated Metric, besides just seeing it in an eVar report like the one above, you can also see it as a standalone metric. This means that you can see the overall trend of Form Errors per Form View (or Visit) to see if we are getting better or worse over time. This might make a great KPI for the team focused on Forms and Form Completions:
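The Calculated Metric itself is just the ratio defined in the admin interface; to make the math concrete with hypothetical numbers:

  // Illustration only – hypothetical Form View count
  var formErrors = 85, formViews = 500;
  var formErrorRate = formErrors / formViews; // 0.17, i.e. 17 Form Errors per 100 Form Views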

Final Thoughts
In my last post I covered a simple way to see which fields are causing problems for your visitors. In this post, I showed you how to quantify your Form Errors, see how much of an issue you may have and even see which Forms have the most Errors. In my next post I will show you some advanced ways to see which fields are causing errors and how to break this down by Form. Stay tuned!

Between this post and the last post, hopefully you have some food for thought when it comes to tracking how your website forms are doing so you can improve your conversion rates…

Adobe Analytics, General

Tracking Form Errors (Part 1)

Almost all websites have forms. Whether you are a B2B/Lead generation site, an eCommerce site, a travel site, etc… you most likely have forms. More importantly, you have people who don’t fill out your forms correctly and get some sort of error message. While error messages are a fact of life, in the web analytics/optimization world these are painful since you work so hard to get people to your site, to read your content and then agree to give you personal information. That is a lot of time and money spent only to have someone potentially abandon because they have problems with your forms. This represents your “low hanging fruit” so to speak – people who have already decided they like you and want to give you their information! In this series of posts, I am going to share some techniques for seeing how much of a problem your website has with form errors and in the next few posts I will cover some more advanced things you can do to diagnose these form error issues.


Which Fields Produce the Most Errors?
The first step in diagnosing form error issues is understanding which form fields are causing issues. Unfortunately, since a user might receive more than one error message, you have to pass in multiple values to a SiteCatalyst variable. This can be done using the Products variable, but since that is often already being used for more important purposes, I will suggest that you use a List Traffic Variable (sProp) to capture these values. Unfortunately, List sProps are not well documented and have some specific limitations (see Knowledge Base ID# 2305). All you need to know is that List sProps allow you to pass in delimited values and when you view them in the sProp report, these values will be split out. Let’s look at an example. Here we see a form in which a user has attempted to submit the form without filling out some required fields. What we want to do is capture which fields this user messed up (could mean incorrect value or leaving blank) so we can see which ones are messed up the most often. In this case, we see that the form errors are related to Job Title, E-mail Address, Phone #, Company Name and the MSA checkbox.

So in this case we can use a List sProp to capture the fields giving us errors. Here is how it would look in the JavaScript Debugger:
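Roughly speaking, the tag would set something like the following (assuming prop10 has been configured in the Admin Console as the List sProp with a comma delimiter; the variable number is hypothetical):

  // Each delimited value becomes its own line item in the sProp report
  s.prop10 = "us:job title,us:e-mail address,us:phone #,us:company name,us:msa checkbox";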

Unfortunately, List sProps are still constrained to the 100 character limit, so if you have long forms you are out of luck, though you can select the most important form fields to capture. Once you have captured the fields, you can open the sProp report and you will see something that looks like this:

In this case, we can see that we are getting the greatest number of errors on the Phone Number form field on the US website (I have added the site since forms exist in multiple sites). I could also filter this sProp report for just US or Japan form fields by using a text search of “us:” or “jp:” as needed. This report should help steer you in the right direction when it comes to fixing basic form field issues.

Correlating Form Field Errors to Forms
Once you have seen which form field errors, the next logical question is to see which forms had which errors. Unfortunately, one of the limitations of List sProps is that they cannot be used in Traffic Data Correlations. Therefore, if you want to breakdown form field errors by Form, you will need to use the Discover product as shown here:

If you don’t have access to Discover and seeing this type of breakdown is important to you, you may want to consider using the Products variable instead of a List sProp since the Products variable comes with full Subrelations by default (though this implementation will be significantly more difficult). I will also be covering a different way to approach this in my next post so stay tuned!

Final Thoughts
If you are not currently tracking form field errors, hopefully this will give you some ideas on how you can start the process of seeing where you are tripping up your visitors. Keep in mind that this post is just a start and that the next few posts will go into more advanced stuff you can do and how you can identify your biggest opportunities for improving conversion.

Adobe Analytics, General

A/B Test Bounce Rates

(Estimated Time to Read this Post = 4 Minutes)

In the past, I have written about Bounce Rates, Traffic Source Bounce Rates, Segment Bounce Rates and Site Wide Bounce Rates. In the latter, I even promised I was finished writing about Bounce Rates, but, alas, I have yet another Bounce Rate installment. I was recently in a conversation with a peer and she asked me how her team could see the bounce rates of the various landing page A/B tests they were running via Test&Target. I told her that this was easy to do if you follow my instructions in the Segment Bounce Rate post, but she asked if I could write a brief post with more specifics, so here it is…

Why A/B Bounce Rates?
Before getting into the solution, let’s re-visit why this is of interest. Test&Target (and other tools like GWO) are wonderful when it comes to optimizing landing pages. They allow you to alter content/creative elements and see what works and what doesn’t. I have seen many cases where clients have used tools like Test&Target to change content based upon what brought the user to the website (e.g. Search Keyword) or demographic information (e.g. Location). Regardless of the reason you want to test, if it is a landing page, one of the questions you often get asked is related to Bounce Rate. Understanding how many people saw “Version A” of a test and bounced vs. those who saw “Version B” and bounced usually comes up for discussion. To answer this question using Omniture/Adobe tools you have the following options:

  • Create a unique page name for each test variation and use the regular Pages report and Bounce rate metric. However, this can get very messy, so unless your website is small, I don’t recommend this approach.
  • Use ASI or Discover to build a segment for people coming from “Version A” or “Version B” and then compare the bounce rates. This is a viable option if you have access to these tools and are well versed in Segmentation.
  • Attempt to track Bounce Rates from within Test&Target. This does not come out-of-the-box, but if you have mboxes on all of the pages the landing page links to, I have heard of people setting conversion events on the landing page and the subsequent pages; however, I don’t think this is for novices (if you are interested, I’m sure @brianthawkins could figure out a way to hack this together!)
  • Do what I suggest below!

Implementing A/B Bounce Rates
Luckily, implementing this in SiteCatalyst is relatively simple. All you need to do is the following:

  1. Enable a new Traffic Variable (sProp)
  2. In this new sProp, concatenate the Test&Target ID and the Page Name on each page of your website (see the sketch after this list)
  3. Enable Pathing on the new sProp
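Here is a minimal sketch of step 2, assuming prop20 is the new sProp and that you have some way to read the current test/experience ID from Test&Target (the helper function below is hypothetical):

  // Hypothetical helper that returns the visitor's test/experience ID, e.g. "18964:1:0"
  var testId = getTestAndTargetId();
  s.prop20 = testId + " | " + s.pageName; // e.g. "18964:1:0 | us:home page"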

That’s it! By concatenating the Test&Target ID and the Page Name, you create a unique join between the two and can find the combination of the Test ID you care about and the page name that you expect them to have landed on. Once you find this combination in the report, you can add your Bounce Rate Calculated Metric (Single Access/Entries – which hopefully you already have as a Global Calculated Metric) and you are done. Here is an example of a report:

In this report, you have all of the ID’s associated with the US Home Page, how many Entries each received and the associated Bounce Rate. If you wanted, you could perform a search for the specific Test&Target Test ID you care about and then your report would be limited to just those ID’s. In the example above, we have multiple tests taking place on the US Home Page. However, in the following example we can see a case where there is just one test taking place on the UK Home Page and the associated Bounce Rate of each:

Other Cool Stuff
But wait…there’s more! Since you have enabled Pathing on this new A/B Test sProp, there are some other cool things you can do. First, you can look at a trended view of the report above to see how the Bounce Rate fluctuates during the course of the test. To do this, simply switch to the trended view and choose your time frame:

Another benefit of having Pathing enabled on this sProp is that you can see how visitors from various tests navigated your site using all of the out-of-the-box Pathing reports. Here is an example of a next page flow for one of the tests:

You can run the preceding report for each test variation and compare the path flows to see if one version pushes people more often to the places you want them to go. Another report you could run is a Fall-Out report which can show you how often people from a specific test made it through your desired checkpoints:

In this example, instead of seeing how the general population falls-out from the Home Page to a Product Page and then to a Form Page, we can limit the funnel to only those people who were part of Test ID “18964:1:0.” I like to run this report and the corresponding one for the other test version(s) and add them all to a SiteCatalyst Dashboard where I can see the fall-out rates side by side.

Final Thoughts
As you can see, by doing a little up-front work, you can add an enormous amount of insight into how your A/B tests are performing on your site including Bounce Rates, Next Page Flows, Fall-Out, etc…Enjoy!

Adobe Analytics, Analytics Strategy, General

Tracking Lead Gen Forms by Page Name

Every once in a while, as a web analyst, I get frustrated by stuff and feel like there has to be a better way to do what I am trying to do. Many times you are able to find a better way; oftentimes you are not. In this case, I had a particular challenge and did find a cool way to solve it. You may not have the same problem, but, if for no other reason than to get it off my chest, I am writing this as a way to exhale and bask in my happiness of solving a web analytics problem…

My Recent Problem
So what was the recent problem I was facing that got me all bent out of shape? It had to do with Lead Generation forms, which are a staple of B2B websites like mine. Let me explain. Many websites out there, especially B2B websites, have Lead Generation as their primary objective. In past blog posts, I have discussed how you can track Form Views, Form Completes and Form Completion Rates. However, over time, your website may end up with lots of forms (we have hundreds at Salesforce.com!). In a perfect world, each website form would have a unique identifier so you can see completion rates independently. That isn’t asking too much, is it? However, as I have learned, we rarely live in a perfect world!

Through some work I did in SiteCatalyst, I found that our [supposedly unique] form identifier codes were being copied to multiple pages on multiple websites. While this causes no problems from a functionality standpoint – visitors can still complete forms – it meant that the same Form ID used in the US was also being used in the UK, India, China, etc… Therefore, when I ran our Form reports and looked at Form Views, Form Completes and Form Completion Rate by Form ID, I had no idea that I was looking at data for multiple countries. For example, if you look at this report, nothing seems out of the ordinary, right?

However, look what happened when I broke this report (the last row of the above report) down by a Page Name eVar:

At first, I thought I was going crazy! How could this unique Form ID be passed into SiteCatalyst on eleven different form pages on nine country sites? This caused me to dig deeper, so I ran a DataWarehouse report of Form ID’s by Page Name and found that an astounding number of Form Pages on our global websites shared ID’s. Suddenly, I panicked and realized that whenever I had been reporting on how Forms were performing, I was really reporting on how they were performing across several pages on multiple websites. In the example above, I realized that the 34.669% Form Completion Rate I was reporting for the US version of the form in question actually included data from forms with the same ID residing on websites in Germany, China, Mexico, etc… While the majority was coming from the form I was expecting, 22% was coming from other pages! Not good!

The Solution
So there I was, stuck in web analytics hell, reporting something different than I thought I was. What do you do? The logical solution would be to do an audit and make sure each Form page on the website had a truly unique ID. However, that is easier said than done when your web development team is already swamped. Also, even if you somehow manage to fix all of the ID’s, what is preventing these ID’s from getting duplicated again? We looked at all types of process/technology solutions and then realized that there was an easy way to fix this with a little SiteCatalyst trickery.

So what did we do? We simply replaced the Form ID eVar value with a new value that concatenated the Page Name and the Form ID on every Form Page and Form Confirmation Page. By concatenating the Page Name value, even if the same Form ID was used on multiple pages, the concatenated value would still be unique. For example, the old Form ID report looked like the one above:

But the new version looked like this:

With this new & improved report, when I was reporting for a particular form on a particular site/page, I could search by the form pagename and be sure I was only looking at results from that page. Also, a cool side benefit of this approach is that you could add a Form ID to the search function to quickly find all pages that had the same Form ID in case you ever did want to clean up your Form ID’s:

Implementation Gotcha!
However, there is one tricky part of this solution. While it is certainly easy to concatenate the s.pagename value with the Form ID on the Form page, what about the Form Confirmation page? The Form Confirmation page is where you should be setting your Form Completion Success Event, and that page is going to have a different pagename. If your Form ID report doesn’t have the same Page Name + Form ID value for both the Form View and Form Complete Success Events, you cannot use a Form Completion Rate Calculated Metric. For this reason, you need to use the Previous Value Plug-in to pass the previous pagename on the Form Confirmation page. Doing this allows you to use the name of the “Form View” page on both the Form View and Form Complete pages of your site, so you have the same page name value merged with the Form ID.
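Putting it together, here is a rough sketch; the eVar/event numbers are hypothetical, and it assumes the standard getPreviousValue plug-in is installed and called on every page so that its cookie stays current:

  // On the Form page: concatenate pagename + Form ID (formId assumed available on both pages)
  s.eVar4 = s.pageName + " | " + formId; // e.g. "us:demo form page | 12345"
  s.events = "event1";                   // Form Views

  // On the Form Confirmation page: substitute the previous page's name so the
  // concatenated value matches the one recorded at Form View time
  s.eVar4 = s.getPreviousValue(s.pageName, "gpv_pn", "") + " | " + formId;
  s.events = "event2";                   // Form Completes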

A Few More Things
Finally, while the Form ID report above serves this particular function, it is not very glamorous and it might not be the most user-friendly report for your users. If you want to provide a more friendly experience you can do the following with SAINT Classifications:

  1. Classify the Form ID value by its Page Name so your users can see Form Views, Form Completions and the Form Completion Rate by Page Name
  2. Classify the Form ID value by the Form ID if for some reason you want to go back to seeing the report you had previously

Final Thoughts
Well there you have it. A very specific solution to a specific problem I encountered. If you have Lead Generation Forms on your website, maybe it will help you out one day. If not, thanks for letting me get this out of my system!