
Simple and oh so very sweet

Informatica is a very large B2B company and one of the most successful players in the data management market. Informatica also has an impressive testing and optimization program, and they make heavy use of data and visitor behavior to provide the ideal experience for their digital consumers.

Like most spaces, the B2B space offers countless opportunities for testing and learning. The more data you have, the more opportunities exist for quantifying personalization efforts through targeted tests and for machine learning through solutions like Adobe’s Automated Personalization tools. In fact, many B2B optimization programs focus on the knowns and the unknowns, with integrations between the testing solution(s) and demand generation platforms, as I wrote about a few years ago.

In a world with relatively complex testing options available with first-party data, third-party data such as Demandbase (a great data source for B2B), and limitless behavioral data, it is important not to lose sight of simpler tests. Just because rich data is available and complex testing capabilities exist doesn’t mean more basic tests and user experience tests should be deprioritized. Ideally, organizations have a healthy balance of targeted advanced tests along with an array of more general tests, as it gives them a wider net to catch opportunities to learn more about what is important to their digital consumers. Informatica knows this, and here is a very successful user experience test that they recently ran.

Informatica was recently named a leader in Gartner’s Magic Quadrant report, and the testing team wanted to optimize how to get this report to the digital consumers of the product pages on their website. Many different ideas were discussed, and the user experience team decided to use a sticky banner that would appear at the bottom of the page. Two key concepts were introduced in this test: the height of the banner and the inclusion of an image. Both sticky banners also allow the user to close (“X” out of) the banner.

The Test Design

Here is what Experience A or the Control test variant looked like (small sticky footer and no image) on one of their product pages:

and the Experience B test variant on the same product page (increased height and inclusion of image):


And up close:

vs.

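For those who like to see things in code, here is a minimal sketch of how two sticky-footer variants like these could be put together. Everything below is hypothetical (the class names, asset path, landing page URL, and the showTallVariant flag are mine, not Informatica’s), and the real experience was presumably delivered through their testing tool rather than hand-rolled like this:

```typescript
// Hypothetical sketch of the two sticky-banner experiences.
// Class names, styles, asset paths, and URLs are illustrative only.
function renderStickyBanner(showTallVariant: boolean): void {
  const banner = document.createElement("div");
  banner.style.cssText = [
    "position: fixed",
    "bottom: 0",
    "left: 0",
    "right: 0",
    `height: ${showTallVariant ? "120px" : "60px"}`, // Experience B is taller
    "display: flex",
    "align-items: center",
    "justify-content: center",
    "gap: 16px",
    "background: #fff",
    "box-shadow: 0 -2px 8px rgba(0, 0, 0, 0.2)",
  ].join("; ");

  // Experience B also includes a clickable image pointing to the same landing page.
  if (showTallVariant) {
    const image = document.createElement("img");
    image.src = "/images/gartner-mq-thumbnail.png"; // hypothetical asset
    image.alt = "Gartner Magic Quadrant report";
    image.addEventListener("click", () => {
      window.location.href = "/gartner-report"; // hypothetical landing page
    });
    banner.appendChild(image);
  }

  // "Get the Reports" call to action, present in both experiences.
  const cta = document.createElement("a");
  cta.className = "cta";
  cta.href = "/gartner-report"; // hypothetical landing page
  cta.textContent = "Get the Reports";
  banner.appendChild(cta);

  // The "X" close control, present in both experiences.
  const close = document.createElement("button");
  close.className = "close";
  close.textContent = "✕";
  close.addEventListener("click", () => banner.remove());
  banner.appendChild(close);

  document.body.appendChild(banner);
}
```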

The primary metric for this test was Form Completes, which translates to visitors clicking on the banner and then filling out the subsequent form on the landing page. We also set up the test to report on these additional metrics (a rough sketch of how these clicks might be instrumented follows the list):

  • Clicks on the “Get the Reports” CTA in banner
  • Clicking on the Image (which led to the same landing page)
  • Clicking on the “X” which made the banner go away
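As referenced above, here is a rough, hypothetical sketch of how those three clicks might be captured as simple event listeners. The trackMetric function and the /metrics endpoint are stand-ins; a real setup would call the testing or analytics tool’s own event API:

```typescript
// Hypothetical wiring for the three secondary click metrics.
type BannerMetric = "cta_click" | "image_click" | "close_click";

function trackMetric(metric: BannerMetric): void {
  // Stand-in for the analytics/testing tool's event call.
  navigator.sendBeacon("/metrics", JSON.stringify({ metric, ts: Date.now() }));
}

function instrumentBanner(banner: HTMLElement): void {
  // "Get the Reports" CTA clicks
  banner.querySelector("a.cta")?.addEventListener("click", () => trackMetric("cta_click"));
  // Image clicks (only present in Experience B)
  banner.querySelector("img")?.addEventListener("click", () => trackMetric("image_click"));
  // "X" / close clicks
  banner.querySelector("button.close")?.addEventListener("click", () => trackMetric("close_click"));
}
```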

The Results

And here is what was learned.  For the “Get the Reports” call to action in both footers:

While our primary test metric is “Form Completes”, this was a great finding and learning: clicks on the same call to action increased 32.42%, driven by either the increased height, the image, or a combination of the two.
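For clarity, “increase” throughout this post means the relative lift of Experience B over Experience A. Here is the arithmetic in miniature, using made-up rates purely to illustrate the formula (they are not the actual test numbers):

```typescript
// Relative lift of variant B over control A, as a percentage.
function relativeLift(rateA: number, rateB: number): number {
  return ((rateB - rateA) / rateA) * 100;
}

// Hypothetical example rates, not the real results:
// if 3.7% of Experience A visitors clicked the CTA and 4.9% of
// Experience B visitors did, the relative lift would be about 32.4%.
console.log(relativeLift(0.037, 0.049).toFixed(2)); // "32.43"
```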

For the “Image Click”:

This was not surprising, since visitors could only click on the image in Experience B; the image didn’t exist in Experience A. Some might wonder why this metric was even included in the test setup, but by including it we were able to learn something pretty interesting. The primary metric is “Form Completes”, and in order to get a form complete we need to get visitors to that landing page. Visitors get to that landing page either by clicking on the “Get the Reports” call to action or by clicking on the image. We wanted to see what percentage of “clickers” in Experience B came from the image vs. the “Get the Reports” call to action. It turns out 52.6% of clicks in Experience B came from the image vs. 47.5% from the call to action. Keep in mind, though, that while the image did marginally better in clicks, the same call to action in Experience B still saw a 32.42% increase vs. Experience A. The image clickers represented an additional net gain of potential form completers!
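That split is simply each source’s clicks divided by all Experience B landing-page clicks. A quick illustration with made-up counts (again, not the actual data):

```typescript
// Share of Experience B landing-page clicks coming from each source.
function clickShares(ctaClicks: number, imageClicks: number) {
  const total = ctaClicks + imageClicks;
  return {
    ctaShare: (ctaClicks / total) * 100,
    imageShare: (imageClicks / total) * 100,
  };
}

// Hypothetical counts, for illustration only:
const { ctaShare, imageShare } = clickShares(528, 584);
console.log(imageShare.toFixed(1), ctaShare.toFixed(1)); // "52.5" "47.5"
```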

For the “X” or close clickers:

This was another interesting finding. There was a significant increase, over 127%, in visitors clicking on the X in Experience B. This metric was included to measure engagement rates with the “X” and to compare those rates with the other metrics. We found that engagement with the “X” was significantly higher, almost tenfold, compared to the calls to action or the image. The increase in “X” clicks for Experience B compared to Experience A was surmised to be due to the increased height of Experience B.

And now, for the primary “Form Complete” metric:

A huge win! They got close to a 94% lift in form completes with the taller sticky footer and image. The Experience B “Get the Reports” call to action led to a 32.42% increase in visitors arriving on the form page, and the image in this same Experience B brought a significant number of additional visitors to that form page. Couple these together and we have a massive increase in form completions!

For a test like this, it often also helps to visualize the distribution of clicks across the test content.  In the image below, X represents the number of clicks on the Experience A “Get the Reports” call to action.  Using “X” as the multiplier, you can see the distribution of clicks across the test experiences.
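As a rough illustration of that “multiplier” view, here is a hypothetical sketch of the normalization, expressing every click count as a multiple of the Experience A call-to-action clicks. The counts are invented purely to show the mechanics, not the test’s actual numbers:

```typescript
// Express each click count as a multiple of a baseline count
// (here, clicks on the Experience A "Get the Reports" CTA = 1X).
function asMultiplesOfBaseline(
  counts: Record<string, number>,
  baselineKey: string
): Record<string, number> {
  const baseline = counts[baselineKey];
  return Object.fromEntries(
    Object.entries(counts).map(
      ([name, n]) => [name, +(n / baseline).toFixed(2)] as [string, number]
    )
  );
}

// Hypothetical counts, for illustration only:
console.log(
  asMultiplesOfBaseline(
    { expA_cta: 400, expB_cta: 528, expB_image: 584, expA_close: 3600, expB_close: 8200 },
    "expA_cta"
  )
);
// => { expA_cta: 1, expB_cta: 1.32, expB_image: 1.46, expA_close: 9, expB_close: 20.5 }
```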

Was it the image, the height, or the combination of the two that led to this change in behavior? Subsequent testing will shed more light, but at the end of the day, this relatively simple test led to significant increases in a substantial organizational key performance indicator and provided the user experience teams and designers with fascinating learnings.
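If that follow-up happens, the natural design would be a 2×2 test that varies the height and the image independently. Here is a minimal, hypothetical sketch of how visitors could be deterministically bucketed into the four combinations (the cell names and hashing approach are assumptions, not Informatica’s actual plan):

```typescript
// Hypothetical 2x2 follow-up design to separate the effect of banner height
// from the effect of the image. Cell names and assignment logic are illustrative.
type Cell = "short_noImage" | "short_image" | "tall_noImage" | "tall_image";

function assignCell(visitorId: string): Cell {
  // Simple deterministic hash so a visitor always sees the same experience.
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  const cells: Cell[] = ["short_noImage", "short_image", "tall_noImage", "tall_image"];
  return cells[hash % cells.length];
}

// Example: the same visitor ID always maps to the same cell.
console.log(assignCell("visitor-123"));
```

Comparing form completes across the four cells would show whether the height, the image, or the interaction between the two is doing the heavy lifting.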

