How to Prove Your Testing Results
(Estimated Time to Read this Post = 4 Minutes)
If you are in the Web Analytics space, then besides tracking what people do on your website, hopefully you are also actively running tests and targeting content to try to improve your conversion. If you are an Omniture customer, you might be using their Test&Target product, or you may be using Google’s Website Optimizer. If you are just getting into testing, you may simply be using an eVar to see how your tests are performing. Regardless of which tool you use, there is a common question that arises in the testing/targeting area. Here is the scenario:
- You come up with a great hypothesis you want to test
- You run a test and see awesome results (say a 10% uplift in conversion)
- You broadcast it to your company only to hear the inevitable “well that was just a test…how do you know we’ll see the same result in real life?”
As a web analyst, this can be infuriating and can be compounded by the fact that you often cannot simply run with the winning recipe and show the results in your testing tool because:
- You may be running multiple tests and things can get confusing
- You may want to apply what you have learned from the test to many places on your website which may or may not have the required “MBoxes”
In reality, it may take time for you to take your awesome test and let it out “into the wild,” and when you do, how can you prove that the uplift you saw in your test will actually occur over the next year on the website? The following will show you exactly how to do this and hopefully put the naysayers in their place!
How To Prove Your Test Results
So now that I have framed the situation, let’s learn how to do it. Our objective is to prove the long-term results of a test we did using our chosen testing/targeting tool. In this example, let’s imagine that your website has twenty forms on it and you have just run a test showing that if you reduce the number of fields on a form, you can see a 15% uplift in Form Completion Rates. This test was conducted using Test&Target for three weeks with a high level of statistical confidence (95%+). Now you want to take five of the twenty forms, remove the same fields you did in the test, and see what happens over the next three months. One way to do this would be to add lots of “MBoxes” and use Test&Target to deploy the winner in the hope of seeing the same lift, but in this example, let’s assume that your conversion team has closed the books on this test, moved on to other tests, and has told you that you now need to work with the web team to reduce the fields on your five forms.
So what do you do? How will you know if these five forms will really see a 15% uplift over the next three months? All you need to do is the following:
- Create a new Testing eVar (not the T&T eVar)
- On each of the five forms you modify on your website, pass the name of the test that the change was based on into this new eVar (a quick tagging sketch follows this list). This may be the name of the winning T&T recipe or you can use any descriptive name you’d like. In this case, we’ll pass in the value “Remove Form Fields Test”
- Set the eVar’s allocation to “Most Recent” and its expiration to “Never” in the Admin Console
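To make the second step a bit more concrete, here is a minimal tagging sketch. The specific slots (eVar25, event1, event2) and the split between Form View and Form Complete events are assumptions for illustration only – use whatever variables and Success Events your implementation already has:

```javascript
// Illustrative only – eVar25, event1 (Form View) and event2 (Form Complete)
// are placeholder slots; substitute your own. Assumes the standard
// SiteCatalyst "s" object is already loaded on the page.

// On the view of one of the five modified forms:
s.events = "event1";                    // Form View success event
s.eVar25 = "Remove Form Fields Test";   // name of the test this change was based on

// On the confirmation/thank-you page for that form:
s.events = "event2";                    // Form Complete success event
// No need to re-set eVar25 here – with "Most Recent" allocation and a
// "Never" expiration, the value set on the form view still gets credit.
```

Forms that were not changed as part of a test simply never set the eVar, which is what pushes them into the “None” row described below.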
That’s it. Now when you open this new Testing eVar report, you can see how these five modified forms are doing with respect to Form Completion Rate (assuming you have the right Success Events set – in this case, Form Views and Form Completes). In this report, any form that was not modified based upon a testing initiative falls into the “None” row, so you can easily compare the forms that are based upon testing with those that are not.
In this example, we can see that the “Remove Form Fields Test” shows about a 17% uplift in Form Completion Rate after being fully deployed, so we are doing even better than the 15% we expected! What’s better is that if you repeat this process every time you make a testing-based change to your website, you can see how each change is doing.
And if you look at them all together, you can show your boss at the end of the year how much uplift you have been responsible for overall! In this example, if we look at all of the tests we have implemented, we are seeing a cumulative uplift of 16.2% over forms that are not based upon any testing. This is a great way to show the value of your conversion efforts and to justify more headcount, get promoted, get more budget, etc. In fact, you can show your boss that if all of the “Form Views” on your website were, in this case, seeing optimized forms, you could produce roughly 5,800 Form Completes instead of the 5,000 you are currently getting at the lower Form Completion Rate.
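If you want to sanity-check that projection, the arithmetic is simply the current completes scaled by the cumulative uplift. Here is a quick sketch using the figures quoted above (your own report totals will obviously differ):

```javascript
// Back-of-the-envelope projection using the numbers cited in this example.
var currentCompletes = 5000;    // Form Completes at today's (unoptimized) completion rate
var cumulativeUplift = 0.162;   // 16.2% cumulative uplift from test-driven changes

var projectedCompletes = Math.round(currentCompletes * (1 + cumulativeUplift));
console.log(projectedCompletes); // ~5810, i.e. roughly the 5,800 mentioned above
```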
The only downside of this solution is that it might actually show you that something you expected to produce an uplift in reality didn’t. For example, in the preceding report, the “Form Headline Bold” change doesn’t seem to be pulling its weight (it is losing against the control) and may need to be revisited. However, even though this is disappointing, it is great information to have, since it might prompt you to do some further testing in Test&Target and abandon the losers.
Finally, if you want to get a little more advanced, you could also apply SAINT Classifications to this new Testing eVar and group your tests into types (e.g., “Field-Related Tests” or “Color-Related Tests”) so you can calculate the uplift of each type and see which ones you may want to focus on going forward.
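As a rough illustration, a classification upload for this eVar might look something like the following – the column name and groupings here are made up for the sake of the example, and your own scheme will differ:

```
Key                        Test Type
Remove Form Fields Test    Field-Related Tests
Form Headline Bold         Copy-Related Tests
```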
Final Thoughts
So there you have it. As a rule of thumb, I would build a step into your conversion testing process that passes the name of the test a change was based upon into a Testing eVar, so that you can see how your tests ultimately perform. While this adds one small step to your overall process, I think that in the long run you will be happy to have this variable to show how your team is doing…
Adam,
Interesting concept.
I was hoping you could clarify the following point.
In your scenario you made a change to five of your 20 forms (reducing the fields on the form). You then added an eVar to all forms and populated the value “Remove Form Fields Test” on the five modified forms.
You then measured the uplift in Form Completion % over time.
However, what if each of the forms has a different Form Completion % to begin with?
If forms 1 to 5 have an average FC% of 15% and forms 6 to 20 have an average FC% of 3% then how could we compare the two groups of forms?
Or have I misunderstood the methodology?
Thanks,
Michael
Michael,
You are correct – differing baseline completion rates can make a blended comparison between the two groups misleading. However, you can always look at a specific form before and after it was changed due to a test by using Subrelations. E-mail me if you want more details.
Adam
Adam,
Great post as usual. I would add allocating an sProp, leveraging the getAndPersistValue plugin for the page name, and then turning on pathing for that variable so that you can see the various paths taken for each test. There might be additional insight that can be gleaned from seeing these paths (maybe one test is inadvertently making users take a “dead-end” path vs. the one you want them to go on).
Dorian
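(For readers who have not used the plugin Dorian mentions, here is one possible sketch of the idea. The sProp number, cookie name, and the choice to combine the persisted test name with the page name are assumptions on my part, not something Dorian or the post specifies.)

```javascript
// One possible reading of Dorian's suggestion – all slot numbers and names are assumptions.
// getAndPersistValue(value, cookieName, expireDays) is the standard Omniture plugin;
// expireDays = 0 keeps the value for the current visit.
var persistedTest = s.getAndPersistValue(s.eVar25, "s_testName", 0);

// Write the page name into a pathing-enabled sProp, prefixed with the persisted
// test name, so Pathing reports show the pages visited while a test value was in play.
if (persistedTest) {
  s.prop40 = persistedTest + " : " + s.pageName;
}
```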
Hi Adam –
Great site, useful information, always reliable. Thank you for sharing your knowledge.
I have a question to build off of this. What if we ran a test on 5 forms, and the business owner decided to push it to all 20 forms on our site? How could we prove conversion lift (or drop) before and after the test implementation?
Elizabeth
Elizabeth,
In this case, what you would do is find a way to set the eVar I described on any forms that have changed (due to a test). This doesn’t mean you need to change the name of the form; rather, just populate a value (I suggest the name or type of the test) on those 20 forms. This gives you the flexibility to see which forms were part of tests and which were not. As I described, it also lets you see the conversion rates of the forms you have affected so you can show your value! Let me know if you have additional questions…