Hypothesis Testing to Find Your Way

Statistical methods can help us with more than just examining trends in a given population. Used correctly, they can also tell us which of two available options is better. This is exceptionally important in A/B testing land.

For those of you not familiar with A/B testing, here is a brief primer. A/B testing is the process of modifying site elements to increase conversions. What constitutes a conversion is defined by the site owner and can be anything from purchasing a product to visiting a deeper page within the site. In a properly set up A/B test, either page A (with no change) or page B (with the change being tested) is shown randomly to each site visitor, and the conversion rate is measured for each variation. At the end of the test, the data is aggregated to determine which page, A or B, converted better, and the site is changed to reflect the winning modification.
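
To make the “shown randomly” step concrete, here is a minimal sketch of one common way to do the split. The function name, the hashing scheme, and the 50/50 allocation are illustrative assumptions, not details from any particular testing tool:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into page A or page B.

    Hashing the visitor id (instead of flipping a coin on every
    request) keeps the assignment stable, so a returning visitor
    sees the same page for the life of the test.
    """
    bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

print(assign_variant("visitor-42"))  # same answer every time for this id
```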

Below is a great diagram of the A/B testing process from the Unbounce blog.

[Diagram: the A/B testing workflow, from the Unbounce blog]

Once the data is gathered, the real fun begins: assessing the statistical significance of A vs. B.

You might be asking, can’t we just look at the conversion rate and move on? Sometimes. When there is an overwhelming winner, the decision is easy, but in most tests the data is less definitive. Let’s take a look at an A/B test we recently conducted, which generated the following results.

Test Case | Views  | Conversion Rate
A         | 31,500 | 1.01%
B         | 33,500 | 1.11%

Strictly based on the conversion numbers, it looks like Page B converted about 10% better than Page A. But when we ran the numbers through a standard hypothesis test (also known as a Z test; more on that later), we found that the difference was not statistically significant: as far as the data can tell, the two pages performed the same.
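
For a preview of that calculation, here is a rough sketch of a two-proportion Z test run on the table above. The raw conversion counts are back-calculated from the reported rates (the post only gives percentages), so treat them as approximate:

```python
import math

# Back-calculate approximate conversion counts from the reported rates;
# these counts are an assumption, since only rates were reported.
views_a, views_b = 31500, 33500
conv_a = round(views_a * 0.0101)   # ~318 conversions on page A
conv_b = round(views_b * 0.0111)   # ~372 conversions on page B

p_a = conv_a / views_a
p_b = conv_b / views_b

# Pooled rate under the null hypothesis that A and B convert equally
p_pool = (conv_a + conv_b) / (views_a + views_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))

# Z statistic and two-sided p-value via the standard normal CDF
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.2f}")  # z ≈ 1.25, p ≈ 0.21
```

Running this gives z ≈ 1.25 and a two-sided p-value of roughly 0.21, nowhere near the conventional 0.05 significance threshold, which is why we called the test a draw.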

So the moral of the story is: run a hypothesis test on all of your A/B data; otherwise you may be setting yourself up for a surprise in your conversion optimization efforts.

In my next post I will describe how we came up with the results.