This year’s reports show continued growth in the practice of A/B testing. Over the past five years, the number of companies routinely conducting A/B tests on digital content has risen by approximately 80%.
Lead generation and eCommerce sites maintained a steady lead in testing; because they could measure campaign ROI more accurately, they got a head start a number of years ago. Engagement-oriented sites have increased their PPC activity in recent months, and this extra cost has made a stronger case for testing. In 2014, engagement testing is the strongest growth area. So far, B2B remains in the testing minority: B2B sites more often than not experience lower traffic rates, making quantitative tests less effective with current models, though I expect we will see improvements in the near future.
For eCommerce sites, the most effective pages to test are those nearest to the targeted conversion. For lead generation, landing page and form field tests turned out to have the best results. For engagement sites, social media icon tests were reported as having some of the best conversion rates.
Recently, testers have been asking how far down the conversion funnel it is worth testing. More clicks do not always equal more conversions; some tests with fewer initial clicks resulted in higher conversion rates. Tracking more sophisticated KPIs can tell you what happened, but not why. Usability studies and visitor surveys can help you with the ‘why’.
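To see why more clicks do not always mean more conversions, here is a toy comparison. The figures are hypothetical, invented purely for illustration, not taken from any report:

```python
# Hypothetical funnel figures: variant A draws more clicks on the
# call-to-action, but variant B converts more of those clicks.
variants = {
    "A": {"clicks": 500, "conversions": 20},
    "B": {"clicks": 400, "conversions": 24},
}

for name, v in variants.items():
    rate = v["conversions"] / v["clicks"] * 100
    print(f"{name}: {v['clicks']} clicks -> "
          f"{v['conversions']} conversions ({rate:.1f}%)")

# A wins on clicks (500 vs 400), but B wins where it matters:
# 24 conversions at 6.0% vs 20 conversions at 4.0%.
```

Judged on click-through alone, A looks like the winner; judged on the conversion that actually matters, B is.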
The growth of testing and CRO is similar to the development of search marketing in the early 2000s. Small companies are testing more with the arrival of inexpensive tools that are becoming easier to use. Companies with low traffic often make the common mistake of running a test for months to get statistically conclusive results; in most cases, six to eight weeks is the longest a test should run. With fewer visitors, reduce the number of versions you’re testing, and test them on fewer pages. Low-traffic sites should also focus on qualitative data and best design practices.
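The trade-off above, that fewer visitors means fewer variants and longer waits, can be made concrete with a standard sample-size estimate for a two-proportion test. This is a minimal sketch under assumed, illustrative numbers (a hypothetical site with 2,000 visitors per week, a 3% baseline conversion rate, hoping to detect a 20% relative lift); the function names and all figures are my own, not from any report:

```python
import math

def visitors_per_variant(baseline_rate, relative_lift,
                         z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per variant for a two-sided
    two-proportion test (defaults: 95% confidence, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1)
                                       + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

def weeks_to_run(weekly_visitors, n_variants, baseline_rate, lift):
    """Weeks until every variant has collected enough traffic."""
    needed = visitors_per_variant(baseline_rate, lift) * n_variants
    return math.ceil(needed / weekly_visitors)

# Hypothetical low-traffic site: 2,000 visitors/week, 3% baseline,
# 20% relative lift to detect.
print(weeks_to_run(2000, 2, 0.03, 0.20))  # A/B: two variants
print(weeks_to_run(2000, 4, 0.03, 0.20))  # A/B/C/D: four variants
```

With these assumed numbers, an A/B test needs roughly 14 weeks and an A/B/C/D test roughly 28, which is exactly why a low-traffic site should cut the number of versions rather than let a test drag on for months.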
For any test, it is important to make sure the timing is in line with your business cycles, so as to account for temporary changes caused by anything from time of day and day of week to marketing campaigns, seasonal differences, and news events.
Finally, when is a test conclusive? The industry standard best practice is to require 95% statistical confidence or better before declaring a test winner, while keeping within best-practice time frames.
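The article does not specify how that 95% figure is computed, but one common approach is a pooled two-proportion z-test on the control and variant conversion counts. The sketch below uses made-up numbers and my own function name; treat it as one standard method, not the method any particular tool uses:

```python
import math

def significance(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test (two-sided).
    conv = conversions, n = visitors for each variant.
    Returns (z statistic, confidence as a percentage)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal distribution via erfc.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, (1 - p_value) * 100

# Hypothetical results: control 120/4000, variant 156/4000.
z, confidence = significance(120, 4000, 156, 4000)
print(round(confidence, 1))  # declare a winner only if >= 95
```

Note that hitting 95% once is not enough on its own: as above, the test should also have run its planned time frame, since confidence levels fluctuate early on.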