A/B/n split testing helps website owners build a culture of data-informed decision-making by grounding design choices in audience behaviour rather than opinion. Conflicting opinions about website design can be settled with a split-testing methodology, benefiting both the site owner and its visitors by producing the best-performing design. The following are the basic steps for conducting an A/B/n split test:
- Determine your reason for testing: Does your site need more visual appeal? Are users leaving your landing page shortly after arriving? Could you be reaching more users? Are your ads working? Look at what customers say about your brand and your products: explore customer reviews, and speak with your product designers, sales team, and support staff. Use your analytics data to help determine what to test. If you’re planning to test multiple pages, prioritize the ones with the greatest potential to affect leads and sales.
- Create a hypothesis: Once you’ve decided what to test, develop a hypothesis, an idea about why you are seeing certain results and how you could improve them. A good hypothesis states a specific goal and how you will measure success. Testing will help you accept or reject this hypothesis.
- Calculate your sample size: Use Optimizely’s calculator to work out the sample size you’ll need to get statistically significant results. You’ll input your baseline conversion rate (the conversion rate of your control page) and your minimum detectable effect (the smallest change in conversion rate you want to detect). The lower your minimum detectable effect, the more visits you’ll need before you can conclude your test. You can also raise or lower the statistical significance level, but anything under 95% is generally considered too unreliable to act on. The higher your significance level, the more visits you’ll need before you can call your test.
- Create your variations: Make the page or element adjustments your hypothesis involves, and leave your original post-click landing page unchanged so that your testing baseline stays accurate.
- Eliminate confounding variables: Outside factors can affect your test and produce a misleading outcome. Identify and eliminate as many confounding variables as you can before undertaking your test. Everything other than the element you’re evaluating should be equal, and remain equal, throughout the entirety of your A/B/n tests. Then you can be confident that any difference in performance is a result of the element you’re testing.
- Run the test long enough to get reliable results. Depending on your site’s traffic, you’ll need to run your experiment for at least a couple of weeks.
- Analyze and optimize: Use your data to determine which version performs best, then decide whether the results are significant enough to warrant a change.
- Take action based on your results: If no variation is statistically better, you’ve determined that this element didn’t impact results. If one variation is better, disable the losing variation(s).
- Test again: You can choose to test again with other elements. Use the data you’ve gathered to plan a new iteration to test for further improvement, and apply the lessons learned from the last round to make the next test even more effective.
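For readers curious about the math behind the sample-size step above, here is a minimal sketch of the standard two-proportion power calculation that calculators like Optimizely’s are based on. The function name, and the defaults of 95% significance and 80% statistical power, are my own assumptions for illustration, not the exact internals of any particular tool.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect an absolute lift of `mde`
    over the `baseline` conversion rate (two-sided two-proportion z-test)."""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% significance
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / mde ** 2)

# Hypothetical example: a 5% baseline conversion rate and a 1-point
# minimum detectable effect requires roughly 8,000+ visitors per variant.
print(sample_size_per_variant(0.05, 0.01))
```

Notice how halving the minimum detectable effect roughly quadruples the required traffic, which is why the article warns that a lower effect means a longer test.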
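Likewise, the “analyze and optimize” step comes down to a significance test between the control and each variant. Below is a minimal sketch using a two-sided two-proportion z-test; the visitor and conversion counts are hypothetical, and most testing platforms run this kind of check for you.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: control converts 250 of 5,000 visitors (5.0%),
# the variant converts 310 of 5,000 (6.2%).
p = two_proportion_p_value(250, 5000, 310, 5000)
print(f"p-value: {p:.4f}")  # below 0.05, so significant at the 95% level
```

If the p-value stays above your threshold, that is the “no variation is statistically better” case described above: the element didn’t move the needle, and you move on to testing something else.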
A/B/n split testing is a powerful tool for digital marketers. Testing advertising, creative, and audiences gives marketers the flexibility and insights they need to continually improve their efforts.
Want to optimize your website features? Interested in A/B/n split testing for your site? Need help with your marketing campaigns? Call us at (403) 456-0072 or email CARE@CAYK.CA. Allow us to be your very own marketing department! Connect with our tightly-knit team of knowledgeable digital-first consultants, each eager to help your business grow. Contact us today.