In an A/B test, the software holds a control group constant so you can see how those visitors respond to the change being tested. Multivariate testing changes more than one variable at a time; A/B testing compares exactly two versions. This lets you determine which variables have a significant impact on the desired outcome and which don't. The first step, then, is to decide what the goal of your A/B test is. Simply copying other sites won't give you the results you're looking for.
"Don't make big changes all at once." This is an important rule. You need to start with one variable change at a time. If you make more than one chance, you won't know which of those changes had an impact on your results. Once you've made multiple changes and discovered an improvement, it's very hard to separate which of those changes led to that result.
How does A/B testing work?
A/B testing is used everywhere, from lifting conversion rates on eCommerce sites to improving user experience in web apps. But far too often, the results are skewed by an issue well known in research circles: selection bias.
The experiment was simple: we wanted to increase signups for a free account, so we tried moving the signup CTA (call to action) bar down from its prominent top-left position. We ran the test for two weeks and more than doubled our total signups.
The A/B test was a huge success: moving the signup bar increased conversions by 106%.
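For reference, a lift figure like that is just the relative change in conversions; here is a worked example with made-up counts:

```python
# Hypothetical signup counts over comparable two-week windows.
signups_before = 340
signups_after = 701

lift = (signups_after - signups_before) / signups_before
print(f"Lift: {lift:.0%}")  # -> Lift: 106%
```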
Why should you A/B test?
The experiment is conducted by creating two nearly identical pages, an A variation and a B variation, that differ by one change. The experiment runs for a predefined period, after which the team measures the results to determine whether conversion rates changed. If so, they stick with the winning variant, i.e., variation B; if not, they try another change and run another round.
This is just an example, and the actual process is more involved. The team might also want to find out how users engage with the page using session recordings, heatmaps, or even eye tracking, all of which are possible nowadays.
Running an A/B test this way is also called an online controlled experiment: a randomized, controlled experiment performed on the web (or in similar settings) to which statistical inference can be applied.
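As a sketch of that statistical-inference step, the snippet below runs a two-proportion z-test on hypothetical visitor and conversion counts, using only Python's standard library:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    in conversion rates between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: 10,000 visitors per variant.
z, p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the usual 0.05 level if p < 0.05
```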
•Data Collection (A/B testing): You can run the test yourself or with the help of an A/B testing service. Sending users to the original version of your page is known as "original testing", while sending them to a modified version is called "variant testing". You'll typically want to send only 5-10% of your traffic to each test, enough to limit risk while still collecting the data you need for statistically significant results (see the bucketing sketch after this list).
•Identify goals: Tracking helps you measure the number of sales, leads, and customers your website generates. You can use Analytics goals to measure how effectively your online marketing campaigns reach your target audience and produce business results. For example, you may want to track the number of people who visit a certain landing page from a specific source, such as organic search, and later purchase a product from your site.
•Generate Hypotheses: After this process you should have a prioritized list of hypotheses to test as A/B experiments. To rank them, each hypothesis should have a measurable impact on the metric you identified as your goal, and that metric should be something users of your app can actually move. Avoid metrics that are deeply buried in your app, such as raw "number of sessions" or "impressions"; on their own they tell you little. Instead, use something like "sessions per active user" or "visits per session", which is easy to interpret and ties directly to the lifetime value of your users.
•Create Variations: This is where the magic happens! Once variations are live, you can see metrics like time on page, pages per visit, bounce rate, conversion rate, and much more, and compare the different versions of your pages against one another. The data tells you whether a change is actually producing the desired outcome. Ideally you want to see a gentle, steady increase in your success metrics over time (a small aggregation sketch follows below).
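Picking up the data-collection point above: a common way to hold a test to a fixed slice of traffic is deterministic bucketing, where hashing the user ID sends each visitor to the same bucket on every visit. A minimal sketch, assuming a 10% allocation (the function and experiment names are illustrative):

```python
import hashlib

def bucket(user_id: str, experiment: str, traffic_share: float = 0.10) -> str | None:
    """Deterministically assign a user to 'A' or 'B', or exclude them
    from the experiment (None) based on the traffic share."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # stable, roughly uniform value in [0, 1]
    if point >= traffic_share:
        return None                           # not enrolled; sees the original page
    return "A" if point < traffic_share / 2 else "B"

print(bucket("user-42", "signup-cta-test"))
```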
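And for the comparison step, a minimal sketch that rolls a hypothetical event log up into per-variant conversion rates (the event format is an assumption, not any particular tool's schema):

```python
from collections import Counter

# Hypothetical event log: (variant, event_type) pairs.
events = [
    ("A", "visit"), ("A", "visit"), ("A", "convert"),
    ("B", "visit"), ("B", "visit"), ("B", "visit"), ("B", "convert"),
]

counts = Counter(events)
for variant in ("A", "B"):
    visits = counts[(variant, "visit")]
    conversions = counts[(variant, "convert")]
    print(f"{variant}: {conversions}/{visits} = {conversions / visits:.1%}")
```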
A/B testing & SEO
Make sure you don't use your A/B testing tool to show Googlebot different content or URLs than what visitors see. If Googlebot sees different content or URLs than a human user, Google may ignore the rel="canonical" tag that indicates where the "real" page is and index either version.
1. No cloaking: It's not a good idea to regularly show different content to users than you show to search engines, since it opens up opportunities for abuse and can cause confusion. It can also result in lower rankings, because your site won't be as relevant as sites that show the same information everywhere.
2. Use rel="canonical": Google has said that if you use rel="canonical" properly, testing shouldn't cause a duplicate-content problem. For example, if all the pages in the test link back to the original, there shouldn't be an issue with Google indexing them. Of course, if you intentionally try to trick Google by placing a rel="canonical" on a page without a corresponding link anywhere else on the site, that could cause problems.
3. Use 302 redirects instead of 301s: A 302 is a temporary redirect, so it tells Google to keep the original URL indexed rather than passing ranking signals to the test URL. On-page variations are usually the better way to test, but sometimes the test you want isn't possible with on-page variables. For example, if you want to A/B test your entire home page at once instead of running separate tests for each landing page, your theme's existing structure may not allow it. In that case, you can use a URL redirect to test different home pages without changing the on-page variables. In other words, you create two versions of your theme, one for the original and one for the test (both this and the canonical tag are illustrated in the sketch after this list).
4. Run experiments only as long as necessary: Google considers running tests longer than necessary an attempt to deceive search engines and may take action on your site. Best practice is to update your site and remove all test variations the moment you conclude a test. For more on Google's stance on this topic, see their Webmaster Guidelines section 1.4: "Avoid tricks intended to improve search engine rankings." Source: Barry Schwartz.
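To tie points 2 and 3 together, here is a minimal sketch in Flask (the framework choice, routes, and URLs are all assumptions, not anything Google prescribes) that 302-redirects half of homepage traffic to a variant page which declares the original as canonical:

```python
import random

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/")
def home():
    # 302 (temporary) keeps the original URL indexed; a 301 would signal
    # a permanent move and pass ranking signals to the test URL.
    if random.random() < 0.5:
        return redirect("/home-b", code=302)
    return "<h1>Original home page</h1>"

@app.route("/home-b")
def home_b():
    # The variant points back at the original, so Google treats the
    # original URL as the one to index.
    return (
        '<link rel="canonical" href="https://example.com/">'
        "<h1>Test home page</h1>"
    )

if __name__ == "__main__":
    app.run()
```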