A/B testing lets you experiment with your product experience and improve it by rolling out features safely and strategically.
Available for Enterprise Edition
A/B testing is available for Enterprise Edition only.
After you have logged in to your Countly Enterprise account, you can access A/B testing via the main menu on the left side.
With A/B testing, you can experiment using Remote Config: change parameter values, group them into multiple variants, and alter the behavior and appearance of your app differently for each variant group.
When you open A/B testing, you will see three categories: Running, Drafts, and Completed. These are the possible states of an experiment. You will also see a button to create a new experiment. But first, let's understand what an experiment is.
What is an experiment?
An experiment is a procedure in which you evaluate multiple variants using different Remote Config parameters that you have already created or will create for this experiment. Once the experiment has run, you can check which of your variants performed best and, based on what you observe, roll out the winning variant with the parameter values it defines.
How to create an experiment?
You can create an experiment by clicking the Create experiment button in the A/B testing view, which opens the experiment creation drawer. The drawer consists of four sections: Basics, Targeting, Goals, and Variants.
- Basics: In this section you define the experiment basics, such as its name and description.
- Targeting: In this section you describe the target audience on which the experiment will run. This includes a percentage of the total app users and a target users filter, where you can choose users based on their segmentation properties. The filter and the percentage work as an AND condition. For example: target 50% of app users who use an iPhone 6s.
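Conceptually, the AND-combined targeting can be sketched as follows. This is an illustrative Python example only, not Countly's actual implementation; the hashing scheme and property names are assumptions:

```python
import hashlib

def in_experiment(user_id: str, properties: dict, percentage: int, filters: dict) -> bool:
    """Sketch of AND-combined targeting: percentage bucket AND property filter."""
    # Deterministically map the user to a bucket in [0, 100).
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    in_percentage = bucket < percentage
    # The user must also match every filter property.
    matches_filter = all(properties.get(k) == v for k, v in filters.items())
    return in_percentage and matches_filter

# Example: target 50% of app users on an iPhone 6s; whether this particular
# user enters the experiment depends on their percentage bucket.
print(in_experiment("user-42", {"device": "iPhone 6s"}, 50, {"device": "iPhone 6s"}))
```

Hashing the user ID keeps assignment stable, so the same user always lands in the same bucket across sessions.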
- Goals: This section is similar to creating cohorts; here you set the goals for the experiment. You can base a goal on User Property Segmentation, User Behaviour Segmentation, or both. The first goal is your primary goal, which decides the outcome of the experiment; the rest are additional goals.
- You can set a maximum of 3 goals per experiment.
For example: the goal of the experiment is to find a variant that leads to at least 5 sessions per user.
- Variants: In this section you create variants for your experiment. For each variant, you can either choose an existing Remote Config parameter or create a new one; a parameter only takes effect if it exists in your app. Every variant contains the same parameters, with whatever values you choose to set for them. Each variant competes against the Control Group, which is itself a variant against which all other variants measure their performance. Every experiment has at least two variants, including the Control Group.
- You can have a maximum of 8 variants in an experiment.
- In each variant you can have a maximum of 8 parameters.
- A parameter can only be involved in a single running experiment at once.
- For any Remote Config parameter, the experiment values take priority over its existing conditional values and its default value while the experiment is running.
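The priority rule above can be sketched as a simple resolution order. This is illustrative Python only; the function and dictionary names are assumptions, not the Countly SDK API:

```python
def resolve_parameter(name, experiment_values, conditional_values, defaults,
                      experiment_running):
    """Return the value a user sees for a Remote Config parameter.

    Priority: running experiment value > conditional value > default value.
    """
    if experiment_running and name in experiment_values:
        return experiment_values[name]
    if name in conditional_values:
        return conditional_values[name]
    return defaults.get(name)

# While an experiment runs, its value wins over conditional and default values.
print(resolve_parameter("button_color",
                        {"button_color": "green"},  # variant value
                        {"button_color": "blue"},   # conditional value
                        {"button_color": "gray"},   # default value
                        experiment_running=True))   # -> green
```

Once the experiment stops, resolution falls back to the conditional value, and then to the default.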
Manage your experiment
After you create an experiment, it is added to the Drafts section by default.
Start the experiment
From the Drafts section, you can start the experiment, which moves it to the Running section. Once it is running, you cannot make any changes to the experiment other than stopping it. Once started, an experiment runs for 30 days; after that, it is rendered inconclusive if no leader is found among the variants.
Stop the experiment
You can stop a running experiment from the options menu, which moves it to the Completed section. An experiment can be stopped at any point, regardless of whether a leader has been found. Once a leader is found or the experiment becomes inconclusive, it stops processing, and you can end it.
Monitor the experiment
Once an experiment has been running for a while, you can check its progress and see what the results look like for the users who have participated so far. Just click on your experiment in the Running section. On this page, you can view various statistics about your running experiment, including the general experiment information. You will find the following information for each goal:
- Improvement over baseline: A measure of the variant's improvement over the baseline for the selected goal.
- Conversion rate: The conversion rate of the users who fall into the given variant of the experiment.
- Probability to beat baseline: The probability that a given variant will beat the baseline for the selected goal.
- Conversions: Total user conversions for the variant.
The winning variant
To decide the winning variant of an experiment, we check whether the lower limit of a variant's conversion rate is at least 1% greater than the upper limit of the Control Group's conversion rate. If so, we declare that variant the winner and stop the experiment. This ensures that, even in the worst case, the winning variant improves the conversion rate by at least 1% over the Control Group.
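The decision rule above can be sketched as follows. This is an illustrative Python example using a normal-approximation confidence interval; the exact statistical method Countly uses may differ:

```python
import math

def conversion_interval(conversions: int, users: int, z: float = 1.96) -> tuple:
    """95% normal-approximation confidence interval for a conversion rate."""
    rate = conversions / users
    margin = z * math.sqrt(rate * (1 - rate) / users)
    return rate - margin, rate + margin

def is_winner(variant: tuple, control: tuple) -> bool:
    """Declare a winner when the variant's lower limit beats the
    Control Group's upper limit by at least 1 percentage point."""
    variant_low, _ = conversion_interval(*variant)
    _, control_high = conversion_interval(*control)
    return variant_low >= control_high + 0.01

# Each tuple is (conversions, users): a clearly better variant wins...
print(is_winner((400, 1000), (200, 1000)))  # True
# ...while a marginal difference with overlapping intervals does not.
print(is_winner((210, 1000), (200, 1000)))  # False
```

Comparing interval limits rather than raw rates guards against declaring a winner on noise from a small sample.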
Rollout a variant
After you have a leader, or winning variant, for your primary goal, you can roll out the winning variant from the experiment to 100% of users. You can select whichever variant you like and publish it in Remote Config for all users moving forward. Even if your experiment does not have a clear winner, you can still choose to roll out a variant to all of your users.
Clicking the Rollout variant button opens a drawer where you can choose a variant and roll it out.
Once the variant has been rolled out, you can see its values in Remote Config.