A/B Testing lets you experiment with your product's user experience and improve it by rolling out features safely and strategically.
Available for Countly Enterprise
The A/B Testing plugin is available for Countly Enterprise only.
First of all, make sure the A/B Testing plugin is enabled. To do so, in the main Countly Dashboard, go to
Plugins and enable the A/B Testing toggle. A/B Testing works closely with the Remote Config, Cohorts, and Drill plugins, so please ensure you have these enabled as well.
After that, you will find A/B Testing in the Improve section of your Countly Dashboard.
With A/B Testing, you can experiment using Remote Config by making changes to parameters and grouping them into multiple variants to alter the behavior and appearance of your app in a variety of ways across each variant group.
Once you open A/B Testing, you will see three tabs: Running, Drafts, and Completed. These are the possible states an experiment can have. You will also see a button to create a new experiment. But first, let us understand what an experiment is.
What Is an Experiment?
An experiment is a procedure in which you evaluate multiple variants using different Remote Config parameters that you have already created or will create for this experiment. Once you have created the experiment, you can check which variant performed better than the others and, based on what you observe, roll out the winning variant with the set parameter values.
How to Create an Experiment?
You can create an experiment by clicking the
Create Experiment button in the upper-right corner, which opens the Create new Experiment drawer. The experiment creation drawer consists of four sections: Basics, Targeting, Goals, and Variants.
1. Basics: In this section, you define the experiment basics, such as its name and description.
2. Targeting: In this section, you describe the target audience on which the experiment will run. This includes the Percentage of target users from your app's total users and a Target Users filter, where you can choose users based on their segmentation properties. The filter and percentage work as an AND condition. For example, you can target 50% of the app users who use an iPhone 6s.
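To make the AND condition concrete, here is a minimal Python sketch of how a percentage-plus-filter check can be thought of. The bucketing scheme, function name, and user fields are hypothetical illustrations, not Countly's internals:

```python
import hashlib

def in_target_audience(user, percentage=50, required_device="iPhone 6s"):
    """Illustrative only: BOTH conditions must hold (AND)."""
    # Stable pseudo-random bucket in [0, 100) derived from the user ID
    bucket = int(hashlib.sha256(user["id"].encode()).hexdigest(), 16) % 100
    return bucket < percentage and user.get("device") == required_device

users = [
    {"id": "u1", "device": "iPhone 6s"},
    {"id": "u2", "device": "Pixel 7"},  # fails the device filter regardless of bucket
]
eligible = [u["id"] for u in users if in_target_audience(u)]
```

Because the conditions are ANDed, a user outside the device filter is never eligible, no matter which percentage bucket they land in.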
3. Goals: This section is similar to creating cohorts; here you set your goals for the experiment. You can set a goal based on User Property Segmentation, User Behavior Segmentation, or both at the same time. The first goal is your primary goal, which decides the outcome of the experiment; the rest are additional goals.
You can set a maximum of 3 goals per experiment
For example, the goal of the experiment is to find a variant that leads to at least 5 sessions per user.
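The session goal above amounts to a per-variant conversion check. The following is an illustrative Python sketch with made-up session counts, not Countly's implementation:

```python
def conversion_rate(session_counts, threshold=5):
    """Fraction of users meeting the goal (at least `threshold` sessions)."""
    converted = sum(1 for s in session_counts if s >= threshold)
    return converted / len(session_counts) if session_counts else 0.0

control = [3, 6, 2, 7, 5]    # sessions per user in the Control Group (hypothetical)
variant_a = [5, 8, 4, 6, 9]  # sessions per user in Variant A (hypothetical)
rates = {
    "control": conversion_rate(control),      # 3 of 5 users reach 5+ sessions
    "variant_a": conversion_rate(variant_a),  # 4 of 5 users reach 5+ sessions
}
```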
4. Variants: In this section, you create variants for your experiment. For a variant, you can either choose an existing Remote Config parameter or create a new one.
Make sure that the parameter is actually used in your app; otherwise, it will have no effect in the experiment.
Each variant will have the same parameters, with the values you choose to set for them. Each variant competes against a control group. The Control Group is itself a variant, against which all other variants test their performance. Every experiment must have at least two variants in addition to the Control Group.
In this step, keep in mind the following:
- You can have a maximum of 8 variants in an experiment.
- In each variant you can have a maximum of 8 parameters.
- A parameter can only be involved in a single running experiment at once.
- For any remote config parameter, the experiment values will be given priority over any of its existing conditional values or the default value, provided the experiment is running.
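The priority rule in the last point can be sketched as a simple resolution order. This is an illustrative Python sketch, not Countly's actual implementation; the parameter structure and function names are hypothetical:

```python
def resolve_parameter(param, user, experiment_running):
    """Resolution order while an experiment runs:
    experiment value -> first matching conditional value -> default value."""
    if experiment_running and "experiment_value" in param:
        return param["experiment_value"]
    for condition, value in param.get("conditions", []):
        if condition(user):
            return value
    return param["default"]

# Hypothetical parameter: default "blue", conditional "green" for German users,
# and "red" assigned by a running experiment variant.
param = {
    "default": "blue",
    "conditions": [(lambda u: u.get("country") == "DE", "green")],
    "experiment_value": "red",
}
user = {"country": "DE"}
```

While the experiment runs, the user receives "red" even though a conditional value would otherwise match; once it stops, the conditional and default values apply again.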
Managing Your Experiment
After you create an experiment, it gets added to the Drafts tab by default.
Start the Experiment
From the Drafts tab, you can start the experiment, which moves it to the Running tab. Once it is running, you cannot make any changes to the experiment other than stopping it.
Once started, the experiment will run for 30 days, after which it will be rendered inconclusive if no leader is found among the variants.
Stop the Experiment
You can stop a running experiment from the 3-dot ellipsis menu on each experiment card. This action moves the experiment to the Completed tab. An experiment can be stopped at any point, regardless of whether a winner has been found. Once a winner is found or the experiment is rendered inconclusive, it stops processing data, and you can end the experiment.
Monitor the Experiment
Once an experiment has been running for a while, you can check its progress and see what the results look like for the users who have participated in your experiment so far. Just click on your experiment in the Running tab. On this page, you can view various statistics about your running experiment, including the general experiment information. Underneath, you will find the following information for each Goal:
- Improvement over baseline: A measure of the variant's improvement over the baseline for the selected goal.
- Conversion rate: The conversion rate of the users falling into the given variant of the experiment.
- Probability to baseline: The probability that a given variant will beat the baseline for the selected goal.
- Conversions: The total user conversions for the variant.
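As a rough illustration of the first two metrics, relative improvement over baseline can be computed as below. This is a simplified sketch with hypothetical numbers; the exact statistics Countly computes may differ:

```python
def improvement_over_baseline(variant_rate, baseline_rate):
    """Relative improvement of a variant's conversion rate over the baseline."""
    return (variant_rate - baseline_rate) / baseline_rate

# Hypothetical counts: 40 of 200 control users converted, 60 of 200 variant users
baseline_rate = 40 / 200  # 0.20
variant_rate = 60 / 200   # 0.30
lift = improvement_over_baseline(variant_rate, baseline_rate)  # 0.5, i.e. +50%
```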
The Winning Variant
To decide the winning variant of an experiment, we check whether the lower limit of a variant's conversion rate is at least 1 percentage point greater than the upper limit of the Control Group's conversion rate. If so, we declare a winner and stop the experiment. This ensures that, even in the worst case, the winning variant improves the conversion rate by at least 1 percentage point over the Control Group.
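The decision rule can be sketched in Python as follows. This is illustrative only: the conversion-rate intervals are assumed to be computed elsewhere, and the numbers are hypothetical:

```python
def is_winner(variant_ci, control_ci, margin=0.01):
    """Declare a winner when the variant's lower conversion-rate bound exceeds
    the control's upper bound by at least the margin (1 percentage point)."""
    variant_low, _ = variant_ci
    _, control_high = control_ci
    return variant_low >= control_high + margin

# Hypothetical conversion-rate intervals as (lower, upper) bounds
control = (0.18, 0.22)
variant = (0.24, 0.30)  # lower bound 0.24 >= 0.22 + 0.01, so this variant wins
```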
Rolling Out a Variant
After you have a winning variant for your primary goal, you can roll out this variant from the experiment to 100% of the users. You can select whichever variant you like and publish it in Remote Config for all users moving forward. Even if your experiment does not have a conclusive winner, you can still choose to roll out a variant to all of your users.
Clicking the
Rollout variant button opens a drawer where you can choose your variant and roll it out.
Once the variant has been rolled out, you can verify it in Remote Config.