Whenever you set up an A/B/n test with multiple variations, it’s important to determine how you want the traffic to be distributed between the variations. The behavior of each traffic allocation option is as follows:
Manual Traffic Allocation
Traffic is distributed between the variations either evenly or according to predefined allocation rates. For example, if you launch a test with four variations, you may decide that all variations should have equal exposure: 25% of traffic each. Alternatively, you can favor certain variations over the others with any other combination of allocation rates that sums to 100%, such as 50/20/20/10. Manual allocation is de facto a standard A/B/n test, and the assumption is that once results are significant, the test administrator will assign only the best variation to all visitors.
Manual Allocation tests are by definition tests between variations (and Control Group, if relevant), in which ultimately one variation will be declared the winner with high confidence levels.
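To make the mechanics of fixed allocation rates concrete, here is a minimal sketch of deterministic weighted bucketing. The function name, visitor IDs, and hashing scheme are illustrative assumptions, not Dynamic Yield's actual implementation:

```python
import hashlib

def assign_variation(visitor_id, weights):
    """Deterministically bucket a visitor into a variation
    according to fixed allocation rates that sum to 100."""
    assert sum(weights.values()) == 100
    # Hash the visitor ID to a stable bucket in [0, 100),
    # so the same visitor always sees the same variation.
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    cumulative = 0
    for variation, weight in weights.items():
        cumulative += weight
        if bucket < cumulative:
            return variation

# A 50/20/20/10 split across four variations
weights = {"A": 50, "B": 20, "C": 20, "D": 10}
print(assign_variation("visitor-123", weights))
```

Hashing the visitor ID (rather than drawing a random number per request) keeps the assignment sticky, so returning visitors are not bounced between variations.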
Automatic Traffic Allocation
The statistical engine dynamically and automatically allocates traffic to the most appropriate variation, using big data and machine learning algorithms, to deliver optimal performance in real time. In Dynamic Yield, the algorithm primarily at work here is the Multi-Armed Bandit (MAB), which recomputes variation weights and reallocates traffic every 30 minutes.
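Dynamic Yield does not publish the exact bandit variant it uses, but the periodic reallocation step can be sketched with Thompson sampling, a common multi-armed bandit strategy. The variation names and conversion counts below are made up for illustration:

```python
import random

def reallocate(stats, n_samples=10000):
    """Estimate new traffic weights via Thompson sampling:
    sample each variation's conversion rate from its Beta
    posterior and allocate traffic in proportion to how
    often each variation produces the highest sample."""
    wins = {v: 0 for v in stats}
    for _ in range(n_samples):
        draws = {
            v: random.betavariate(s["conversions"] + 1,
                                  s["visitors"] - s["conversions"] + 1)
            for v, s in stats.items()
        }
        wins[max(draws, key=draws.get)] += 1
    return {v: w / n_samples for v, w in wins.items()}

# Hypothetical performance data collected since the last reallocation
stats = {
    "A": {"visitors": 1000, "conversions": 50},
    "B": {"visitors": 1000, "conversions": 65},
}
print(reallocate(stats))
```

The better-performing variation receives more traffic, but the weaker one keeps a small share, preserving some exploration in case its performance shifts. Re-running this step on a schedule (every 30 minutes, in Dynamic Yield's case) is what makes the allocation dynamic.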
The following diagram illustrates the traffic allocation behavior over a week-long experiment between two variations, where a decision is required by the eighth day:
How to Choose Traffic Allocation?
Each traffic allocation option is optimized for distinct use cases. To understand which one is most appropriate for the use case at hand, ask yourself which of the following assertions better describes it:
- I am looking for the best variation so I can present it to all users in the long run. In this case, choose Manual Allocation. Use-case example: Layout and UX changes.
- I am looking to make the most out of several variations during the limited time the test will run. In this case, choose Automatic Allocation. Use-case example: Promotions on the hero banner.
Manual Allocation should be used when statistically significant results are required for deciding on a permanent change to the website, and time is not of the essence. Manual Allocation tests can run for as long as required, collecting data that yields highly conclusive, statistically significant results. The downside of such tests is that while you are waiting for significant results – which may take time – the data collected is not exploited: visitors are still exposed to the poor-performing variations in the mix. In cases where promotion variations are updated frequently, there may not even be enough time to reach significant results, and any optimization opportunity is lost.
If you are managing campaigns in which the variations have a short shelf life, or in which they are updated frequently, then Automatic Allocation is the optimal way to go. Automatic Allocation exploits readily available data at a much higher rate and is far more aggressive in driving traffic allocation decisions. It also accounts for new variations, shifts in variation performance, different time periods, and more.