Machine Learning Based Optimization vs. A/B Testing
Is it possible to personalize experiences at scale with A/B testing alone, or is machine learning necessary in the face of human limitations? Our CEO, Liad Agmon, answers.
Read the full transcript
First of all, if you’re just A/B testing for all of your audience at the same time, you’re basically optimizing for the quote-unquote average user. But who is the average user? You have your frequent shoppers.
You have first-time visitors. You have people who came from a Google campaign. You have people who came through your app.
So, we tie A/B testing technology very closely to segmentation strategy. But there is a limit to how many segments you can manage as a human being before it becomes too complicated, and this is why we introduced automated optimization, which basically means that we’re using ad-serving-like techniques for changing the onsite experience.
So, what our customers do today is, instead of just doing an A/B test of five different banners or five different calls-to-action, they create all these variations and upload them to Dynamic Yield, and we make a real-time, machine-learning-based decision on what variation to show each individual user based on all the data we have on that individual, whether it’s first-party data or third-party data.
The other big advantage of optimization versus A/B testing is the duration of the test. When it comes to experimenting with short-lived events, like a holiday or back-to-school, instead of doing an A/B test, our customers go for automated optimization. And this is where the machine learning algorithms kick in, and they start predicting for each individual user what we should show them in order to maximize revenue.
And then we keep a control group, and after the holiday is over, you can see, oh, my optimization mechanism has generated 10% more revenue than my control group, which was the variation I had before I started the test. So, the idea of using machine learning and real-time optimization versus A/B testing is very important for anything that is short-lived.
It’s no secret that human intuition is naturally subject to ego and bias, which is why marketers adopted the scientific method of A/B testing for evaluating and serving experiences instead of relying on gut-based decisions that produce subpar results and quickly diminishing returns.
As a data-driven company, we are obsessed with A/B testing — that’s why we’ve baked experimentation into all areas of our omnichannel personalization platform. We also A/B test every single thing we do as a company.
However, it is important to acknowledge the limitations of A/B testing.
Let’s start with A/B testing for your entire audience…
The Problem with the Average User
First of all, who is the average user?
There are so many different types of users or segments who interact with your brand, like:
- Frequent shoppers
- First-time visitors
- Visitors from a Google campaign
- Visitors who came through your app
Not to mention, only a small portion of these visitors will actually end up contributing to the business in a meaningful way. Therefore, marketers can no longer optimize according to how they think the ‘average visitor’ is going to interact with their site, or even how a large segment of users will.
But properly executed segmentation for true personalization is a tedious, data-heavy task: it requires numerous test deployments that reach conclusive results, careful analysis of the data, and measurement of every tested variation against each audience segment in order to determine the optimal programmatic targeting rules for an experience.
Which leads to another human shortcoming of A/B testing.
The Problem with Winner Takes All
No matter how mathematical an individual’s brain, there will always be a limit to how many segments can be managed before things become too complicated, especially when factoring in contextual data such as user activity, affinities, geography, and so on. With such a host of permutations and combinations, picking a single winning variation for a constantly changing customer base becomes impossible.
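To get a feel for how quickly this gets out of hand, here is a deliberately modest back-of-the-envelope sketch in Python. The dimensions and counts are hypothetical, not taken from any real account, but even a handful of contextual attributes multiplies into far more segment-variation pairs than a team could test manually with statistical rigor:

```python
from math import prod

# Hypothetical illustration of the combinatorial explosion behind segmentation.
dimensions = {
    "traffic_source": 4,   # e.g., direct, Google campaign, email, app
    "device_type": 3,      # desktop, mobile web, native app
    "affinity_group": 5,
    "loyalty_tier": 2,
}
variations_per_test = 5

segments = prod(dimensions.values())                    # 4 * 3 * 5 * 2 = 120 segments
pairs_to_evaluate = segments * variations_per_test      # 600 segment-variation pairs
print(segments, pairs_to_evaluate)
```

Each of those 600 pairs would still need enough traffic to reach a conclusive result before any targeting rule could be trusted.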
So, while A/B testing might be a relatively easy practice to execute, most marketers will continue to faithfully serve a “winner takes all” approach because they cannot handle the heavy-duty analysis required, despite knowing it will compromise the experience for a portion of their visitors.
This strategy quickly dismantles an effective personalization program (even at the customer segment level) and leaves money on the table for any business looking to take their marketing to a 1:1 level.
And this is exactly why automated optimization is necessary.
Machine Learning Pwns
By using ad-serving-like techniques to change the onsite experience, instead of running an A/B test of five different banners or five different calls-to-action, marketers can create all the variations they need and let a real-time machine learning engine do the work. Drawing on algorithms that constantly collect user data and signals, the engine then delivers the best variation to each individual user, regardless of where they arrive from, what device they are using, and so on.
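The algorithms behind such an engine are proprietary, but the core idea can be illustrated with a simple multi-armed bandit. The sketch below is a minimal Python example, assuming an epsilon-greedy strategy, hypothetical banner names, and a crude segment key; it is not Dynamic Yield’s actual implementation:

```python
import random
from collections import defaultdict

# Illustrative sketch: an epsilon-greedy bandit that decides which banner
# variation to serve, keyed by a simple (hypothetical) user-segment string.
VARIATIONS = ["banner_a", "banner_b", "banner_c", "banner_d", "banner_e"]
EPSILON = 0.1  # fraction of traffic reserved for exploration

impressions = defaultdict(lambda: defaultdict(int))  # segment -> variation -> shows
conversions = defaultdict(lambda: defaultdict(int))  # segment -> variation -> conversions

def choose_variation(segment: str) -> str:
    """Serve the best-performing variation for this segment most of the time,
    but keep exploring so newly winning variations can still be discovered."""
    if random.random() < EPSILON:
        return random.choice(VARIATIONS)
    def conversion_rate(v: str) -> float:
        shows = impressions[segment][v]
        return conversions[segment][v] / shows if shows else 0.0
    return max(VARIATIONS, key=conversion_rate)

def record_result(segment: str, variation: str, converted: bool) -> None:
    impressions[segment][variation] += 1
    if converted:
        conversions[segment][variation] += 1

# Example: a returning shopper arriving from a Google campaign
segment = "google_campaign|returning"
variation = choose_variation(segment)
record_result(segment, variation, converted=False)
```

In practice, an engine of this kind would replace the hand-built segment key with the full set of first-party and third-party signals mentioned above, but the serve-observe-update loop is the same.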
Beyond ridding the marketer of endless, tedious data analysis, the other big advantage of machine learning optimization versus A/B testing is the duration of the test and its associated impact on revenue.
For instance, when experimenting around short-lived events, say, a holiday or a back-to-school campaign, instead of running an A/B test and trying to optimize on the fly, machine learning algorithms can predict, for each individual, which experience will perform best, and thus maximize revenue over the duration of the entire campaign. Upon completion, marketers can compare the optimization mechanism against a control group that kept the original experience and validate their results.
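That final comparison is straightforward arithmetic. The sketch below uses invented numbers purely for illustration, reproducing the kind of “10% more than my control group” result described in the transcript:

```python
# Illustrative uplift calculation with invented numbers: compare revenue per
# visitor in the machine-optimized group against a control group that kept
# the original (pre-test) experience for the whole campaign.
def revenue_per_visitor(total_revenue: float, visitors: int) -> float:
    return total_revenue / visitors

optimized_rpv = revenue_per_visitor(total_revenue=132_000.0, visitors=40_000)  # 3.30
control_rpv = revenue_per_visitor(total_revenue=30_000.0, visitors=10_000)     # 3.00

uplift = (optimized_rpv - control_rpv) / control_rpv
print(f"Uplift vs. control: {uplift:.1%}")  # -> 10.0% in this made-up example
```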
The key to personalizing experiences that influence action is to treat each outcome as unique and dynamically respond to each customer individually, a feat which can only be scaled with machine learning.