Marketers who actively optimize and personalize their web properties often encounter a common scenario: experiments show successful results, with high statistical significance and uplifts in KPIs. In reality, however, when the effects of these results are measured over time, the impact of the alleged performance uplifts more often than not turns out to be vague, insignificant, or even reversed. Meaningful optimization requires marketers to be persistent and patient, and necessitates that short-term optimization successes be scrutinized within the context of long-term occurrences and initiatives.
Optimization results do not always provide clear, valuable and actionable insight. The reason for this is that visitors’ behavior is constantly changing based on a variety of internal factors (targeted promotions, design changes) and external factors (seasonality, social trends, etc.). To illustrate the importance of accounting for external factors, let’s borrow an example from Daniel Waisberg, Analytics Advocate at Google: an electronics company’s revenue plummets during January and February of a given year. If the CMO is analyzing his sales data during those months by looking at the previous eight weeks, he may jump to the conclusion that sales are in a sudden free fall and use this information to structure the company’s optimization strategy for the upcoming quarters. However, if he patiently compared the same period YoY for the past 36 months, he’d be able to identify repeated behavior driven by seasonality: January and February are generally slow months for consumer electronics, while November and December are almost always big.
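To make the seasonal comparison concrete, here is a minimal Python sketch using entirely hypothetical revenue figures (not from the Waisberg example): the same January–February numbers look like a free fall when compared against the preceding holiday weeks, but look roughly flat when compared against the same period a year earlier.

```python
# Hypothetical monthly revenue (in $k), keyed by (year, month).
revenue = {
    (2013, 11): 900, (2013, 12): 950,  (2014, 1): 400, (2014, 2): 420,
    (2014, 11): 920, (2014, 12): 980,  (2015, 1): 410, (2015, 2): 430,
    (2015, 11): 940, (2015, 12): 1000, (2016, 1): 415, (2016, 2): 435,
}

def vs_previous_period(year):
    """Naive view: compare Jan-Feb against the preceding Nov-Dec."""
    current = revenue[(year, 1)] + revenue[(year, 2)]
    previous = revenue[(year - 1, 11)] + revenue[(year - 1, 12)]
    return (current - previous) / previous

def vs_same_period_yoy(year):
    """Seasonal view: compare Jan-Feb against Jan-Feb of the prior year."""
    current = revenue[(year, 1)] + revenue[(year, 2)]
    prior = revenue[(year - 1, 1)] + revenue[(year - 1, 2)]
    return (current - prior) / prior

print(f"2016 vs preceding Nov-Dec: {vs_previous_period(2016):+.1%}")  # a steep "drop"
print(f"2016 vs Jan-Feb 2015:     {vs_same_period_yoy(2016):+.1%}")   # modest growth
```

With these made-up numbers, the eight-week view shows revenue down by more than half, while the YoY view shows slight growth: the "free fall" is just seasonality.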
The Single-Variant Optimization Head Fake
What’s wrong with placing a winning variation if it is indeed a winner? Quite simply, permanent implementation of a single winning variation provides a homogeneous experience for all visitors, making it a fundamentally flawed optimization strategy. Simply put, one variation cannot be suitable for all visitors. Marketers need to be patient and understand that a good experiment should look beyond short-term results and include long-term KPIs such as predicted average lifetime value and impact on revenue, which is where dynamic experimentation solutions come in handy. With the help of Contextual Bandit algorithms and predictive analytics, effective optimization solutions can dynamically maximize the cumulative impact of continuous optimization in the long run, instead of statically optimizing single A/B testing initiatives for the short term.
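The article does not name a specific algorithm, so purely as an illustration, here is a minimal epsilon-greedy sketch of the contextual-bandit idea in Python: each visitor context (for example, device type) keeps its own reward estimates per variation, so different segments can converge on different "winners" instead of one variation being served to everyone. All names, contexts, and numbers are hypothetical.

```python
import random

VARIATIONS = ["A", "B", "C"]
EPSILON = 0.1  # fraction of traffic reserved for exploration

class ContextualBandit:
    """Toy epsilon-greedy bandit with independent arms per context."""

    def __init__(self):
        # per-context running stats: {context: {variation: [pulls, total_reward]}}
        self.stats = {}

    def choose(self, context):
        arms = self.stats.setdefault(context, {v: [0, 0.0] for v in VARIATIONS})
        if random.random() < EPSILON:
            return random.choice(VARIATIONS)  # explore a random variation
        # exploit: serve the variation with the highest observed mean reward
        return max(arms, key=lambda v: arms[v][1] / arms[v][0] if arms[v][0] else 0.0)

    def update(self, context, variation, reward):
        # record an observed reward (e.g. 1.0 for a conversion, 0.0 otherwise)
        pulls_reward = self.stats[context][variation]
        pulls_reward[0] += 1
        pulls_reward[1] += reward
```

In use, `choose("mobile")` and `choose("desktop")` can end up returning different variations once enough rewards have been recorded, which is the core contrast with permanently shipping a single A/B winner. A production system would use richer context features and confidence-aware policies (e.g. UCB or Thompson sampling), but the serve-observe-update loop is the same.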
The Hawthorne Effect
Another good reason to run tests with the long term in mind is the Hawthorne effect. As psychologists explain it, the Hawthorne effect is the tendency of experiment subjects to change their behavior simply because they are aware of being observed. When it comes to online experiments, visitors may be introduced to new and drastic changes (like a site rebranding or significant design and layout changes). This can cause loyal customers or experienced users to become less active until they get used to the changes, resulting in a primacy or “newness” bias. Dominic Basulto, an innovation blogger at The Washington Post and Big Think, contemplates our ability to recognize these types of changes in an article titled “Humans Are the World’s Best Pattern-Recognition Machines, But for How Long?”, and asserts that for marketers, solving the newness bias is a matter of patience:
“Fortunately, by running experiments for longer periods, even when statistical significance has already reached a safe confidence level, marketers can reduce (and sometimes eliminate) any potential test biases, such as with this “newness” bias.”
Running experiments for short periods of time will not provide accurate insight into the nature of your business during that period. Longer experimentation is therefore essential in gauging visitors’ true behavior, as it accounts for the internal and external factors that affect the way visitors interact with your site.
In summary, an effective optimization strategy is the product of patience and persistence on the part of the marketer. Achieving it requires looking beyond short-term results and accounting for the external and internal factors that may have influenced visitor behavior, so that short-term optimization successes are judged within the context of long-term occurrences and initiatives.
Gaining actionable insights from A/B test data means understanding that no single test variation can fit all users (no matter how statistically successful it appears), and that running experiments over the long term will best inform your optimization strategy going forward.