Challenges Publishers are Facing Today with Homepage A/B Testing

Marketers have long embraced A/B testing as a powerful tactic for testing and optimizing layouts and digital media offerings. Rather than relying on human intuition, which is naturally subject to ego and bias, marketers reason that introducing a scientific mechanism for evaluating and serving offerings increases their likelihood of maximizing marketing ROI.

The same logic applies to publishers, especially news companies, which stand to reap real benefits from an automated process for selecting and displaying dynamic, optimized content on the fly. Yet optimization and personalization providers are often dumbfounded by how much pushback they receive from publishers on the idea of automating content selection and display.

Are publishers being irrational when resisting automated optimization technologies? Is A/B testing simply not suitable for publishers? Can publishers, despite their initial resistance, still benefit from A/B testing?

The difference lies in who makes the decisions. Marketers are focused on increasing marketing ROI, are judged at the end of the day by how much revenue different variations generate, and are often unattached to any specific variation. Publishing decisions, by contrast, are made by editors, whose expertise is precisely in selecting winning content. Understanding this is key to why some publishers resist introducing automated optimization into decision making grounded in editorial expertise (or at least perceived expertise).

Even so, many publishers can, and in fact do, benefit from A/B testing. Some use automated testing to complement manual decision making, and some are gradually placing greater weight on automated results.

However, publishers face several unique challenges when implementing A/B testing. Below we detail five challenges publishers face when implementing traditional, “out-of-the-box” A/B testing solutions. A/B testing providers should be mindful of these challenges when tailoring their solutions to publishers, and publishers should look for providers who best address them:

No “Best” Variation

While marketers primarily care about increasing revenue, publishers care about the quality of the content they serve, providing in-depth and interesting articles that capture the public’s attention. But different people have very different interests. How can publishers find a single “winning” variation when an in-depth investigative story may be less appealing to some readers than a sports update? Are publishers simply expected to concede certain audiences to whichever story generates the highest total interest?

To address this challenge, publishers can use a solution with dynamic traffic allocation capabilities, such as a contextual bandit algorithm. In essence, this means the A/B testing solution should not display a single “winning” variation but rather dynamically serve multiple high-performing variations to different audiences, based on context and inferred intent. This way, each audience receives a tested, optimized variation best suited to its preferences.
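To make this concrete, here is a minimal sketch of what dynamic allocation with a contextual bandit might look like, using Thompson sampling. The context buckets, variation names, and click rates below are hypothetical stand-ins; a production system would use far richer context features and real reader feedback.

```python
import random
from collections import defaultdict

# Minimal Thompson-sampling contextual bandit sketch (hypothetical setup).
# Each (context, variation) pair keeps a Beta posterior over its click rate;
# we sample from the posteriors and serve the variation with the highest draw,
# so exploration and exploitation are balanced automatically.

VARIATIONS = ["investigative_lead", "sports_lead", "politics_lead"]

class ContextualBandit:
    def __init__(self, variations):
        self.variations = variations
        # Beta(1, 1) prior: alpha counts clicks, beta counts non-clicks.
        self.alpha = defaultdict(lambda: 1)
        self.beta = defaultdict(lambda: 1)

    def choose(self, context):
        # Sample a plausible click rate for each variation in this context
        # and serve the current best guess.
        draws = {
            v: random.betavariate(self.alpha[(context, v)], self.beta[(context, v)])
            for v in self.variations
        }
        return max(draws, key=draws.get)

    def update(self, context, variation, clicked):
        if clicked:
            self.alpha[(context, variation)] += 1
        else:
            self.beta[(context, variation)] += 1

# Usage: "context" could be a coarse audience bucket (all hypothetical here).
bandit = ContextualBandit(VARIATIONS)
for _ in range(10_000):
    context = random.choice(["sports_fan", "news_junkie"])
    variation = bandit.choose(context)
    # In production the click comes from the reader; simulated here.
    clicked = random.random() < (0.08 if "sports" in variation and context == "sports_fan" else 0.03)
    bandit.update(context, variation, clicked)
```

Over time, each audience bucket converges toward the variation it actually responds to, rather than everyone receiving a single overall “winner.”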

Ever-Changing Environments

Front-page article lifecycles are brief, and editors constantly change the position and content of lead articles. For A/B testing to be effective for publishers, results must be reached quickly; otherwise, a test may never reach statistical significance before the article rotates off the page.

This challenge is not easily overcome. Low homepage traffic or low-CTR positions will hinder testing and produce incomplete results, which is why traditional solutions rarely yield complete, accurate results that actually help publishers.

A/B testing solutions originally designed for front-page testing tend to address these issues more effectively: they use more sophisticated testing and dynamic allocation algorithms specifically built to achieve high-confidence results in short periods of time.
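One common way such solutions call results quickly is to evaluate, after every batch of impressions, the posterior probability that each variation is currently the best, rather than waiting for a fixed sample size. Here is a rough sketch of that idea; the impression and click counts, and the 95% decision threshold, are illustrative assumptions, not real data or a universal rule.

```python
import random

# Sketch: estimate the probability that each variation is best from Beta
# posteriors over click rates, so a test can be called early once one
# variation clearly dominates. Counts below are made up for illustration.
observations = {
    "headline_a": {"impressions": 1200, "clicks": 66},
    "headline_b": {"impressions": 1180, "clicks": 41},
}

def prob_best(observations, samples=20_000):
    wins = {name: 0 for name in observations}
    for _ in range(samples):
        draws = {
            name: random.betavariate(1 + d["clicks"], 1 + d["impressions"] - d["clicks"])
            for name, d in observations.items()
        }
        wins[max(draws, key=draws.get)] += 1
    return {name: w / samples for name, w in wins.items()}

print(prob_best(observations))
# If one variation exceeds, say, a 95% probability of being best, the test
# can conclude well before a fixed-horizon sample size would be reached.
```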

Limited Testing Scope

When thinking about publisher front-page A/B testing, the first elements that come to mind are headlines and images. However, article position and subcomponents also play a big role in bottom-line performance (be it pageviews, clicks, newsletter signups, etc.). Simply put, A/B testing only a headline and an image is incomplete and may not produce significant returns for publishers.

Publishers should embrace A/B testing solutions and methodologies that test not only headlines and images but also different layouts and article positions. Not all A/B testing tools allow this sort of advanced testing, and to extract maximal value from an automated testing framework, publishers should seek a wide range of testing capabilities.

Improper Analysis

Publishers may be eager to integrate automated solutions. However, some fail to understand that with very low CTRs it is virtually impossible to tell what works and what doesn’t, since CTR is generally the benchmark against which success and failure are measured.

To avoid wasting precious time on testing that yields no meaningful insights, publishers should determine ahead of time which elements to subject to automated testing. Testing elements that carry no reliable signal of success or failure wastes time and energy and can lead to frustration and abandonment of the automated testing process. Conduct an analysis to define which areas will benefit the most from automated testing, and waste no time in getting right to it!
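A standard two-proportion sample-size calculation is one simple way to run this pre-test analysis. The sketch below shows why a low-CTR element is a poor testing candidate; the baseline CTRs and the 10% minimum detectable lift are assumptions chosen purely for illustration.

```python
from math import ceil

# Approximate impressions needed per variation for a two-proportion test
# (normal approximation, 95% confidence, 80% power). Baseline CTRs and the
# minimum detectable relative lift are illustrative assumptions.
def impressions_per_variation(baseline_ctr, relative_lift, z_alpha=1.96, z_beta=0.84):
    p1 = baseline_ctr
    p2 = baseline_ctr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A prominent slot with a 5% CTR needs far fewer impressions to detect a
# 10% relative lift than a below-the-fold slot with a 0.2% CTR.
print(impressions_per_variation(0.05, 0.10))   # on the order of ~30,000
print(impressions_per_variation(0.002, 0.10))  # roughly 25x more
```

If the low-CTR slot would need more impressions than it realistically receives during an article’s lifetime, it is better left out of the test.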

Not Testing Element Combinations

What should publishers focus on when creating variations? Should they test 3 different titles with the same accompanying image? 3 different images with the same title? Or a combination of different titles and images? And how can publishers be sure a certain combination of elements yielded the best results?

A/B testing algorithms will typically call a result based on a combination of three elements:

  • Minimum run time of the experiment;
  • Minimum real impressions per variation; and
  • Minimum clicks per variation.

A high significance level is achieved once the above three minimums are met. The more variations a publisher tests, the more time it takes for the A/B testing algorithm to reach those minimums. A/B testing should therefore be tailored to the website’s traffic levels: the higher the traffic, the more elaborate the test can be. It is on the publisher to analyze how much test sophistication its traffic allows, as the back-of-the-envelope sketch below illustrates, rather than jumping ahead to elaborate tests that yield results with low significance.
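Here is one way to run that back-of-the-envelope check: estimate how long an evenly split test takes to hit the three minimums as the variation count grows. All of the thresholds and traffic figures below are assumptions for illustration, not universal constants.

```python
from math import ceil

# Back-of-the-envelope: days until every variation meets illustrative
# minimums, assuming traffic is split evenly. All thresholds are assumed.
MIN_IMPRESSIONS = 5_000  # minimum real impressions per variation (assumed)
MIN_CLICKS = 100         # minimum clicks per variation (assumed)
MIN_DAYS = 1             # minimum run time of the experiment (assumed)

def days_to_minimums(daily_impressions, ctr, num_variations):
    per_variation_daily = daily_impressions / num_variations
    days_for_impressions = MIN_IMPRESSIONS / per_variation_daily
    days_for_clicks = MIN_CLICKS / (per_variation_daily * ctr)
    return ceil(max(MIN_DAYS, days_for_impressions, days_for_clicks))

# 3 title-only variations vs. 3 titles x 3 images = 9 combined variations:
print(days_to_minimums(daily_impressions=20_000, ctr=0.01, num_variations=3))  # 2 days
print(days_to_minimums(daily_impressions=20_000, ctr=0.01, num_variations=9))  # 5 days
```

When the estimated run time exceeds the article’s expected life on the front page, the combination test is too elaborate for the available traffic.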

We have reviewed a few of the challenges editors face when introducing automated testing into their workflow. On top of the above, editors may not be particularly technologically oriented, so they should also look for a solution with intuitive, easy-to-use features and layouts.

By acknowledging these challenges, publishers can effectively scope A/B testing solutions and implement the one best suited to their needs and desired goals.
