Read about Dynamic Yield’s A/B/n test versioning controls, which give marketers control over which changes should trigger new test versions.
The days when marketers made changes to their websites based on gut feelings alone are far behind us. We are now deep in the A/B/n testing era, basing our decisions on as much empirical data as possible by running experiments with different variations for a portion of our site traffic.
Since the results of such tests drive long-term decisions that can impact the entire user experience, we generally recommend testing for a minimum of two weeks in order to reduce any effects from seasonality. It takes time and patience, but the stakes are high for a marketing team.
Due to the nature of modern web development, marketing teams often want to make edits to their tests in hopes of optimizing on the fly, even while those tests are running. Without the proper tools and guardrails, that’s a risky endeavor, because once you make any edits, you can’t conclusively rely on the results from the test.
For example, if you edit a variation of a button from green to blue, without proper version handling, you would be looking at “dirty” data, including performance results from when the variation was green. Due to this concern, many testing vendors don’t allow for any edits to running tests at all.
At Dynamic Yield, we understand web development is dynamic by nature, and marketers will often need to make adjustments to their tests for various reasons. Previously, we allowed users to make changes to a running test, but protected them by splitting the test’s performance data into versions and only weighing data collected after the edit in order to declare the winning variation.
If a test was initiated on Monday but then edited on Tuesday, only data from Tuesday onwards would be used for declaring a test winner. This gave marketers the flexibility to make changes while keeping the decision-making mechanism clean. Data collected prior to editing the test is kept and accessible from the test’s report, but the default view is the current test version.
Marketers were empowered to edit their tests and variations while Dynamic Yield’s platform managed the heavy lifting to ensure statistical reliability.
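The data split described above can be sketched in a few lines. This is a minimal illustration, not Dynamic Yield’s actual implementation: the `Event` record and `conversion_rates` helper are hypothetical names, and the example simply filters out any events recorded before the current version’s start time before computing per-variation conversion rates.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event record: which variation a visitor saw,
# when it was served, and whether the visitor converted.
@dataclass
class Event:
    variation: str
    timestamp: datetime
    converted: bool

def conversion_rates(events, version_start):
    """Compute per-variation conversion rates using only events
    recorded at or after the current test version's start time."""
    current = [e for e in events if e.timestamp >= version_start]
    rates = {}
    for variation in {e.variation for e in current}:
        served = [e for e in current if e.variation == variation]
        conversions = sum(e.converted for e in served)
        rates[variation] = conversions / len(served)
    return rates

# A test started Monday and edited Tuesday: only Tuesday's
# data feeds the winner decision.
monday = datetime(2024, 1, 1)
tuesday = datetime(2024, 1, 2)
events = [
    Event("green", monday, True),    # pre-edit, excluded
    Event("green", tuesday, False),  # post-edit, counted
    Event("blue", tuesday, True),    # post-edit, counted
]
print(conversion_rates(events, version_start=tuesday))
```

Here the Monday conversion for the green variation is dropped, so the “dirty” pre-edit data never influences which variation is declared the winner.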
While this mechanism was an improvement over not allowing users to edit their tests at all, we learned from our customers that it wasn’t enough.
Introducing Test Versioning Controls
After surveying Dynamic Yield users for direct feedback, we found marketers wanted even more control: they wanted to decide whether a given change should trigger a new test version (delaying the test conclusion by two weeks at minimum). In many cases, a marketer might want to make updates or changes to a variation that won’t affect the results or a visitor’s experience – such as adding a tracking URL parameter to a link – and therefore won’t want a new test version to be triggered.
Understanding this, we’ve built test versioning controls that let marketers answer the question themselves: should this change trigger a new test version or not? Is this change meaningful or significant? Now, our users decide.
From fixing typos to making changes that affect all running variations equally, Dynamic Yield users can decide whether the data collected so far should still be used to determine a winner – or decide that a change is significant enough that its performance should be tracked separately in a new test version.
This recent update gives our users more power, and in turn, much more responsibility.