Battling the Devils in Your Analytics & Optimization Data
An interview with analytics and optimization expert Nikolay Gradinarov on the challenge of scaling your analytics and optimization across the entire user journey.
With all due respect to Mark Twain, there are really four kinds of lies: lies, damned lies, statistics, and bad marketing analytics data. Nikolay Gradinarov is on an eternal quest to defeat the last of these.
Nikolay is an optimization aficionado who has implemented analytics solutions at some of the world’s largest companies, including Microsoft, Aetna, Merck, and Chase. He is currently the principal data analyst for Monster.com and a co-founder of QA2L – a tag auditing solution that automates the QA of tags across key user journeys.
We chatted with Nikolay about the future of analytics, machine-learning-driven optimization, and when data is no match for the dude with the biggest paycheck.
Q: What does an “ideal understanding” of the user journey look like from an analytics perspective?
A: I have always liked the concept of measuring the visitor journey through a set of well-defined, meaningful interactions that users accomplish on a digital property. Think about user actions such as signing up for an account, purchasing a product, or reading a blog post.
These actions, and the attributes associated with them, are your KPIs: they describe what was most meaningful about the user experience on the digital property. Defining these KPIs should be the first step in an analytics implementation.
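To make this concrete, here is a minimal sketch of what defining a small KPI catalog up front might look like. The event names and attributes are hypothetical examples, not a real schema from any of the implementations Nikolay describes:

```python
# A minimal sketch of defining KPI events before implementation.
# Event names and attributes are hypothetical examples.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class KPIEvent:
    """A meaningful user interaction, plus the attributes that describe it."""
    name: str
    attributes: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# The handful of interactions that actually matter for this property:
KPI_CATALOG = {
    "account_signup":   ["signup_source", "plan_type"],
    "product_purchase": ["order_value", "currency", "item_count"],
    "blog_post_read":   ["post_id", "scroll_depth"],
}

def track(name: str, **attributes) -> KPIEvent:
    """Validate an event against the catalog before sending it downstream."""
    if name not in KPI_CATALOG:
        raise ValueError(f"Unknown KPI event: {name}")
    missing = [a for a in KPI_CATALOG[name] if a not in attributes]
    if missing:
        raise ValueError(f"{name} is missing attributes: {missing}")
    return KPIEvent(name=name, attributes=attributes)

# Usage: an event with the wrong attributes is rejected at the source.
event = track("product_purchase", order_value=49.99, currency="USD", item_count=2)
```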
Q: What is the biggest challenge in syncing myriad data sources to reach this ideal understanding of the entire user journey?
A: The challenge can be most succinctly summarized in two words: data quality. Every attempt to fully understand and optimize user journeys relies on having confidence in the data. A 2015 Experian study found that 92% of organizations have experienced problems as a result of inaccurate data. A 2017 Experian study suggested that fewer than half (44%) of organizations trust their data to make important business decisions.
Even when dealing with a single data source, there are inherent issues with the fragility of data collection. Specialized data collection cookies, vendor-specific syntax for collection variables, privacy rules – these are just some examples of how complex data collection can be. You can then add the complexities associated with storing, processing, analyzing, and actually extracting insights out of the data. Finally, mix in other factors such as the sheer volume of data, the breakneck speed of change, and the lack of data governance. Is your head spinning yet?
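A simplified illustration of the kind of automated check a tag audit performs on a single data source (this is not QA2L's actual implementation, just the general idea): capture the analytics beacon a page fires and compare it against the variables you expect. The variable names below are hypothetical:

```python
# A simplified sketch of automated tag QA: inspect the analytics request
# a page fires and check it against the expected collection variables.
from urllib.parse import urlparse, parse_qs

# Hypothetical expectations for one page's analytics hit.
# None means "must be present, any value is acceptable".
EXPECTED = {
    "pageName": "product-detail",
    "currencyCode": None,
    "events": None,
}

def audit_hit(beacon_url: str) -> list[str]:
    """Return a list of data-quality failures for one collected hit."""
    params = {k: v[0] for k, v in parse_qs(urlparse(beacon_url).query).items()}
    failures = []
    for var, expected_value in EXPECTED.items():
        if var not in params:
            failures.append(f"missing variable: {var}")
        elif expected_value is not None and params[var] != expected_value:
            failures.append(f"{var}={params[var]!r}, expected {expected_value!r}")
    return failures

# Usage: a hit that drops currencyCode fails the audit.
print(audit_hit("https://metrics.example.com/b/ss?pageName=product-detail&events=prodView"))
```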
Dangers lurk around every corner: a small failure in any of the above steps will jeopardize the quality of the data and the outcome of every decision made on it. And we are still in the realm of a single data source! When you start layering in additional, disparate data sources, the complexity compounds immediately.
Q: Let’s focus on optimization. What is the most interesting piece of data you’ve uncovered in your career from an optimization campaign?
A: The most interesting experience I have had in relation to testing actually has very little to do with the data itself. Around 2007, a company I consulted for experimented with several different versions of its home page. The testing produced a clear winner: a clean design that featured several calls to action and lots of white space. This winning experience was placed into production, but only for a few days, at which point an alternate design was implemented. The alternate design contained significantly more content, additional calls to action, and a different background color.
The reason for the change: after reviewing the data, one of the company's executives overturned the decision and insisted on the alternate design, despite data proving that the cleaner design performed better.
This was my first real encounter with the HiPPO (Highest Paid Person’s Opinion) phenomenon.
Q: What is one piece of conventional wisdom about optimization and A/B testing that you feel is incorrect?
A: Oftentimes, the success stories associated with A/B testing, optimization, and analytics generate false hype. This leads people to believe that optimization is a magic wand you can wave to get immediate results.
In my experience, that's rarely the case. An A/B testing capability can be a great asset to your digital business, but it needs to be coupled with proper planning, identifying opportunities, creating clear hypotheses to test, and so on. In short, the business's role in solving its various optimization challenges does not evaporate just because you buy a sophisticated testing solution.
Q: So will there always be a place at the optimization table for humans?
A: Yes. Manual A/B testing is a poor rival to automated approaches when it comes to deciding which experience to serve based on the available user data. But that is just one step. You still very much need people to ask the smart questions and formulate the testing hypotheses, as well as people to create the new experiences you'd like to test.
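One common example of the automated serving Nikolay alludes to is a multi-armed bandit. The sketch below uses a basic epsilon-greedy strategy and hypothetical experience names; it is an illustration of the general technique, not any specific vendor's algorithm. Note that the humans still supply the experiences themselves:

```python
# A sketch of automated experience serving: an epsilon-greedy bandit
# that shifts traffic toward the better-converting experience over time.
import random

class EpsilonGreedy:
    def __init__(self, experiences, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {e: {"shows": 0, "conversions": 0} for e in experiences}

    def choose(self) -> str:
        """Mostly exploit the best-converting experience; sometimes explore."""
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._rate)

    def record(self, experience: str, converted: bool) -> None:
        """Feed each outcome back so future traffic allocation improves."""
        self.stats[experience]["shows"] += 1
        self.stats[experience]["conversions"] += int(converted)

    def _rate(self, e: str) -> float:
        s = self.stats[e]
        return s["conversions"] / s["shows"] if s["shows"] else 0.0

# Usage: serve a variant, then record whether the visitor converted.
bandit = EpsilonGreedy(["clean_design", "content_heavy"])
variant = bandit.choose()
bandit.record(variant, converted=True)
```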
Q: What is holding back the evolution of optimization and analytics? Is it technology, mindset, lack of skilled optimization folks, something else?
A: There is an inherent level of complexity that comes with optimization and A/B testing. Even a single A/B test requires generating two separate experiences, managing the segments of users you are going to assign those experiences to, and managing the logic that does the assigning (a sketch of that last piece follows below). This remains a significant roadblock for companies with limited technical resources.
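As a rough illustration of the assignment logic, here is a minimal, hash-based bucketing sketch. Hashing the user together with the test name keeps assignment deterministic, so a returning user always sees the same experience; the test name and 50/50 split are hypothetical:

```python
# A sketch of deterministic A/B assignment: hash user + test name to a
# stable bucket so a returning visitor always gets the same experience.
import hashlib

def assign_variant(user_id: str, test_name: str, split: float = 0.5) -> str:
    """Map the user to a stable value in [0, 1]; below `split` gets the variant."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "variant_b" if bucket < split else "control"

# Usage: the same user always lands in the same bucket for a given test.
print(assign_variant("user-123", "homepage_redesign"))
```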
Furthermore, to run an A/B test in some industries, you may need approvals for each version you are testing from various company departments: Finance, Legal, Design, QA, and so on. Having that many stakeholders can prove tricky and slows down the process.
Even in companies that have embraced optimization, you still need to do your upfront planning, address the business goals, and clearly identify the testing opportunities.