Update: The folks at Optimizely let us know that they’ve launched a new statistical approach to address the concerns raised in this post.
It had been six months since we started concerted A/B testing efforts at SumAll, and we had come to an uncomfortable conclusion: most of our winning results were not translating into improved user acquisition. If anything, we were going sideways, and given that one of my chief responsibilities was to move the user acquisition needle, this was decidedly not good. Not good for me. Not good for my career. And not good for SumAll.
Having worked for an A/B testing and website personalization company in the past (disclosure: Monetate is an Optimizely competitor), I’ve always been a believer in the merits of A/B testing, and one of the first things I did when I started working with SumAll was to get a testing program in place. Things were going well (or so it seemed), and we were simply astonished by the performance of the tests we had been running.
Optimizely had been predicting huge gains, known as “lift,” from our efforts. We were seeing 60% lift here, 15% lift there, and, surprisingly, almost no losers, only winners. These results made great fodder for weekly e-mails, and did a lot to get the team completely bought into the A/B testing philosophy.
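For readers unfamiliar with the term, here’s a minimal sketch of how “lift” is typically computed in an A/B test: the relative improvement of the variation’s conversion rate over the control’s. The visitor and conversion counts below are hypothetical, not figures from our actual tests.

```python
# Hypothetical numbers for illustration only; not SumAll's real test data.

def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who converted."""
    return conversions / visitors

def lift(control_rate: float, variation_rate: float) -> float:
    """Relative improvement of the variation over the control."""
    return (variation_rate - control_rate) / control_rate

control   = conversion_rate(conversions=100, visitors=10_000)  # 1.00%
variation = conversion_rate(conversions=116, visitors=10_000)  # 1.16%

print(f"Observed lift: {lift(control, variation):.0%}")  # Observed lift: 16%
```

A 16% observed lift sounds impressive, but as the rest of this post argues, an observed lift is just a point estimate; whether it reflects a real effect depends on the statistics behind it.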