SaaS teams lean on A/B testing like a crutch for marketing, messaging, and product growth decisions.
And it makes sense.
If you think about it, A/B testing something relieves you of so much burden. Not sure which headline is gonna resonate? Test it! Teammates have an (ahem, questionable) idea for how to improve conversions? Test it!
Here’s the problem with the “test everything” approach: The vast majority of A/B testing is devoid of customer understanding and designed to make incremental improvements, typically without impacting revenue in any meaningful way.
If you can “cut the cord” from the purely data-driven statistical model of trying to optimize a page or user journey, you can build bigger, better, more powerful A/B tests that hit customers directly in the feels (and your ARR).
On this episode of the Forget the Funnel Podcast, Marc Thomas from Podia shares why testing isn’t the best practice it’s cracked up to be. Georgiana and Claire share when running tests is valuable and why knowing how your customers think, feel, and behave helps you run more productive and profitable tests.
Discussed:
Why “let’s just test it” isn’t the helpful universal best practice everyone thinks it is and where testing without customer insights can go wrong.
Why you should try getting closer to your “clonable” customer before you run your next A/B test, and when testing can be most valuable.
Why you shouldn’t always believe what the test tells you, what you need to know to build more productive tests, and the measures of success you should be paying attention to instead.
Timestamps
01:04 - Marc Thomas discusses A/B testing as a flawed “best practice” in SaaS. He explains why it isn’t always the answer and why getting close to your best customers is a better alternative.
08:18 - Georgiana sets the stage by explaining A/B testing (AKA split testing), how it is conducted, and what it usually tests.
10:28 - Georgiana discusses scenarios when teams might use A/B testing, like validating a hypothesis before taking a big swing. She also explores when a smaller conversion rate can actually be a good thing.
15:07 - Claire explains why testing for testing’s sake will not produce the desired results. Instead, effective testing starts with understanding your customers to form stronger hypotheses.
18:11 - Georgiana tells a story of how testing went south for one company when the team tried to find a suitable pricing model. A/B testing steered them wrong until they zeroed in on their best-fit, higher-value customers and what value meant to them.
21:55 - Claire explores excellent and bad testing examples to show how it can be a valuable use of time when it starts from in-depth customer knowledge.
23:29 - Georgiana talks about the value of identifying Jobs-to-be-Done for one company so it could build the customer experience around its high-value customers and optimize for the right things for the right customer.
27:25 - Claire points out how major organizational change can come from a handful of conversations and when SaaS companies should run tests.
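For context on the mechanics discussed above: a classic split test compares conversion rates between two variants and asks whether the difference is statistically meaningful. Here’s a minimal sketch of that math using a standard two-proportion z-test (the function name and example numbers are ours, purely for illustration). Note how a 20% relative lift on a 2% baseline still comes out borderline even at 10,000 visitors per variant, which is part of why incremental tests so often go nowhere:

```python
from math import sqrt, erf

def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: 200/10,000 conversions on A vs. 240/10,000 on B
z, p = ab_test_z(200, 10_000, 240, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Even this sizable-sounding lift lands just above the conventional 0.05 significance cutoff, so the test is inconclusive without more traffic. That’s the statistical trap the episode digs into: small, insight-free tests burn time and traffic for ambiguous answers.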
Useful links