A/B Testing + Product Management

Product Managers develop hypotheses on an almost daily basis. It’s a critical part of the job.

They develop hypotheses about new features, changes to UX, even the specific copy that best conveys what a customer should do next.

Many times, however, especially at smaller companies or startups, these changes are simply “rolled out” or deployed to 100% of visitors/users/customers without a second thought.

However, this can result in several significant problems:

  • You can never definitively measure the impact of the change. 

Because many changes often happen at the same time, including changes outside of your product features and functionality (seasonality, user mix, etc.), you cannot determine the impact of one specific change without these other variables confounding your analysis.

  • You may actually cause issues or changes in parts of the product that you’re not paying attention to or didn’t think would be impacted.
  • You may make the experience worse for users.
  • You may negatively impact your KPIs and your business.

Why Do We Test?

We don’t have (all) the answers. 

I can’t tell you how many times the majority of my company (myself often included) was sure a particular test would go one way, only to be completely surprised when the result was the exact opposite! Usually for a very intuitive reason that we missed in our initial discussions.

Even if you think you understand your user personas very well, it’s incredibly hard to predict exactly how they’ll react to a new design, a new pricing scheme, or a new onboarding UX.

We A/B test because it allows us to “control” for all of the other changes that are happening and measure the impact of only one specific change (or group of changes) for our testing group.
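The mechanics behind that “control” are simple: each user is randomly but consistently assigned to a cohort, so the only systematic difference between cohorts is the change you’re testing. Here is a minimal sketch of deterministic cohort assignment using hashing (the function name and variant labels are illustrative, not from any particular testing tool):

```python
import hashlib

def assign_cohort(user_id: str, experiment: str,
                  variants=("control", "treatment")):
    """Deterministically bucket a user into a variant.

    Hashing user_id together with the experiment name means each user
    always sees the same variant, and different experiments get
    independent, roughly even splits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same cohort for a given experiment:
assert assign_cohort("user-42", "pricing_grid") == \
       assign_cohort("user-42", "pricing_grid")
```

Because assignment is a pure function of the user and experiment IDs, you don’t need to store the cohort anywhere, and seasonality, traffic mix, and every other confound hit both cohorts equally.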

You also can learn much more from a test than just “win or lose.”

For example, with a simple pricing grid test, you can understand price sensitivity, how merchandising different features impacts plan selection, which design treatments work better on mobile vs. desktop, and much, much more.

Plus you can learn what makes you more money! 

What Makes a Good Test (or a Bad One)?

Good

* Small, focused
* Simple to explain
* Paired with KPIs that you expect the test will move
* Statistically significant results in weeks
* BIG win or BIG loss

Bad

* Large, bloated
* Complex, lots of moving parts
* Not tracked or doesn’t affect any important KPIs
* Significant results in months or never
* No impact, more likely to have a neutral result than a win or a loss

Note: This is not an exhaustive list of what makes a good or a bad test, but includes many of the more important items to keep in mind as you build your A/B tests.
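Whether you can get statistically significant results “in weeks” is mostly a function of your baseline conversion rate, the effect size you hope to detect, and your traffic. A rough back-of-the-envelope check, using the standard normal-approximation formula for a two-proportion test (the z-values below assume a two-sided 5% significance level and 80% power):

```python
import math

def sample_size_per_arm(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Rough per-cohort sample size for a two-proportion A/B test.

    p_base: baseline conversion rate (e.g. 0.05 for 5%)
    lift:   absolute change you want to be able to detect
    z_alpha=1.96 -> two-sided alpha of 0.05; z_beta=0.84 -> 80% power
    """
    p_new = p_base + lift
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

# Detecting a 5% -> 6% conversion lift takes roughly 8,000 users
# per cohort; halving the detectable lift roughly quadruples that.
n = sample_size_per_arm(0.05, 0.01)
```

Dividing the required sample size by your weekly eligible traffic tells you, before you launch, whether a test belongs in the “weeks” column or the “months or never” column.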

How Can You Create a Testing Culture?

Are you starting from scratch with testing at your company or startup? Here are a few tips to help you successfully launch your testing culture and get the organization excited about experimentation! 

1. Start by identifying a few very high-impact tests

Very clearly define the hypotheses of these tests, how they will be tested, what metrics you expect to impact, and how you’ll determine which testing cohort won.
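Part of “how you’ll determine which testing cohort won” should be decided up front: a significance threshold and the statistical test you’ll apply. For a conversion metric, a simple two-proportion z-test is the usual choice. A minimal sketch (the function and sample numbers are illustrative):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Did cohort B convert at a different rate than cohort A?

    conv_*: conversion counts; n_*: cohort sizes.
    Returns (z, two_sided_p_value) using the pooled-proportion
    standard error and the normal CDF via math.erf.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 5.0% vs 5.8% conversion on 10,000 users per cohort
z, p = two_proportion_z_test(conv_a=500, n_a=10_000,
                             conv_b=580, n_b=10_000)
winner = "treatment" if (p < 0.05 and z > 0) else "no call yet"
```

Agreeing on the threshold (e.g. p < 0.05) before the test starts keeps you honest and prevents peeking-driven “wins.”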

2. Determine what testing software to use → Find a company with a free trial

I’ve used both Optimizely and Split and would recommend either for helping you manage the cohort assignment, measurement, and other aspects of A/B test management.

3. Present your plan for “testing” testing to an executive sponsor and get buy-in 

Do any of your proposed high value tests require resources outside of you? 

Can those resources be reduced or eliminated by designing the test differently without compromising the testing learnings? 

Even if you are the only resource needed for these initial tests, make sure you have an executive sponsor who sees the opportunity as clearly as you do! 

A/B testing has a tendency to shake things up a bit, and having an executive partner who can calm down other parts of the business when your tests impact their world is very valuable.

4. Create a Slack/Teams channel and invite the initial testing team

Overcommunicate! 

Leverage this channel to share and discuss the planned tests, how they will be measured, when and where they will run, and invite feedback. 

Beyond your executive sponsor, try to engage at least one person from each major functional area (Marketing, Sales, Engineering, etc.) to join this channel and represent their group.

5. Make results as public as possible (win or lose!) 

Results should be highly visible and paired with an analysis (by you!) of the “why” behind the “what.” Results are also a great chance to share additional follow-up hypotheses and gather additional experiment ideas from others in the business. 

6. Host a testing hackathon! 

Once you have a foundation, testing software, and an experimentation team, hackathons can be a great way to “launch” A/B testing to the rest of the organization and get people excited! 

Need a few tips? Check out my post “10 Tips for Hosting a Successful A/B Testing Hackathon” on blog.optimizely.com.

Should We Test Everything?

Some large tech companies, Facebook for example, test almost everything via smaller exposures before deploying more widely.

However, at your company, especially if you are in the early days of testing, testing everything is not a great idea.

Channel your testing energy into high impact tests that match the characteristics of a “good” test that I shared before. 

In addition, you almost certainly don’t have the volume of traffic, users, etc. to test everything. Facebook is clearly in a unique place in this regard with over 2B users to experiment with.

What Have You Learned About Experimentation as a PM? How Have You Built a Testing Culture?

Let me know in the comments or on Twitter at @amitch5903!

