
What is the problem with running too many tests?
You have to find the balance between quantity and quality, especially when it comes to running tests to optimize your digital experience.
Let’s imagine a scenario together:
You want to optimize a page on your website. You think running more tests will help you get more winning results. After a few weeks, you analyze the results, but because everything was active at the same time, you can’t tell which test led to which outcome.
Not only have you wasted time, but you’ve also lost a portion of your budget on it.
Contrary to popular belief, sometimes less testing is better in an optimization program.
For each test, your goal is to validate your hypothesis with data. Data can get overwhelming, and running too many tests on the same page or audience at once can lead to poor data cleanliness.
“It’s important to look at behavior goals to assess why your metrics improved after a series of tests. So if you’re running too many similar tests at once, it will be difficult to pinpoint and assess exactly which test led to the positive result.”
— Natalie Thomas
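The attribution problem described in the quote above can be sketched with a little arithmetic. In this hypothetical example (all conversion rates are invented for illustration), two tests each lift a 10% baseline by two points on their own, but the changes interfere when shown together. Reading test A naively while test B is also live understates A’s true effect:

```python
# Hypothetical conversion rates for every combination of two concurrent
# tests on the same audience (illustrative numbers, not real data):
rates = {
    ("off", "off"): 0.10,  # neither test active (baseline)
    ("on",  "off"): 0.12,  # test A alone
    ("off", "on"):  0.12,  # test B alone
    ("on",  "on"):  0.11,  # both active: the changes clash
}

# With both tests running at once on a 50/50 split audience, a naive
# read of test A averages over test B's on/off states.
a_on  = (rates[("on", "off")]  + rates[("on", "on")])  / 2  # ~0.115
a_off = (rates[("off", "off")] + rates[("off", "on")]) / 2  # ~0.110

measured_lift_a  = a_on - a_off                                   # ~0.005
true_solo_lift_a = rates[("on", "off")] - rates[("off", "off")]   # ~0.020

print(f"measured lift for A: {measured_lift_a:.3f}")
print(f"true solo lift for A: {true_solo_lift_a:.3f}")
```

The naive read makes test A look like a weak win (half a point of lift) when, run on its own, it would have been worth two full points; with similar tests layered on the same page, you can no longer pinpoint which change earned the result.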
Considerations for How Many Tests to Run
Test ideas should come from research. Testing is typically 80% research and 20% experimentation, so the more you research customer pain points and form strong hypotheses to solve them, the more quality test ideas you’ll generate.
So, you might be asking: what counts as “too many” tests?
There’s no one answer to the ideal number of tests you should run. It depends on:
- Your optimization goals
- The complexity of your site
- Your optimization strategy
Guidelines for Testing
While I can’t tell you exactly how many tests to run, the following guidelines can help you determine if you are running too many or too few tests. As a general rule of thumb:
- If your win rate is low, you need to increase the quality and tone down the quantity.
- If your win rate is very high, you’re probably being too cautious and testing only safe, obvious changes, so your testing quality and learnings won’t be very meaningful.
A “good” win rate depends on what, where, and how you’re testing:
What: If you’re testing on a site with an enormous amount of data, you might feel comfortable failing regularly, because even a small win here and there has a large dollar value in the end. If you don’t have a lot of data, each test takes time, and you’ll be tempted to make every test count and swing for the fences.
Where: At volume, large companies can start optimizing even the smallest parts of the funnel, like the return customer dashboard and the reordering experience. Smaller organizations may want to focus on only the highest volume landing pages and the most popular products or services. Limited pages mean a limited number of tests, so your cadence or volume of tests won’t be as consistent.
How: For organizations with many variables contributing to the bottom line (e.g., cancellation rates, return rates), post-test analysis could take months. Long post-test analysis cycles may become a limiting factor in your testing velocity and win rate.
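The point that low-traffic sites need more time per test can be made concrete with the standard two-proportion sample-size approximation (a textbook formula; the conversion rates, z-scores, and traffic figures below are illustrative assumptions, not figures from this article):

```python
import math

def visitors_per_arm(p_base, p_target):
    """Rough sample size per arm for a two-proportion z-test.

    Textbook approximation using hard-coded z-scores for a two-sided
    5% significance level and 80% power.
    """
    z_alpha = 1.96  # two-sided alpha = 0.05
    z_beta = 0.84   # power = 0.80
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = (p_target - p_base) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Hypothetical scenario: detecting a lift from a 5% to a 6% conversion rate.
n = visitors_per_arm(0.05, 0.06)
for daily_visitors in (10_000, 1_000, 100):
    days = 2 * n / daily_visitors  # two arms split the traffic
    print(f"{daily_visitors:>6} visitors/day -> ~{days:.0f} days per test")
```

Under these assumptions, each arm needs roughly 8,000 visitors, so a page seeing 10,000 visitors a day can finish a test in a couple of days, while a page seeing 100 a day needs months for the same test. That traffic gap is why smaller organizations tend to concentrate testing on their highest-volume pages.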
If you’re wondering how to find the balance between quantity and quality, the short answer is: You should only run the number of tests you can research and manage.
Once you shift your focus from running random tests based on gut feeling to solving specific problems your customers face, your results will improve significantly.
About the Author
Caroline Appert
Caroline Appert is the Director of Marketing at The Good. She has proven success in crafting marketing strategies and executing revenue-boosting campaigns for companies in a diverse set of industries.