How to Conduct High Impact User Testing: Part 3 – Analyzing the Results

User testing is the best way to understand how your target users are interacting with your website or product. This Insight focuses on how to analyze and interpret the data you collect in your user testing program.

This is the third and final installment of our three-part series, How to Conduct High Impact User Testing. To start from the beginning, read Part 1: Thoughtful Panel Selection.

Analyzing and interpreting the results of your user testing sessions can be a daunting task. A standard test with five to ten participants can easily produce sixty or more UX issues. It’s important to know how to filter through these issues so you don’t waste time developing solutions that fail to address the broader usability problems at hand.

In parts one and two of this series, we covered how to select appropriate user testers and what questions to ask them. In this final part, we’ll focus on how to turn the data collected in your user testing sessions into quick wins, A/B test ideas, and bigger strategies.

How to properly analyze and interpret your testing results

After you’ve invested the time and resources to select a great panel of testers, formulate tasks and questions, and collect the results of the testing sessions, you’re ready to translate those results into usable findings.

Through our experience (listening to dozens of user recordings every day!), we’ve identified a number of practical tips you can use to get the most value out of your user research.

Return to your original objectives

As you start to analyze your user testing sessions, return to the objectives you set when you first launched your user tests. What were the goals that you and your team established for the test? Framing your analysis around those objectives will help you generate insights worth sharing with your team.

Don’t take user tester suggestions literally

User testers often suggest solutions to the micro-frustrations they encounter, but users are notoriously bad at thinking outside the box, and we caution you against letting these off-the-cuff “solutions” directly guide your design process. In our experience we’ve heard some pretty wild suggestions from user testers, and oftentimes they don’t address the root of the issue.

As the old (most likely made up, but regularly cited) adage goes, Henry Ford said, “If I had asked people what they wanted, they would have said faster horses.” While user frustrations are valid, their suggestions are typically laser-focused on the problem directly in front of them and lack the data and context that researchers are armed with. 

So rather than getting caught up in tedious UX fixes, wait until you’ve collected enough feedback to roll these issues up into a hypothesis that addresses the bigger opportunity.

Capture quotes that reveal misunderstandings, not just opinions

Direct user quotes are a great way to understand what page elements miss the mark, but make sure to capture the misunderstandings as well. Users will frequently say they’re likely to do something but end up getting lost along the way. 

Quotes often contain subjective feedback, but capturing the language users employ is valuable because it helps you understand sentiment, both positive and negative. Even when what a user says doesn’t match up with their actions, it shows you what they’re noticing. This information is most easily found in direct quotes and user feedback, because it’s nearly impossible to tell what users expect to find on your site through heatmaps or analytics alone.

Don’t trust the user. Be skeptical and read between the lines

Listening to what your user testers say is important, but it’s even more important to pay attention to what they do. Users will often rate their experience on a site favorably despite struggling more than they would on a better-optimized site. Even when users spend longer than average looking for pertinent information, they often rate ease of navigation highly. So take users’ opinions with a grain of salt.

Users are often willing to forgive and forget a subpar site experience because they believe they have failed the site, rather than that the site has failed them. Even if a user isn’t providing verbal feedback, pay attention to how long they spend on each task and to what they fail to notice.

Prioritize the identified issues

It’s important that you prioritize the issues you’ve identified throughout the user testing sessions to help you determine which problems are more critical than others. It can be easy to underestimate the volume of problems even a handful of user testers may uncover on your site, so once you have a complete list, begin organizing them based on impact and significance. You don’t want to find yourself caught up with tedious minor fixes to your UI when there are larger glaring issues that should be addressed first. 

Save yourself time and create a spreadsheet of all the problems you’ve identified. Organize them by how frequently each issue occurred and by the impact it had on the overall user experience. Then use the spreadsheet as a checklist as you determine which usability issues to tackle first on your site.
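If you’d rather maintain this log in a script than in a spreadsheet, here is a minimal Python sketch of one way to rank issues. The example issues, the 1–5 impact scale, and the frequency-times-impact score are our own illustrative assumptions, not a prescribed formula; adjust them to fit your own tests.

    # Hypothetical issue log from one round of user testing.
    # "frequency" = how many participants hit the issue;
    # "impact" = estimated severity on a 1-5 scale (both are assumptions).
    issues = [
        {"name": "Checkout button hidden below the fold", "frequency": 7, "impact": 5},
        {"name": "Ambiguous label on the main nav link", "frequency": 4, "impact": 3},
        {"name": "Typo on the FAQ page", "frequency": 2, "impact": 1},
    ]

    # Score each issue as frequency * impact so problems that are both
    # common and severe rise to the top of the checklist.
    for issue in sorted(issues, key=lambda i: i["frequency"] * i["impact"], reverse=True):
        score = issue["frequency"] * issue["impact"]
        print(f"{score:>3}  {issue['name']}")

A single composite score keeps the checklist simple; if your team weighs severity more heavily than frequency (or vice versa), you can weight the two factors accordingly.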

It’s time to start testing!

Analyzing and interpreting the results of your testing sessions is easily the most time-consuming but most valuable aspect of the user research process. Once you’ve invested the time and resources into a series of user testing sessions, it’s crucial that you draw as much insight and value as you can from the results. 

We hope the tips we provided can help you run a more successful and efficient user testing program in the future. This concludes our three-part series on user testing. If you missed parts one or two, make sure you read those as well and let us know your thoughts!

If you’re looking for the best usability testing platform to use for your next round of user research, make sure to check our comprehensive list of every usability testing tool imaginable.

About the authors:

Natalie Thomas is the Director of CRO & UX Strategy at The Good. She works alongside ecommerce and lead generation brands every day to produce sustainable, long-term growth strategies.

Maggie Paveza is a CRO Strategist at The Good. She has over five years of experience in UX research and Human-Computer Interaction, and serves as the team’s expert in user research.
