When conducting user research, one of the most common questions product managers ask is: “How many participants do I need?”

The answer depends on whether you're conducting qualitative or quantitative research and what level of confidence or insight you're aiming for.

In this article, we'll break down the basic concepts of sampling, statistical significance, and saturation, helping you determine the right number of participants for your study.

Qualitative vs. Quantitative Research: Different Approaches to Sampling

The number of participants you need is largely dictated by the type of research you're conducting:
  • Quantitative research aims for generalizability, meaning its findings should be representative of a larger population. To achieve this, you need a large sample size and use probability sampling—where participants are randomly selected to avoid bias.
  • Qualitative research is more about deep insights from a specific user group. Rather than large numbers, qualitative research uses non-probability sampling, selecting participants based on specific characteristics that align with the research goals.

For example, if you’re trying to understand why people over 35 aren’t using your mobile app, you wouldn’t randomly recruit participants of all ages. Instead, you’d specifically recruit smartphone owners over 35 who know about the app but don’t use it, and whose behaviors and motivations you want to explore further.
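To make the contrast concrete, here is a minimal Python sketch of both approaches against a hypothetical user pool; the field names (age, owns_smartphone, aware_of_app, uses_app) are made-up placeholders for illustration. Probability sampling draws participants at random, while purposive (non-probability) sampling screens for exactly the characteristics the research goals call for.

```python
import random

# Hypothetical pool of users, e.g. exported from a CRM or analytics tool.
# Field names are illustrative placeholders only.
user_pool = [
    {"id": 1, "age": 42, "owns_smartphone": True, "aware_of_app": True, "uses_app": False},
    {"id": 2, "age": 29, "owns_smartphone": True, "aware_of_app": True, "uses_app": True},
    {"id": 3, "age": 51, "owns_smartphone": True, "aware_of_app": True, "uses_app": False},
    {"id": 4, "age": 38, "owns_smartphone": False, "aware_of_app": False, "uses_app": False},
    # ...in practice this list would contain hundreds or thousands of records
]

# Probability sampling (quantitative): every user has an equal chance of selection.
survey_sample = random.sample(user_pool, k=3)

# Purposive, non-probability sampling (qualitative): screen for the specific
# characteristics the research goals call for: over 35, owns a smartphone,
# knows about the app, but doesn't use it.
interview_candidates = [
    u for u in user_pool
    if u["age"] > 35 and u["owns_smartphone"] and u["aware_of_app"] and not u["uses_app"]
]

print("Random survey sample:", [u["id"] for u in survey_sample])
print("Screened interview candidates:", [u["id"] for u in interview_candidates])
```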


Understanding Validity: Statistical Significance vs. Saturation

Research validity is about ensuring your findings truly reflect user behavior. But how you achieve validity differs between quantitative and qualitative research.

For Quantitative Research: Statistical Significance

Quantitative research relies on statistical significance, a measure of how confident you can be that an observed result reflects a real effect rather than chance. Achieving statistical significance requires a large sample size; generally, the larger the sample, the more reliable the data. A typical workflow looks like this:
  1. State your hypotheses: a null hypothesis (e.g., “the new design performs no better than the old one”) and an alternative hypothesis (“the new design performs better”).
  2. Choose a significance level. A common choice is 0.05 (5%), which means accepting a 5% risk of declaring an effect when the difference is really just random variation.
  3. Analyze your data with an appropriate test, such as a t-test for comparing two group means (for example, an A/B test comparing the old design to the new design).
  4. Calculate your p-value: the probability of seeing a difference at least as large as the one you observed if there were no true effect. If your A/B test returns p = 0.03, that is below the 0.05 threshold, so you can conclude the new design significantly improves performance.
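To make steps 3 and 4 concrete, here is a minimal Python sketch using SciPy’s independent-samples t-test on synthetic data. Everything here is illustrative: in a real A/B test the task-time measurements would come from your own sessions or analytics, and the group sizes, means, and metric are placeholder assumptions rather than recommendations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical A/B test data: task completion times (seconds) for the old and
# new design. In a real study these would come from test sessions or analytics,
# not a random number generator.
old_design = rng.normal(loc=48, scale=10, size=120)  # control group
new_design = rng.normal(loc=44, scale=10, size=120)  # variant group

alpha = 0.05  # chosen significance level

# Welch's t-test compares the two group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(new_design, old_design, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Statistically significant: unlikely to be random variation alone.")
else:
    print("Not significant: the observed difference could plausibly be chance.")
```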


If you are lucky enough to have data scientists to work with, then you, the designer, and the lead engineer (at the very least) should partner to work through these decisions and align on them together.

For Qualitative Research: Saturation

Unlike quantitative research, qualitative studies don’t aim for statistical significance. Instead, they use the concept of saturation—the point at which additional participants stop providing new insights. Once you reach saturation, you can be confident that you’ve gathered enough data for meaningful conclusions.
The number of participants needed to reach saturation depends on the research method:
  • Usability Testing: Studies suggest that 5 participants per round uncover about 85% of usability issues. Testing with 15 users in a single round can reveal nearly all issues, but a better approach is to run three rounds with 5 users each, iterating and improving after each round (the sketch after this list shows where these percentages come from).
  • User Interviews: Because interviews are open-ended and explore experiences and motivations, saturation often occurs after 12-20 participants, depending on the number of distinct user personas and the complexity of the research questions. Conducting research in multiple rounds (e.g., three rounds of 5-7 participants) allows you to refine questions and focus on gaps.
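The 85% figure is usually traced to the problem-discovery model popularized by Nielsen and Landauer, which assumes each participant independently uncovers a fixed share of the issues (often quoted as roughly 31%). Here is a quick Python sketch of that model showing how the percentages above fall out; treat L = 0.31 as an illustrative average rather than a property of your product.

```python
# Problem-discovery model commonly attributed to Nielsen and Landauer:
# proportion of issues found by n participants = 1 - (1 - L)^n,
# where L is the chance that a single participant uncovers a given issue.
L = 0.31  # often-cited average; your product's actual value may differ

for n in (5, 10, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:>2} participants -> ~{found:.0%} of issues found")
```

Running it shows 5 participants uncovering roughly 84-85% of issues and 15 approaching 100%, which is why iterating in small rounds is usually the better trade-off.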

Special Considerations: When You Need More Participants

While these general guidelines work in most cases, some situations require a larger sample:
  1. Diverse User Groups: If your product serves very different user segments, you may need 5+ participants per group to ensure each perspective is captured.
  2. Safety-Critical Products: For medical devices, cars, aviation systems, or anything where usability failures could result in harm, testing should go beyond 5 users per round.
  3. Highly Variable Responses: If early research suggests a high degree of variability in user behavior or opinions, increasing the sample size may help in detecting meaningful patterns.

A Smarter Approach: Iterative Research

Instead of front-loading research with a large sample size, a smarter approach is to conduct research in iterative rounds.
For example:
  • Start with 5 usability test participants and analyze results.
  • If new issues continue to emerge, test with 5 more.
  • If you reach saturation (no new insights), shift focus to improving the product based on findings; a minimal sketch of this stopping rule follows this list.
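As referenced in the last bullet, here is a minimal Python sketch of that stopping rule. The issue labels are hypothetical placeholders for whatever you would log after each round of five participants; the point is simply to stop recruiting once a round adds nothing new.

```python
# Hypothetical findings logged after each round of 5 usability test participants.
rounds = [
    {"checkout-confusing", "label-unclear", "search-hidden"},  # round 1
    {"label-unclear", "error-message-vague"},                  # round 2
    {"checkout-confusing", "search-hidden"},                   # round 3
]

seen = set()
for i, findings in enumerate(rounds, start=1):
    new = findings - seen        # issues not observed in any earlier round
    seen |= findings
    print(f"Round {i}: {len(new)} new issue(s): {sorted(new) if new else 'none'}")
    if not new:
        print("Saturation reached; shift focus to fixing what you've found.")
        break
```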

For interviews, begin with 6-8 participants and evaluate whether new themes are emerging. If insights are still evolving, continue with additional rounds.

Conclusion: Right-Size Your Research for Impact

The right number of participants depends on your research goals:
  • Quantitative research needs large numbers (often 100+) to achieve statistical significance.
  • Qualitative research relies on smaller, focused samples (often 5-20) to reach saturation.
  • Usability tests typically use 5 participants per round, while interviews require 12-20 depending on complexity.
User research isn’t about hitting a magic number—it’s about ensuring you gather enough reliable insights to make better product decisions. Keep your research iterative, adjust as you go, and let your findings guide the next step.
What’s been your experience with determining the right sample size for research? Share your thoughts in the comments!
