Wednesday, November 19, 2008

Sample Size Oddities

Usability testing sample size ...

"It might seem counterintuitive, but the larger the proportion of a population that holds a given opinion, the fewer people you need to interview when doing user research. Conversely, the smaller the minority of people who share an opinion, the more people you need to interview.

Mariana Da Silva has written an article about sample sizes in market research—or user research—titled “The More the Merrier.” In the article, Mariana made a comment that has caused some consternation—and for good reason.

“It all comes down to the size of the effect you intend to detect. Imagine you wanted to know whether people in London are taller than people in New York. If people in London and people in New York are actually pretty much the same height, you will need to measure a high number of citizens of both cities. If, on the other hand, people in London were particularly tall and people in New York were shorter than average, this will be obvious after measuring just a handful of people.”—Mariana Da Silva

Surely, popular thinking went, the larger the difference, the more people you’d need to ask to make sure it was real? It makes intuitive sense, but ignores the underlying principles of probability theory that govern such situations.
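
A quick simulation shows why. This is a minimal sketch, not code from the article: the mean heights, the 7 cm spread, and the ten-person samples are invented numbers chosen only to contrast a large effect with a small one.

```python
# Measure a handful of people in two cities and compare the observed mean
# difference with its sampling noise. The means, spread, and sample sizes
# are invented for illustration; only the overall pattern matters.
import math
import random
import statistics

def sample_heights(mean_cm: float, n: int) -> list[float]:
    """Draw n heights (in cm) from a normal distribution with a 7 cm spread."""
    return [random.gauss(mean_cm, 7) for _ in range(n)]

def signal_to_noise(a: list[float], b: list[float]) -> float:
    """Observed mean difference divided by its standard error (a rough t statistic)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return abs(statistics.mean(a) - statistics.mean(b)) / se

# Large effect: one city 15 cm taller on average, measured via 10 people each.
big = signal_to_noise(sample_heights(185, 10), sample_heights(170, 10))

# Small effect: a 1 cm difference, measured with the same sample size.
small = signal_to_noise(sample_heights(171, 10), sample_heights(170, 10))

print(f"15 cm difference, 10 people per city: signal/noise of about {big:.1f}")
print(f" 1 cm difference, 10 people per city: signal/noise of about {small:.1f}")
```

On a typical run, the 15 cm gap stands several standard errors above the noise while the 1 cm gap does not, which is exactly why only small effects force large samples.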

Now, before there’s a stampede for the exit, this article is not going to be heavy on mathematics, probability, statistics, or any other related esoterica. What we’re going to do is take a look at the underlying principles of probability theory—in general terms—and see how we can make use of them to understand issues such as the following:

* how many people to include in a usability test
* how to efficiently identify population norms and popular beliefs
* how to do quick-and-easy A/B test analysis

Then we’ll move on to take a look at a case study that shows why a large sample size doesn’t always guarantee accuracy in user research, when such situations can arise, and what we can do about it.

Understanding Optimal Usability Test Size

Across the usability landscape, conventional wisdom holds that you can do usability testing with just a handful of users, as characterized by the title of Jakob Nielsen’s Alertbox article from 2000, “Why You Only Need to Test with 5 Users.” Beyond that handful, each successive test session yields diminishing returns, because each additional user is likely to surface mostly issues that earlier users have already found.

Nielsen provides the reasoning that each user—on average and in isolation—identifies approximately 31% of all usability issues in the product under evaluation. So, the first test session uncovers 31% of all issues; the second, 31% of issues, with some overlap with session 1; and so on. After five test sessions, you’ve already recorded approximately 75-80% of all issues, so the value of the sixth, seventh, and subsequent test sessions gets lower and lower."    (Continued via UXmatters, Steve Baty)    [Usability Resources]
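
The diminishing returns in that excerpt come from a simple compounding calculation: if each user independently uncovers a proportion L of the issues, then n sessions uncover roughly 1 - (1 - L)^n of them. Below is a minimal sketch with Nielsen's L = 0.31; the exact totals depend on the independence assumption, which real test sessions only approximate.

```python
# Cumulative proportion of usability issues found after n test sessions,
# assuming each user independently finds each issue with probability L.
# The independence assumption is what gives the simple closed form.
L = 0.31  # proportion of issues a single user uncovers, per the excerpt

for n in range(1, 9):
    found = 1 - (1 - L) ** n
    print(f"{n} session(s): about {found:.0%} of issues uncovered")
```

Each additional session adds a shrinking slice of new issues, which is the diminishing-returns curve that the five-user guideline rests on.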
