Monday, October 01, 2007

Surveys - What is an Acceptable Response Rate?

The number of subjects you need for a survey ...

"It's been a while since I ranted on about response rates on surveys. In that article, I took the view that "2% is a terrible response rate" and had a few reasons why and tips for doing better. Recently, I've had a couple of challenging questions on the topic. So here goes at trying to answer them.

WHAT IS AN ACCEPTABLE RESPONSE RATE?
First of all, if a 2% response rate is no good: what should you aim for?

The answer comes in two parts: first, the question of non-response bias; second, an actual percentage.

Non-response bias happens when the people who do not respond display characteristics that are different from those of the people who do respond.

If you've got a properly random sample, and the people who respond are just as random within your sample, then you still have a random sample no matter how small the response. Indeed, you might even accept a 2% response rate.

But in real life, it doesn't work like that. With a very low response rate, you're extremely likely to find that the people who do respond are unusual in some way: a massive non-response bias.

Whatever your response rate, you need to devote some thought and investigation to the reasons why some people didn't respond. Most of all, you have to have a clear understanding of who you asked and who didn't respond - preferably contacting some of them to ask why. And the lower the response rate, the larger your task is going to be in finding out why people didn't respond.

So that brings us to the actual percentage response rate you should be looking for. Personally, I aim for 70% to 80%. You may take a different view. If it's a difficult-to-reach population and an onerous task (for example, a survey that requires more than one round of questionnaires with some other task or intervention in between), then you might accept 40% to 50%.

You need to balance these two factors:
- the effort required to get people to respond
- the effort required to find out why they didn't respond (plus the risk that you'll find some horrible non-response bias that destroys your whole survey)."    (Continued via Usability News, Caroline's Corner)    [Usability Resources]
