Tuesday, May 20, 2008

A Counter-Intuitive Approach to Evaluating Design Alternatives

Comparing design alternatives ...

"Last week, a client called looking for advice on their first usability study. The client is a large consumer information site with millions of visitors each month. (A similar site might be a large financial information site, with details of individual stocks, investment strategies, and "celebrity" investor/analysts that people like to follow.)

They are about to redesign their home page and navigation. They have three home page design alternatives and five navigation alternatives, created by an outside firm that didn't do any evaluation of the designs.

To help figure out which design to pick, the team has (finally!) received approval for their first usability testing study. While their site has been around for years, they've never watched visitors use it before now.

Up until now, management has perceived usability testing as a nice-to-have luxury they couldn't afford, primarily because of the time it takes. The team called us because they are very concerned that everyone see their very first test as an overwhelming success.

They fought long and hard to get this project approved. If it's a success, it will be easier to approve future studies. If anyone thinks that it didn't help pick the right design, it will be a huge political challenge to convince management to conduct a second project.
The Challenges of Comparing Design Alternatives

When we started our conversation, the first thing the team members asked was how to compare the design alternatives. Ideally, they thought, we'd have each participant try each of the home page designs and each of the navigation designs, then somehow render a decision on which one is "best." After two days of testing, we'd tally up the scores and declare a winner.

Comparing designs is tricky under the best of circumstances. First, you have to assume the alternatives are truly different from each other. If they aren't, all the alternatives may share a core assumption that could render each one a poor choice.

Assuming the team has done a good job creating the alternatives, the next problem is evaluating them with users. To do this, you'd need to run each alternative through a series of realistic tasks.

Choosing tasks is difficult in any study, but it's more complicated when the team has never really studied their users in the past. They've collected some data from market research and site analytics, but, as we talked to them, it was clear they weren't confident they understood why people come to the site.

Even if the team could come up with realistic tasks, there's still one more big challenge: evaluating all the alternatives. Since they wanted to test new designs, the best approach is to test them against a benchmark.

A minimum study design would have each alternative (along with the current design) go first for some participants, to correct for "learning effects." (Learning effects happen in studies where the tasks and design alternatives are similar. How do you know whether the second design succeeds because it's better or because the user learned something from the first design?)

For ratings, we wouldn't recommend fewer than four people evaluating each alternative in the first slot. That means, for six alternatives, we're talking about a minimum of 24 users.

This presented a problem -- there's no effective way to test all these alternatives with 24 users in their allotted two days, within their budget. We needed to think creatively."    (Continued via UIE, Jared Spool)    [Usability Resources]
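To make the participant arithmetic in the excerpt concrete, here is a minimal sketch in Python, assuming six alternatives and at least four first-slot ratings per alternative as described above. The design names and the rotation scheme are illustrative placeholders, not part of Spool's actual study plan.

# A minimal sketch (not from the article) of the participant arithmetic:
# every alternative must lead the session for at least four participants
# to control for learning effects. The design names are placeholders.

NUM_ALTERNATIVES = 6      # alternatives rotated through the first test slot
FIRST_SLOT_RATINGS = 4    # at least four people see each alternative first

alternatives = [f"design {chr(ord('A') + i)}" for i in range(NUM_ALTERNATIVES)]

# Each alternative needs FIRST_SLOT_RATINGS participants who see it first,
# so the minimum participant count is the product of the two numbers.
min_participants = NUM_ALTERNATIVES * FIRST_SLOT_RATINGS
print(f"Minimum participants: {min_participants}")  # 6 x 4 = 24

# One simple rotation that satisfies the constraint: participant p sees
# alternative (p mod 6) first, so each alternative leads exactly four times.
for p in range(min_participants):
    first = alternatives[p % NUM_ALTERNATIVES]
    print(f"Participant {p + 1:2d}: '{first}' shown first")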
