Tuesday, September 25, 2007

How Do Users Really Feel About Your Design?

Measuring how users feel about your design ...

"Perhaps you’ve done contextual inquiries to discover your users’ requirements and understand their workflows. You may have carried out participatory design sessions, usability tested your design, then iterated and improved it. But do you know how users really feel about your design? Probably not.

The user experience field has been trying to move beyond mere usability and utility for years. So far, no one seems to have developed easy-to-implement, non-retrospective, valid, and reliable measures for gauging users’ emotional reactions to a system, application, or Web site.

In this column, I’ll introduce you to a promising method that just might solve this problem. While this method has not yet been subjected to rigorous peer review or experimental testing, it offers an intriguing solution and is endlessly fascinating to me. And it just might prove to be the kind of powerful technique we’ve been looking for to illuminate users’ emotional reactions to our designs.

Why Measure How Users Feel?

Many of us in the field of user experience believe that utility and usability are necessary, but somehow insufficient. Even for the most staid and straightforward business application, users form an affective reaction to the application during initial and subsequent use. It might not be as strongly valenced as, say, the reaction they have when browsing an online store, but the affective reaction is there nonetheless.

All things being equal, users evaluate a system that engenders positive emotional reactions more positively than a system that doesn’t. So we need to know whether—and the extent to which—a system’s use triggers positively valenced emotions.

And on the flip side, we’ve all seen users become frustrated when they can’t figure out how to accomplish their tasks. Just how frustrated are they? Mildly? Moderately? Are they so irritated with your design they’re ready to heave the device out the nearest window? Let’s hope not.

The point I’m trying to make is that, up until now, we’ve assumed users are somewhat frustrated when we observe indirect behavioral indicators such as menu hunting, false starts, and input errors. And we’ve assumed they’re really frustrated when we hear a sigh of exasperation. However, these observations provide only coarse measures of affect. And very few of us capture them or use them systematically to make comparative judgments.

What’s more, alternatives for measuring delight and frustration—after-the-fact survey questions, verbal self-reports, and retrospective video self-evaluation—are notoriously subject to positivity bias and other vagaries of attribution bias. Wouldn’t you like to have a method that’s more granular and valid when you test your next design?" (Continued via UXmatters) [Usability Resources]
