Wednesday, March 25, 2009

The User Experience of Enterprise Software Matters, Part 2: Strategic User Experience

Usability testing enterprise applications ...

"In my previous column, “The User Experience of Enterprise Software Matters,” I argued that organizations making enterprise-level technology selections often do an incomplete job of assessing the real-world effects of the new applications they impose on their staffs’ workflows and processes, saying:

“The technology selection process typically neglects methods of evaluating the goodness of fit between the enterprise users’ processes, workflow, and needs, and the vendors’ solutions. Organizations could avoid many a rollout disaster simply by testing the usability of vendors’ solutions with employees during a trial phase.”

I also encouraged enterprises to demand more usable software that meets their organizations’ needs.
In this column, I’ll provide a technology selection framework that can help enterprises better assess the usability and appropriateness of enterprise applications they’re considering purchasing, with the goal of ensuring their IT (Information Technology) investments deliver fully on their value propositions.

It’s Not Rocket Science

As you may have suspected—and as UX professionals are fond of saying—the answer to this problem is not rocket science. It’s actually pretty simple: Organizations making technology investments need to do a few things in addition to their typical processes for evaluating technology:

* Identify and describe the target user groups that currently perform the task or process the software will automate, so their characteristics, motivations, and appetite for change are well understood.
* Model and describe the current workflow the target users employ to accomplish the task or process, using simple methods like task analysis and time-on-task measurement.
* Discover what the target users and other staff typically do before and after the task being automated, to understand whether, and if so how, you can automate the task's precursors and successors or somehow include them in the potential solution.
* Finally—and only after doing all of the above—begin to assess the technology solutions in detail for their goodness-of-fit to the qualitative, real-world characteristics of the target users and the existing workflow.
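The last step, assessing goodness-of-fit, can be made concrete by rolling usability-test metrics into a single score. Here is a minimal sketch in Python; the names (`TaskResult`, `fit_score`), the weights, and the scoring formula are illustrative assumptions, not anything prescribed in the column:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """One target user's attempt at a task on a candidate system."""
    completed: bool
    time_on_task: float  # seconds

def fit_score(results, baseline_time, weight_success=0.7, weight_speed=0.3):
    """Combine task success rate and time-on-task into a rough
    goodness-of-fit score between 0 and 1.

    baseline_time is the average time users take in the current workflow,
    measured during the task-analysis step. Weights are arbitrary and
    should reflect your organization's priorities.
    """
    if not results:
        raise ValueError("no test results")
    success_rate = sum(r.completed for r in results) / len(results)
    completed_times = [r.time_on_task for r in results if r.completed]
    if completed_times:
        avg_time = sum(completed_times) / len(completed_times)
        # A ratio of 1.0 means the candidate is no slower than today's workflow.
        speed = min(baseline_time / avg_time, 1.0)
    else:
        speed = 0.0
    return weight_success * success_rate + weight_speed * speed

# Example: 4 of 5 test users finished the task; the current workflow
# averages 120 seconds.
results = [
    TaskResult(True, 100), TaskResult(True, 140),
    TaskResult(True, 110), TaskResult(False, 300),
    TaskResult(True, 130),
]
print(round(fit_score(results, baseline_time=120), 2))  # -> 0.86
```

Comparing such scores across vendors is no substitute for watching real users work, but it gives the selection team one shared, defensible number alongside feature checklists.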

At this point in technology assessment, feature lists and demos matter a whole lot less than actually putting real target users on the system and having them perform their tasks. Does doing this consume more time and resources? Yes. Is it worth it? Absolutely! Not doing this increases the risk that your organization will suffer reduced productivity, decreased morale, and the other risks attendant on technology rejection that I described in Part 1. And, just in case you don’t really buy the examples I described there, let me relate two more stories of technology rejection that I recently encountered—this time, in high-risk, mission-critical environments."    (Continued via UXMatters, Paul J. Sherman)    [Usability Resources]

