Sunday, August 26, 2007

Recognizing gestures: Interface design beyond point-and-click

The state of gesture interface design ...

"Pointing is the most basic and simplest gesture, and it is an effective way for people to communicate with each other, even in the presence of language barriers. However, pointing quickly fails as a way to communicate when the object or concept a person is trying to convey is not in sight to point at. Taking gesture recognition beyond simple pointing greatly increases the types of information that two people can communicate. Gesture communication is so natural and powerful that parents are increasingly using it, through baby sign language, to enable their babies to engage in direct, two-way communication with their caregivers long before the babies can clearly speak (Reference 1).

The level of communication between users and their electronic devices has been largely limited to a pointing interface. To date, only a few common extensions to pointing exist: single versus double clicks or taps, and devices such as mice, trackballs, and touchscreens that let users hold down a button while moving the pointing focus. A user's ability to communicate naturally with a computing device through a gesture interface, such as a multitouch display or an optical-input system, or through a speech-recognition interface is still largely an emerging capability. Consider a new and revolutionary mobile phone that relies on a touchscreen-driven user interface instead of physical buttons and uses a predictive engine to help users type on the flat panel. This description applies to Apple's iPhone, which the company launched in June 2007, but it also applies to the IBM Simon, which the company launched with BellSouth in 1993, 14 years earlier. Differences exist between the two touch interfaces; for example, the newer unit supports multitouch gestures, such as “pinching” an image to resize it and flicking the display to scroll the content. This article touches on how gesture interfaces are evolving and what they mean for future interfaces.
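The pinch gesture mentioned above reduces to simple geometry: the zoom factor is the ratio of the current distance between two fingers to their previous distance. A minimal sketch (the function name and touch-point format are illustrative assumptions, not any vendor's API):

```python
import math

def pinch_scale(prev_touches, curr_touches):
    """Return the zoom factor implied by a two-finger pinch.

    Each argument is a pair of (x, y) touch points; the result is the
    ratio of the current finger spread to the previous spread.
    """
    def spread(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)

    prev = spread(prev_touches)
    if prev == 0:
        return 1.0  # degenerate case: fingers started at the same point
    return spread(curr_touches) / prev

# Fingers move apart: spread grows from 100 to 150 pixels -> 1.5x zoom in
scale = pinch_scale([(100, 100), (200, 100)], [(75, 100), (225, 100)])
```

A real driver would smooth the touch samples and apply the scale factor incrementally per frame, but the core interpretation is just this ratio.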

Much of the technology driving today's latest and most innovative gesturelike interfaces is not exactly new: most of these interfaces can trace their heritage to products or projects from the past few decades. According to Reference 2, multitouch panel interfaces have existed for at least 25 years, a span on par with the 30 years that elapsed between the invention of the mouse in 1965 and its tipping point as a ubiquitous pointing device with the release of Microsoft Windows 95. Improvements in the hardware for these types of interfaces enable designers to shrink end systems and lower their cost. More important, however, these interfaces let designers apply additional low-cost software-processing capacity to identify more contexts and thus better interpret what a user is trying to tell the system to do. In other words, most of the advances in emerging gesture interfaces will come not so much from new hardware as from more complex software algorithms that exploit the strengths and compensate for the weaknesses of each type of input interface. Reference 3 provides a work-in-progress directory of sources for input technologies."    (Continued via EDN)
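The point about software doing the interpretive work can be made concrete with one of the simplest gesture classifiers: distinguishing a flick from a drag by the speed of the finger at release. This is a sketch under assumed conventions (the function name and the threshold value are hypothetical tuning choices, not taken from any particular device):

```python
import math

# Assumed tunable threshold: strokes released faster than this count as flicks.
FLICK_SPEED = 1000.0  # pixels per second

def classify_release(dx, dy, dt, flick_speed=FLICK_SPEED):
    """Classify the end of a touch stroke as a 'flick' or a 'drag'.

    dx, dy: displacement in pixels over the final dt seconds of contact.
    A flick (fast release) typically triggers inertial scrolling, while a
    drag (slow release) simply stops the content where it is.
    """
    speed = math.hypot(dx, dy) / dt
    return "flick" if speed >= flick_speed else "drag"

# A quick 300-pixel swipe in a tenth of a second reads as a flick;
# the same distance over several seconds reads as a drag.
fast = classify_release(300, 0, 0.1)
slow = classify_release(300, 0, 3.0)
```

Production recognizers add more context, such as filtering sensor noise and disambiguating a flick from the start of a pinch, which is exactly the kind of algorithmic work the paragraph above describes.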
