Monday, February 11, 2008

Multimodal Affective Computing

Defining multimodal affective computing ...

"Affective Computing is computing that relates to, arises from, or deliberately influences emotion or other affective phenomena (Picard 1997).

Research on automatic emotion recognition did not start until the 1990s. Although researchers such as Ekman had published studies on how people recognize emotions from facial displays as early as the 1960s (Ekman and Friesen 1968), the idea of giving machines such abilities would have seemed absurd at a time when emotional mechanisms were not considered to play a significant role in human life. Scientists later found, however, that emotions persist even in the most rational of decisions: emotions are always present; we always feel something.

In the early 1990s, Salovey and Mayer published a series of papers on emotional intelligence (Salovey and Mayer 1990). They suggested that the capacity to perceive and understand emotions defines a new variable in personality. Goleman popularized this view of emotional intelligence, or Emotional Quotient (EQ), in his 1995 best-selling book, arguing that EQ mattered more than Intelligence Quotient (IQ) (Goleman 1995). Goleman drew together research in neurophysiology, psychology, and cognitive science. Other scientists provided further evidence that emotions are tightly coupled with all the functions we humans engage in: attention, perception, learning, reasoning, decision making, planning, action selection, and memory storage and retrieval (Isen 2000; Picard 2003).

This new scientific understanding of emotions inspired researchers to build machines with the ability to recognize, express, model, communicate, and respond to emotions. The initial focus was on recognizing prototypical emotions from posed visual input, namely facial expressions. All of the early-1990s work attempted to recognize prototypical emotions from two static face images: one neutral and one expressive. In the second half of the 1990s, automated facial expression analysis shifted to posed video sequences, exploiting the temporal information in displayed expressions. In parallel with automatic emotion recognition from visual input, work focusing on audio input emerged. Rosalind Picard's award-winning book, Affective Computing, was published in 1997, laying the groundwork for giving machines the skills of emotional intelligence. The book triggered an explosion of interest in the emotional side of computers and their users, and a new research area called affective computing emerged. Affective computing advocated the idea that machines need not possess all the emotional intelligence and skills humans do; for natural and effective human-computer interaction, however, computers still needed to appear intelligent to some extent (Picard 1997). Experiments by Reeves and Nass showed that for intelligent interaction, the basic rules of human-human interaction should hold (Reeves and Nass 1996)."    (Continued via Interaction-Design.org)
