Tuesday, October 30, 2007

The other side of the interface

Theories of interface design ...

"... It has been said that if users were meant to understand computers they would have been given brains. But, in fairness to users, the problem is often that interfaces are not designed to take account of their strengths and weaknesses. I have struggled with my fair share of dire user interfaces, and I’m supposed to be an expert user.

An interface is, by definition, a boundary between two systems. On one side of a user interface is the computer hardware and software. On the other side is the user with (hopefully) a brain and associated sensory systems. To design a good interface it is necessary to have some understanding of both of these systems. Programmers are familiar with the computer side (it is their job after all) but what about the other side? The brain is a remarkable organ, but to own one is not necessarily to understand how it works. Cognitive psychologists have managed to uncover a fair amount about thought processes, memory and perception. As computer models have played quite a large role in understanding the brain, it seems only fair to take something back. With apologies to psychologists everywhere, I will try to summarise some of the most important theory in the hope that this will lead to a better understanding of what makes a good user interface. Also, I think it is interesting to look at the remarkable design of a computer produced by millions of years of evolution, and possibly the most sophisticated structure in the universe (or at least in our little cosmic neighbourhood).

The human brain is approximately 1.3kg in weight and contains approximately 10,000,000,000 neurons. Processing is basically digital, with ‘firing’ neurons triggering other neurons to fire. A single neuron is rather unimpressive compared with a modern CPU. It can only fire a sluggish maximum of 1000 times a second, and impulses travel down it at a painfully slow maximum of 100 metres per second. However, the brain’s architecture is staggeringly parallel, with every neuron having a potential 25,000 interconnections with neighbouring neurons. That’s up to 2.5 × 10¹⁴ interconnections. This parallel construction means that it has massive amounts of storage, fantastic pattern recognition abilities and a high degree of fault tolerance. But the poor performance of the individual neurons means that the brain performs badly at tasks that cannot be easily parallelised, for example arithmetic. Also the brain carries out its processing and storage using a complex combination of electrical, chemical, hormonal and structural processes. Consequently the results of processing are probabilistic rather than deterministic, and the ability to store information reliably and unchanged for long periods is not quite what one might hope for.
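The back-of-the-envelope figures above are easy to check. A minimal sketch (the 2 GHz CPU clock is my own illustrative assumption for a 2007-era machine, not a figure from the original post):

```python
# Approximate figures quoted in the text above.
NEURONS = 10_000_000_000          # ~1e10 neurons
CONNECTIONS_PER_NEURON = 25_000   # potential interconnections each

total_connections = NEURONS * CONNECTIONS_PER_NEURON
print(f"{total_connections:.1e}")  # 2.5e+14 potential interconnections

# The serial weakness: a neuron fires at most ~1,000 times a second,
# versus roughly 2 GHz for a contemporary CPU core (assumed figure).
NEURON_HZ = 1_000
CPU_HZ = 2_000_000_000
print(f"A CPU clock is ~{CPU_HZ // NEURON_HZ:,}x faster than a neuron")
```

The six-orders-of-magnitude gap in serial speed is exactly why the brain wins only on tasks it can spread across billions of neurons at once.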

Perhaps unsurprisingly, the brain has a similar multi-level storage approach to a modern computer. Where a computer has cache, RAM and hard-disk memory (in increasing order of capacity and decreasing order of access speed) the brain has sensory memory, short-term memory and long-term memory. Sensory memory has a large capacity, but a very short retention period. Short-term memory has a very small capacity but can store and retrieve quickly. Long-term memory has a much larger capacity, but storage and retrieval are more difficult. New information from sensory memory and knowledge from long-term memory are integrated with information in short-term memory to produce solutions."    (Continued via Successful Software)    [Usability Resources]
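The cache-in-front-of-disk analogy can be made concrete with a toy sketch: a small, fast short-term store that displaces its oldest items into a large long-term store. This is my own illustrative code, not from the original post; the default capacity of seven is a nod to the classic short-term memory figure, used here only as an example value.

```python
from collections import OrderedDict

class MemoryHierarchy:
    """Toy analogy: a tiny, fast short-term store in front of a
    large, slower long-term store (like cache in front of disk)."""

    def __init__(self, short_term_capacity=7):
        self.short_term = OrderedDict()  # ordered so we know what is oldest
        self.capacity = short_term_capacity
        self.long_term = {}

    def learn(self, key, value):
        # New information lands in short-term memory first.
        self.short_term[key] = value
        self.short_term.move_to_end(key)
        if len(self.short_term) > self.capacity:
            # The oldest item is displaced and consolidated into long-term memory.
            old_key, old_value = self.short_term.popitem(last=False)
            self.long_term[old_key] = old_value

    def recall(self, key):
        if key in self.short_term:          # fast path: recently used
            self.short_term.move_to_end(key)
            return self.short_term[key]
        return self.long_term.get(key)      # slower, larger store
```

Like any analogy it is loose: real long-term retrieval is lossy and probabilistic, where a hard disk (mostly) is not.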

Memory Model - Usability, User Interface Design
