Multimodal context-sensitive human-computer interfaces

We have entered an era of enhanced digital connectivity. Computers and the Internet have become so embedded in the fabric of people's daily lives that many people can hardly imagine living without them. We use this technology to work, to communicate, to shop, to seek out new information, and to entertain ourselves. With this ever-increasing diffusion of computers in society, human-computer interaction (HCI) is becoming ever more essential to our daily lives. Yet, it is widely believed that existing HCI techniques form a bottleneck in the effective utilization of the available information flow by a broad user audience. For example, the most popular mode of HCI still relies on the keyboard and mouse. These devices have grown familiar, but they are a major cause of Repetitive Strain Injury (RSI), they are often unusable for people with physical disabilities and, in any case, they tend to restrict the natural flow of information and commands. People normally communicate through a subtle combination of gesture, facial expression, and vocal prosody in conjunction with spoken words, drawing on knowledge of the context in which the interaction takes place.

We want to develop multimodal context-aware HCI as a first step towards an HCI that is qualitatively better than what we have today and that can potentially resolve the interaction bottleneck in question. This novel HCI design should enable the computer to determine the context in which the user acts (i.e., who the current user is and what task he/she is doing) and to understand, in the given context, a number of natural human communicative cues, including gaze direction, hand gestures, speech, and facial expressions. In turn, such a system would allow the user to control his/her system using gaze, speech, and hand and facial gestures.
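To make the idea of context-dependent interpretation concrete, the following Python sketch shows how a recognized communicative cue could be mapped to different actions depending on the current context. It is purely illustrative: all labels, rules, and actions in it are hypothetical placeholders, and in a real system the cue and context labels would come from vision, speech, and gaze analysis modules.

    from dataclasses import dataclass
    from typing import Callable, Dict, Tuple

    # Hypothetical context description; a real system would infer these
    # fields from person identification and activity recognition.
    @dataclass(frozen=True)
    class Context:
        user: str   # who the current user is
        task: str   # what task he/she is doing

    def scroll_document() -> str:
        return "scrolling document"

    def answer_call() -> str:
        return "answering incoming call"

    # The same cue can mean different things in different contexts,
    # so the dispatch table is keyed on (task, cue).
    RULES: Dict[Tuple[str, str], Callable[[], str]] = {
        ("reading", "gaze_down"): scroll_document,
        ("idle",    "nod"):       answer_call,
    }

    def interpret(context: Context, cue: str) -> str:
        """Map a recognized communicative cue to an action, given the context."""
        action = RULES.get((context.task, cue))
        return action() if action else "cue ignored in this context"

    if __name__ == "__main__":
        ctx = Context(user="alice", task="reading")
        print(interpret(ctx, "gaze_down"))  # -> scrolling document
        print(interpret(ctx, "nod"))        # -> cue ignored in this context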

The application domain of the multimodal context-aware interfaces that we work on includes home and health support appliances, living and working spaces, and interactive devices such as automobiles and robots.

Individual Final Projects in this area of research consider the following topics:

Multimodal Human-Computer Interface

The focus of the research in this field is on using natural human interactive modalities (speech, hand gestures, facial expressions, emotions) to interact with a machine (robot / computer). Different multimodal affective interfaces can be developed.

AIBO Messenger

AIBO is a small dog-like robot that can walk and talk. It can be programmed to perform various functions, including the delivery of messages. The focus of the research in this project is on person identification, speech recognition, emotion recognition, speech production, and dialogue management. The AIBO should recognize a person, understand whether he or she wants to leave a message for another person, understand who the recipient of the message should be, record the message and the related emotional undercurrent (positive, negative, or neutral), recognize the recipient once he or she is in the vicinity, and deliver the message.
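Purely as an illustration, the Python sketch below outlines the kind of message-handling bookkeeping such a system might use once the perception modules (person identification, speech and emotion recognition) have produced their outputs. All names, fields, and labels are hypothetical.

    from dataclasses import dataclass
    from typing import List, Optional

    # Hypothetical message record; in the actual project the fields would be
    # filled by the person identification, speech recognition, and emotion
    # recognition modules running on the AIBO.
    @dataclass
    class Message:
        sender: str
        recipient: str
        audio: str                  # path to the recorded audio clip
        emotion: str                # "positive" | "negative" | "neutral"
        delivered: bool = False

    class MessengerState:
        def __init__(self) -> None:
            self.pending: List[Message] = []

        def take_message(self, sender: str, recipient: str,
                         audio: str, emotion: str) -> None:
            """Store a message once the dialogue with the sender completes."""
            self.pending.append(Message(sender, recipient, audio, emotion))

        def on_person_recognized(self, person: str) -> Optional[Message]:
            """When a person is recognized nearby, deliver any pending message."""
            for msg in self.pending:
                if msg.recipient == person and not msg.delivered:
                    msg.delivered = True
                    return msg      # hand over to speech production / playback
            return None

    if __name__ == "__main__":
        state = MessengerState()
        state.take_message("alice", "bob", "msg_001.wav", "positive")
        delivered = state.on_person_recognized("bob")
        if delivered:
            print(f"Delivering {delivered.audio} from {delivered.sender} "
                  f"({delivered.emotion} undertone)")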

RSI Prevention System

Repetitive Strain Injury (RSI) is an increasingly present danger of the ever-growing use of computers in our daily lives. Taking regular breaks and exercising drastically decreases the risk of developing RSI. The focus of the research in this project will be on monitoring a person working with a computer, announcing (in a user-preferred manner) when it is time to take a break, monitoring the execution of the exercises (including vocal and facial signs of pain, which should inform the scheduling of the next break), reporting on the correctness of the performed exercises, and providing feedback on how to improve their execution.
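As a simple illustration, a break scheduler along the following lines could sit at the core of such a system. The thresholds and the pain-based rescheduling rule are hypothetical placeholders, to be replaced by values grounded in ergonomics guidelines and driven by the activity and exercise monitoring modules.

    import time

    # Hypothetical thresholds; a deployed system would tune these per user
    # and announce breaks through the user's preferred modality.
    WORK_LIMIT_S = 30 * 60       # continuous work before a break is suggested
    SHORTENED_LIMIT_S = 20 * 60  # stricter limit after signs of pain

    class BreakScheduler:
        def __init__(self) -> None:
            self.work_start = time.monotonic()
            self.limit = WORK_LIMIT_S

        def on_activity(self) -> bool:
            """Called whenever keyboard/mouse activity is detected.
            Returns True if it is time to announce a break."""
            return time.monotonic() - self.work_start >= self.limit

        def on_break_finished(self, pain_observed: bool) -> None:
            """Called after the exercise monitor evaluates a break.
            If vocal or facial signs of pain were detected, schedule
            the next break sooner."""
            self.limit = SHORTENED_LIMIT_S if pain_observed else WORK_LIMIT_S
            self.work_start = time.monotonic()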

Lower Back Pain Prevention System

Since most current jobs involve sitting, lower back pain has become a commonly encountered problem. Some statistical reports mention that as much as 60% of the working population in the UK reports lower back pain every year. One of the main causes of lower back pain is the sitting position. The focus of the research in this project will be on monitoring a person's sitting position, announcing (in a user-preferred manner) when the position can cause lower back pain, and providing feedback on how to improve the sitting position. The inclusion of specific short exercises that can help prevent lower back pain may also be considered.
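As a minimal illustration, the posture monitoring logic could take the following shape, where the trunk-angle estimates would in practice come from a camera-based or sensor-based posture estimator; the angle threshold and persistence window used here are hypothetical values, not ergonomics recommendations.

    # Toy posture check over a stream of estimated trunk-lean angles (degrees).
    SLOUCH_ANGLE_DEG = 20.0   # forward trunk lean considered risky (placeholder)
    SUSTAINED_FRAMES = 300    # how long poor posture must persist (placeholder)

    def monitor_posture(trunk_angles):
        """Yield a warning whenever the sitting position has been poor
        for a sustained period."""
        bad_streak = 0
        for frame, angle in enumerate(trunk_angles):
            bad_streak = bad_streak + 1 if angle > SLOUCH_ANGLE_DEG else 0
            if bad_streak == SUSTAINED_FRAMES:
                yield frame, "Please straighten your back and adjust your chair."

    if __name__ == "__main__":
        # Simulated angle stream: sitting upright, then slouching.
        angles = [5.0] * 100 + [25.0] * 400
        for frame, advice in monitor_posture(angles):
            print(f"frame {frame}: {advice}")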

Contact:

Dr. Maja Pantic
Computing Department
Imperial College London
e-mail: m.pantic@imperial.ac.uk
website: http://www.doc.ic.ac.uk/~maja/