My research addresses the problem of sensing and understanding human non-verbal interactive actions and intentions. A critical issue here is that the human face and body exhibit complex, rich dynamic behaviour that is non-linear, time-varying, and context-dependent (dependent on the person, the task, and mood/affect).

Thus, the main focus of my research is on computer vision and machine learning for sensing and modelling multimodal human affective and interactive signals. The research topics include spatiotemporal analysis of human facial, bodily, and vocal signals; integration of multiple sensors and pertinent modalities modelled on the human sensory system; and learning of individual- and context-dependent human-behaviour models. These models are then used for human behaviour analysis, including the analysis of affective states and social stances.
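As a minimal illustration of the multimodal-integration idea, the sketch below fuses per-modality affect scores by a reliability-weighted average (late fusion). All modality names, scores, and weights are hypothetical examples, not values from the actual research:

```python
def fuse_modalities(scores, weights):
    """Late fusion: combine per-modality scores by a weighted average.

    scores  : dict mapping modality name -> score in [0, 1]
    weights : dict mapping modality name -> non-negative reliability weight
    """
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical scores for one affective state from three analysers,
# with the face channel weighted most heavily.
scores = {"face": 0.9, "body": 0.6, "voice": 0.7}
weights = {"face": 0.5, "body": 0.2, "voice": 0.3}
fused = fuse_modalities(scores, weights)  # 0.78
```

In practice such fusion would be learned from data and conditioned on person and context, rather than using fixed hand-set weights.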

The application domains include seamless and proactive design of home and health-support appliances, living and working spaces, interactive devices such as automobiles and robots, tutoring systems, and security-support systems. More information about this research can be found on the i·BUG web pages.