Current Projects

Past Projects

TeSLA - An Adaptive Trust-based e-assessment System for Learning

Although online education is a paramount pillar of formal, non-formal and informal learning, institutions may still be reluctant to commit to a fully online educational model. As such, there is still a reliance on face-to-face assessment, since online alternatives do not yet enjoy the expected social recognition and reliability. An e-assessment system able to provide effective proof of learner identity and authorship, by integrating selected technologies into current learning activities in a scalable and cost-efficient manner, would therefore be very advantageous. The TeSLA project provides educational institutions with an adaptive trust-based e-assessment system for assuring e-assessment processes in online and blended environments. It supports both continuous and final assessment to improve the level of trust among students, teachers and institutions.

Running time:

2016 - 2019

EPSRC FACER2VM

This is a large EPSRC Programme Grant aimed at advancing the science of machine face perception and face matching technology in order to enable automatic retrieval, recognition, and verification of faces and facial behaviours in images and videos recorded in the wild (e.g. CCTV camera recordings). The project is coordinated by Prof. Josef Kittler of the University of Surrey; the team at Imperial College is responsible for tracking of faces in the wild, for learning and tracking of dynamic facial biomarkers (i.e. facial behaviometrics), and for data-driven learning and tracking of soft biometrics.

Running time:

2016 - 2021

Project website:

https://facer2vm.org/about/

H2020 DE-ENIGMA

The main aim of this project is to realise robust, context-sensitive, multimodal Human-Robot Interaction aimed at enhancing the mind-reading and social imagination skills of children with autism.

Running time:

2016 - 2020

Project website:

http://de-enigma.eu/

JUS-ASC: Behavior Analysis of Children with Autism Spectrum Conditions (ASC) in Context of Assistive Robots

This project is a collaboration between Chubu University (Japan), Imperial College London (UK) and the Serbian Autism Association (SAA).

Running time:

2015 - 2017

Funding agency:

Chubu University, Int’l Collaboration Grant, and Local Governments

SEWA Project

Automatic Sentiment Analysis in the Wild (SEWA) is an EC H2020-funded project. The main aim of SEWA is to deploy and capitalise on existing state-of-the-art methodologies, models and algorithms for machine analysis of facial, vocal and verbal behaviour, and then to adjust and combine them to realise naturalistic human-centric human-computer interaction (HCI) and computer-mediated face-to-face interaction (FF-HCI).

Running time:

2015 - 2018

Funding agency:

European Commission Horizon 2020 Programme

Project website:

http://sewaproject.eu/

H2020 ARIA-VALUSPA: Artificial Retrieval of Information Assistants - Virtual Agents with Linguistic Understanding, Social skills, and Personalised Aspects

The ARIA-VALUSPA project will create a ground-breaking new framework that will allow easy creation of Artificial Retrieval of Information Assistants (ARIAs) that are capable of holding multi-modal social interactions in challenging and unexpected situations. The system can generate search queries and return the information requested by interacting with humans through virtual characters. These virtual humans will be able to sustain an interaction with a user for some time, and react appropriately to the user's verbal and non-verbal behaviour when presenting the requested information and refining search results. Using audio and video signals as input, both verbal and non-verbal components of human communication are captured. Together with a rich and realistic emotive personality model, a sophisticated dialogue management system decides how to respond to a user's input, be it a spoken sentence, a head nod, or a smile. The ARIA uses special speech synthesisers to create emotionally coloured speech and a fully expressive 3D face to create the chosen response. Back-channelling, indicating that the ARIA understood what the user meant, or returning a smile are but a few of the many ways in which it can employ emotionally coloured social signals to improve communication. As part of the project, the consortium will develop two specific implementations of ARIAs for two different industrial applications. A ‘speaking book’ application will create an ARIA with a rich personality capturing the essence of a novel, whom users can ask novel-related questions. An ‘artificial travel agent’ web-based ARIA will be developed to help users find their perfect holiday – something that is difficult to do with existing web interfaces.

Running time:

2015 - 2017

Researchers:

Björn Schuller, Eduardo Coutinho, Yue Zhang, Zixing Zhang

EPSRC First Grant: Adaptive Facial Deformable Models for Tracking (ADAManT)

The project proposes to develop methodologies for the automatic construction of person-specific facial deformable models for robust tracking of facial motion in unconstrained videos (recorded 'in-the-wild'). The tools are expected to work well for data recorded by a device as cheap as a webcam and in almost arbitrary recording conditions. The technology developed in the project is expected to have a huge impact on many applications, including biometrics (face recognition), Human-Computer Interaction (HCI) systems, analysis and indexing of videos using facial information (e.g. on YouTube), capturing of facial motion in the games and film industries, and the creation of virtual avatars.

Running time:

2015 - 2016

Funding agency:

EPSRC

Researchers:

Stefanos Zafeiriou

Large Scale 3D Facial Morphable Models

The grant is provided by Great Ormond Street Hospital (GOSH) in order to develop the first large-scale statistical 3D facial morphable model.
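
A statistical 3D morphable model of this kind typically represents face shape as a mean mesh plus a linear combination of principal components learned from training scans that are in dense correspondence. The sketch below illustrates that idea only; the use of plain PCA, the function names and the number of components are illustrative assumptions, not details of the model built in this project.

    # Minimal sketch of a PCA-based 3D morphable model (illustrative only).
    # Assumes N training meshes already in dense correspondence, each with V vertices.
    import numpy as np

    def build_morphable_model(meshes, n_components=50):
        """meshes: array of shape (N, V, 3) with corresponding vertices across meshes."""
        N, V, _ = meshes.shape
        X = meshes.reshape(N, 3 * V)              # one flattened mesh per row
        mean = X.mean(axis=0)
        # PCA via SVD of the mean-centred data matrix
        _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        components = Vt[:n_components]            # principal shape directions
        stddevs = S[:n_components] / np.sqrt(N - 1)
        return mean, components, stddevs

    def synthesise(mean, components, stddevs, coeffs):
        """Generate a new face shape from low-dimensional shape coefficients."""
        shape = mean + (coeffs * stddevs) @ components
        return shape.reshape(-1, 3)               # back to (V, 3) vertex positions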

Running time:

2014 - 2015

EPSRC 4D-FAB: Automatic analysis of facial behaviour in 4D

The main aim of this project is the development of tools for automatic spatio-temporal analysis and understanding of human facial behaviour from 4D facial information (i.e. high-quality 3D video recordings of facial behaviour). Two exemplar applications related to security issues will be specifically addressed: (a) person verification (i.e. facial behaviour as a form of behaviometrics), and (b) deception indication.

Running time:

2013 - 2017

Funding agency:

EPSRC

EC FP7 TERESA: Telepresence Reinforcement-learning Social Agent

TERESA aims to develop a telepresence robot with social intelligence. A human controller remotely interacts with people by guiding a remotely located robot, allowing the controller to be more physically present than with standard teleconferencing. TERESA will develop a new telepresence system that frees the controller from low-level decisions regarding navigation and body pose in social settings. Instead, TERESA will have the social intelligence to perform these functions automatically. iBUG will work on automatic analysis of the controllers.

Running time:

2013 - 2016

Funding agency:

European Commission FP7

Project website:

http://teresaproject.eu/

EC FP7 FROG: Fun Robotic Outdoor Guide

The main focus of the FROG project is the development of an outdoor guide robot that will engage tourists in a fun exploration of outdoor attractions. The work encompasses innovation in the areas of vision-based detection of facial and body gestures in unconstrained outdoor environments, robot design and navigation that will make FROG possible, and affect-sensitive human-robot interaction capable of recognizing and responding appropriately to the level of interest and engagement shown by the audience.

Running time:

2011 - 2014

Funding agency:

European Commission FP7

EmoPain - Pain Rehabilitation: E/Motion based automatic coaching

The main aim of the project is the design and development of an intelligent system that will enable ubiquitous monitoring and assessment of patients' pain-related mood and movements inside (and, in the longer term, outside) the clinical environment. Specifically, the focus of the project is twofold: (i) the development of a set of methods for automatic recognition of audiovisual cues related to pain, behavioural patterns typical of low back pain, and affective states influencing pain, and (ii) the integration of these methods into a system that will provide appropriate feedback and prompts to the patient based on his/her behaviour measured during self-directed physical therapy sessions. The Imperial College team is responsible for visual analysis of facial behaviour and audiovisual recognition of affective states.

Running time:

2010 - 2013

Funding agency:

EPSRC

Project website:

http://www.emo-pain.ac.uk/

EC FP7 SSPNet: Social Signal Processing Network of Excellence

The main focus of the SSPNet is the automatic assessment and synthesis of human social behaviour, which has been predicted to be the crux of next-generation computing. The mission of the SSPNet is to create sufficient momentum by integrating the large body of existing knowledge and available resources in SSP research domains, including cognitive modelling, machine understanding, and synthesis of social behaviour, and so:

  1. enable creation of the European and world research agenda in SSP,
  2. provide efficient and effective access to SSP-relevant tools and data repositories to the research community within and beyond the SSPNet, and
  3. further develop complementary and multidisciplinary expertise necessary for pushing forward the cutting edge of the research in SSP.

Running time:

2009 - 2014

Funding agency:

European Commission FP7

Project website:

http://sspnet.eu/

MAHNOB: Multimodal Analysis of Human Nonverbal Behaviour in Real-World Settings

The main aim of the project is to address the problem of automatic analysis of human expressive behaviour found in real-world settings. The core technical aim is to develop a set of tools, based on findings in the cognitive sciences, comprising audiovisual spatiotemporal methods for automatic analysis of human spontaneous (as opposed to posed and exaggerated) patterns of behavioural cues, including head pose, facial expression, visual focus of attention, hand and body movements, and vocal outbursts such as laughter and yawns. As a proof of concept, MAHNOB technology will be developed for two specific application areas: automatic analysis of mental states such as fatigue and confusion in Human-Computer Interaction contexts, and non-obtrusive deception detection in standard interview settings.

Running time:

2008 - 2013

Funding agency:

ERC Starting Grant

EC FP7 SEMAINE: Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression

The main aim of the project is to enable affect-sensitive interaction between the human user and a virtual actor (avatar), forming a step towards the development of more natural human-machine interfaces. The core technical aim is twofold:

  1. enabling real-time analysis of the user's behaviour from speech, facial expression, and gaze, with respect to interest (bored vs. interested), emotion-related states (negative vs. positive), social signals (agreeing vs. disagreeing) and dialogue dynamics (turn taking), and
  2. enabling both real-time generation of avatar’s behaviour driven by avatar’s current personality through a synthetic voice, face, and gestures, and real-time generation of multimodal dialogue contributions that help sustain the interaction.

Running time:

2008 - 2010

Funding agency:

European Commission FP7

NWO VENI - FIFAI: Facial Information For Advanced Interaction

The main aim of the project is to investigate whether and how human facial gestures and gaze could be incorporated into standard HCI systems as new modes of HCI and as providers of context-discriminative information revealing how the user feels (e.g., pleased, frustrated, or tired). The core technical aim is to develop a novel facial-information analyzer which would process human-face image sequences to detect the user's facial gestures, their temporal patterns, and the user's gaze direction, and then fuse and interpret them in terms of command-, affect- and mood-descriptive interpretation labels in a user-profiled and user-point-of-regard-sensitive manner.

Running time:

2003 - 2006

Funding agency:

Netherlands Organization for Scientific Research (NWO), TU Delft

Researchers:

Maja Pantic