Although online education is an important pillar of formal, non-formal, and informal learning, institutions may still be reluctant to commit to a fully online educational model. As a result, there is still a reliance on face-to-face assessment, since online alternatives do not yet enjoy the social recognition and trust they deserve. An e-assessment system able to provide effective proof of student identity and authorship, integrating selected technologies into current learning activities in a scalable and cost-efficient manner, would therefore be very advantageous. The TeSLA project provides educational institutions with an adaptive trust-based e-assessment system for assuring e-assessment processes in online and blended environments. It supports both continuous and final assessment to improve the level of trust among students, teachers, and institutions.
2016 - 2019
This is a large Programme Grant aimed at advancing the science of machine face perception and face-matching technology in order to enable automatic retrieval, recognition, and verification of faces and facial behaviours in images and videos recorded in the wild (e.g. CCTV camera recordings). The coordinator of the project is Prof. Josef Kittler of the University of Surrey, and the team at Imperial College is responsible for tracking faces in the wild, learning and tracking dynamic facial biomarkers (i.e. facial behaviometrics), and data-driven learning and tracking of soft biometrics.
2016 - 2021
The main aim of this project is to realise robust, context-sensitive, and multimodal Human-Robot Interaction aimed at enhancing the mind reading and social imagination skills of children with autism.
2016 - 2020
Automatic Sentiment Analysis in the Wild (SEWA) is an EC H2020-funded project. The main aim of SEWA is to deploy and capitalise on existing state-of-the-art methodologies, models and algorithms for machine analysis of facial, vocal and verbal behaviour, and then adjust and combine them to realise naturalistic human-centric human-computer interaction (HCI) and computer-mediated face-to-face interaction (FF-HCI).
2015 - 2018
European Commission Horizon 2020 Programme
The ARIA-VALUSPA project will create a ground-breaking new framework that will allow easy creation of Artificial Retrieval of Information Assistants (ARIAs) that are capable of holding multi-modal social interactions in challenging and unexpected situations. The system can generate search queries and return the information requested by interacting with humans through virtual characters. These virtual humans will be able to sustain an interaction with a user for some time, and react appropriately to the user's verbal and non-verbal behaviour when presenting the requested information and refining search results. Using audio and video signals as input, both verbal and non-verbal components of human communication are captured. Together with a rich and realistic emotive personality model, a sophisticated dialogue management system decides how to respond to a user's input, be it a spoken sentence, a head nod, or a smile. The ARIA uses special speech synthesisers to create emotionally coloured speech and a fully expressive 3D face to create the chosen response. Back-channelling, indicating that the ARIA understood what the user meant, or returning a smile are but a few of the many ways in which it can employ emotionally coloured social signals to improve communication. As part of the project, the consortium will develop two specific implementations of ARIAs for two different industrial applications. A ‘speaking book’ application will create an ARIA with a rich personality capturing the essence of a novel, whom users can ask novel-related questions. An ‘artificial travel agent’ web-based ARIA will be developed to help users find their perfect holiday – something that is difficult to do with existing web interfaces.
2015 - 2017
The main aim of this project is the development of tools for automatic spatio-temporal analysis and understanding of human facial behaviour from 4D facial information (i.e. 3D high-quality video recordings of facial behaviour). Two exemplar applications related to security issues will be specifically addressed in this project: (a) person verification (i.e. facial behaviour as a form of behaviometrics), and (b) deception indication.
2013 - 2017
The project proposes to develop methodologies for the automatic construction of person-specific facial deformable models for robust tracking of facial motion in unconstrained videos (recorded 'in-the-wild'). The tools are expected to work well for data recorded by a device as cheap as a webcam and in almost arbitrary recording conditions. The technology developed in the project is expected to have a large impact on many different applications, including, but not limited to, biometrics (face recognition), Human Computer Interaction (HCI) systems, analysis and indexing of videos using facial information (e.g., YouTube), capturing of facial motion in the games and film industries, and creating virtual avatars.
2015 - 2016
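The person-specific facial deformable models described above are commonly built around a point distribution model (PDM): a linear shape basis learned by PCA over aligned landmark shapes, against which new shapes are fitted. The following is a minimal illustrative sketch of that idea using purely synthetic data (the landmark counts and two-mode setup are illustrative assumptions, not the project's actual code):

```python
import numpy as np

# Minimal point distribution model (PDM) sketch: learn a linear shape
# basis from aligned 2D facial landmark sets via PCA, then reconstruct
# an unseen shape from the low-dimensional model. Synthetic data only.

rng = np.random.default_rng(0)
n_points = 68            # typical number of facial landmarks (assumption)
n_shapes = 200

# Synthetic "aligned" shapes: a mean shape plus two latent modes of variation.
mean_shape = rng.standard_normal(2 * n_points)
modes = rng.standard_normal((2, 2 * n_points))
coeffs = rng.standard_normal((n_shapes, 2))
shapes = mean_shape + coeffs @ modes           # (n_shapes, 2*n_points)

# PCA: centre the data and keep the dominant right singular vectors.
mu = shapes.mean(axis=0)
centred = shapes - mu
_, s, vt = np.linalg.svd(centred, full_matrices=False)
basis = vt[:2]                                 # the 2 dominant shape modes

def fit(shape):
    """Project a shape onto the model and reconstruct it."""
    b = basis @ (shape - mu)                   # low-dimensional shape parameters
    return mu + basis.T @ b                    # reconstruction from the model

new_shape = mean_shape + np.array([0.5, -1.0]) @ modes
recon = fit(new_shape)
print(np.max(np.abs(recon - new_shape)))       # near zero: shape lies in the model span
```

In a real system the training shapes would first be rigidly aligned (e.g. by Procrustes analysis) and the fitted parameters constrained to plausible ranges, which is what makes the resulting model robust to noisy in-the-wild measurements.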
TERESA aims to develop a telepresence robot with social intelligence. A human controller interacts with people remotely by guiding a remotely located robot, allowing the controller to be more physically present than with standard teleconferencing. TERESA will develop a new telepresence system that frees the controller from low-level decisions regarding navigation and body pose in social settings; instead, TERESA will have the social intelligence to perform these functions automatically. iBUG will work on the automatic analysis of the controllers' behaviour.
2013 - 2016
European Commission FP7
The main focus of the FROG project is the development of an outdoor guide robot that will engage tourists in a fun exploration of outdoor attractions. The work proposed encompasses innovation in the areas of vision-based detection of facial and body gestures in unconstrained outdoor environments, robotics design and navigation that will make FROG possible, and affect-sensitive human-robot interaction capable of recognising and responding appropriately to the level of interest and engagement shown by the audience.
2011 - 2014
European Commission FP7
The main aim of the project is the design and development of an intelligent system that will enable ubiquitous monitoring and assessment of patients' pain-related mood and movements inside (and, in the longer term, outside) the clinical environment. Specifically, the focus of the project is twofold: (i) the development of a set of methods for automatic recognition of audiovisual cues related to pain, behavioural patterns typical of low back pain, and affective states influencing pain, and (ii) the integration of these methods into a system that will provide appropriate feedback and prompts to the patient based on his/her behaviour measured during self-directed physical therapy sessions. The Imperial College team is responsible for visual analysis of facial behaviour and audiovisual recognition of affective states.
2010 - 2013
The main focus of the SSPNet is automatic assessment and synthesis of human social behaviour, which has been predicted to be the crux of next-generation computing. The mission of the SSPNet is to create sufficient momentum by integrating the large existing body of knowledge and available resources in SSP research domains, including cognitive modelling, machine understanding, and synthesis of social behaviour.
2009 - 2014
European Commission FP7
The main aim of the project is to address the problem of automatic analysis of human expressive behaviour found in real-world settings. The core technical aim is to develop a set of tools that will be based on findings in the cognitive sciences and will represent a set of audiovisual spatiotemporal methods for automatic analysis of human spontaneous (as opposed to posed and exaggerated) patterns of behavioural cues, including head pose, facial expression, visual focus of attention, hand and body movements, and vocal outbursts like laughter and yawns. As a proof of concept, MAHNOB technology will be developed for two specific application areas: automatic analysis of mental states like fatigue and confusion in Human-Computer Interaction contexts, and non-obtrusive deception detection in standard interview settings.
2008 - 2013
ERC Starting Grant
The main aim of the project is to enable affect-sensitive interaction between the human user and a virtual actor (avatar), forming a step towards the development of more natural human-machine interfaces. The core technical aim is twofold.
2008 - 2010
European Commission FP7
The main aim of the project is to investigate whether and how human facial gestures and gaze could be included in standard HCI systems as new modes of HCI and as providers of context-discriminative information revealing how the user feels (e.g., pleased, frustrated, tired, etc.). The core technical aim is to develop a novel facial-information analyser that would process human-face image sequences to detect the user's facial gestures, their temporal patterns, and the user's gaze direction, and then fuse and interpret them in terms of command-, affect-, and mood-descriptive interpretation labels in a user-profiled and user-point-of-regard-sensitive manner.
2003 - 2006
Netherlands Organization for Scientific Research (NWO), TU Delft