ECCV 2022: 4th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW)

 

 

The 4th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) will be held in conjunction with the European Conference on Computer Vision (ECCV) 2022. 

The ABAW Workshop and Competition is a continuation of the respective Workshops and Competitions held at the IEEE CVPR 2022, ICCV 2021, IEEE FG 2020 (a), IEEE FG 2020 (b) and IEEE CVPR 2017 Conferences.

 

The ABAW Workshop and Competition has the unique aspect of fostering cross-pollination of different disciplines, bringing together experts (from academia, industry and government) and researchers in mobile and ubiquitous computing, computer vision and pattern recognition, artificial intelligence and machine learning, multimedia, robotics, HCI, ambient intelligence and psychology. The diversity of human behavior, the richness of multi-modal data that arises from its analysis, and the multitude of applications that demand rapid progress in this area ensure that our events provide a timely and relevant discussion and dissemination platform.

 

Workshop's Agenda

The workshop's agenda can be found here. The workshop will be held on Sunday, October 23 and will be an online event.
Please note that all displayed times are Israel Daylight Time (i.e. GMT+3). 
 

 

Organisers

Dimitrios Kollias, Queen Mary University of London, UK (d.kollias@qmul.ac.uk)

Stefanos Zafeiriou, Imperial College London, UK (s.zafeiriou@imperial.ac.uk)

Viktoriia Sharmanska, University of Sussex, UK (sharmanska.v@sussex.ac.uk)

Elnar Hajiyev, Realeyes - Emotional Intelligence (elnar@realeyesit.com)

 

 


Keynote Speakers

 

Ioannis (Yiannis) Patras


Ioannis Patras is a Professor of Computer Vision and Human Sensing in the School of Electronic Engineering and Computer Science at Queen Mary University of London. His research is in the area of "Looking at / Sensing People", using Machine Learning, Computer Vision and Signal Processing methodologies to interpret and predict the actions, behaviour, emotions and cognitive states of people by analysing their images, video and neuro-physiological signals. This includes detection, tracking and recognition of facial and body gestures in unconstrained environments.
He has more than 200 publications in the most selective journals and conferences in the field of Computer Vision, with more than 10,000 citations. He is an associate editor of three journals, and an area chair or programme committee member of all the major conferences in the field. His research has been funded by the EPSRC, the EU FP7 programme and direct bilateral collaborations with research institutes and industry.

 

Abhinav Dhall

 

Abhinav Dhall is the Head of the Centre for Applied Research in Data Sciences, IIT Ropar; he is an Adjunct Senior Lecturer at Monash University, Australia and an Adjunct Faculty member at IIIT-Delhi, India. His research is in the areas of Human-Centred Computing, Affective Computing and Multimodal Systems. He has more than 120 publications in the most selective journals and conferences in the respective fields, with more than 4,800 citations. He has been a co-organizer of many Workshops and Competitions/Challenges, such as the series of Emotion Recognition in the Wild (EmotiW) Challenges.

 

 

 

The Workshop 

 

Scope

This Workshop tackles the problem of affective behavior analysis in-the-wild, which is a major targeted capability of HCI systems used in real-life applications. The goal is to create machines and robots that are capable of understanding people's feelings, emotions and behaviors, and can thus interact with them in a 'human-centered' and engaging manner, effectively serving as their digital assistants. This interaction should not be dependent on the respective context, nor on the human's age, sex, ethnicity, educational level, profession or social position. As a result, the development of intelligent systems able to analyze human behaviors in-the-wild can contribute to the generation of trust, understanding and closeness between humans and machines in real-life environments.

 

Representing human emotions has been a fundamental topic of research. The most frequently used emotion representation is the categorical one, comprising the seven basic categories, i.e., Anger, Disgust, Fear, Happiness, Sadness, Surprise and Neutral. Discrete emotions can also be described in terms of the Facial Action Coding System (FACS) model, in which all possible facial actions are described in terms of Action Units (AUs). Finally, the dimensional model of affect has been proposed as a means to distinguish between subtly different displays of affect and to encode small changes in the intensity of each emotion on a continuous scale. The 2-D Valence and Arousal Space (VA-Space) is the most commonly used dimensional emotion representation; valence shows how positive or negative an emotional state is, whilst arousal shows how passive or active it is.
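For concreteness, these three representations can be sketched as a simple data record; the field names and value ranges below are illustrative assumptions for this sketch, not a prescribed annotation format.

from dataclasses import dataclass
from typing import List

# Illustrative encoding of the three emotion representations discussed above.
# Field names and value ranges are assumptions, not a dataset schema.
@dataclass
class EmotionAnnotation:
    expression: int          # categorical model: index into the basic categories
    action_units: List[int]  # FACS model: binary activation (0/1) per Action Unit
    valence: float           # dimensional model: negative-to-positive, typically in [-1, 1]
    arousal: float           # dimensional model: passive-to-active, typically in [-1, 1]

# Example record: a mildly positive, fairly calm face with two AUs active.
sample = EmotionAnnotation(expression=3,
                           action_units=[0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0],
                           valence=0.6, arousal=0.2)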

 

To this end, the developed systems should automatically sense and interpret facial and audio-visual signals relevant to emotions, traits, appraisals and intentions. Furthermore, since real-world settings entail uncontrolled conditions, where subjects operate in a diversity of contexts and environments, systems that perform automatic analysis of human behavior and emotion recognition should be robust to video recording conditions, diversity of contexts and timing of display.

 

Recently, a lot of attention has been directed towards understanding and mitigating algorithmic bias in models. In the context of in-the-wild generalisation, subgroup distribution shift is a challenging problem. In this scenario, a difference in performance is observed across subgroups (e.g., demographic sub-populations of the training data), which can degrade the performance of a model deployed in-the-wild. The aim is to build fair machine learning models that perform well on all subgroups and improve in-the-wild generalisation. 

 

All these goals are scientifically and technically challenging.

  

Call for participation: 

This Workshop will solicit contributions on the recent progress of recognition, analysis, generation-synthesis and modelling of face, body and gesture, while embracing the most advanced systems available for face and gesture analysis, particularly in-the-wild (i.e., in unconstrained environments) and across modalities (e.g., from face to voice). In parallel, this Workshop will solicit contributions towards building fair models that perform well on all subgroups and improve in-the-wild generalisation.

 

Original high-quality contributions, including:

 

- databases or

- surveys and comparative studies or

- Artificial Intelligence / Machine Learning / Deep Learning / AutoML / (Data-driven or Physics-based) Generative Modelling Methodologies (either Uni-Modal or Multi-Modal; Uni-Task or Multi-Task)

 

are solicited on the following topics:

 

i) "in-the-wild" facial expression (basic, compound or other) or micro-expression analysis,

ii) "in-the-wild" facial action unit detection,

iii) "in-the-wild" valence-arousal estimation,

iv) "in-the-wild" physiological-based (e.g.,EEG, EDA) affect analysis,

v) domain adaptation for affect recognition in the previous 4 cases,

vi) "in-the-wild" face recognition, detection or tracking,

vii) "in-the-wild" body recognition, detection or tracking,

viii) "in-the-wild" gesture recognition or detection,

ix) "in-the-wild" pose estimation or tracking,

x) "in-the-wild" activity recognition or tracking,

xi) "in-the-wild" lip reading and voice understanding,

xii) "in-the-wild" face and body characterization (e.g., behavioral understanding),

xiii) "in-the-wild" characteristic analysis (e.g., gait, age, gender, ethnicity recognition),

xiv) "in-the-wild" group understanding via social cues (e.g., kinship, non-blood relationships, personality) 

xv) editing, manipulation, image-to-image translation, style mixing, interpolation, inversion and semantic diffusion for the aforementioned cases,

xvi) subgroup distribution shift analysis in affect recognition

xvii) subgroup distribution shift analysis in face and body behaviour

xviii) subgroup distribution shift analysis in characteristic analysis

 

 

Accepted papers will appear in the ECCV 2022 proceedings.

 

Workshop Important Dates: (UPDATED)

  • Paper Submission Deadline:              

    July 28, 2022

  • Review decisions sent to authors; Notification of acceptance:

    August 17, 2022 

  • Camera ready version:

    August 22, 2022

   

 

Submission Information

The paper format should adhere to the submission guidelines and style of the main ECCV 2022 proceedings. Please have a look at the Submission Guidelines section here.

All papers should be submitted via the CMT website: https://cmt3.research.microsoft.com/4thABAW2022

All accepted manuscripts will be part of ECCV 2022 conference proceedings. 

 

 

 

The Competition

 

The Competition is a continuation of the ABAW Competitions held this year at CVPR, last year at ICCV and the year before at IEEE FG. It is split into the two Challenges described below, which represent a significant step forward compared to previous events. 

Teams are invited to take part in at least one of these Challenges.

 

Leaderboard


  • Multi-Task Learning Challenge:

In total, 55 Teams participated in the Multi-Task-Learning Challenge. 25 Teams submitted their results. 11 Teams scored higher than the baseline and made valid submissions.
 
The winner of this Challenge is Situ-RUCAIM3 consisting of: Tenggan Zhang, Chuanhe Liu, Xiaolong Liu, Yuchen Liu, Liyu Meng, Lei Sun, Wenqiang Jiang, and Fengyuan Zhang (Renmin University of China; Beijing Seek Truth Data Technology Services Co Ltd).
 
The runner-up is ICT-VIPL consisting of: Hu Han (Chinese Academy of Sciences), Yifan Li, Haomiao Sun, Zhaori Liu, Shiguang Shan and Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences, China). 

 

 

  • Learning from Synthetic Data (LSD) Challenge

In total, 51 Teams participated in the Learning from Synthetic Data Challenge. 21 Teams submitted their results. 10 Teams scored higher than the baseline and made valid submissions.

The winner of this Challenge is HSE-NN consisting of: Andrey Savchenko (HSE University, Russia).

The runner-up is PPAA consisting of: Jie Lei, Zhao Liu, Tong Li, Zeyu Zou, Xu Juan, Shuaiwei Wang, Guoyu Yang and Zunlei Feng (Zhejiang University of Technology; Ping An Life Insurance of China Ltd).

 

 

The leaderboards for the 2 Challenges can be found below:  

 

ECCV2022_ABAW4_MTL_Leaderboard

ECCV2022_ABAW4_LSD_Leaderboard

 

Congratulations to all teams, winning and non-winning ones! Thank you very much for participating in our Competition.

All teams are invited to submit papers describing their methodologies (please see the Submission Information section above). All accepted papers will be part of the ECCV 2022 proceedings.

We are looking forward to receiving your submissions! 

 

 

 

How to participate

In order to participate, teams will have to register; the lead researcher should send an email from their official address (no personal emails will be accepted) to d.kollias@qmul.ac.uk with:
i) the subject "4th ABAW Competition: Team Registration";
ii) this EULA (if the team is composed only of academics) or this EULA (if the team has at least one member coming from industry) filled in, signed and attached;
iii) the lead researcher's official academic/industrial website; the lead researcher cannot be a student (UG/PG/Ph.D.);
iv) the emails of each team member;
v) the team's name;
vi) the point of contact's name and email address (i.e., which member of the team will be the main point of contact for future communications, data access, etc.).

Each team can have a maximum of 8 participants.

In reply, you will receive access to the dataset's cropped/cropped-aligned images, the annotations and other important information.

 

General Information

At the end of the Challenges, each team will have to send us:

i) their predictions on the test set,
ii) a link to a Github repository where their solution/source code will be stored, and
iii) a link to an ArXiv paper with 2-8 pages describing their proposed methodology, data used and results.
After that, the winner of each Challenge, along with a leaderboard, will be announced.

There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop; the accepted papers will be part of the ECCV 2022 proceedings. All other teams are also able to submit paper(s) describing their solutions and final results; the accepted papers will be part of the ECCV 2022 proceedings.

 

The Competition's white paper (describing the Competition, the data, the baselines and results) will be ready at a later stage and will be distributed to the participating teams.

 

 

1) Multi-Task-Learning (MTL) Challenge

 

This is a continuation of the respective Challenge held earlier this year at IEEE CVPR 2022.

 

Database

For this Challenge, the s-Aff-Wild2 database will be used. s-Aff-Wild2 is a static version of the Aff-Wild2 database; it contains specific frames (images) selected from Aff-Wild2. 
In total, around 221K images will be used, annotated in terms of: valence-arousal; the 6 basic expressions, plus the neutral state, plus the 'other' category; and 12 action units, namely AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU15, AU23, AU24, AU25 and AU26.
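For quick reference, the annotation targets of this Challenge can be summarised as follows (the variable names are purely illustrative, not the dataset's own file format).

# Quick reference for the s-Aff-Wild2 annotation targets (names are illustrative).
VA_DIMENSIONS = ["valence", "arousal"]                           # continuous, per image
EXPRESSIONS = ["Anger", "Disgust", "Fear", "Happiness",
               "Sadness", "Surprise", "Neutral", "Other"]        # 8 categories
ACTION_UNITS = ["AU1", "AU2", "AU4", "AU6", "AU7", "AU10",
                "AU12", "AU15", "AU23", "AU24", "AU25", "AU26"]  # binary, per image

assert len(EXPRESSIONS) == 8 and len(ACTION_UNITS) == 12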

 

Rules

The participants are allowed to use the provided s-Aff-Wild2 database and/or any publicly available or private database; the participants are not allowed to use the (A/V) Aff-Wild2 database (images and annotations).
Any methodological solution will be accepted for this Challenge.

 

Performance Assessment

The performance measure will be the sum of: the mean Concordance Correlation Coefficient (CCC) of valence and arousal; the average F1 Score across all 8 expression categories; the average F1 Score across all 12 action units:
P = 0.5 * (CCC_arousal + CCC_valence) + (1/8) * ∑ F1_expr + (1/12) * ∑ F1_AU
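For illustration, the overall measure could be computed as in the following minimal sketch, which assumes NumPy arrays and scikit-learn; it is not the organisers' official evaluation script.

import numpy as np
from sklearn.metrics import f1_score

def ccc(x, y):
    """Concordance Correlation Coefficient between two 1-D arrays."""
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def mtl_performance(va_pred, va_true, expr_pred, expr_true, au_pred, au_true):
    """P = 0.5*(CCC_arousal + CCC_valence) + mean F1 over 8 expressions + mean F1 over 12 AUs.

    va_*   : float arrays of shape (N, 2) holding valence and arousal
    expr_* : integer arrays of shape (N,) with the 8 expression classes
    au_*   : binary arrays of shape (N, 12) for the 12 action units
    """
    ccc_va = 0.5 * (ccc(va_pred[:, 0], va_true[:, 0]) + ccc(va_pred[:, 1], va_true[:, 1]))
    f1_expr = f1_score(expr_true, expr_pred, average="macro",
                       labels=list(range(8)))                                    # (1/8) * sum of per-class F1
    f1_au = np.mean([f1_score(au_true[:, i], au_pred[:, i]) for i in range(12)]) # (1/12) * sum of per-AU F1
    return ccc_va + f1_expr + f1_au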

 

Baseline Results

The baseline network is a pre-trained VGGFACE (with fixed convolutional weights) and its performance on the validation set is:
P = 0.30
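The baseline setup described above can be pictured as a frozen convolutional backbone feeding three task-specific heads. The sketch below uses a plain VGG-16 from torchvision as a stand-in (the actual VGGFACE weights are not reproduced here), so it illustrates the structure only, not the official baseline implementation.

import torch
import torch.nn as nn
from torchvision import models

# Frozen-backbone, multi-head sketch (structure only; plain VGG-16 stands in for VGGFACE).
net = models.vgg16(weights=None)
for p in net.features.parameters():
    p.requires_grad = False                      # keep convolutional weights fixed

net.classifier = nn.Identity()                   # expose the pooled, flattened features
feat_dim = 512 * 7 * 7

va_head = nn.Linear(feat_dim, 2)                 # valence and arousal
expr_head = nn.Linear(feat_dim, 8)               # 8 expression categories
au_head = nn.Linear(feat_dim, 12)                # 12 action units

feats = net(torch.randn(4, 3, 224, 224))         # (4, 25088)
va = torch.tanh(va_head(feats))                  # continuous values in [-1, 1]
expr_logits = expr_head(feats)
au_logits = au_head(feats)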

 

 

2) Learning from Synthetic Data (LSD) Challenge

 

Database

For this Challenge, specific frames (images) from the Aff-Wild2 database have been selected and used for expression manipulation.  
In total, around 300K synthetic images have been generated, annotated in terms of the 6 basic facial expressions (anger, disgust, fear, happiness, sadness, surprise). These synthetic images will be provided to the participating teams for use in model training and methodology development. 
The participating teams' methodologies will be evaluated on real images of the Aff-Wild2 database.

 

Rules

Teams are allowed to use any pre-trained model, whether publicly available or not (as long as it has not been pre-trained on Aff-Wild2). The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Classification, AU detection, Face Recognition). However, when refining the model and developing their methodology, teams must use only the provided synthetic data; no real data should be used in model training/methodology development.

 

Performance Assessment

The performance measure will be the average F1 Score across all 6 categories:
P = (1/6) * ∑ F1
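Concretely, this is the unweighted (macro) average of the six per-class F1 scores; a minimal sketch with made-up labels, assuming scikit-learn:

from sklearn.metrics import f1_score

# Made-up labels for illustration: integers 0-5 for the six basic expressions
# (anger, disgust, fear, happiness, sadness, surprise).
y_true = [0, 1, 2, 3, 4, 5, 3, 3]
y_pred = [0, 1, 2, 3, 4, 5, 2, 3]

# Macro F1: the unweighted mean of the six per-class F1 scores.
P = f1_score(y_true, y_pred, average="macro", labels=list(range(6)))
print(round(P, 3))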

 

Baseline Results

The baseline network is an ImageNet pre-trained ResNet-50 and its performance on the validation and test sets (real data from Aff-Wild2) is:
0.50 and 0.30, respectively.
Note that the synthetic data have been generated from subjects of the validation set, but not of the test set.
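For reference, an ImageNet-pretrained ResNet-50 adapted to the six expression classes can be set up as in the following illustrative sketch (not the organisers' baseline code).

import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-50 with its 1000-way classifier replaced by a
# 6-way head for the basic expressions; illustrative only.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 6)

logits = model(torch.randn(2, 3, 224, 224))   # shape (2, 6)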

 

 

General Rules

• Participants can contribute to any of the 2 Challenges.

• In order to take part in any Challenge, participants will have to register as described above.

• Participants can use scene/background/body pose etc. information along with the face information.

• Any face detector, whether commercial or academic, can be used in the Challenge. The paper accompanying the challenge result submission should contain clear details of the detectors/libraries used.

• The participants are free to use any pre-trained network, as long as it does not use Aff-Wild2's annotations. 

 

 

Important Dates: (UPDATED)

  • Call for participation announced, team registration begins, data available:       

 May 18, 2022

  • Test set release:                                                                                                 

 July 15, 2022

  • Final submission deadline (Results, Code and ArXiv paper):

 July 21, 2022

  • Winners Announcement:      

 July 24, 2022

  • Final paper submission deadline:

 July 28, 2022

  • Review decisions sent to authors; Notification of acceptance:

  August 17, 2022

  • Camera ready version deadline:

  August 22, 2022

   

 

Regarding the database:

• All the training/validation/testing images of the dataset have been obtained from YouTube. We are not responsible for the content or the meaning of these images.

• Participants will agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes any portion of the images or any portion of derived data. They will also agree not to further copy, publish or distribute any portion of the annotations of the dataset. The only exception is that copies of the dataset may be made for internal use at a single site within the same organization.

• We reserve the right to terminate participants’ access to the dataset at any time.

• If a participant’s face is displayed in any video and they want it to be removed, they can email us at any time.

 

 

References

 

If you use the above data, you must cite all of the following papers (and the white paper that will be distributed at a later stage): 

 

  • D. Kollias: "ABAW: Learning from Synthetic Data & Multi-Task Learning Challenges", 2022

@article{kollias2022abaw, title={ABAW: Learning from Synthetic Data \& Multi-Task Learning Challenges}, author={Kollias, Dimitrios}, journal={arXiv preprint arXiv:2207.01138}, year={2022} }


  • D. Kollias: "ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges", IEEE CVPR, 2022

@inproceedings{kollias2022abaw, title={Abaw: Valence-arousal estimation, expression recognition, action unit detection \& multi-task learning challenges}, author={Kollias, Dimitrios}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={2328--2336}, year={2022} } 

 

  • D. Kollias, et al.: "Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study", 2021

@article{kollias2021distribution, title={Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2105.03790}, year={2021} }

 

  • D. Kollias, S. Zafeiriou: "Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework", 2021

@article{kollias2021affect, title={Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2103.15792}, year={2021}}

 

  • D. Kollias, et al.: "Deep neural network augmentation: Generating faces for affect analysis". International Journal of Computer Vision (IJCV), 2020

@article{kollias2020deep, title={Deep neural network augmentation: Generating faces for affect analysis}, author={Kollias, Dimitrios and Cheng, Shiyang and Ververas, Evangelos and Kotsia, Irene and Zafeiriou, Stefanos}, journal={International Journal of Computer Vision}, volume={128}, number={5}, pages={1455--1484}, year={2020}, publisher={Springer}}

 

  • D. Kollias, S. Zafeiriou: "Va-stargan: Continuous affect generation". ACIVS, 2020

@inproceedings{kollias2020va, title={Va-stargan: Continuous affect generation}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, booktitle={International Conference on Advanced Concepts for Intelligent Vision Systems}, pages={227--238}, year={2020}, organization={Springer}}

 

  • D. Kollias, S. Zafeiriou: "Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace". BMVC, 2019

@article{kollias2019expression, title={Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.04855}, year={2019}}

 

  • D. Kollias, et al.: "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond". International Journal of Computer Vision (IJCV), 2019

@article{kollias2019deep, title={Deep affect prediction in-the-wild: Aff-wild database and challenge, deep architectures, and beyond}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Schuller, Bj{\"o}rn and Kotsia, Irene and Zafeiriou, Stefanos}, journal={International Journal of Computer Vision}, pages={1--23}, year={2019}, publisher={Springer} }

 

  • D. Kollias, et al.: "Photorealistic facial synthesis in the dimensional affect space". ECCV, 2018

@inproceedings{kollias2018photorealistic, title={Photorealistic facial synthesis in the dimensional affect space}, author={Kollias, Dimitrios and Cheng, Shiyang and Pantic, Maja and Zafeiriou, Stefanos}, booktitle={Proceedings of the European Conference on Computer Vision (ECCV) Workshops}, pages={0--0}, year={2018}}

 

  • S. Zafeiriou, et al.: "Aff-Wild: Valence and Arousal in-the-wild Challenge". IEEE CVPR, 2017

@inproceedings{zafeiriou2017aff, title={Aff-wild: Valence and arousal ‘in-the-wild’challenge}, author={Zafeiriou, Stefanos and Kollias, Dimitrios and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Kotsia, Irene}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1980--1987}, year={2017}, organization={IEEE} }

 

  • D. Kollias, et al.: "Recognition of affect in the wild using deep neural networks". CVPR, 2017

@inproceedings{kollias2017recognition, title={Recognition of affect in the wild using deep neural networks}, author={Kollias, Dimitrios and Nicolaou, Mihalis A and Kotsia, Irene and Zhao, Guoying and Zafeiriou, Stefanos}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1972--1979}, year={2017}, organization={IEEE} }

 

 

 

Sponsors:

 

The Affective Behavior Analysis in-the-wild Challenge has been generously supported by:

 

  • Queen Mary University of London


 

  • Imperial College London

 

 

 

 

 
