CVPR 2023: 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW)


 

The 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) will be held in conjunction with the IEEE Computer Vision and Pattern Recognition Conference (CVPR), 2023.
The event will take place on the morning of 19 June.

The ABAW Workshop and Competition is a continuation of the respective Workshops and Competitions held at the ECCV 2022, IEEE CVPR 2022, ICCV 2021, IEEE FG 2020 (a), IEEE FG 2020 (b) and IEEE CVPR 2017 Conferences.

 

The ABAW Workshop and Competition has a unique aspect of fostering cross-pollination of different disciplines, bringing together experts (from academia, industry, and government) and researchers of mobile and ubiquitous computing, computer vision and pattern recognition, artificial intelligence and machine learning, multimedia, robotics, HCI, ambient intelligence and psychology. The diversity of human behavior, the richness of multi-modal data that arises from its analysis, and the multitude of applications that demand rapid progress in this area ensure that our events provide a timely and relevant discussion and dissemination platform. 

 

Organisers

Dimitrios Kollias, Queen Mary University of London, UK    d.kollias@qmul.ac.uk

Stefanos Zafeiriou, Imperial College London, UK    s.zafeiriou@imperial.ac.uk

Panagiotis Tzirakis, Hume AI    panagiotis@hume.ai

Alice Baird, Hume AI    alice@hume.ai

Alan Cowen, Hume AI    alan@hume.ai

 

 

The Workshop 

 

Scope

This Workshop tackles the problem of affective behavior analysis in-the-wild, which is a major targeted characteristic of HCI systems used in real-life applications. The goal is to create machines and robots that are capable of understanding people's feelings, emotions and behaviors and can thus interact with them in a 'human-centered' and engaging manner, effectively serving as their digital assistants. This interaction should not depend on the respective context, nor on the human's age, sex, ethnicity, educational level, profession, or social position. As a result, the development of intelligent systems able to analyze human behaviors in-the-wild can contribute to the generation of trust, understanding and closeness between humans and machines in real-life environments.

 

Representing human emotions has been a basic topic of research. The most frequently used emotion representation is the categorical one, comprising the six basic categories, i.e., Anger, Disgust, Fear, Happiness, Sadness and Surprise, plus the Neutral state. Discrete emotion representation can also be described in terms of the Facial Action Coding System (FACS) model, in which all possible facial actions are described in terms of Action Units (AUs). Finally, the dimensional model of affect has been proposed as a means to distinguish between subtly different displays of affect and to encode small changes in the intensity of each emotion on a continuous scale. The 2-D Valence and Arousal Space (VA-Space) is the most common dimensional emotion representation; valence indicates how positive or negative an emotional state is, whilst arousal indicates how passive or active it is.

 

To this end, the developed systems should automatically sense and interpret facial and audio-visual signals relevant to emotions, traits, appraisals and intentions. Furthermore, since real-world settings entail uncontrolled conditions, where subjects operate in a diversity of contexts and environments, systems that perform automatic analysis of human behavior and emotion recognition should be robust to video recording conditions, diversity of contexts and timing of display.

 

Recently, considerable attention has been directed towards understanding and mitigating algorithmic bias in models. In the context of in-the-wild generalisation, subgroup distribution shift is a challenging problem: a difference in performance is observed across subgroups (e.g., demographic sub-populations of the training data), which can degrade the performance of a model deployed in-the-wild. The aim is to build fair machine learning models that perform well on all subgroups and improve in-the-wild generalisation.

 

All these goals are scientifically and technically challenging.

  

Call for participation: 

This Workshop will solicit contributions on the recent progress of recognition, analysis, generation-synthesis and modelling of face, body and gesture, while embracing the most advanced systems available for face and gesture analysis, particularly in-the-wild (i.e., in unconstrained environments) and across modalities (e.g., from face to voice). In parallel, this Workshop will solicit contributions towards building fair models that perform well on all subgroups and improve in-the-wild generalisation.

 

Original high-quality contributions, including:

 

- databases or

- surveys and comparative studies or

- Artificial Intelligence / Machine Learning / Deep Learning / AutoML / (Data-driven or physics-based) Generative

Modelling Methodologies (either Uni-Modal or Multi-Modal; Uni-Task or Multi-Task ones)

 

are solicited on the following topics:

 

i) "in-the-wild" facial expression (basic, compound or other) or micro-expression analysis,

ii) "in-the-wild" facial action unit detection,

iii) "in-the-wild" valence-arousal estimation,

iv) "in-the-wild" physiological-based (e.g., EEG, EDA) affect analysis,

v) domain adaptation for affect recognition in the previous 4 cases,

vi) "in-the-wild" face recognition, detection or tracking,

vii) "in-the-wild" body recognition, detection or tracking,

viii) "in-the-wild" gesture recognition or detection,

ix) "in-the-wild" pose estimation or tracking,

x) "in-the-wild" activity recognition or tracking,

xi) "in-the-wild" lip reading and voice understanding,

xii) "in-the-wild" face and body characterization (e.g., behavioral understanding),

xiii) "in-the-wild" characteristic analysis (e.g., gait, age, gender, ethnicity recognition),

xiv) "in-the-wild" group understanding via social cues (e.g., kinship, non-blood relationships, personality),

xv) editing, manipulation, image-to-image translation, style mixing, interpolation, inversion and semantic diffusion for the aforementioned cases,

xvi) subgroup distribution shift analysis in affect recognition

xvii) subgroup distribution shift analysis in face and body behaviour

xviii) subgroup distribution shift analysis in characteristic analysis

 

 

Accepted papers will appear in the CVPR 2023 proceedings.

 

Workshop Important Dates: 

  • Paper Submission Deadline:              

    March 24, 2023

  • Review decisions sent to authors; Notification of acceptance:

    April 3, 2023 

  • Camera ready version:

    April 8, 2023

   

 

Submission Information

Papers should adhere to the submission guidelines and format of the main CVPR 2023 proceedings. Please see the Submission Guidelines section here.

All papers should be submitted via the CMT website (the link will be updated in due time).

All accepted manuscripts will be part of CVPR 2023 conference proceedings. 

 

 

 

The Competition

 

The Competition is a continuation of the ABAW Competitions held last year at ECCV and CVPR, the year before at ICCV, and the year before that at IEEE FG. It is split into the four Challenges described below, which constitute a significant step forward compared to previous events.

Participants are invited to participate in at least one of these Challenges.

 

 

1) Valence-Arousal (VA) Estimation Challenge

 

Database

For this Challenge, an augmented version of the Aff-Wild2 database will be used. This database is audiovisual (A/V) and in total consists of 594 videos of around 3M frames of 584 subjects annotated in terms of valence and arousal.

 

Rules

Only uni-task solutions will be accepted for this Challenge; this means that teams should develop solutions only for the valence-arousal estimation task.
Teams are allowed to use any (publicly available or not) pre-trained model, as long as it has not been pre-trained on Aff-Wild2. The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Classification, AU detection, Face Recognition). However, when refining the model and developing the methodology, teams should not use any other annotations (expressions or AUs): the methodology should be purely uni-task, using only the VA annotations. This means that teams are allowed to use other databases' VA annotations, generated/synthetic data, affine transformations, or data augmentation techniques in general (e.g., our former work) for increasing the size of the training dataset.

 

 

Performance Assessment

The performance measure is the mean Concordance Correlation Coefficient (CCC) of valence and arousal:
P = 0.5 * (CCC_arousal + CCC_valence)
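As an informal sketch (not the official evaluation code), the CCC and the resulting challenge score can be computed with NumPy as follows; the function names are ours:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient between two 1-D arrays."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    # CCC = 2*cov / (var_t + var_p + (mean difference)^2)
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

def va_score(valence_true, valence_pred, arousal_true, arousal_pred):
    """Challenge metric: mean CCC of valence and arousal."""
    return 0.5 * (ccc(valence_true, valence_pred)
                  + ccc(arousal_true, arousal_pred))
```

Unlike Pearson's correlation, the CCC also penalises shifts in mean and scale between predictions and ground truth, so a perfectly correlated but biased prediction scores below 1.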

 

Baseline Results

The baseline network is a ResNet-50 pre-trained on ImageNet and its performance on the validation set is:
CCC_valence = 0.24
CCC_arousal = 0.20
P = 0.5 * (CCC_arousal + CCC_valence) = 0.22

 

 

 

2) Expression (Expr) Classification Challenge

 

Database

For this Challenge, the Aff-Wild2 database will be used. This database is audiovisual (A/V) and in total consists of 548 videos of around 2.7M frames, annotated in terms of the 6 basic expressions (i.e., anger, disgust, fear, happiness, sadness, surprise), plus the neutral state, plus a category 'other' that denotes expressions/affective states other than the 6 basic ones.

 

Rules

Only uni-task solutions will be accepted for this Challenge; this means that teams should develop solutions only for the expression classification task.
Teams are allowed to use any (publicly available or not) pre-trained model, as long as it has not been pre-trained on Aff-Wild2. The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Classification, AU detection, Face Recognition). However, when refining the model and developing the methodology, teams should not use any other annotations (VA or AUs): the methodology should be purely uni-task, using only the Expr annotations. This means that teams are allowed to use other databases' Expr annotations, generated/synthetic data (e.g., the data provided in the ECCV 2022 run of the ABAW Challenge), affine transformations, or data augmentation techniques in general (e.g., our former work) for increasing the size of the training dataset.

 

Performance Assessment

The performance measure is the average F1 Score across all 8 categories:
P = ∑ (F1) / 8
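As a minimal illustration (not the official evaluation code), the macro F1 metric can be sketched in NumPy; the function name and the integer label encoding (0-7 for the 8 categories) are our assumptions:

```python
import numpy as np

def macro_f1(y_true, y_pred, num_classes=8):
    """Average F1 score across all classes (macro F1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    f1s = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        denom = 2 * tp + fp + fn
        # F1 = 2*TP / (2*TP + FP + FN); 0 if the class never appears
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))
```

The same function with `num_classes=12` computes the metric of the AU Detection Challenge (applied per AU). Because the average is unweighted, rare categories count as much as frequent ones.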

 

Baseline Results

The baseline network is a pre-trained VGGFACE (with fixed convolutional weights) and its performance on the validation set is:
P = 0.23

 

 

 

3) Action Unit (AU) Detection Challenge

 

Database

For this Challenge, the Aff-Wild2 database will be used. This database is audiovisual (A/V) and in total consists of 547 videos of around 2.7M frames that are annotated in terms of 12 action units, namely AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU15, AU23, AU24, AU25 and AU26.

 

Rules

Only uni-task solutions will be accepted for this Challenge; this means that teams should develop solutions only for the action unit detection task.
Teams are allowed to use any (publicly available or not) pre-trained model, as long as it has not been pre-trained on Aff-Wild2. The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Classification, AU detection, Face Recognition). However, when refining the model and developing the methodology, teams should not use any other annotations (VA or Expr): the methodology should be purely uni-task, using only the AU annotations. This means that teams are allowed to use other databases' AU annotations, generated/synthetic data, affine transformations, or data augmentation techniques in general (e.g., our former work) for increasing the size of the training dataset.

 

Performance Assessment

The performance measure is the average F1 Score across all 12 categories:
P = ∑ (F1) / 12

 

Baseline Results

The baseline network is a pre-trained VGGFACE (with fixed convolutional weights) and its performance on the validation set is:
P = 0.39

 

 

 

4) Emotional Reaction Intensity (ERI) Estimation Challenge

 

Database

For this Challenge, the Hume-Reaction dataset will be used. It consists of subjects reacting to a wide range of emotional video-based stimuli. It is multimodal, comprising about 75 hours of video recordings, recorded via webcam in the subjects' homes. In total, 2222 subjects from two cultures, South Africa and the United States, were recorded. Each sample within the dataset has been self-annotated by the subjects themselves for the intensity of 7 emotional experiences on a scale from 1 to 100: Adoration, Amusement, Anxiety, Disgust, Empathic Pain, Fear, and Surprise.

 

 

Performance Assessment

The performance measure is the average Pearson's correlation coefficient (ρ) across the 7 emotional reactions:
P = ∑ (ρ) / 7
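As an informal sketch (not the official evaluation code), the average Pearson correlation across emotional reactions can be computed with NumPy; the function name and the (samples × 7) array layout are our assumptions:

```python
import numpy as np

def mean_pearson(y_true, y_pred):
    """Mean Pearson correlation across emotion dimensions.

    y_true, y_pred: arrays of shape (num_samples, 7),
    one column per emotional reaction.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Pearson rho for each emotion column, then the unweighted mean
    rhos = [np.corrcoef(y_true[:, k], y_pred[:, k])[0, 1]
            for k in range(y_true.shape[1])]
    return float(np.mean(rhos))
```

Note that, unlike the CCC used in the VA Challenge, Pearson's ρ is invariant to linear rescaling of the predictions, so only the ordering and relative spacing of the predicted intensities matter.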

 

Baseline Results

The audio baseline network is DeepSpectrum (with a DenseNet121 CNN backbone pre-trained on ImageNet) and its performance on the validation set is:

P = 0.1087

The visual baseline network is a ResNet50 trained on VGGFace2 and its performance on the validation set is:

P = 0.2488

 

 

 

How to participate

In order to participate, teams will have to register.


If you want to participate in any of the first 3 Challenges (VA Estimation, Expr Classification, or AU Detection), you should follow the procedure below for registration:

The lead researcher should send an email from their official address (no personal emails will be accepted) to d.kollias@qmul.ac.uk with:
i) subject "5th ABAW Competition: Team Registration";
ii) this EULA (if the team is composed of only academics) or this EULA (if the team has at least one member coming from industry), filled in, signed and attached;
iii) the lead researcher's official academic/industrial website; the lead researcher cannot be a student (UG/PG/Ph.D.);
iv) the emails of each team member, each on a separate line in the body of the email;
v) the team's name;
vi) the name and email address of the point of contact (the team member who will be the main point of contact for future communications, data access, etc.).

Each team may have a maximum of 8 members.

As a reply, you will receive access to the dataset's cropped/cropped-aligned images and annotations and other important information.

 

If you want to participate in the 4th Challenge (ERI Estimation) please email competitions@hume.ai with the following information:

i) subject "5th ABAW Competition: Team Registration";
ii) the name, email and official academic/industrial website of the lead researcher; the lead researcher cannot be a student (UG/PG/Ph.D.);
iii) the names and emails of each team member, each on a separate line in the body of the email;
iv) the team's name;
v) the name and email address of the point of contact (the team member who will be the main point of contact for future communications, data access, etc.).

A reply with an EULA to sign will be sent to all team members. Once the EULA has been signed by all team members, a link to the data will be shared.

 

 

General Information

At the end of the Challenges, each team will have to send us:

i) a link to a GitHub repository where their solution/source code is stored,
ii) a link to an arXiv paper of 2-8 pages describing their proposed methodology, the data used and the results.

Each team will also need to upload their test set predictions on an evaluation server (details will be circulated when the test set is released).

After that, the winner of each Challenge, along with a leaderboard, will be announced.

There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop. All other teams are also welcome to submit paper(s) describing their solutions and final results. All accepted papers will be part of the CVPR 2023 proceedings.

 

The Competition's white paper (describing the Competition, the data, the baselines and results) will be ready at a later stage and will be distributed to the participating teams.

 

 

 

General Rules

• Participants can contribute to any of the 4 Challenges.

• In order to take part in any Challenge, participants will have to register as described above.

• Participants can use audio/scene/background/body pose etc. information along with the face information.

• Any face detector, whether commercial or academic, can be used in the challenge. The paper accompanying the challenge result submission should contain clear details of the detectors/libraries used.

• For the first 3 Challenges, the participants are free to use any pre-trained network, as long as it has not used Aff-Wild2's annotations.

 • The top performing teams will have to share their solution (code, model weights, executables) with the organizers upon completion of the challenge, so that the organizers can verify the results and prevent cheating or rule violations.

 

Important Dates:

  • Call for participation announced, team registration begins, data available:       

 January 13, 2023

  • Test set release:                                                                                                 

 March 11, 2023

  • Final submission deadline (Predictions, Code and ArXiv paper):

 March 18, 2023

  • Winners Announcement:      

 March 19, 2023

  • Final paper submission deadline:

  March 24, 2023

  • Review decisions sent to authors; Notification of acceptance:

  April 3, 2023

  • Camera ready version deadline:

  April 8, 2023

   

 

 

 

References

 

If you use the above data, you must cite all of the following papers (and the white paper that will be distributed at a later stage):

 

  • D. Kollias: "ABAW: Learning from Synthetic Data & Multi-Task Learning Challenges", ECCV, 2022

@article{kollias2022abaw, title={ABAW: Learning from Synthetic Data \& Multi-Task Learning Challenges}, author={Kollias, Dimitrios}, journal={arXiv preprint arXiv:2207.01138}, year={2022} }

 

  • D. Kollias: "ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges", IEEE CVPR, 2022

@inproceedings{kollias2022abaw, title={Abaw: Valence-arousal estimation, expression recognition, action unit detection \& multi-task learning challenges}, author={Kollias, Dimitrios}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={2328--2336}, year={2022} } 

 

  • D. Kollias, et al.: "Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study", 2021

@article{kollias2021distribution, title={Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2105.03790}, year={2021} }

 

  • D. Kollias, et al.: "Analysing Affective Behavior in the second ABAW2 Competition", ICCV, 2021

@inproceedings{kollias2021analysing, title={Analysing affective behavior in the second abaw2 competition}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, pages={3652--3660}, year={2021}}

 

  • D. Kollias, S. Zafeiriou: "Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework", 2021

@article{kollias2021affect, title={Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2103.15792}, year={2021}}

 

  • D. Kollias, et al.: "Analysing Affective Behavior in the First ABAW 2020 Competition", IEEE FG, 2020

@inproceedings{kollias2020analysing, title={Analysing Affective Behavior in the First ABAW 2020 Competition}, author={Kollias, D and Schulc, A and Hajiyev, E and Zafeiriou, S}, booktitle={2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)(FG)}, pages={794--800}}

 

  • D. Kollias, S. Zafeiriou: "Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace". BMVC, 2019

@article{kollias2019expression, title={Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.04855}, year={2019}}

 

  • D. Kollias, et al.: "Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network", 2019

@article{kollias2019face,title={Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.11111}, year={2019}}

 

  • D. Kollias, et al.: "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond", International Journal of Computer Vision (IJCV), 2019

@article{kollias2019deep, title={Deep affect prediction in-the-wild: Aff-wild database and challenge, deep architectures, and beyond}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Schuller, Bj{\"o}rn and Kotsia, Irene and Zafeiriou, Stefanos}, journal={International Journal of Computer Vision}, pages={1--23}, year={2019}, publisher={Springer} }

 

  • S. Zafeiriou, et al.: "Aff-Wild: Valence and Arousal in-the-wild Challenge", IEEE CVPR, 2017

@inproceedings{zafeiriou2017aff, title={Aff-wild: Valence and arousal ‘in-the-wild’challenge}, author={Zafeiriou, Stefanos and Kollias, Dimitrios and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Kotsia, Irene}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1980--1987}, year={2017}, organization={IEEE} }