CVPR 2023: 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW)


 

The 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) will be held in conjunction with the IEEE Computer Vision and Pattern Recognition Conference (CVPR) 2023. 

The event will take place on 19 June, from 8am until 12.30pm (GMT-7 time zone) and will be a hybrid event (with both in-person and online attendance).


The ABAW Workshop and Competition is a continuation of the respective Workshops and Competitions held at the ECCV 2022, IEEE CVPR 2022, ICCV 2021, IEEE FG 2020 (a), IEEE FG 2020 (b) and IEEE CVPR 2017 Conferences.

 

The ABAW Workshop and Competition has the unique aspect of fostering cross-pollination across disciplines, bringing together experts (from academia, industry, and government) and researchers in mobile and ubiquitous computing, computer vision and pattern recognition, artificial intelligence and machine learning, multimedia, robotics, HCI, ambient intelligence and psychology. The diversity of human behavior, the richness of multi-modal data that arises from its analysis, and the multitude of applications that demand rapid progress in this area ensure that our events provide a timely and relevant discussion and dissemination platform.

 

Workshop's Agenda

The workshop's agenda can be found here. The workshop will be held on Monday, June 19 and will be a hybrid event (with both in-person and online attendance).

Please note that all displayed times are in local Vancouver Time (i.e. GMT-7).

 

Organisers

Dimitrios Kollias, Queen Mary University of London, UK (d.kollias@qmul.ac.uk)

Stefanos Zafeiriou, Imperial College London, UK (s.zafeiriou@imperial.ac.uk)

Panagiotis Tzirakis, Hume AI (panagiotis@hume.ai)

Alice Baird, Hume AI (alice@hume.ai)

Alan Cowen, Hume AI (alan@hume.ai)

 

 

Keynote Speakers

 

Hatice Gunes 


Hatice Gunes is a Professor of Affective Intelligence and Robotics (AFAR) and the Director of the AFAR Lab at the University of Cambridge's Department of Computer Science and Technology. Her expertise is in the areas of affective computing and social signal processing, cross-fertilising research in multimodal interaction, computer vision, machine learning, social robotics and human-robot interaction. She has published over 160 scientific papers, with most recent works focussing on graph representation for personality and facial affect recognition, continual learning for facial expression recognition, fairness, affective robotics, and longitudinal HRI for mental wellbeing. Prof Gunes has served as an Associate Editor for IEEE Transactions on Affective Computing, IEEE Transactions on Multimedia, and Image and Vision Computing Journal, and has guest edited many Special Issues, the latest ones being the 2022 Int'l Journal of Social Robotics Special Issue on Embodied Agents for Wellbeing, the 2022 Frontiers in Robotics and AI Special Issue on Lifelong Learning and Long-Term Human-Robot Interaction, and the 2021 IEEE Transactions on Affective Computing Special Issue on Automated Perception of Human Affect from Longitudinal Behavioural Data. Other research highlights include the Outstanding PC Award at ACM/IEEE HRI'23, RSJ/KROS Distinguished Interdisciplinary Research Award Finalist at IEEE RO-MAN'21, Distinguished PC Award at IJCAI'21, Best Paper Award Finalist at IEEE RO-MAN'20, Finalist for the 2018 Frontiers Spotlight Award, Outstanding Paper Award at IEEE FG'11, and Best Demo Award at IEEE ACII'09. Prof Gunes is the former President of the Association for the Advancement of Affective Computing (2017-2019), is/was the General Co-Chair of ACM ICMI 2024 and ACII 2019, and the Program Co-Chair of ACM/IEEE HRI 2020 and IEEE FG 2017. She was the Chair of the Steering Board of IEEE Transactions on Affective Computing (2017-2019) and a member of the Human-Robot Interaction Steering Committee (2018-2021). In 2019 she was awarded a prestigious EPSRC Fellowship to investigate adaptive robotic emotional intelligence for well-being (2019-2025) and was named a Faculty Fellow of the Alan Turing Institute, the UK's national centre for data science and artificial intelligence (2019-2021). Prof Gunes is a Staff Fellow and Director of Studies in Computer Science at Trinity Hall, a Senior Member of the IEEE, and a member of the AAAC.


Agata Lapedriza 


Agata Lapedriza is a Principal Research Scientist at the Institute for Experiential AI (EAI, Northeastern University, Boston) and a Professor at Universitat Oberta de Catalunya (UOC, Barcelona). At EAI she leads research at the intersection of AI for Health and Responsible AI. At UOC she is the head of the "AI for Human Well-being" lab. Her research interests are related to Computer Vision, Affective Computing, Social Robotics, and Explainable AI. She has been collaborating with the Massachusetts Institute of Technology (MIT) since 2012. From 2012 to 2015 she was a Visiting Professor at MIT CSAIL, where she worked on Object Detection, Scene Recognition, and Explainability. From 2017 to 2020 she was a Research Affiliate at the MIT Media Lab, where she worked on Emotion Perception, Emotionally-Aware Dialog Systems, and Social Robotics. In 2020 she spent one year as a Visiting Faculty at Google (Cambridge, USA).

 

 

 

The Workshop 

 

Scope

This Workshop tackles the problem of affective behavior analysis in-the-wild, which is a major targeted characteristic of HCI systems used in real-life applications. The goal is to create machines and robots that are capable of understanding people's feelings, emotions and behaviors, and can thus interact with them in a 'human-centered' and engaging manner, effectively serving as their digital assistants. This interaction should not be dependent on the respective context, nor on the human's age, sex, ethnicity, educational level, profession, or social position. As a result, the development of intelligent systems able to analyze human behaviors in-the-wild can contribute to the generation of trust, understanding and closeness between humans and machines in real-life environments.

 

Representing human emotions has been a basic topic of research. The most frequently used emotion representation is the categorical one, comprising the seven basic categories, i.e., Anger, Disgust, Fear, Happiness, Sadness, Surprise and Neutral. Discrete emotions can also be described in terms of the Facial Action Coding System (FACS) model, in which all possible facial actions are described in terms of Action Units. Finally, the dimensional model of affect has been proposed as a means to distinguish between subtly different displays of affect and encode small changes in the intensity of each emotion on a continuous scale. The 2-D Valence and Arousal Space (VA-Space) is the most commonly used dimensional emotion representation; valence shows how positive or negative an emotional state is, whilst arousal shows how passive or active it is.

 

To this end, the developed systems should automatically sense and interpret facial and audio-visual signals relevant to emotions, traits, appraisals and intentions. Furthermore, since real-world settings entail uncontrolled conditions, where subjects operate in a diversity of contexts and environments, systems that perform automatic analysis of human behavior and emotion recognition should be robust to video recording conditions, diversity of contexts and timing of display.

 

Recently, a lot of attention has been directed towards understanding and mitigating algorithmic bias in models. In the context of in-the-wild generalisation, subgroup distribution shift is a challenging problem. In this scenario, a difference in performance is observed across subgroups (e.g. demographic sub-populations of the training data), which can degrade the performance of a model deployed in-the-wild. The aim is to build fair machine learning models that perform well on all subgroups and improve in-the-wild generalisation.

 

All these goals are scientifically and technically challenging.

  

Call for participation: 

This Workshop will solicit contributions on the recent progress of recognition, analysis, generation-synthesis and modelling of face, body, and gesture, while embracing the most advanced systems available for face and gesture analysis, particularly in-the-wild (i.e., in unconstrained environments) and across modalities such as face to voice. In parallel, this Workshop will solicit contributions towards building fair models that perform well on all subgroups and improve in-the-wild generalisation.

 

Original high-quality contributions, including:

 

- databases or

- surveys and comparative studies or

- Artificial Intelligence / Machine Learning / Deep Learning / AutoML / (Data-driven or physics-based) Generative Modelling Methodologies (either Uni-Modal or Multi-Modal; Uni-Task or Multi-Task ones)

 

are solicited on the following topics:

 

i) "in-the-wild" facial expression (basic, compound or other) or micro-expression analysis,

ii) "in-the-wild" facial action unit detection,

iii) "in-the-wild" valence-arousal estimation,

iv) "in-the-wild" physiological-based (e.g.,EEG, EDA) affect analysis,

v) domain adaptation for affect recognition in the previous 4 cases

vi) "in-the-wild" face recognition, detection or tracking,

vii) "in-the-wild" body recognition, detection or tracking,

viii) "in-the-wild" gesture recognition or detection,

ix) "in-the-wild" pose estimation or tracking,

x) "in-the-wild" activity recognition or tracking,

xi) "in-the-wild" lip reading and voice understanding,

xii) "in-the-wild" face and body characterization (e.g., behavioral understanding),

xiii) "in-the-wild" characteristic analysis (e.g., gait, age, gender, ethnicity recognition),

xiv) "in-the-wild" group understanding via social cues (e.g., kinship, non-blood relationships, personality) 

xv) editing, manipulation, image-to-image translation, style mixing, interpolation, inversion and semantic diffusion for the afore mentioned cases

xvi) subgroup distribution shift analysis in affect recognition

xvii) subgroup distribution shift analysis in face and body behaviour

xviii) subgroup distribution shift analysis in characteristic analysis

 

 

Accepted papers will appear in the CVPR 2023 proceedings.

 

Workshop Important Dates: 

  • Paper Submission Deadline: March 30, 2023

  • Review decisions sent to authors; Notification of acceptance: April 10, 2023

  • Camera ready version: April 14, 2023

 

Submission Information

Papers should adhere to the submission guidelines and style of the main CVPR 2023 proceedings. Please have a look at the Submission Guidelines Section here.

All papers should be submitted via the CMT website.

All accepted manuscripts will be part of CVPR 2023 conference proceedings. 

 

 

 

The Competition

 

The Competition is a continuation of the ABAW Competitions held last year at ECCV 2022 and IEEE CVPR 2022, the year before at ICCV 2021, and the year before that at IEEE FG 2020. It is split into the four Challenges described below. These Challenges will produce a significant step forward when compared to previous events. 

Participants are invited to participate in at least one of these Challenges.

 


Leaderboard


  • Valence-Arousal Estimation Challenge:

In total, 57 Teams participated in the VA Estimation Challenge. 26 Teams submitted their results: 8 Teams made invalid (incomplete) submissions whilst surpassing the baseline, 8 Teams scored lower than the baseline, and 10 Teams scored higher than the baseline and made valid submissions.

 

The winner of this Challenge is SituTech, consisting of: Chuanhe Liu, Xiaolong Liu, Lei Sun, Wenqiang Jiang, Fengyuan Zhang, Yuanyuan Deng, Zhaopei Huang, Liyu Meng, Yuchen Liu (Beijing Seek Truth Data Technology Services Co Ltd). 

 

The runner-up is Netease Fuxi Virtual Human, consisting of: Wei Zhang, Feng Qiu, Haodong Sun, Suzhen Wang, Zhimeng Zhang, Bowen Ma, Rudong An, Yu Ding (Netease Fuxi AI Lab). 

 

It is worth mentioning that the difference in the performance between the winner of this Challenge and the runner-up is very small (0.6414 vs 0.6372). 

 

Let us also mention that both Teams have participated in our former Competitions at ECCV 2022, IEEE CVPR 2022 and ICCV 2021 and have ranked multiple times in the first, second and third positions in the Valence-Arousal Estimation, Expression Classification, Action Unit Detection and Multi-Task Learning Challenges!

 
 
 
  • Expression Classification Challenge:

In total, 67 Teams participated in the Expression Classification Challenge. 43 Teams submitted their results: 17 Teams made invalid (incomplete) submissions whilst surpassing the baseline, 13 Teams scored lower than the baseline, and 13 Teams scored higher than the baseline and made valid submissions.


The winner of this Challenge is Netease Fuxi Virtual Human consisting of: Wei Zhang, Feng Qiu, Haodong Sun, Suzhen Wang, Zhimeng Zhang, Bowen Ma, Rudong An, Yu Ding (Netease Fuxi AI Lab).


The runner-up is SituTech, consisting of: Chuanhe Liu, Xinjie Zhang, Xiaolong Liu, Tenggan Zhang, Liyu Meng, Yuchen Liu, Yuanyuan Deng, Wenqiang Jiang (Beijing Seek Truth Data Technology Services Co Ltd).


It is worth mentioning that the difference in the performance between the winner of this Challenge and the runner-up is quite small (0.4121 vs 0.4072).




  • Action Unit Detection Challenge:

In total, 60 Teams participated in the Action Unit Detection Challenge. 37 Teams submitted their results: 12 Teams made invalid (incomplete) submissions whilst surpassing the baseline, 13 Teams scored lower than the baseline, and 12 Teams scored higher than the baseline and made valid submissions.


 

The winner of this Challenge is Netease Fuxi Virtual Human consisting of: Wei Zhang, Feng Qiu, Haodong Sun, Suzhen Wang, Zhimeng Zhang, Bowen Ma, Rudong An, Yu Ding (Netease Fuxi AI Lab).


The runner-up is SituTech, consisting of: Chuanhe Liu, Wenqiang Jiang, Liyu Meng, Xiaolong Liu, Yuanyuan Deng (Beijing Seek Truth Data Technology Services Co Ltd).

 

 

  • Emotional Reaction Intensity (ERI) Estimation Challenge:
 

In total, 18 Teams participated in the Emotional Reaction Intensity (ERI) Estimation Challenge. 9 Teams submitted their results, with 8 of them surpassing the baseline and 7 of them making valid submissions.

 

The winner of this Challenge is HFUT-CVers consisting of: Jia Li, Yin Chen, Xuesong Zhang, Jiantao Nie, Ziqiang Li, Yangchen Yu, Richang Hong, Meng Wang (Hefei University of Technology, China).

 

The runner-up is USTC-IAT-United, consisting of: Jun Yu, Jichao Zhu, Wangyuan Zhu, Zhongpeng Cai, Guochen Xie, Renda Li, Gongpeng Zhao (University of Science and Technology of China).

 

 

The leaderboards for all Challenges can be found below:  

 

CVPR2023_ABAW_Leaderboard  (first 3 Challenges)

CVPR2023_ABAW_ERI_Leaderboard   (ERI Estimation Challenge)

 

Congratulations to all teams, winning and non-winning ones! Thank you very much for participating in our Competition.

 

All teams are invited to submit papers describing their methodologies (please see the Submission Information section above). All accepted papers will be part of the IEEE CVPR 2023 proceedings.

We are looking forward to receiving your submissions! 

 

 

 

1) Valence-Arousal (VA) Estimation Challenge

 

Database

For this Challenge, an augmented version of the Aff-Wild2 database will be used. This database is audiovisual (A/V) and in total consists of 594 videos of around 3M frames of 584 subjects annotated in terms of valence and arousal.

 

Rules

Only uni-task solutions will be accepted for this Challenge; this means that the teams should only develop uni-task (valence-arousal estimation task) solutions.
Teams are allowed to use any available pre-trained model (publicly available or not), as long as it has not been pre-trained on Aff-Wild2. The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Classification, AU detection, Face Recognition). However, when refining the model and developing the methodology, teams should not use any other annotations (expressions or AUs): the methodology should be purely uni-task, using only the VA annotations. This means that teams are allowed to use other databases' VA annotations, generated/synthetic data, affine transformations, or data augmentation techniques in general (e.g. our former work) for increasing the size of the training dataset.
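As a minimal illustration of such data augmentation (only a sketch, assuming a PyTorch/torchvision pipeline and illustrative parameter values; the Challenge does not prescribe any particular library or set of transforms):

import torchvision.transforms as T

# Illustrative augmentation pipeline for cropped face frames (hypothetical
# parameter choices; any equivalent augmentation strategy is allowed by the rules).
train_transform = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                    # mirror faces
    T.RandomAffine(degrees=10,                        # small rotations
                   translate=(0.05, 0.05),            # small shifts
                   scale=(0.9, 1.1)),                 # mild zoom in/out
    T.ColorJitter(brightness=0.2, contrast=0.2),      # photometric jitter
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],           # ImageNet statistics
                std=[0.229, 0.224, 0.225]),
])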

 

 

Performance Assessment

The performance measure is the mean Concordance Correlation Coefficient (CCC) of valence and arousal:
P = 0.5 * (CCC_arousal + CCC_valence)
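For reference, the metric could be computed as in the following sketch (assuming NumPy; the function names are illustrative and this is not the official evaluation script):

import numpy as np

def ccc(y_true, y_pred):
    # Concordance Correlation Coefficient between two 1-D arrays of annotations/predictions.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

def va_score(valence_true, valence_pred, arousal_true, arousal_pred):
    # P = 0.5 * (CCC_arousal + CCC_valence)
    return 0.5 * (ccc(arousal_true, arousal_pred) + ccc(valence_true, valence_pred))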

 

Baseline Results

The baseline network is a ResNet-50 pre-trained on ImageNet and its performance on the validation set is:
CCC_valence = 0.24
CCC_arousal = 0.20
P = 0.5 * (CCC_arousal + CCC_valence) = 0.22
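For illustration only, a comparable baseline could be set up as follows (a sketch assuming PyTorch and a recent torchvision; the organisers' exact architecture, head and training details may differ):

import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-50 with a two-unit regression head (valence, arousal).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 2),  # outputs: [valence, arousal]
    nn.Tanh(),                           # valence/arousal annotations lie in [-1, 1]
)
model.eval()                             # inference mode for this illustration

frame = torch.randn(1, 3, 224, 224)      # one cropped-aligned face frame
valence, arousal = model(frame)[0]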

 

 

 

2) Expression (Expr) Classification Challenge

 

Database

For this Challenge, the Aff-Wild2 database will be used. This database is audiovisual (A/V) and in total consists of 548 videos of around 2.7M frames that are annotated in terms of the 6 basic expressions (i.e., anger, disgust, fear, happiness, sadness, surprise), plus the neutral state, plus a category 'other' that denotes expressions/affective states other than the 6 basic ones.

 

Rules

Only uni-task solutions will be accepted for this Challenge; this means that the teams should only develop uni-task (expression classification task) solutions.
Teams are allowed to use any available pre-trained model (publicly available or not), as long as it has not been pre-trained on Aff-Wild2. The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Classification, AU detection, Face Recognition). However, when refining the model and developing the methodology, teams should not use any other annotations (VA or AUs): the methodology should be purely uni-task, using only the Expr annotations. This means that teams are allowed to use other databases' Expr annotations, generated/synthetic data (e.g. the data provided in the ECCV 2022 run of the ABAW Challenge), affine transformations, or data augmentation techniques in general (e.g. our former work) for increasing the size of the training dataset.

 

Performance Assessment

The performance measure is the average F1 Score across all 8 categories:
P = ∑ (F1) / 8
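A minimal sketch of this metric (assuming scikit-learn and that the 8 classes are indexed 0-7; the function name is illustrative, not the official evaluation script):

from sklearn.metrics import f1_score

def expr_score(y_true, y_pred):
    # y_true / y_pred hold one label per frame, drawn from the 8 classes
    # (6 basic expressions, neutral, other); 'macro' averages the per-class F1 scores.
    return f1_score(y_true, y_pred, average="macro", labels=list(range(8)))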

 

Baseline Results

The baseline network is a pre-trained VGGFACE (with fixed convolutional weights) and its performance on the validation set is:
P = 0.23

 

 

 

3) Action Unit (AU) Detection Challenge

 

Database

For this Challenge, the Aff-Wild2 database will be used. This database is audiovisual (A/V) and in total consists of 547 videos of around 2.7M frames that are annotated in terms of 12 action units, namely AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU15, AU23, AU24, AU25, AU26.

 

Rules

Only uni-task solutions will be accepted for this Challenge; this means that the teams should only develop uni-task (action unit detection task) solutions.
Teams are allowed to use any available pre-trained model (publicly available or not), as long as it has not been pre-trained on Aff-Wild2. The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Classification, AU detection, Face Recognition). However, when refining the model and developing the methodology, teams should not use any other annotations (VA or Expr): the methodology should be purely uni-task, using only the AU annotations. This means that teams are allowed to use other databases' AU annotations, generated/synthetic data, affine transformations, or data augmentation techniques in general (e.g. our former work) for increasing the size of the training dataset.

 

Performance Assessment

The performance measure is the average F1 Score across all 12 categories:
P = ∑ (F1) / 12
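A minimal sketch of this metric in the multi-label setting (assuming scikit-learn; the function name is illustrative, not the official evaluation script):

from sklearn.metrics import f1_score

def au_score(y_true, y_pred):
    # y_true / y_pred: binary arrays of shape (num_frames, 12), one column per AU;
    # 'macro' computes the binary F1 per AU and averages it over the 12 AUs.
    return f1_score(y_true, y_pred, average="macro")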

 

Baseline Results

The baseline network is a pre-trained VGGFACE (with fixed convolutional weights) and its performance on the validation set is:
P = 0.39

 

 

 

4) Emotional Reaction Intensity (ERI) Estimation Challenge

 

Database

For this Challenge, the Hume-Reaction dataset will be used. It consists of subjects reacting to a wide range of emotional video-based stimuli. It is multimodal and comprises about 75 hours of video recordings, recorded via webcam in the subjects' homes. In total, 2222 subjects from two cultures, South Africa and the United States, are recorded. Each sample within the dataset has been self-annotated by the subjects themselves for the intensity of 7 emotional experiences on a scale from 1 to 100: Adoration, Amusement, Anxiety, Disgust, Empathic Pain, Fear, and Surprise.

 

 

Performance Assessment

The performance measure is the average Pearson's correlation coefficient (ρ) across the 7 emotional reactions:
P = ∑ (ρ) / 7
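A minimal sketch of this metric (assuming NumPy and SciPy; the function name is illustrative, not the official evaluation script):

import numpy as np
from scipy.stats import pearsonr

def eri_score(y_true, y_pred):
    # y_true / y_pred: arrays of shape (num_samples, 7), one column per emotional
    # reaction (Adoration, Amusement, Anxiety, Disgust, Empathic Pain, Fear, Surprise).
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean([pearsonr(y_true[:, i], y_pred[:, i])[0] for i in range(7)])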

 

Baseline Results

The audio baseline network is DeepSpectrum (with a DenseNet121 CNN backbone pre-trained on ImageNet) and its performance on the validation set is:

P = 0.1087

The visual baseline network is a ResNet50 trained on VGGFace2 and its performance on the validation set is:

P = 0.2488

 

 

 

How to participate

In order to participate, teams will have to register.


If you want to participate in any of the first 3 Challenges (VA Estimation, Expr Classification, or AU Detection), you should follow the registration procedure below:

The lead researcher should send an email from their official address (no personal emails will be accepted) to d.kollias@qmul.ac.uk with:
i) subject "5th ABAW Competition: Team Registration";
ii) this EULA (if the team is composed of only academics) or this EULA (if the team has at least one member coming from industry) filled in, signed and attached;
iii) the lead researcher's official academic/industrial website; the lead researcher cannot be a student (UG/PG/Ph.D.);
iv) the emails of each team member, each one on a separate line in the body of the email;
v) the team's name;
vi) the point of contact's name and email address (i.e., which member of the team will be the main point of contact for future communications, data access etc.).

Each team can have a maximum of 8 participants.

As a reply, you will receive access to the dataset's cropped/cropped-aligned images and annotations and other important information.

 

If you want to participate in the 4th Challenge (ERI Estimation), please email competitions@hume.ai with the following information:

i) subject "5th ABAW Competition: Team Registration";
ii) the lead researcher's name, email and official academic/industrial website; the lead researcher cannot be a student (UG/PG/Ph.D.);
iii) the names and emails of each team member, each one on a separate line in the body of the email;
iv) the team's name;
v) the point of contact's name and email address (i.e., which member of the team will be the main point of contact for future communications, data access etc.).

A reply to sign an EULA will be sent to all team members. When the EULA is signed by all team members, a link to the data will be shared.

 

 

General Information

At the end of the Challenges, each team will have to send us:

i) a link to a GitHub repository where their solution/source code will be stored,
ii) a link to an arXiv paper of 2-8 pages describing their proposed methodology, the data used and the results.

Each team will also need to upload their test set predictions on an evaluation server (details will be circulated when the test set is released).

After that, the winner of each Challenge, along with a leaderboard, will be announced.

There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop; the accepted papers will be part of the CVPR 2023 proceedings. All other teams are also able to submit paper(s) describing their solutions and final results; the accepted papers will be part of the CVPR 2023 proceedings.

 

The Competition's white paper (describing the Competition, the data, the baselines and results) will be ready at a later stage and will be distributed to the participating teams.

 

 

 

General Rules

• Participants can contribute to any of the 4 Challenges.

• In order to take part in any Challenge, participants will have to register as described above.

• Participants can use audio/scene/background/body pose etc. information along with the face information.

• Any face detector, whether commercial or academic, can be used in the challenge. The paper accompanying the challenge result submission should contain clear details of the detectors/libraries used.

• For the first 3 Challenges, the participants are free to use any pre-trained network, as long as it has not used Aff-Wild2's annotations.

• The top performing teams will have to share their solution (code, model weights, executables) with the organizers upon completion of the challenge; in this way, the organizers can verify the results and prevent cheating or violation of the rules.

 

Important Dates:

  • Call for participation announced, team registration begins, data available: January 13, 2023

  • Test set release: March 12, 2023

  • Final submission deadline (Predictions, Code and arXiv paper): March 18, 2023

  • Winners Announcement: March 20, 2023

  • Final paper submission deadline: March 30, 2023

  • Review decisions sent to authors; Notification of acceptance: April 10, 2023

  • Camera ready version deadline: April 14, 2023

 

 

 

References

 

If you use the above data, you must cite all of the following papers (and the white paper that will be distributed at a later stage):

 

  • D. Kollias, et al.: "ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Emotional Reaction Intensity Estimation Challenges". IEEE CVPR, 2023

@inproceedings{kollias2023abaw2, title={Abaw: Valence-arousal estimation, expression recognition, action unit detection \& emotional reaction intensity estimation challenges}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Baird, Alice and Cowen, Alan and Zafeiriou, Stefanos}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={5888--5897}, year={2023} }

 

  • D. Kollias: "ABAW: Learning from Synthetic Data & Multi-Task Learning Challenges". ECCV, 2022

@inproceedings{kollias2023abaw, title={ABAW: learning from synthetic data \& multi-task learning challenges}, author={Kollias, Dimitrios}, booktitle={European Conference on Computer Vision}, pages={157--172}, year={2023}, organization={Springer} }

 

  • D. Kollias: "ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges". IEEE CVPR, 2022

@inproceedings{kollias2022abaw, title={Abaw: Valence-arousal estimation, expression recognition, action unit detection \& multi-task learning challenges}, author={Kollias, Dimitrios}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={2328--2336}, year={2022} }

 

  • D. Kollias, et al.: "Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study", 2021

@article{kollias2021distribution, title={Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2105.03790}, year={2021} } 

 

  • D. Kollias, et al.: "Analysing Affective Behavior in the second ABAW2 Competition". ICCV, 2021

@inproceedings{kollias2021analysing, title={Analysing affective behavior in the second abaw2 competition}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, pages={3652--3660}, year={2021}}

 

  • D. Kollias, S. Zafeiriou: "Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework", 2021

@article{kollias2021affect, title={Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2103.15792}, year={2021}}

 

  • D. Kollias, et al.: "Analysing Affective Behavior in the First ABAW 2020 Competition". IEEE FG, 2020

@inproceedings{kollias2020analysing, title={Analysing Affective Behavior in the First ABAW 2020 Competition}, author={Kollias, D and Schulc, A and Hajiyev, E and Zafeiriou, S}, booktitle={2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)(FG)}, pages={794--800}}

 

  • D. Kollias, S. Zafeiriou: "Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace". BMVC, 2019

@article{kollias2019expression, title={Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.04855}, year={2019}}

 

  • D. Kollias, et al.: "Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network", 2019

@article{kollias2019face,title={Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.11111}, year={2019}}

 

  • D. Kollias, et al.: "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond". International Journal of Computer Vision (IJCV), 2019

@article{kollias2019deep, title={Deep affect prediction in-the-wild: Aff-wild database and challenge, deep architectures, and beyond}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Schuller, Bj{\"o}rn and Kotsia, Irene and Zafeiriou, Stefanos}, journal={International Journal of Computer Vision}, pages={1--23}, year={2019}, publisher={Springer} }

 

  • S. Zafeiriou, et al.: "Aff-Wild: Valence and Arousal in-the-wild Challenge". IEEE CVPR, 2017

@inproceedings{zafeiriou2017aff, title={Aff-wild: Valence and arousal ‘in-the-wild’challenge}, author={Zafeiriou, Stefanos and Kollias, Dimitrios and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Kotsia, Irene}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1980--1987}, year={2017}, organization={IEEE} } 

 

 

Sponsors:

 

The Affective Behavior Analysis in-the-wild Challenge has been generously supported by:

 

  • Queen Mary University of London


 

  • Imperial College London

 

 

  • Hume AI


 

  • Gentex Technologies
