Aff-Wild2 database

Frames of Aff-Wild2, showing subjects of different ethnicities, age groups, emotional states, head poses, illumination conditions and occlusions

Affective computing has long been limited by the available data resources. With the rise of deep learning models as the default approach to most computer vision tasks, the need to collect and annotate diverse in-the-wild datasets has become apparent.
Some in-the-wild databases have recently been proposed. However: i) their size is small; ii) they are not audiovisual; iii) only a small part is manually annotated; iv) they contain a small number of subjects; or v) they are not annotated for all main behavior tasks (valence-arousal estimation, action unit detection and basic expression classification).
To address these issues, we substantially extend the largest available in-the-wild database (Aff-Wild) for studying continuous emotion dimensions such as valence and arousal. Furthermore, we annotate parts of the database with basic expressions and action units. We call this database Aff-Wild2. In total, Aff-Wild2 contains 558 videos with around 2.8 million frames. To the best of our knowledge, Aff-Wild2 is the first large-scale in-the-wild database containing annotations for all three main behavior tasks. It is also the first audiovisual database with annotations for AUs; all other AU-annotated databases contain only images or videos, without audio.


 

How to acquire Aff-Wild2


If you are an academic (i.e., a person with a permanent position at a research institute or university), please:
i) fill in this EULA;
ii) use your official academic email (data cannot be released to personal emails);
iii) send an email to d.kollias@qmul.ac.uk with the subject: Aff-Wild2 request by academic;
iv) include in the email the signed EULA, the reason why you require access to the Aff-Wild2 database, and a link to your official academic website.

Ph.D. students fall under the above category, but their supervisor should perform the steps described above.


If you are from industry and want to acquire Aff-Wild2 (for either research or commercial purposes), please email d.kollias@qmul.ac.uk with the subject: Aff-Wild2 request from industry, and explain why access to the database is needed.

 

If you are an undergraduate or postgraduate student (but not a Ph.D. student), please:
i) fill in this EULA;
ii) use your official university email (data cannot be released to personal emails);
iii) send an email to d.kollias@qmul.ac.uk with the subject: Aff-Wild2 request by student;
iv) include in the email the signed EULA and proof of your current student status (e.g., a student ID card or a webpage on the university site).

 

 

Due to the high volume of requests, please allow around a week for a reply to your access request.

 

 

Latest News

We are currently organizing a Competition (split into 3 Tracks-Challenges) in conjunction with CVPR 2022, on an updated version of Aff-Wild2 (augmented with more videos). Please check here for more information on how to acquire the data and participate.

Aff-Wild2 has been significantly augmented with more videos and annotations and has been used in two Competitions (ABAW) at FG 2020 and ICCV 2021 (you can have a look here and here). If you still want to use the version of the database described in our BMVC paper, send an email specifying this to d.kollias@qmul.ac.uk.

 

 

References

 

If you use the above data, you must cite all of the following papers (and the white paper):

 

  • D. Kollias: "ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges", 2022

@article{kollias2022abaw, title={ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection \& Multi-Task Learning Challenges}, author={Kollias, Dimitrios}, journal={arXiv preprint arXiv:2202.10659}, year={2022}}

 

  • D. Kollias et al.: "Analysing Affective Behavior in the Second ABAW2 Competition". ICCV, 2021

@inproceedings{kollias2021analysing, title={Analysing affective behavior in the second abaw2 competition}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, pages={3652--3660}, year={2021}}

 

  • D. Kollias et al.: "Analysing Affective Behavior in the First ABAW 2020 Competition". IEEE FG, 2020

@inproceedings{kollias2020analysing, title={Analysing Affective Behavior in the First ABAW 2020 Competition}, author={Kollias, D and Schulc, A and Hajiyev, E and Zafeiriou, S}, booktitle={2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)(FG)}, pages={794--800}}

 

  • D. Kollias et al.: "Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study", 2021

@article{kollias2021distribution, title={Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2105.03790}, year={2021} }

 

  • D. Kollias, S. Zafeiriou: "Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework", 2021

@article{kollias2021affect, title={Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2103.15792}, year={2021}}

 

  • D. Kollias, S. Zafeiriou: "Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace". BMVC, 2019

@article{kollias2019expression, title={Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.04855}, year={2019} }


  • D. Kollias et al.: "Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network", 2019

@article{kollias2019face,title={Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.11111}, year={2019}}

 

  • D. Kollias et al.: "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond". International Journal of Computer Vision (IJCV), 2019

@article{kollias2019deep, title={Deep affect prediction in-the-wild: Aff-wild database and challenge, deep architectures, and beyond}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Schuller, Bj{\"o}rn and Kotsia, Irene and Zafeiriou, Stefanos}, journal={International Journal of Computer Vision}, pages={1--23}, year={2019}, publisher={Springer} }

 

  • S. Zafeiriou et al.: "Aff-Wild: Valence and Arousal in-the-wild Challenge". CVPRW, 2017

@inproceedings{zafeiriou2017aff, title={Aff-wild: Valence and arousal `in-the-wild' challenge}, author={Zafeiriou, Stefanos and Kollias, Dimitrios and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Kotsia, Irene}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1980--1987}, year={2017}, organization={IEEE} }

 


Evaluation of your predictions on the test set


First of all, you should clarify to which set (VA, AU, Expression) the predictions correspond. The format of your predictions should follow the format of the annotation files that we provide. In detail:

In the VA case:  Send the files with names matching the corresponding videos. Each line of each file should contain the valence and arousal values of the corresponding frame, separated by a comma, i.e. for file 271.csv:

line 1 should be: valence,arousal
line 2 should be: valence_of_first_frame,arousal_of_first_frame     (for instance it could be: 0.53,0.28)
line 3 should be: valence_of_second_frame,arousal_of_second_frame
...
last line: valence_of_last_frame,arousal_of_last_frame
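As an illustrative sketch (the per-frame values and the number of frames below are hypothetical), a VA prediction file in this format could be written with Python's csv module:

```python
import csv

# Hypothetical per-frame (valence, arousal) predictions for video "271".
frame_predictions = [(0.53, 0.28), (0.51, 0.3), (-0.12, 0.05)]

with open("271.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["valence", "arousal"])      # line 1: header
    for valence, arousal in frame_predictions:   # one line per frame
        writer.writerow([valence, arousal])
```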

 

In the Expression case:  Send the files with names matching the corresponding videos. Each line of each file should contain the corresponding basic expression prediction (0, 1, 2, 3, 4, 5 or 6, where: 0 denotes neutral, 1 denotes anger, 2 denotes disgust, 3 denotes fear, 4 denotes happiness, 5 denotes sadness and 6 denotes surprise). For instance, for file 282.csv:

first line should be: Neutral,Anger,Disgust,Fear,Happiness,Sadness,Surprise
second line should be: basic_expression_prediction_of_first_frame    (such as 5)
...
last line should be: basic_expression_prediction_of_last_frame    
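A minimal sketch of writing such a file (the per-frame labels here are made up):

```python
# Hypothetical per-frame basic expression predictions for video "282"
# (0=neutral, 1=anger, 2=disgust, 3=fear, 4=happiness, 5=sadness, 6=surprise).
frame_predictions = [5, 5, 0, 4]

with open("282.csv", "w") as f:
    f.write("Neutral,Anger,Disgust,Fear,Happiness,Sadness,Surprise\n")  # header
    for label in frame_predictions:
        f.write(f"{label}\n")  # one expression label per frame
```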


In the AU case:  Send the files with names matching the corresponding videos. Each line of each file should contain 8 comma-separated numbers (0 or 1) that correspond to the 8 Action Units (AU1, AU2, AU4, AU6, AU12, AU15, AU20, AU25). For instance, for file video18.csv:

first line should be: AU1,AU2,AU4,AU6,AU12,AU15,AU20,AU25
second line should be: AU1_of_first_frame,AU2_of_first_frame,AU4_of_first_frame,AU6_of_first_frame,AU12_of_first_frame,AU15_of_first_frame,AU20_of_first_frame,AU25_of_first_frame    (such as: 0,1,1,0,0,0,0,1)
...
last line should be: AU1_of_last_frame,AU2_of_last_frame,AU4_of_last_frame,AU6_of_last_frame,AU12_of_last_frame,AU15_of_last_frame,AU20_of_last_frame,AU25_of_last_frame      
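A sketch of producing an AU file in this layout (the activation values are hypothetical):

```python
# Hypothetical per-frame AU activations (0/1) for video "video18",
# in the fixed order AU1, AU2, AU4, AU6, AU12, AU15, AU20, AU25.
frame_predictions = [
    [0, 1, 1, 0, 0, 0, 0, 1],
    [0, 0, 1, 0, 1, 0, 0, 1],
]

with open("video18.csv", "w") as f:
    f.write("AU1,AU2,AU4,AU6,AU12,AU15,AU20,AU25\n")    # header
    for frame in frame_predictions:
        f.write(",".join(str(v) for v in frame) + "\n")  # one row per frame
```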

Note that your files should include predictions for all frames in the video (regardless of whether the bounding box detection failed or not).

 

 

Important Information:

  • All the training/validation/testing images of the dataset are obtained from YouTube. We are not responsible for the content or the meaning of these images.
  • You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes any portion of the images or any portion of derived data.
  • You agree not to further copy, publish or distribute any portion of the annotations of the dataset. An exception is made for internal use at a single site within the same organization, where making copies of the dataset is allowed.
  • We reserve the right to terminate your access to the dataset at any time.