First Affect-in-the-Wild Challenge

Frames from the Aff-Wild database which show subjects in different emotional states, of different ethnicities, in a variety of head poses, illumination conditions and occlusions.

 

How to acquire the data (Update)

The training and test video samples, annotation files, bounding boxes and landmarks can be downloaded from here.

 

If you use the above data, please cite the following papers:

  • S. Zafeiriou, et al., "Aff-Wild: Valence and Arousal in-the-wild Challenge", CVPRW, 2017.
  • D. Kollias, et al., "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond", arXiv preprint, 2018.


Additional Info (Update)

These data accompany the paper "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond".

At the download link you will find a tar.gz file containing four folders: videos, annotations, bboxes and landmarks.

The videos folder contains the training and test videos, named #.avi or #.mp4, where # is the video id (a number).

The annotations folder contains annotations for the training videos only. Annotations for the test videos are withheld to preserve the integrity of the challenge. We are also currently extending Aff-Wild in order to rerun the contest this year (2019).

The annotations folder contains two subfolders, valence and arousal, each holding annotation files named #.txt, where # is the video id. Each line in these files gives the valence/arousal value of the corresponding video frame: for instance, the first line of the valence annotation file 450.txt gives the valence value of frame 0 of video 450.mp4, the second line gives that of frame 1, and so on.
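As a minimal illustration of this layout, the Python sketch below pairs each frame of a training video with its valence and arousal values. The extraction directory ("aff_wild") and the example video id ("450") are placeholders, and decoding frames with OpenCV (cv2) is an assumption rather than part of the official release.

    # Minimal sketch: align per-frame valence/arousal annotations with video frames.
    # "aff_wild" and the video id "450" are placeholders; OpenCV (cv2) is assumed
    # for frame decoding and is not part of the official release.
    import os
    import cv2

    root = "aff_wild"
    video_id = "450"

    with open(os.path.join(root, "annotations", "valence", video_id + ".txt")) as f:
        valence = [float(line) for line in f if line.strip()]
    with open(os.path.join(root, "annotations", "arousal", video_id + ".txt")) as f:
        arousal = [float(line) for line in f if line.strip()]

    cap = cv2.VideoCapture(os.path.join(root, "videos", video_id + ".mp4"))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok or frame_idx >= len(valence):
            break
        # Line i of each annotation file corresponds to frame i of the video.
        v, a = valence[frame_idx], arousal[frame_idx]
        frame_idx += 1
    cap.release()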

 


Latest News of the Challenge

  • The Aff-Wild Challenge training set has been released.
  • The test data are out. You have 8 days (until the 30th of March) to submit your results if you want to take part in the challenge.
  • We provide bounding boxes and facial landmarks for the videos, extracted with an automatic method.

 

Aff-Wild DATA

The Affect-in-the-Wild Challenge will be held in conjunction with the International Conference on Computer Vision and Pattern Recognition (CVPR) 2017 in Hawaii, USA.

 

Organisers

Chairs:

Stefanos Zafeiriou, Imperial College London, UK (s.zafeiriou@imperial.ac.uk)

Mihalis Nicolaou, Goldsmiths, University of London, UK (m.nicolaou@gold.ac.uk)

Irene Kotsia, Hellenic Open University, Greece (drkotsia@gmail.com)

Fabian Benitez-Quiroz, Ohio State University, USA (benitez-quiroz.1@osu.edu)

Guoying Zhao, University of Oulu, Finland (gyzhao@ee.oulu.fi)

Data Chairs:

Dimitris Kollias, Imperial College London, UK (dimitrios.kollias15@imperial.ac.uk)

Athanasios Papaioannou, Imperial College London, UK (a.papaioannou11@imperial.ac.uk)

 

Scope

The human face is arguably the most studied object in computer vision. In recent years, tens of databases have been collected under unconstrained conditions (also referred to as "in-the-wild") for many face-related tasks, such as face detection, face verification and facial landmark localisation. However, no well-established "in-the-wild" databases and benchmarks exist for problems such as the estimation of affect in a continuous dimensional space (e.g., valence and arousal) in videos displaying spontaneous facial behaviour. At CVPR 2017, we take a significant step further and propose a new comprehensive benchmark for assessing the performance of facial affect/behaviour analysis and understanding "in-the-wild". To the best of our knowledge, this is the first attempt to benchmark valence and arousal estimation "in-the-wild".


The Aff-Wild Challenge

For the analysis of continuous emotion dimensions (such as valence and arousal), we advance previous work by providing around 300 videos (over 15 hours of data), all captured "in-the-wild" (the main source being YouTube) and annotated with respect to valence and arousal. Of these, 252 videos are provided for training and the remaining 46 for testing.

Even though the majority of the videos are under the Creative Commons licence (https://support.google.com/youtube/answer/2797468?hl=en-GB), the subjects have been notified about the use of their videos in our study.


Training

The training data contain the videos and their corresponding annotations (#_arousal.txt and #_valence.txt, where # is the video number). Furthermore, to facilitate training, especially for participants who do not have access to face detectors/tracking algorithms, we provide bounding boxes and landmarks for the face(s) in the videos.
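As a rough illustration of how the bounding boxes might be used, the sketch below crops a face region from a decoded frame. The bounding-box file layout assumed here (a single line with left, top, right and bottom coordinates) and the example paths are hypothetical and should be adapted to the actual files in the bboxes folder.

    # Hypothetical sketch: crop a face from a frame using a provided bounding box.
    # The assumed file layout (one line: left top right bottom) is illustrative only;
    # adapt the parsing to the actual format of the files in the bboxes folder.
    def read_bbox(path):
        with open(path) as f:
            left, top, right, bottom = [int(float(v)) for v in f.read().split()[:4]]
        return left, top, right, bottom

    def crop_face(frame, bbox):
        left, top, right, bottom = bbox
        return frame[top:bottom, left:right]  # frame is an (H, W, C) image array

    # Example usage (hypothetical paths):
    # frame = cv2.imread("frames/450/0.jpg")
    # face = crop_face(frame, read_bbox("bboxes/450/0.txt"))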

  • The dataset and annotations are available for non-commercial research purposes only.
  • All training/testing images of the dataset are obtained from YouTube. We are not responsible for the content or the meaning of these images.
  • You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose any portion of the images or any derived data.
  • You agree not to further copy, publish or distribute any portion of the annotations of the dataset, except that copies may be made for internal use at a single site within the same organization.
  • We reserve the right to terminate your access to the dataset at any time.
  • If your face is displayed in any video and you want it to be removed, you can email us at any time.

 

Testing

Participants will have their algorithms tested on additional videos, which will be provided on a predefined date (see below). This set aims at testing the ability of current systems to estimate valence and arousal for unseen subjects. To facilitate testing, we provide bounding boxes and landmarks for the face(s) present in the test videos.

    

Performance Assessment

Performance will be assessed using the standard concordance correlation coefficient (CCC), as well as the mean squared error (MSE).
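For reference, a straightforward NumPy sketch of these two metrics is given below; it follows the standard definitions of CCC and MSE and is not the official evaluation script.

    # Sketch of the evaluation metrics (standard definitions, not the official script).
    import numpy as np

    def ccc(y_true, y_pred):
        # Concordance correlation coefficient between two 1-D arrays of ground-truth
        # and predicted values (e.g., per-frame valence or arousal).
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        mean_t, mean_p = y_true.mean(), y_pred.mean()
        var_t, var_p = y_true.var(), y_pred.var()
        cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
        return 2.0 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

    def mse(y_true, y_pred):
        # Mean squared error between ground truth and predictions.
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return float(np.mean((y_true - y_pred) ** 2))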

 

Faces in-the-wild 2017 Workshop

Our aim is to accept up to 10 papers to be orally presented at the workshop.  

 

Submission Information:

Challenge participants should submit a paper to the Faces in-the-wild Workshop summarising the methodology and the achieved performance of their algorithm. Submissions should adhere to the main CVPR 2017 proceedings style, and the workshop papers will be published in the CVPR 2017 proceedings. Please sign up in the submission system to submit your paper.


Important Dates:

  

  • 27 January: Announcement of the Challenges
  • 30 January: Release of the training videos
  • 22 March:  Release of the test data
  • 31 March: Deadline for returning results (midnight GMT)
  • 3 April: Return of the results to authors
  • 20 April: Deadline for paper submission
  • 27 April: Decisions 
  • 19 May: Camera-ready deadline
  • 26 July: Workshop date

Contact:

Workshop Administrator: dimitrios.kollias15@imperial.ac.uk     

  


Program Committee:

  • Jorge Batista, University of Coimbra (Portugal)
  • Richard Bowden, University of Surrey (UK)
  • Jeff Cohn, CMU/University of Pittsburgh (USA)
  • Roland Goecke, University of Canberra (AU)
  • Peter Corcoran, NUI Galway (Ireland)
  • Fred Nicolls, University of Cape Town (South Africa)
  • Mircea C. Ionita, Daon (Ireland)
  • Ioannis Kakadiaris, University of Houston (USA)
  • Stan Z. Li, Institute of Automation Chinese Academy of Sciences (China)
  • Simon Lucey, CMU (USA)
  • Iain Matthews, Disney Research (USA)
  • Aleix Martinez, Ohio State University (USA)
  • Dimitris Metaxas, Rutgers University (USA)
  • Stephen Milborrow, sonic.net            
  • Louis-Philippe Morency, University of Southern California (USA)
  • Ioannis Patras, Queen Mary University (UK)      
  • Matti Pietikainen, University of Oulu (Finland)  
  • Deva Ramanan, University of California, Irvine (USA)
  • Jason Saragih, Commonwealth Scientific and Industrial Research Organisation (AU)
  • Nicu Sebe, University of Trento (Italy)
  • Jian Sun, Microsoft Research Asia
  • Xiaoou Tang, Chinese University of Hong Kong (China)
  • Fernando De La Torre, Carnegie Mellon University (USA)
  • Philip A. Tresadern, University of Manchester (UK)
  • Michel Valstar, University of Nottingham (UK)
  • Xiaogang Wang, Chinese University of Hong Kong (China)
  • Fang Wen, Microsoft Research Asia
  • Lijun Yin, Binghamton University (USA)

 

Sponsors:

This challenge has been supported by a distinguished fellowship awarded to Dr. Stefanos Zafeiriou by TEKES.