The Affect-in-the-Wild Challenge, to be held in conjunction with the International Conference on Computer Vision & Pattern Recognition (CVPR) 2017, Hawaii, USA.
Stefanos Zafeiriou, Imperial College London, UK firstname.lastname@example.org
Mihalis Nicolaou, Goldsmiths University of London, UK email@example.com
Irene Kotsia, Hellenic Open University, Greece, firstname.lastname@example.org
Fabian Benitez-Quiroz, Ohio State University, USA email@example.com
Guoying Zhao, University of Oulu, Finland, firstname.lastname@example.org
Dimitris Kollias, Imperial College London, UK email@example.com
Athanasios Papaioannou, Imperial College London, UK firstname.lastname@example.org
The human face is arguably the most studied object in computer vision. Recently, tens of databases have been collected under unconstrained conditions (also referred to as “in-the-wild”) for many face-related tasks, such as face detection, face verification and facial landmark localisation. However, well-established “in-the-wild” databases and benchmarks do not exist for problems such as the estimation of affect in a continuous dimensional space (e.g., valence and arousal) in videos displaying spontaneous facial behaviour. At CVPR 2017, we take a significant step further and propose a new comprehensive benchmark for assessing the performance of facial affect/behaviour analysis and understanding “in-the-wild”. To the best of our knowledge, this is the first attempt to benchmark the estimation of valence and arousal “in-the-wild”.
For the analysis of continuous emotion dimensions (such as valence and arousal), we advance previous works by providing more than 400 videos (over 2,000 minutes of data) annotated with respect to valence and arousal, all captured “in-the-wild” (the main source being YouTube videos). 200 videos will be provided for training, and the remaining ones for testing.
Even though the majority of the videos are under the Creative Commons licence (https://support.google.com/youtube/answer/2797468?hl=en-GB), the subjects have been notified about the use of their videos in our study.
The training video samples are available to download from here and the annotations from here. The training data contain the videos and their corresponding annotations (#_arousal.txt and #_valence.txt, where # is the number of the video). Furthermore, to facilitate training, especially for people who do not have access to face detection/tracking algorithms, we provide bounding boxes for the face(s) in the videos.
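As a starting point, the per-video annotation files can be read with a few lines of Python. This is a hypothetical sketch that assumes each line of #_valence.txt / #_arousal.txt holds a single floating-point value; please verify against the actual file format in the downloaded data.

```python
def load_annotations(path):
    """Read one annotation value per line into a list of floats.

    Assumes the #_valence.txt / #_arousal.txt files contain one
    floating-point value per line; blank lines are skipped.
    """
    with open(path) as f:
        return [float(line) for line in f if line.strip()]
```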
Participants will have their algorithms tested on other videos, which will be provided on a predefined date (see below). This dataset aims at testing the ability of current systems to estimate valence and arousal on unseen subjects. To facilitate testing, we provide bounding boxes for the face(s) present in the testing videos.
The test data are available here.
The bounding boxes for the test videos are available here.
Performance will be assessed using the standard concordance correlation coefficient (CCC), as well as the mean squared error (MSE).
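For reference, a minimal sketch of both metrics is given below. The CCC formula follows its standard definition with population (biased) variance and covariance; the exact evaluation protocol (e.g., computed per video or pooled over all frames) is not specified here and should be confirmed with the organisers.

```python
import numpy as np

def concordance_cc(x, y):
    """Concordance correlation coefficient between two 1-D sequences.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (biased) variance and covariance.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def mean_squared_error(x, y):
    """Mean squared error between predictions and annotations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.mean((x - y) ** 2))
```

A perfect prediction yields CCC = 1; unlike the Pearson correlation, CCC also penalises shifts in mean and differences in scale between predictions and annotations.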
Our aim is to accept up to 10 papers to be orally presented at the workshop.
Challenge participants should submit a paper to the Faces-in-the-wild Workshop which summarises the methodology and the achieved performance of their algorithm. Submissions should adhere to the main CVPR 2017 proceedings style. The workshop papers will be published in the CVPR 2017 proceedings. Please sign up to the submission system to submit your paper.
Workshop Administrator: email@example.com
 Stefanos Zafeiriou, Athanasios Papaioannou, Irene Kotsia, Mihalis Nicolaou, Guoying Zhao, Facial Affect in-the-wild: A survey and a new database, International Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Affect "in-the-wild" Workshop, 2016.
This challenge has been supported by a distinguished fellowship to Dr. Stefanos Zafeiriou by TEKES.