Fig 1: Representative sample images from the AFEW-VA Dataset.
This page contains the data accompanying the paper 'AFEW for Valence and Arousal Estimation In-The-Wild'.
The AFEW-VA database is a collection of highly accurate per-frame annotations of valence and arousal levels for 600 challenging video clips extracted from feature films, along with per-frame annotations of 68 facial landmarks. Note that the original AFEW dataset can be obtained here.
The accurate annotations of valence, arousal and facial landmarks, together with the frames of the clips to which they belong, can be obtained from the files listed below.
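Once downloaded, the per-frame annotations can be parsed with a few lines of code. The sketch below is a minimal example that assumes each clip's annotations are stored as a JSON file mapping frame identifiers to valence, arousal and 68 (x, y) landmark points; the field names and layout here are an assumption for illustration, not the dataset's documented schema.

```python
import json

# Hypothetical annotation layout for one clip: per-frame valence/arousal
# ratings plus 68 (x, y) facial-landmark points. Field names are an
# assumption, not the dataset's documented schema.
sample = json.dumps({
    "video_id": "001",
    "frames": {
        "00000": {
            "valence": -3,
            "arousal": 5,
            "landmarks": [[120.5, 88.2]] * 68,  # 68 (x, y) points
        },
    },
})

def load_clip(raw: str):
    """Parse one clip's annotations into (frame_id, valence, arousal, landmarks) tuples."""
    clip = json.loads(raw)
    rows = []
    for frame_id, ann in sorted(clip["frames"].items()):
        rows.append((frame_id, ann["valence"], ann["arousal"], ann["landmarks"]))
    return rows

rows = load_clip(sample)
print(rows[0][:3])  # → ('00000', -3, 5)
```

In practice one would replace `sample` with the contents of an annotation file from the archives below and verify the actual field names against the accompanying documentation.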
Fig 2: Sample video from the AFEW-VA Dataset.
Each of the zip files contains 50 annotated videos:
The AFEW-VA database is provided for research purposes only.
If you use this data, please cite:
 J. Kossaifi, G. Tzimiropoulos, S. Todorovic and M. Pantic, "AFEW for Valence and Arousal Estimation In-The-Wild," in Image and Vision Computing, 2016 (accepted for publication).
 A. Dhall, R. Goecke, S. Lucey and T. Gedeon, "Collecting Large, Richly Annotated Facial-Expression Databases from Movies," in IEEE MultiMedia, vol. 19, no. 3, pp. 34-41, July-Sept. 2012.