Latest News!
The following publications, describing the Challenge, its results and the annotation methodology, are now available:
C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. 300 Faces In-the-Wild Challenge: Database and results. Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild", 2016.
C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. 300 Faces in-the-Wild Challenge: The first facial landmark localization Challenge. Proceedings of IEEE Int’l Conf. on Computer Vision (ICCV-W), 300 Faces in-the-Wild Challenge (300-W). Sydney, Australia, December 2013.
C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. A semi-automatic methodology for facial landmark annotation. Proceedings of IEEE Int’l Conf. Computer Vision and Pattern Recognition (CVPR-W’), 5th Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2013). Oregon, USA, June 2013.
300-W
The First Automatic Facial Landmark Detection in-the-Wild Challenge (300-W 2014) is held for a second year, with results to be published in a special issue of the Elsevier Image and Vision Computing journal.
Organizers
Georgios Tzimiropoulos, University of Lincoln, UK
Epameinondas Antonakos, Imperial College London, UK
Stefanos Zafeiriou, Imperial College London, UK
Maja Pantic, Imperial College London, UK
Scope
Automatic facial landmark detection is a long-standing problem in computer vision, and the 300-W Challenge is the first event of its kind organized exclusively to benchmark efforts in the field. The particular focus is on facial landmark detection in real-world datasets of facial images captured in-the-wild. The results of the Challenge will be presented in a special issue of the Elsevier Image and Vision Computing journal.
Training
To facilitate training, the recently collected in-the-wild data sets LFPW [1], AFW [2] and HELEN [3], as well as the controlled lab data sets XM2VTS [4] and FRGC [5], have been re-annotated using a semi-automatic methodology [6] and the well-established landmark configuration of MultiPIE [7] (68-point mark-up; see Fig. 1). To enhance accuracy, the final annotations have been manually corrected by an expert. Additionally, we provide annotations for the IBUG data set, which consists of 135 images with highly expressive faces, difficult poses and occlusions.
All annotations have been made publicly available and can be downloaded from here. Each annotation file has the same name as the corresponding image file. For LFPW, AFW, HELEN, and IBUG datasets we also provide the images. The remaining image databases can be downloaded from their authors’ websites.
Participants are strongly encouraged (but are not restricted) to train their algorithms using the aforementioned training sets and the provided annotations. Should you use any of the provided annotations, please cite [6] and the paper presenting the corresponding database.
Please note that the provided re-annotated data are saved in the Matlab convention of 1 being the first index, i.e. the coordinates of the top-left pixel in an image are x=1, y=1.
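As a concrete illustration, the sketch below parses one of the provided annotation files in Python and shifts the coordinates to 0-based indexing. It assumes the usual .pts layout (a version line, an n_points line, and the x y pairs enclosed in braces); the file name image_0001.pts is hypothetical.

```python
import numpy as np

def load_pts(path):
    """Parse a 300-W style .pts annotation file into an (n_points, 2) array.

    Assumes the layout: a 'version' line, an 'n_points' line, then the
    x y coordinate pairs enclosed in '{' ... '}'.
    """
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    n_points = int(lines[1].split(':')[1])  # e.g. 'n_points: 68'
    start = lines.index('{') + 1
    coords = [[float(v) for v in ln.split()] for ln in lines[start:start + n_points]]
    return np.array(coords)

# The annotations follow Matlab's 1-based convention, so subtract 1
# to obtain 0-based pixel coordinates.
pts68 = load_pts('image_0001.pts') - 1.0  # hypothetical file name
pts51 = pts68[17:]  # the 51-point mark-up of Fig. 1 drops the 17 face-contour points
```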
Figure 1: The 68-point and 51-point mark-ups used for our annotations.
Test Database and Face Detection Initialization
Participants will have their algorithms tested on a newly collected data set of 600 face images captured in the wild, 300 indoor and 300 outdoor (the 300-W test set). The test set is designed to assess the ability of current systems to handle unseen subjects, independently of variations in pose, expression, illumination, background, occlusion and image quality. Participants will not have access to the testing data.
As opposed to last year's Challenge (300-W 2013), in this version we will not provide any face detection initializations. Each system should first detect the face in the image and then localize the facial landmarks. For this purpose, we have appropriately cropped each test image so that it includes only one face. Examples of such cropped images are shown in Fig. 2. Moreover, some examples of face detection methods can be found in the FDDB benchmark [8] here; of course, participants can use any other face detection method they prefer, as in the sketch below.
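For instance, a baseline detector could be the frontal face Haar cascade that ships with OpenCV. This is only a sketch of one possible choice (the image file name is hypothetical), not a detector endorsed or used by the Challenge.

```python
import cv2

# Haar cascade frontal face detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('test_image.png')  # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Each cropped test image contains exactly one face, so keep the largest
# detection. minSize is a rough lower bound suggested by the smallest
# inter-ocular distance in the test set (39 pixels, see below).
faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                  minNeighbors=5, minSize=(60, 60))
if len(faces) > 0:
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
    face_crop = img[y:y + h, x:x + w]
```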
Finally, the range of the Euclidean distance between the outer corners of the eyes in the testing images is [39, 805] pixels. The outer eye corners are represented by points 37 and 46 in the 68-point mark-up of Fig. 1. This range is given in order to facilitate the training of accurate face detection models.
Figure 2: Examples of test images cropped so that they include only one face. Top: outdoor. Bottom: indoor.
Evaluation
The submitted systems will be evaluated with respect to the average point-to-point Euclidean error between the estimated and the ground-truth landmarks, normalized by the inter-ocular distance (the distance between the outer eye corners, points 37 and 46 in Fig. 1), and reported as cumulative error distribution curves for both the 68-point and the 51-point mark-ups.
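A minimal sketch of this metric, assuming predicted and ground-truth landmarks are stored as (68, 2) NumPy arrays in the mark-up of Fig. 1 (points 37 and 46 become 0-based indices 36 and 45):

```python
import numpy as np

def normalized_error(pred, gt):
    """Mean point-to-point Euclidean error between predicted and
    ground-truth landmarks, normalized by the inter-ocular distance
    (outer eye corners: mark-up points 37 and 46, 0-based 36 and 45)."""
    interocular = np.linalg.norm(gt[36] - gt[45])
    return np.linalg.norm(pred - gt, axis=1).mean() / interocular

def cumulative_error_distribution(errors, thresholds):
    """Fraction of images whose normalized error falls below each
    threshold, i.e. the points of a cumulative error curve."""
    errors = np.asarray(errors)
    return [(errors <= t).mean() for t in thresholds]
```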
Important Dates
Please note that these are the valid deadlines, not the ones mentioned on the IMAVIS website.
Submission Information and Format
Winners
Results
Indoor: 51 points | 68 points
Outdoor: 51 points | 68 points
Indoor+Outdoor: 51 points | 68 points
Participants
1. J. Cech, V. Franc, M. Uricar, J. Matas. Multi-view facial landmark detection by using a 3D shape model
2. J. Deng, Q. Liu, J. Yang, D. Tao. M3CSR: Multi-view, multi-scale and multi-component cascade shape regression
3. H. Fan, E. Zhou. Approaching human level facial landmark localization by deep learning
4. B. Martinez, M. F. Valstar. L2,1-based regression and prediction accumulation across views for robust facial landmark detection
5. M. Uricar, V. Franc, D. Thomas, A. Sugimoto, V. Hlavac. Multi-view facial landmark detector learned by the structured output SVM
Contact
Should you have any enquiries, please contact us through:
300faces.challenge@gmail.com
References
[1] Belhumeur, P., Jacobs, D., Kriegman, D. and Kumar, N., 'Localizing parts of faces using a consensus of exemplars'. In IEEE Int'l Conf. on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, June 2011.
[2] Zhu, X. and Ramanan, D., 'Face detection, pose estimation and landmark localization in the wild'. In IEEE Int'l Conf. on Computer Vision and Pattern Recognition (CVPR), Providence, RI, June 2012.
[3] Le, V., Brandt, J., Lin, Z., Bourdev, L. and Huang, T.S., 'Interactive facial feature localization'. In European Conference on Computer Vision (ECCV), Florence, Italy, October 2012.
[4] Messer, K., Matas, J., Kittler, J., Luettin, J. and Maitre, G., 'XM2VTSDB: The extended M2VTS database'. In 2nd International Conference on Audio and Video-Based Biometric Person Authentication, 1999.
[5] Phillips, P., Flynn, P., Scruggs, T., Bowyer, K., Chang, J., Hoffman, K., Marques, J., Min, J. and Worek, W., 'Overview of the face recognition grand challenge'. In IEEE Int'l Conf. on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, June 2005.
[6] Sagonas, C., Tzimiropoulos, G., Zafeiriou, S. and Pantic, M., 'A semi-automatic methodology for facial landmark annotation'. In IEEE Int'l Conf. on Computer Vision and Pattern Recognition (CVPR-W'13), 5th Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2013), Portland, OR, June 2013.
[7] Gross, R., Matthews, I., Cohn, J., Kanade, T. and Baker, S., 'Multi-PIE'. Image and Vision Computing, Elsevier, 28(5):807-813, 2010.
[8] Jain, V. and Learned-Miller, E., 'FDDB: A benchmark for face detection in unconstrained settings'. Technical report, University of Massachusetts, Amherst, 2010.