300 Faces In-The-Wild Challenge (300-W), IMAVIS 2014


Latest News!

  • The cropped version of the 300-W dataset used in the second running of the 300-W competition (300-W IMAVIS) can be downloaded from here.
  • The 300-W dataset has been released and can be downloaded from [part1][part2][part3][part4].
    Please note that the database is simply split into 4 smaller parts for easier download. In order to create the database you have to unzip part1 (i.e., 300w.zip.001) using a file archiver (e.g., 7zip).
  • The facial landmark annotations are provided strictly for research purposes and commercial use is prohibited.
  • If you use the above dataset please cite the following papers:
  • Matlab and Python scripts, along with the data needed to generate the results of both versions of the 300-W Challenge (ICCV 2013, IMAVIS 2015) in the form of Cumulative Error Distribution (CED) curves, can be downloaded from here.
  • Results announced! Please check the Results section for more information about participants, winners and results!
  • The binaries and papers submission deadline has been extended! Please see the Important Dates below for more details about the updated deadlines.
  • In order to facilitate the training of accurate face detection models, we provide participants with the range of the distance between the eyes' corners in the test images. Please see the section 'Test Database and Face Detection Initialization' below for more details.
  • Challenge details announced!
  • The Challenge's results, as well as the testing database, will be released as soon as all the IMAVIS papers are accepted. In the meantime, if you wish to compare with the results of the first running of the competition (300-W ICCV), feel free to send us your binary code (300faces.challenge@gmail.com) and we will send you back the final results.



The First Automatic Facial Landmark Detection in-the-Wild Challenge (300-W 2014) is held for a second year, in conjunction with a special issue of the Elsevier Image and Vision Computing Journal.



Georgios Tzimiropoulos, University of Lincoln, UK
Epameinondas Antonakos, Imperial College London, UK
Stefanos Zafeiriou, Imperial College London, UK
Maja Pantic, Imperial College London, UK



Automatic facial landmark detection is a long-standing problem in computer vision, and the 300-W Challenge is the first event of its kind organized exclusively to benchmark efforts in the field. The particular focus is on facial landmark detection in real-world datasets of facial images captured in-the-wild. The results of the Challenge will be presented in a special issue of the Elsevier Image and Vision Computing Journal.



To facilitate training, the recently collected in-the-wild data sets LFPW [1], AFW [2] and HELEN [3], as well as the controlled lab XM2VTS [4] and FRGC [5] have been re-annotated using a semi-supervised methodology [6] and the well-established landmark configuration of MultiPIE [7] (68 points mark-up, please see Fig. 1). To enhance accuracy, the final annotations have been manually corrected by an expert. Additionally, we provide annotations for the IBUG data set which consists of 135 images with highly expressive faces, difficult poses and occlusions.

All annotations have been made publicly available and can be downloaded from here. Each annotation file has the same name as the corresponding image file. For the LFPW, AFW, HELEN and IBUG datasets we also provide the images. The remaining image databases can be downloaded from their authors' websites.

Participants are strongly encouraged (but are not restricted) to train their algorithms using the aforementioned training sets and the provided annotations. Should you use any of the provided annotations, please cite [6] and the paper presenting the corresponding database.

Please note that the provided re-annotated data are saved in the Matlab convention of 1 being the first index, i.e. the coordinates of the top-left pixel in an image are x=1, y=1.
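For example, a minimal sketch of handling this convention from a 0-based language (the coordinate values below are illustrative, not taken from any annotation file):

```python
import numpy as np

# Landmarks stored in the 1-based Matlab convention: the top-left pixel is (1, 1).
pts_matlab = np.array([[446.0, 91.25],
                       [449.0, 119.75]])  # illustrative coordinates only

# Subtract 1 to index a 0-based image array (e.g. NumPy / OpenCV).
pts_zero_based = pts_matlab - 1.0

x, y = pts_zero_based[0]
# pixel_value = image[int(round(y)), int(round(x))]  # rows are y, columns are x
```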


Figure 1: The 68 and 51 points mark-up used for our annotations.


Test Database and Face Detection Initialization

Participants will have their algorithms tested on a newly collected data set of 600 face images captured in-the-wild (300 indoor and 300 outdoor; the 300-W test set). The test set aims to assess the ability of current systems to handle unseen subjects, independently of variations in pose, expression, illumination, background, occlusion and image quality. Participants will not have access to the testing data.

As opposed to last year's Challenge (300-W 2013), in this version we will not provide any face detection initializations. Each system should detect the face in the image and then localize the facial landmarks. For this purpose, we have appropriately cropped each test image so that it includes only one face. Examples of such cropped images are shown in Fig. 2. Moreover, some examples of face detection methods can be found in the FDDB benchmark [8] here. Participants may, of course, use any other face detection method they prefer.

Finally, the range of the Euclidean distance between the outer eye corners in the test images is [39, 805] pixels. The outer eye corners are represented by points 37 and 46 in the 68-point mark-up of Fig. 1. This range is provided in order to facilitate the training of accurate face detection models.
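As a sketch, assuming the 68 points are loaded into a 68x2 array in annotation order (so the 1-based points 37 and 46 are rows 36 and 45):

```python
import numpy as np

def interocular_distance(pts68):
    """Euclidean distance between the outer eye corners
    (points 37 and 46 in the 1-based 68-point mark-up)."""
    return float(np.linalg.norm(pts68[36] - pts68[45]))

# Illustrative landmark array: all zeros except the two outer eye corners.
pts = np.zeros((68, 2))
pts[36] = [100.0, 120.0]   # point 37: outer corner of one eye
pts[45] = [160.0, 120.0]   # point 46: outer corner of the other eye
assert 39 <= interocular_distance(pts) <= 805  # within the stated test-set range
```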


Figure 2: Examples of cropped images in order to include only one face. Top: outdoor. Bottom: indoor.




The submitted systems will be evaluated with respect to:

  • Accuracy: Facial landmark detection performance will be assessed both on the 68 fiducial points of the MultiPIE mark-up and on the 51 points obtained by excluding the face boundary points, as shown in Fig. 1. The average point-to-point Euclidean error, normalized by the inter-ocular distance (measured as the Euclidean distance between the outer corners of the eyes), will be used as the error measure. Matlab code for calculating the error can be downloaded from here. Non-detected faces will be assigned an infinite error. The cumulative error rate, corresponding to the percentage of test images for which the error was less than a specific value, will be produced.
  • Computational Cost: The submitted algorithms will also be ranked with respect to their computational cost. Note that there is a maximum cost limit of 2 minutes for a 640x480 image, measured on a 6-core Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz with 32GB RAM. If the cost for an image exceeds this limit, we will assume that the method has failed for that specific image and an infinite error will be assigned.
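A minimal sketch of this error measure and the CED computation (not the official Matlab code; it assumes 68x2 NumPy arrays in annotation order, with the 51 inner points taken as the 68-point set minus the 17 boundary points):

```python
import numpy as np

def normalized_error(pred, gt, inner_only=False):
    """Average point-to-point Euclidean error, normalized by the
    inter-ocular distance (outer eye corners: 1-based points 37 and 46,
    i.e. rows 36 and 45 of the 68x2 ground-truth array)."""
    iod = np.linalg.norm(gt[36] - gt[45])
    if inner_only:
        # 51-point score: drop the 17 boundary points (assumed to be rows 0-16).
        pred, gt = pred[17:], gt[17:]
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)) / iod)

def ced(errors, thresholds):
    """Fraction of test images whose error is below each threshold.
    Non-detected faces should be entered as np.inf."""
    errors = np.asarray(errors, dtype=float)
    return [float(np.mean(errors < t)) for t in thresholds]
```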

Important Dates

  • Binaries validation week (optional): 12-16 January 2015
  • Binaries submission deadline: 19 January 2015 (extended from 15 October 2014)
  • Paper submission deadline: 27 February 2015 (extended from 15 December 2014)
  • Acceptance notification: 15 March 2015 (the submitted papers are under review)

Please note that these are the valid deadlines; not the ones mentioned on the IMAVIS website.


Submission Information and Format

  • Binaries submission: Participants should send binaries with their trained algorithms to the organizers. Each binary should accept as input the path to the test image (all test images are in png format). The output of the binary should be a 68x2 matrix with the detected landmarks. Each such output matrix should also be saved in a file (.pts) with the same ordering and format as the provided annotation files, and with the same name as the input image filename. If the face is not detected, the output should be either an empty array or NaN, and no pts file should be saved. The binaries should be compiled on a 64-bit machine, and dependencies on publicly available vision repositories (such as OpenCV) should be explicitly stated in a document that accompanies the binary. The binaries must be sent to the organizers using the email address: 300faces.challenge@gmail.com.
    The results will be returned to the participants for inclusion in their papers. The binaries submitted for the competition will be handled confidentially. They will be used solely for the purposes of the competition and will be erased after its completion.
  • Binaries validation (optional): A week will be dedicated to validating the binaries' functionality. Participants can send their binaries to the organizers during this week in order to verify that they return the expected results on our machines. The submitted binaries will be used to detect the facial landmark points on a small set of images, and both the images and the results will then be returned to the participants so they can confirm that the results are the expected ones. This procedure is optional, but highly recommended in order to avoid any possible mistakes during the final evaluation. The binaries should have the format explained in the previous bullet and must be sent to the organizers using the email address: 300faces.challenge@gmail.com.
  • Paper submission: All manuscripts and any supplementary material should be submitted through the Elsevier Editorial System (EES). The authors must select "SI: 300W" when they reach the "Article Type" step in the submission process. The EES website is located at: http://ees.elsevier.com/imavis/.
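The required .pts output can be written in a few lines. The header below matches the format of the publicly released annotation files; the filename `image_0001.pts` is a hypothetical example (it must match the actual input image name):

```python
def save_pts(path, landmarks):
    """Write detected landmarks in the .pts annotation format:
    a version / n_points header, then one 'x y' pair per line
    enclosed in curly braces."""
    lines = ["version: 1", "n_points: {}".format(len(landmarks)), "{"]
    lines += ["{:.3f} {:.3f}".format(x, y) for x, y in landmarks]
    lines.append("}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# 68 placeholder landmark pairs (a real system would write its detections).
detected = [(float(i), float(i) + 0.5) for i in range(68)]
save_pts("image_0001.pts", detected)  # name matches the input image file
```

Recall that if no face is detected, no pts file should be saved at all.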



Winners

  • J. Deng, Q. Liu, J. Yang, D. Tao. M3 CSR: Multi-view, multi-scale and multi-component cascade shape regression. (Academia)
  • H. Fan, E. Zhou. Approaching human level facial landmark localization by deep learning. (Industry)



[CED curve figures: results reported for both the 51-point and 68-point configurations]


Participants

1. J. Cech, V. Franc, M. Uricar, J. Matas. Multi-view facial landmark detection by using a 3D shape model

2. J. Deng, Q. Liu, J. Yang, D. Tao. M3 CSR: Multi-view, multi-scale and multi-component cascade shape regression

3. H. Fan, E. Zhou. Approaching human level facial landmark localization by deep learning

4. B. Martinez, M. F. Valstar. L2,1-based regression and prediction accumulation across views for robust facial landmark detection

5. M. Uricar, V. Franc, D. Thomas, A. Sugimoto, V. Hlavac. Multi-view facial landmark detector learned by the structured output SVM


Should you have any enquiries, please contact us at: 300faces.challenge@gmail.com


References

[1] Belhumeur, P., Jacobs, D., Kriegman, D. and Kumar, N., ‘Localizing parts of faces using a consensus of exemplars’. In IEEE Int'l Conf. on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, June 2011.

[2] Zhu, X. and Ramanan, D., ‘Face detection, pose estimation and landmark localization in the wild’. In IEEE Int'l Conf. on Computer Vision and Pattern Recognition (CVPR), Providence, RI, June 2012.

[3] Le, V., Brandt, J., Lin, Z., Bourdev, L. and Huang, T.S., ‘Interactive Facial Feature Localization’. In European Conference on Computer Vision (ECCV), Firenze, Italy, October 2012.

[4] Messer, K., Matas, J., Kittler, J., Luettin, J. and Maitre, G., ‘XM2VTSDB: The extended M2VTS database’. In 2nd International Conference on Audio and Video-Based Biometric Person Authentication, 1999.

[5] Phillips, P., Flynn, P., Scruggs, T., Bowyer, K., Chang, J., Hoffman, K., Marques, J., Min, J. and Worek, W., 'Overview of the face recognition grand challenge', In IEEE Int'l Conf. on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, June 2005.

[6] Sagonas, C., Tzimiropoulos, G., Zafeiriou, S. and Pantic, M., 'A semi-automatic methodology for facial landmark annotation', IEEE Int'l Conf. On Computer Vision and Pattern Recognition (CVPR-W'13), 5th Workshop on Analysis and Modeling of Faces and Gestures (AMFG2013), Portland, OR, June 2013.

[7] Gross, R., Matthews, I., Cohn, J., Kanade, T. and Baker, S., ‘Multi-PIE’, Image and Vision Computing Journal, Elsevier, 28(5):807-813, 2010.

[8] Jain, V. and Learned-Miller, E., 'FDDB: A benchmark for face detection in unconstrained settings', University of Massachusetts, Amherst, 2010.