Lightweight Face Recognition Challenge & Workshop (ICCV 2019)

Latest News

The Lightweight Face Recognition Challenge & Workshop will be held in conjunction with the International Conference on Computer Vision (ICCV) 2019 in Seoul, Korea.


Workshop Agenda

8:30am ~ 12:30pm, 28th Oct 2019

Room: 307BC

1. Top-ranked Solutions from the Challenge (~5 min each)

2. Invited Talks (~30 min each)

Rama Chellappa: Lightweight Face Recognition and Verification

Stan Z. Li: Techniques for Solving Challenging Problems in Face Recognition

Xiaoming Liu: On the Interpretability, Vulnerability, and Decomposability of Faces

Ji Lin: Design Automation for Efficient Deep Learning Computing

3. Awards

Organisers

General Chairs:

Jiankang Deng, Imperial College London, UK (j.deng16@imperial.ac.uk)

Jia Guo, InsightFace (guojia@gmail.com)

Data Chairs:

Debing Zhang, Yafeng Deng, Song Shi, Xiangju Lu

Scope

Face recognition in static images and video sequences captured under unconstrained recording conditions is one of the most widely studied topics in computer vision, owing to its extensive applications in surveillance, law enforcement, biometrics, marketing, and beyond. Methodologies achieving strong performance have recently been presented at top-tier computer vision conferences (e.g. ICCV, CVPR, ECCV), and deep learning-based methods have driven great progress in face recognition. Yet even though comprehensive benchmarks and extensive efforts exist for deep face recognition, very limited effort has been made towards benchmarking lightweight deep face recognition, which targets model compactness and energy efficiency to enable efficient system deployment. At ICCV 2019, we take a significant step forward and propose a new comprehensive benchmark, as well as organise the first challenge & workshop for lightweight deep face recognition.

 

Lightweight Face Recognition Challenge

Track 1 (1G FLOPs): 

In this track, we consider the application scenario of unlocking a mobile phone with a smooth user experience (< 50ms on ARM).
Detailed requirements:
(1) The upper bound of computational complexity is 1G FLOPs.
(2) The upper bound of model size is 20MB.
(3) Only float32 solutions are accepted; float16, int8 and all other quantization methods are not allowed.
(4) The upper bound of the feature dimension is 512.

Track 2 (30G FLOPs):

In this track, we follow the submission requirements of the Face Recognition Vendor Test (FRVT) [1] (< 1s on CPU).
Detailed requirements (a back-of-envelope FLOPs check is sketched after this list):
(1) The upper bound of computational complexity is 30G FLOPs.
(2) Only float32 solutions are accepted; float16, int8 and all other quantization methods are not allowed.
(3) The upper bound of the feature dimension is 512.
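For a quick sanity check against these budgets, the FLOPs of a standard convolution can be estimated by hand. Below is a minimal Python sketch; the layer shape is a hypothetical example, not part of the challenge specification, and the official count is produced by the organisers' MXNet FLOPs calculator, whose exact convention may differ.

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k):
    """Approximate FLOPs of a k x k convolution, counting one
    multiply-add as two floating-point operations."""
    return 2 * h_out * w_out * c_out * c_in * k * k

# Hypothetical first layer of a 112x112 face network:
# 3x3 conv, 3 -> 64 channels, stride 1.
flops = conv2d_flops(112, 112, 3, 64, 3)
print(f"{flops / 1e9:.3f} GFLOPs")  # ~0.043 of the 1 GFLOPs Track-1 budget
```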

 

Training

Training data: 

Our training dataset is a cleaned version of MS1M [2]. All face images are aligned and cropped to 112x112 using the five facial landmarks predicted by RetinaFace [3]. In total, there are 5.1M images of 93K identities. The training data is fixed to facilitate future comparison and reproduction; a minimal reading sketch is given after the download links below.
Detailed requirements:
(1) All participants must use this dataset for training without any modification (e.g. re-alignment and changing the image size are both prohibited).
(2) No external datasets are allowed.

Download Links: Baidu Cloud, Dropbox.
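The InsightFace-style training pack is typically distributed as an MXNet RecordIO pair; assuming that layout (the file names below are illustrative), a minimal sketch for inspecting one sample:

```python
import mxnet as mx

# Assumed InsightFace-style pack: train.idx / train.rec (names illustrative).
record = mx.recordio.MXIndexedRecordIO('faces/train.idx', 'faces/train.rec', 'r')

item = record.read_idx(1)  # index 0 may hold pack metadata in InsightFace packs
header, img = mx.recordio.unpack_img(item)  # decode JPEG into an HxWx3 array
print(header.label, img.shape)  # identity label and the fixed 112x112 crop
```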

Training Reference

InsightFace (MXNet) [4] is highly recommended, but the challenge places no restriction on deep learning frameworks (e.g. TensorFlow, PyTorch, Caffe). InsightFace provides efficient parallel training, FLOPs calculation and baselines.

Training Method

The participants can use any method (e.g. better network and loss design) to improve the performance, but external datasets and models are not allowed.

 

Testing

Large-scale Image Test Set:

We take the Trillion-Pairs dataset [5] as our large-scale image test set. Trillion-Pairs consists of the following two parts:
(1) ELFW: face images of celebrities on the LFW name list. There are 274K images from 5.7K identities.
(2) DELFW: distractors for ELFW. There are 1.58M face images from Flickr.

All test images are preprocessed to the size of 112x112 (same as the training data). Modification of test images (e.g. re-alignment or resizing) is not allowed. Horizontal flipping is allowed for test-time augmentation, while all other test augmentation methods are prohibited; a flip-fusion sketch is given after the download links below. Multi-model ensemble strategies are also not allowed.

Download Links: Baidu Cloud, Dropbox.
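Since horizontal flipping is the only permitted test-time augmentation, a common recipe is to fuse the features of each image and its mirror, e.g. by summation followed by L2 normalisation. A minimal sketch, where extract_feature is a hypothetical stand-in for your model's forward pass, not part of the challenge kit:

```python
import numpy as np

def flip_aug_feature(img, extract_feature):
    """Fuse the embeddings of an image and its horizontal mirror.

    img: HxWx3 array; extract_feature: callable returning a 1-D embedding
    (a stand-in for the model forward pass).
    """
    feat = extract_feature(img) + extract_feature(img[:, ::-1, :])
    return feat / np.linalg.norm(feat)  # L2-normalise for cosine scoring
```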

Large-scale Video Test Set:

We take the iQIYI-VID test set [6] as our large-scale video test set. It includes 200K videos of 10K identities.

Face frames are extracted from each video at 8 FPS and preprocessed to the size of 112x112 (same as the training data). We will provide 6.3M preprocessed face crops instead of the original videos to simplify the competition. The mapping between videos and frames will also be provided, and participants can investigate how to aggregate frame features into a video feature; a simple averaging baseline is sketched below. Modification of test images (e.g. re-alignment or resizing) is not allowed. Horizontal flipping is allowed for test-time augmentation, while all other test augmentation methods are prohibited. Multi-model ensemble strategies are also not allowed.

Download Links: Please download iQIYI-VID-FACE.z01, iQIYI-VID-FACE.z02 and iQIYI-VID-FACE.zip after registration.
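A simple baseline for frame-to-video aggregation is to average the frame embeddings of each video and renormalise. The sketch below assumes the provided mapping has been parsed into a dict of frame row indices per video; the actual file format may differ.

```python
import numpy as np

def aggregate_video_features(frame_feats, video_to_frames):
    """Average-pool frame embeddings into one embedding per video.

    frame_feats: (FrameNum, FeatureDim) float32 matrix of frame features.
    video_to_frames: {video_id: [frame row indices]} -- an assumed parsed
    form of the provided video-to-frame mapping.
    """
    video_feats = {}
    for vid, rows in video_to_frames.items():
        feat = frame_feats[rows].mean(axis=0)
        video_feats[vid] = feat / np.linalg.norm(feat)  # renormalise
    return video_feats
```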

Performance Assessment

We employ the 1:1 verification protocol on both the image and the video test sets. More specifically, we use the true accept rate at a fixed false accept rate (TAR@FAR) as our evaluation metric.
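Concretely, TAR@FAR picks the similarity threshold at which impostor (non-match) pairs are accepted at the target false accept rate, then reports the fraction of genuine pairs accepted at that threshold. A minimal numpy sketch for intuition only; the official scoring is done by the test server:

```python
import numpy as np

def tar_at_far(genuine, impostor, far=1e-6):
    """True accept rate at a fixed false accept rate.

    genuine, impostor: 1-D arrays of match / non-match similarity scores.
    Illustrative only -- the official metric is computed by the test server.
    """
    threshold = np.quantile(impostor, 1.0 - far)  # far of impostors score higher
    return float(np.mean(genuine >= threshold))
```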

For the image test set, participants submit a binary feature matrix (ImageNum x FeatureDim, in float32) to the test server; for the video test set, a binary feature matrix (VideoNum x FeatureDim, in float32). For all valid submissions, the final ranking is decided by accuracy alone.
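Assuming the server expects a raw row-major float32 dump of the matrix, a minimal writing sketch with numpy (the file name is illustrative; the exact upload procedure is defined by the test server):

```python
import numpy as np

# Stand-in for real features: one 512-dim float32 row per image (or video).
features = np.random.rand(1000, 512).astype(np.float32)
assert features.dtype == np.float32 and features.shape[1] <= 512

features.tofile('submission.bin')  # raw row-major float32, Num x FeatureDim
```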

Test server address: http://39.104.128.76/overview

Award:

Track 1 and Track 2 each receive 50% of the prize pool. For each task in each track, the top three teams will receive award certificates and cash prizes split 50%, 30% and 20%, respectively. Rankings are read directly from the leaderboard.

For each track, we will collect training details from the top three participants after the test server closes. We will verify the FLOPs and reproduce the accuracy in MXNet. Our re-implementation will be released under the participants' names. Both the participants' results and our re-implementation will be included in the challenge report.

 

Online Discussion:

See this link for challenge tips and further discussion.

 

Challenge & Workshop Paper Submission:

Challenge participants should submit a paper summarising the methodology and achieved performance of their algorithm. Submissions should adhere to the main ICCV 2019 proceedings style. The workshop papers will be published in the ICCV 2019 proceedings. Please sign up in the submission system to submit your paper.


Important Dates:

  • 20 April - Announcement of the Challenge
  • 20 April - Release of the training and testing data
  • 25 April - Release of our baseline solution (training code, training log and pre-trained model)
  • 25 April - Release of the image and video test leaderboard
  • 10 July - Leaderboard submissions close (11:59 PM Pacific Time)
  • 11 July - Results return to the authors for inclusion in the paper
  • 15 July - Deadline for paper submission
  • 22 July - Final decision on submitted papers
  • 30 August - Deadline for camera-ready

 

References 

[1] https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt-ongoing

[2] Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao. "MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition." ECCV, 2016.

[3] Jiankang Deng, Jia Guo, Yuxiang Zhou, Jinke Yu, Irene Kotsia, and Stefanos Zafeiriou. "RetinaFace: Single-stage Dense Face Localisation in the Wild." arXiv, 2019.

[4] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. "ArcFace: Additive Angular Margin Loss for Deep Face Recognition." CVPR, 2019.

[5] http://trillionpairs.deepglint.com/overview 

[6] http://challenge.ai.iqiyi.com/data-cluster

 

Sponsors:

The Lightweight Face Recognition Challenge has been supported by 

EPSRC project FACER2VM (EP/N007743/1)

Huawei ($5,000)

Kingsoft Cloud ($3,000)

iQIYI ($3,000)

Pensees ($3,000)

XForwardAI ($3,000)

Dynamic funding pool: $17,000


Contact:

Workshop Administrator:  insightface.challenge@gmail.com