Action Unit Detector (2016)


This application takes as input the locations of fiducial facial points extracted with the Chehra face tracker [1]. These points are passed through several pre-processing blocks, including normalization, alignment and dimensionality reduction. Each target frame is then classified as AU-active or AU-inactive using a CRF classifier trained independently for each of AU1, AU2, AU4, AU6 and AU12.
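As a rough illustration of the pre-processing stage described above, the sketch below normalizes tracked facial landmarks and reduces their dimensionality with PCA. This is only a minimal NumPy approximation, not the released code: the landmark indexing (68-point layout, eye-corner indices 36 and 45) and the normalization scheme (centering plus interocular-distance scaling) are assumptions, and the CRF classification step itself is omitted.

```python
import numpy as np

def normalize_landmarks(points, left_eye=36, right_eye=45):
    """Center 2-D landmarks and scale by interocular distance.
    The 68-point layout and eye-corner indices are assumptions;
    the actual tracker output may use a different convention."""
    centered = points - points.mean(axis=0)
    iod = np.linalg.norm(points[right_eye] - points[left_eye])
    return centered / iod

def pca_reduce(X, n_components):
    """Project feature vectors onto their top principal components
    via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Example: 100 frames of 68 (x, y) fiducial points (synthetic data).
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 68, 2))
feats = np.stack([normalize_landmarks(f).ravel() for f in frames])
reduced = pca_reduce(feats, n_components=10)
print(reduced.shape)  # (100, 10)
```

The reduced features would then be fed, frame by frame, to one binary classifier per action unit; in the release these are CRF classifiers as described in [2].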


Please follow the instructions in the demo file.

[DOWNLOAD]


The code is released as-is, **for research purposes only**.


Contact:
Robert Walecki
r.walecki14@imperial.ac.uk

Feel free to modify/distribute, but please cite the following papers:

[1] A. Asthana, S. Zafeiriou, S. Cheng, M. Pantic. "Incremental Face Alignment in the Wild" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014). June 2014. (pdf)

[2] R. Walecki, O. Rudovic, V. Pavlovic, M. Pantic. "Variable-state Latent Conditional Random Fields for Facial Expression Recognition and Action Unit Detection" Proceedings of IEEE International Conference on Automatic Face and Gesture Recognition (FG'15). Ljubljana, Slovenia, pp. 1 - 8, May 2015. (pdf)

[3] O. Rudovic, V. Pavlovic, M. Pantic. "Multi-output Laplacian Dynamic Ordinal Regression for Facial Expression Recognition and Intensity Estimation" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012). Providence, USA, pp. 2634 - 2641, June 2012. (pdf)