FG-2020 Competition: Affective Behavior Analysis in-the-wild (ABAW)

 

Latest News

The Competition: Affective Behavior Analysis in-the-wild (ABAW) will be held in conjunction with the IEEE International Conference on Automatic Face and Gesture Recognition (FG) 2020, in Buenos Aires, Argentina, 16-20 November 2020.

 

For any requests or enquiries, please contact: d.kollias@qmul.ac.uk  

 

 

Organisers

 

Chairs:

 

Stefanos Zafeiriou, Imperial College London, UK                                s.zafeiriou@imperial.ac.uk

Dimitrios Kollias, Imperial College London, UK                                   dimitrios.kollias15@imperial.ac.uk          

Attila Schulc,  Realeyes - Emotional Intelligence                                attila.schulc@realeyesit.com    

Elnar Hajiyev, Realeyes - Emotional Intelligence                                elnar@realeyesit.com 

 

 

 

How to acquire Aff-Wild2


If you are an academic (i.e., a person with a permanent position at a research institute or university), please:
i) fill in this EULA;
ii) use your official academic email (as data cannot be released to personal emails);
iii) send an email to d.kollias@qmul.ac.uk with subject: Aff-Wild2 request by academic;
iv) include in the email the signed EULA (see above), the reason why you require access to the Aff-Wild2 database, and your official academic website.

Ph.D. students fall under the above category, but their supervisor should perform the steps described above.


If you are from industry and you want to acquire Aff-Wild2 (either for research or commercial purposes), please email d.kollias@qmul.ac.uk with subject: Aff-Wild2 request from industry, and explain why access to the database is needed.

 

If you are an undergraduate or postgraduate student (but not a Ph.D. student), please:
i) fill in this EULA;
ii) use your official university email (data cannot be released to personal emails);
iii) send an email to d.kollias@qmul.ac.uk with subject: Aff-Wild2 request by student;
iv) include in the email the signed EULA (see above) and proof/verification of your current student status (e.g., student ID card or a webpage on the university site).

 

 

Due to the high volume of requests, please allow around a week for a reply to your access request.

 

 

 

Update

Anyone who wants to be part of our leaderboard should send the test set results, the GitHub code and an arXiv paper, as described below.

 

Leaderboard:

 

1) Valence-Arousal Challenge: 

 

 

Team Name | Results (CCC-V / CCC-A per submission) | Github | arXiv
-----------------------------------------------------------------------------------------------------------------------------------------------------------
NISL2020 | Subm 1: 0.421 / 0.387; Subm 2: 0.429 / 0.414; Subm 3: 0.426 / 0.452; Subm 4: 0.44 / 0.454 | Github | Paper
TNT | Subm 1: 0.283 / 0.36; Subm 2: 0.322 / 0.373; Subm 3: 0.437 / 0.402; Subm 4: 0.448 / 0.417 | Github | Paper
AIMM | Subm 1: 0.414 / 0.449 | Github | Paper
ICT-VIPL | Subm 1: 0.26 / 0.309; Subm 2: 0.274 / 0.327; Subm 3: 0.339 / 0.383; Subm 4: 0.361 / 0.408 | Github | Paper
CNU_ADL | Subm 1: 0.356 / 0.295; Subm 2: 0.368 / 0.342; Subm 3: 0.381 / 0.383; Subm 4: 0.386 / 0.354 | Github | Paper
M-Not | Subm 1: 0.389 / 0.321; Subm 2: 0.394 / 0.333 | Github | Paper
Nuctech DSAN | Subm 1: 0.328 / 0.253 | Github | Paper
FLAB2020 | Subm 1: 0.152 / 0.289; Subm 2: 0.11 / 0.292; Subm 3: 0.141 / 0.285; Subm 4: 0.194 / 0.294; Subm 5: 0.13 / 0.315; Subm 6: 0.213 / 0.336 | Github | Paper
SPK@EmoPred | Subm 1: 0.164 / 0.094; Subm 2: 0.181 / 0.121; Subm 3: 0.232 / 0.18 | Github | Paper
DenisRang | Subm 1: 0.201 / 0.168; Subm 2: 0.193 / 0.171 | Github | Paper
UPF-DTIC-EHIW | Subm 1: 0.134 / 0.126; Subm 2: 0.147 / 0.155; Subm 3: 0.171 / 0.162 | Github | Paper
ContactYourExpression | Subm 1: 0.133 / 0.182; Subm 2: 0.06 / 0.09; Subm 3: 0.078 / 0.102 | Github | Paper
Rest4Bs | Subm 1: 0.134 / 0.142 | Github | Paper
Orgil | Subm 1: 0.007 / 0.002 | Github | Paper

2) Expression Challenge: 

 

 

Team Name | Results (F1 Score / Accuracy / Total per submission) | Github | arXiv
-----------------------------------------------------------------------------------------------------------------------------------------------------------
TNT | Subm 1: 0.37 / 0.664 / 0.467; Subm 2: 0.378 / 0.683 / 0.479; Subm 3: 0.404 / 0.7 / 0.501; Subm 4: 0.398 / 0.734 / 0.509 | Github | Paper
SSSIHL DMACS | Subm 1: 0.312 / 0.702 / 0.441; Subm 2: 0.305 / 0.665 / 0.424; Subm 3: 0.318 / 0.665 / 0.434; Subm 4: 0.294 / 0.654 / 0.412; Subm 5: 0.3 / 0.66 / 0.418; Subm 6: 0.306 / 0.646 / 0.418; Subm 7: 0.318 / 0.668 / 0.434 | Github | Paper
SIU | Subm 1: 0.26 / 0.701 / 0.406; Subm 2: 0.282 / 0.7 / 0.419; Subm 3: 0.288 / 0.686 / 0.42; Subm 4: 0.283 / 0.689 / 0.417; Subm 5: 0.287 / 0.693 / 0.421; Subm 6: 0.277 / 0.697 / 0.416; Subm 7: 0.284 / 0.698 / 0.418 | Github | Paper
ICT-VIPL | Subm 1: 0.258 / 0.66 / 0.39; Subm 2: 0.274 / 0.669 / 0.404; Subm 3: 0.286 / 0.655 / 0.408; Subm 4: 0.287 / 0.652 / 0.408 | Github | Paper
NISL2020 | Subm 1: 0.292 / 0.535 / 0.372; Subm 2: 0.303 / 0.553 / 0.386; Subm 3: 0.271 / 0.652 / 0.397; Subm 4: 0.27 / 0.68 / 0.405 | Github | Paper
CNU_ADL | Subm 1: 0.263 / 0.546 / 0.356; Subm 2: 0.264 / 0.48 / 0.335; Subm 3: 0.311 / 0.547 / 0.389; Subm 4: 0.295 / 0.565 / 0.384 | Github | Paper
FLAB2020 | Subm 1: 0.208 / 0.653 / 0.355; Subm 2: 0.209 / 0.678 / 0.364; Subm 3: 0.219 / 0.666 / 0.367; Subm 4: 0.211 / 0.668 / 0.362; Subm 5: 0.225 / 0.663 / 0.369; Subm 6: 0.211 / 0.648 / 0.355 | Github | Paper
Robolab @ UBD | Subm 1: 0.2 / 0.63 / 0.342 | Github | Paper
SPK@EmoPred | Subm 1: 0.145 / 0.531 / 0.273 | Github | Paper
ContactYourExpression | Subm 1: 0.167 / 0.456 / 0.263; Subm 2: 0.183 / 0.377 / 0.247; Subm 3: 0.177 / 0.456 / 0.27 | Github | Paper
NucTech DSAN | Subm 1: 0.11 / 0.317 / 0.179 | Github | Paper
Rest4Bs | Subm 1: 0.09 / 0.278 / 0.152 | Github | Paper

3) Action Unit Challenge: 

 

 

Team Name | Results (F1 Score / Accuracy / Total per submission) | Github | arXiv
-----------------------------------------------------------------------------------------------------------------------------------------------------------
FLAB2020 | Subm 1: 0.366 / 0.926 / 0.646 | Github | Paper
NISL2020 | Subm 1: 0.196 / 0.933 / 0.565; Subm 2: 0.289 / 0.864 / 0.576; Subm 3: 0.236 / 0.938 / 0.587; Subm 4: 0.309 / 0.905 / 0.607 | Github | Paper
TNT | Subm 1: 0.204 / 0.928 / 0.566; Subm 2: 0.2 / 0.936 / 0.568; Subm 3: 0.257 / 0.937 / 0.597; Subm 4: 0.27 / 0.932 / 0.601 | Github | Paper
SALT | Subm 1: 0.161 / 0.906 / 0.533; Subm 2: 0.216 / 0.886 / 0.551 | Github | Paper
Netease Fuxi Virtual Human | Subm 1: 0.162 / 0.908 / 0.535; Subm 2: 0.185 / 0.894 / 0.54 | Github | Paper
Nuctech DSAN | Subm 1: 0.137 / 0.921 / 0.529 | Github | Paper
Rest4Bs | Subm 1: 0.12 / 0.90 / 0.51 | Github | Paper


Scope


This Competition aims at advancing the state-of-the-art in the problem of analysis of human affective behavior in-the-wild. Representing human emotions has been a basic topic of research. The most frequently used emotion representation is the categorical one, including the seven basic categories, i.e., Anger, Disgust, Fear, Happiness, Sadness, Surprise and Neutral. Discrete emotion representation can also be described in terms of the Facial Action Coding System model, in which all possible facial actions are described in terms of Action Units (AUs). Finally, the dimensional model of affect has been proposed as a means to distinguish between subtly different displays of affect and encode small changes in the intensity of each emotion on a continuous scale. The 2-D Valence and Arousal Space (VA-Space) is the most usual dimensional emotion representation; valence shows how positive or negative an emotional state is, whilst arousal shows how passive or active it is.

 

 

The Competition

 

The Competition is split into three Challenges-Tracks which, for the first time, are based on the same database and target dimensional and categorical affect recognition. In particular, the three Challenges-Tracks are: 

  • valence-arousal estimation
  • seven basic expression classification
  • facial action unit detection

These Challenges will produce a significant step forward compared to previous events. In particular, they use Aff-Wild2, the first comprehensive benchmark for all three affect recognition tasks in-the-wild. 

Participants are invited to participate in one or more of these Challenges.

There will be one winner per Challenge-Track; the winners are expected to contribute a paper describing their approach, methodology and results; the accepted winning papers will be part of the IEEE FG 2020 proceedings; all other teams will be able to submit a paper describing their solutions and final results to the Workshop that we are also organizing at FG2020.

For the purpose of the Challenges and to facilitate training, especially for people who do not have access to face detection/tracking algorithms, we provide both the cropped images and the cropped & aligned ones.

 

The baseline/white paper is ready. You can read it here.

 

 

Data: Aff-Wild2  

 

Aff-Wild2 is an extension of the Aff-Wild database (in terms of both annotations and videos). Aff-Wild2 is: i) an in-the-wild audiovisual database; ii) a large-scale database consisting of 564 videos of around 2.8M frames (the largest existing one); iii) the first database to contain annotations for all 3 behavior tasks (and also the first audiovisual database with annotations for AUs). 558 videos contain annotations for valence-arousal, 539 videos contain annotations for the 7 basic expressions and 57 videos contain annotations for 8 AUs (AU1, AU2, AU4, AU6, AU12, AU15, AU20, AU25).


How to participate


To participate, you need to register your team.
For this, please send us an email to: d.kollias@qmul.ac.uk

with the title "Affective Behavior Analysis in-the-wild Competition: Team Registration".
In this email include the following information:

  • Team Name
  • Team Members
  • Affiliation


There is no maximum number of participants in each team.

As a reply, you will receive access to the dataset's videos, annotations, cropped and cropped-aligned images and other important information.

 

At the end of the Challenges, each team will have to send us: i) their predictions on the test set, ii) a link to a Github repository where their solution/source code is stored, and iii) a link to an ArXiv paper of 2-6 pages describing their proposed methodology, the data used and the results. After that, the winner of each Challenge will be announced and will be invited to submit a paper describing the solution and results. All other (non-winning) teams will also be able to submit a paper describing their solutions and final results to the Workshop that we are organizing, entitled 'Affect Recognition in-the-wild: Uni/Multi-Modal Analysis'.

 

Rules

 

• Participants can contribute to any of the 3 Challenges.

• In order to take part in any Challenge, participants will have to register by sending an email to the organizers containing the following information: Team Name, Team Members, Affiliation.

• Participants can use scene/background/body pose etc. information along with the face information.

• Any face detector whether commercial or academic can be used in the challenge. The paper accompanying the challenge result submission should contain clear details of the detectors/libraries used.

• The participants are free to use external data for training along with the Aff-Wild2 partitions. However, this should be clearly discussed in the accompanying paper.

• The participants are free to use any pre-trained network, even the publicly available ones (CNN, AffWildNet) that displayed the best performance on the (former) Aff-Wild database (part of Aff-Wild2).

 

Performance Assessment

 

1) For Challenge-Track 1: Valence-Arousal Estimation: the Concordance Correlation Coefficient (CCC) will be the metric used to judge the performance of the models.

 

2) For Challenge-Track 2: 7 Basic Expression Classification: the performance metric will be:
0.67 * F1_Score + 0.33 * Accuracy
Note: F1 Score is the unweighted mean and Accuracy is the total accuracy.

 

3) For Challenge-Track 3: 8 Action Unit Detection: the performance metric will be:
0.5 * F1_Score + 0.5 * Accuracy

Note: F1 Score is the unweighted mean and Accuracy is the total accuracy. A computation sketch for all three metrics is given right below.
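
To make these metrics concrete, here is a minimal sketch of how they could be computed with NumPy and scikit-learn. It is only illustrative and not the official evaluation script; in particular, treating the AU "total accuracy" as element-wise accuracy over all frame-AU entries is our assumption (please check the baseline paper for the exact definitions).

# Illustrative sketch of the three evaluation metrics (not the official script).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient between two 1-D sequences (valence or arousal)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

def expr_metric(y_true, y_pred):
    """0.67 * unweighted (macro) F1 + 0.33 * total accuracy over the 7 expression labels."""
    return 0.67 * f1_score(y_true, y_pred, average="macro") + 0.33 * accuracy_score(y_true, y_pred)

def au_metric(y_true, y_pred):
    """0.5 * unweighted (macro) F1 + 0.5 * total accuracy; inputs have shape (num_frames, 8)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    macro_f1 = np.mean([f1_score(y_true[:, i], y_pred[:, i]) for i in range(y_true.shape[1])])
    total_acc = (y_true == y_pred).mean()  # assumption: accuracy over all frame-AU entries
    return 0.5 * macro_f1 + 0.5 * total_acc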

 

 

Test Set Submissions:



Participating teams are allowed to have at most 7 different submissions per Challenge-Track. A submission is considered valid if we receive the code, the paper and the results; so if you fail to submit one of these items, your submission will be invalid.

When sending your final results, make sure to clarify which Challenge-Track they correspond to, for example by storing them in a folder named after the Challenge-Track.

The format of the predictions should follow the format of the annotation files that we provided. So if the test set contains, for instance, 200 videos, the submission should also contain 200 text files (or a few more if some videos contain two subjects). The names of the files should match those of the files attached to this email. The first line of each file should be one of the following headers (as was the case with the annotation files), depending on which Challenge-Track it corresponds to:

  • valence,arousal
or
  • Neutral,Anger,Disgust,Fear,Happiness,Sadness,Surprise
or
  • AU1,AU2,AU4,AU6,AU12,AU15,AU20,AU25

After that, each line should contain the predictions corresponding to each video frame. 

For the VA Challenge-Track, each following line should have the valence and arousal values (first the valence value and then the arousal) comma separated (as was the case in the annotation files), such as:

0.58,0.32
 
For the AU Challenge-Track, each following line should have the 8 action unit values comma separated (as was the case in the annotation files), such as:

0,1,0,1,1,0,1,0
 
For the Expr Challenge-Track, each following line should have the expression value (as was the case in the annotation files), which is in {0,1,2,3,4,5,6} (these values correspond to the emotions {Neutral, Anger, Disgust, Fear, Happiness, Sadness, Surprise}). So, for instance, one line could be:

4
 
Note that your files should include predictions for all frames of each video (regardless of whether the face detection/bounding box extraction failed). So the total number of lines in a file should be equal to the total number of frames of the video plus one (the first line being the header described above).
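
As an illustration of the above format, here is a minimal, hypothetical sketch (the helper and file names are ours, not part of the Challenge material) that writes one prediction file per video: the required header line followed by one prediction line per frame.

# Hypothetical helpers for writing test-set prediction files in the format described above.

def write_va_predictions(filename, valence, arousal):
    """valence / arousal: per-frame prediction lists of equal length (one entry per video frame)."""
    assert len(valence) == len(arousal)
    with open(filename, "w") as f:
        f.write("valence,arousal\n")  # first line: header, as in the annotation files
        for v, a in zip(valence, arousal):
            f.write(f"{v},{a}\n")  # one line per frame: valence first, then arousal

def write_expr_predictions(filename, labels):
    """labels: per-frame integers in {0,...,6} (Neutral, Anger, ..., Surprise)."""
    with open(filename, "w") as f:
        f.write("Neutral,Anger,Disgust,Fear,Happiness,Sadness,Surprise\n")
        for label in labels:
            f.write(f"{label}\n")

def write_au_predictions(filename, activations):
    """activations: per-frame lists of eight 0/1 values (AU1, AU2, AU4, AU6, AU12, AU15, AU20, AU25)."""
    with open(filename, "w") as f:
        f.write("AU1,AU2,AU4,AU6,AU12,AU15,AU20,AU25\n")
        for frame in activations:
            f.write(",".join(str(int(a)) for a in frame) + "\n")

# Example call (hypothetical file name): write_va_predictions("video59.txt", valence_preds, arousal_preds)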



References


If you use the above data, you must cite all of the following papers: 


  • D. Kollias et al.: "Analysing Affective Behavior in the First ABAW 2020 Competition". IEEE FG, 2020

@inproceedings{kollias2020analysing, title={Analysing Affective Behavior in the First ABAW 2020 Competition}, author={Kollias, D and Schulc, A and Hajiyev, E and Zafeiriou, S}, booktitle={2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)(FG)}, pages={794--800}}

 

  • D. Kollias, S. Zafeiriou: "Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace". BMVC, 2019

@article{kollias2019expression, title={Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.04855}, year={2019} }


  • D. Kollias et al.: "Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network", 2019

@article{kollias2019face,title={Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.11111}, year={2019}}

 

  • D. Kollias et al.: "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond". International Journal of Computer Vision (IJCV), 2019

@article{kollias2019deep, title={Deep affect prediction in-the-wild: Aff-wild database and challenge, deep architectures, and beyond}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Schuller, Bj{\"o}rn and Kotsia, Irene and Zafeiriou, Stefanos}, journal={International Journal of Computer Vision}, pages={1--23}, year={2019}, publisher={Springer} }

 

  • S. Zafeiriou et al.: "Aff-Wild: Valence and Arousal in-the-wild Challenge". CVPR, 2017

@inproceedings{zafeiriou2017aff, title={Aff-wild: Valence and arousal ‘in-the-wild’challenge}, author={Zafeiriou, Stefanos and Kollias, Dimitrios and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Kotsia, Irene}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1980--1987}, year={2017}, organization={IEEE} }

 

  • D. Kollias et al.: "Recognition of affect in the wild using deep neural networks". CVPR, 2017

@inproceedings{kollias2017recognition, title={Recognition of affect in the wild using deep neural networks}, author={Kollias, Dimitrios and Nicolaou, Mihalis A and Kotsia, Irene and Zhao, Guoying and Zafeiriou, Stefanos}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1972--1979}, year={2017}, organization={IEEE} }

 

 

Regarding the database:

 

• The database and annotations are available for academic non-commercial research purposes only. If you want to use them for any other purpose (e.g., industrial, either research or commercial), email D.Kollias@greenwich.ac.uk.

• All the training/validation/testing images of the dataset have been obtained from YouTube. We are not responsible for the content or the meaning of these images.

• Participants will agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes any portion of the images or any portion of the derived data. They will also agree not to further copy, publish or distribute any portion of the annotations of the dataset. Copies of the dataset may only be made for internal use at a single site within the same organization.

• We reserve the right to terminate participants’ access to the dataset at any time.

• If a participant's face is displayed in any video and (s)he wants it to be removed, (s)he can email us at any time.

 

 

Important Dates (updated): 

  • Call for participation announced, team registration begins, data available: 18 November 2019
  • Final submission deadline (Results, Code and ArXiv paper): 9 February 2020
  • Winners announcement: 10 February 2020
  • Final paper submission deadline: 24 February 2020
  • Review decisions sent to authors; notification of acceptance: 29 February 2020
  • Camera-ready version deadline: 4 March 2020

 

 

Final Paper Submission Information

 

The paper format should adhere to the paper submission guidelines for FG2020. Please have a look at the Instructions for paper submission for review.

The submission process will be handled through CMT.

All accepted manuscripts will be part of the FG2020 conference proceedings.

 

 

Keynote Speakers

 

Aleix M. Martinez


Aleix M. Martinez is a Professor in the Department of Electrical and Computer Engineering at The Ohio State University (OSU), where he is the founder and director of the Computational Biology and Cognitive Science Lab. He is also affiliated with the Department of Biomedical Engineering and with the Center for Cognitive Science, where he is a member of the executive committee. Prior to joining OSU, he was affiliated with the Electrical and Computer Engineering Department at Purdue University and with the Sony Computer Science Lab. He has served as an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Affective Computing, Computer Vision and Image Understanding, and Image and Vision Computing. He has been an area chair for many top conferences and was Program Chair for CVPR 2014 in his hometown, Columbus, OH. He is also a member of the Cognition and Perception study section at NIH and has served as a reviewer for numerous NSF and NIH programs as well as other national and international funding agencies. Dr. Martinez is the recipient of numerous awards, including best paper awards at ECCV and CVPR, the Lumley Research Award, and a Google Faculty Research Award. Dr. Martinez's research has been covered by numerous national media outlets, including CNN, The Huffington Post, Time Magazine, CBS News and NPR, as well as international outlets, including The Guardian, Spiegel, El Pais and Le Monde. 

 

 

Pablo Barros


Pablo Barros is currently working as a research scientist at the Italian Institute of Technology in Genova, Italy. His main focus is on the development of deep and self-organizing neural networks for different aspects of emotional appraisal and display in social robots. Prior to that, he was a Post-Doctoral Research Associate in the TRR Crossmodal Learning Project at the Knowledge Technology research group at the University of Hamburg, Germany. He has also been a Visiting Professor at the University of Pernambuco (UPE). He holds a Bachelor's degree in Information Systems from the Federal Rural University of Pernambuco (UFRPE), a Master's in Computer Engineering from the University of Pernambuco (UPE), and a PhD in Computer Science from the University of Hamburg, Germany. He has worked on projects involving affective computing and robotics, assistive computing, artificial neural networks and computational intelligence. He has organized many very successful workshops at top conferences, such as IEEE FG, IEEE/RSJ IROS, IEEE WCCI and IEEE ICDL-EPIROB. Additionally, he was a Chair of two Competitions held in conjunction with IEEE FG and IEEE WCCI/IJCNN 2018. He has organized special issues in journals such as Frontiers in Neurorobotics, IEEE Transactions on Affective Computing and Elsevier Cognitive Systems Research.

 

 

 

Sponsors:

The Affective Behavior Analysis in-the-wild Challenge has been generously supported by:
Imperial College London
and
Realeyes - Emotional Intelligence