MAHNOB-Mimicry database

To study the phenomena occurring in social interactions between humans in more detail, and to allow machine analysis of these social signals, researchers need rich sets of labeled data from repeatable experiments that represent situations occurring in daily life. The MHI-Mimicry database was created to address this need, and more particularly to enable the analysis of mimicry in human-human interaction scenarios. Specifically, the goal of the database is to provide a collection of recordings in which the participating subjects act with a significant degree of resemblance and/or synchrony.

The recordings were made under controlled laboratory conditions using 15 cameras and 3 microphones, to obtain the most favorable conditions possible for analysis of the observed behavior. All sensory data were synchronized with extreme accuracy (less than 10 ns) using hardware triggering [2].

Recordings were made of two experiments: a discussion on a political topic, and a role-playing game. In total there are 54 recordings, of which 34 are of the discussions and 20 of the role-playing game. Apart from the recordings, the database contains annotations for many different phenomena, including dialogue acts, turn-taking, affect, head gestures, hand gestures, body movements and facial expressions.

In total, 40 participants were recruited, of whom 28 were male and 12 female, aged between 18 and 40 years at the time of the recordings. All participants self-reported their felt experiences after the experiments. Please cite [1] whenever using data from the MHI-Mimicry database.
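For convenience, the BibTeX entry for the database paper listed under Related Publications below can be used directly in a LaTeX document. A minimal sketch, assuming that entry has been saved to a file named references.bib (a hypothetical filename):

```latex
% Minimal citation example; assumes the sun2011multimodal BibTeX entry
% from the Related Publications section is stored in references.bib.
\documentclass{article}
\begin{document}
Experiments were carried out on the MHI-Mimicry
database~\cite{sun2011multimodal}.
\bibliographystyle{plain}
\bibliography{references}
\end{document}
```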

Related Publications

  1. A Multimodal Database for Mimicry Analysis

    X. Sun, J. Lichtenauer, M. F. Valstar, A. Nijholt, M. Pantic. Proceedings of the 4th Bi-Annual International Conference of the HUMAINE Association on Affective Computing and Intelligent Interaction (ACII2011). Memphis, Tennessee, USA, October 2011.

    Bibtex reference
    @inproceedings{sun2011multimodal,
        author = {X. Sun and J. Lichtenauer and M. F. Valstar and A. Nijholt and M. Pantic},
        address = {Memphis, Tennessee, USA},
        booktitle = {Proceedings of the 4th Bi-Annual International Conference of the HUMAINE Association on Affective Computing and Intelligent Interaction (ACII2011)},
        month = {October},
        title = {A Multimodal Database for Mimicry Analysis},
        year = {2011},
    }
    Endnote reference
    %0 Conference Proceedings
    %T A Multimodal Database for Mimicry Analysis
    %A Sun, X.
    %A Lichtenauer, J.
    %A Valstar, M. F.
    %A Nijholt, A.
    %A Pantic, M.
    %B Proceedings of the 4th Bi-Annual International Conference of the HUMAINE Association on Affective Computing and Intelligent Interaction (ACII2011)
    %D 2011
    %8 October
    %C Memphis, Tennessee, USA
    %F sun2011multimodal

  2. Cost-effective solution to synchronised audio-visual data capture using multiple sensors

    J. Lichtenauer, J. Shen, M. F. Valstar, M. Pantic. Image and Vision Computing. 29: pp. 666-680, September 2011.

    Bibtex reference
    @article{Lichtenauer2011a,
        author = {J. Lichtenauer and J. Shen and M. F. Valstar and M. Pantic},
        pages = {666--680},
        journal = {Image and Vision Computing},
        month = {September},
        publisher = {Elsevier},
        title = {Cost-effective solution to synchronised audio-visual data capture using multiple sensors},
        url = {http://dx.doi.org/10.1016/j.imavis.2011.07.004},
        volume = {29},
        year = {2011},
    }
    Endnote reference
    %0 Journal Article
    %T Cost-effective solution to synchronised audio-visual data capture using multiple sensors
    %A Lichtenauer, J.
    %A Shen, J.
    %A Valstar, M. F.
    %A Pantic, M.
    %J Image and Vision Computing
    %D 2011
    %8 September
    %V 29
    %I Elsevier
    %F Lichtenauer2011a
    %U http://dx.doi.org/10.1016/j.imavis.2011.07.004
    %P 666-680