Laughter is one of the most common and useful human social signals. It helps humans express their emotions and intentions in social interactions and provides useful feedback during interpersonal exchanges. Surprisingly, however, our knowledge of laughter remains incomplete and little empirical evidence is available, largely because suitable data are scarce.
Moreover, most existing corpora that contain laughter offer only audio recordings, yet laughter is clearly an audiovisual event: it consists of an audio component, the laughter vocalisation, and a visual component involving facial activity around the mouth and the cheeks, and often the upper face.
Therefore, to enable the study of laughter using both audio and visual information, we have created an audiovisual database containing laughter, speech, posed smiles, and acted laughter. Video was captured by a camera at 25 fps, audio by two microphones (a lapel microphone and the camera's built-in microphone), and thermal images by a thermal camera, also at 25 fps.
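Since both visual streams run at 25 fps, aligning audio events with video and thermal frames reduces to a simple index conversion. A minimal sketch, assuming a 48 kHz audio sampling rate (the actual rate is not stated in this description):

```python
AUDIO_SR = 48_000   # assumed audio sampling rate in Hz (illustrative only)
VIDEO_FPS = 25      # video and thermal frame rate stated above

def sample_to_frame(sample_index: int,
                    sr: int = AUDIO_SR,
                    fps: int = VIDEO_FPS) -> int:
    """Return the 25 fps frame index shown at the given audio sample."""
    return sample_index * fps // sr

def frame_to_sample(frame_index: int,
                    sr: int = AUDIO_SR,
                    fps: int = VIDEO_FPS) -> int:
    """Return the first audio sample index covered by the given frame."""
    return frame_index * sr // fps
```

With these assumptions, one second of audio (48 000 samples) spans exactly 25 frames, so annotations made on the audio track can be mapped losslessly to frame boundaries.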
Twenty-two subjects (12 male, 10 female) were recorded in four sessions. In the first session, the subjects were recorded while watching funny video clips. In the second and third sessions, they were asked to pose a smile and to produce acted laughter, respectively. In the final session, they spoke first in their mother tongue and then in English. In total, 180 sessions are available, with a total duration of 3 h 49 min. The recordings contain 563 laughter episodes, 849 speech utterances, 51 instances of acted laughter, approximately 50 posed smiles, and 167 other vocalisations.
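For reference, the per-category annotation counts listed above can be tallied as follows; this is a minimal sketch in which the approximate posed-smile figure is taken as 50:

```python
# Annotation counts per category, as listed in the database description.
counts = {
    "laughter": 563,
    "speech utterances": 849,
    "acted laughter": 51,
    "posed smiles": 50,   # approximate (~50) in the description
    "other vocalisations": 167,
}

# Roughly 1,680 annotated instances overall.
total = sum(counts.values())
```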
Visit http://mahnob-db.eu/laughter/ to access the database.