Context is important to recognize emotions

EMOTIC Dataset

The EMOTIC dataset, named after EMOTions In Context, is a database of images of people in real, unconstrained environments, annotated with their apparent emotions. Each image is annotated with an extended list of 26 discrete emotion categories, combined with the three common continuous dimensions: Valence, Arousal and Dominance.

Download our paper

Please cite our work if you use this service:

R. Kosti, J.M. Álvarez, A. Recasens and A. Lapedriza, "Context based emotion recognition using emotic dataset", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2019. (pdf, bibtex)

You can also check our first paper on "Emotions in Context":

R. Kosti, J.M. Álvarez, A. Recasens and A. Lapedriza, "Emotion Recognition in Context", Computer Vision and Pattern Recognition (CVPR), 2017. (pdf, bibtex)

Emotion categories

Peace

Well-being and relaxation; no worries; having positive thoughts or sensations; satisfied

Engagement

Paying attention to something; absorbed by something; curious; interested

Continuous dimensions

Valence: Negative / Positive

Arousal: Calm / Active

Dominance: Dominated / In Control
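The annotation scheme above (a subset of the 26 discrete categories plus continuous Valence, Arousal and Dominance scores per person) can be sketched as a simple record. This is an illustrative structure only, not the dataset's official file format; the field names and score values are assumptions for the example.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PersonAnnotation:
    """Hypothetical sketch of one EMOTIC-style annotation for a person in an image."""
    bbox: Tuple[int, int, int, int]  # (x1, y1, x2, y2) person bounding box, in pixels
    categories: List[str]            # subset of the 26 discrete emotion categories
    valence: float                   # Negative (low) ... Positive (high)
    arousal: float                   # Calm (low) ... Active (high)
    dominance: float                 # Dominated (low) ... In Control (high)

# Example record with made-up values:
ann = PersonAnnotation(
    bbox=(34, 12, 180, 290),
    categories=["Peace", "Engagement"],
    valence=7.0,
    arousal=3.0,
    dominance=6.0,
)
print(ann.categories)  # → ['Peace', 'Engagement']
```

A record like this makes explicit that the two label types coexist: a person can carry several discrete categories at once, while each continuous dimension is a single scalar per person.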

Motivation

The motivation of this project is to provide machines with the ability to understand what a person is experiencing from her frame of reference. This capacity is essential in our everyday life in order to perceive, anticipate and respond with care to people's reactions, which suggests that machines with this type of ability would interact better with people.

While remarkable improvements have been achieved in emotion recognition from facial expression or body posture, existing systems do not incorporate contextual information, meaning the situation and surroundings of the person. We expect that our EMOTIC dataset, in combination with previous datasets on emotion estimation, will open the door to new approaches to the problem of estimating emotion in the wild from visual information.

Acknowledgements

This work is partly supported by the Ministerio de Economía, Industria y Competitividad (Spain), TIN2015-66951-C2-2-R.

We thank NVIDIA for their generous hardware donations.