The motivation of this project is to provide machines with the ability to understand what a person is experiencing from his or her frame of reference. This capacity is essential in everyday life for perceiving, anticipating, and responding with care to people's reactions, which suggests that machines with this ability would interact better with people.
While remarkable improvements have been achieved in emotion recognition from facial expressions or body posture, existing systems do not incorporate contextual information, that is, the situation and surroundings of the person. We expect that our EMOTIC dataset, in combination with previous datasets on emotion estimation, will open the door to new approaches to the problem of emotion estimation in the wild from visual information.