
ChaLearn Looking at People 2014

Final Evaluation data labels published

We have added the evaluation scripts and labels to the Test data tab of each track. You need your team credentials to access this information.

Final results

Dear participants, we are pleased to announce the final results for ChaLearn Looking at People 2014.

Track 1: Human Pose Recovery

Rank  Team Name       Score
1     ZJU             0.194144
2     Seawolf Vision  0.182097

Track 2: Action/Interaction Recognition

Rank  Team Name     Score
1     CUHK-SWJTU    0.507173
2     ADSC          0.501164
3     SBUVIS        0.441405
4     DonkeyBurger  0.342192
5     UC-T2         0.121565
6     MindLAB       0.008383

Track 3: Gesture Recognition

Rank  Team Name         Score
1     liris             0.849987
2     CraSPN            0.833904
3     JY                0.826799
4     CUHK-SWJTU        0.791933
5     lpigou            0.788804
6     stevenwudi        0.78731
7     ismar             0.746632
8     Quads             0.745449
9     Telepoints        0.688778
10    TUM-fortiss       0.648979
11    CSU-SCM           0.597177
12    iva.mm            0.556251
13    Terrier           0.539025
14    Team Netherlands  0.430709
15    VecsRel           0.408012
16    Samgest           0.391613
17    YNL               0.2706

In 2014, ChaLearn organizes three parallel challenge tracks: human pose recovery on RGB data, action/interaction spotting on RGB data, and gesture spotting on RGB-Depth data.

The top three ranked participants in each track will be awarded and invited to follow the ECCV workshop submission guide, so that a description of their system can be included in the ECCV workshop proceedings, and to submit an extended paper to a special issue on gesture recognition in a high-impact-factor journal.


This is a skill-based contest and chance plays no part in the determination of the winner(s). The contest is split into three competition tracks (with a shared schedule):

Track 1: Human Pose Recovery:

Focus of the Contest: More than 8,000 frames of continuous RGB sequences are recorded and labeled, with the objective of recovering human pose by recognizing more than 120,000 human limbs of different people.

Track 2: Action/Interaction Recognition:

Focus of the Contest: Recognizing actions/interactions using 235 performances of 11 action/interaction categories, recorded and manually labeled in continuous RGB sequences of different people performing natural isolated and collaborative behaviors.

Track 3: Gesture Recognition:

Focus of the Contest: Recognizing gestures drawn from a vocabulary of Italian sign gesture categories. The emphasis of this track is on multi-modal automatic learning of a set of 20 gestures performed by several different users, with the aim of performing user-independent continuous gesture spotting.

This track follows a previous challenge organized on the same theme: ChaLearn Multi-modal Gesture Recognition 2013. In this new edition, more precise labels are provided, allowing a gesture spotting competition. The data contains more than 900 samples, comprising nearly 14,000 gesture instances and more than 1.4 million frames.

Source Code

Python code is provided to access the data and evaluate your predictions. The code files are common to all tracks; see the source code reference within each track.

ChalearnLAPSample

Defines the classes and methods to access the data.

Download [09/02/2014]
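
As an illustration, a minimal usage sketch is shown below. The class and method names (GestureSample, getNumFrames, getRGB, getGestures) and the sample file name are assumptions based on the sample code of the 2013 edition; check the downloaded module for the exact interface.

    # Minimal sketch of reading one Track 3 sample with ChalearnLAPSample.
    # Assumption: the module exposes a GestureSample class with
    # getNumFrames/getRGB/getGestures accessors, as in the 2013 edition.
    from ChalearnLAPSample import GestureSample

    sample = GestureSample("Sample0001.zip")  # hypothetical sample file

    # Iterate over the RGB frames of the continuous sequence.
    for frame in range(1, sample.getNumFrames() + 1):
        rgb = sample.getRGB(frame)  # image for this frame
        # ... extract features / run your spotting model here ...

    # Ground-truth annotations: (gestureID, startFrame, endFrame) triplets.
    for gesture_id, start, end in sample.getGestures():
        print(gesture_id, start, end)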

ChalearnLAPEvaluation

Defines the methods for evaluation.

Download [09/02/2014]
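
For intuition about the scores reported above, the spotting tracks are evaluated with an overlap (Jaccard index) measure between predicted and ground-truth intervals. The stand-alone sketch below computes that overlap for a single instance; it is illustrative only, not the ChalearnLAPEvaluation code.

    # Illustrative Jaccard overlap between a predicted and a ground-truth
    # frame interval; a stand-alone sketch, not the ChalearnLAPEvaluation code.
    def jaccard_overlap(pred_start, pred_end, gt_start, gt_end):
        """Intersection over union of two inclusive frame intervals."""
        intersection = max(0, min(pred_end, gt_end) - max(pred_start, gt_start) + 1)
        union = (pred_end - pred_start + 1) + (gt_end - gt_start + 1) - intersection
        return intersection / float(union)

    # Example: prediction [10, 40] against ground truth [20, 50] -> 21/41.
    print(jaccard_overlap(10, 40, 20, 50))  # ~0.512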

Track Information

Each track page contains the data description, provided code details, and download links. See the track tabs for specific information.