[Face-research-list] Resources for Gestures and Emotion Recognition

Pablo Barros barros at informatik.uni-hamburg.de
Fri Aug 10 16:26:28 BST 2018


Dear all,

We are very happy to announce the release of resource material related
to our research on affective computing and gesture recognition. The
resources cover hand gesture recognition and emotion processing
(auditory, visual, and crossmodal), organized as three datasets (NCD,
GRIT, and OMG-Emotion), source code for our proposed neural network
solutions, pre-trained models, and ready-to-run demos.

The NAO Camera hand posture Database (NCD) was designed and recorded
using the camera of a NAO robot and contains four different hand
postures, with a total of 2000 recorded images. In each image the hand
appears in a different position, is not always centered, and some
fingers are sometimes occluded.
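
As an illustration only, a minimal loading sketch in Python, assuming
a hypothetical layout with one folder per posture class (the folder
names below are placeholders; the actual layout ships with the
dataset):

    import os
    import cv2  # OpenCV, for reading the images

    # Hypothetical layout: NCD/<posture>/<image>.png -- placeholders.
    def load_ncd(root="NCD"):
        images, labels = [], []
        for label in sorted(os.listdir(root)):
            class_dir = os.path.join(root, label)
            if not os.path.isdir(class_dir):
                continue
            for name in sorted(os.listdir(class_dir)):
                img = cv2.imread(os.path.join(class_dir, name))
                if img is None:  # skip any non-image files
                    continue
                images.append(img)
                labels.append(label)
        return images, labels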

The Gesture Commands for Robot InTeraction (GRIT) dataset contains
recordings of six different subjects performing eight command gestures
for Human-Robot Interaction (HRI): Abort, Circle, Hello, No, Stop, Turn
Right, Turn Left, and Warn. We recorded a total of 543 sequences, each
with a varying number of frames.
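
Because the sequences differ in length, a loader has to treat the
sequence, not the frame, as the unit. A sketch, again assuming a
hypothetical GRIT/<gesture>/<sequence>/<frame> layout rather than the
dataset's documented one:

    import os
    import cv2

    def iter_grit_sequences(root="GRIT"):
        for gesture in sorted(os.listdir(root)):
            gesture_dir = os.path.join(root, gesture)
            if not os.path.isdir(gesture_dir):
                continue
            for seq_id in sorted(os.listdir(gesture_dir)):
                seq_dir = os.path.join(gesture_dir, seq_id)
                if not os.path.isdir(seq_dir):
                    continue
                frames = [cv2.imread(os.path.join(seq_dir, f))
                          for f in sorted(os.listdir(seq_dir))]
                # yield the gesture label and a variable-length
                # list of frames
                yield gesture, [f for f in frames if f is not None]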

The One-Minute Gradual Emotion Corpus (OMG-Emotion) is composed of
YouTube videos which are around one minute long and are annotated with
continuous emotional behavior in mind. The videos were selected by a
crawler that searches for specific keywords associated with long-term
emotional behavior, such as "monologues", "auditions", "dialogues",
and "emotional scenes". After the videos were retrieved, we created an
algorithm to identify whether each video offered at least two
different modalities that contribute to emotional categorization:
facial expressions, language context, and a reasonably noiseless
environment. We selected a total of 420 videos, totaling around 10
hours of data.
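
As a simplified illustration of this kind of screening (not our actual
selection code, which is not included in this announcement), one could
sample frames and test for a detectable face with a standard OpenCV
Haar cascade:

    import cv2

    # Standard OpenCV frontal-face Haar cascade; a stand-in for the
    # facial-expression modality check, not the screening algorithm
    # itself.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def fraction_with_face(path, step=30):
        cap = cv2.VideoCapture(path)
        hits = total = idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:  # sample roughly one frame per second
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                faces = cascade.detectMultiScale(gray, 1.1, 5)
                hits += int(len(faces) > 0)
                total += 1
            idx += 1
        cap.release()
        return hits / total if total else 0.0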

Together with the datasets, we provide the source code for our
proposed neural models. These models are based on novel deep and
self-organizing neural networks that employ mechanisms inspired by
neuropsychological concepts. All of our models are formally described
in high-impact peer-reviewed publications. We also provide a
ready-to-run demo for visual emotion recognition based on our proposed
models.
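
For the visual demo, usage follows the usual Keras pattern. The model
file name and the 64x64 input size below are placeholders; the actual
files, input shapes, and instructions are documented in the repository:

    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    # "face_channel.h5" is a placeholder name, not the shipped file.
    model = load_model("face_channel.h5")

    def predict_emotion(image_path):
        img = cv2.imread(image_path)
        img = cv2.resize(img, (64, 64)).astype("float32") / 255.0
        probs = model.predict(np.expand_dims(img, axis=0))[0]
        return int(np.argmax(probs))  # index of the predicted class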

These resources are available through our GitHub repository:
https://github.com/knowledgetechnologyuhh/EmotionRecognitionBarros

We hope that with these resources we can contribute to the areas of
affective computing and gesture recognition and foster the development
of innovative solutions.

-- 
Dr.rer.nat. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/



