From barros at informatik.uni-hamburg.de Fri Aug 10 16:26:28 2018
From: barros at informatik.uni-hamburg.de (Pablo Barros)
Date: Fri, 10 Aug 2018 17:26:28 +0200
Subject: [Face-research-list] Resources for Gestures and Emotion Recognition
Message-ID: <6075c734-41d0-bcc0-d10a-b4574c46afcd@informatik.uni-hamburg.de>

Dear all,

We are very happy to announce the release of resource material related to our research on affective computing and gesture recognition. This resource covers hand gesture recognition and emotion processing (auditory, visual, and crossmodal), organized as three datasets (NCD, GRIT, and OMG-Emotion), source code for the proposed neural network solutions, pre-trained models, and ready-to-run demos.

The NAO Camera hand posture Database (NCD) was designed and recorded using the camera of a NAO robot and contains four different hand postures. A total of 2000 images were recorded. In each image, the hand appears in a different position, not always centered, and sometimes with some fingers occluded.

The Gesture Commands for Robot InTeraction (GRIT) dataset contains recordings of six different subjects performing eight command gestures for Human-Robot Interaction (HRI): Abort, Circle, Hello, No, Stop, Turn Right, Turn Left, and Warn. We recorded a total of 543 sequences, with a varying number of frames in each.

The One-Minute Gradual Emotion Corpus (OMG-Emotion) is composed of YouTube videos that are about a minute long and are annotated with continuous emotional behavior in mind. The videos were selected by a crawler that uses specific keywords associated with long-term emotional behaviors, such as "monologues", "auditions", "dialogues", and "emotional scenes".
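As a purely illustrative sketch (not part of the announcement), a posture dataset like NCD could be consumed with a minimal loader, assuming a hypothetical folder-per-posture layout (`root/<posture_name>/<image>.png`); the actual on-disk layout is documented in the GitHub repository:

```python
import os
import tempfile

def list_posture_samples(root):
    """Collect (image_path, posture_label) pairs from a folder-per-class layout.

    Hypothetical layout (an assumption, not specified by the announcement):
        root/<posture_name>/<image_file>
    """
    samples = []
    for label in sorted(os.listdir(root)):
        class_dir = os.path.join(root, label)
        if not os.path.isdir(class_dir):
            continue  # skip stray files at the top level
        for fname in sorted(os.listdir(class_dir)):
            if fname.lower().endswith((".png", ".jpg", ".jpeg")):
                samples.append((os.path.join(class_dir, fname), label))
    return samples

# Minimal demonstration on a synthetic directory tree with two dummy classes.
root = tempfile.mkdtemp()
for posture in ("fist", "open_hand"):
    class_dir = os.path.join(root, posture)
    os.makedirs(class_dir)
    open(os.path.join(class_dir, "img_0001.png"), "wb").close()

pairs = list_posture_samples(root)
print(pairs)  # one (path, label) pair per synthetic class
```

From such a list of pairs, images can then be read and batched with any image library or training framework.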
After the videos were selected, we created an algorithm to identify whether a video had at least two different modalities contributing to the emotional categorization: facial expressions, language context, and a reasonably noiseless environment. We selected a total of 420 videos, totaling around 10 hours of data.

Together with the datasets, we provide the source code for the different proposed neural models. These models are based on novel deep and self-organizing neural networks that deploy mechanisms inspired by neuropsychological concepts. All of our models are formally described in high-impact peer-reviewed publications. We also provide a ready-to-run demo for visual emotion recognition based on our proposed models.

These resources are accessible through our GitHub link: https://github.com/knowledgetechnologyuhh/EmotionRecognitionBarros

We hope that with these resources we can contribute to the areas of affective computing and gesture recognition and foster the development of innovative solutions.

--
Dr. rer. nat. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/

From antitza.dantcheva at inria.fr Thu Aug 9 21:42:52 2018
From: antitza.dantcheva at inria.fr (Antitza Dantcheva)
Date: Thu, 09 Aug 2018 20:42:52 -0000
Subject: [Face-research-list] Open Ph.D.
 position at INRIA, Sophia Antipolis in the research area Computer Vision and Deep Learning applied to Facial Analysis in invisible spectra
In-Reply-To: <1951909227.24517685.1501763315241.JavaMail.zimbra@inria.fr>
References: <1951909227.24517685.1501763315241.JavaMail.zimbra@inria.fr>
Message-ID: <1685323419.14384221.1533847368633.JavaMail.zimbra@inria.fr>

Open Ph.D. position at INRIA, Sophia Antipolis in the research area of face analysis in invisible spectra
----------------------------------------------------------------------------------------------------------------------------------

Open position for a Ph.D. student at INRIA Sophia Antipolis, France, in the area of Computer Vision and Deep Learning applied to Facial Analysis in invisible spectra. INRIA Sophia Antipolis is ideally located in the heart of the French Riviera, inside the multi-cultural silicon valley of Europe.

Positions are offered within the framework of the French national project SafeCity, in collaboration with Gemalto, France. Please see the full announcement: http://antitza.com/fdp_en_phd_gemalto.pdf

To apply, please email a full application to Antitza Dantcheva (antitza.dantcheva at inria.fr), indicating "Gemalto - Ph.D." in the e-mail subject line. The application should contain a motivation letter, a CV, and contact information for at least two references who can provide recommendation letters upon request. The submission deadline is the 31st of August 2018; however, the position may be filled before that date if a suitable candidate is found.

Best regards,
Antitza Dantcheva

--------------------------
Researcher
STARS Team
INRIA Sophia Antipolis - Méditerranée
2004, route des Lucioles - BP 93
06902 Sophia Antipolis Cedex
Phone: +33 4 97 15 53 47
Website: antitza.com