Dear colleagues,
We are inviting abstract submissions for a special session on “Human Health
Monitoring Based on Computer Vision”, as part of the 14th IEEE
International Conference on Automatic Face and Gesture Recognition (FG’19,
http://fg2019.org/), Lille, France, May 14-18, 2019. Details on the special
session follow below.
Title, abstract, list of authors, as well as the name of the corresponding
author, should be emailed directly to Abhijit Das (abhijitdas2048@gmail.com).
We hope to receive abstracts before October 8th, 2018. The full paper
submission deadline is December 9th, 2018.
Feel free to contact Abhijit Das if you have any further questions.
Kindly circulate this email to others who might be interested.
We look forward to your contributions!
François Brémond (INRIA, France)
Antitza Dantcheva (INRIA, France)
Abhijit Das (INRIA, France)
Xilin Chen (CAS, China)
Hu Han (CAS, China)
--------------------------------------------------------------------------------------------
*Call for abstracts for the FG 2019 special session*
*on*
*Human Health Monitoring Based on Computer Vision*
---------------------------------------------------------------
Human health monitoring based on computer vision has gained rapid
scientific growth in recent years, with many research articles and
complete systems based on features extracted from face and gesture.
Researchers from computer science, as well as from medical science,
have paid significant attention to the field, with goals ranging from
patient analysis and monitoring to diagnostics (e.g., dementia,
depression, healthcare, and physiological measurement [5, 6]).
Despite this progress, various challenges remain open, unexplored, or
unidentified, such as the robustness of these techniques in real-world
scenarios, the collection of large datasets for research, and the
heterogeneity of acquisition environments and artefacts. Moreover,
healthcare represents an area of broad economic (e.g.,
https://www.prnewswire.com/news-releases/healthcare-automation-market-to-re…),
social, and scientific impact. Therefore, it is imperative to foster
efforts from computer vision, machine learning, and the medical domain,
as well as multidisciplinary collaborations. Towards this, we propose a
special session focused on such multidisciplinary efforts. We aim to
document recent advancements in automated healthcare and to enable and
discuss further progress. The goal of this special session is to bring
together researchers and practitioners working in this area of computer
vision and medical science, and to address a wide range of theoretical and
practical issues related to real-life healthcare systems.
Topics of interest include, but are not limited to:
· Health monitoring based on face analysis,
· Health monitoring based on gesture analysis,
· Health monitoring based on corporeal visual features,
· Depression analysis based on visual features,
· Face analytics for human behaviour understanding,
· Anxiety diagnosis based on face and gesture,
· Physiological measurement employing face analytics,
· Databases on health monitoring, e.g., depression analysis,
· Augmentative and alternative communication,
· Human-robot interaction,
· Home healthcare,
· Technology for cognition,
· Automatic emotional hearing and understanding,
· Visual attention and visual saliency,
· Assistive living,
· Privacy preserving systems,
· Quality of life technologies,
· Mobile and wearable systems,
· Applications for the visually impaired,
· Sign language recognition and applications for the hearing impaired,
· Applications for the ageing society,
· Personalized monitoring,
· Egocentric and first-person vision,
· Applications to improve the health and wellbeing of children and
the elderly, etc.
In addition, we plan to organise a journal special issue with extended
versions of the accepted special session papers.
CALL FOR PARTICIPATION
The One-Minute Gradual-Empathy Prediction (OMG-Empathy) Competition will be
held in partnership with the IEEE International Conference on Automatic
Face and Gesture Recognition 2019 in Lille, France.
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy.html
I. Aim and Scope
The ability to perceive, understand and respond to social interactions
in a human-like manner is one of the most desired capabilities in
artificial agents, particularly social robots. These skills are highly
complex and require a focus on several different aspects of research,
including affective understanding. An agent which is able to recognize,
understand and, most importantly, adapt to different human affective
behaviors can increase its own social capabilities by being able to
interact and communicate in a natural way.
Emotional expression perception and categorization are extremely popular
in the affective computing community. However, the inclusion of emotions
in the decision-making process of an agent is not considered in most of
the research in this field. Treating emotion expressions as the final
goal, although necessary, reduces the usability of such solutions in
more complex scenarios. To create a general affective model to be used
as a modulator for learning different cognitive tasks, such as modeling
intrinsic motivation, creativity, dialog processing, grounded learning,
and human-level communication, emotion perception alone cannot be the
pivotal focus. The integration of perception with intrinsic concepts of
emotional understanding, such as a dynamic and evolving mood and
affective memory, is required to model the necessary complexity of an
interaction and realize adaptability in an agent's social behavior.
Such models are most necessary for the development of real-world social
systems, which would communicate and interact with humans in a natural
way on a day-to-day basis. This could become the next goal for research
on Human-Robot Interaction (HRI) and could be an essential part of the
next generation of social robots.
For this challenge, we designed, collected and annotated a novel corpus
based on human-human interaction. This corpus builds on top of the
experience we gathered while organizing the OMG-Emotion Recognition
Challenge, making use of state-of-the-art frameworks for data collection
and annotation.
The One-Minute Gradual Empathy dataset (OMG-Empathy) contains
multi-modal recordings of different individuals discussing predefined
topics. One of them, the actor, shares a story about themselves, while
the other, the listener, reacts to it emotionally. We annotated each
interaction based on the listener's own assessment of how they felt
while the interaction was taking place.
We encourage the participants to propose state-of-the-art solutions based
not only on deep, recurrent, and self-organizing neural networks but also
on traditional methods for feature representation and data processing.
We also emphasize that the use of contextual information, as well as
personalized solutions for empathy assessment, will be extremely
important for the development of competitive solutions.
II. Competition Tracks
We make available for the challenge a pre-defined set of training,
validation, and testing samples. We separate our samples based on each
story: 4 stories for training, 1 for validation, and 3 for testing. Each
story sample is composed of 10 videos with interactions, one for each
listener. Using the same training, validation, and testing data split,
we propose two tracks which measure different aspects of the
self-assessed empathy:
The Personalized Empathy track, where each team must predict the empathy
of a specific person. We will evaluate the ability of proposed models to
learn the empathic behavior of each of the subjects over a newly
perceived story. We encourage the teams to develop models which take
into consideration the individual behavior of each subject in the
training data.
The Generalized Empathy track, where the teams must predict the general
behavior of all the participants over each story. We will measure the
ability of the proposed models to learn a general empathic measure
for each of the stories individually. We encourage the proposed models
to take into consideration the aggregated behavior of all the
participants for each story and to generalize this behavior in a newly
perceived story.
The training and validation samples will be given to the participants at
the beginning of the challenge together with all the associated labels.
The test set will be given to the participants without the associated
labels. The teams' predictions on the test set will be used to calculate
the final metrics of the challenge.
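For clarity, a minimal Python sketch of this story-based split is given
below; the story identifiers and listener numbering used here are
hypothetical placeholders, not the official naming of the released files.

# Hypothetical sketch of the OMG-Empathy story-based split (story IDs and
# listener numbering are placeholders, not the official release layout).

TRAIN_STORIES = ["story_1", "story_2", "story_3", "story_4"]  # 4 stories for training
VAL_STORIES = ["story_5"]                                     # 1 story for validation
TEST_STORIES = ["story_6", "story_7", "story_8"]              # 3 stories for testing

NUM_LISTENERS = 10  # each story sample has 10 interaction videos, one per listener

def videos_for(stories):
    """Enumerate the (story, listener) interaction videos for one subset."""
    return [(story, listener)
            for story in stories
            for listener in range(1, NUM_LISTENERS + 1)]

train_videos = videos_for(TRAIN_STORIES)  # 4 stories x 10 listeners = 40 interactions
val_videos = videos_for(VAL_STORIES)      # 10 interactions
test_videos = videos_for(TEST_STORIES)    # 30 interactions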
III. How to Participate
To participate in the challenge, please send an email to
barros@informatik.uni-hamburg.de with the title "OMG-Empathy Team
Registration". This e-mail must contain the following information:
Team Name
Team Members
Affiliation
Participating tracks
We split the corpus into three subsets: training, validation and
testing. The participants will receive the training and validation sets,
together with the associated annotations once they subscribe to the
challenge. The subscription will be done via e-mail. Each participant
team must consist of 1 to 5 participants and must agree to use the data
only for scientific purposes. Each team can choose to take part in one
or both tracks.
After the training period is over, the testing set will be released
without the associated annotations.
Each team must submit, via e-mail, their final predictions as a .csv
file for each video on the test set. Together with the final submission,
each team must send a short 2-4 page paper describing their solution,
published on arXiv, and a link to a GitHub page with their solution. If
a team fails to submit any of these items, their submission will be
invalidated. Each team can make up to 3 complete submissions for each track.
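As a rough illustration of the per-video .csv submission, here is a minimal
sketch; the column name, prediction rate, and file naming below are
assumptions made for illustration only, and the organizers' submission
guidelines remain authoritative.

# Minimal sketch of writing one prediction .csv per test video. The "valence"
# column name, the per-step prediction rate, and the file naming are assumptions
# for illustration only; follow the official submission guidelines.
import csv

def write_predictions(video_id, predictions, out_dir="."):
    """Write the continuous empathy predictions for one test video to a .csv file."""
    path = f"{out_dir}/{video_id}.csv"  # hypothetical naming: one file per video
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["valence"])    # hypothetical header
        for value in predictions:
            writer.writerow([f"{value:.6f}"])

# Example: a constant baseline for a hypothetical one-minute video annotated at 25 Hz.
write_predictions("story_6_listener_01", [0.0] * (25 * 60))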
IV. Important Dates
25th of September 2018 - Opening of the Challenge - Team registrations begin
1st of October 2018 - Training/validation data and annotation available
1st of December 2018 - Test data release
3rd of December 2018 - Final submission (Results and code)
5th of December 2018 - Final submission (Paper)
7th of December 2018 - Announcement of the winners
V. Organization
Pablo Barros, University of Hamburg, Germany
Nikhil Churamani, University of Cambridge, United Kingdom
Angelica Lim, Simon Fraser University, Canada
Stefan Wermter, Hamburg University, Germany
--
Dr.rer.nat. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Dear all,
We are very happy to announce the release of resource material related
to our research on affective computing and gesture recognition. This
resource covers hand gesture recognition and emotion processing
(auditory, visual, and crossmodal), organized as three datasets (NCD,
GRIT, and OMG-Emotion), source code for the proposed neural network
solutions, pre-trained models, and ready-to-run demos.
The NAO Camera hand posture Database (NCD) was designed and recorded
using the camera of a NAO robot and contains four different hand
postures. A total of 2000 images were recorded. In each image, the hand
is present in a different position, not always centralized, and
sometimes with some fingers occluded.
The Gesture Commands for Robot InTeraction (GRIT) dataset contains
recordings of six different subjects performing eight command gestures
for Human-Robot Interaction (HRI): Abort, Circle, Hello, No, Stop, Turn
Right, Turn Left, and Warn. We recorded a total of 543 sequences with a
varying number of frames in each one.
The One-Minute Gradual Emotion Corpus (OMG-Emotion) is composed of
YouTube videos which are about a minute in length and are annotated
taking into consideration continuous emotional behavior. The videos
were selected using a crawler technique that uses specific keywords
based on long-term emotional behaviors such as "monologues",
"auditions", "dialogues" and "emotional scenes".
After the videos were selected, we created an algorithm to identify
whether the video had at least two different modalities which contribute
to the emotional categorization: facial expressions, language context,
and a reasonably noiseless environment. We selected a total of 420
videos, totaling around 10 hours of data.
Together with the datasets, we provide the source code for different
proposed neural models. These models are based on novel deep and
self-organizing neural networks which deploy different mechanisms
inspired by neuropsychological concepts. All of our models are formally
described in different high-impact peer-reviewed publications. We also
provide a ready-to-run demo for visual emotion recognition based on our
proposed models.
These resources are accessible through our GitHub link:
https://github.com/knowledgetechnologyuhh/EmotionRecognitionBarros .
We hope that with these resources we can contribute to the areas of
affective computing and gesture recognition and foster the development
of innovative solutions.
--
Dr.rer.nat. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Open Positions: 1 Ph.D. student and 2 Postdocs in the area of Computer Vision and Deep Learning at INRIA Sophia Antipolis, France
--------------------------------------------------------------------------------------------
Positions are offered within the frameworks of the prestigious grants
- ANR JCJC Grant *ENVISION*: "Computer Vision for Automated Holistic Analysis for Humans" and the
- INRIA - CAS grant *FER4HM* "Facial expression recognition with application in health monitoring"
and are ideally located in the heart of the French Riviera, inside the multi-cultural silicon valley of Europe.
Full announcements:
- Open Ph.D.-Position in Computer Vision / Deep Learning (M/F) *ENVISION*: http://antitza.com/ANR_phd.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F) *FER4HM*: http://antitza.com/INRIA_CAS_postdoc.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F) (advanced level) *ENVISION*: http://antitza.com/ANR_postdoc.pdf
To apply, please email a full application to Antitza Dantcheva (antitza.dantcheva@inria.fr), indicating the position in the e-mail subject line.
Dear all,
we just published a database of 500 Mooney face stimuli for visual
perception research. Please find the paper here:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0200106
Each face was tested in a cohort of healthy adults for face detection
difficulty and inversion effects. We also provide a comparison with Craig
Mooney's original stimulus set. The stimuli and accompanying data are
available from Figshare (https://figshare.com/account/articles/5783037)
under a CC BY license. Please feel free to use them for your research.
Caspar
RESEARCH SPECIALIST POSITION AT COGNITIVE NEUROSCIENCE LAB, University of Richmond
The Cognitive Neuroscience laboratory of Dr. Cindy Bukach is seeking a highly organized and energetic person to serve as a full-time research specialist. The lab conducts research on object and face recognition in cognitively intact and impaired individuals, using electrophysiology (EEG and ERP) and behavioral methods. The duties of the research specialist include: conducting cognitive neuroscience research on human subjects using both behavioral and ERP methods, including programming, recruiting, testing, and statistical analysis; coordinating and supervising student research activities under limited supervision, ensuring adherence to safety and ethical regulations related to research with human subjects; and performing administrative duties such as database management, scheduling, hardware/software maintenance, website maintenance, equipment maintenance, and general faculty support.
EDUCATION & EXPERIENCE: Bachelor’s degree or equivalent in psychology, neuroscience, cognitive science, computer science or related field
2 years' experience in a research lab (preferably in a cognitive or cognitive neuroscience laboratory using the event-related potential method)
PREFERRED QUALIFICATIONS (any of the following highly desired):
Prior research experience with the event-related potential method and data analysis
Advanced computer skills (Matlab, Python, Java, etc.)
For more information, please contact Cindy Bukach at cbukach@richmond.edu
Cindy M. Bukach, PhD
Chair, Department of Psychology
Associate Professor of Cognitive Neuroscience
MacEldin Trawick Endowed Professor of Psychology
209 Richmond Hall
28 Westhampton Way
University of Richmond, Virginia
23173
Phone: (804) 287-6830
Fax: (804) 287-1905
Call for Expressions of Interest: UNSW Scientia PhD Scholarship
"Understanding super-recognition to improve face identification systems"
The UNSW Forensic Psychology Group invites Expressions of Interest for a unique PhD scholarship opportunity.
UNSW Scientia PhD scholars are awarded $50k per year, comprising a tax-free living allowance of $40k per year for 4 years, and a support package of up to $10k per year to provide financial support for career development activities.
The project is targeted at PhD candidates who are qualified to honours and/or masters level in psychology, computer science or cognitive science. We are particularly interested to hear from applicants with work experience in research, government or industry.
The topic of this thesis is open to negotiation, but we hope that the work can contribute to our broader goal of understanding how the best available human and machine solutions to face identification can be combined to produce optimal systems.
For more details and to apply see the following links:
https://www.2025.unsw.edu.au/apply/scientia-phd-scholarships/understanding-…
http://forensic.psy.unsw.edu.au/joinus.html
https://www.2025.unsw.edu.au/apply/
Please direct informal inquiries to david.white@unsw.edu.au
Dear colleagues – please circulate widely!
We are delighted to announce the following positions at the Institute of Neuroscience & Psychology, University of Glasgow, Scotland. All positions are funded by the ERC project Computing the Face Syntax of Social Communication led by Dr. Rachael Jack.
2 X Postdoctoral Researcher (5 years) Ref: 021275
The post-holders will contribute to a large 5-year research project to model culture-specific dynamic social face signals with transference to social robotics. The post requires expert knowledge in all or a substantial subset of the following: vision science, psychophysics, social perception, cultural psychology, dynamic social face signalling, human-robot interaction, reverse correlation techniques, MATLAB programming, computational methods, and designing behavioral experiments.
Informal enquiries may be made to Dr Rachael Jack
Email: Rachael.Jack@glasgow.ac.uk
Apply online at: www.glasgow.ac.uk/jobs
Closing date: 12 June 2018
See also http://www.jobs.ac.uk/job/BJU668/research-assistant-associate/
The University has recently been awarded the Athena SWAN Institutional Bronze Award.
The University is committed to equality of opportunity in employment.
The University of Glasgow, charity number SC004401.
Dr. Rachael E. Jack, Ph.D.
Lecturer
Institute of Neuroscience & Psychology
School of Psychology
University of Glasgow
+44 (0) 141 5087
www.psy.gla.ac.uk/schools/psychology/staff/rachaeljack/
Dear all,
I would like to share with you all the results of our first OMG-Emotion
Recognition Challenge.
Our challenge is based on the One-Minute-Gradual Emotion Dataset
(OMG-Emotion Dataset), which is composed of 567 emotion videos with an
average length of 1 minute, collected from a variety of Youtube
channels. Each team had the task of describing each video in a
continuous arousal/valence space.
The challenge had a total of 34 teams registered, from which we got 11
final submissions. Each final submission was composed of a short paper
describing the solution and the link to the code repository.
The solutions used different modalities (ranging from unimodal audio and
vision to multimodal audio, vision, and text), and thus provided us with
a very complex evaluation scenario. All the submissions were based on
neural network models.
We split the results into arousal and valence. For arousal, the best
results came from the GammaLab team. Their three submissions achieved
our top 3 concordance correlation coefficient (CCC) scores for arousal,
followed by the three submissions from the audEERING team and the two
submissions from the HKUST-NISL2018 team.
For valence, the GammaLab team remains in first place (with their three
submissions), followed by the two submissions of the ADSC team and the
three submissions from the iBug team.
Congratulations to you all!
We provide a leaderboard on our website (
https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/ ),
which will be permanently stored. This way, everyone can see the final
results of the challenge and have quick access to a formal description
of the solutions and to the code. This will help to disseminate knowledge
even further and will improve the reproducibility of your solutions.
We also provide a general leaderboard which will be updated constantly
with new submissions. If you are interested in having your score in our
general leaderboard, just send us an e-mail following the instructions
on our website.
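For teams who would like to check their own numbers before sending them in,
a standard formulation of the concordance correlation coefficient (CCC) used
for the arousal and valence rankings is sketched below; the evaluation
scripts linked from our website remain the reference implementation.

# Sketch of the concordance correlation coefficient (CCC) in its standard form;
# the official evaluation scripts on the challenge website are the reference.
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient between two 1-D sequences."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    return 2.0 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

# Perfect agreement gives 1.0; a constant prediction gives 0.0.
print(ccc([0.1, 0.4, 0.3], [0.1, 0.4, 0.3]))  # 1.0
print(ccc([0.1, 0.4, 0.3], [0.2, 0.2, 0.2]))  # 0.0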
I would also like to invite you all to the presentation of the challenge
summary during WCCI/IJCNN 2018 in Rio de Janeiro, Brazil.
Best Regards,
Pablo
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/