Dear colleagues,
We are inviting abstract submissions for a special session on “Human Health
Monitoring Based on Computer Vision”, as part of the 14th IEEE
International Conference on Automatic Face and Gesture Recognition (FG’19,
http://fg2019.org/), Lille, France, May 14-18, 2019. Details on the special
session follow below.
Title, abstract, list of authors, as well as the name of the corresponding
author, should be emailed directly to Abhijit Das (abhijitdas2048(a)gmail.com).
We hope to receive abstracts before October 8th, 2018. The full paper
submission deadline is December 9th, 2018.
Feel free to contact Abhijit Das if you have any further questions.
Kindly circulate this email to others who might be interested.
We look forward to your contributions!
François Brémond (INRIA, France)
Antitza Dantcheva (INRIA, France)
Abhijit Das (INRIA, France)
Xilin Chen (CAS, China)
Hu Han (CAS, China)
--------------------------------------------------------------------------------------------
*Call for abstracts for the FG 2019 special session*
*on*
*Human Health Monitoring Based on Computer Vision*
---------------------------------------------------------------
Human Health Monitoring Based on Computer Vision has seen rapid
scientific growth in recent years, with many research articles and
complete systems building on features extracted from face and gesture.
Researchers from both computer science and the medical sciences have
devoted significant attention to this area, with goals ranging from patient
analysis and monitoring to diagnostics (e.g., for dementia, depression,
healthcare, physiological measurement [5, 6]).
Despite this progress, various challenges remain open and underexplored,
such as the robustness of these techniques in real-world scenarios, the
collection of large datasets for research, and the heterogeneity of
acquisition environments and their artefacts. Moreover, healthcare
represents an area of broad economic (e.g.,
https://www.prnewswire.com/news-releases/healthcare-automation-market-to-re…),
social, and scientific impact. It is therefore imperative to foster
efforts from computer vision, machine learning, and the medical domain,
as well as multidisciplinary collaborations. Towards this, we propose a
special session that aims to document recent advancements in automated
healthcare and to enable and discuss progress. The goal of this special
session is to bring together researchers and practitioners working at the
intersection of computer vision and medical science, and to address a wide
range of theoretical and practical issues related to real-life healthcare
systems.
Topics of interest include, but are not limited to:
· Health monitoring based on face analysis,
· Health monitoring based on gesture analysis,
· Health monitoring based on corporeal visual features,
· Depression analysis based on visual features,
· Face analytics for human behaviour understanding,
· Anxiety diagnosis based on face and gesture,
· Physiological measurement employing face analytics,
· Databases on health monitoring, e.g., depression analysis,
· Augmentative and alternative communication,
· Human-robot interaction,
· Home healthcare,
· Technology for cognition,
· Automatic emotional hearing and understanding,
· Visual attention and visual saliency,
· Assistive living,
· Privacy preserving systems,
· Quality of life technologies,
· Mobile and wearable systems,
· Applications for the visually impaired,
· Sign language recognition and applications for hearing impaired,
· Applications for the ageing society,
· Personalized monitoring,
· Egocentric and first-person vision,
· Applications to improve the health and wellbeing of children and
the elderly, etc.
In addition, we plan to organise a journal special issue featuring
extended versions of the accepted special session papers.
CALL FOR PARTICIPATION
The One-Minute Gradual-Empathy Prediction (OMG-Empathy) Competition
will be held in partnership with the IEEE International Conference on
Automatic Face and Gesture Recognition 2019 in Lille, France.
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy.html
I. Aim and Scope
The ability to perceive, understand and respond to social interactions
in a human-like manner is one of the most desired capabilities in
artificial agents, particularly social robots. These skills are highly
complex and require a focus on several different aspects of research,
including affective understanding. An agent which is able to recognize,
understand and, most importantly, adapt to different human affective
behaviors can increase its own social capabilities by being able to
interact and communicate in a natural way.
Emotional expression perception and categorization are extremely popular
in the affective computing community. However, most research in this
field does not consider the inclusion of emotions in an agent's
decision-making process. Treating emotion expression as the final goal,
although necessary, reduces the usability of such solutions in more
complex scenarios. To create a general affective model that can serve
as a modulator for learning different cognitive tasks, such as modeling
intrinsic motivation, creativity, dialog processing, grounded learning,
and human-level communication, emotion perception alone cannot be the
pivotal focus. The integration of perception with intrinsic concepts of
emotional understanding, such as a dynamic and evolving mood and
affective memory, is required to model the necessary complexity of an
interaction and to realize adaptability in an agent's social behavior.
Such models are especially necessary for the development of real-world
social systems that communicate and interact with humans in a natural
way on a day-to-day basis. This could become the next goal for research
on Human-Robot Interaction (HRI) and an essential part of the next
generation of social robots.
For this challenge, we designed, collected, and annotated a novel corpus
based on human-human interaction. The corpus builds on the experience we
gathered while organizing the OMG-Emotion Recognition Challenge and makes
use of state-of-the-art frameworks for data collection and annotation.
The One-Minute Gradual Empathy datasets (OMG-Empathy) contain
multi-modal recordings of different individuals discussing predefined
topics. One of them, the actor, shares a story about themselves while
the other, the listener, reacts to it emotionally. We annotated each
interaction based on the listener's own assessment of how they felt
while the interaction was taking place.
We encourage participants to propose solutions based not only on deep,
recurrent, and self-organizing neural networks but also on traditional
methods for feature representation and data processing. We also
emphasize that the use of contextual information, as well as
personalized solutions for empathy assessment, will be extremely
important for developing competitive solutions.
II. Competition Tracks
We make available a pre-defined set of training, validation, and testing
samples for the challenge. The samples are separated by story: 4 stories
for training, 1 for validation, and 3 for testing. Each story sample is
composed of 10 interaction videos, one per listener. Although both use
the same training, validation, and testing data split, we propose two
tracks that measure different aspects of self-assessed empathy (a
schematic sketch of the split follows the track descriptions):
In the Personalized Empathy track, each team must predict the empathy
of a specific person. We will evaluate the ability of the proposed
models to learn the empathic behavior of each subject over a newly
perceived story. We encourage the teams to develop models that take
into consideration the individual behavior of each subject in the
training data.
In the Generalized Empathy track, the teams must predict the general
behavior of all participants over each story. We will measure how well
the proposed models learn a general empathic measure for each story
individually. We encourage the proposed models to take into
consideration the aggregated behavior of all participants for each
story and to generalize this behavior to a newly perceived story.
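As a schematic illustration only, the story-based split above could be
represented as follows in Python; the story and subject identifiers are
hypothetical placeholders, not the official file naming:

# Schematic sketch of the story-based split (identifiers are hypothetical
# placeholders, not the official file naming).
SPLIT = {
    "train":      ["story_1", "story_2", "story_3", "story_4"],
    "validation": ["story_5"],
    "test":       ["story_6", "story_7", "story_8"],
}
LISTENERS = [f"subject_{i}" for i in range(1, 11)]  # 10 listeners

def videos(split_name):
    # One interaction video per (story, listener) pair.
    return [(story, listener)
            for story in SPLIT[split_name]
            for listener in LISTENERS]

print(len(videos("train")))  # 40 videos: 4 training stories x 10 listeners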
The training and validation samples will be given to the participants at
the beginning of the challenge together with all the associated labels.
The test set will be given to the participants without the associated
labels. The teams' predictions on the test set will be used to calculate
the final metrics of the challenge.
III. How to Participate
To participate in the challenge, please send an email to barros @
informatik.uni-hamburg.de with the title "OMG-Empathy Team
Registration". The e-mail must contain the following information:
Team Name
Team Members
Affiliation
Participating tracks
We split the corpus into three subsets: training, validation, and
testing. Participants will receive the training and validation sets,
together with the associated annotations, once they register for the
challenge. Registration is done via e-mail. Each participating team
must consist of 1 to 5 members and must agree to use the data only for
scientific purposes. Each team can choose to take part in one or both
tracks.
After the training period is over, the testing set will be released
without the associated annotations.
Each team must submit, via e-mail, their final predictions as a .csv
file for each video in the test set. Together with the final submission,
each team must send a short 2-4 page paper describing their solution,
published on arXiv, and a link to a GitHub page hosting their solution.
If a team fails to submit any of these items, their submission will be
invalidated. Each team can make up to 3 complete submissions per track.
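For illustration only, a minimal sketch of writing such a per-video
prediction file; the single "valence" column, the per-frame granularity,
and the file name are assumptions, as the official submission format will
be specified by the organizers:

import csv

def write_predictions(path, values):
    # Write one predicted empathy (valence) value per row for a single
    # test video; the column name is an assumption, not the official format.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["valence"])
        for v in values:
            writer.writerow([f"{v:.6f}"])

# Example: a constant neutral prediction for a hypothetical test video.
write_predictions("Subject_1_Story_6.csv", [0.0] * 1500)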
IV. Important Dates
25th of September 2018 - Opening of the Challenge - Team registrations begin
1st of October 2018 - Training/validation data and annotation available
1st of December 2018 - Test data release
3rd of December 2018 - Final submission (Results and code)
5th of December 2018 - Final submission (Paper)
7th of December 2018 - Announcement of the winners
V. Organization
Pablo Barros, University of Hamburg, Germany
Nikhil Churamani, University of Cambridge, United Kingdom
Angelica Lim, Simon Fraser University, Canada
Stefan Wermter, University of Hamburg, Germany
--
Dr.rer.nat. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/