Dear colleagues,
We are inviting abstract submissions for a special session on “Human Health
Monitoring Based on Computer Vision”, as part of the 14th IEEE
International Conference on Automatic Face and Gesture Recognition (FG’19,
http://fg2019.org/), Lille, France, May 14-18, 2019. Details on the special
session follow below.
Title, abstract, list of authors, as well as the name of the corresponding
author, should be emailed directly to Abhijit Das (abhijitdas2048@gmail.com).
We hope to receive abstracts before Thursday, September 27th.
Feel free to contact Abhijit Das if you have any further questions.
Kindly circulate this email to others who might be interested.
We look forward to your contributions!
François Brémond (INRIA, France)
Antitza Dantcheva (INRIA, France)
Abhijit Das (INRIA, France)
Xilin Chen (CAS, China)
Hu Han (CAS, China)
--------------------------------------------------------------------------------------------
*Call for abstracts for the FG 2019 special session*
*on*
*Human Health Monitoring Based on Computer Vision*
---------------------------------------------------------------
Human Health Monitoring Based on Computer Vision has seen rapid
scientific growth in recent years, with many research articles and
complete systems built on features extracted from face and gesture.
Researchers from computer science, as well as from medical science, have
devoted significant attention to the area, with goals ranging from
patient analysis and monitoring to diagnostics (e.g., for dementia,
depression, healthcare, physiological measurement [5, 6]).
Despite this progress, various challenges remain open, unexplored, or
unidentified, such as the robustness of these techniques in real-world
scenarios, the collection of large datasets for research, and the
heterogeneity of acquisition environments and their artefacts. Moreover,
healthcare represents an area of broad economic (e.g.,
https://www.prnewswire.com/news-releases/healthcare-automation-market-to-re…),
social, and scientific impact. It is therefore imperative to foster
efforts coming from computer vision, machine learning, and the medical
domain, as well as multidisciplinary collaborations. Towards this, we
propose a special session with a focus on such multidisciplinary
efforts. We aim to document recent advancements in automated healthcare,
as well as to enable and discuss progress. The goal of this special
session is therefore to bring together researchers and practitioners
working in this area of computer vision and medical science, and to
address a wide range of theoretical and practical issues related to
real-life healthcare systems.
Topics of interest include, but are not limited to:
· Health monitoring based on face analysis,
· Health monitoring based on gesture analysis,
· Health monitoring based on corporeal visual features,
· Depression analysis based on visual features,
· Face analytics for human behaviour understanding,
· Anxiety diagnosis based on face and gesture,
· Physiological measurement employing face analytics,
· Databases on health monitoring, e.g., depression analysis,
· Augmentative and alternative communication,
· Human-robot interaction,
· Home healthcare,
· Technology for cognition,
· Automatic emotional hearing and understanding,
· Visual attention and visual saliency,
· Assistive living,
· Privacy preserving systems,
· Quality of life technologies,
· Mobile and wearable systems,
· Applications for the visually impaired,
· Sign language recognition and applications for the hearing impaired,
· Applications for the ageing society,
· Personalized monitoring,
· Egocentric and first-person vision,
· Applications to improve the health and wellbeing of children and
the elderly, etc.
In addition, we plan to organise a special issue of a journal with
extended versions of accepted special session papers.
Dear all,
We are very happy to announce the release of resource material related
to our research on affective computing and gesture recognition. This
resource covers hand gesture recognition and emotion processing
(auditory, visual, and crossmodal), organized as three datasets (NCD,
GRIT, and OMG-Emotion), source code for the proposed neural network
solutions, pre-trained models, and ready-to-run demos.
The NAO Camera hand posture Database (NCD) was designed and recorded
using the camera of a NAO robot and contains four different hand
postures. A total of 2000 images were recorded. In each image, the hand
appears in a different position, not always centered, and sometimes with
some fingers occluded.
The Gesture Commands for Robot InTeraction (GRIT) dataset contains
recordings of six different subjects performing eight command gestures
for Human-Robot Interaction (HRI): Abort, Circle, Hello, No, Stop, Turn
Right, Turn Left, and Warn. We recorded a total of 543 sequences, each
with a varying number of frames.
The One-Minute Gradual Emotion Corpus (OMG-Emotion) is composed of
YouTube videos which are about a minute in length and are annotated for
continuous emotional behavior. The videos were selected using a crawler
that uses specific keywords based on long-term emotional behaviors, such
as "monologues", "auditions", "dialogues", and "emotional scenes".
After the videos were selected, we created an algorithm to identify
whether each video had at least two different modalities contributing
to the emotional categorization: facial expressions, language context,
and a reasonably noiseless environment. We selected a total of 420
videos, totaling around 10 hours of data.
Together with the datasets, we provide the source code for the
different proposed neural models. These models are based on novel deep
and self-organizing neural networks which employ different mechanisms
inspired by neuropsychological concepts. All of our models are formally
described in high-impact peer-reviewed publications. We also
provide a ready-to-run demo for visual emotion recognition based on our
proposed models.
These resources are accessible through our GitHub link:
https://github.com/knowledgetechnologyuhh/EmotionRecognitionBarros .
We hope that with these resources we can contribute to the areas of
affective computing and gesture recognition and foster the development
of innovative solutions.
--
Dr.rer.nat. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros@informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Open Positions: 1 Ph.D. student and 2 Postdocs in the area of Computer Vision and Deep Learning at INRIA Sophia Antipolis, France
--------------------------------------------------------------------------------------------
Positions are offered within the framework of the prestigious grants
- ANR JCJC Grant *ENVISION*: "Computer Vision for Automated Holistic Analysis for Humans", and the
- INRIA-CAS grant *FER4HM*: "Facial expression recognition with application in health monitoring",
and are ideally located in the heart of the French Riviera, in the multi-cultural Silicon Valley of Europe.
Full announcements:
- Open Ph.D. Position in Computer Vision / Deep Learning (M/F) *ENVISION*: http://antitza.com/ANR_phd.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F) *FER4HM*: http://antitza.com/INRIA_CAS_postdoc.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F) (advanced level) *ENVISION*: http://antitza.com/ANR_postdoc.pdf
To apply, please email a full application to Antitza Dantcheva (antitza.dantcheva@inria.fr), indicating the position in the e-mail subject line.
Dear all,
We just published a database of 500 Mooney face stimuli for visual
perception research. Please find the paper here:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0200106
Each face was tested in a cohort of healthy adults for face detection
difficulty and inversion effects. We also provide a comparison with Craig
Mooney's original stimulus set. The stimuli and accompanying data are
available from Figshare (https://figshare.com/account/articles/5783037)
under a CC BY license. Please feel free to use them for your research.
Caspar
RESEARCH SPECIALIST POSITION AT COGNITIVE NEUROSCIENCE LAB, University of Richmond
The Cognitive Neuroscience laboratory of Dr. Cindy Bukach is seeking a highly organized and energetic person to serve as a full-time research specialist. The lab conducts research on object and face recognition in cognitively intact and impaired individuals, using electrophysiology (EEG and ERP) and behavioral methods. The duties of the research specialist include:
· Conducting cognitive neuroscience research on human subjects using both behavioral and ERP methods, including programming, recruiting, testing, and statistical analysis,
· Coordinating and supervising student research activities under limited supervision, ensuring adherence to safety and ethical regulations related to research with human subjects,
· Performing administrative duties such as database management, scheduling, hardware/software maintenance, website maintenance, equipment maintenance, and general faculty support.
EDUCATION & EXPERIENCE: Bachelor’s degree or equivalent in psychology, neuroscience, cognitive science, computer science or related field
2 years' experience in a research lab (preferably in a cognitive or cognitive neuroscience laboratory using the event-related potential method)
PREFERRED QUALIFICATIONS (any of the following highly desired):
Prior research experience with the event-related potential method and data analysis
Advanced computer skills (Matlab, Python, Java, etc.)
For more information, please contact Cindy Bukach at cbukach@richmond.edu
Cindy M. Bukach, PhD
Chair, Department of Psychology
Associate Professor of Cognitive Neuroscience
MacEldin Trawick Endowed Professor of Psychology
209 Richmond Hall
28 Westhampton Way
University of Richmond, Virginia
23173
Phone: (804) 287-6830
Fax: (804) 287-1905
Call for Expressions of Interest: UNSW Scientia PhD Scholarship
"Understanding super-recognition to improve face identification systems"
The UNSW Forensic Psychology Group invites Expressions of Interest for a unique PhD scholarship opportunity.
UNSW Scientia PhD scholars are awarded $50k per year, comprising a tax-free living allowance of $40k per year for 4 years, and a support package of up to $10k per year to provide financial support for career development activities.
The project is targeted at PhD candidates who are qualified to honours and/or masters level in psychology, computer science, or cognitive science. We are particularly interested to hear from applicants with work experience in research, government, or industry.
The topic of this thesis is open to negotiation, but we hope that the work can contribute to our broader goal of understanding how the best available human and machine solutions to face identification can be combined to produce optimal systems.
For more details and to apply, see the following links:
https://www.2025.unsw.edu.au/apply/scientia-phd-scholarships/understanding-…
http://forensic.psy.unsw.edu.au/joinus.html
https://www.2025.unsw.edu.au/apply/
Please direct informal inquiries to david.white@unsw.edu.au
Dear colleagues – please circulate widely!
We are delighted to announce the following positions at the Institute of Neuroscience & Psychology, University of Glasgow, Scotland. All positions are funded by the ERC project Computing the Face Syntax of Social Communication led by Dr. Rachael Jack.
2 X Postdoctoral Researcher (5 years) Ref: 021275
The post-holders will contribute to a large 5-year research project to model culture-specific dynamic social face signals with transference to social robotics. The post requires expert knowledge in all or a substantial subset of the following: vision science, psychophysics, social perception, cultural psychology, dynamic social face signalling, human-robot interaction, reverse correlation techniques, MATLAB programming, computational methods, and designing behavioral experiments.
Informal enquiries may be made to Dr Rachael Jack
Email: Rachael.Jack@glasgow.ac.uk
Apply online at: www.glasgow.ac.uk/jobs
Closing date: 12 June 2018
See also http://www.jobs.ac.uk/job/BJU668/research-assistant-associate/
The University has recently been awarded the Athena SWAN Institutional Bronze Award.
The University is committed to equality of opportunity in employment.
The University of Glasgow, charity number SC004401.
Dr. Rachael E. Jack, Ph.D.
Lecturer
Institute of Neuroscience & Psychology
School of Psychology
University of Glasgow
+44 (0) 141 5087
www.psy.gla.ac.uk/schools/psychology/staff/rachaeljack/
Dear all,
I would like to share with you all the results of our first OMG-Emotion
Recognition Challenge.
Our challenge is based on the One-Minute-Gradual Emotion Dataset
(OMG-Emotion Dataset), which is composed of 567 emotion videos with an
average length of 1 minute, collected from a variety of YouTube
channels. Each team's task was to describe each video in a continuous
arousal/valence space.
The challenge had a total of 34 registered teams, from which we
received 11 final submissions. Each final submission consisted of a
short paper describing the solution and a link to the code repository.
The solutions used different modalities (ranging from unimodal audio or
vision to multimodal audio, vision, and text), and thus provided us with
a very complex evaluation scenario. All the submissions were based on
neural network models.
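For readers unfamiliar with the ranking metric: submissions were ranked
by the Concordance Correlation Coefficient (CCC) between predicted and
annotated arousal/valence values. Below is a minimal sketch of how a CCC
score is typically computed, in Python with numpy; this is an
illustration only, not the official evaluation script.

import numpy as np

def ccc(y_true, y_pred):
    # Concordance Correlation Coefficient between two 1-D traces.
    # Illustrative sketch only -- not the official OMG-Emotion code.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    # CCC = 2*cov / (var_t + var_p + (mean_t - mean_p)^2)
    return 2.0 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

# Example: annotated vs. predicted arousal values for a few videos
print(ccc([0.1, 0.4, 0.3, 0.8], [0.2, 0.5, 0.2, 0.9]))

Unlike Pearson correlation, CCC also penalizes shifts in mean and scale
between predictions and annotations, reaching 1 only for perfect
agreement.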
We split the results into arousal and valence. For arousal, the best
results came from the GammaLab team. Their three submissions achieved
our top three CCC scores for arousal, followed by the three submissions
from the audEERING team and the two submissions from the HKUST-NISL2018
team.
For valence, the GammaLab team again placed first (with their three
submissions), followed by the two submissions from the ADSC team and the
three submissions from the iBug team.
Congratulations to you all!
We provide a leaderboard on our website
(https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/),
which will be permanently stored. This way, everyone can see the final
results of the challenge and have quick access to a formal description
of each solution and to its code. This will help to disseminate
knowledge even further and will improve the reproducibility of the
solutions.
We also provide a general leaderboard which will be updated constantly
with new submissions. If you are interested in having your score in our
general leaderboard, just send us an e-mail following the instructions
on our website.
I would also like to invite you all to the presentation of the
challenge summary during WCCI/IJCNN 2018 in Rio de Janeiro, Brazil.
Best Regards,
Pablo
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros@informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Please see the attached advert for a Research Associate position working with Dr Rachael Jack within the School of Psychology / Institute of Neuroscience & Psychology.
Informal enquiries can be directed to Rachael
Email: Rachael.Jack@glasgow.ac.uk
This year's Automatic Face and Gesture Recognition conference has not yet happened, but we already have the call for next year's, which will be in France. There is a desire to get more cross-fertilisation between psychology and computer science in this field; I think it would be great to have a session on the latest advances in understanding the psychology of face perception. Please consider whether you might have something useful to say.
Thanks, Peter
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…