CALL FOR PAPERS
IEEE Transactions on Affective Computing
Special Issue on Automated Perception of Human Affect from Longitudinal
Behavioral Data
Website:
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/tacSpecialIssue201…
I. Aim and Scope
Research trends within artificial intelligence and cognitive sciences are
still heavily based on computational models that attempt to imitate human
perception in various behavior categorization tasks. However, most of the
research in the field focuses on instantaneous categorization and
interpretation of human affect, such as the inference of six basic emotions
from face images, and/or affective dimensions (valence-arousal), stress and
engagement from multi-modal (e.g., video, audio, and autonomic physiology)
data. This diverges from the developmental aspect of emotional behavior
perception and learning, where human behavior and expressions of affect
evolve and change over time. Moreover, these changes are present not only
in the temporal domain but also within different populations and more
importantly, within each individual. This calls for a new perspective when
designing computational models for analysis and interpretation of human
affective behaviors: the computational models that can timely and
efficiently adapt to different contexts and individuals over time, and also
incorporate existing neurophysiological and psychological findings (prior
knowledge). Thus, the long-term goal is to create life-long personalized
learning and inference systems for analysis and perception of human
affective behaviors. Such systems would benefit from long-term contextual
information (including demographic and social aspects) as well as
individual characteristics. This, in turn, would allow building intelligent
agents (such as mobile and robot technologies) capable of adapting their
behavior in a continuous and on-line manner to the target contexts and
individuals.
This special issue solicits contributions from computational neuroscience
and psychology, artificial intelligence, machine learning, and affective
computing that challenge and expand current research on the interpretation
and estimation of human affective behavior from longitudinal behavioral
data, i.e., single or multiple modalities captured over extended periods of
time, allowing efficient profiling of target behaviors and their inference
in terms of affect and other socio-cognitive dimensions. We invite
contributions focusing on both theoretical and modeling perspectives, as
well as applications spanning human-human, human-computer, and human-robot
interaction.
II. Potential Topics
The capability of computational models to perceive and understand
emotional behavior is an important and popular research topic. This is why
recent special issues of the IEEE Transactions on Affective Computing have
covered topics ranging from emotion behavior analysis “in-the-wild” to
personality analysis. However, most of the research published through these
calls treats emotion behavior as an instantaneous event, relating mostly to
emotion recognition, and thus neglects the development of complex emotion
behavior models. Our special issue will foster the development of the field
by focusing on excellent research into emotion models for long-term
behavior analysis.
The topics of interest for this special issue include, but are not limited
to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Lifelong affect analysis, perception, and interpretation
- Novel neural network models for affective processing
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
III. Submission
Prospective authors are invited to submit their manuscripts electronically,
adhering to the IEEE Transactions on Affective Computing guidelines (
https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5165369). Please
submit your papers through the online system (
https://mc.manuscriptcentral.com/taffc-cs) and be sure to select the
special issue: Special Issue/Section on Automated Perception of Human
Affect from Longitudinal Behavioral Data.
IV. IMPORTANT DATES:
Submission Deadline: 15th of February 2019
V. Guest Editors
Pablo Barros, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
Ognjen (Oggi) Rudovic, Massachusetts Institute of Technology, United States
of America
Hatice Gunes, University of Cambridge, United Kingdom
--
Best regards,
Pablo Barros
http://www.pablobarros.net
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Long and Short Papers
https://icmi.acm.org/2019/index.php?id=cfp
Abstract Submission: May 1, 2019 (11:59pm PST)
Final Submission: May 7, 2019 (11:59pm PST)
***********************************************************************************
Call for Long and Short Papers
The 21st International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, China. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
We are keen to showcase novel input and output modalities and interactions to the ICMI community. ICMI 2019 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits and doctoral spotlight papers. The conference will also feature workshops and grand challenges. The proceedings of ICMI 2019 will be published by ACM as part of their series of International Conference Proceedings and Digital Library.
We also welcome conference papers from the behavioral and social sciences. These papers allow us to understand how technology can be used to increase our scientific knowledge and may focus less on presenting technical or algorithmic novelty. For this reason, the "novelty" criterion used during the ICMI 2019 review process will be based on two sub-criteria (i.e., scientific novelty and technical novelty, as described below). Accepted papers at ICMI 2019 need to be novel on only one of these sub-criteria. In other words, a paper that is strong on scientific knowledge contribution but low on algorithmic novelty should be ranked similarly to a paper that is high on algorithmic novelty but low on knowledge discovery.
Scientific Novelty: Papers should bring new knowledge to the scientific community. Examples include discovering new behavioral markers that are predictive of mental health, or showing how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and to clearly discuss the novelty of the scientific discoveries made in their paper.
Technical Novelty: Papers reviewed with this sub-criterion should include novelty in their computational approach for recognizing, generating, or modeling data. Examples include novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with a new usage of an existing approach.
Please see the Submission Guidelines for Authors for detailed submission instructions.
This year, ICMI welcomes contributions on our theme of multi-modal understanding of multi-party interactions. Additional topics of interest include but are not limited to:
Affective computing and interaction
Cognitive modeling and multimodal interaction
Gesture, touch and haptics
Healthcare, assistive technologies
Human communication dynamics
Human-robot/agent multimodal interaction
Interaction with smart environment
Machine learning for multimodal interaction
Mobile multimodal systems
Multimodal behavior generation
Multimodal datasets and validation
Multimodal dialogue modeling
Multimodal fusion and representation
Multimodal interactive applications
Speech behaviors in social interaction
System components and multimodal platforms
Visual behaviors in social interaction
Virtual/augmented reality and multimodal interaction
Important Dates
Abstract Submission (must include title, authors, abstract): May 1, 2019 (11:59pm PST)
Final Submission (no updates allowed after May 7): May 7, 2019 (11:59pm PST)
Rebuttal due: June 25, 2019 (11:59pm PST)
Paper notification: July 7, 2019
Paper camera-ready: July 15, 2019 (11:59pm PST)
Presenting at main conference: October 14-18, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Multimodal Grand Challenges
https://icmi.acm.org/2019/index.php?id=cfc
Submission Deadline: January 14, 2019
***********************************************************************************
Call for Multimodal Grand Challenges
Proposals due January 14, 2019
Proposal notification February 1, 2019
Paper camera-ready August 12, 2019
We are calling for teams to propose one or more ICMI Multimodal Grand Challenges.
The International Conference on Multimodal Interaction (ICMI) is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. Developing systems that can robustly understand human-human communication or respond to human input requires identifying the best algorithms and their failure modes. In fields such as computer vision, speech recognition, computational (para-) linguistics and physiological signal processing, for example, the availability of datasets and common tasks have led to great progress. We invite the ICMI community to collectively define and tackle the scientific Grand Challenges in our domain for the next 5 years. ICMI Multimodal Grand Challenges aim to inspire new ideas in the ICMI community and create momentum for future collaborative work. Analysis, synthesis, and interactive tasks are all possible. Challenge papers will be indexed in the proceedings of ICMI.
The grand challenge sessions are still to be confirmed. We invite organizers from various fields related to multimodal interaction to propose and run Grand Challenge events. We are looking for exciting and stimulating challenges including but not limited to the following categories:
Dataset-driven challenge. This challenge will provide a dataset that is exemplary of the complexities of current and future multimodal problems, and one or more multimodal tasks whose performance can be objectively measured and compared under rigorous conditions. Participants in the challenge will evaluate their methods against the challenge data in order to identify areas of strength and weakness.
Use-case challenge. This challenge will provide an interactive problem system (e.g. dialog-based or non-verbal-based) and the associated resources, which can allow people to participate through the integration of specific modules or alternative full systems. Proposers should also establish systematic evaluation procedures.
Health challenge. This challenge will provide a dataset that is exemplary of a health-related task whose analysis, diagnosis, treatment or prevention can be aided by multimodal interactions. The challenge should focus on exploring the benefits of multimodal (audio, visual, physiological, etc.) solutions for the stated task.
Policy challenge. Legal, ethical, and privacy issues of Multimodal Interaction systems in the age of AI. The challenge could revolve around opinion papers, panels, discussions, etc.
Prospective organizers should submit a five-page maximum proposal containing the following information:
Title
Abstract appropriate for possible Web promotion of the Challenge
Distinctive topics to be addressed and specific goals
Detailed description of the Challenge and its relevance to multimodal interaction
Length (full day or half day)
Plan for soliciting participation
Description of how submissions (challenge submissions and papers) will be evaluated, and a list of proposed reviewers
Proposed schedule for releasing datasets (if applicable) and/or systems (if applicable) and receiving submissions
Short biography of the organizers (preferably from multiple institutions)
Funding source (if any) that supports or could support the challenge organization
Draft call for papers; affiliations and email addresses of the organisers; summary of the Grand Challenge; list of potential Technical Program Committee members and their affiliations; important dates
Proposals will be evaluated based on originality, ambition, feasibility, and implementation plan. A challenge with dataset(s) or system(s) that has pilot results ensuring its representativeness and suitability for the proposed task will be given preference for acceptance; an additional one-page description must be attached in such cases. Continuations of or variants on the 2018 challenges are welcome, though we ask that such submissions highlight the number of participants who attended during the previous year and describe what changes will be made from the previous year.
The ICMI organizers will offer support with basic logistics, which includes rooms and equipment to run the workshop; coffee breaks can be offered if synchronised with the main conference.
Important Dates and Contact Details
Proposals should be emailed to both ICMI 2019 Multimodal Grand Challenge Chairs, Prof. Anna Esposito and Dr. Michel Valstar, via grandchallenge.ICMI19(a)gmail.com. Prospective organizers are also encouraged to contact the co-chairs if they have any questions. Proposals are due by January 14th, 2019. Notifications will be sent on February 1st, 2019.
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Workshops
https://icmi.acm.org/2019/index.php?id=CfW
Submission Deadline: Saturday, February 16, 2019
***********************************************************************************
Call for Workshops
The International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, Jiangsu, China, during October 14-18, 2019. ICMI is the premier international conference for multidisciplinary research on multimodal human-human and human-computer interaction analysis, interface design, and system development. The theme of the ICMI 2019 conference is Multi-modal Understanding of Multi-party Interactions. ICMI has developed a tradition of hosting workshops in conjunction with the main conference to foster discourse on new research, technologies, social science models and applications. Examples of recent workshops include:
Multi-sensorial Approaches to Human-Food Interaction
Group Interaction Frontiers in Technology
Modeling Cognitive Processes from Multimodal Data
Human-Habitat for Health
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
Investigating Social Interactions with Artificial Agents
Child Computer Interaction
Multimodal Interaction for Education
We are seeking workshop proposals on emerging research areas related to the main conference topics, and those that focus on multi-disciplinary research. We would also strongly encourage workshops that will include a diverse set of keynote speakers (factors to consider include: gender, ethnic background, institutions, years of experience, geography, etc.).
The format, style, and content of accepted workshops are under the control of the workshop organizers. Workshops may be half-day or full-day in duration. Workshop organizers will be expected to manage the workshop content, be present to moderate the discussions and panels, invite experts in the domain, and maintain a website for the workshop. Workshop papers will be indexed by ACM.
Submission
Prospective workshop organizers are invited to submit proposals in PDF format (Max. 3 pages). Please email proposals to the workshop chairs: Hongwei Ding (hwding(a)sjtu.edu.cn), Carlos Busso (busso(a)utdallas.edu) and Tadas Baltrusaitis (tadyla(a)gmail.com). The proposal should include the following:
Workshop title
List of organizers including affiliation, email address, and short biographies
Workshop motivation, expected outcomes and impact
Tentative list of keynote speakers
Workshop format (by invitation only, call for papers, etc.), anticipated number of talks/posters, workshop duration (half-day or full-day) including tentative program
Planned advertisement means, website hosting, and estimated participation
Paper review procedure (single/double-blind, internal/external, solicited/invited-only, pool of reviewers, etc.)
Paper submission and acceptance deadlines
Special space and equipment requests, if any
Important Dates:
Workshop proposal submission: Saturday, February 16, 2019
Notification of acceptance: Saturday, March 2, 2019
Workshop Date: Monday, October 14, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear Colleagues,
We would like to invite you to contribute a chapter for the upcoming volume
entitled “Neural and Machine Learning for Emotion and Empathy Recognition:
Experiences from the OMG-Challenges” to be published by the Springer Series
on Competitions in Machine Learning. Our book will be available by fall
2019.
Website: https://easychair.org/cfp/OMGBook2019
Short description of the volume:
Emotional expression perception and categorization are extremely popular in
the affective computing community. However, the inclusion of emotions in
the decision-making process of an agent is not considered in most of the
research in this field. Treating emotion expressions as the final goal,
although necessary, reduces the usability of such solutions in more complex
scenarios. To create a general affective model to be used as a modulator
for learning different cognitive tasks, such as modeling intrinsic
motivation, creativity, dialog processing, grounded learning, and
human-level communication, instantaneous emotion perception cannot be the
pivotal focus.
This book aims to present recent contributions for multimodal emotion
recognition and empathy prediction which take into consideration the
long-term development of affective concepts. In this regard, we provide
access to two datasets: the OMG-Emotion Behavior Recognition and
OMG-Empathy Prediction datasets. These datasets were designed, collected
and formalized to be used in the OMG-Emotion Recognition Challenge and the
OMG-Empathy Prediction Challenge, respectively. All the participants of our
challenges are invited to submit their contribution to our book. We also
invite interested authors to use our datasets in the development of
inspiring and innovative research on affective computing. By collecting
these solutions in this edited book, we hope to inspire further research
in affective and cognitive computing over longer timescales.
TOPICS OF INTEREST
The topics of interest for this call for chapters include, but are not
limited to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Novel neural network models for affective processing
- Lifelong affect analysis, perception, and interpretation
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
- New theories and findings on empathy modeling
- Multimodal processing of empathetic and social signals
- Novel neural network models for empathy understanding
- Lifelong models for empathetic interactions
- Empathetic Human-Robot-Interaction Scenarios
- New neuroscientific and psychological findings on empathy representation
- Multi-agent communication for empathetic interactions
- Empathy as a decision-making modulator
- Personalized systems for empathy prediction
Each contributed chapter is expected to present a novel research study, a
comparative study, or a survey of the literature.
We also expect each contributed chapter to make use of at least one
of our datasets: OMG-Emotion or OMG-Empathy.
SUBMISSIONS
All submissions should be done via EasyChair:
https://easychair.org/cfp/OMGBook2019
Original artwork and a signed copyright release form will be required for
all accepted chapters. For author instructions, please visit:
https://www.springer.com/us/authors-editors/book-authors-editors/resources-…
ACCESS TO THE DATASETS
- OMG-EMOTION -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_emotion.html
- OMG-EMPATHY -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy.html
To have access to the datasets, please send an e-mail to:
barros(a)informatik.uni-hamburg.de
IMPORTANT DATES:
- Submission of abstracts: 8th of February 2019
- Notification of initial editorial decisions: 15th of February 2019
- Submission of full-length chapters: 29th of March 2019
- Notification of final editorial decisions: 17th of May 2019
- Submission of revised chapters: 7th of June 2019
--
Best regards,
Pablo Barros
http://www.pablobarros.net
2nd CALL FOR PAPERS
IEEE Transactions on Affective Computing
Special Issue on Automated Perception of Human Affect from Longitudinal
Behavioral Data
Website:
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/tacSpecialIssue201…
I. Aim and Scope
Research trends within artificial intelligence and cognitive sciences are
still heavily based on computational models that attempt to imitate human
perception in various behavior categorization tasks. However, most of the
research in the field focuses on instantaneous categorization and
interpretation of human affect, such as the inference of six basic emotions
from face images, and/or affective dimensions (valence-arousal), stress and
engagement from multi-modal (e.g., video, audio, and autonomic physiology)
data. This diverges from the developmental aspect of emotional behavior
perception and learning, where human behavior and expressions of affect
evolve and change over time. Moreover, these changes are present not only
in the temporal domain but also within different populations and more
importantly, within each individual. This calls for a new perspective when
designing computational models for analysis and interpretation of human
affective behaviors: computational models that can adapt to different
contexts and individuals over time in a timely and efficient manner, and also
incorporate existing neurophysiological and psychological findings (prior
knowledge). Thus, the long-term goal is to create life-long personalized
learning and inference systems for analysis and perception of human
affective behaviors. Such systems would benefit from long-term contextual
information (including demographic and social aspects) as well as
individual characteristics. This, in turn, would allow building intelligent
agents (such as mobile and robot technologies) capable of adapting their
behavior in a continuous and on-line manner to the target contexts and
individuals.
This special issue solicits contributions from computational neuroscience
and psychology, artificial intelligence, machine learning, and affective
computing that challenge and expand current research on the interpretation
and estimation of human affective behavior from longitudinal behavioral
data, i.e., single or multiple modalities captured over extended periods of
time, allowing efficient profiling of target behaviors and their inference
in terms of affect and other socio-cognitive dimensions. We invite
contributions focusing on both theoretical and modeling perspectives, as
well as applications spanning human-human, human-computer, and human-robot
interaction.
II. Potential Topics
The capability of computational models to perceive and understand
emotional behavior is an important and popular research topic. This is why
recent special issues of the IEEE Transactions on Affective Computing have
covered topics ranging from emotion behavior analysis “in-the-wild” to
personality analysis. However, most of the research published through these
calls treats emotion behavior as an instantaneous event, relating mostly to
emotion recognition, and thus neglects the development of complex emotion
behavior models. Our special issue will foster the development of the field
by focusing on excellent research into emotion models for long-term
behavior analysis.
The topics of interest for this special issue include, but are not limited
to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Lifelong affect analysis, perception, and interpretation
- Novel neural network models for affective processing
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
III. Submission
Prospective authors are invited to submit their manuscripts electronically,
adhering to the IEEE Transactions on Affective Computing guidelines (
https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5165369). Please
submit your papers through the online system (
https://mc.manuscriptcentral.com/taffc-cs) and be sure to select the
special issue: Special Issue/Section on Automated Perception of Human
Affect from Longitudinal Behavioral Data.
IV. IMPORTANT DATES:
Submission Deadline: 15th of January 2019
Deadline for reviews and response to authors: 6th of April 2019
Camera-ready deadline: 5th of August 2019
V. Guest Editors
Pablo Barros, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
Ognjen (Oggi) Rudovic, Massachusetts Institute of Technology, United States
of America
Hatice Gunes, University of Cambridge, United Kingdom
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
http://www.pablobarros.net
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Dear colleagues,
We are inviting participation in the biometric competition "6th Sclera
Segmentation Recognition Benchmarking Competition (SSBC 2019)", held in
conjunction with the 12th IAPR International Conference on Biometrics (ICB
2019).
Details about the competition can be found at
https://sites.google.com/view/ssbc2019/home
Please find the call-for-participation flyer attached to this email. Please
feel free to register.
We will welcome the top-ranking participant to join as a co-author of the
technical report of the competition that will be submitted to ICB 2019.
Feel free to contact Abhijit Das if you have any further questions.
Kindly circulate this email to others who might be interested.
We look forward to your contributions!
Best regards
Organizers SSBC 2019
Abhijit Das (Inria, France)
Umapada Pal (ISI Kolkata, India)
Michael Blumenstein (UTS, Australia)
Dear colleagues,
We are inviting paper submissions for a special session on “Human Health
Monitoring Based on Computer Vision”, as part of the 14th IEEE
International Conference on Automatic Face and Gesture Recognition (FG’19,
http://fg2019.org/), Lille, France, May 14-18, 2019. Details on the special
session can be found in the attached call for papers and at
http://fg2019.org/participate/special-sessions/hhmbcv/.
IMPORTANT DATES:
Full Paper Submission: Dec 14th, 2018
Acceptance Notification: Jan 21st, 2019
Camera-Ready Paper Due: Feb 15th 2019
Feel free to contact Abhijit Das if you have any further questions.
Kindly circulate this email to others who might be interested.
We look forward to your contributions!
François Brémond (INRIA, France)
Antitza Dantcheva (INRIA, France)
Abhijit Das (INRIA, France)
Xilin Chen (CAS, China)
Hu Han (CAS, China)
2nd CALL FOR PARTICIPATION
The One-Minute Gradual-Empathy Prediction (OMG-Empathy) Competition
held in partnership with the IEEE International Conference on Automatic
Face and Gesture Recognition 2019 in Lille, France.
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy.html
I. Aim and Scope
The ability to perceive, understand and respond to social interactions in a
human-like manner is one of the most desired capabilities in artificial
agents, particularly social robots. These skills are highly complex and
require a focus on several different aspects of research, including
affective understanding. An agent which is able to recognize, understand
and, most importantly, adapt to different human affective behaviors can
increase its own social capabilities by being able to interact and
communicate in a natural way.
Emotional expression perception and categorization are extremely popular in
the affective computing community. However, the inclusion of emotions in
the decision-making process of an agent is not considered in most of the
research in this field. Treating emotion expressions as the final goal,
although necessary, reduces the usability of such solutions in more complex
scenarios. To create a general affective model to be used as a modulator
for learning different cognitive tasks, such as modeling intrinsic
motivation, creativity, dialog processing, grounded learning, and
human-level communication, emotion perception alone cannot be the pivotal
focus. The integration of perception with intrinsic concepts of emotional
understanding, such as a dynamic and evolving mood and affective memory, is
required to model the necessary complexity of an interaction and realize
adaptability in an agent's social behavior.
Such models are most necessary for the development of real-world social
systems, which would communicate and interact with humans in a natural way
on a day-to-day basis. This could become the next goal for research on
Human-Robot Interaction (HRI) and could be an essential part of the next
generation of social robots.
For this challenge, we designed, collected and annotated a novel corpus
based on human-human interaction. This novel corpus builds on top of the
experience we gathered while organizing the OMG-Emotion Recognition
Challenge, making use of state-of-the-art frameworks for data collection
and annotation.
The One-Minute Gradual Empathy datasets (OMG-Empathy) contain multi-modal
recordings of different individuals discussing predefined topics. One of
them, the actor, shares a story about themselves while the other, the
listener, reacts to it emotionally. We annotated each interaction based on
the listener's own assessment of how they felt while the interaction was
taking place.
We encourage the participants to propose state-of-the-art solutions based
not only on deep, recurrent, and self-organizing neural networks but also
on traditional methods for feature representation and data processing. We
also emphasize that the use of contextual information, as well as
personalized solutions for empathy assessment, will be extremely important
for the development of competitive solutions.
II. Competition Tracks
We make available for the challenge a pre-defined set of training,
validation, and testing samples. We separate our samples based on each
story: 4 stories for training, 1 for validation, and 3 for testing. Each
story sample is composed of 10 videos of interactions, one for each
listener. Although both tracks use the same training, validation, and
testing data split, they measure different aspects of self-assessed
empathy:
The Personalized Empathy track, where each team must predict the empathy of
a specific person. We will evaluate the ability of proposed models to learn
the empathic behavior of each of the subjects over a newly perceived story.
We encourage the teams to develop models which take into consideration the
individual behavior of each subject in the training data.
The Generalized Empathy track, where the teams must predict the general
behavior of all the participants over each story. We will measure the
performance of the proposed models to learn a general empathic measure for
each of the stories individually. We encourage the proposed models to take
into consideration the aggregated behavior of all the participants for each
story and to generalize this behavior in a newly perceived story.
The training and validation samples will be given to the participants at
the beginning of the challenge together with all the associated labels. The
test set will be given to the participants without the associated labels.
The teams' predictions on the test set will be used to calculate the final
metrics of the challenge.
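To make the split above concrete, here is a minimal Python sketch of how a team might enumerate the interaction videos per subset; the story and listener identifiers (e.g., "Story_1", "Subject_01") are illustrative placeholders, not the official file names of the OMG-Empathy corpus.

# Sketch (not official challenge code) of the data split described above:
# 4 stories for training, 1 for validation, 3 for testing, each story
# having one interaction video per listener (10 listeners in total).
# Story and listener identifiers are hypothetical placeholders.
TRAIN_STORIES = ["Story_1", "Story_2", "Story_3", "Story_4"]
VAL_STORIES = ["Story_5"]
TEST_STORIES = ["Story_6", "Story_7", "Story_8"]
LISTENERS = ["Subject_%02d" % i for i in range(1, 11)]

def videos_for(stories):
    """Enumerate (story, listener) pairs, one interaction video each."""
    return [(story, listener) for story in stories for listener in LISTENERS]

print(len(videos_for(TRAIN_STORIES)), "training videos")    # 40
print(len(videos_for(VAL_STORIES)), "validation videos")    # 10
print(len(videos_for(TEST_STORIES)), "test videos")         # 30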
III. How to Participate
To participate in the challenge, please send an email to barros @
informatik.uni-hamburg.de with the title "OMG-Empathy Team Registration".
This e-mail must contain the following information:
Team Name
Team Members
Affiliation
Participating tracks
We split the corpus into three subsets: training, validation, and testing.
The participants will receive the training and validation sets, together
with the associated annotations once they subscribe to the challenge. The
subscription will be done via e-mail. Each participating team must consist
of 1 to 5 participants and must agree to use the data only for scientific
purposes. Each team can choose to take part in one or both tracks.
After the training period is over, the testing set will be released without
the associated annotations.
Each team must submit, via e-mail, their final predictions as a .csv file
for each video on the test set. Together with the final submission, each
team must send a short 2-4 page paper describing their solution, published
on arXiv, and a link to a GitHub page with their solution. If a team fails
to submit any of these items, their submission will be invalidated. Each
team can make up to 3 complete submissions for each track.
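As a rough illustration of the expected deliverable (one .csv prediction file per test video), a minimal Python sketch follows; the single "valence" column and the one-value-per-annotation-step layout are assumptions made for this example, so the official submission format distributed with the test data takes precedence.

import csv
import os

def write_predictions(video_id, predictions, out_dir="submission"):
    # Writes one CSV per test video. The "valence" column name and the
    # one-value-per-annotation-step layout are assumptions of this sketch.
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, video_id + ".csv")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["valence"])
        for value in predictions:
            writer.writerow(["%.6f" % value])
    return path

# Example: a constant neutral prediction for a hypothetical one-minute
# video annotated at 25 values per second.
write_predictions("Story_6_Subject_01", [0.0] * (25 * 60))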
IV. Important Dates
25th of September 2018 - Opening of the Challenge - Team registrations begin
1st of October 2018 - Training/validation data and annotation available
3rd of December 2018 - Test data release
5th of December 2018 - Final submission (Results and code)
7th of December 2018 - Final submission (Paper)
10th of December 2018 - Announcement of the winners
V. Organization
Pablo Barros, University of Hamburg, Germany
Nikhil Churamani, University of Cambridge, United Kingdom
Angelica Lim, Simon Fraser University, Canada
Stefan Wermter, University of Hamburg, Germany
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
http://www.pablobarros.net
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Nanyang Technological University (NTU) in Singapore has open calls for several postdoctoral fellowships, which are also open for research on vision and perception. There are currently three labs in the area of vision research:
- Charles OR (charlesor(a)ntu.edu.sg)
Face perception, motion perception, form perception, EEG, eye movements, computational modelling, psychophysics;
http://research.ntu.edu.sg/expertise/academicprofile/Pages/StaffProfile.asp…
- Gerrit MAUS (maus(a)ntu.edu.sg)
Eye movements, eye blinks, filling-in, interpolation and extrapolation in vision, prediction, motion perception; psychophysics, fMRI, TMS;
http://blogs.ntu.edu.sg/perception
- Hong XU (xuhong(a)ntu.edu.sg)
Heading/Self-motion perception in navigation, face and object perception, attention and eye movements, virtual reality, EEG, electrophysiology, modelling, psychophysics;
http://www.ntu.edu.sg/home/xuhong/
We have access to state-of-the-art facilities for psychophysics, virtual reality, eye tracking, EEG, MEG, fMRI, fNIRS, TMS, and tDCS.
Feel free to contact any of us for more information or to discuss potential proposals.
The fellowship ad below (Deadline: 30 November) is currently offered by the College of Humanities, Arts, and Social Sciences, NTU.
There are two more opportunities available at NTU:
- for research related to Artificial Intelligence (Wallenberg - NTU Presidential Postdoctoral Fellowship, http://www.ntu.edu.sg/ppf/Pages/home.aspx),
- for research in any area by PhD graduates from Swedish Universities (Wallenberg - NTU Postdoctoral Fellowship, call open from Dec 1st, https://kaw.wallenberg.org/utlysningar/wallenberg-foundation-postdoctoral-f…)
Best Regards,
Charles OR, Gerrit MAUS, Hong XU
Assistant Professors (Psychology)
School of Social Sciences
College of Humanities, Arts, and Social Sciences
Nanyang Technological University, Singapore
======
The College of Humanities, Arts, and Social Sciences, Nanyang Technological University (NTU) invites applications from eligible candidates to join us as Postdoctoral Fellows for the Academic Year 2019.
The Postdoctoral Fellowships are for one year, renewable for a second year, subject to satisfactory performance. Applicants are strongly advised to explore the research interests of the College’s faculty members to identify potential faculty mentors.
Applicants must possess a doctoral degree issued no more than 3 years prior to the time of application (i.e. the degree must have been obtained after Jan 1, 2016). Candidates who are finishing up their degrees must have their doctoral degrees conferred by July 2019.
Owing to the interdisciplinary nature of the fellowship, applicants are expected to propose a research project to demonstrate how their expertise crosses different disciplines and relates to the specific Research Theme they are applying for.
Details are available at:
http://class.cohass.ntu.edu.sg/Research/Pages/Postdoctoral-Fellowship-2019.…
Applications and Reference Letters must reach the College by 30 November, 2018 (11:59pm Singapore Time UTC+8). Successful candidates are expected to commence their Fellowships in July or August 2019.
Owing to the tight deadline, interested candidates are invited to contact potential faculty mentors as soon as possible, with curriculum vitae and a brief research statement provided.
Application and enquiries should be addressed to:
The Associate Dean (Research)
College of Humanities, Arts, and Social Sciences
Email: AD-HASS-RESEARCH(a)ntu.edu.sg
NTU is a young and research-intensive university, ranking consistently amongst the top 10 in Asia and 1st amongst young universities under 50 years old. It has been ranked consistently and progressively among the top 100 universities in the world by Times Higher Education since 2013, with its latest ranking at 51. Singapore is a fascinating, dynamic, multi-cultural city in Southeast Asia with a large expat community, and a great hub for exploring neighbouring travel destinations.
=======
Hi!
I am looking for a database of 3D models of famous faces.
I found Eric Baird's database at
http://www.relativitybook.com/CoolStuff/facebank.html, but was hoping
for additional celebrities.
If anyone has a database and is willing to share, or could point me to
a database, it would be very much appreciated.
Thanks!
Caspar
Apologies for cross-posting
***********************************************************************************
CBAR 2019: CALL FOR PAPERS
6th International Workshop on CONTEXT BASED AFFECT RECOGNITION
https://cbar2019.blogspot.com/
Submission Deadline: December 14th, 2018
***********************************************************************************
The 6th International Workshop on Context Based Affect Recognition (CBAR
2019) will be held in conjunction with FG 2019 in May 2019 in Lille,
France (http://fg2019.org/).
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
https://cbar2019.blogspot.com/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
CALL FOR PAPERS
IEEE Transactions on Affective Computing
Special Issue on Automated Perception of Human Affect from Longitudinal
Behavioral Data
Website:
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/tacSpecialIssue201…
I. Aim and Scope
Research trends within artificial intelligence and cognitive sciences are
still heavily based on computational models that attempt to imitate human
perception in various behavior categorization tasks. However, most of the
research in the field focuses on instantaneous categorization and
interpretation of human affect, such as the inference of six basic emotions
from face images, and/or affective dimensions (valence-arousal), stress and
engagement from multi-modal (e.g., video, audio, and autonomic physiology)
data. This diverges from the developmental aspect of emotional behavior
perception and learning, where human behavior and expressions of affect
evolve and change over time. Moreover, these changes are present not only
in the temporal domain but also within different populations and more
importantly, within each individual. This calls for a new perspective when
designing computational models for analysis and interpretation of human
affective behaviors: computational models that can adapt to different
contexts and individuals over time in a timely and efficient manner, and also
incorporate existing neurophysiological and psychological findings (prior
knowledge). Thus, the long-term goal is to create life-long personalized
learning and inference systems for analysis and perception of human
affective behaviors. Such systems would benefit from long-term contextual
information (including demographic and social aspects) as well as
individual characteristics. This, in turn, would allow building intelligent
agents (such as mobile and robot technologies) capable of adapting their
behavior in a continuous and on-line manner to the target contexts and
individuals.
This special issue solicits contributions from computational neuroscience
and psychology, artificial intelligence, machine learning, and affective
computing that challenge and expand current research on the interpretation
and estimation of human affective behavior from longitudinal behavioral
data, i.e., single or multiple modalities captured over extended periods of
time, allowing efficient profiling of target behaviors and their inference
in terms of affect and other socio-cognitive dimensions. We invite
contributions focusing on both theoretical and modeling perspectives, as
well as applications spanning human-human, human-computer, and human-robot
interaction.
II. Potential Topics
The capability of computational models to perceive and understand
emotional behavior is an important and popular research topic. This is why
recent special issues of the IEEE Transactions on Affective Computing have
covered topics ranging from emotion behavior analysis “in-the-wild” to
personality analysis. However, most of the research published through these
calls treats emotion behavior as an instantaneous event, relating mostly to
emotion recognition, and thus neglects the development of complex emotion
behavior models. Our special issue will foster the development of the field
by focusing on excellent research into emotion models for long-term
behavior analysis.
The topics of interest for this special issue include, but are not limited
to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Lifelong affect analysis, perception, and interpretation
- Novel neural network models for affective processing
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
III. Submission
Prospective authors are invited to submit their manuscripts electronically,
adhering to the IEEE Transactions on Affective Computing guidelines (
https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5165369). Please
submit your papers through the online system (
https://mc.manuscriptcentral.com/taffc-cs) and be sure to select the
special issue: Special Issue/Section on Automated Perception of Human
Affect from Longitudinal Behavioral Data.
IV. IMPORTANT DATES:
Submission Deadline: 15th of January 2019
Deadline for reviews and response to authors: 6th of April 2019
Camera-ready deadline: 5th of August 2019
V. Guest Editors
Pablo Barros, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
Ognjen (Oggi) Rudovic, Massachusetts Institute of Technology, United States
of America
Hatice Gunes, University of Cambridge, United Kingdom
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
http://www.pablobarros.net
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Dear colleagues,
The Society for Affective Science is pleased to announce a Call for Abstracts for the SAS 2019 Conference at the Westin Boston Waterfront Hotel in Boston, MA, on March 21-23, 2019.
Abstract Submission Formats
In addition to invited sessions, we invite attendees to submit abstracts for posters and flash talk sessions. We welcome abstract submissions that describe new research within the domain of affective science. In line with our goal to facilitate interdisciplinary advances, we welcome submissions from affective scientists in any discipline (e.g., anthropology, business, computer science, cultural studies, economics, education, geography, history, integrative medicine, law, linguistics, literature, neuroscience, philosophy, political science, psychiatry, psychology, public health, sociology, theatre), working on a broad range of topics using a variety of measures. Authors at all career stages – trainees, junior faculty, and senior faculty – are encouraged to submit an abstract, on any topic within the domain of affective science, in one of the following two presentation formats:
1. A poster
2. A flash talk
Deadline for Receipt of Abstracts
Abstracts must be submitted by Friday, November 9, 2018, at 11:59 p.m. Baker Island Time (BIT; UTC-12) to be considered for inclusion in the program. Please review the abstract submission instructions carefully. Abstracts can be submitted at https://t.e2ma.net/click/4fw8pd/824poyb/sqgu0q. All presenters must register and pay to attend the meeting. Notification of acceptance or rejection of abstracts will be emailed to the corresponding author by Friday, January 11, 2019.
Selection Process
All abstracts will be evaluated for scholarly merit by an interdisciplinary group of SAS Program Committee members using blind peer review. Abstracts are also selected at this time for Poster Spotlight presentations and for consideration for poster and flash talk awards. Poster and flash talk award nominees will be evaluated during the conference by SAS faculty members. Reviewers do not review abstracts, posters, or flash talks on which they have a known conflict of interest. Awards will be announced at the conference during the closing ceremony.
Program Highlights
This year's invited program includes new and exciting interdisciplinary conversations in affective science. The opening session, "Trajectories, Transitions, and Turning Points," features Phoebe Ellsworth, Valeria Gazzola, and Bob Levenson describing experiences, events, and findings that were key transition points in their research programs and careers. The presidential symposium will be devoted to research on culture, ethnicity, and emotion. This year's program will also feature a symposium on emotion and health jointly sponsored by SAS and the American Psychosomatic Society called "The Vital Role of Affective Science in Medicine."
We will also continue to showcase other exciting formats, such as TED-style talks, salons, methods and speed networking events, flash talks, poster spotlights, and interactive poster sessions. Come see other invited speakers, including Marc Brackett, Cynthia Breazeal, Alan Fiske, Richard Lane, Jenn Lerner, Tali Sharot, and invited flash-talk speakers who will present groundbreaking research representing a wide range of affective science! There are also four exciting pre-conferences that will be held March 21.
Please see the conference website (https://t.e2ma.net/click/4fw8pd/824poyb/8ihu0q) for more information. You can register for SAS 2019 starting in November.
We're looking forward to seeing you in Boston for what promises to be another great SAS meeting!
Relevant Links
Check out our new look at https://t.e2ma.net/click/4fw8pd/824poyb/obiu0q! We are extremely excited about the new Society website! It's been redesigned from the bottom up with clarity and effectiveness foremost in our minds. We hope affective scientists everywhere will find the site easy to navigate and use. Note that our members-only section will open soon.
https://www.facebook.com/affectScience/
https://twitter.com/affectScience
Dr. Rachael E. Jack, Ph.D.
Reader
Institute of Neuroscience & Psychology
School of Psychology
University of Glasgow
+44 (0) 141 5087
www.psy.gla.ac.uk/schools/psychology/staff/rachaeljack/
Multi-view Representations for Pose Invariant Face Recognition in Man and Machine
ESRC DTP Artificial Intelligence Studentship
School of Psychology and School of Computer Science, Nottingham University
This is a unique opportunity for a fully-funded ESRC Doctoral Studentship for applicants from the UK, EU or Overseas with a background in psychology, neuroscience, cognitive science, computer science or a related STEM discipline. The successful candidate will be located within the School of Psychology. The start date of this studentship may be 1st February 2019 or 1st October 2019 and will depend on the required award length (see ‘Award Lengths’ section below).
Pose-Invariant Face Recognition (PIFR) remains a significant stumbling block to realizing the full potential of face recognition as a passive biometric technology. This fundamental human ability poses a significant challenge for computer vision systems due to the immense within-class appearance variations caused by pose change, e.g., self-occlusion and coupled illumination or expression variations. Despite extensive efforts to solve the problem of pose-invariant face recognition it remains a significant barrier to developments in Artificial Intelligence. PIFR is achieved effortlessly by the human visual system but at present we do not understand the human system well enough to provide plausible solutions to the clear technological challenges. The aim of the studentship will be to enhance our understanding of how human observers achieve pose invariant recognition of faces in order to inform AI strategies. We will particularly focus on multi- view or pose-aware strategies and compare these against object-based models or pose-agnostic approaches. The work will involve both extensive psychophysical experimentation with human participants and computational experiments exploring computer models of pose-invariant dynamic face recognition. The project will be jointly supervised by Prof Alan Johnston (Psychology) and Dr Michel Valstar (Computer Science).
Award Lengths
The length of award offered will depend on the extent to which the candidate has met the ESRC’s core research methods training requirements. These are:
* Quantitative methods,
* Qualitative methods,
* Philosophy of Research and
* Research Design.
The extent to which you have met these criteria will be assessed during the application process. For those who have met all of the training, a +3 year award will be made; for those who have met some of the training, a +3.5 year award will be made, with a requirement that core training is completed within the first 12 months. If no core methods training has been undertaken by the candidate, then a +4 award will be made, which will include 180 credits of research methods training before progressing to the PhD.
Owing to these requirements, +3 awards could start in February, but +4 awards would need to start at the beginning of the next academic year. +3.5 awards would depend on the type of training required and we will be able to advise candidates further prior to application if required.
You can read more about award lengths here: www.nottingham.ac.uk/esrc-dtc/mgs/dtp-training-at-nottingham.aspx
Application Process
To be considered for this PhD, please complete the ESRC AI Studentship application form (available online), together with a covering letter, a CV, and two references, and email these to christopher.atkinson(a)nottingham.ac.uk.
Application deadline: Friday 9th November 2018
Midlands Graduate School ESRC DTP
The Midlands Graduate School is an accredited Economic and Social Research Council (ESRC) Doctoral Training Partnership (DTP). One of 14 such partnerships in the UK, the Midlands Graduate School is a collaboration between the University of Warwick, Aston University, University of Birmingham, University of Leicester, Loughborough University and the University of Nottingham.
Our ESRC studentships cover fees and a maintenance stipend and provide extensive support for research training, as well as research activity support grants. For this priority area, candidates ordinarily resident in an EU member state will be eligible for a full award, as will candidates from overseas.
Informal enquiries about the research prior to application can be directed to: alan.johnston(a)nottingham.ac.uk<mailto:alan.johnston@nottingham.ac.uk>.
We are seeking an enthusiastic PhD student with a particular interest in neuroscience of auditory (speech) perception. The PhD position is part of the Chair of Cognitive and Clinical Neuroscience led by Prof. Dr. Katharina von Kriegstein at the TU Dresden, Germany. The position is funded by the ERC-project SENSOCOM (http://cordis.europa.eu/project/rcn/199655_en.html).
The aim of the SENSOCOM project is to investigate the role of auditory and visual subcortical sensory structures in analysing human communication signals and to specify how their dysfunction contributes to human communication disorders such as developmental dyslexia. For examples of our work on these topics see von Kriegstein et al., 2008 Current Biology, Diaz et al., 2012 PNAS, Müller-Axt et al., 2017 Current Biology.
TU Dresden is one of eleven German Universities of Excellence and offers an interdisciplinary scientific environment. The Neuroimaging Centre at the TU Dresden (http://www.nic-tud.de) has cutting-edge infrastructure with 3-Tesla MRI, MRI compatible headphones and eye-tracking, several EEG systems, a neurostimulation unit including neuronavigation, TMS and tDCS devices.
The PhD position is available at the earliest possible date. Subject to personal qualification, employees are remunerated according to salary group E 13 TV-L 50%. There will be the opportunity to participate in the TU Dresden graduate academy (https://tu-dresden.de/ga?set_language=en).
For more information on the post please see the official advertisement: https://tinyurl.com/ybwe47bc
For more information on our research group see: https://tu-dresden.de/mn/psychologie/kknw
---
Katharina von Kriegstein
Professor of Cognitive and Clinical Neuroscience
Technische Universität Dresden
Bamberger Str. 7, 01187 Dresden, Germany
Phone +49 (0) 351-463-43145
https://tu-dresden.de/mn/psychologie/kknw
https://twitter.com/kvonkriegstein
Dear colleagues,
We are inviting abstract submissions for a special session on “Human Health
Monitoring Based on Computer Vision”, as part of the 14th IEEE
International Conference on Automatic Face and Gesture Recognition (FG’19,
http://fg2019.org/), Lille, France, May 14-18, 2019. Details on the special
session follow below.
Title, abstract, list of authors, as well as the name of the corresponding
author, should be emailed directly to Abhijit Das (abhijitdas2048(a)gmail.com).
We hope to receive abstracts before October 8th 2018. Full papers are due
December 9th 2018.
Feel free to contact Abhijit Das if you have any further questions.
Kindly circulate this email to others who might be interested.
We look forward to your contributions!
François Brémond (INRIA, France)
Antitza Dantcheva (INRIA, France)
Abhijit Das (INRIA, France)
Xilin Chen (CAS, China)
Hu Han (CAS, China)
--------------------------------------------------------------------------------------------
*Call for abstracts for the FG 2019 special session*
*on*
*Human Health Monitoring Based on Computer Vision*
---------------------------------------------------------------
Human health monitoring based on computer vision has seen rapid scientific
growth in recent years, with many research articles and complete systems
built on features extracted from face and gesture. Researchers from computer
science as well as from medical science have devoted significant attention
to the area, with goals ranging from patient analysis and monitoring to
diagnostics (e.g., for dementia, depression, healthcare, and physiological
measurement [5, 6]).
Despite this progress, various challenges remain open, unexplored, or
unidentified, such as the robustness of these techniques in real-world
scenarios, the collection of large datasets for research, and the
heterogeneity of acquisition environments and artefacts. Moreover, healthcare
represents an area of broad economic (e.g.,
https://www.prnewswire.com/news-releases/healthcare-automation-market-to-re…),
social, and scientific impact. It is therefore imperative to foster efforts
from computer vision, machine learning, and the medical domain, as well as
multidisciplinary efforts. Towards this, we propose a special session with a
focus on multidisciplinary work, aiming to document recent advances in
automated healthcare and to enable and discuss progress. The goal of this
special session is to bring together researchers and practitioners working at
the intersection of computer vision and medical science, and to address a
wide range of theoretical and practical issues related to real-life
healthcare systems.
Topics of interest include, but are not limited to:
· Health monitoring based on face analysis,
· Health monitoring based on gesture analysis,
· Health monitoring based on corporeal visual features,
· Depression analysis based on visual features,
· Face analytics for human behaviour understanding,
· Anxiety diagnosis based on face and gesture,
· Physiological measurement employing face analytics,
· Databases on health monitoring, e.g., depression analysis,
· Augmentative and alternative communication,
· Human-robot interaction,
· Home healthcare,
· Technology for cognition,
· Automatic emotional hearing and understanding,
· Visual attention and visual saliency,
· Assistive living,
· Privacy preserving systems,
· Quality of life technologies,
· Mobile and wearable systems,
· Applications for the visually impaired,
· Sign language recognition and applications for hearing impaired,
· Applications for the ageing society,
· Personalized monitoring,
· Egocentric and first-person vision,
· Applications to improve the health and wellbeing of children and
the elderly, etc.
In addition, we plan to organise a special issue in a journal with extended
versions of accepted special session papers.
CALL FOR PARTICIPATION
The One-Minute Gradual-Empathy Prediction (OMG-Empathy) Competition is held
in partnership with the IEEE International Conference on Automatic Face and
Gesture Recognition 2019 in Lille, France.
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy.html
I. Aim and Scope
The ability to perceive, understand and respond to social interactions
in a human-like manner is one of the most desired capabilities in
artificial agents, particularly social robots. These skills are highly
complex and require a focus on several different aspects of research,
including affective understanding. An agent which is able to recognize,
understand and, most importantly, adapt to different human affective
behaviors can increase its own social capabilities by being able to
interact and communicate in a natural way.
Emotional expression perception and categorization are extremely popular
in the affective computing community. However, the inclusion of emotions
in the decision-making process of an agent is not considered in most of
the research in this field. Treating emotion expressions as the final goal,
although necessary, reduces the usability of such solutions in more complex
scenarios. To create a general affective model to be used as a modulator for
learning different cognitive tasks, such as modeling intrinsic motivation,
creativity, dialog processing, grounded learning, and human-level
communication, emotion perception alone cannot be the pivotal focus. The
integration of perception with intrinsic concepts of
emotional understanding, such as a dynamic and evolving mood and
affective memory, is required to model the necessary complexity of an
interaction and realize adaptability in an agent's social behavior.
Such models are most necessary for the development of real-world social
systems, which would communicate and interact with humans in a natural
way on a day-to-day basis. This could become the next goal for research
on Human-Robot Interaction (HRI) and could be an essential part of the
next generation of social robots.
For this challenge, we designed, collected and annotated a novel corpus
based on human-human interaction. This novel corpus builds on top of the
experience we gathered while organizing the OMG-Emotion Recognition
Challenge, making use of state-of-the-art frameworks for data collection
and annotation.
The One-Minute Gradual Empathy dataset (OMG-Empathy) contains multi-modal
recordings of different individuals discussing predefined
topics. One of them, the actor, shares a story about themselves while
the other, the listener, reacts to it emotionally. We annotated each
interaction based on the listener's own assessment of how they felt
while the interaction was taking place.
We encourage the participants to propose state-of-the-art solutions not
only based on deep, recurrent and self-organizing neural networks but
also traditional methods for feature representation and data processing.
We also emphasize that the use of contextual information, as well as
personalized solutions for empathy assessment, will be extremely
important for the development of competitive solutions.
II. Competition Tracks
We make available for the challenge a pre-defined set of training,
validation and testing samples. We separate our samples based on each
story: 4 stories for training, 1 for validation and 3 for testing. Each
story sample is composed of 10 interaction videos, one for each listener.
Although both tracks use the same training, validation and testing data
split, they measure different aspects of the self-assessed empathy:
The Personalized Empathy track, where each team must predict the empathy
of a specific person. We will evaluate the ability of proposed models to
learn the empathic behavior of each of the subjects over a newly
perceived story. We encourage the teams to develop models which take
into consideration the individual behavior of each subject in the
training data.
The Generalized Empathy track, where the teams must predict the general
behavior of all the participants over each story. We will measure the
ability of the proposed models to learn a general empathic measure
for each of the stories individually. We encourage the proposed models
to take into consideration the aggregated behavior of all the
participants for each story and to generalize this behavior in a newly
perceived story.
The training and validation samples will be given to the participants at
the beginning of the challenge together with all the associated labels.
The test set will be given to the participants without the associated
labels. The teams' predictions on the test set will be used to calculate
the final metrics of the challenge.
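As a concrete illustration of the data organisation described in this
section, the following minimal sketch enumerates the interaction videos per
subset. The story-to-subset assignment and the video naming pattern are
hypothetical placeholders, not the official identifiers distributed with
the data:

# Hypothetical sketch of the split described above: 8 stories, each
# recorded with 10 listeners, split 4/1/3 by story. The assignment of
# story numbers to subsets and the naming scheme are illustrative only.
SPLIT = {
    "train":      [1, 2, 3, 4],   # 4 stories for training
    "validation": [5],            # 1 story for validation
    "test":       [6, 7, 8],      # 3 stories for testing
}
LISTENERS = range(1, 11)          # 10 listeners (one video each) per story

def videos(subset):
    """Enumerate hypothetical interaction-video identifiers for one subset."""
    return [f"Subject_{listener}_Story_{story}"
            for story in SPLIT[subset]
            for listener in LISTENERS]

print(len(videos("train")), len(videos("validation")), len(videos("test")))
# -> 40 10 30 interaction videos in total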
III. How to Participate
To participate in the challenge, please send us an email to
barros@informatik.uni-hamburg.de with the title "OMG-Empathy Team
Registration". This e-mail must contain the following information:
Team Name
Team Members
Affiliation
Participating tracks
We split the corpus into three subsets: training, validation and
testing. The participants will receive the training and validation sets,
together with the associated annotations once they subscribe to the
challenge. The subscription will be done via e-mail. Each participant
team must consist of 1 to 5 participants and must agree to use the data
only for scientific purposes. Each team can choose to take part in one
or both tracks.
After the training period is over, the testing set will be released
without the associated annotations.
Each team must submit, via e-mail, their final predictions as a .csv
file for each video in the test set. Together with the final submission,
each team must send a short 2-4 page paper describing their solution,
published on arXiv, and a link to a GitHub page hosting their solution. If
a team fails to submit any of these items, their submission will be
invalidated. Each team can make up to 3 complete submissions per track.
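The call does not specify the internal layout of the prediction files. As a
purely illustrative sketch, assuming one self-assessed empathy (valence)
value per frame and a hypothetical single-column format, a per-video
submission file could be produced as follows:

import csv
import os

def write_predictions(video_id, predictions, out_dir="submission"):
    """Write one prediction .csv file for a single test video.

    `predictions` is assumed to hold one empathy (valence) value per frame;
    the single "valence" column and the file naming scheme are illustrative
    assumptions, not the official submission format.
    """
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{video_id}.csv")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["valence"])          # hypothetical header
        for value in predictions:
            writer.writerow([f"{value:.4f}"])
    return path

# Example: a constant neutral prediction for one hypothetical test video.
write_predictions("Subject_1_Story_6", [0.0] * 1500)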
IV. Important Dates
25th of September 2018 - Opening of the Challenge - Team registrations begin
1st of October 2018 - Training/validation data and annotation available
1st of December 2018 - Test data release
3rd of December 2018 - Final submission (Results and code)
5th of December 2018 - Final submission (Paper)
7th of December 2018 - Announcement of the winners
V. Organization
Pablo Barros, University of Hamburg, Germany
Nikhil Churamani, University of Cambridge, United Kingdom
Angelica Lim, Simon Fraser University, Canada
Stefan Wermter, Hamburg University, Germany
--
Dr.rer.nat. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Dear all,
We are very happy to announce the release of resource material related
to our research on affective computing and gesture recognition. This
resource covers hand gesture recognition and emotion processing
(auditory, visual and crossmodal), organized as three datasets (NCD,
GRIT, and OMG-Emotion), source code for the proposed neural network
solutions, pre-trained models, and ready-to-run demos.
The NAO Camera hand posture Database (NCD) was designed and recorded
using the camera of a NAO robot and contains four different hand
postures. A total of 2000 images were recorded. In each image, the hand
appears in different positions, not always centered, and sometimes with
some fingers occluded.
The Gesture Commands for Robot InTeraction (GRIT) dataset contains
recordings of six different subjects performing eight command gestures
for Human-Robot Interaction (HRI): Abort, Circle, Hello, No, Stop, Turn
Right, Turn Left, and Warn. We recorded a total of 543 sequences with a
varying number of frames in each one.
The One-Minute Gradual Emotion Corpus (OMG-Emotion) is composed of
YouTube videos which are about a minute in length and are annotated
with continuous emotional behavior in mind. The videos were selected
using a crawler that searches for specific keywords associated with
long-term emotional behaviors, such as "monologues", "auditions",
"dialogues" and "emotional scenes".
After the videos were selected, we created an algorithm to identify
whether each video had at least two different modalities that contribute
to the emotional categorization: facial expressions, language context,
and a reasonably noiseless environment. We selected a total of 420
videos, totaling around 10 hours of data.
Together with the datasets, we provide the source code for different
proposed neural models. These models are based on novel deep and
self-organizing neural networks which deploy different mechanisms
inspired by neuropsychological concepts. All of our models are formally
described in different high-impact peer-reviewed publications. We also
provide a ready-to-run demo for visual emotion recognition based on our
proposed models.
These resources are accessible through our GitHub link:
https://github.com/knowledgetechnologyuhh/EmotionRecognitionBarros .
We hope that with these resources we can contribute to the areas of
affective computing and gesture recognition and foster the development
of innovative solutions.
--
Dr.rer.nat. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Open Positions: 1 Ph.D. student and 2 Postdocs in the area of Computer Vision and Deep Learning at INRIA Sophia Antipolis, France
----------------------------------------------------------------------
Positions are offered within the frameworks of the prestigious grants
- ANR JCJC Grant *ENVISION*: "Computer Vision for Automated Holistic Analysis for Humans" and the
- INRIA - CAS grant *FER4HM* "Facial expression recognition with application in health monitoring"
and are ideally located in the heart of the French Riviera, inside the multi-cultural silicon valley of Europe.
Full announcements:
- Open Ph.D. Position in Computer Vision / Deep Learning (M/F) *ENVISION*: http://antitza.com/ANR_phd.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F) *FER4HM*: http://antitza.com/INRIA_CAS_postdoc.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F, advanced level) *ENVISION*: http://antitza.com/ANR_postdoc.pdf
To apply, please email a full application to Antitza Dantcheva (antitza.dantcheva@inria.fr), indicating the position in the e-mail subject line.
Dear all,
We just published a database of 500 Mooney face stimuli for visual
perception research. Please find the paper here:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0200106
Each face was tested in a cohort of healthy adults for face detection
difficulty and inversion effects. We also provide a comparison with Craig
Mooney's original stimulus set. The stimuli and accompanying data are
available from Figshare (https://figshare.com/account/articles/5783037)
under a CC BY license. Please feel free to use them for your research.
Caspar
RESEARCH SPECIALIST POSITION AT COGNITIVE NEUROSCIENCE LAB, University of Richmond
The Cognitive Neuroscience laboratory of Dr. Cindy Bukach is seeking a highly organized and energetic person to serve as a full-time research specialist. The lab conducts research on object and face recognition in cognitively intact and impaired individuals, using electrophysiology (EEG and ERP) and behavioral methods. The duties of the research specialist include:
* Conducting cognitive neuroscience research on human subjects using both behavioral and ERP methods, including programming, recruiting, testing and statistical analysis;
* Coordinating and supervising student research activities under limited supervision, ensuring adherence to safety and ethical regulations related to research with human subjects;
* Performing administrative duties such as database management, scheduling, hardware/software maintenance, website maintenance, equipment maintenance and general faculty support.
EDUCATION & EXPERIENCE: Bachelor’s degree or equivalent in psychology, neuroscience, cognitive science, computer science or related field
2 years' experience in a research lab (preferably in a cognitive or cognitive neuroscience laboratory using the event-related potential method)
PREFERRED QUALIFICATIONS (any of the following highly desired):
Prior research experience with Event-related potential method and data analysis
Advanced computer skills (Matlab, Python, Java, etc.)
For more information, please contact Cindy Bukach at cbukach@richmond.edu
Cindy M. Bukach, PhD
Chair, Department of Psychology
Associate Professor of Cognitive Neuroscience
MacEldin Trawick Endowed Professor of Psychology
209 Richmond Hall
28 Westhampton Way
University of Richmond, Virginia
23173
Phone: (804) 287-6830
Fax: (804) 287-1905
Call for Expressions of Interest: UNSW Scientia PhD Scholarship
"Understanding super-recognition to improve face identification systems"
The UNSW Forensic Psychology Group invites Expressions of Interest for a unique PhD scholarship opportunity.
UNSW Scientia PhD scholars are awarded $50k per year, comprising a tax-free living allowance of $40k per year for 4 years, and a support package of up to $10k per year to provide financial support for career development activities.
The project is targeted at PhD candidates who are qualified to honours and/or masters level in psychology, computer science or cognitive science. We are particularly interested to hear from applicants with work experience in research, government or industry.
The topic of this thesis is open to negotiation, but we hope that the work can contribute to our broader goal of understanding how the best available human and machine solutions to face identification can be combined to produce optimal systems.
For more details and to apply see the following links:
https://www.2025.unsw.edu.au/apply/scientia-phd-scholarships/understanding-…
http://forensic.psy.unsw.edu.au/joinus.html
https://www.2025.unsw.edu.au/apply/
Please direct informal inquiries to david.white@unsw.edu.au
Dear colleagues – please circulate widely!
We are delighted to announce the following positions at the Institute of Neuroscience & Psychology, University of Glasgow, Scotland. All positions are funded by the ERC project Computing the Face Syntax of Social Communication led by Dr. Rachael Jack.
2 X Postdoctoral Researcher (5 years) Ref: 021275
The post-holders will contribute to a large 5-year research project to model culture-specific dynamic social face signals with transference to social robotics. The post requires expert knowledge in all or a substantial subset of the following: vision science, psychophysics, social perception, cultural psychology, dynamic social face signalling, human-robot interaction, reverse correlation techniques, MATLAB programming, computational methods, and designing behavioral experiments.
Informal enquiries may be made to Dr Rachael Jack
Email: Rachael.Jack@glasgow.ac.uk
Apply online at: www.glasgow.ac.uk/jobs
Closing date: 12 June 2018
See also http://www.jobs.ac.uk/job/BJU668/research-assistant-associate/
The University has recently been awarded the Athena SWAN Institutional Bronze Award.
The University is committed to equality of opportunity in employment.
The University of Glasgow, charity number SC004401.
Dr. Rachael E. Jack, Ph.D.
Lecturer
Institute of Neuroscience & Psychology
School of Psychology
University of Glasgow
+44 (0) 141 5087
www.psy.gla.ac.uk/schools/psychology/staff/rachaeljack/