Apologies for cross-posting
***********************************************************************************
CBAR 2019: CALL FOR PAPERS
6th International Workshop on CONTEXT BASED AFFECT RECOGNITION
https://cbar2019.blogspot.com/
Submission Deadline: January 28th, 2019 (Extended)
***********************************************************************************
The 6th International Workshop on Context Based Affect Recognition (CBAR
2019) will be held in conjunction with FG 2019 in May 2019 in Lille
France – http://fg2019.org/
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
https://cbar2019.blogspot.com/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear members,
We are pleased to announce the publication of a first-of-its-kind database
of children's spontaneous facial expressions: the LIRIS Children
Spontaneous Facial Expression Video Database (LIRIS-CSE). This unique
database contains spontaneous/natural facial expressions of 12 children in
diverse settings, with variable recording scenarios, showing the six
universal or prototypic emotional expressions (happiness, sadness, anger,
surprise, disgust, and fear). The children were recorded in a
constraint-free environment (no restriction on head movement, no
restriction on hand movement, a free sitting setting, no restriction of any
sort) while they watched specially built or selected stimuli. This
constraint-free environment allowed us to record children's
spontaneous/natural expressions as they occurred. The database has been
validated by 22 human raters. Details of the database are presented in the
following publication:
A novel database of Children's Spontaneous Facial Expressions (LIRIS-CSE).
Rizwan Ahmed Khan, Arthur Crenn, Alexandre Meyer, Saida Bouakaz. arXiv
preprint arXiv:1812.01555 (2018). https://arxiv.org/abs/1812.01555
To request a database download (for research purposes only), visit the
project webpage at: https://childrenfacialexpression.projet.liris.cnrs.fr/
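For researchers planning experiments with the database, the sketch below shows one minimal way to iterate over labeled clips with OpenCV. The directory layout and label file it assumes are hypothetical, chosen only for illustration; the actual organization of the LIRIS-CSE download may differ, so please consult the documentation shipped with the database.

```python
# Minimal sketch of iterating over LIRIS-CSE-style video clips with OpenCV.
# NOTE: the directory layout and labels.csv format below are ASSUMPTIONS made
# for illustration; check the database documentation for the real structure.
import csv
from pathlib import Path

import cv2  # pip install opencv-python

DATA_DIR = Path("LIRIS-CSE")           # hypothetical root of the download
LABEL_FILE = DATA_DIR / "labels.csv"   # hypothetical format: clip_name,emotion

def iter_labeled_frames():
    """Yield (frame, emotion) pairs for every frame of every labeled clip."""
    with open(LABEL_FILE, newline="") as f:
        for clip_name, emotion in csv.reader(f):
            cap = cv2.VideoCapture(str(DATA_DIR / clip_name))
            while True:
                ok, frame = cap.read()
                if not ok:            # end of clip
                    break
                yield frame, emotion
            cap.release()

for frame, emotion in iter_labeled_frames():
    pass  # e.g., detect the face and train/evaluate an expression classifier
```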
__________________
Best Regards,
Dr. Rizwan Ahmed KHAN
Associate Professor, Barrett Hodgson University, Karachi, Pakistan.
Researcher, Laboratoire d'InfoRmatique en Image et Systèmes d'information
(LIRIS), Lyon, France.
<https://sites.google.com/site/drkhanrizwan17/>
<http://scholar.google.com/citations?user=T66djn8AAAAJ&hl=en>
<http://dblp.uni-trier.de/pers/hd/k/Khan:Rizwan_Ahmed.html>
<https://www.youtube.com/user/Rizwankhan2000/videos?view_as=subscriber>
CALL FOR PAPERS
Extended Deadline for the IEEE Transactions on Affective Computing
Special Issue on Automated Perception of Human Affect from Longitudinal
Behavioral Data
Website:
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/tacSpecialIssue201…
I. Aim and Scope
Research trends within artificial intelligence and cognitive sciences are
still heavily based on computational models that attempt to imitate human
perception in various behavior categorization tasks. However, most of the
research in the field focuses on instantaneous categorization and
interpretation of human affect, such as the inference of six basic emotions
from face images, and/or affective dimensions (valence-arousal), stress and
engagement from multi-modal (e.g., video, audio, and autonomic physiology)
data. This diverges from the developmental aspect of emotional behavior
perception and learning, where human behavior and expressions of affect
evolve and change over time. Moreover, these changes are present not only
in the temporal domain but also within different populations and more
importantly, within each individual. This calls for a new perspective when
designing computational models for the analysis and interpretation of human
affective behaviors: models that can adapt to different contexts and
individuals over time, in a timely and efficient manner, while also
incorporating existing neurophysiological and psychological findings (prior
knowledge). Thus, the long-term goal is to create life-long personalized
learning and inference systems for analysis and perception of human
affective behaviors. Such systems would benefit from long-term contextual
information (including demographic and social aspects) as well as
individual characteristics. This, in turn, would allow building intelligent
agents (such as mobile and robot technologies) capable of adapting their
behavior in a continuous and on-line manner to the target contexts and
individuals.
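As a rough illustration of the continuous, on-line, per-individual adaptation described above, the sketch below incrementally updates a generic affect classifier as labeled observations of a single person arrive. The features, labels, and model are placeholder assumptions for illustration, not a reference implementation of any system mentioned in this call; in practice the features would come from video, audio, or physiological signals.

```python
# Sketch: a population-level affect classifier that keeps adapting on-line to
# one individual via incremental updates. All data here are random
# placeholders standing in for real multimodal features and affect labels.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
CLASSES = np.array([0, 1, 2])  # e.g., negative / neutral / positive valence

# 1) Generic model, pre-trained on (placeholder) population data.
X_pop = rng.normal(size=(500, 16))
y_pop = rng.integers(0, 3, size=500)
model = SGDClassifier().fit(X_pop, y_pop)

# 2) On-line personalization: one small update per incoming observation,
#    so the model drifts toward the target individual over time.
for t in range(100):
    x_t = rng.normal(size=(1, 16))      # features of one new observation
    y_t = rng.integers(0, 3, size=1)    # its affect label (e.g., self-report)
    model.partial_fit(x_t, y_t, classes=CLASSES)
```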
This special issue invites contributions from computational neuroscience
and psychology, artificial intelligence, machine learning, and affective
computing that challenge and expand current research on the interpretation
and estimation of human affective behavior from longitudinal behavioral
data, i.e., single or multiple modalities captured over extended periods of
time, allowing efficient profiling of target behaviors and their inference
in terms of affect and other socio-cognitive dimensions. We invite
contributions focusing on theoretical and modeling perspectives, as well as
applications ranging across human-human, human-computer, and human-robot
interaction.
II. Potential Topics
Endowing computational models with the capability to perceive and
understand emotional behavior is an important and popular research goal.
That is why recent special issues of the IEEE Transactions on Affective
Computing have covered topics ranging from emotion behavior analysis
“in-the-wild” to personality analysis. However, most of the research
published through these calls treats emotional behavior as an instantaneous
event, relating mostly to emotion recognition, and thus neglects the
development of complex models of emotional behavior. Our special issue will
foster the development of the field by focusing on excellent research on
emotion models for long-term behavior analysis.
The topics of interest for this special issue include, but are not limited
to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Lifelong affect analysis, perception, and interpretation
- Novel neural network models for affective processing
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
III. Submission
Prospective authors are invited to submit their manuscripts electronically,
adhering to the IEEE Transactions on Affective Computing guidelines (
https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5165369). Please
submit your papers through the online system (
https://mc.manuscriptcentral.com/taffc-cs) and be sure to select the
special issue: Special Issue/Section on Automated Perception of Human
Affect from Longitudinal Behavioral Data.
IV. IMPORTANT DATES:
Submission deadline: 15th of February 2019
V. Guest Editors
Pablo Barros, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
Ognjen (Oggi) Rudovic, Massachusetts Institute of Technology, United States
of America
Hatice Gunes, University of Cambridge, United Kingdom
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
http://www.pablobarros.net
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Long and Short Papers
https://icmi.acm.org/2019/index.php?id=cfp
Abstract Submission: May 1, 2019 (11:59pm PST)
Final Submission: May 7, 2019 (11:59pm PST)
***********************************************************************************
Call for Long and Short Papers
The 21st International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, China. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
We are keen to showcase novel input and output modalities and interactions to the ICMI community. ICMI 2019 will feature a single-track main conference that includes keynote speakers, technical full and short papers (with oral and poster presentations), demonstrations, exhibits, and doctoral spotlight papers. The conference will also feature workshops and grand challenges. The proceedings of ICMI 2019 will be published by ACM as part of its International Conference Proceedings Series and included in the ACM Digital Library.
We also want to welcome conference papers from the behavioral and social sciences. These papers allow us to understand how technology can be used to increase our scientific knowledge and may focus less on presenting technical or algorithmic novelty. For this reason, the "novelty" criterion used during the ICMI 2019 review will be based on two sub-criteria (scientific novelty and technical novelty, as described below). Accepted papers at ICMI 2019 need only be novel with respect to one of these sub-criteria. In other words, a paper that is strong on scientific knowledge contribution but low on algorithmic novelty should be ranked similarly to a paper that is high on algorithmic novelty but low on knowledge discovery.
Scientific Novelty: Papers should bring some new knowledge to the scientific community. Examples include discovering new behavioral markers that are predictive of mental health, or showing how new behavioral patterns relate to children's interactions during learning. It is the authors' responsibility to perform a proper literature review and clearly discuss the novelty of the scientific discoveries made in their paper.
Technical Novelty: Papers reviewed with this sub-criterion should include novelty in their computational approach for recognizing, generating or modeling data. Examples include novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with a new usage of an existing approach.
Please see the Submission Guidelines for Authors for detailed submission instructions.
This year, ICMI welcomes contributions on our theme of multi-modal understanding of multi-party interactions. Additional topics of interest include but are not limited to:
Affective computing and interaction
Cognitive modeling and multimodal interaction
Gesture, touch and haptics
Healthcare, assistive technologies
Human communication dynamics
Human-robot/agent multimodal interaction
Interaction with smart environment
Machine learning for multimodal interaction
Mobile multimodal systems
Multimodal behavior generation
Multimodal datasets and validation
Multimodal dialogue modeling
Multimodal fusion and representation
Multimodal interactive applications
Speech behaviors in social interaction
System components and multimodal platforms
Visual behaviors in social interaction
Virtual/augmented reality and multimodal interaction
Important Dates
Abstract submission (must include title, authors, abstract): May 1, 2019 (11:59pm PST)
Final submission (no updates allowed after May 7): May 7, 2019 (11:59pm PST)
Rebuttal due: June 25, 2019 (11:59pm PST)
Paper notification: July 7, 2019
Camera-ready papers: July 15, 2019 (11:59pm PST)
Main conference: October 14-18, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Multimodal Grand Challenges
https://icmi.acm.org/2019/index.php?id=cfc
Submission Deadline: January 14, 2019
***********************************************************************************
Call for Multimodal Grand Challenges
Proposals due January 14, 2019
Proposal notification February 1, 2019
Paper camera-ready August 12, 2019
We are calling for teams to propose one or more ICMI Multimodal Grand Challenges.
The International Conference on Multimodal Interaction (ICMI) is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. Developing systems that can robustly understand human-human communication or respond to human input requires identifying the best algorithms and their failure modes. In fields such as computer vision, speech recognition, computational (para-) linguistics and physiological signal processing, for example, the availability of datasets and common tasks has led to great progress. We invite the ICMI community to collectively define and tackle the scientific Grand Challenges in our domain for the next 5 years. ICMI Multimodal Grand Challenges aim to inspire new ideas in the ICMI community and create momentum for future collaborative work. Analysis, synthesis, and interactive tasks are all possible. Challenge papers will be indexed in the proceedings of ICMI.
The grand challenge sessions are still to be confirmed. We invite organizers from various fields related to multimodal interaction to propose and run Grand Challenge events. We are looking for exciting and stimulating challenges including but not limited to the following categories:
Dataset-driven challenge. This challenge will provide a dataset that is exemplary of the complexities of current and future multimodal problems, and one or more multimodal tasks whose performance can be objectively measured and compared in rigorous conditions. Participants in the challenge will evaluate their methods against the challenge data in order to identify areas of strength and weakness.
Use-case challenge. This challenge will provide an interactive problem system (e.g., dialogue-based or non-verbal) and the associated resources, which allow people to participate through the integration of specific modules or alternative full systems. Proposers should also establish systematic evaluation procedures.
Health challenge. This challenge will provide a dataset that is exemplary of a health-related task whose analysis, diagnosis, treatment, or prevention can be aided by multimodal interaction. The challenge should focus on exploring the benefits of multimodal (audio, visual, physiological, etc.) solutions for the stated task.
Policy challenge. Legal, ethical, and privacy issues of Multimodal Interaction systems in the age of AI. The challenge could revolve around opinion papers, panels, discussions, etc.
Prospective organizers should submit a five-page maximum proposal containing the following information:
Title
Abstract appropriate for possible Web promotion of the Challenge
Distinctive topics to be addressed and specific goals
Detailed description of the Challenge and its relevance to multimodal interaction
Length (full day or half day)
Plan for soliciting participation
Description of how submissions (challenge’s submissions and papers) will be evaluated, and a list of proposed reviewers
Proposed schedule for releasing datasets (if applicable) and/or systems (if applicable) and receiving submissions
Short biography of the organizers (preferably from multiple institutions)
Funding source (if any) that supports or could support the challenge organization
Draft call for papers; affiliations and email addresses of the organisers; summary of the Grand Challenge; list of potential Technical Program Committee members and their affiliations; important dates
Proposals will be evaluated based on originality, ambition, feasibility, and implementation plan. A challenge whose dataset(s) or system(s) have pilot results ensuring their representativeness and suitability for the proposed task will be given preference for acceptance; in such cases, an additional one-page description must be attached. Continuations of or variants on the 2018 challenges are welcome, though we ask that such submissions report the number of participants who attended during the previous year and describe what changes will be made from the previous year.
The ICMI organizers will offer support with basic logistics, including rooms and equipment to run the event; coffee breaks can be offered if synchronized with the main conference.
Important Dates and Contact Details
Proposals should be emailed to both ICMI 2019 Multimodal Grand Challenge Chairs, Prof Anna Esposito and Dr. Michel Valstar via grandchallenge.ICMI19(a)gmail.com. Prospective organizers are also encouraged to contact the co-chairs if they have any questions. Proposals are due by January 14th, 2019. Notifications will be sent on February 1st, 2019.
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Workshops
https://icmi.acm.org/2019/index.php?id=CfW
Submission Deadline: Saturday, February 16, 2019
***********************************************************************************
Call for Workshops
The International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, Jiangsu, China, during October 14-18, 2019. ICMI is the premier international conference for multidisciplinary research on multimodal human-human and human-computer interaction analysis, interface design, and system development. The theme of the ICMI 2019 conference is Multi-modal Understanding of Multi-party Interactions. ICMI has developed a tradition of hosting workshops in conjunction with the main conference to foster discourse on new research, technologies, social science models and applications. Examples of recent workshops include:
Multi-sensorial Approaches to Human-Food Interaction
Group Interaction Frontiers in Technology
Modeling Cognitive Processes from Multimodal Data
Human-Habitat for Health
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
Investigating Social Interactions with Artificial Agents
Child Computer Interaction
Multimodal Interaction for Education
We are seeking workshop proposals on emerging research areas related to the main conference topics, and those that focus on multi-disciplinary research. We would also strongly encourage workshops that will include a diverse set of keynote speakers (factors to consider include: gender, ethnic background, institutions, years of experience, geography, etc.).
The format, style, and content of accepted workshops are under the control of the workshop organizers. Workshops may be a half day or a full day in duration. Workshop organizers will be expected to manage the workshop content, be present to moderate the discussions and panels, invite experts in the domain, and maintain a website for the workshop. Workshop papers will be indexed by ACM.
Submission
Prospective workshop organizers are invited to submit proposals in PDF format (Max. 3 pages). Please email proposals to the workshop chairs: Hongwei Ding (hwding(a)sjtu.edu.cn), Carlos Busso (busso(a)utdallas.edu) and Tadas Baltrusaitis (tadyla(a)gmail.com). The proposal should include the following:
Workshop title
List of organizers including affiliation, email address, and short biographies
Workshop motivation, expected outcomes and impact
Tentative list of keynote speakers
Workshop format (by invitation only, call for papers, etc.), anticipated number of talks/posters, workshop duration (half-day or full-day) including tentative program
Planned advertisement means, website hosting, and estimated participation
Paper review procedure (single/double-blind, internal/external, solicited/invited-only, pool of reviewers, etc.)
Paper submission and acceptance deadlines
Special space and equipment requests, if any
Important Dates:
Workshop proposal submission: Saturday, February 16, 2019
Notification of acceptance: Saturday, March 2, 2019
Workshop Date: Monday, October 14, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear Colleagues,
We would like to invite you to contribute a chapter for the upcoming volume
entitled “Neural and Machine Learning for Emotion and Empathy Recognition:
Experiences from the OMG-Challenges” to be published by the Springer Series
on Competitions in Machine Learning. Our book will be available by fall
2019.
Website: https://easychair.org/cfp/OMGBook2019
Short description of the volume:
Emotional expression perception and categorization are extremely popular
topics in the affective computing community. However, the inclusion of
emotions in an agent's decision-making process is not considered in most of
the research in this field. Treating the recognition of emotion expressions
as the final goal, although necessary, reduces the usability of such
solutions in more complex scenarios. To create a general affective model to
be used as a modulator for learning different cognitive tasks, such as
modeling intrinsic motivation, creativity, dialog processing, grounded
learning, and human-level communication, instantaneous emotion perception
cannot be the pivotal focus.
This book aims to present recent contributions to multimodal emotion
recognition and empathy prediction that take into consideration the
long-term development of affective concepts. In this regard, we provide
access to two datasets: the OMG-Emotion Behavior Recognition and
OMG-Empathy Prediction datasets. These datasets were designed, collected,
and formalized to be used in the OMG-Emotion Recognition Challenge and the
OMG-Empathy Prediction Challenge, respectively. All participants of our
challenges are invited to submit their contributions to our book. We also
invite interested authors to use our datasets in the development of
inspiring and innovative research on affective computing. By compiling
these solutions and editing this book, we hope to inspire further research
in affective and cognitive computing over longer timescales.
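Because both datasets involve continuous traces collected over time (e.g., valence/arousal or moment-to-moment empathy ratings), predictions are typically compared to annotations with the concordance correlation coefficient (CCC). The NumPy sketch below is a generic illustration of that metric, not the official evaluation script of either challenge; the challenge rules define the exact protocol.

```python
# Generic sketch of Lin's concordance correlation coefficient (CCC), a common
# score for continuous affect traces. Not an official challenge script.
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """CCC between two 1-D sequences; 1.0 means perfect concordance."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2.0 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

# Example: a prediction that tracks the annotation with a small offset.
t = np.linspace(0.0, 10.0, 200)
annotation = np.sin(t)
prediction = np.sin(t) + 0.1
print(f"CCC = {ccc(annotation, prediction):.3f}")  # close to, but below, 1.0
```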
TOPICS OF INTEREST
The topics of interest for this call for chapters include, but are not
limited to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Novel neural network models for affective processing
- Lifelong affect analysis, perception, and interpretation
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
- New theories and findings on empathy modeling
- Multimodal processing of empathetic and social signals
- Novel neural network models for empathy understanding
- Lifelong models for empathetic interactions
- Empathetic Human-Robot-Interaction Scenarios
- New neuroscientific and psychological findings on empathy representation
- Multi-agent communication for empathetic interactions
- Empathy as a decision-making modulator
- Personalized systems for empathy prediction
Each contributed chapter is expected to present a novel research study, a
comparative study, or a survey of the literature.
We also expect each contributed chapter to make use of at least one of our
datasets: OMG-Emotion or OMG-Empathy.
SUBMISSIONS
All submissions should be done via EasyChair:
https://easychair.org/cfp/OMGBook2019
Original artwork and a signed copyright release form will be required for
all accepted chapters. For author instructions, please visit:
https://www.springer.com/us/authors-editors/book-authors-editors/resources-…
ACCESS TO THE DATASETS
- OMG-EMOTION -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_emotion.html
- OMG-EMPATHY -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy.html
To have access to the datasets, please send an e-mail to:
barros(a)informatik.uni-hamburg.de
IMPORTANT DATES:
- Submission of abstracts: 08th of February 2019
- Notification of initial editorial decisions: 15th of February 2019
- Submissions of full-length chapters: 29th of March 2019
- Notification of final editorial decisions 17th of May 2019
- Submission of revised chapters: 07th of June, 2019
--
Best regards,
Pablo Barros
http://www.pablobarros.net