Apologies for cross-posting
***********************************************************************************
CBAR 2019: CALL FOR PAPERS
6th International Workshop on CONTEXT BASED AFFECT RECOGNITION
https://cbar2019.blogspot.com/
Submission Deadline: January 28th, 2019 (Extended)
***********************************************************************************
The 6th International Workshop on Context Based Affect Recognition (CBAR
2019) will be held in conjunction with FG 2019 in May 2019 in Lille
France – http://fg2019.org/
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
https://cbar2019.blogspot.com/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear all,
I am on the hunt for a face database that contains expressions of
embarrassment, guilt, flirtation, boredom, arrogance, and admiration, as
well as neutral faces and the basic emotions (e.g., happy, angry, sad),
for an EEG experiment I am running with a dissertation student of mine. If
anyone can point me in the right direction or has access to a set of faces
with these expressions, I would be most grateful!
Best wishes,
Nicola
--
Dear face researchers,
We want to create caricatures for some famous faces that we used in an
experiment last year. These include both men and women and a range of ages
from mid-20s to 60-70s (and the queen). All are Caucasian/White.
Does anyone have average male and female faces for approximately <30
years, 30-60 years, and 60+ years that they would be willing to share with
us?
We also have a complication: many, but not all, of the faces are showing
teeth, so we will probably need separate versions of the averages with
teeth showing versus not showing...
Thanks!
Rachel
--
“It is not our differences that divide us. It is our inability to
recognize, accept, and celebrate those differences.” - Audre Lorde
Dear members,
We are pleased to announce the publication of a first-of-its-kind database
of children's spontaneous facial expressions on video: the LIRIS Children
Spontaneous Facial Expression Video Database (LIRIS-CSE). This unique
database contains spontaneous, natural facial expressions of 12 children
in diverse settings, with variable recording scenarios, showing the six
universal (prototypic) emotional expressions: happiness, sadness, anger,
surprise, disgust, and fear. Children were recorded in a constraint-free
environment (no restrictions on head or hand movement, free seating, no
restrictions of any sort) while they watched specially built or selected
stimuli. This constraint-free environment allowed us to record children's
spontaneous, natural expressions as they occurred. The database has been
validated by 22 human raters. Details of the database are presented in the
following publication:
A novel database of Children's Spontaneous Facial Expressions (LIRIS-CSE).
Rizwan Ahmed Khan, Crenn Arthur, Alexandre Meyer, Saida Bouakaz. arXiv
(2018) preprint, arXiv:1812.01555. https://arxiv.org/abs/1812.01555
To request a database download (for research purposes only), visit the
project webpage at: https://childrenfacialexpression.projet.liris.cnrs.fr/
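For readers planning to work with the videos once downloaded, a minimal sketch of indexing such a corpus by expression label is below. The directory layout (one folder per expression, .mp4 clips) is a hypothetical assumption for illustration; the actual LIRIS-CSE structure should be checked on the project page after download.

```python
from collections import defaultdict
from pathlib import Path

# The six universal expressions listed in the announcement.
EXPRESSIONS = {"happiness", "sadness", "anger", "surprise", "disgust", "fear"}

def index_videos(root):
    """Map each expression label to the video files found under it.

    Assumes a hypothetical layout root/<expression>/<clip>.mp4;
    adapt the glob pattern and labels to the real database structure.
    """
    index = defaultdict(list)
    for path in Path(root).rglob("*.mp4"):
        label = path.parent.name.lower()
        if label in EXPRESSIONS:
            index[label].append(path)
    return dict(index)
```

Such an index makes it easy to sample balanced sets of clips per expression when preparing training or annotation batches.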
__________________
Best Regards,
Dr. Rizwan Ahmed KHAN
Associate Professor, Barrett Hodgson University, Karachi, Pakistan.
|| Researcher
- Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS),
Lyon, France.
<https://sites.google.com/site/drkhanrizwan17/>
<http://scholar.google.com/citations?user=T66djn8AAAAJ&hl=en>
<http://dblp.uni-trier.de/pers/hd/k/Khan:Rizwan_Ahmed.html>
<https://www.youtube.com/user/Rizwankhan2000/videos?view_as=subscriber>
*Help preserve the color of our world – Think before you print.*
CALL FOR PAPERS
Extended Deadline for the IEEE Transactions on Affective Computing
Special Issue on Automated Perception of Human Affect from Longitudinal
Behavioral Data
Website:
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/tacSpecialIssue201…
I. Aim and Scope
Research trends in artificial intelligence and the cognitive sciences are
still heavily based on computational models that attempt to imitate human
perception in various behavior categorization tasks. However, most research
in the field focuses on the instantaneous categorization and interpretation
of human affect, such as inferring the six basic emotions from face images,
or affective dimensions (valence-arousal), stress, and engagement from
multi-modal (e.g., video, audio, and autonomic physiology) data. This
diverges from the developmental aspect of emotional behavior perception and
learning, in which human behavior and expressions of affect evolve and
change over time. Moreover, these changes are present not only in the
temporal domain but also across different populations and, more
importantly, within each individual. This calls for a new perspective when
designing computational models for the analysis and interpretation of human
affective behavior: models that can adapt to different contexts and
individuals over time in a timely and efficient manner, and that
incorporate existing neurophysiological and psychological findings (prior
knowledge). The long-term goal is thus to create lifelong, personalized
learning and inference systems for the analysis and perception of human
affective behavior. Such systems would benefit from long-term contextual
information (including demographic and social aspects) as well as
individual characteristics. This, in turn, would allow building intelligent
agents (such as mobile and robot technologies) capable of adapting their
behavior continuously and online to target contexts and individuals.
This special issue invites contributions from computational neuroscience
and psychology, artificial intelligence, machine learning, and affective
computing that challenge and expand current research on the interpretation
and estimation of human affective behavior from longitudinal behavioral
data, i.e., single or multiple modalities captured over extended periods of
time, allowing efficient profiling of target behaviors and their inference
in terms of affect and other socio-cognitive dimensions. We welcome
contributions focusing on theoretical and modeling perspectives as well as
applications ranging across human-human, human-computer, and human-robot
interaction.
II. Potential Topics
Endowing computational models with the capability to perceive and
understand emotional behavior is an important and popular research topic.
Recent special issues of the IEEE Transactions on Affective Computing have
accordingly covered topics from emotion behavior analysis "in-the-wild" to
personality analysis. However, most of the research published through these
calls treats emotional behavior as an instantaneous event, relating mostly
to emotion recognition, and thus neglects the development of complex models
of emotional behavior. Our special issue will foster the development of the
field by focusing on excellent research on emotion models for long-term
behavior analysis.
The topics of interest for this special issue include, but are not limited
to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Lifelong affect analysis, perception, and interpretation
- Novel neural network models for affective processing
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
III. Submission
Prospective authors are invited to submit their manuscripts electronically,
adhering to the IEEE Transactions on Affective Computing guidelines (
https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5165369). Please
submit your papers through the online system (
https://mc.manuscriptcentral.com/taffc-cs) and be sure to select the
special issue: Special Issue/Section on Automated Perception of Human
Affect from Longitudinal Behavioral Data.
IV. IMPORTANT DATES:
Submissions Deadline: 15th of February 2019
V. Guest Editors
Pablo Barros, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
Ognjen (Oggi) Rudovic, Massachusetts Institute of Technology, United States
of America
Hatice Gunes, University of Cambridge, United Kingdom
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
http://www.pablobarros.net
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Long and Short Papers
https://icmi.acm.org/2019/index.php?id=cfp
Abstract Submission: May 1, 2019 (11:59pm PST)
Final Submission: May 7, 2019 (11:59pm PST)
***********************************************************************************
Call for Long and Short Papers
The 21st International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, China. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
We are keen to showcase novel input and output modalities and interactions to the ICMI community. ICMI 2019 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits and doctoral spotlight papers. The conference will also feature workshops and grand challenges. The proceedings of ICMI 2019 will be published by ACM as part of their series of International Conference Proceedings and Digital Library.
We also welcome conference papers from the behavioral and social sciences. These papers help us understand how technology can be used to increase our scientific knowledge, and may focus less on presenting technical or algorithmic novelty. For this reason, the "novelty" criterion used during ICMI 2019 review will be based on two sub-criteria (scientific novelty and technical novelty, as described below). Accepted papers at ICMI 2019 need to be novel on only one of these sub-criteria. In other words, a paper that is strong on scientific knowledge contribution but low on algorithmic novelty should be ranked similarly to a paper that is high on algorithmic novelty but low on knowledge discovery.
Scientific Novelty: Papers should bring some new knowledge to the scientific community. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper.
Technical Novelty: Papers reviewed with this sub-criterion should include novelty in their computational approach for recognizing, generating or modeling data. Examples include: novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated to a new usage of an existing approach.
Please see the Submission Guidelines for Authors for detailed submission instructions.
This year, ICMI welcomes contributions on our theme of multi-modal understanding of multi-party interactions. Additional topics of interest include but are not limited to:
Affective computing and interaction
Cognitive modeling and multimodal interaction
Gesture, touch and haptics
Healthcare, assistive technologies
Human communication dynamics
Human-robot/agent multimodal interaction
Interaction with smart environment
Machine learning for multimodal interaction
Mobile multimodal systems
Multimodal behavior generation
Multimodal datasets and validation
Multimodal dialogue modeling
Multimodal fusion and representation
Multimodal interactive applications
Speech behaviors in social interaction
System components and multimodal platforms
Visual behaviors in social interaction
Virtual/augmented reality and multimodal interaction
Important Dates
Abstract submission (must include title, authors, abstract): May 1, 2019 (11:59pm PST)
Final submission (no updates allowed after May 7): May 7, 2019 (11:59pm PST)
Rebuttal due: June 25, 2019 (11:59pm PST)
Paper notification: July 7, 2019
Paper camera-ready: July 15, 2019 (11:59pm PST)
Main conference: October 14-18, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Multimodal Grand Challenges
https://icmi.acm.org/2019/index.php?id=cfc
Submission Deadline: January 14, 2019
***********************************************************************************
Call for Multimodal Grand Challenges
Proposals due January 14, 2019
Proposal notification February 1, 2019
Paper camera-ready August 12, 2019
We are calling for teams to propose one or more ICMI Multimodal Grand Challenges.
The International Conference on Multimodal Interaction (ICMI) is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. Developing systems that can robustly understand human-human communication or respond to human input requires identifying the best algorithms and their failure modes. In fields such as computer vision, speech recognition, computational (para-) linguistics and physiological signal processing, for example, the availability of datasets and common tasks have led to great progress. We invite the ICMI community to collectively define and tackle the scientific Grand Challenges in our domain for the next 5 years. ICMI Multimodal Grand Challenges aim to inspire new ideas in the ICMI community and create momentum for future collaborative work. Analysis, synthesis, and interactive tasks are all possible. Challenge papers will be indexed in the proceedings of ICMI.
The grand challenge sessions are still to be confirmed. We invite organizers from various fields related to multimodal interaction to propose and run Grand Challenge events. We are looking for exciting and stimulating challenges including but not limited to the following categories:
Dataset-driven challenge. This challenge will provide a dataset that is exemplary of the complexities of current and future multimodal problems, and one or more multimodal tasks whose performance can be objectively measured and compared in rigorous conditions. Participants in the Challenge will evaluate their methods against the challenge data in order to identify areas of strengths and weakness.
Use-case challenge. This challenge will provide an interactive problem system (e.g. dialog-based or non-verbal-based) and the associated resources, which can allow people to participate through the integration of specific modules or alternative full systems. Proposers should also establish systematic evaluation procedures.
Health challenge. This challenge will provide a dataset that is exemplary of a health-related task whose analysis, diagnosis, treatment, or prevention can be aided by multimodal interaction. The challenge should focus on exploring the benefits of multimodal (audio, visual, physiological, etc.) solutions for the stated task.
Policy challenge. Legal, ethical, and privacy issues of Multimodal Interaction systems in the age of AI. The challenge could revolve around opinion papers, panels, discussions, etc.
Prospective organizers should submit a five-page maximum proposal containing the following information:
Title
Abstract appropriate for possible Web promotion of the Challenge
Distinctive topics to be addressed and specific goals
Detailed description of the Challenge and its relevance to multimodal interaction
Length (full day or half day)
Plan for soliciting participation
Description of how submissions (challenge’s submissions and papers) will be evaluated, and a list of proposed reviewers
Proposed schedule for releasing datasets (if applicable) and/or systems (if applicable) and receiving submissions
Short biography of the organizers (preferably from multiple institutions)
Funding source (if any) that supports or could support the challenge organization
Draft call for papers; affiliations and email address of the organisers; summary of the Grand Challenge; list of potential Technical Program Committee members and their affiliations, important dates
Proposals will be evaluated based on originality, ambition, feasibility, and implementation plan. A challenge whose dataset(s) or system(s) have pilot results ensuring their representativeness and suitability for the proposed task will be given preference for acceptance; in such cases, an additional one-page description must be attached. Continuations of or variants on the 2018 challenges are welcome, though we ask such submissions to report the number of participants who attended the previous year and to describe what will change from the previous year.
The ICMI organizers will offer support with basic logistics, including rooms and equipment to run the challenge; coffee breaks can be offered if synchronised with the main conference.
Important Dates and Contact Details
Proposals should be emailed to both ICMI 2019 Multimodal Grand Challenge Chairs, Prof. Anna Esposito and Dr. Michel Valstar, via grandchallenge.ICMI19@gmail.com. Prospective organizers are also encouraged to contact the co-chairs with any questions. Proposals are due by January 14th, 2019. Notifications will be sent on February 1st, 2019.
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/http://ri.cmu.edu/personal-pages/ZakiaHammal/