Dear colleagues,
Calling all Psychologists, Neuroscientists & Philosophers for the 20th ACM International Conference on Intelligent Virtual Agents, 9th-12th September 2020 in Glasgow, UK.
Any questions, get in touch!
Rachael E. Jack
IVA 2020 General Co-Chair
Reader (Associate Professor)
Institute of Neuroscience & Psychology
School of Psychology
University of Glasgow
Scotland, G12 8QB
+44 (0) 141 330 5087
Call for Submissions
ACM 20th International Conference on Intelligent Virtual Agents
9th-12th September 2020, University of Glasgow, Scotland
https://iva2020.gla.ac.uk/
SUBMISSION DATES
Papers: Sunday 5th April 2020 (23:59 UTC-12)
Extended Abstracts: Sunday 24th May 2020 (23:59 UTC-12)
---------------------------------------------
2020 Intelligent Virtual Agents
---------------------------------------------
The Intelligent Virtual Agents (IVA) conference is the premier
international event for interdisciplinary research on the design,
application, and evaluation of Intelligent Virtual Agents (IVAs), with a
specific focus on the ability to interact socially.
*** IVA 2020 will be the 20th annual conference, held in Glasgow, Scotland,
9th-12th September 2020. ***
IVAs are interactive characters that exhibit human-like qualities including
communicating using natural human modalities such as facial expressions,
speech and gesture. IVAs are also capable of real-time perception,
cognition, emotion and action that allows them to participate in dynamic
social situations.
IVA 2020 aims to showcase cutting-edge research on the design, application,
and evaluation of IVAs, as well as the basic research underlying the
technology that supports human-agent interaction such as social perception,
dialog modeling, and social behavior planning. We also welcome submissions
on central theoretical issues, uses of virtual agents in psychological
research and showcases of working applications.
IVA 2020 offers two submission tracks: Papers (8 pages, including
references) and Extended Abstracts (3 pages, including references).
All submissions will be double-blind peer-reviewed by external expert
reviewers. All accepted submissions will be published in the proceedings.
Accepted papers will be presented as a talk. Accepted extended abstracts
will be presented as a talk or a poster, depending on the outcome of the
review process.
SCOPE AND LIST OF TOPICS
IVA invites submissions on a broad range of topics, including but not
limited to:
Agent design and modeling of:
* Cognition
* Emotion (including personality and cultural differences)
* Socially communicative behavior (e.g., of emotions, personality traits)
* Conversational behavior
* Social perception
* Machine learning approaches to agent modeling
* Approaches to realizing adaptive behavior
* Models informed by theoretical and empirical research from psychology
Multimodal interaction:
* Verbal and nonverbal behavior coordination
* Face-to-face communication skills
* Engagement
* Managing co-presence and interpersonal relation
* Multi-party interaction
* Data driven multimodal modeling
Social agent architectures:
* Design criteria and design methodologies
* Engineering of real-time human-agent interaction
* Standards / measures to support interoperability
* Portability and reuse
* Specialized tools, toolkits and tool chains
Evaluation methods and studies:
* Evaluation methodologies and user studies
* Ethical considerations and societal impact
* Applicable lessons across fields (e.g. between robotics and virtual agents)
* Social agents as a means to study and model human behavior
Applications:
* Applications in education, skills training, health, counseling, games, art, etc.
* Virtual agents in games and simulations
* Social agents as tools in psychology
* Migration between platforms
SPECIAL IVA 2020 TOPIC:
Exploring Connections between Computer Science, Robotics and Psychology.
Across computer science, robotics, psychology and the commercial world,
there has been a rapid growth in the research, development and application
of artificial social agents. Computer scientists and roboticists are
researching graphics-based and physical social agents. Psychologists and
neuroscientists are using these artifacts in laboratory experiments in
order to study our interaction with them as well as to use them as
confederates in the study of human behavior. Companies are actively
developing similar technologies. However, these communities too rarely
interact, even though there are close synergies between psychology (the
study of human behavior) and artificial social agents (the engineering of
human behavior). The design of an artificial social agent involves the
formalization of theories and data about human behavior, integration of
resulting models into an agent and evaluation of its behavior, leveraging
techniques derived from psychology. Each of these steps can in return be of
fundamental value to psychological research. For example, formalization and
integration force one to specify theoretical constructs concretely and
thereby expose hidden assumptions and gaps in theories. IVA 2020’s Special
Topic invites researchers and developers across
disciplines to share their work on the challenges and uses of social agent
research, in the hope of furthering trans-disciplinary collaboration.
INSTRUCTIONS FOR AUTHORS
Paper submissions should be anonymous and prepared in the "ACM Standard"
format, more specifically the "SigConf" format.
* The LaTeX template for the "ACM Standard"/"SigConf" format can be found in the official 2017 ACM Master article template package. Please use the most recent version (1.65), available at: https://www.acm.org/publications/proceedings-template
* The "ACM Standard" Microsoft Word template is currently not part of the downloadable package, as the ACM is revising it to improve the accessibility of resulting PDF documents. Please use the "Interim Word Template" instead: https://www.acm.org/binaries/content/assets/publications/word_style/interim…
* IVA 2020 accepts two types of submissions:
* Full papers: 8 pages (including references)
* Extended abstracts: 3 pages (including references)
* All papers must be submitted in PDF-format.
Important Dates:
Papers
---> Submission Deadline: Sunday 5th April 2020 (23:59 UTC-12)
---> Notification of acceptance: 15th May, 2020
Extended Abstracts
---> Submission Deadline: Sunday 24th May 2020 (23:59 UTC-12)
---> Notification of acceptance: 21st June, 2020
CONFERENCE ORGANIZERS
Conference Chairs:
* Stacy Marsella, University of Glasgow
* Rachael Jack, University of Glasgow
Program Chairs:
* Hannes Vilhjalmsson, Reykjavik University
* Pedro Sequeira, SRI International
* Emily Cross, University of Glasgow
Workshop/Demonstration Organization Chairs:
* Lucile Callebert, University of Glasgow
* Florian Pecune, University of Glasgow
* Contact: workshopsdemos.iva2020@gmail.com
Web Site:
* Amol Deshmukh, University of Glasgow
Doctoral Consortium:
* Jonathan Gratch, ICT/USC
Treasurer:
* Catherine Pelachaud, CNRS
Publicity Chair:
* Mary Ellen Foster, University of Glasgow
Volunteer Coordinator:
* Carolyn Saund, University of Glasgow
We would like to let the community members know about the Consortium of
European Research on Emotion (CERE) 2020 Conference in Granada, Spain,
June 5-6. The call for abstracts is open until 15th January 2020.
http://www.cere-emotionconferences.org/
Manuel J. Ruiz and Inmaculada Valor
--
Manuel J. Ruiz, PhD
Departamento de Psicología y Antropología
Personalidad, Evaluación y Tratamiento Psicológico
Universidad de Extremadura
Despacho 2.22 (Edif. Principal)
Facultad de Educación
Avda. de Elvas S/N
06006 Badajoz (Spain)
Email: mjrm@unex.es
ORCID: 0000-0002-1286-6624
Dear colleagues,
We are organizing a special session on “Computer Vision for Automatic Human
Health Monitoring” in conjunction with the 15th IEEE Conference on
Automatic Face and Gesture Recognition, to be held 18th-22nd May
2020 in Buenos Aires, Argentina. Kindly find the related call for papers
below.
*Important dates*
Paper submission deadline: 10th January 2020 – midnight PST (firm deadline,
no further extension)
Paper notification deadline: 10th February 2020
Final camera-ready papers: 28 February 2020
*Submission instructions* can be found at
https://fg2020.org/instructions-of-paper-submission-for-review/.
*For submission* log into
https://cmt3.research.microsoft.com/FG2020/Submission/Index, proceed to
“create new submission”. Select “special session track and subject area” as
“Special session: Computer Vision for Automatic Human Health Monitoring”.
Accepted papers will be included in the FG2020 proceedings and will appear in
the IEEE Xplore digital library.
Please feel free to contact us for any further details. Kindly disseminate
this email to others who might be interested.
We look forward to your contributions.
Antitza Dantcheva (INRIA, France)
Abhijit Das (USC, USA)
François Brémond (INRIA, France)
Xilin Chen (CAS, China)
Hu Han (CAS, China)
--------------------------------------------------------------------------------------------
*Call for paper for FG 2020 special session *
*on *
*COMPUTER VISION FOR AUTOMATIC HUMAN HEALTH MONITORING*
-----------------------------------------------------------------------------------
Automatic human health monitoring based on computer vision has gained rapid
scientific attention in the past decade, fueled by a large number of research
articles and commercial systems based on features extracted from the face
and gestures. Consequently, researchers from the computer vision and
medical science communities have paid significant attention to this area,
with goals ranging from patient analysis and monitoring to diagnostics. The
goal of this special session is to bring together researchers and
practitioners working at the intersection of computer vision and medical
science, and to address a wide range of theoretical and practical issues
related to real-life healthcare systems.
Topics of interest include, but are not limited to:
- Health monitoring based on face analysis
- Health monitoring based on gesture analysis
- Health monitoring based on corporeal visual features
- Depression analysis based on visual features
- Face analytics for human behavior understanding
- Anxiety diagnosis based on face and gesture
- Physiological measurement employing face analytics
- Databases on health monitoring, e.g., depression analysis
- Augmentative and alternative communication
- Human-robot interaction
- Home healthcare
- Technology for cognition
- Automatic emotional hearing and understanding
- Visual attention and visual saliency
- Assistive living
- Privacy preserving systems
- Quality of life technologies
- Mobile and wearable systems
- Applications for the visually impaired
- Sign language recognition and applications for the hearing impaired
- Applications for the ageing society
- Personalized monitoring
- Egocentric and first-person vision
- Applications to improve the health and wellbeing of children and the elderly
***************************************
ICMI 2020: Call for Long and Short Papers
http://icmi.acm.org/2020/index.php?id=cfp
25-29 Oct 2020, Utrecht, The Netherlands
***************************************
Call for Long and Short Papers
The 22nd International Conference on Multimodal Interaction (ICMI 2020)
will be held in Utrecht, the Netherlands. ICMI is the premier
international forum for multidisciplinary research on multimodal
human-human and human-computer interaction, interfaces, and system
development. The conference focuses on theoretical and empirical
foundations, component technologies, and combined multimodal processing
techniques that define the field of multimodal interaction analysis,
interface design, and system development.
We are keen to showcase novel input and output modalities and interactions
to the ICMI community. ICMI 2020 will feature a single-track main
conference which includes: keynote speakers, technical full and short
papers (including oral and poster presentations), demonstrations, exhibits
and doctoral spotlight papers. The conference will also feature workshops
and grand challenges. The proceedings of ICMI 2020 will be published by ACM
as part of their series of International Conference Proceedings and Digital
Library.
We also want to welcome conference papers from behavioral and social
sciences. These papers allow us to understand how technology can be used to
increase our scientific knowledge and may focus less on presenting
technical or algorithmic novelty. For this reason, the "novelty" criterion
used during the ICMI 2020 review will be based on two sub-criteria (i.e.,
scientific novelty and technical novelty, as described below). Accepted
papers at ICMI 2020 need only be novel on one of these sub-criteria. In
other words, a paper which is strong on scientific knowledge contribution
but low on algorithmic novelty should be ranked similarly to a paper that
is high on algorithmic novelty but low on knowledge discovery.
- Scientific Novelty: Papers should bring some new knowledge to the
scientific community. For example, discovering new behavioral markers that
are predictive of mental health or how new behavioral patterns relate to
children’s interactions during learning. It is the responsibility of the
authors to perform a proper literature review and clearly discuss the
novelty in the scientific discoveries made in their paper.
- Technical Novelty: Papers reviewed with this sub-criterion should include
novelty in their computational approach for recognizing, generating or
modeling data. Examples include: novelty in the learning and prediction
algorithms, in the neural architecture, or in the data representation.
Novelty can also be associated with a new usage of an existing approach.
Please see the Submission Guidelines for Authors
https://icmi.acm.org/2020/index.php?id=authors for detailed submission
instructions.
This year’s conference theme: In this information age, technological
innovation is at the core of our lives, rapidly transforming and
impacting the state of the world in art, culture, society, and science;
the borders between classical disciplines such as the humanities and
computer science are fading. In particular, we ask how multimodal
processing of human behavioural data can create meaningful impact in art,
culture, and society. And vice versa: how do art, culture, and
society influence our approaches and techniques in multimodal processing?
As such, this year ICMI welcomes contributions on the theme of multimodal
processing and representation of human behaviour in art, culture, and
society.
Additional topics of interest include but are not limited to:
- Affective computing and interaction
- Cognitive modeling and multimodal interaction
- Gesture, touch and haptics
- Healthcare, assistive technologies
- Human communication dynamics
- Human-robot/agent multimodal interaction
- Interaction with smart environment
- Machine learning for multimodal interaction
- Mobile multimodal systems
- Multimodal behavior generation
- Multimodal datasets and validation
- Multimodal dialogue modeling
- Multimodal fusion and representation
- Multimodal interactive applications
- Speech behaviors in social interaction
- System components and multimodal platforms
- Visual behaviours in social interaction
- Virtual/augmented reality and multimodal interaction
Important Dates
Paper Submission: May 4, 2020 (11:59pm GMT-7)
Reviews to authors: July 3, 2020
Rebuttal due: July 10, 2020 (11:59pm GMT-7)
Paper notification: July 20, 2020
Camera ready paper: August 17, 2020
Presenting at main conference: October 25-29, 2020
--
Regards,
Leimin
This message is probably only of relevance to UK-based researchers, though people from other countries may have relevant comments, depending on how you vote. The UK Government is proposing to introduce a requirement to take photo ID to polling stations in order to vote. Apart from the issue of potentially excluding people who don't currently have such ID (mainly the most disadvantaged in society), there is the problem that we know photo ID doesn't really work. May I invite UK-based researchers to contact their MPs to explain the evidence that it is not reliable? If enough of us do, they might have a rethink.
Peter
My messages may arrive outside of the working day but this does not imply any expectation that you should reply outside of your normal working hours. If you wish to respond, please do so when convenient.
Peter Hancock
Professor of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
https://www.stir.ac.uk/people/11587
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Latest paper:
Eye see through you! Eye tracking unmasks concealed face recognition despite countermeasures: https://rdcu.be/bNlKn
Dear Colleagues,
We have started a cross-cultural study on the perception of faces of
individuals scoring high or low on the Dark Triad scales. We would be
grateful if you shared the link on social media or mailing lists. The
English version is available here:
http://bit.ly/DTenglish
We also have a version in Hebrew. It is extremely difficult for us to
collect a sample of appropriate size, so help from colleagues in Israel is
especially appreciated. You can share this link:
http://bit.ly/DThebrew
Thanks in advance
*Ferenc Kocsor, PhD*
research associate
Institute of Psychology | Faculty of Humanities | University of Pécs
evolutionpsychology.com (evoluciospszichologia.hu)
Greetings, I hope this message finds you well!
I am writing to you regarding the Society for Affective Science and its
annual meeting, which will take place at the Parc 55 Hotel in San
Francisco, CA on April 23-25, 2020.
As a regular attendee and a member of the membership and outreach
committee, I encourage you and your trainees and students to attend!
My students and I have found SAS to be an excellent venue for
presenting research, and we find the interdisciplinary nature of the
program to be uniquely beneficial.
Below, I outline exciting highlights from the upcoming meeting and
opportunities for you and your students to submit abstracts for symposia,
flash talks or posters.
Please let me know if you have any questions. I hope to see you at the
2020 SAS meeting!
All the best,
Jolie
Jolie Baumann Wormwood, PhD
Assistant Professor of Psychology
422 McConnell Hall
15 Academic Way
Durham, NH 03824
——
*SAS 2020 Registration and Call for Abstracts*
San Francisco, California
Parc 55 Hotel
April 23-25, 2020
The Society for Affective Science (SAS) is pleased to announce its call for
submissions of abstracts to be considered for its 2020 Annual Conference.
The 2020 SAS conference will take place at the Parc 55 Hotel in San
Francisco, CA on April 23-25, 2020.
*Program Highlights*
We’re excited to announce that this year's program includes a Presidential
Symposium on affect in politics featuring Morteza Dehghani, Nathan Kalmoe,
and Robb Willer; TED-style talks by Lasana Harris, Dacher Keltner, Kristin
Lagattuta, Matthew Nock, and Jamil Zaki; and Invited Flash Talks by
James Cavanaugh, Aaron Heller, Lori Hoggard, Jessica Lougheed, Emily Mower
Provost, Nilam Ram, Gal Sheppes, and Yukiko Uchida. Presentations will be
featured in an interesting array of sessions of varying formats.
*Abstract Submission Formats*
We welcome abstract submissions that describe new research within the
domain of affective science. In line with our goal to facilitate
interdisciplinary advances, we welcome submissions from affective
scientists in any discipline (e.g., anthropology, business, computer
science, cultural studies, economics, education, geography, history,
integrative medicine, law, linguistics, literature, neuroscience,
philosophy, political science, psychiatry, psychology, public health,
sociology, theatre), working on a broad range of topics using a variety of
measures. *Authors at all career stages – trainees, junior faculty, and
senior faculty – are encouraged to submit an abstract according to three
presentation formats as follows:*
1. A *poster* on any topic within the domain of affective science
2. A *flash talk* showcasing affective science research (open to all career
stages and disciplines)
3. A *symposium* providing an in-depth perspective on individual research
areas/topics within affective science. *This is a NEW submission format
that we are excited to offer for SAS 2020!*
*Deadline for Receipt of Abstracts*
Abstracts must be submitted by Friday, November 8, 2019 at 11:59 p.m. Baker
Island Time (BIT; UTC-12 --- the last timezone before the date line) to be
considered for inclusion in the program. Please review the *abstract
submission instructions* carefully.
*Selection Process*
Abstracts will be evaluated on the basis of scholarly merit by blind peer
review. Poster and Flash Talk abstracts with trainees (i.e., postdoctoral
fellow, graduate student, post-baccalaureate, undergraduate student) as
first author will be considered for an award based on further in-person
evaluation at the conference. Awards will be announced at the conference
during the closing ceremony. All presenters must register and pay to attend
the meeting. Notification of acceptance or rejection of abstracts will be
e-mailed to the corresponding author by mid-January 2020. Presenters must
be the first author on the submitted abstract.
*We hope to see you at SAS 2020 in San Francisco!*
Dear Colleagues,
We would like to invite you to contribute a chapter for the upcoming volume
entitled “Neural and Machine Learning for Emotion and Empathy Recognition:
Experiences from the OMG-Challenges” to be published by the Springer Series
on Competitions in Machine Learning. Our book will be available by mid-2020.
Website: https://easychair.org/cfp/OMGBook2019
A short description of our volume follows:
Emotional expression perception and categorization are extremely popular in
the affective computing community. However, the inclusion of emotions in
the decision-making process of an agent is not considered in most of the
research in this field. To treat emotion expressions as the final goal,
although necessary, reduces the usability of such solutions in more complex
scenarios. To create a general affective model to be used as a modulator
for learning different cognitive tasks, such as modeling intrinsic
motivation, creativity, dialog processing, grounded learning, and
human-level communication, instantaneous emotion perception cannot be the
pivotal focus.
This book aims to present recent contributions to multimodal emotion
recognition and empathy prediction that take into consideration the
long-term development of affective concepts. In this regard, we provide
access to two datasets: the OMG-Emotion Behavior Recognition and OMG-Empathy
Prediction datasets. These datasets were designed, collected and formalized
to be used on the OMG-Emotion Recognition Challenge and the OMG-Empathy
Prediction challenge, respectively. All the participants of our challenges
are invited to submit their contribution to our book. We also invite
interested authors to use our datasets in the development of inspiring and
innovative research on affective computing. By compiling these solutions
and editing this book, we hope to inspire further research in affective and
cognitive computing over longer timescales.
TOPICS OF INTEREST
The topics of interest for this call for chapters include, but are not
limited to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Novel neural network models for affective processing
- Lifelong affect analysis, perception, and interpretation
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
- New theories and findings on empathy modeling
- Multimodal processing of empathetic and social signals
- Novel neural network models for empathy understanding
- Lifelong models for empathetic interactions
- Empathetic Human-Robot-Interaction Scenarios
- New neuroscientific and psychological findings on empathy representation
- Multi-agent communication for empathetic interactions
- Empathy as a decision-making modulator
- Personalized systems for empathy prediction
Each contributed chapter is expected to present a novel research study, a
comparative study, or a survey of the literature.
SUBMISSIONS
All submissions should be done via EasyChair:
https://easychair.org/cfp/OMGBook2019
Original artwork and a signed copyright release form will be required for
all accepted chapters. For author instructions, please visit:
https://www.springer.com/us/authors-editors/book-authors-editors/resources-guidelines/book-manuscript-guidelines
We would also like to announce that our two datasets, related to emotion
expressions and empathy prediction, are now fully available. You can
access them and obtain more information by visiting their websites:
- OMG-EMOTION -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_emotion.html
- OMG-EMPATHY -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy.html
If you want more information, please do not hesitate to contact me:
barros@informatik.uni-hamburg.de
IMPORTANT DATES:
- Submissions of full-length chapters: 31st of October 2019 (message us if
you need more time!)
--
Best regards,
*Pablo Barros*
*http://www.pablobarros.net <http://www.pablobarros.net>*