Apologies for cross-posting
***********************************************************************************
FGAHI 2019: CALL FOR PAPERS
2nd International Workshop on Face and Gesture Analysis for Health Informatics
Accepted papers will be published at the CVF open access archive.
Submission Deadline Extended: May 1st, 2019.
Camera-Ready Deadline: May 15th, 2019.
***********************************************************************************
The 2nd International Workshop on Face and Gesture Analysis for Health Informatics (FGAHI
2019) will be held in conjunction with IEEE CVPR 2019, June 16th - 21st, 2019, in Long Beach, CA.
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
http://fgahi2019.isir.upmc.fr/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Call For Papers - Frontiers Research Topic:
Closing the Loop: From Human Behavior to Multisensory Robots
I. Aim and Scope
The ability to efficiently process crossmodal information is a key feature
of the human brain, providing robust perceptual experience and
behavioral responses. Consequently, the processing and integration of
multisensory information streams such as vision, audio, haptics, and
proprioception play a crucial role in the development of autonomous agents
and cognitive robots, enabling efficient interaction with the
environment even under conditions of sensory uncertainty.
This Research Topic invites authors to submit new findings, theories,
systems, and trends in multisensory learning for intelligent agents and
robots, with the aim of fostering novel and impactful
research that contributes to the understanding of human behavior and
the development of artificial systems operating in real-world environments.
II. Potential Topics
Topics include, but are not limited to:
- New methods and applications for crossmodal processing and multisensory
integration (e.g. vision, audio, haptics, proprioception)
- Machine learning and neural networks for multisensory robot perception
- Computational models of crossmodal attention and perception
- Bio-inspired approaches for crossmodal learning
- Multisensory conflict resolution and executive control
- Sensorimotor learning for autonomous agents and robots
- Crossmodal learning for embodied and cognitive robots
III. Submission
- Abstract - 28th August 2019
- Paper Submission - 02nd December 2019
We have special discounts for open access papers participating in this
Research Topic. If you have any further questions, please let us know.
More information:
https://www.frontiersin.org/research-topics/9321/closing-the-loop-from-huma…
IV. Guest Editors
Pablo Barros, University of Hamburg, Germany
Doreen Jirak, Hamburg University, Germany
German I. Parisi, Apprente, Inc., USA
Jun Tani, Okinawa Institute of Science and Technology, Japan
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
http://www.pablobarros.net
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/