A PhD Studentship
University of Winchester
Top-down effects of mental state on face perception
A fully funded PhD Studentship (stipend + fees) opportunity at the University of Winchester, UK
The position is open to both UK/EU and international students*.
Applications are invited for a 3-year, fully funded, interdisciplinary PhD position under the supervision of Dr. Daniel Gill and Prof. Paul Sowden from the Department of Psychology, and Dr. Claire Ancient from the Department of Digital Futures.
Our team is seeking a talented, enthusiastic, knowledgeable and highly motivated PhD student to take part in an exciting study that combines clinical, behavioural and computational research techniques to study the effect of mental state on face perception.
Research suggests that the process of attending to emotionally expressive faces is susceptible to mood and mental state. The current project will investigate the explicit modifications of facial mental representations induced by mental states, in particular depression and anxiety. The project will involve psychophysical and computational tools.
The successful applicant will contribute to recordings with patients in collaborating clinics and to the analysis of the data. They should have very good quantitative and computational skills and a strong background or interest in neuroscience and/or psychology.
Requirements
- A track record of high academic achievement, demonstrated by a first class or high upper second undergraduate honours degree and/or a master’s degree (or equivalent) in Neuroscience, Computer Science, Electric or Biomedical Engineering, Psychology, Statistics or related disciplines.
- Two academic references.
- The ability to work independently, with the support of a supervisory team, and the enthusiasm to contribute to a vibrant and stimulating research environment are essential.
- Programming skills (e.g., MATLAB or Python).
- Familiarity with machine learning and image processing techniques.
- Fluency in English.
Prior to the submission of the formal application, prospective students are encouraged to contact Dr. Gill (daniel.gill(a)winchester.ac.uk), Dr. Claire Ancient (claire.ancient(a)winchester.ac.uk) or Prof. Sowden (paul.sowden(a)winchester.ac.uk) by email no later than May 10th, 2019 for further instructions and informal enquiries (this is optional; applications can be submitted directly via the link in the Application Process section).
The University of Winchester is located in the stunning city of Winchester, one of the most beautiful cities in the UK. Winchester is less than an hour by train from London Waterloo station.
Application Process:
Students should apply to the University of Winchester using Application Form A, which includes a substantial project proposal. To download a copy of Form A please click the following link:
https://www.winchester.ac.uk/study/research-degrees/how-to-apply/
Key Dates:
· Deadline for applications: Midnight 19 May 2019
· References direct from referees** required by 29 May 2019
· Interviews will be held between 24 June and 29 June 2019
· Awards begin September 2019
*NB: Non-EU students are required to pay the balance between UK and non-EU tuition fees for the three years of the studentship (for 19/20: £13,300 − £4,200 = £9,100/annum)
Apologies for cross-posting
***********************************************************************************
FGAHI 2019: CALL FOR PAPERS
2nd International Workshop on Face and Gesture Analysis for Health Informatics
Accepted papers will be published in the CVF open access archive.
Submission Deadline Extended: May 1st, 2019.
Camera-ready deadline: May 15th, 2019.
***********************************************************************************
The 2nd International Workshop on Face and Gesture Analysis for Health Informatics (FGAHI
2019) will be held in conjunction with IEEE CVPR 2019, June 16th - June 21st, 2019, in Long Beach, CA.
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
http://fgahi2019.isir.upmc.fr/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Hi everyone, my boss Cindy Bukach has a postdoc position available in her lab. See the job details below. Richmond is a wonderful place to live and work.
The University of Richmond has an opening for a postdoctoral fellowship in cognitive neuroscience starting in July 2019. The fellowship is designed to enhance the candidate's scholarship and provide teaching experience for a career in academia. The candidate will work collaboratively with Dr. Cindy Bukach investigating how category-specific effects (in particular face perception and other-race effects) emerge, generalize, and transform with changes in experience and context, using both behavioral and ERP techniques. Applicants should hold a PhD in a related field (e.g., psychology, cognitive science, neuroscience) and have extensive experience with EEG, including experiment design, programming, data collection, and statistical analysis. Proven skills in scholarly writing will be an advantage, as will a background in face/object recognition or social neuroscience and/or programming skills. Finally, the candidate is expected to have good social skills, the ability to work in a team, and a commitment to mentoring undergraduates. The successful candidate will work collaboratively and independently on research with Dr. Cindy Bukach and her students, and will teach two units per academic year.
The University of Richmond is a private university located just a short drive from downtown Richmond, Virginia. Through its five schools and wide array of campus programming, the University combines the best qualities of a small liberal arts college and a large university. With nearly 4,000 students, an 8:1 student-faculty ratio, and 92% of traditional undergraduate students living on campus, the University is remarkably student-centered, focused on preparing students “to live lives of purpose, thoughtful inquiry, and responsible leadership in a global and pluralistic society.”
The University of Richmond is committed to developing a diverse workforce and student body, and to modeling an inclusive campus community which values the expression of difference in ways that promote excellence in teaching, learning, personal development, and institutional success. Our academic community strongly encourages applications that are in keeping with this commitment. For more information on the department and its programs, please see psychology.richmond.edu.
Applicants should apply online at http://jobs.richmond.edu using the Faculty (Adjunct/Visitor) link to Trawick Postdoctoral Fellow – 000669. Applicants will be asked to submit a curriculum vitae, a research statement, and the names of two references, who will receive an automated email asking them to submit their reference letter to this website. Review of applications will begin immediately and continue until the position is filled. Send inquiries to Cindy Bukach at cbukach(a)richmond.edu.
Dear all
I'm just writing up a face matching study where I employed Asian and Caucasian faces; however, when the photographs were taken for the stimuli, permission was not sought to use the faces in any peer-reviewed publication. Does anyone have full frontal face photographs of a young (18-25) Asian male and a Caucasian male, with no facial hair or glasses, that we can use in the paper as an example of the stimuli we used? Thanks in advance
Best,
Trina
Dr Catriona Havard| Head of Discipline, Senior Lecturer in Psychology, Faculty of Arts and Social Science
The Open University, Walton Hall, Milton Keynes, MK7 6AA
Tel: +44 (0) 1908 654554
To access a selection of my papers please see:
http://oro.open.ac.uk/view/person/ch22572.html
*Postdoctoral position in object and face recognition at NYU Abu Dhabi*
A postdoctoral research position is open at the Objects and Knowledge
Laboratory, headed by Dr. Olivia Cheung, at New York University Abu Dhabi (
http://www.oliviacheunglab.org/). The postdoctoral researcher will carry
out behavioral and fMRI experiments on human object/face/letter/scene
recognition. Potential research projects include, but are not limited to,
investigations of the influences of experience and conceptual knowledge on
recognition processes.
Applicants must have a Ph.D. in Psychology, Cognitive Neuroscience, or a
related field, and should possess strong programming skills (e.g., Matlab).
Prior experience with neuroimaging and psychophysical techniques is highly
preferred. Initial appointment is for up to two years. Starting date is
flexible, preferably by August 2019.
The Objects and Knowledge Laboratory is part of the rapidly growing
Psychology division at New York University Abu Dhabi. The lab is located on
the Saadiyat Island campus (Abu Dhabi’s cultural hub) and has access to
state-of-the-art neuroimaging and behavioral facilities (including MRI,
MEG, and eye tracking).
Apart from a competitive salary, the postdoctoral researcher will receive
housing and other benefits. More information about living in Abu Dhabi can
be found here:
http://nyuad.nyu.edu/en/campus-life/residential-education-and-housing/livin…
New York University has established itself as a Global Network University,
a multi-site, organically connected network encompassing key global cities
and idea capitals. The network has three foundational degree-granting
campuses: New York, Abu Dhabi, and Shanghai, complemented by a network of
eleven research and study-away sites across five continents. Faculty and
students circulate within this global network in pursuit of common research
interests and the promotion of cross-cultural and interdisciplinary
solutions for problems both local and global.
Informal inquiries regarding the position, university, or area, are
encouraged. To apply, individuals should email a curriculum vitae, a brief
statement of research interests, the expected date of availability, and
contact information of two referees. All correspondence should be sent to
Olivia Cheung (olivia.cheung(a)nyu.edu).
--
Olivia Cheung
Assistant Professor of Psychology
New York University Abu Dhabi
o: Computational Research Building (A2), Office 161
p: +971 2 628 4967
e: olivia.cheung(a)nyu.edu
web: www.oliviacheunglab.org
Hi everyone,
I'm currently caricaturing celebrity faces in psychomorph. I'm finding that
several of the celebrity faces have lower-than-average eyebrows and/or
eyelid folds, which is causing artifacts around the eyes. Elinor's
suggestion of adding more points in those locations doesn't seem to be
helping! Does anyone have any suggestions?
Thanks!
Rachel
--
“It is not our differences that divide us. It is our inability to
recognize, accept, and celebrate those differences.” - Audre Lorde
Call for Challenge participation
Seventh Emotion Recognition in the Wild (EmotiW) Challenge 2019
https://sites.google.com/view/emotiw2019
@ ACM International Conference on Multimodal Interaction 2019, Suzhou,
China
----------------------------------------------------------------------
The Emotion Recognition in the Wild 2019 Challenge consists of multimodal
classification challenges that mimic real-world conditions.
Traditionally, emotion recognition has been performed on laboratory
controlled data. While undoubtedly worthwhile at the time, such lab
controlled data poorly represents the environment and conditions faced in
real-world situations. With the increase in the number of video clips
online, it is worthwhile to explore the performance of emotion recognition
methods that work ‘in the wild’.
There are three sub-challenges:
a. Audio-video based Emotion Recognition
b. Group-level Cohesion Recognition
c. Engagement Prediction
Timeline:
Train and validate data available - 15th March 2019
Test data available - 5th June 2019
Paper submission deadline - July 2019
Paper notification - July 2019
Organisers
Abhinav Dhall, Indian Institute of Technology Ropar
Roland Goecke, University of Canberra
Tom Gedeon, Australian National University
--
Abhinav Dhall, PhD
Assistant Professor,
Department of Computer Science & Engineering,
Indian Institute of Technology, Ropar
Webpage: http://iitrpr.ac.in/lasii/
Google Scholar: https://goo.gl/iDwNTx
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Long and Short Papers
https://icmi.acm.org/2019/index.php?id=cfp
Abstract Submission: May 1, 2019 (11:59pm PST)
Final Submission: May 7, 2019 (11:59pm PST)
***********************************************************************************
Call for Long and Short Papers
The 21st International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, China. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
We are keen to showcase novel input and output modalities and interactions to the ICMI community. ICMI 2019 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits and doctoral spotlight papers. The conference will also feature workshops and grand challenges. The proceedings of ICMI 2019 will be published by ACM as part of their series of International Conference Proceedings and Digital Library.
We also want to welcome conference papers from behavioral and social sciences. These papers allow us to understand how technology can be used to increase our scientific knowledge and may focus less on presenting technical or algorithmic novelty. For this reason, the "novelty" criteria used during ICMI 2019 review will be based on two sub-criteria (i.e., scientific novelty and technical novelty as described below). Accepted papers at ICMI 2019 only need to be novel on one of these sub-criteria. In other words, a paper which is strong on scientific knowledge contribution but low on algorithmic novelty should be ranked similarly to a paper that is high on algorithmic novelty but low on knowledge discovery.
Scientific Novelty: Papers should bring some new knowledge to the scientific community. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper.
Technical Novelty: Papers reviewed with this sub-criterion should include novelty in their computational approach for recognizing, generating or modeling data. Examples include: novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated to a new usage of an existing approach.
Please see the Submission Guidelines for Authors for detailed submission instructions.
This year, ICMI welcomes contributions on our theme of multi-modal understanding of multi-party interactions. Additional topics of interest include but are not limited to:
Affective computing and interaction
Cognitive modeling and multimodal interaction
Gesture, touch and haptics
Healthcare, assistive technologies
Human communication dynamics
Human-robot/agent multimodal interaction
Interaction with smart environment
Machine learning for multimodal interaction
Mobile multimodal systems
Multimodal behavior generation
Multimodal datasets and validation
Multimodal dialogue modeling
Multimodal fusion and representation
Multimodal interactive applications
Speech behaviors in social interaction
System components and multimodal platforms
Visual behaviors in social interaction
Virtual/augmented reality and multimodal interaction
Important Dates:
Abstract Submission May 1, 2019
Final submissions May 7, 2019
Paper rebuttal due June 25, 2019
Author notification July 7, 2019
Paper Camera Ready July 15, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear colleagues,
We are delighted to offer a fully funded 4-year PhD position on the ERC-funded project Computing the Face Syntax of Social Communication at the Institute of Neuroscience and Psychology, University of Glasgow, Scotland. The competition is open internationally.
Please see attached for details of the project and the application process.
Many thanks for sharing!
Best,
Dr. Rachael E. Jack, Ph.D.
Reader
Institute of Neuroscience & Psychology
School of Psychology
University of Glasgow
+44 (0) 141 5087
www.psy.gla.ac.uk/schools/psychology/staff/rachaeljack/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Workshops
https://icmi.acm.org/2019/index.php?id=CfW
Workshop proposal submission extended: Sunday, February 24, 2019
***********************************************************************************
Call for Workshops
The International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, Jiangsu, China, during October 14-18, 2019. ICMI is the premier international conference for multidisciplinary research on multimodal human-human and human-computer interaction analysis, interface design, and system development. The theme of the ICMI 2019 conference is Multi-modal Understanding of Multi-party Interactions. ICMI has developed a tradition of hosting workshops in conjunction with the main conference to foster discourse on new research, technologies, social science models and applications. Examples of recent workshops include:
Multi-sensorial Approaches to Human-Food Interaction
Group Interaction Frontiers in Technology
Modeling Cognitive Processes from Multimodal Data
Human-Habitat for Health
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
Investigating Social Interactions with Artificial Agents
Child Computer Interaction
Multimodal Interaction for Education
We are seeking workshop proposals on emerging research areas related to the main conference topics, and those that focus on multi-disciplinary research. We would also strongly encourage workshops that will include a diverse set of keynote speakers (factors to consider include: gender, ethnic background, institutions, years of experience, geography, etc.).
The format, style, and content of accepted workshops are under the control of the workshop organizers. Workshops may be half-day or full-day in duration. Workshop organizers will be expected to manage the workshop content, be present to moderate the discussion and panels, invite experts in the domain, and maintain a website for the workshop. Workshop papers will be indexed by ACM.
Submission
Prospective workshop organizers are invited to submit proposals in PDF format (Max. 3 pages). Please email proposals to the workshop chairs: Hongwei Ding (hwding(a)sjtu.edu.cn), Carlos Busso (busso(a)utdallas.edu) and Tadas Baltrusaitis (tadyla(a)gmail.com). The proposal should include the following:
Workshop title
List of organizers including affiliation, email address, and short biographies
Workshop motivation, expected outcomes and impact
Tentative list of keynote speakers
Workshop format (by invitation only, call for papers, etc.), anticipated number of talks/posters, workshop duration (half-day or full-day) including tentative program
Planned advertisement means, website hosting, and estimated participation
Paper review procedure (single/double-blind, internal/external, solicited/invited-only, pool of reviewers, etc.)
Paper submission and acceptance deadlines
Special space and equipment requests, if any
Important Dates:
Workshop proposal submission extended: Sunday, February 24, 2019
Notification of acceptance: Saturday, March 2, 2019
Workshop Date: Monday, October 14, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear Colleagues,
We would like to invite you to contribute a chapter for the upcoming volume
entitled “Neural and Machine Learning for Emotion and Empathy Recognition:
Experiences from the OMG-Challenges” to be published by the Springer Series
on Competitions in Machine Learning. Our book will be available by the end
of 2019.
Website: https://easychair.org/cfp/OMGBook2019
A short description of our volume follows:
Emotional expression perception and categorization are extremely popular in
the affective computing community. However, the inclusion of emotions in
the decision-making process of an agent is not considered in most of the
research in this field. To treat emotion expressions as the final goal,
although necessary, reduces the usability of such solutions in more complex
scenarios. To create a general affective model to be used as a modulator
for learning different cognitive tasks, such as modeling intrinsic
motivation, creativity, dialog processing, grounded learning, and
human-level communication, instantaneous emotion perception cannot be the
pivotal focus.
This book aims to present recent contributions for multimodal emotion
recognition and empathy prediction which take into consideration the
long-term development of affective concepts. In this regard, we provide
access to two datasets: the OMG-Emotion Behavior Recognition and
OMG-Empathy Prediction datasets. These datasets were designed, collected
and formalized to be used on the OMG-Emotion Recognition Challenge and the
OMG-Empathy Prediction challenge, respectively. All the participants of our
challenges are invited to submit their contribution to our book. We also
invite interested authors to use our datasets on the development of
inspiring and innovative research on affective computing. By formatting
these solutions and editing this book, we hope to inspire further research
in affective and cognitive computing over longer timescales.
TOPICS OF INTEREST
The topics of interest for this call for chapters include, but are not
limited to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Novel neural network models for affective processing
- Lifelong affect analysis, perception, and interpretation
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
- New theories and findings on empathy modeling
- Multimodal processing of empathetic and social signals
- Novel neural network models for empathy understanding
- Lifelong models for empathetic interactions
- Empathetic Human-Robot-Interaction Scenarios
- New neuroscientific and psychological findings on empathy representation
- Multi-agent communication for empathetic interactions
- Empathy as a decision-making modulator
- Personalized systems for empathy prediction
Each contributed chapter is expected to present a novel research study, a
comparative study, or a survey of the literature.
We also expect each contributed chapter to engage with at least one
of our datasets: the OMG-Emotion and the OMG-Empathy.
SUBMISSIONS
All submissions should be done via EasyChair:
https://easychair.org/cfp/OMGBook2019
Original artwork and a signed copyright release form will be required for
all accepted chapters. For author instructions, please visit:
https://www.springer.com/us/authors-editors/book-authors-editors/resources-…
We would also like to announce that our two datasets, related to emotion
expressions and empathy prediction, are now fully available. You can have
access to them and obtain more information by visiting their website:
- OMG-EMOTION -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_emotion.html
- OMG-EMPATHY -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy.html
If you want more information, please do not hesitate to contact me:
barros(a)informatik.uni-hamburg.de
IMPORTANT DATES:
- Submission of abstracts: 29th of March 2019
- Submissions of full-length chapters: 29th of March 2019
- Notification of final editorial decisions 31st of May 2019
- Submission of revised chapters: 08th of July, 2019
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros(a)informatik.uni-hamburg.de
http://www.pablobarros.net
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Long and Short Papers
https://icmi.acm.org/2019/index.php?id=cfp
Abstract Submission: May 1, 2019 (11:59pm PST)
Final Submission: May 7, 2019 (11:59pm PST)
***********************************************************************************
Call for Long and Short Papers
The 21st International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, China. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
We are keen to showcase novel input and output modalities and interactions to the ICMI community. ICMI 2019 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits and doctoral spotlight papers. The conference will also feature workshops and grand challenges. The proceedings of ICMI 2019 will be published by ACM as part of their series of International Conference Proceedings and Digital Library.
We also want to welcome conference papers from the behavioral and social sciences. These papers allow us to understand how technology can be used to increase our scientific knowledge and may focus less on presenting technical or algorithmic novelty. For this reason, the "novelty" criterion used during the ICMI 2019 review will be based on two sub-criteria (i.e., scientific novelty and technical novelty, as described below). Accepted papers at ICMI 2019 need to be novel on only one of these sub-criteria. In other words, a paper that is strong on scientific knowledge contribution but low on algorithmic novelty should be ranked similarly to a paper that is high on algorithmic novelty but low on knowledge discovery.
Scientific Novelty: Papers should bring some new knowledge to the scientific community. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper.
Technical Novelty: Papers reviewed under this sub-criterion should include novelty in their computational approach for recognizing, generating or modeling data. Examples include novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with a new usage of an existing approach.
Please see the Submission Guidelines for Authors for detailed submission instructions.
This year, ICMI welcomes contributions on our theme of multi-modal understanding of multi-party interactions. Additional topics of interest include but are not limited to:
Affective computing and interaction
Cognitive modeling and multimodal interaction
Gesture, touch and haptics
Healthcare, assistive technologies
Human communication dynamics
Human-robot/agent multimodal interaction
Interaction with smart environment
Machine learning for multimodal interaction
Mobile multimodal systems
Multimodal behavior generation
Multimodal datasets and validation
Multimodal dialogue modeling
Multimodal fusion and representation
Multimodal interactive applications
Speech behaviors in social interaction
System components and multimodal platforms
Visual behaviors in social interaction
Virtual/augmented reality and multimodal interaction
Important Dates:
Abstract Submission: May 1, 2019
Final Submission: May 7, 2019
Paper Rebuttal Due: June 25, 2019
Author Notification: July 7, 2019
Camera-Ready Papers: July 15, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
CALL FOR PAPERS
IEEE Transactions on Affective Computing
Special Issue on Automated Perception of Human Affect from Longitudinal
Behavioral Data
Website:
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/tacSpecialIssue201…
I. Aim and Scope
Research trends within artificial intelligence and cognitive sciences are
still heavily based on computational models that attempt to imitate human
perception in various behavior categorization tasks. However, most of the
research in the field focuses on instantaneous categorization and
interpretation of human affect, such as the inference of six basic emotions
from face images, and/or affective dimensions (valence-arousal), stress and
engagement from multi-modal (e.g., video, audio, and autonomic physiology)
data. This diverges from the developmental aspect of emotional behavior
perception and learning, where human behavior and expressions of affect
evolve and change over time. Moreover, these changes are present not only
in the temporal domain but also within different populations and more
importantly, within each individual. This calls for a new perspective when
designing computational models for analysis and interpretation of human
affective behaviors: models that can adapt to different contexts and
individuals over time in a timely and efficient manner, and also
incorporate existing neurophysiological and psychological findings (prior
knowledge). Thus, the long-term goal is to create life-long personalized
learning and inference systems for analysis and perception of human
affective behaviors. Such systems would benefit from long-term contextual
information (including demographic and social aspects) as well as
individual characteristics. This, in turn, would allow building intelligent
agents (such as mobile and robot technologies) capable of adapting their
behavior in a continuous and on-line manner to the target contexts and
individuals.
This special issue aims at contributions from computational neuroscience
and psychology, artificial intelligence, machine learning, and affective
computing, challenging and expanding current research on interpretation and
estimation of human affective behavior from longitudinal behavioral data,
i.e., single or multiple modalities captured over extended periods of time
allowing efficient profiling of target behaviors and their inference in
terms of affect and other socio-cognitive dimensions. We invite
contributions focusing on both the theoretical and modeling perspective, as
well as applications ranging from human-human, human-computer and
human-robot interactions.
II. Potential Topics
The capability of computational models to perceive and understand emotional
behavior is an important and popular research topic. That is why recent
special issues of the IEEE Transactions on Affective Computing have covered
topics ranging from emotion behavior analysis “in-the-wild” to personality
analysis. However, most of the research published through these calls
treats emotional behavior as an instantaneous event, relating mostly to
emotion recognition, and thus neglects the development of complex models of
emotional behavior. Our special issue will foster the development of the
field by focusing on excellent research on emotion models for long-term
behavior analysis.
The topics of interest for this special issue include, but are not limited
to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Lifelong affect analysis, perception, and interpretation
- Novel neural network models for affective processing
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
III. Submission
Prospective authors are invited to submit their manuscripts electronically,
adhering to the IEEE Transactions on Affective Computing guidelines (
https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5165369). Please
submit your papers through the online system (
https://mc.manuscriptcentral.com/taffc-cs) and be sure to select the
special issue: Special Issue/Section on Automated Perception of Human
Affect from Longitudinal Behavioral Data.
IV. IMPORTANT DATES:
Submissions Deadline: 15th of February 2019
V. Guest Editors
Pablo Barros, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
Ognjen (Oggi) Rudovic, Massachusetts Institute of Technology, United States
of America
Hatice Gunes, University of Cambridge, United Kingdom
--
Best regards,
*Pablo Barros*
http://www.pablobarros.net
Dear colleagues,
I would be grateful if you could share this call for EoIs with your colleagues and on your mailing lists.
Deadline 4 FEB 2019.
Thanks
The Center for Social, Cognitive, Affective Neuroscience (cSCAN: http://cscan.gla.ac.uk/) at the University of Glasgow, Scotland, seeks expressions of interest (EoI) in the Future Leaders Fellowship (FLF) from UK Research and Innovation (www.ukri.org/funding/funding-opportunities/future-leaders-fellowships/). The FLF seeks to “develop, retain, attract and sustain research and innovation talent in the UK” and offers a research-focused post for 4 years, plus a possible 3 additional years, followed by an ongoing academic post within cSCAN.
cSCAN researchers address fundamental mechanisms of social perception, social cognition, and social interaction from a unique, highly transdisciplinary perspective that spans psychology, neuroscience and the computational/engineering sciences. More details can be found at http://cscan.gla.ac.uk/. Only candidates who are early-career researchers (within ~6-7 years post-PhD) with very strong publication records relative to career stage and with clear value added to the cSCAN research program will be considered.
Please email rachael.jack(a)glasgow.ac.uk a current CV and a 2-page research statement by Feb 4th, 2019 at the latest. Include ‘Interest in Future Leaders Fellowship’ in the subject of the email. We will be in touch with applicants who are short-listed to progress to the next stage and work with them to develop applications to submit to UKRI. Applicants should refer to the UKRI website for deadline information.
Dr. Rachael E. Jack, Ph.D.
Reader
Institute of Neuroscience & Psychology
School of Psychology
University of Glasgow
+44 (0) 141 5087
www.psy.gla.ac.uk/schools/psychology/staff/rachaeljack/
Apologies for cross-posting
***********************************************************************************
CBAR 2019: CALL FOR PAPERS
6th International Workshop on CONTEXT BASED AFFECT RECOGNITION
https://cbar2019.blogspot.com/
Submission Deadline: January 28th, 2019
***********************************************************************************
The 6th International Workshop on Context Based Affect Recognition (CBAR
2019) will be held in conjunction with FG 2019 in May 2019 in Lille,
France – http://fg2019.org/
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
https://cbar2019.blogspot.com/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear all,
I am on the hunt for a face database that contains expressions
of embarrassment, guilt, flirtation, boredom, arrogance and admiration, as
well as neutral and basic emotions (e.g. happy, angry, sad, etc.)
for an EEG experiment I am running with a dissertation student of mine. If
anyone can point me in the right direction or has access to a number of
faces with these expressions, I would be most grateful!
Best wishes,
Nicola
--
Dear face researchers,
We want to create caricatures for some famous faces that we used in an
experiment last year. These include both men and women and a range of ages
from mid-20s to 60-70s (and the queen). All are Caucasian/White.
Does anyone have average faces for male and female for approx <30 years,
30-60 years and 60+ years, that they would be willing to share with us?
We also have the problem that many but not all the faces are showing teeth,
so we will probably need different averages for the mouth with teeth
showing versus no teeth showing versions...
Thanks!
Rachel
--
“It is not our differences that divide us. It is our inability to
recognize, accept, and celebrate those differences.” - Audre Lorde
Apologies for cross-posting
***********************************************************************************
CBAR 2019: CALL FOR PAPERS
6th International Workshop on CONTEXT BASED AFFECT RECOGNITION
https://cbar2019.blogspot.com/
Submission Deadline: January 28th, 2019 (Extended)
***********************************************************************************
The 6th International Workshop on Context Based Affect Recognition (CBAR
2019) will be held in conjunction with FG 2019 in May 2019 in Lille,
France – http://fg2019.org/
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
https://cbar2019.blogspot.com/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear members,
We are pleased to announce the publication of a first-of-its-kind
children's spontaneous facial expression video database, the LIRIS
Children Spontaneous Facial Expression Video Database (LIRIS-CSE). This
unique database contains spontaneous/natural facial expressions of 12
children in diverse settings, with variable recording scenarios, showing
six universal or prototypic emotional expressions (happiness, sadness,
anger, surprise, disgust and fear). The children were recorded in a
constraint-free environment (no restriction on head movement, no
restriction on hand movement, free sitting, no restriction of any sort)
while they watched specially built/selected stimuli. This constraint-free
environment allowed us to record the spontaneous/natural expressions of
the children as they occurred. The database has been validated by 22 human
raters. Details of the database are presented in the following publication:
A novel database of Children's Spontaneous Facial Expressions (LIRIS-CSE).
Rizwan Ahmed Khan, Crenn Arthur, Alexandre Meyer, Saida Bouakaz. arXiv
(2018) preprint, arXiv:1812.01555. https://arxiv.org/abs/1812.01555
To request database download (for research purpose only) visit project
webpage at: https://childrenfacialexpression.projet.liris.cnrs.fr/
__________________
Best Regards,
Dr. Rizwan Ahmed KHAN
Associate Professor, Barrett Hodgson University, Karachi, Pakistan.
|| Researcher
- Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS),
Lyon, France.
Personal webpage: https://sites.google.com/site/drkhanrizwan17/
Google Scholar: http://scholar.google.com/citations?user=T66djn8AAAAJ&hl=en
DBLP: http://dblp.uni-trier.de/pers/hd/k/Khan:Rizwan_Ahmed.html
YouTube: https://www.youtube.com/user/Rizwankhan2000/videos?view_as=subscriber
Apologies for cross-posting
***********************************************************************************
CBAR 2019: CALL FOR PAPERS
6th International Workshop on CONTEXT BASED AFFECT RECOGNITION
https://cbar2019.blogspot.com/
Submission Deadline: January 7th, 2019
***********************************************************************************
The 6th International Workshop on Context Based Affect Recognition (CBAR
2019) will be held in conjunction with FG 2019 in May 2019 in Lille,
France – http://fg2019.org/
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
https://cbar2019.blogspot.com/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
CALL FOR PAPERS
Extended Deadline for the IEEE Transactions on Affective Computing
Special Issue on Automated Perception of Human Affect from Longitudinal
Behavioral Data
Website:
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/tacSpecialIssue201…
I. Aim and Scope
Research trends within artificial intelligence and cognitive sciences are
still heavily based on computational models that attempt to imitate human
perception in various behavior categorization tasks. However, most of the
research in the field focuses on instantaneous categorization and
interpretation of human affect, such as the inference of six basic emotions
from face images, and/or affective dimensions (valence-arousal), stress and
engagement from multi-modal (e.g., video, audio, and autonomic physiology)
data. This diverges from the developmental aspect of emotional behavior
perception and learning, where human behavior and expressions of affect
evolve and change over time. Moreover, these changes are present not only
in the temporal domain but also within different populations and more
importantly, within each individual. This calls for a new perspective when
designing computational models for analysis and interpretation of human
affective behaviors: models that can adapt to different contexts and
individuals over time in a timely and efficient manner, and also
incorporate existing neurophysiological and psychological findings (prior
knowledge). Thus, the long-term goal is to create life-long personalized
learning and inference systems for analysis and perception of human
affective behaviors. Such systems would benefit from long-term contextual
information (including demographic and social aspects) as well as
individual characteristics. This, in turn, would allow building intelligent
agents (such as mobile and robot technologies) capable of adapting their
behavior in a continuous and on-line manner to the target contexts and
individuals.
This special issue aims at contributions from computational neuroscience
and psychology, artificial intelligence, machine learning, and affective
computing, challenging and expanding current research on interpretation and
estimation of human affective behavior from longitudinal behavioral data,
i.e., single or multiple modalities captured over extended periods of time
allowing efficient profiling of target behaviors and their inference in
terms of affect and other socio-cognitive dimensions. We invite
contributions focusing on both the theoretical and modeling perspective, as
well as applications ranging from human-human, human-computer and
human-robot interactions.
II. Potential Topics
The capability of computational models to perceive and understand emotional
behavior is an important and popular research topic. That is why recent
special issues of the IEEE Transactions on Affective Computing have covered
topics ranging from emotion behavior analysis “in-the-wild” to personality
analysis. However, most of the research published through these calls
treats emotional behavior as an instantaneous event, relating mostly to
emotion recognition, and thus neglects the development of complex models of
emotional behavior. Our special issue will foster the development of the
field by focusing on excellent research on emotion models for long-term
behavior analysis.
The topics of interest for this special issue include, but are not limited
to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Lifelong affect analysis, perception, and interpretation
- Novel neural network models for affective processing
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
III. Submission
Prospective authors are invited to submit their manuscripts electronically,
adhering to the IEEE Transactions on Affective Computing guidelines (
https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5165369). Please
submit your papers through the online system (
https://mc.manuscriptcentral.com/taffc-cs) and be sure to select the
special issue: Special Issue/Section on Automated Perception of Human
Affect from Longitudinal Behavioral Data.
IV. IMPORTANT DATES:
Submissions Deadline: 15th of February 2019
V. Guest Editors
Pablo Barros, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
Ognjen (Oggi) Rudovic, Massachusetts Institute of Technology, United States
of America
Hatice Gunes, University of Cambridge, United Kingdom
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
http://www.pablobarros.net
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/