A PhD Studentship
University of Winchester
Top-down effects of mental state on face perception
A fully funded PhD Studentship (stipend + fees) opportunity at the University of Winchester, UK.
The position is open to both UK/EU and international students*.
Applications are invited for a 3-year, fully funded PhD position under the supervision of Dr. Daniel Gill and Prof. Paul Sowden from the Department of Psychology, and Dr. Claire Ancient from the Department of Digital Futures.
Our team is seeking a talented, enthusiastic, knowledgeable and highly motivated PhD student to take part in an exciting project that combines clinical, behavioural and computational research techniques to study the effect of mental state on face perception.
Research suggests that the way we attend to emotionally expressive faces is susceptible to mood and mental state. The current project will investigate how mental states, in particular depression and anxiety, modify facial mental representations. The project will involve psychophysical and computational tools.
The successful applicant will contribute to recordings with patients in collaborating clinics and to the analysis of the data. They should have very good quantitative and computational skills and a strong background or interest in neuroscience and/or psychology.
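One widely used psychophysical tool for estimating facial mental representations is reverse correlation (classification images). The advert does not specify the project's actual methods, so the Python sketch below, using entirely simulated data, is purely illustrative of that general technique:

import numpy as np

# Purely illustrative reverse-correlation (classification-image) sketch.
# All data are simulated; the project's actual methods are not specified
# in this advert.
rng = np.random.default_rng(0)
n_trials, h, w = 2000, 64, 64

# Simulated observer whose internal template emphasises a brow region.
template = np.zeros((h, w))
template[20:26, 16:48] = 1.0  # hypothetical feature of interest

noise = rng.standard_normal((n_trials, h, w))
# On each trial the observer "accepts" noise that correlates with the template.
scores = (noise * template).sum(axis=(1, 2))
chosen = scores > 0

# Classification image: mean accepted noise minus mean rejected noise
# approximates the observer's internal template.
ci = noise[chosen].mean(axis=0) - noise[~chosen].mean(axis=0)
print("mean CI inside template region:", ci[20:26, 16:48].mean())
print("mean CI outside template region:", ci[:10, :10].mean())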
Requirements
- A track record of high academic achievement, demonstrated by a first class or high upper second undergraduate honours degree and/or a master’s degree (or equivalent) in Neuroscience, Computer Science, Electrical or Biomedical Engineering, Psychology, Statistics or related disciplines.
- Two academic references.
- The ability to work independently, with the support of a supervisory team, and the enthusiasm to contribute to a vibrant and stimulating research environment are essential.
- Programming skills (Matlab or Python).
- Familiarity with machine learning and image processing techniques.
- Fluency in English.
Prior to submitting a formal application, prospective students are encouraged to contact Dr. Gill (daniel.gill@winchester.ac.uk) or Prof. Sowden (paul.sowden@winchester.ac.uk) by email no later than 10 May 2019 for informal enquiries and further instructions. (This is optional; applications can be submitted directly via the link in the Application Process section.)
The University of Winchester is located in the stunning city of Winchester, one of the most beautiful cities in the UK. Winchester is less than an hour by train from London Waterloo station.
Application Process:
Students should apply to the University of Winchester using Application Form A, which includes a substantial project proposal. To download a copy of Form A, please click the following link:
https://www.winchester.ac.uk/study/research-degrees/how-to-apply/
Key Dates:
· Deadline for applications: Midnight 19 May 2019
· References direct from referees** required by 29 May 2019
· Interviews will be held between 24 June and 29 June 2019
· Awards begin September 2019
*nb Non-EU students are required to pay the balance between UK and non-EU tuition fees for the three years of the studentship (for 2019/20: £13,300 - £4,200 = £9,100 per annum)
Hi everyone,
I'm currently caricaturing celebrity faces in Psychomorph. I'm finding that
several of the celebrity faces have lower-than-average eyebrows and/or
eyelid folds, which is causing artifacts around the eyes. Elinor's
suggestion of adding more points in those locations doesn't seem to be
helping! Does anyone have any suggestions?
Thanks!
Rachel
--
“It is not our differences that divide us. It is our inability to
recognize, accept, and celebrate those differences.” - Audre Lorde
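For context on the artifact Rachel describes: caricaturing in Psychomorph-style tools amounts to linear extrapolation of delineated landmark points away from an average shape. The minimal sketch below, with hypothetical coordinates, shows how a feature that already deviates strongly from the norm (a low brow) can be pushed past a neighbouring landmark, folding the warp around the eyes:

import numpy as np

def caricature(face_pts, norm_pts, level):
    # level = 0 gives the norm, 1 the original face, > 1 a caricature.
    return norm_pts + level * (face_pts - norm_pts)

# Two (x, y) landmarks, made up for illustration only.
norm = np.array([[100.0, 80.0],   # average brow
                 [100.0, 95.0]])  # average upper eyelid
face = np.array([[100.0, 92.0],   # unusually low brow
                 [100.0, 96.0]])

print(caricature(face, norm, 1.5))
# Brow y moves from 92 to 98, past the eyelid's 96.5: the landmarks
# cross and the resulting warp folds, giving the eye-region artifact
# described above. Extra points help only if they keep the
# extrapolated landmarks in their original vertical order.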
Call for Challenge participation
Seventh Emotion Recognition in the Wild (EmotiW) Challenge 2019
https://sites.google.com/view/emotiw2019
@ ACM International Conference on Multimodal Interaction 2019, Suzhou,
China
----------------------------------------------------------------------
The Emotion Recognition in the Wild 2019 Challenge consists of multimodal
classification challenges that mimic real-world conditions.
Traditionally, emotion recognition has been performed on laboratory-controlled
data. While undoubtedly worthwhile at the time, such lab-controlled data
poorly represent the environment and conditions faced in real-world
situations. With the growing number of video clips available online, it is
worthwhile to explore the performance of emotion recognition methods that
work ‘in the wild’.
There are three sub-challenges:
a. Audio-video based Emotion Recognition
b. Group-level Cohesion Recognition
c. Engagement Prediction
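As one concrete illustration of sub-challenge (a), entries commonly combine per-modality classifiers by decision-level (late) fusion. The sketch below uses invented probabilities and weights, not challenge data or baselines:

import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def late_fusion(p_audio, p_video, w_video=0.6):
    # Weighted average of per-modality class probabilities
    # (the weight is a placeholder, typically tuned on validation data).
    p = w_video * p_video + (1.0 - w_video) * p_audio
    return EMOTIONS[int(np.argmax(p))]

# Made-up per-modality probabilities for one clip.
p_audio = np.array([0.05, 0.05, 0.10, 0.40, 0.20, 0.10, 0.10])
p_video = np.array([0.10, 0.05, 0.05, 0.25, 0.35, 0.10, 0.10])
print(late_fusion(p_audio, p_video))  # -> "happy"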
Timeline:
Train and validate data available - 15th March 2019
Test data available - 5th June 2019
Paper submission deadline - July 2019
Paper notification - July 2019
Organisers
Abhinav Dhall, Indian Institute of Technology Ropar
Roland Goecke, University of Canberra
Tom Gedeon, Australian National University
--
Abhinav Dhall, PhD
Assistant Professor,
Department of Computer Science & Engineering,
Indian Institute of Technology, Ropar
Webpage: http://iitrpr.ac.in/lasii/
Google Scholar: https://goo.gl/iDwNTx
Apologies for cross-posting
***********************************************************************************
FGAHI 2019: CALL FOR PAPERS
2nd International Workshop on Face and Gesture Analysis for Health Informatics
Submission Deadline: March 22, 2019
***********************************************************************************
The 2nd International Workshop on Face and Gesture Analysis for Health Informatics (FGAHI 2019) will be held in conjunction with IEEE CVPR 2019, June 16th - June 21st, Long Beach, CA.
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
http://fgahi2019.isir.upmc.fr/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Long and Short Papers
https://icmi.acm.org/2019/index.php?id=cfp
Abstract Submission: May 1, 2019 (11:59pm PST)
Final Submission: May 7, 2019 (11:59pm PST)
***********************************************************************************
Call for Long and Short Papers
The 21st International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, China. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
We are keen to showcase novel input and output modalities and interactions to the ICMI community. ICMI 2019 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits and doctoral spotlight papers. The conference will also feature workshops and grand challenges. The proceedings of ICMI 2019 will be published by ACM as part of their series of International Conference Proceedings and Digital Library.
We also want to welcome conference papers from behavioral and social sciences. These papers allow us to understand how technology can be used to increase our scientific knowledge and may focus less on presenting technical or algorithmic novelty. For this reason, the "novelty" criteria used during ICMI 2019 review will be based on two sub-criteria (i.e., scientific novelty and technical novelty as described below). Accepted papers at ICMI 2019 only need to be novel on one of these sub-criteria. In other words, a paper which is strong on scientific knowledge contribution but low on algorithmic novelty should be ranked similarly to a paper that is high on algorithmic novelty but low on knowledge discovery.
Scientific Novelty: Papers should bring some new knowledge to the scientific community. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper.
Technical Novelty: Papers reviewed with this sub-criterion should include novelty in their computational approach for recognizing, generating or modeling data. Examples include: novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with a new usage of an existing approach.
Please see the Submission Guidelines for Authors for detailed submission instructions.
This year, ICMI welcomes contributions on our theme of multi-modal understanding of multi-party interactions. Additional topics of interest include but are not limited to:
Affective computing and interaction
Cognitive modeling and multimodal interaction
Gesture, touch and haptics
Healthcare, assistive technologies
Human communication dynamics
Human-robot/agent multimodal interaction
Interaction with smart environment
Machine learning for multimodal interaction
Mobile multimodal systems
Multimodal behavior generation
Multimodal datasets and validation
Multimodal dialogue modeling
Multimodal fusion and representation
Multimodal interactive applications
Speech behaviors in social interaction
System components and multimodal platforms
Visual behaviors in social interaction
Virtual/augmented reality and multimodal interaction
Important Dates:
Abstract Submission May 1, 2019
Final submissions May 7, 2019
Paper rebuttal due June 25, 2019
Author notification July 7, 2019
Paper Camera Ready July 15, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear colleagues,
We are delighted to offer a fully funded 4-year PhD position on the ERC-funded project Computing the Face Syntax of Social Communication at the Institute of Neuroscience and Psychology, University of Glasgow, Scotland. The competition is open internationally.
Please see attached for details of the project and the application process.
Many thanks for sharing!
Best,
Dr. Rachael E. Jack, Ph.D.
Reader
Institute of Neuroscience & Psychology
School of Psychology
University of Glasgow
+44 (0) 141 5087
www.psy.gla.ac.uk/schools/psychology/staff/rachaeljack/
Apologies for cross-posting
***********************************************************************************
ICMI 2019: Call for Workshops
https://icmi.acm.org/2019/index.php?id=CfW
Workshop proposal submission extended: Sunday, February 24, 2019
***********************************************************************************
Call for Workshops
The International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, Jiangsu, China, during October 14-18, 2019. ICMI is the premier international conference for multidisciplinary research on multimodal human-human and human-computer interaction analysis, interface design, and system development. The theme of the ICMI 2019 conference is Multi-modal Understanding of Multi-party Interactions. ICMI has developed a tradition of hosting workshops in conjunction with the main conference to foster discourse on new research, technologies, social science models and applications. Examples of recent workshops include:
Multi-sensorial Approaches to Human-Food Interaction
Group Interaction Frontiers in Technology
Modeling Cognitive Processes from Multimodal Data
Human-Habitat for Health
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
Investigating Social Interactions with Artificial Agents
Child Computer Interaction
Multimodal Interaction for Education
We are seeking workshop proposals on emerging research areas related to the main conference topics, and those that focus on multi-disciplinary research. We would also strongly encourage workshops that will include a diverse set of keynote speakers (factors to consider include: gender, ethnic background, institutions, years of experience, geography, etc.).
The format, style, and content of accepted workshops are under the control of the workshop organizers. Workshops may be half-day or full-day in duration. Workshop organizers will be expected to manage the workshop content, be present to moderate the discussion and panels, invite experts in the domain, and maintain a website for the workshop. Workshop papers will be indexed by ACM.
Submission
Prospective workshop organizers are invited to submit proposals in PDF format (max. 3 pages). Please email proposals to the workshop chairs: Hongwei Ding (hwding@sjtu.edu.cn), Carlos Busso (busso@utdallas.edu) and Tadas Baltrusaitis (tadyla@gmail.com). The proposal should include the following:
Workshop title
List of organizers including affiliation, email address, and short biographies
Workshop motivation, expected outcomes and impact
Tentative list of keynote speakers
Workshop format (by invitation only, call for papers, etc.), anticipated number of talks/posters, workshop duration (half-day or full-day) including tentative program
Planned advertisement means, website hosting, and estimated participation
Paper review procedure (single/double-blind, internal/external, solicited/invited-only, pool of reviewers, etc.)
Paper submission and acceptance deadlines
Special space and equipment requests, if any
Important Dates:
Workshop proposal submission extended: Sunday, February 24, 2019
Notification of acceptance: Saturday, March 2, 2019
Workshop Date: Monday, October 14, 2019
Best regards,
Social Media Chair ICMI 2019
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Dear Colleagues,
We would like to invite you to contribute a chapter for the upcoming volume
entitled “Neural and Machine Learning for Emotion and Empathy Recognition:
Experiences from the OMG-Challenges” to be published by the Springer Series
on Competitions in Machine Learning. Our book will be available by the end
of 2019.
Website: https://easychair.org/cfp/OMGBook2019
A short description of our volume follows:
Emotional expression perception and categorization are extremely popular in
the affective computing community. However, most research in this field does
not consider the inclusion of emotions in an agent's decision-making process.
Treating emotion expressions as the final goal, although necessary, reduces
the usability of such solutions in more complex scenarios. To create a
general affective model that can serve as a modulator for learning different
cognitive tasks, such as modeling intrinsic motivation, creativity, dialog
processing, grounded learning, and human-level communication, instantaneous
emotion perception cannot be the pivotal focus.
This book aims to present recent contributions to multimodal emotion
recognition and empathy prediction that take into consideration the
long-term development of affective concepts. To this end, we provide
access to two datasets: the OMG-Emotion Behavior Recognition and
OMG-Empathy Prediction datasets. These datasets were designed, collected
and formalized for the OMG-Emotion Recognition Challenge and the
OMG-Empathy Prediction Challenge, respectively. All participants in our
challenges are invited to submit their contributions to our book. We also
invite interested authors to use our datasets in the development of
inspiring and innovative research on affective computing. By collecting
these solutions and editing this book, we hope to inspire further research
in affective and cognitive computing over longer timescales.
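To make "recognition over longer timescales" concrete, here is one hedged sketch of frame-by-frame continuous valence regression with a small recurrent model. The feature dimensions and data are invented placeholders, not the OMG data formats, which are documented on the dataset websites listed later in this call:

import torch
import torch.nn as nn

class ValenceRegressor(nn.Module):
    # Toy recurrent model: per-frame features in, per-frame valence out.
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.gru(x)               # (batch, time, hidden)
        return self.head(out).squeeze(-1)  # (batch, time) valence trace

model = ValenceRegressor()
frames = torch.randn(2, 300, 128)  # 2 clips x 300 frames x invented features
print(model(frames).shape)         # torch.Size([2, 300])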
TOPICS OF INTEREST
The topics of interest for this call for chapters include, but are not
limited to:
- New theories and findings on continuous emotion recognition
- Multi- and Cross-modal emotion perception and interpretation
- Novel neural network models for affective processing
- Lifelong affect analysis, perception, and interpretation
- New neuroscientific and psychological findings on continuous emotion
representation
- Embodied artificial agents for empathy and emotion appraisal
- Machine learning for affect-driven interventions
- Socially intelligent human-robot interaction
- Personalized systems for human affect recognition
- New theories and findings on empathy modeling
- Multimodal processing of empathetic and social signals
- Novel neural network models for empathy understanding
- Lifelong models for empathetic interactions
- Empathetic Human-Robot-Interaction Scenarios
- New neuroscientific and psychological findings on empathy representation
- Multi-agent communication for empathetic interactions
- Empathy as a decision-making modulator
- Personalized systems for empathy prediction
Each contributed chapter is expected to present a novel research study, a
comparative study, or a survey of the literature.
We also expect each contributed chapter to make use of at least one of our
datasets: the OMG-Emotion and the OMG-Empathy.
SUBMISSIONS
All submissions should be done via EasyChair:
https://easychair.org/cfp/OMGBook2019
Original artwork and a signed copyright release form will be required for
all accepted chapters. For author instructions, please visit:
https://www.springer.com/us/authors-editors/book-authors-editors/resources-…
We would also like to announce that our two datasets, covering emotion
expressions and empathy prediction, are now fully available. You can access
them and find more information on their websites:
- OMG-EMOTION -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_emotion.html
- OMG-EMPATHY -
https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy.html
If you want more information, please do not hesitate to contact me:
barros@informatik.uni-hamburg.de
IMPORTANT DATES:
- Submission of abstracts: 29 March 2019
- Submission of full-length chapters: 29 March 2019
- Notification of final editorial decisions: 31 May 2019
- Submission of revised chapters: 8 July 2019
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
http://www.pablobarros.net
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/