Dear all, apologies if you regard this as noise, but I decided to forward this post from a fellow academic, though I cropped out the relevant faces to save bandwidth. There may be some facial comparison experts on the list who would like to take a look. My approach would be wisdom of the crowds, since there can never be certainty.
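For what it's worth, a minimal sketch of how such crowd judgments could be pooled, assuming each respondent gives an independent same/different verdict; the counts below are hypothetical, and the point is that the output is a graded proportion with an interval, never a categorical answer:

```python
import math

def pool_judgments(verdicts):
    """Pool independent same/different verdicts (True = 'same person').

    Returns the proportion answering 'same' plus a 95% Wilson score
    interval, reflecting that the crowd yields confidence, not certainty.
    """
    n = len(verdicts)
    p = sum(verdicts) / n
    z = 1.96  # 95% confidence
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return p, (centre - half, centre + half)

# Hypothetical example: 14 of 20 list members judge the pair 'same'
print(pool_judgments([True] * 14 + [False] * 6))
```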
Peter.
Dear face-research group,
Please forgive this unsolicited note. An academic colleague at UCL thought you might be able to advise on the two women in the attached pictures: is the woman standing to the left of the seated man in uniform (as you look at the picture) the same person as the married woman in the other image?
I hope you don't mind my writing, but it would be reassuring to know. The married woman in the twosome is Edith Thompson. My book on her was published by Hamish Hamilton and Penguin in 1988 / 1990, and again in 2001. I am currently working on a digital database about the case and the 1920s. The photograph of the threesome derives from her younger brother's estate but does not identify any of the people in it.
If it turned out that the young woman in the threesome is indeed Edith Thompson, assuming that any kind of near-certainty is possible in face recognition, that would throw valuable light on her tragically short life, and possibly open up a new area of research.
Thank you for bearing with me and apologies again for thus trespassing on your time. I will of course fully understand if you are too busy to deal with this query.
Best wishes,
René Weis <r.weis(a)ucl.ac.uk>
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
Dear fellow face researchers,
Due to mixed messages from the grant bodies, I am trying to get an idea of how successful grant applications on face recognition have been with ESRC versus BBSRC - particularly applications based on face recognition abilities / prosopagnosia which did NOT include any neuroimaging (note: MRC is not an option in my case).
If you have had experience with either of these, I would be *hugely* grateful if you could email me privately (j.davies-thompson(a)nottingham.ac.uk) with the grant body you were successful with (or perhaps got turned down by because the application was not 'social' or 'biological' enough), and the general topic of your grant.
Many thanks!
Jodie
PhD Studentship: Models of Human Face Perception
We have a studentship available for UK/EU citizens to work in the Face Research Lab with Professor Peter Hancock. The aim is to work on a computer model of human face perception and recognition. Such a model should show characteristics of human perception, for example being very good at recognising familiar faces but rather poor with unfamiliar ones, yet still able to derive things like age, sex, race and expression. The student will join a much larger project, FACER2VM, whose aim is to improve the state of the art in computer face recognition 'in the wild'. The student will work with two postdocs who are studying human face recognition; the aim of this studentship is to further our understanding of how humans may do it. It is explicitly not a hard-core, squeeze-the-best-you-can-out-of-a-deep-neural-network project.
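Purely as an illustration of the flavour of model meant here, and not the project's actual approach: a classic PCA 'face space' in which familiarity amounts to proximity to stored exemplars, so identities with many stored images are matched more reliably than faces seen once. All data below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for vectorised face images (flattened pixels)
train = rng.normal(size=(200, 1024))       # images of known people
labels = rng.integers(0, 20, size=200)     # 20 familiar identities

# Build a PCA face space from the training images
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:50]                       # keep 50 dimensions

def encode(img):
    return components @ (img - mean)

# Familiarity = closeness to the nearest stored exemplar;
# identification = label of that exemplar
gallery = np.array([encode(img) for img in train])

def identify(probe):
    d = np.linalg.norm(gallery - encode(probe), axis=1)
    return labels[np.argmin(d)], d.min()   # (identity, distance)

# A noisy new image of a familiar person is still matched closely
probe = train[0] + rng.normal(scale=0.1, size=1024)
print(identify(probe))
```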
The successful candidate will need good programming skills, for example in Matlab or Python. Ideally they will also already be familiar with the psychology of human face perception.
The studentship is available for three years and includes a tax-free stipend of approximately £14,553 p.a. Tuition fees will be met by the University at the home/EU rate. Subject to a satisfactory progress review at the end of the first year, the studentship will be renewed for a second year and thereafter for a third year.
The studentship will have an anticipated registration date of 1 October 2017.
Informal enquiries to Peter Hancock (pjbh1(a)stir.ac.uk) or Linda Cullen (linda.cullen(a)stir.ac.uk, Tel: +44 (0) 1786 466854).
Please submit a CV and research proposal via the online application, selecting 'Research Degree in Psychology':
http://www.stir.ac.uk/postgraduate/how-to-apply/
Once you have started the application process, please email research.admissions(a)stir.ac.uk to ask to be exempted from the 'find-a-supervisor' process.
Closing date: 28th July
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
Unique PhD opportunity in Face Recognition at UNSW Sydney **Deadline for Expressions of Interest 20 July!**
=====================================================================================
We are seeking a talented and enthusiastic PhD student to join our face recognition team at UNSW Sydney. The position is open to both Australian and International applicants, and we are inviting applications from graduates in all areas of cognitive science.
The PhD will be funded by UNSW's Scientia Scholarship scheme, which seeks scholars with a strong commitment to making a difference in the world. Successful applicants will be funded for 4 years with a stipend of $40k per annum (plus an additional $10k p.a. travel and support package). In addition, international students will be awarded a tuition fee scholarship for 4 years. UNSW Scientia Scholars will also benefit from enhanced professional and career development opportunities; for more information visit: http://www.2025.unsw.edu.au/apply/
The successful applicant will be supervised by cognitive psychologists David White, Richard Kemp and Alice Towler from the UNSW Forensic Psychology Group (http://forensic.psy.unsw.edu.au/). They will conduct original research that complements our ongoing interest in people with superior abilities in face identification. Our group has many active research partnerships with leading academics as well as government and industry experts in this field, and we expect the research project to benefit from these linkages.
We invite applications from graduates in psychology and those with complementary backgrounds in other areas of cognitive science. A computer science background is not essential. We are also very interested to hear from applicants with experience in government and private research sectors.
For more information on the project, and to submit your Expression of Interest please visit the following webpage (before 20 July only): http://www.2025.unsw.edu.au/apply/scientia-phd-scholarships/fusing-human-ex…
For informal queries please contact: david.white(a)unsw.edu.au
2nd Call For Papers
Apologies for unintended cross-posting
=================================================================================
IHCI 2017: 9th International Conference on Intelligent Human-Computer
Interaction
Evry, France, December 11-13, 2017, http://ihci2017.sciencesconf.org
=================================================================================
The 9th International Conference on Intelligent Human-Computer
Interaction (IHCI 2017) will be held in Evry, near Paris, France, from
11 to 13 December 2017.
IHCI allows researchers and practitioners to exchange recent results
in the area of human-computer interaction, related technologies
(including signal processing, multimodal analysis, artificial
intelligence, machine learning and cognitive modelling) and their
applications. The conference will bring together researchers from
academia, industry and research organizations from various disciplines,
around theoretical, practical and application-oriented contributions.
This year, along with the usual topics, IHCI 2017 will focus on human
cognition modelling for interaction, including human cognitive process
modelling (for task analysis...), human-robot interaction (for companion
robots...), and cognition for interaction in virtual worlds (for autonomous
conversational agents...).
Keynotes will be given by:
- Pr. Alain Berthoz, Honorary Professor at Collège de France, member of
the French Academy of Science and Academy of Technology, on "Simplexity
and vicariance. On human cognition principles for man-machine interaction"
- Pr. Mohamed Chetouani, Professor at Pierre and Marie Curie University,
France, on "Interpersonal Human-Human and Human-Robot Interactions",
- Pr. Antti Oulasvirta, Associate Professor at Aalto University,
Finland, on "Can Machines Design? Optimizing User Interfaces for Human
Performance".
The IHCI topics include but are not limited to:
Human Cognition Modelling:
- Cognitive models of intelligence
- Modelling perceptual processes
- Modelling of learning and thinking
- Modelling of memory
- Cognitive task analysis
User adaptation and Personalization:
- Adaptive learning
- Affective computing for adaptive interaction
- Reinforcement learning
Brain Computer Interfaces:
- Brain computer integration
- Brain activity understanding for interaction
Machine Perception of Humans:
- Speech detection and recognition
- Natural language processing
- Face and emotion detection
- Body sensors and communication
- Gesture recognition
- Human motion tracking
Tactile interfaces:
- Haptics fundamentals
- Haptic feedback for interaction
- Haptic feedback for robot collaboration
Human-Robot Interaction and collaboration:
- Collaborative learning
- Collaborative systems
- Temporal coordination modelling
Applications:
- Natural User Interfaces
- Human-robot interaction
- Virtual and augmented reality
- Remote and face-to-face collaboration
- Embodied conversational agents
- Mobile interfaces
- Interface design for accessibility and rehabilitation
- Interaction and cognition for education
- Health
- Serious games
---------------------------------------------------------------------------------------
NEW: The conference proceedings will be published as an open access
volume in the Springer series Lecture Notes in Computer Science (LNCS)
and indexed in the ISI Conference Proceedings Citation Index, Scopus, EI
Engineering Index, Google Scholar, DBLP, etc. Papers can be either long
papers (10 to 12 pages) or short papers (4 to 6 pages), and must conform
to the LNCS templates (see the guidelines for authors).
---------------------------------------------------------------------------------------
Papers must be written in English and describe original work that has
not been published and is not under review elsewhere. In order to
enforce blind reviewing, papers must be made anonymous before submission
(by removing the authors' names and institutions from the header).
Important dates:
- Submission deadline: June 30, 2017 (EXTENDED)
- Decision notification: September 10, 2017 (tentative)
- Final version due: October 1st, 2017
Patrick Horain (Telecom SudParis), Chair
Catherine Achard (Université Pierre et Marie Curie), Co-Chair
Malik Mallem (Université Evry Val d'Essonne), Co-Chair
Dear all
I hope you don't mind me forwarding this query from a friend of mine who is a Sleep Clinical Physiologist. If anyone can answer her query, that would be great.
Do you know if there is a standard system which can be used to measure an individual’s facial profile, either manually or using software? Our medical photography team take photos annually of patients using non-invasive ventilation, who may experience facial changes related to mask use, and we want to find a way to quantify any changes over time. There are patients where we can subjectively see a change, but we’re looking to find a way to assess this objectively.
Looking in the literature, we can’t find anything for measuring points on the face - there are lots of papers measuring skull landmarks on cephalometric x-rays, but we’re looking for something we can do from photos to avoid irradiating patients if we can!
It sounds like the sort of thing that must already exist, but we can’t seem to find it. I wondered if there might be anything that you use in the facial recognition field that we might be able to apply?
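One possibility, offered as a hedged sketch rather than a validated clinical tool: automatic facial landmarking of the kind used in face-recognition research. Distances between landmark points, normalised by interocular distance so that camera distance and resolution cancel out, can then be compared across annual photographs. This assumes the dlib library and its standard 68-point model; the image file names are placeholders, and any clinical use would need proper validation.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard 68-point landmark model, downloadable from dlib.net
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(path):
    img = dlib.load_rgb_image(path)
    face = detector(img)[0]                 # assume one face per photo
    shape = predictor(img, face)
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=float)

def normalised_distances(pts):
    """Pairwise landmark distances scaled by interocular distance so
    that camera distance and image resolution cancel out."""
    interocular = np.linalg.norm(pts[36] - pts[45])  # outer eye corners
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    return d / interocular

# Change over time = difference between two sessions' distance matrices
change = np.abs(normalised_distances(landmarks("visit_2017.jpg"))
                - normalised_distances(landmarks("visit_2016.jpg")))
print(change.max())
```

The largest entries of the resulting change matrix indicate which landmark pairs have moved most between visits.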
Many thanks,
Trina
Dr Catriona Havard
Senior Lecturer in Psychology
The Open University
Walton Hall
Milton Keynes
MK7 6AA
Tel: 01908 654554
To see a selection of my papers click here
To find out about our new module, Counselling and Forensic Psychology: Investigating crime and therapy, click here: http://dd310.madorbad.org/
Call for papers: Vision Research Special Issue
Vision Research SI: Face perception: Experience, models and neural mechanisms
Editors: Ipek Oruc, Benjamin Balas, Michael S. Landy
Scope:
Faces are ecologically significant stimuli central to social interaction and communication. Human observers possess a remarkable ability to recall great numbers of unique facial identities encountered in a lifetime. Observers can individuate faces seemingly effortlessly based on minor differences across exemplars, yet remain robust against tremendous variation across different images of the same identity. For these and other reasons face recognition is considered to be a form of specialized perceptual expertise. The last few decades have seen a flurry of research activity delineating the limits of this expertise. For example, face expertise fails to generalize to faces of unfamiliar races ("the other-race effect") and to faces viewed in the inverted orientation ("the face inversion effect"). Despite this tremendous progress in identifying the limits of specialized face perception, there is little consensus on the origins of this specialization and the forces that shape this extraordinary skill. Some researchers emphasize genetic and innate contributions. Others stress the key role played by experience during sensitive periods of early development. Yet others argue that face expertise is a dynamic ability continually reshaped by experience well into adulthood.
The primary goal of this special issue is to bring together current research on this topic. Questions we would like to address include but are not limited to: What are the main contributors to face expertise: experiencing a large number of individual exemplars even if only during brief encounters (e.g., unfamiliar faces on a bus), or prolonged experience with a small number of faces (e.g., family interactions)? Can the other-race effect be eliminated (or even reversed)? If so, is this possible during adulthood or limited to early development? How does experience alter perceptual representations of faces and the neural mechanisms underlying face recognition? We seek research papers addressing the emergence and maintenance of face expertise across the entire life cycle, from development to adulthood and aging. Behavioural, neuroimaging, naturalistic observation and modelling approaches are all welcome.
Deadline for submission is September 15, 2017.
Prospective authors are encouraged to contact one of the editors (ipor(a)mail.ubc.ca, bjbalas(a)gmail.com, landy(a)nyu.edu) with a tentative title prior to submission.
For further information and author instructions:
https://www.journals.elsevier.com/vision-research/call-for-papers/face-perc…
_______________________________________________
Ipek Oruc
Assistant Professor
Department of Ophthalmology & Visual Sciences
University of British Columbia
Rm 4440 - 818 West 10th Avenue
Vancouver, BC V5Z 1M9
email: ipor(a)mail.ubc.ca
URL: http://www.visualcognition.ca/ipek/
Apologies for cross-postings
2nd Call for challenge participation
Train and validation data are available now!
Fifth Emotion Recognition in the Wild (EmotiW) Challenge 2017
https://sites.google.com/site/emotiwchallenge
@ ACM International Conference on Multimodal Interaction 2017, Glasgow
---------------------------------------------------------------------
The Fifth Emotion Recognition in the Wild (EmotiW) 2017 Challenge consists of
multimodal classification challenges which mimic real-world conditions.
Traditionally, emotion recognition has been performed on laboratory-controlled
data. While undoubtedly worthwhile at the time, such lab-controlled data poorly
represents the environment and conditions faced in real-world situations. With
the increase in the number of video clips online, it is worthwhile to explore
the performance of emotion recognition methods that work 'in the wild'. There
are two sub-challenges: audio-video based emotion recognition in videos, and
group-level emotion recognition in images (new).
Timeline:
Train and validation data - available now
Test data available: 8 July 2017
Last date for uploading the results: 23 July 2017
Paper submission deadline: 10 August 2017
Notification: 1 September 2017
Camera-ready papers: 21 September 2017
Organisers
Abhinav Dhall, Roland Goecke, Jyoti Joshi, Jesse Hoey and Tom Gedeon
Contact
emotiw2014(a)gmail.com
--
Abhinav Dhall, PhD
Assistant Professor,
Indian Institute of Technology Ropar
Fully Funded PhD Studentship at the University of Winchester, UK
The position is open to both UK/EU and international students
Face and voice perception – towards an understanding of multimodal processing of emotion.
Applications are invited for a 3-year, fully funded PhD position under the supervision of Prof. Maria Uther, Dr. Daniel Gill and Dr. Jordan Randell.
Our research group is seeking a PhD student to take part in an exciting study of multimodal processing of emotion. The project is led by Prof. Uther (voice perception), Dr. Gill (face perception) and Dr. Jordan Randell (emotions and experimental methods) in the Department of Psychology.
Research on the detection of emotion has historically focused on a single modality (faces or voices). This PhD provides an ideal opportunity to explore the complementarity of visual (face) and auditory (voice) input in the perception of emotions, and the interaction of information from both modalities. The project will involve innovative EEG/ERP and behavioural methods. Previous experience with EEG/ERP studies is not necessary. Training in computational and neuroscientific approaches will allow the student to develop practical and technical skills in this field.
Requirements
- Master's degree in Psychology, Cognitive Science, Neuroscience, Biology, Computer Science, Statistics or a related discipline, with excellent results.
- Fluency in English
For further instructions prospective students are required to contact Prof. Uther (maria.uther(a)winchester.ac.uk) or Dr. Gill (daniel.gill(a)winchester.ac.uk) by email no later than May 10th, 2017.
_________________________________
Dr Daniel Gill
Department of Psychology
Room HJB205
University of Winchester
Phone (office): +44 (0)1413301677
e-mail Daniel.Gill(a)winchester.ac.uk
(Apologies for multiple postings)
--------------------------------------------------------------------------------------------------
13th International Summer Workshop on Multimodal Interfaces (eNTERFACE'17) July 03-28, 2017, Porto, Portugal.
http://artes.ucp.pt/enterface17
Call for Participation - Extended Submission Deadline: May 07, 2017 (Firm)
--------------------------------------------------------------------------------------------------
The Digital Creativity Centre (CCD), Universidade Catolica Portuguesa - School of Arts (Porto, Portugal) invites researchers from all over the world to join eNTERFACE'17, the 13th one-month Summer Workshop on Multimodal Interfaces. During this workshop, senior project leaders, researchers, and students gather in one single place to work in teams on pre-specified challenges for 4 weeks. Each team has a defined project and will address specific challenges.
Senior researchers, PhD students, or undergraduate students interested in participating in the Workshop should send their application by emailing enterface17(a)porto.ucp.pt before 7 May 2017 (extended). Guidelines for researchers applying to a project: http://artes.ucp.pt/enterface17/authors-kit/Guidelines.for.researchers.appl…
Participants must cover their own travel and accommodation expenses. Information about the venue location and stay is provided on the eNTERFACE'17 website. Note that although no scholarships are available for PhD students, there are no application fees. The list of projects that participants can choose from follows.
How to Catch A Werewolf, Exploring Multi-Party Game-Situated Human-Robot Interaction
Lead-Organizers: Catharine Oertel, KTH (PI), Samuel Mascarenhas, INESC-ID, Zofia Malisz, KTH, José Lopes, KTH, Joakim Gustafson, KTH
In this project we will focus on the implementation of the roles of the "villager" and the "werewolves" using the IrisTK dialogue framework and the Furhat robot head. More precisely, the aim of this project is to use multimodal cues to inform the theory-of-mind model that drives the robot's decision-making process (a toy sketch of this kind of inference follows the project link below). Theory of mind is a concept related to empathy: it refers to the cognitive ability to model and understand that others have beliefs and intentions different from our own. In lay terms, it can be described as putting oneself in another's shoes, and it is a crucial skill for properly playing a deception game like "Werewolf".
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_Catharine.Oer…
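As a toy illustration of the kind of theory-of-mind inference involved (not the project's actual model, which will be built with IrisTK and Furhat), here is a Bayesian update of the robot's belief that a player is a werewolf as multimodal cues arrive; all cue likelihoods are invented for the example:

```python
# Toy Bayesian theory-of-mind update: the robot maintains, for each
# player, P(role = werewolf) and revises it as multimodal cues arrive.
# Likelihoods are illustrative guesses, not measured values.

CUE_LIKELIHOODS = {
    # cue: (P(cue | werewolf), P(cue | villager))
    "averted_gaze":    (0.60, 0.30),
    "long_hesitation": (0.50, 0.25),
    "accuses_quickly": (0.45, 0.30),
}

def update_belief(p_wolf, cue):
    """One Bayes step: P(wolf | cue) from P(wolf) and cue likelihoods."""
    p_cue_wolf, p_cue_vill = CUE_LIKELIHOODS[cue]
    num = p_cue_wolf * p_wolf
    return num / (num + p_cue_vill * (1 - p_wolf))

belief = 0.25                      # prior: 2 werewolves among 8 players
for cue in ["averted_gaze", "long_hesitation"]:
    belief = update_belief(belief, cue)
print(f"P(werewolf) = {belief:.2f}")   # drives whom the robot accuses
```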
KING'S SPEECH Foreign language: pronounce with style!
Principal investigators: Georgios Athanasopoulos*, Céline Lucas* and Benoit Macq* (ICTEAM-ELEN - Université Catholique de Louvain, Belgium)
The principal investigators are developing the GRAAL1 project, which is concerned with building a set of tools to facilitate self-training in foreign language pronunciation, the first target being French. The goal of KING'S SPEECH is to develop new interaction modalities and evaluate them in combination with existing functionality, aiming to better personalize GRAAL to the tastes and specificities of each learner. This personalization will rely on a machine learning approach and an experimental set-up to be developed during eNTERFACE'17. The eNTERFACE'17 developments could be based on a karaoke scenario where the song is replaced by authentic sentences (extracts from news, films, advertisements, etc.). Applications like SingStar (Sony) or JustSing (Ubisoft) could also serve as a source of inspiration, e.g., using a smartphone as a microphone while interacting with avatars.
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_King's%20speech.pdf
The RAPID-MIX API: a toolkit for fostering innovation in the creative industries with Multimodal, Interactive and eXpressive (MIX) technology
Principal Investigators: Francisco Bernardo, Michael Zbyszynski, Rebecca Fiebrink, Mick Grierson (EAVI - Embodied AudioVisual Interaction group, Goldsmiths University of London, Computing). Team Candidates: Sebastian Mealla, Panos Papiotis (MTG/UPF - Music Technology Group, Universitat Pompeu Fabra), Carles Julia, Frederic Bevilacqua, Joseph Larralde (IRCAM - Institut de Recherche et Coordination Acoustique/Musique)
Members of the RAPID-MIX project are building a toolkit that includes a software API for interactive machine learning (IML), digital signal processing (DSP), sensor hardware, and cloud-based repositories for storing and visualizing audio, visual, and multimodal data. This API provides a comprehensive set of software components for rapid prototyping and integration of new sensor technologies into products, prototypes and performances.
We aim to investigate how developers employ and appropriate this toolkit so we can improve it based on their feedback. We intend to kickstart the online community around this toolkit with eNTERFACE participants as power users and core members, and to integrate their projects as demonstrators for the toolkit. Participants will explore and use the RAPID-MIX toolkit for their creative projects and learn workflows for using embodied interaction with sensors (a stand-in sketch of such a workflow follows the project link below).
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_RAPID-MIX.pdf
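For readers unfamiliar with interactive machine learning, the workflow such a toolkit supports looks roughly like this; scikit-learn is used purely as a stand-in, since the RAPID-MIX API itself is not reproduced here, and the sensor values and synth parameters are invented:

```python
# Typical IML loop: record a few sensor->output demonstrations, train
# instantly, then map live sensor input to expressive parameters.
# scikit-learn stands in for the RAPID-MIX regression components.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Demonstrations recorded by the user: accelerometer pose -> synth params
X = np.array([[0.0, 0.0, 1.0],      # hand flat
              [1.0, 0.0, 0.0],      # hand tilted right
              [0.0, 1.0, 0.0]])     # hand tilted forward
y = np.array([[220.0, 0.1],         # (pitch in Hz, filter cutoff)
              [440.0, 0.5],
              [880.0, 0.9]])

# Distance-weighted neighbours interpolate between the demonstrations
model = KNeighborsRegressor(n_neighbors=3, weights="distance").fit(X, y)

# Live loop (a single frame shown): map a new pose to sound parameters
pitch, cutoff = model.predict([[0.5, 0.2, 0.6]])[0]
print(pitch, cutoff)
```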
Prynth
Principal investigator: Ivan Franco (IDMIL / McGill University)
Prynth is a technical framework for building self-contained programmable synthesizers, developed by Ivan Franco at the Input Devices and Music Interaction Lab (IDMIL) of McGill University. The goal of this new framework is to support the rapid development of a new breed of digital synthesizers and their respective interaction models.
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_prynth.pdf
End-to-End Listening Agent for Audio-Visual Emotional and Naturalistic Interactions
Principal Investigators: Kevin El Haddad (TCTS Lab - numediart institute - University of Mons, Belgium), Yelin Kim (Inspire Lab - University at Albany, State University of New York, USA), Hüseyin Çakmak (TCTS Lab - numediart institute - University of Mons, Belgium)
In this project, we aim to build a listening agent that reacts to a user with naturalistic, human-like behavior, using nonverbal expressions. The agent's behavior will be modeled by and built on three main components: recognizing and synthesizing emotional and nonverbal expressions, and predicting the next expression to synthesize based on the currently recognized expressions (a minimal sketch of this prediction step follows the project link below). Its behavior will be rendered on a previously developed avatar, which will also be improved during this workshop. At the end we should obtain functioning and efficient modules which should ideally work in real time.
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_listening%20a…
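A minimal sketch of just the prediction component, assuming a simple first-order Markov model over recognized expression labels (the project itself may well use something richer); the example sequence is invented:

```python
from collections import Counter, defaultdict

# First-order Markov predictor: count observed transitions between
# recognized expression labels, then propose the likeliest next one.
transitions = defaultdict(Counter)

def observe(prev_expr, next_expr):
    transitions[prev_expr][next_expr] += 1

def predict_next(current_expr):
    counts = transitions[current_expr]
    return counts.most_common(1)[0][0] if counts else "neutral"

# Train on a recognized sequence from the user, then drive the avatar
sequence = ["neutral", "smile", "laugh", "smile", "neutral", "smile"]
for a, b in zip(sequence, sequence[1:]):
    observe(a, b)
print(predict_next("smile"))   # expression for the agent to synthesize
```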
Cloud-based Toolbox for Computer Vision
Principal investigator: Dr. Sidi Ahmed MAHMOUDI, Faculty of Engineering, University of Mons, Belgium. Candidates: Dr. Fabian LECRON, PhD, Faculty of Engineering, University of Mons, Belgium; Mohammed Amin BELARBI, PhD Student, Faculty of Exact Sciences and Mathematics, University of Mostaganem, Algeria; Mohammed EL ADOUI, PhD Student, Faculty of Engineering, University of Mons, Belgium; Abdelhamid DERRAR, Master's student, University of Lyon, France; Pr. Mohammed BENJELLOUN, PhD, Faculty of Engineering, University of Mons, Belgium; Pr. Said MAHMOUDI, PhD, Faculty of Engineering, University of Mons, Belgium.
Nowadays, images and videos are everywhere; they can come directly from cameras and mobile devices, or from other people who share their images and videos. They are used to present and illustrate different objects in a large number of situations (public areas, airports, hospitals, football games, etc.). This makes image and video processing algorithms very important tools for various domains related to computer vision, such as video surveillance, human behavior understanding, medical imaging and database (image and video) indexing methods. The goal of this project is to develop an extension of our cloud platform (MOVACP), developed during the previous edition of the workshop, eNTERFACE'16, which integrated several image and video processing applications. Users of this platform can use these methods without having to download, install and configure the corresponding software. Each user can select the required application, load their data and retrieve results, in an environment similar to a desktop. Within the eNTERFACE'17 workshop, we would like to improve the platform and develop four main tools for it: 1. Integration of the major image and video processing algorithms that guests could use to build their own applications. 2. Integration of machine learning methods (used for image and video indexing) that exploit the data uploaded by users (if they accept, of course) in order to improve the precision of results. 3. Fast treatment of data acquired from distant IoT systems. 4. Development of an online 3D viewer that can be used for the visualization of 3D reconstructed medical images.
Keywords: cloud computing, image and video processing, video surveillance, medical imaging.
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_CMP.pdf
Across the virtual bridge
Project Coordinators: Thierry RAVET (software design, motion signal processing, machine learning), Fabien GRISARD (software design, human-computer interface), Ambroise MOREAU (computer vision, software design), Pierre-Henri DE DEKEN (software design, game engine) - Numediart Institute, University of Mons, Belgium.
The goal of the project is to explore different ways of creating interactions between people evolving in the real world (local players) and people evolving in a virtual representation of the same world (remote players). The virtual world will be explored through a virtual reality headset, while local players will be geolocated through an app on a mobile device. Actions executed by remote players will be perceived by local players in the form of sound or visual content, and actions performed by local players will impact the virtual world as well. Local players and remote players will be able to exchange information with each other.
Keywords: virtual world, mixed reality, computer-mediated communication.
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_AcrossTheVirtu…
ePHoRt project: A telerehabilitation system for reeducation after hip replacement surgery
Principal investigators: Yves Rybarczyk (Nova University of Lisbon, Portugal), Arián Aladro (Universidad de las Américas, Ecuador), Mario Gonzalez (Health and Sport Science, University of Zaragoza, Spain), Santiago Villarreal (Universidad de las Américas, Quito, Ecuador), Jan Kleine Detersa (University of Twente, Human Media Interaction)
This project aims to develop a web-based system for the remote monitoring of rehabilitation exercises in patients after hip replacement surgery. The tool intends to facilitate and enhance motor recovery, because patients will be able to perform the therapeutic movements at home and at any time. As in any rehabilitation program, the time required to recover is significantly diminished when the individual has the opportunity to practice the exercises regularly and frequently. However, the condition of such patients makes travel to and from medical centres difficult, and many of them cannot afford a private physiotherapist. Thus, low-cost technologies will be used to develop the platform, with the aim of democratizing access to it. For instance, the motion capture system will be based on the Kinect camera, which provides a good compromise between accuracy and price. The project will be divided into four main stages. First, the architecture of the web-based system will be designed. Three different user interfaces will be necessary: (i) one to record quantitative and qualitative data from the patient, (ii) one for the therapist to consult the patient's performance and adapt the exercises accordingly, and (iii) one for the physician providing medical supervision of the recovery process. Second, it will be essential to develop a module that performs automatic assessment and validation of the rehabilitation activities, in order to provide real-time feedback to the patient regarding the correctness of the executed movements (a sketch of one such check follows the project link below). Third, we also intend to make use of serious games and affective computing approaches, with the intention of motivating the user to perform the exercises for a sustained period of time. Finally, an ergonomic study will be carried out in order to evaluate the usability of the system.
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_Full_proposal…
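To give a flavour of the automatic assessment stage, a sketch of one plausible check, assuming Kinect-style 3D joint positions: compute the hip flexion angle from three joints and flag movements outside a prescribed range. The coordinates and the safe range here are illustrative only, not clinical guidance.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    u = np.asarray(a) - np.asarray(b)
    v = np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Illustrative Kinect skeleton positions (metres): shoulder, hip, knee
shoulder, hip, knee = [0.0, 1.4, 0.0], [0.0, 0.9, 0.0], [0.3, 0.5, 0.0]

flexion = 180.0 - joint_angle(shoulder, hip, knee)  # 0 = standing straight
SAFE_RANGE = (0.0, 90.0)  # e.g. no hip flexion beyond 90 deg post-surgery

if SAFE_RANGE[0] <= flexion <= SAFE_RANGE[1]:
    print(f"OK: hip flexion {flexion:.0f} deg")
else:
    print(f"Warning: hip flexion {flexion:.0f} deg outside prescribed range")
```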
Big Brother, can you find, classify, detect and track us?
Principal investigators: Marc Décombas, Jean Benoit Delbrouck (TCTS Lab - University of Mons, Belgium)
In this project, we will build a system that can detect and recognize objects or humans in video and describe them as fully as possible. Objects may be moving, as may the people coming in and out of the visual field of the camera(s). Our project will be split into three main tasks: detection and tracking, people re-identification, and image/video captioning.
The system should work in real time and should be able to detect people and follow them, re-identify them when they come back into the field, and give a textual description of what each person is doing (a sketch of the first task follows the project link below).
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_BigBrother.pdf
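A hedged sketch of the first task only (detection and tracking), using OpenCV's stock HOG person detector plus naive nearest-centroid matching; the video file name is a placeholder, and a real system would substitute stronger detectors and a proper re-identification model:

```python
import cv2
import numpy as np

# Stock HOG + linear SVM person detector shipped with OpenCV
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("input.mp4")   # placeholder file name
tracks = {}                            # track id -> last centroid
next_id = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    centroids = [(x + w / 2, y + h / 2) for (x, y, w, h) in boxes]
    # Naive tracking: match each detection to the nearest existing track
    for c in centroids:
        dists = {i: np.hypot(c[0] - p[0], c[1] - p[1])
                 for i, p in tracks.items()}
        if dists and min(dists.values()) < 50:       # pixel threshold
            tracks[min(dists, key=dists.get)] = c    # update that track
        else:
            tracks[next_id] = c                      # new person enters
            next_id += 1
cap.release()
print(f"{next_id} people observed")
```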
Networked Creative Coding Environments
Principal investigator: Andrew Blanton, Digital Media Art at San Jose State University
As part of ongoing research, Andrew Blanton will present a workshop on using Amazon Web Services for the creation of networked art. The workshop will demonstrate sending data from Max/MSP to a Unix-based Amazon Web Services server and receiving data into p5.js via WebSockets. The workshop will explore the critical discourse surrounding data as a borderless medium and the ideas and potentials of using a medium that can have global reach.
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.FinalProposal_NCCE.pdf
AUDIOVISUAL EXPERIENCE THROUGH IMAGE HOLOGRAPHY
Principal investigators: Maria Isabel Azevedo (ID+ Research Institute for Design, Media and Culture, University of Aveiro), Elizabeth Sandford-Richardson (University of the Arts, Central Saint Martins College of Art and Design)
Today in interactive art, there are not only representations that speak of the body but actions and behaviours that involve the body. In digital holography, the image appears and disappears from the observer's field of vision; because the holographic image is light, we can see multidimensional spaces, shapes and colours existing at the same time, presence and absence of the image on the holographic plate. The image can float in front of the plate, so that people sometimes try to touch it with their hands.
For the viewer, these are interactive events with no beginning or end, which can be perceived in any direction, forward or backward, depending on the viewer's relative position and the time spent in front of the hologram.
In this workshop we are proposing an audiovisual interactive installation composed of four digital holograms and a spatial soundscape. When viewers move in front of each hologram, different sources of sound are triggered. The outcome will be presented in the last week of July with an invited performer. We are looking for sound designers and interaction programmers.
Keywords: Digital holographic image, holographic performance, sound spatialization, motion capture
Full Project Description: http://artes.ucp.pt/enterface17/proposals/02.FinalProposal_holo.pdf
Study of the reality level of VR simulations
Principal investigators: Andre Perrotta, UCP/CITAR
We propose to develop a VR simulation, based on 360° video, spatialized audio and force feedback using fans and motors, of near-collision experiences with large vehicles from a first-person perspective, to be experienced by users wearing head-mounted stereoscopic VR gear in a MOCAP (motion capture) enabled environment that provides a one-to-one mapping between real and virtual worlds.
**Please forward to anyone who might be interested**
Apologies for cross-posting
eNTERFACE'17
the 13th Intl. Summer Workshop on Multimodal Interfaces
Porto, Portugal, July 3rd - 28th, 2017
Call for Participation - Deadline: May 2, 2017
The eNTERFACE 2017 Workshop is being organized this summer in Porto, Portugal, from July 3rd to 28th, 2017. The Workshop will be held at the Digital Creativity Centre (http://artes.ucp.pt/ccd), Universidade Catolica Portuguesa.
The eNTERFACE Workshops present an opportunity for collaborative research and software development by gathering, in a single place, a team of senior project leaders in multimodal interfaces, PhD students, and (undergraduate) students to work on a pre-specified list of challenges for the duration of four weeks. Participants are organized in teams assigned to specific projects. The ultimate goal is to make this event a unique opportunity for students and experts all over the world to meet and effectively work together, so as to foster the development of tomorrow's multimodal research community.
Senior researchers, PhD students, or undergraduate students interested in participating in the Workshop should send their application by emailing the Organizing Committee at enterface17(a)porto.ucp.pt on or before May 2, 2017. The application should contain:
- A short CV
- A list of three preferred projects to work on
- A list of skills to offer for these projects.
Participants must cover their own travel and accommodation expenses. Information about the venue location and stay is provided on the eNTERFACE'17 website (http://artes.ucp.pt/enterface17). Note that although no scholarships are available for PhD students, there are no application fees.
eNTERFACE'17 will welcome students, researchers, and seniors, working in teams on the following projects:
#01
How to Catch A Werewolf, Exploring Multi-Party Game-Situated Human-Robot Interaction
#02
KING'S SPEECH Foreign language: pronounce with style!
#03
The RAPID-MIX API: a toolkit for fostering innovation in the creative industries with Multimodal, Interactive and eXpressive (MIX) technology
#04
Prynth
#05
End-to-End Listening Agent for Audio-Visual Emotional and Naturalistic Interactions
#06
Cloud-based Toolbox for Computer Vision
#07
Across the virtual bridge
#08
ePHoRt project: A telerehabilitation system for reeducation after hip replacement surgery
#09
Big Brother can you find, classify, detect and track us ?
#10
Networked Creative Coding Environments
#11
Study of the reality level of VR simulations*
#12
Audiovisual Experience Through Digital Art Holography*
The full detailed description of the projects is available at http://artes.ucp.pt/enterface17/Call.for.participation_eNTERFACE17.html
Dear all,
The School of Psychology at the University of Plymouth has three fully funded full-time PhD studentships (3 years) and two part-time studentships (5 years, with teaching duties). Full details of all projects are available here:
https://www.plymouth.ac.uk/schools/psychology/phd-studentships
Of interest to this list, there are two projects on offer involving the development of face processing in childhood and face processing in social anxiety. Details of these projects are available via the link above, but note that the descriptions are quite embryonic, to allow candidates to discuss their own direction with the supervisors.
If you know of any MSc students who are looking to do a PhD in face processing, please do forward these details to them, or ask them to contact me directly (chris.longmore(a)plymouth.ac.uk).
Queries about funding for non-UK applicants should be sent to Prof. Chris Mitchell (christopher.mitchell(a)plymouth.ac.uk), PG tutor, for the full-time positions, or to Dr Jeremy Goslin (jeremy.goslin(a)plymouth.ac.uk) for the part-time positions.
Thanks,
Chris
--
Dr Chris Longmore
Admissions Tutor
School of Psychology
Faculty of Health and Human Sciences
Plymouth University
Drake Circus
Plymouth
PL4 8AA
Tel: +44 (0)1752 584890
Fax: +44 (0)1752 584808
Email: chris.longmore(a)plymouth.ac.uk
Hello all
The Department of Psychology at Bournemouth University is currently advertising permanent Lectureships/Senior Lectureships in face processing. Please see the advert below:
http://www.jobs.ac.uk/job/AXR916/senior-lecturer-lecturer-academic-in-psych…
Many thanks
Sarah
Dear colleagues
We are excited to announce a 4-year, fully ESRC-funded MSc+PhD position on the project Building a Culturally Flexible Generative Model of Face Signalling for Social Robots at the Institute of Neuroscience & Psychology, University of Glasgow, UK, in collaboration with Dimensional Imaging (DI4D), Glasgow, UK.
We are looking for someone with multidisciplinary experience, and so are casting a very wide net. See the findaphd.com advert here: https://tinyurl.com/maw9fad
I have also attached the advert in PDF and JPEG format. I would be very grateful if you could share this advert in your departments and with any other persons or groups you think might be interested or have good connections.
See also my posts on FB and Twitter (@rachaelejack). I would be very grateful if you could RT and share.
Many thanks!
Rachael E. Jack, Ph.D., RSE YAS
Lecturer
Institute of Neuroscience & Psychology
School of Psychology
http://www.gla.ac.uk/schools/psychology/staff/rachaeljack/
Member of the RSE Young Academy of Scotland
The FOX research group (http://cristal.univ-lille1.fr/FOX) of the CRIStAL laboratory (http://www.cristal.univ-lille.fr) (UMR CNRS 9189), France, is looking for a promising candidate to work in the field of Human Behavior Analysis from video under unconstrained settings.
The recognition and prediction of people's behaviour from videos are major concerns in the field of computer vision. A specific class of behavior analysis, facial expression recognition, attracts a lot of attention from researchers and industry in various fields.
State-of-the-art solutions work fine in controlled environments, where expressions are exaggerated and the subject's head stays still, but as soon as the subject moves freely and expressions are natural, the performance of existing systems drops considerably. This observation is confirmed by performance evaluations conducted on new datasets (such as RECOLA and GEMEP) where the acquisition conditions are similar to natural interaction settings.
We are looking for a PhD candidate to study and propose algorithms that analyse human behavior from video in unconstrained environments.
http://sujets-these.lille.inria.fr/details.html?id=a1382de1c76647509ef9e25c…
== Required expertise
Strong preference will be given to candidates with experience in Computer Vision and Pattern Recognition and a good knowledge of written and oral English. A background in motion analysis would be appreciated.
Applicants are expected to have a strong background in Computer Science. Strong programming skills (C or C++) are a plus. French language skills are not required; English is mandatory.
The thesis will start October 1st in Lille.
Applications must be sent by email to Prof. Ch. Djeraba (chabane.djeraba(a)univ-lille1.fr) and I.M. Bilasco (marius.bilasco(a)univ-lille1.fr), Subject: [PhD Position].
They must contain a statement of interest, a CV, a list of publications, if any, and the names of two references.
Please contact Prof. Ch. Djeraba (chabane.djeraba(a)univ-lille1.fr) or I.M. Bilasco (marius.bilasco(a)univ-lille1.fr) for more information.
—
Ioan Marius BILASCO
MCF Univ Lille 1
Centre de Recherche en Image, Signal et Automatique (CRIStAL)
Équipe FOX - Groupe IMAGE
Bureau 336, Bât M3 Ext
Cité Scientifique
59650 Villeneuve d'Ascq Cedex - France
email: marius.bilasco(a)univ-lille1.fr
http://www.cristal.univ-lille.fr/~bilasco
phone: (+33) (0)3 20 43 41 88 / (0)3 62 53 15 84
fax: (+33) (0)3 28 77 85 37
Trust in CNRS's certificates: http://igc.services.cnrs.fr/Doc/General/trust.html
I asked about the relevance of the Image and Vision Computing (IVC) Special Issue on Biometrics in the Wild to Psychologists. This was Vito's response:
It may be important to provide some context first. The special issue builds on a workshop that we are organizing as part of the 2017 edition of the IEEE Conference on Automatic Face and Gesture Recognition (http://luks.fe.uni-lj.si/bwild17/), but is open to everyone interested in contributing. While we issued a call for biometrics in general, we expect most submitted papers to be face and (maybe gesture) related - due to our connection to AFGR.
I feel that papers on the psychology of face recognition would fit nicely into the scope of the planned special issue. Work on perceptual and cognitive aspects of face recognition and connections to recent machine learning models would certainly be interesting as well. I see a lot of work in this area that would make sense for the special issue.
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
** Apologies for cross-posting **
****************************************************
CALL FOR PAPERS
Image and Vision Computing (IVC) Special Issue on:
Biometrics in the Wild
Submission deadline: 30 June, 2017
Target publication date: April, 2018
****************************************************
** Motivation **
Biometric recognition from data captured in unconstrained settings,
commonly referred to as biometric recognition in the wild, represents a
challenging and highly active area of research. The interest in this
area is fueled by the numerous application domains that deal with
unconstrained data acquisition conditions such as forensics,
surveillance, social media, consumer electronics or border control.
While existing biometric technology has matured to the point where
excellent performance can be achieved for various tasks in ideal
laboratory-like settings, many problems related to in-the-wild scenarios
still require further research and novel ideas. The goal of this special
issue is to present the most advanced work related to biometric
recognition in unconstrained settings and introduce novel solutions to
open biometrics-related problems. Submitted papers should make a
significant contribution in terms of theoretical findings or empirical
observations, demonstrate improvements over the existing
state-of-the-art and use the most challenging datasets available.
** Topics of Interest **
We invite high-quality papers on topics related to biometric recognition
in the wild, including, but not limited to:
• Region of interest detection (alignment, landmarking) in the wild,
• Soft biometrics in the wild,
• Context-aware techniques for biometric detection and recognition,
• Novel normalization techniques,
• Multi-modal biometrics in the wild,
• Biometric recognition in the wild,
• Biometrics from facial behavior (e.g., eye movement, facial
expressions, micro-expressions),
• Biometrics based on facial dynamics,
• Novel databases and performance benchmarks,
• Ethical issues, privacy protection and de-identification,
• Spoofing and countermeasures,
• Deep learning approaches for unconstrained biometric recognition,
• Related applications, especially mobile.
** Important Dates **
Submission deadline: 30 June, 2017
Notifications to authors: 31 January, 2018
Target publication date: April, 2018
** Guest Editors **
Bir Bhanu, University of California, Riverside, United States
Abdenour Hadid, University of Oulu, Finland
Qiang Ji, Rensselaer Polytechnic Institute, United States
Mark Nixon, University of Southampton, United Kingdom
Vitomir Struc, University of Ljubljana, Slovenia
** Advisory Editors **
Rama Chellappa, University of Maryland, United States
Josef Kittler, University of Surrey, United Kingdom
For more information visit:
www.journals.elsevier.com/image-and-vision-computing/call-for-papers/specia…
--
Assist. Prof. Vitomir Štruc, PhD
Laboratory of Artificial Perception, Systems and Cybernetics
Faculty of Electrical Engineering
University of Ljubljana
Slovenia
Tel: +386 1 4768 839
Fax: +386 1 4768 316
URL: luks.fe.uni-lj.si/nluks/people/vitomir-struc/
Co-organizer: Workshop on Biometrics in the Wild 2017
http://luks.fe.uni-lj.si/bwild17
Program Co-chair: International Symposium on Image and Signal Processing and Analysis 2017
http://www.isispa.org/
Competition Co-chair: International Joint Conference on Biometrics 2017
http://www.ijcb2017.org/
Dear colleagues
I wonder if you are able to help me, please. I am a third-year Psychology student with the Open University and I am about to embark on my final-year experimental project.
My project examines the interaction between simultaneous vs. sequential presentation and neutral vs. leading phrasing on correct-rejection rates in target-absent line-ups. I have been searching high and low for short videos of non-violent crimes (I am aiming for four).
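Crossing the two factors gives a 2 x 2 design with four conditions, and the outcome of interest is the proportion of target-absent line-ups correctly rejected. A minimal Python sketch, with made-up counts purely for illustration:

    from itertools import product

    presentations = ["simultaneous", "sequential"]
    phrasings = ["neutral", "leading"]

    # The 2 x 2 factorial design yields four conditions.
    conditions = list(product(presentations, phrasings))

    def correct_rejection_rate(n_rejections, n_target_absent_trials):
        # Proportion of target-absent line-ups correctly rejected.
        return n_rejections / n_target_absent_trials

    print(conditions)
    print(correct_rejection_rate(18, 30))  # hypothetical: 18 of 30 -> 0.6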
Do you have any advice about where I could go about locating anything like that?
I really appreciate you taking the time to read my email.
Thank you
Kind regards
Bonnie Parker
Research Administrator
Department of Physics
King's College London
Strand Building S3.07 | Strand | London | WC2R 2LS
+44 (0) 20 7848 2155
bonnie.parker(a)kcl.ac.uk<mailto:bonnie.parker@kcl.ac.uk>
www.kcl.ac.uk/physics<http://www.kcl.ac.uk/physics>
**Please forward to anyone who might be interested**
Apologies for cross-posting
**************************************************************************
CfP eNTERFACE'17 Workshop: International Summer Workshop on Multimodal Interfaces
http://artes.ucp.pt/enterface17/
Final project proposal submission deadline (10 February)
http://artes.ucp.pt/enterface17/authors-kit/eNTERFACE17_Authors_Kit.pdf
**************************************************************************
eNTERFACE workshops aim to establish a tradition of collaborative, localized research and development work by gathering, in a single place, a team of senior project leaders in multimodal interfaces, researchers, and (undergraduate) students to work on a pre-specified list of challenges for 4 weeks.
Following the success of the previous eNTERFACE workshops held in Mons (Belgium, 2005), Dubrovnik (Croatia, 2006), Istanbul (Turkey, 2007), Paris (France, 2008), Genova (Italy, 2009), Amsterdam (Netherlands, 2010), Plzen (Czech Republic, 2011), Metz (France, 2012), Lisbon (Portugal, 2013), Bilbao (Spain, 2014), Mons (Belgium, 2015) and Twente (Netherlands, 2016), the Digital Creativity Centre (CCD)<http://artes.ucp.pt/ccd>, Universidade Catolica Portuguesa, is pleased to host eNTERFACE’17<http://artes.ucp.pt/enterface17>, the 13th Summer Workshop on Multimodal Interfaces, to be held in Porto, Portugal from July 3rd to 28th, 2017.
The eNTERFACE'17 committee<http://artes.ucp.pt/enterface17/Committees_eNTERFACE17.html> now invites researchers to submit project proposals, which will be evaluated by the scientific committee. All the information needed to submit a project is available on the workshop website. Proposals should contain a full description of the project's objectives, the required hardware/software, and the relevant literature.
Participants are organized in teams attached to specific projects, working on free software. Each week will typically consist of working sessions by the teams on their respective projects, plus a tutorial given by an invited senior researcher and a presentation of the results achieved by each project group. The last week will be devoted to writing an article on the results obtained by the teams, plus a final session in which all the groups will present their achievements.
Proceedings are expected to be published by CITAR Journal. CITAR Journal was recently (July 2016) accepted for inclusion in a new index of the Web of Science (WoS) Core Collection: the Emerging Sources Citation Index (ESCI), and has also been accepted for indexing by Elsevier's Scopus.
TOPICS
Although the list is not exhaustive, submitted projects can cover one or several of the following topics: Art and Technology, Affective Computing, Assistive and Rehabilitation Technologies, Assistive Technologies for Education and Social Inclusion, Augmented Reality, Conversational Embodied Agents, Human Behavior Analysis, Human Robot Interaction, Interactive Playgrounds, Innovative Musical Interfaces, Interactive Systems for Artistic Applications, Multimodal Interaction, Signal Analysis and Synthesis, Multimodal Spoken Dialog Systems, Search in Multimedia and Multilingual Documents, Smart Spaces and Environments, Social Signal Processing, Tangible and Gesture Interfaces, Teleoperation and Telerobotics, Wearable Technology, Virtual Reality
SUBMISSION PROCEDURE
The general procedure of the eNTERFACE workshop series is as follows. Researchers are invited to submit project proposals. The project proposals will be evaluated by the eNTERFACE steering committee. If accepted, the projects will be published, and researchers and students are invited to apply for up to 3 projects they would like to be part of. After notifying the applicants, the project leaders can start building their teams. Final project proposals must be submitted by email (PDF) to enterface17(a)porto.ucp.pt.
* Instructions for the final proposal: http://artes.ucp.pt/enterface17/authors-kit/eNTERFACE17_Authors_Kit.pdf
IMPORTANT DATES
* 20 January 2017: Notification of interest for a project proposal with a summary of project goals, work-packages and deliverables (1-page)
* 10 February 2017: Submission deadline: Final project proposal
* 20 February 2017: Notification of acceptance to project leaders
* 06 March 2017: Start Call for Participation, participants can apply for projects
* 21 April 2017: Call for Participation is closed
* 28 April 2017: Teams are built, notification of acceptance to participants
* 03 – 28 July 2017: eNTERFACE’17 Workshop
Scientific Committee
* Prof. Albert Ali Salah, Bogazici University, Turkey
* Prof. Alvaro Barbosa, University of Saint Joseph, Macao, China
* Prof. Andrew Perkis, Norwegian University of Science and Technology, Norway
* Prof. Antonio Camurri, University of Genova, Italy
* Prof. Benoit Macq, Université Catholique de Louvain (UCL), Belgium
* Prof. Bruce Pennycook, University of Texas at Austin, USA
* Prof. Christophe d'Alessandro, CNRS-LIMSI, France
* Dr. Daniel Erro, Cirrus Logic, Spain
* Prof. Dirk Heylen, University of Twente, Netherlands
* Prof. Gualtiero Volpe, University of Genova, Italy
* Prof. Igor S. Pandžić, University of Zagreb, Croatia
* Prof. Inma Hernaez, University of the Basque Country, Spain
* Prof. Jean Vanderdonckt, Université Catholique de Louvain (UCL), Belgium
* Prof. Jorge C. S. Cardoso, University of Coimbra, Portugal
* Prof. Khiet Truong, University of Twente, Netherlands
* Prof. Kostas Karpouzis, National Technical University of Athens, Greece
* Prof. Ludger Brümmer, ZKM | Center for Art and Media Karlsruhe, Germany
* Prof. Luis Teixeira, Universidade Católica Portuguesa (UCP), Portugal
* Prof. Martin Kaltenbrunner, Kunstuniversität Linz, Austria
* Prof. Maureen Thomas, Cambridge University Moving Image Studio, UK
* Prof. Milos Zelezny, University of West Bohemia, Czech Republic
* Prof. Nuno Guimarães, Information Sciences, Technologies and Architecture Research Center (ISTAR-UL), Portugal
* Prof. Olivier Pietquin, University of Lille | Google DeepMind, France
* Prof. Sandra Pauletto, University of York, UK
* Prof. Stefania Serafin, Aalborg University Copenhagen, Denmark
* Prof. Thierry Dutoit, University of Mons, Belgium
* Prof. Yves Rybarczyk, New University of Lisbon, Portugal
INFRASTRUCTURE
eNTERFACE’17 will be held in the Digital Creativity Centre, located on the campus of Universidade Catolica Portuguesa in the city of Porto, Portugal. The Digital Creativity Centre<http://artes.ucp.pt/ccd> is a centre of competence and creative excellence, with an infrastructure equipped with cutting-edge technology in the areas of Digital and Interactive Arts, Computer Music, Sound Design, Audiovisual and Cinematic Arts, and Computer Animation.
Facilities include experiment spaces and meeting rooms, as well as a Motion Capture (MoCap) Lab equipped with a Vicon motion capture system, and a Digital and Interactive Arts lab (a Yamaha Disklavier Grand Piano robotic performance system, a Notomoton percussion robot, two Reactable Live and one Reactable Media Bench systems, various Microsoft Kinects, Nintendo Wiimotes, LeapMotion sensors, webcams, 3D printers, and Arduino and Raspberry Pi systems).
ORGANIZATION
eNTERFACE’17 will be organized and hosted by the Digital Creativity Centre, Universidade Catolica Portuguesa - School of Arts.
CONFIDENTIALITY NOTICE
This message may contain confidential information or privileged material, and is intended only for the individual(s) named. If you are not the named addressee, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. Thank you.
** Apologies for cross-posting **
CALL FOR PAPERS - Deadline extended to February 6th, 2017
---------------------------------------------------------------------------------------
2nd International Workshop on Biometrics in the Wild (B-Wild 2017)
In conjunction with IEEE FG 2017
May 30/June 3, 2017 (TBD)
Washington D.C., USA
http://luks.fe.uni-lj.si/bwild17/
---------------------------------------------------------------------------------------
IMPORTANT DATES:
- Paper Submission: February 6th, 2017 (firm deadline, no further
extensions)
- Notification of Acceptance: March 3rd, 2017
- Camera-Ready Papers: March 8th, 2017
ORGANIZERS:
- Bir Bhanu, University of California, Riverside, United States
- Abdenour Hadid, University of Oulu, Finland
- Qiang Ji, Rensselaer Polytechnic Institute, United States
- Mark Nixon, University of Southampton, United Kingdom
- Vitomir Struc, University of Ljubljana, Slovenia
Program Committee:
- Ross Beveridge, Colorado State University, United States
- Terry Boult, University of Colorado Colorado Springs, United States
- Thirimachos Bourlai, West Virginia University, United States
- Kevin Bowyer, University of Notre Dame, United States
- Patrizio Campisi, Università degli Studi Roma Tre, Italy
- Rama Chellappa, University of Maryland, United States
- Xilin Chen, Chinese Academy of Sciences, China
- Jean-Luc Dugelay, Eurecom, France
- Hazim Kemal Ekenel, Istanbul Technical University, Turkey
- Patrick Flynn, University of Notre Dame, United States
- Manuel Günther, University of Colorado Colorado Springs, United States
- Hu Han, Michigan State University, United States
- Gang Hua, Microsoft Research Asia, China
- Ioannis A. Kakadiaris, University of Houston, United States
- Josef Kittler, University of Surrey, United Kingdom
- Ajay Kumar, The Hong Kong Polytechnic University, China
- Andreas Lanitis, Cyprus University of Technology, Cyprus
- Jiwen Lu, Tsinghua University, China
- Karthik Nandakumar, IBM Research Collaboratory, Singapore
- Vishal Patel, Rutgers, United States
- Peter Peer, University of Ljubljana, Slovenia
- Jonathon Phillips, NIST, United States
- Matti Pietikainen, University of Oulu, Finland
- Norman Poh, University of Surrey, United Kingdom
- Arun Ross, Michigan State University, United States
- Sudeep Sarkar, University of South Florida, United States
- Walter Scheirer, Harvard University, United States
- Tieniu Tan, Chinese Academy of Science, China
- Massimo Tistarelli, University of Sassari, Italy
- Andreas Uhl, Salzburg University, Austria
- Mayank Vatsa, IIIT-Delhi, India
** Submissions **
The manuscripts should be submitted in PDF format and should be no more
than 8 pages in IEEE FG 2017 paper format. The submitted papers should
present original work not currently under review elsewhere and should
have no substantial overlap with already published work. Accepted papers
will be included in the Proceedings of IEEE FG 2017 & Workshops and will
be sent for inclusion into the IEEE Xplore digital library.
A special issue on "Biometrics in the Wild" that will build on the BWild
workshop series will also be organized in Image and Vision Computing.
** About **
The goal of this workshop is to present the most recent and advanced
work related to biometric recognition in the wild. Submitted papers
should clearly show improvements over the existing state-of-the-art
and use the most challenging datasets available. We are interested
in all parts of biometric systems, ranging from detection, landmark
localization, pre-processing and feature extraction techniques to
modeling and classification approaches capable of operating on biometric
data captured in the wild; a schematic sketch of such a pipeline follows
the topic list below. Topics of interest include (but are not
limited to):
- Biometric recognition in the wild (face, ear, gait, palms, iris,
periocular…),
- Biometric detection in the wild (face, eyes, ears, body, …),
- Soft biometrics in the wild,
- Context-aware techniques for biometric detection and recognition,
- Landmark localization in the wild,
- Robust machine learning for biometrics in the wild,
- Normalization techniques for recognition in the wild,
- Multi-modal biometrics in the wild,
- Novel databases and performance benchmarks,
- Privacy protection and de-identification of biometric identifiers,
- Spoofing of biometric systems,
- Deep learning approaches for unconstrained biometric recognition,
- Related applications.
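As noted above, a schematic Python sketch of the detection-to-matching pipeline follows; every function here is a placeholder operating on stand-in arrays, not a real library API:

    import numpy as np

    def detect(image):
        # Find the region of interest (e.g., a face box); placeholder returns the whole image.
        return image

    def align(roi):
        # Normalize pose/scale, e.g., via landmarks; placeholder is the identity.
        return roi

    def embed(aligned):
        # Map the normalized ROI to a fixed-length feature vector.
        return np.asarray(aligned, dtype=float).ravel()[:128]

    def match(query_vec, gallery_vecs, threshold=0.5):
        # Accept the best gallery match if cosine similarity clears a threshold.
        sims = [float(np.dot(query_vec, g)) /
                (np.linalg.norm(query_vec) * np.linalg.norm(g) + 1e-9)
                for g in gallery_vecs]
        best = int(np.argmax(sims))
        return best if sims[best] >= threshold else None

    # Stand-in 16x16 'images'; with in-the-wild data, detection and alignment do real work.
    gallery = [embed(align(detect(np.random.rand(16, 16)))) for _ in range(5)]
    query = embed(align(detect(np.random.rand(16, 16))))
    print(match(query, gallery))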
** Date and Venue **
The workshop will be held in conjunction with the 12th International
Conference on Automatic Face
and Gesture Recognition (IEEE FG 2017) in Washington DC, USA on either
May 30th or June 3rd, 2017 (TBD).
For more information on BWild 2017 please visit:
http://luks.fe.uni-lj.si/bwild17/
--
Assist. Prof. Vitomir Štruc, PhD
Laboratory of Artificial Perception, Systems and Cybernetics
Faculty of Electrical Engineering
University of Ljubljana
Slovenia
Tel: +386 1 4768 839
Fax: +386 1 4768 316
URL: luks.fe.uni-lj.si/nluks/people/vitomir-struc/
Co-organizer: Workshop on Biometrics in the Wild 2017
http://luks.fe.uni-lj.si/bwild17
Program Co-chair: International Symposium on Image and Signal Processing and Analysis 2017
http://www.isispa.org/
Competition Co-chair: International Joint Conference on Biometrics 2017
http://www.ijcb2017.org/
Dear all,
The submission deadline for the 1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production is very close (please see the details below).
Please note that accepted papers will be included in the FG proceedings, and some of them may also appear in a special issue of TPAMI.
Kind regards,
Daniel Gill
___________________________________
Dr. Daniel Gill
Department of Psychology
The University of Winchester
Winchester
SO22 4NR
Phone (office): +44 (0) 1962 675144
e-mail Daniel.Gill(a)winchester.ac.uk<mailto:Daniel.Gill@winchester.ac.uk>
CALL FOR PAPERS
1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production
ASL4GUP 2017
In conjunction with IEEE FG 2017
May 30, 2017, Washington DC, USA
https://engineering.purdue.edu/ASL4GUP
Contact: jpwachs(a)purdue.edu
________________
IMPORTANT DATES
________________
Submission Deadline: Feb 1, 2017
Notification of Acceptance: March 1, 2017
Camera Ready: March 8, 2017
Workshop: May 30, 2017
________________
SCOPE
________________
To achieve natural interaction with machines, a framework must be developed that includes the adaptability humans show when understanding gestures from context, from a single observation or from multiple observations. This is also referred to as adaptive shot learning: the ability to adapt the recognition mechanism to a gesture that is barely seen, well known, or entirely unknown. Of particular interest to the community are zero-shot and one-shot learning, given that most existing work addresses the N-shot scenario. The workshop aims to encourage work that focuses on the way in which humans produce gestures (their kinematic and biomechanical characteristics) and on the cognitive processes involved in perceiving, remembering and replicating them. We invite the submission of papers presenting original research on the aforementioned themes.
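As a minimal illustration of the one-shot setting, the Python sketch below assigns a query embedding to the nearest stored exemplar, one per gesture class; the embeddings and labels are made up purely for illustration:

    import numpy as np

    def one_shot_classify(query, exemplars):
        # Nearest-neighbour assignment against a single exemplar per class.
        labels = list(exemplars)
        dists = [np.linalg.norm(query - exemplars[label]) for label in labels]
        return labels[int(np.argmin(dists))]

    # One hypothetical 3-D embedding per gesture class (the 'one shot').
    exemplars = {
        "wave":  np.array([1.0, 0.1, 0.0]),
        "point": np.array([0.0, 1.0, 0.2]),
        "grab":  np.array([0.2, 0.0, 1.0]),
    }
    print(one_shot_classify(np.array([0.9, 0.2, 0.1]), exemplars))  # -> wave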
________________
TOPICS
________________
Topics of interest include (but are not limited to):
* One- and zero-shot recognition;
* Gesture production from context or a single observation;
* EEG-based gesture recognition;
* Context modeling from gesture languages;
* Holistic approaches to gesture modeling and recognition;
* Human-like gesture production and recognition;
* Gesture-based robotic control and interfaces.
________________
SUBMISSIONS
________________
Submissions may be up to 8 pages, in accordance with the IEEE FG conference format. Papers longer than six pages will be subject to a page fee (100 USD per page) for the extra pages (two max). We welcome regular, position and applications papers. Submission through:
https://easychair.org/conferences/?conf=asl4gup2017
Accepted papers will be included in the Proceedings of IEEE FG 2017 & Workshops and will be sent for inclusion into the IEEE Xplore digital library. Selected papers will be also invited for a full submission to a special issue in a leading journal in the field of machine learning and cognition.
________________
ORGANIZERS
________________
Juan P Wachs (Purdue University, USA); jpwachs(a)purdue.edu
Richard Voyles (Purdue University, USA); rvoyles(a)purdue.edu
Susan Fussell (Cornell University, USA); sfussell(a)cornell.edu
Isabelle Guyon (Université Paris-Saclay, France); guyon(a)clopinet.com
Sergio Escalera (Computer Vision Center and University of Barcelona, Spain); sergio.escalera.guerrero(a)gmail.com
________________
Program Committee
________________
1. Nadia Bianchi-Berthouze, University College London, UK (confirmed)
2. Albert Ali Salah, Bogazici University, Turkey (confirmed)
3. Adar Pelah, University of York, UK (confirmed)
4. Hugo Jair Escalante, INAOE, Mexico (confirmed)
5. Jun Wan, Institute of Automation, Chinese Academy of Sciences, China (confirmed)
6. Miriam Zacksenhouse, Technion, Israel (confirmed)
7. Marta Mejail, Universidad de Buenos Aires (UBA), Argentina (confirmed)
8. Tamar Flash, The Weizmann Institute of Science, Israel (confirmed)
9. Luigi Gallo, Institutes of the National Research Council, Italy (confirmed)
10. Matthew Turk, University of California, Santa Barbara, USA (confirmed)
11. Daniel Gill, University of Winchester (UK) (confirmed)
12. Ray Perez, Office of Naval Research (ONR) – USA (confirmed)
13. Daniel Foti (Purdue) – USA (confirmed)
14. Yael Edan (Ben-Gurion University of the Negev) – Israel (confirmed).
University of Winchester, a private charitable company limited by guarantee in England and Wales, number 5969256.
Registered Office: Sparkford Road, Winchester, Hampshire SO22 4NR