Ron Senack is a photographer who has amassed a collection of pictures of apparent faces in trees. He has a set of 30 'face' images and 30 controls that he is happy to share with anyone interested in using them for research. He has a Facebook page https://www.facebook.com/ronsfacesintrees and an OSF page https://osf.io/md8kp/ through which you can get in contact with him.
Peter
Peter Hancock (he/him)
Professor
Psychology, School of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
http://rms.stir.ac.uk/converis-stirling/person/11587
@pjbhancock
Latest paper:
Simulated automated facial recognition systems as decision-aids in forensic face matching tasks.<https://psycnet.apa.org/record/2023-24366-001?doi=1>
https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxge0001310
My messages may arrive outside of the working day but this does not imply any expectation that you should reply outside of your normal working hours. If you wish to respond, please do so when convenient.
________________________________
Scotland's University for Sporting Excellence
The University of Stirling is a charity registered in Scotland, number SC 011159
Dear colleagues,
*excuse any cross-posting*
Our call for submissions to Brain and Neuroscience Advances’ Special Collection<https://journals.sagepub.com/page/bna/collections/special-collection/the-ne…> is open!
This Special Collection will bring together current advances in the neuroscience of emotions, in both health and disease. We seek to publish primary research studies focused on the fundamental neuroscience of emotion, as well as clinical research insights from the fields of psychology, psychiatry and neurology. In addition to empirical work, we also welcome perspectives, reviews, and technical reports.
Along with our special guest editors, Luiz Pessoa, Rachael Jack, Liz Tunbridge and Edwin Dalmaijer, we invite you to submit your papers by 29 February 2024.
Brain and Neuroscience Advances is your society journal. We are proudly at the forefront of fully open-access and credible publishing practices, and welcome submissions from our members across a broad range of neuroscience disciplines.
You can find our Manuscript Submission Guidelines<https://journals.sagepub.com/author-instructions/BNA> and the submission portal<https://mc.manuscriptcentral.com/bna?_gl=1*ibheq8*_ga*MjAxNjMxNjA3Mi4xNjkxN…> on the journal website.
Don’t forget, BNA members receive 50% off APCs!
We look forward to your submissions and wish you a healthy and happy festive period.
With best wishes,
Dr Laura Ajram, Chief Executive, British Neuroscience Association
Dr Kate Baker, Co-Editor in Chief, Brain and Neuroscience Advances
Prof Jeff Dalley, Co-Editor in Chief, Brain and Neuroscience Advances
Prof. Rachael E. Jack, Ph.D.
Professor of Computational Social Cognition
School of Psychology & Neuroscience
University of Glasgow
Scotland, G12 8QB
+44 (0) 141 330 5087
*For all details, see the attachments.*
*Applications will remain open subject to the availability of the positions.*
*Position 1:* under the Asian Smart Cities Research and Innovation Network (ASCRIN), in collaboration with BITS Pilani and La Trobe University, Melbourne, Australia.
*Title of the project:* “Low-cost empathetic robotic solution for caregiving (LERSCe)”.
*Abstract:* Australia and India both have rapidly growing elderly populations. Studies indicate that almost half of the current population of over-75s suffer from physical and/or mental impairments and as a result need a high level of care. With an increasing elderly population and a limited number of healthcare professionals, automated AI-aided caregiving services offer a scalable way to cope, and these solutions can be made efficient and cost-effective. One key service area where need outstrips human resources is the delivery of remote physiotherapy. This project proposes to develop a novel soft-robotic actuator for delivering remote physiotherapy in the home. A closed-loop edge AI framework will drive the actuators to optimize service delivery. The edge framework will be self-contained to address privacy and security concerns. Control models will be trained offline and run locally on edge-capable devices. Machine learning methods, such as deep learning, will be explored to optimize actuator operation for maximum comfort and service efficacy. The feedback loop may include facial and audio expressions and information from other sensors, such as skin resistance and pressure, to measure performance. Hence, there are two distinct aspects of the project: 1) the development of a lightweight, low-cost soft-robotic actuator, and 2) the development of an actuator control model that factors in user behaviour and emotional feedback. Feedback may be derived from a suite of multimodal sensors including, but not limited to, vision, touch, and skin resistance.
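Purely to illustrate the closed-loop, edge-hosted control idea described above, here is a minimal Python sketch (every function, signal name, gain, and threshold is a hypothetical placeholder for illustration, not part of the project):

import random

def read_sensors():
    # Hypothetical stand-ins for the multimodal feedback channels mentioned
    # in the abstract (facial/audio expression score, skin resistance, pressure).
    return {
        "comfort": random.uniform(0.0, 1.0),   # 1.0 = fully comfortable
        "pressure": random.uniform(0.0, 1.0),  # normalised contact pressure
    }

def control_step(command, feedback, gain=0.1, target_comfort=0.8):
    # Simple proportional rule: back off when estimated comfort is below target,
    # push slightly harder otherwise. In the project, a control model trained
    # offline and run locally on an edge device would replace this rule.
    error = feedback["comfort"] - target_comfort
    return max(0.0, min(1.0, command + gain * error))

command = 0.5  # normalised actuator command
for step in range(20):  # closed loop: sense -> decide -> actuate
    feedback = read_sensors()
    command = control_step(command, feedback)
    print(f"step {step:2d}  comfort={feedback['comfort']:.2f}  command={command:.2f}")

The point of the sketch is only the loop structure (sense, decide locally, actuate); the actual project would replace the random sensors and the proportional rule with real multimodal sensing and a learned control model.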
*Position 2: *funded by BITS Cross-disciplinary Research Funding (CDRF)
*Title: *Design, Development and Control of a low-cost AI-enabled 3-DOF
Compliant Manipulator
Abstract: The aim of this project is to develop a robotic manipulator system that can be safely employed alongside human workers and can coordinate effectively with them. This requires an interdisciplinary approach spanning system design, control law design, and brain-computer interfaces (BCI). A 3-DoF robotic manipulator model with flexible joints and torque-controlled actuators will be designed, and improved user-machine adaptation algorithms will then be incorporated into a BCI.
*Positions 3 and 4: *funded by TiH-ISI Kolkata
*Title:* Generalized Tampering Detection in Media (GTDM)
Abstract: We plan to make manipulation-attack detection more robust and generalizable by exploring monocular depth estimation to better discriminate the spatial and temporal synergy in the media. Along with this, we will explore the power of SSL and multi-task learning (MTL) while learning the 3D reconstruction.
Dear colleagues,
We are proud to announce the 46th European Conference on Visual Perception (ECVP), which will take place in Aberdeen, Scotland, from Sunday 25th to Thursday 29th of August 2024.
Our website is now online: https://ecvp2024.abdn.ac.uk/. Aberdeen is Scotland’s third-largest city, on the doorstep of the world-famous Scottish Highlands, and it is within easy reach thanks to its own international airport (with regular connections to Dublin, London, Manchester, and Amsterdam), railway station (East Coast Main Line from London), and ferry terminal.
THE PROGRAM
The conference will feature a mixture of keynote lectures, symposia, talks and poster presentations, and tutorials over five days. The social program will include the Welcome Reception, a poster session with associated whisky tasting, the Illusion Night, the Conference Dinner, and a Farewell Party. This year, for the first time, we are also introducing Perceptio-Nite (a networking event targeted at students), an exciting series of roundtable discussions during lunchtimes, and a third Keynote Lecture on new relevant findings in vision science (Spotlight in Vision). Prior to ECVP, we will also be hosting the Visual Science of Art Conference (VSAC, 22-24th of August) on the university's Old Aberdeen campus. More information will soon be available at www.vsac.eu<http://www.vsac.eu>.
SUBMISSIONS AND DEADLINES
Submissions can take the form of symposium proposals, abstracts for talks or posters, and contributions to the Illusion Night. More specific information about symposia will be circulated in a few weeks.
Detailed information can be found at https://ecvp2024.abdn.ac.uk/submission/registration/
Symposium submission opens: December 10th
Symposium submission deadline: January 19th
Registration opens: February 5th
Abstract submission deadline + Early Bird registration: April 5th
Abstract acceptance + Travel Awards notification: May 31st
Illusion Night submission deadline: June 24th
Late registration deadline: mid August
ECVP and ECEM
We are aware that the recently announced dates for ECEM coincide with the ECVP conference in Aberdeen. As is customary, the ECVP 2024 dates were announced two years in advance, during the ECVP 2022 business meeting, to facilitate planning for attendees. We understand that ECEM's recent decision was based on local organisational issues, and we share the disappointment of those who are unable to support both meetings. We have reached out to the ECEM organisers to find a solution in the interest of the vision community and will report back as soon as we know more.
ECVP 2024 is strongly committed to being a welcoming, respectful, and inclusive conference where you are inspired and supported in your scientific goals (see our Equality, Diversity and Inclusion statement: https://ecvp2024.abdn.ac.uk/conference/edi-statement/).
We look forward to welcoming you to Scotland!
Mauro Manassi and Constanze Hesse
(on behalf of the organising committee)
----------------------------------------------
Mauro Manassi
Lecturer
School of Psychology
William Guild Building, Room F11
University of Aberdeen, UK
https://www.manassilab.com/
The University of Aberdeen is a charity registered in Scotland, No SC013683.
Tha Oilthigh Obar Dheathain na charthannas clàraichte ann an Alba, Àir. SC013683.
Dear colleagues
*Please share; excuse any cross-posting*
[REMINDER] < 1 week remaining. Closing date: 31 October 2023
We are delighted to announce three full-time permanent posts in the Centre for Social, Cognitive and Affective Neuroscience (cSCAN), School of Psychology & Neuroscience, University of Glasgow, Scotland. cSCAN members operate in a research-rich capacity, with teaching-rich staff leading in the innovation and delivery of education.
1. Full Professor/Associate Prof (Senior Lecturer)/Assistant Prof (Lecturer)
2. Assistant Prof (Lecturer)
We are seeking interdisciplinary researchers with internationally competitive research using innovative approaches to the computational modelling of social perception, cognition, interaction, and/or communication, with a focus on dynamic signalling, dyadic interactions, and/or dialogue in human-human and/or human-agent interactions.
3. Research Fellow/Specialist
We are seeking a researcher who will make a leading contribution to developing social interaction and communication technologies, including technologies to generate dynamic 3D human social signals (such as facial expressions, body movements, and voices), multimodal signals and dyadic interactions, and 3D scenes and other socially relevant multimodal signals, as well as the use and development of AI-related technologies to support these developments.
See our Nature Careers listing: https://www.nature.com/naturecareers/job/12803261/professor-senior-lecturer… and Twitter post: https://twitter.com/UofG_cSCAN/status/1683499306479755276?s=20
Questions? Get in touch!
Best,
Prof. Rachael E. Jack, Ph.D.
Professor of Computational Social Cognition
School of Psychology & Neuroscience
University of Glasgow
Scotland, G12 8QB
+44 (0) 141 330 5087
We cordially invite you to join us for the National Institute of Standards and Technology Biometrics at 60 event<https://www.nist.gov/itl/iad/image-group/biometrics-60>.
The Biometrics at 60 event is a milestone celebration of decades of biometric research and collaboration between scientists and stakeholders at NIST (formerly NBS), other government agencies, industry, and academia in promoting the science and standards surrounding this important field of measurement research.
Join us for this celebration where you can find out more about the history of our work, how this history has shaped the science of biometrics, and where some of our future efforts will lead us.
The event will be held virtually on November 14th and 15th of 2023, and registration is free! Your participation is most welcome.
See the attached invitation for more information.
Craig Watson
NIST
Information Technology Laboratory
Information Access Division
Image Group Manager
craig.watson@nist.gov
Dear Colleagues,
I am writing to you regarding the Society for Affective Science (SAS) and its annual meeting, which will be held in-person from Friday, March 1 – Sunday, March 3, 2024 in New Orleans, Louisiana, USA. You can learn more about the 2024 SAS conference here: https://society-for-affective-science.org/conferences/2024-sas-annual-confe…
SAS is an excellent venue to learn about what’s new in affective science. The interdisciplinary nature of the program is also uniquely beneficial.
The submission deadline is Wednesday, November 15, 2023. Don't miss out on the opportunity to present your science to the affective science community! Submit here: https://society-for-affective-science.org/conferences/2024-sas-annual-confe…
Please let me know if you have any questions. I hope you will join us in New Orleans, LA at the 2024 SAS meeting!
All the best,
Anthony Atkinson
Department of Psychology
Durham University
Durham, UK
https://www.durham.ac.uk/staff/a-p-atkinson/
https://atkinsonap.github.io/
******************************************************************************
SAS 2024 Call for Abstract Submissions Open
The Society for Affective Science (SAS) is delighted to open its call for abstracts to be considered for the 2024 Annual Conference. The conference will be held in-person between Friday, March 1 – Sunday, March 3, 2024 in New Orleans, Louisiana, USA. The conference will once again feature Preconference sessions which will take place on Friday, March 1, 2024 prior to the opening of the Conference.
We encourage submissions from authors at all career stages.
There is no fee to submit an abstract. The conference registration fees will be announced in November.
Abstract Submissions – Four Submission Types
1. Poster: New Idea
2. Poster: New Results
3. Flash Talk
4. Symposium
Each presenting author may submit a maximum of TWO abstracts total, across all tracks (Poster: New Idea, Poster: New Results, Flash Talk, and Symposium). Abstracts submitted by the same presenting author must reflect two different research projects. There is no abstract limit for non-presenting authors.
CLICK HERE FOR FULL DETAILS AND THE SUBMISSION PORTALS<https://urldefense.com/v3/__https:/t.e2ma.net/click/nm5xnk/rdx0y3ub/36f8rac…>
Advancing Interdisciplinary Science
In line with our goal to facilitate interdisciplinarity, we welcome submissions from across all domains of affective science, including anthropology, business, computer science, cultural studies, economics, education, geography, history, integrative medicine, law, linguistics, literature, neuroscience, philosophy, political science, psychiatry, psychology, public health, sociology, theater, and more.
Important Dates
Submission Deadline
Abstracts must be submitted by Wednesday, November 15, 2023 at 11:59 p.m. Baker Island Time (BIT; UTC-12 — last time zone on earth) to be considered for inclusion in the program.
Please note: due to the earlier conference dates, there will be no extension to the submission deadline as in past years. Plan to submit early.
Submission Review Process
Abstracts will be evaluated based on scholarly merit by a double-blind peer review process with our Abstract Review Board.
Notification of acceptance or rejection of abstracts will be e-mailed to the corresponding author by early-January 2024.
Presenting authors must be the first author on the submitted abstract. All presenters must register and pay to attend the meeting.
Questions?
For any SAS 2024 conference-related questions, please email sas@podiumconferences.com.
For more updates, watch our website!<https://society-for-affective-science.org/conferences/2024-sas-annual-confe…>
******************************************************************************
Two funded PhD positions are available at the Psychology and Neuroscience
of Cognition (PsyNCog) research unit
<https://www.psyncog.uliege.be/cms/c_5016065/en/about> (University of
Liège, Belgium), under the supervision of Dr. Christel Devue (Cognitive
Psychology research group). We are seeking two highly motivated candidates
to work on two different research projects.
*Position 1 – A cost-efficient mechanism of face learning – Interactions between stability in appearance and learning conditions (3 years of funding)*
The aim of the project is to test a new integrative theory of human face learning (introduced in a recent paper here <https://www.sciencedirect.com/science/article/pii/S0010027723002032?via%3Di…>) that explains how recognition performance changes as familiarity with faces develops. We hypothesise that the relative stability of a given face’s appearance interacts with learning demands to determine the level of detail that is stored in memory over time and the quality of facial representations. In this framework, recognition errors are viewed as the flip side of an otherwise efficient and economical mechanism.
This theory will be tested with (online) behavioural and eye-tracking experiments that will track the development of facial representations. A new understanding of human face learning will help address the limits of facial recognition technologies and contribute to improving the treatment of people with debilitating face recognition difficulties.
*Position 2 – Spatio-temporal compression in memory for real-world events (4 years of funding)*
Most of the current knowledge on episodic memory comes from laboratory
studies in which participants memorize stimuli under artificial conditions.
Yet, a new line of research suggests that information processing can
manifest in dramatically different ways in the lab and in the real world.
Here, we aim to determine how real-life events, and people and objects that
populate these events, are represented parsimoniously to deal with storage
limitations inherent to the human cognitive system. More specifically, we
will investigate how the complexity of real-world events is summarized and
compressed in episodic memory along the two crucial dimensions of space and
time.
This question will be examined using a novel experimental paradigm that leverages information gathered by wearable camera technology and mobile eye-tracking. This project is part of a broader collaboration with Dr. Arnaud D’Argembeau and will involve working with another PhD student. The candidate will focus on the spatial aspects and on person recognition (including face processing).
*Profile*
We are seeking two highly motivated candidates with:
§ A Master's degree in experimental/cognitive psychology, cognitive neuroscience, or equivalent.
§ A strong affinity with or interest in episodic memory and/or face processing.
§ Excellent academic records.
§ Strong research skills, including experimental design and statistical analysis.
§ Experience with experiment programming software (e.g., OpenSesame, E-Prime, PsychoPy).
§ Coding skills (e.g., R, Matlab, or Python).
§ Excellent writing and oral communication skills.
§ A good command of English.
§ Organisational and time management skills.
§ Enthusiasm, self-motivation, team spirit and benevolence.
§ Experience with eye-tracking is a plus.
§ Experience with image and/or video editing software is a plus.
§ A command of or willingness to learn French is a plus for Position #2.
*Environment*
The Psychology and Neuroscience of Cognition (PsyNCog) research unit
<https://www.psyncog.uliege.be/cms/c_10112686/en/core-members> is
recognized internationally for its research on human memory. It includes
several research groups that investigate different aspects of memory and
perception, creating a dynamic research environment. The Psychology
department is located on a wooded campus (Sart Tilman
<https://www.campus.uliege.be/cms/c_9038317/en/liege-sart-tilman>) about 15
minutes’ drive from the centre of Liège and well connected via public transport.
*Procedure*
To apply, please send the following to cdevue@uliege.be with the email subject
“Application for PhD position #1 – Face learning” or “Application for PhD
position #2 – Spatio-temporal compression”:
§ A cover letter detailing your background and motivations.
§ A curriculum vitae, including a link to a copy of your master's thesis and
a list of research projects in which you were involved, with a brief
description of your contribution.
§ Transcripts and diplomas for bachelor's and master's degrees.
§ Contact details of at least two academic references who have agreed to be contacted.
Applications will be accepted immediately and candidates will be considered
until the positions are filled. Selected candidates will be invited for an
interview online. Please contact Christel Devue (cdevue@uliege.be) for more
information or informal inquiries.
*Expected starting date: *as soon as possible (negotiable but no later than
December 2023).
Hello,
My lab is searching for a postdoctoral fellow who will contribute to an NSF
project investigating eye movements and retinotopic face tuning in
adults, children, and individuals with developmental prosopagnosia.
If you're interested, please see the ad below or click here
<https://apply.interfolio.com/130310>. If you have any questions, I'd be
happy to answer them.
Thanks,
Brad
The Social Perception Lab in Psychological and Brain Sciences at Dartmouth
invites applications for a postdoctoral fellow. We welcome applications
from creative scientists who are eager to develop a research program
involving psychophysics, neuropsychology, perceptual development, and
individual differences. The postdoctoral fellow will play a central role in
an NSF-funded project investigating eye movements and retinotopic face
tuning. The project will examine preferred fixation locations and face
tuning in children, adults at a variety of ages, and individuals with
developmental prosopagnosia.
This is a collaborative project between Brad Duchaine at Dartmouth and
Miguel Eckstein at UC-Santa Barbara. The postdoctoral fellow will be based
at Dartmouth but will regularly interact with Professor Eckstein and will
travel to Santa Barbara to collect data. Both supervisors are committed to
the training and career development of the fellow. For more information on
our work, please visit the Social Perception Lab and the Prosopagnosia
Research Center.
The Department of Psychological and Brain Sciences at Dartmouth offers the
best of a well-resourced, externally funded research university environment
along with the integrative and cross-disciplinary nature of a liberal arts
institution. In particular, our state-of-the-art research and teaching
facility houses human cognitive/social neuroscience and small-animal
behavioral/systems neuroscience in the same building. We have a
concentration of laboratories working on vision, so the postdoctoral fellow
will be part of a supportive community of vision researchers. Beyond the
department, postdoctoral scholars are supported by the Guarini School for
Graduate and Advanced Studies, including their diversity and inclusion
initiatives. The broader neuroscience community includes research programs
in the Department of Biological Sciences, Geisel School of Medicine, Thayer
School of Engineering, and the cross-departmental Integrative Neuroscience
at Dartmouth (IND) graduate program.
The Department of Psychological and Brain Sciences and Dartmouth are
committed to fostering a diverse, equitable, and inclusive population of
students, faculty, and staff. Dartmouth recently launched a new initiative,
Toward Equity, that embraces shared definitions of diversity, equity,
inclusion, and belonging as a foundation for our success in institutional
transformation. We are especially interested in applicants who are able to
work effectively with students, faculty, and staff from all backgrounds and
with different identities and attributes. Our labs regularly host students
participating in undergraduate diversity initiatives in STEM research, such
as our Women in Science Program, E. E. Just STEM Scholars Program, and the
Academic Summer Undergraduate Research Experience (ASURE).
Qualifications
Applicants should have a PhD in Psychology, Neuroscience, or a closely
related field, or be ABD with a degree received before the start of the
appointment. Qualified candidates should have experience with perception
research, substantial programming experience, and an interest in individual
differences and development. We also encourage enquiries from applicants
with other backgrounds.
Application Instructions
Please submit all materials electronically via Interfolio:
Cover Letter that outlines your research interests and qualifications
CV, including contact information for two references.
Review of applications will begin on October 1, 2023 and continue until the
position is filled. The anticipated start date is negotiable. For
enquiries, please contact Professor Brad Duchaine,
bradley.c.duchaine@dartmouth.edu.
Dear colleagues
*Please share; excuse any cross-posting*
We are delighted to announce three full-time *tenured* positions in the Centre for Social, Cognitive and Affective Neuroscience (cSCAN), School of Psychology & Neuroscience, University of Glasgow, Scotland. cSCAN members operate in a research-rich capacity, with teaching-rich staff leading in the innovation and delivery of education.
* Full Professor/Associate Prof (Senior Lecturer)/Assistant Prof (Lecturer)
* Assistant Prof (Lecturer)
We are seeking interdisciplinary researchers with internationally competitive research using innovative approaches to the computational modelling of social perception, cognition, interaction, and/or communication, with a focus on dynamic signalling, dyadic interactions, and/or dialogue in human-human and/or human-agent interactions.
* Research Fellow
We are seeking a researcher who will make a leading contribution to developing social interaction and communication technologies, including technologies to generate dynamic 3D human social signals (such as facial expressions, body movements, and voices), multimodal signals and dyadic interactions, and 3D scenes and other socially relevant multimodal signals, as well as the use and development of AI-related technologies to support these developments.
See our Nature Careers listing: https://www.nature.com/naturecareers/job/12803261/professor-senior-lecturer… and Twitter post: https://twitter.com/UofG_cSCAN/status/1683499306479755276?s=20
Closing date: 31 October 2023
Questions? Get in touch!
Prof. Rachael E. Jack, Ph.D.
Professor of Computational Social Cognition
School of Psychology & Neuroscience
University of Glasgow
Scotland, G12 8QB
+44 (0) 141 330 5087
Dear All,
I would appreciate it if you'd circulate the following opportunity among prospective students.
The *Institute of Psychology at the University of Pecs, Hungary*, has
started a *PhD program for international students*. During the program,
among other possibilities, students can join research that aims to extend
our knowledge about the cognitive and neural background of *face perception*.
We're particularly interested in how semantic knowledge about a person interacts with affective processes during recognition. The students will have access to the following equipment in our lab:
- device for accurate reaction time measurements (Cedrus)
- eye-tracker (Tobii TX300)
- physiological measurements (BIOPAC modules: EDA, heart rate, respiration rate, EMG, etc.)
- EEG (Brain Products, 64-channel)
- Noldus Observer
For a limited number of students who are *EU citizens*, we can provide a *scholarship* that covers the tuition fee and housing costs.
For citizens of other countries, there is a tuition fee (3500 euros per semester in the first and second years, and 2500 euros per semester in the third and fourth years).
The application deadline is 15 June 2023 (with a possibility of extension); prior informal inquiries are advised.
Details about the program:
https://international.pte.hu/study-programs/phd-psychology
Ferenc Kocsor, PhD, habil.
senior researcher
head of the international doctoral program
e-mail: kocsor.ferenc@pte.hu
Digital biometrics has become immensely important in all spheres of life. Much of the recent progress is in 3D biometrics, with the face being the most widely used body part. Although face biometrics is currently one of the most used modalities after fingerprints, it is also vulnerable to many kinds of presentation attack instruments, mainly videos, photographs or masks, and sometimes expert impersonators with prosthetic make-up. 3D face biometrics is sometimes strengthened with the ear, and in many cases the ear alone is sufficient for the recognition of individuals. The ear is agnostic to expressions and thus easy to recognize, but a plastic-based ear is also much easier to forge than a face; 3D ear recognition mitigates this effect to a considerable extent. 3D vascular and palm-based biometrics have also recently gained momentum. Thus, 3D information is crucial in many forms of human biometrics, but the need for sophisticated and expensive hardware components acts as a deterrent to its widespread adoption. To record and promote this area of research, we plan to host this special session. We invite practitioners, researchers, and engineers from the biometrics, signal processing, computer vision, and machine learning fields to contribute their expertise to advance the state of the art.
Topics of interest include but are not limited to:
• 3D shape capturing and reconstruction for the human body or body parts from monocular vision
• 3D vasculature and palm-based biometrics from monocular vision
• 3D ear biometrics from monocular vision
• 3D air signature from monocular vision
• Passive 3D gait biometrics-based recognition from monocular vision
• 3D face by monocular vision for biometric applications
• Emotion- and artifact-agnostic 3D biometrics by monocular vision
• Multimodal sensors for real-time 3D shape capturing
• 3D face estimation with high occlusion and a monocular camera
• 3D information capture under low lighting conditions from a monocular camera
• 3D biometrics from short videos
• Advancements in inexpensive single-shot sensor technology for 3D biometrics capture
Submission Guidelines:
Submit your papers at https://cmt3.research.microsoft.com/IJCB2023 under the special session track.
Papers presented at this session will be published as part of IJCB 2023 and should therefore follow the same guidelines as the main conference.
Page limit: a paper can be up to 8 pages including figures and tables, plus additional pages for references only.
Papers will be double-blind peer-reviewed by at least three reviewers. Please remove author names, affiliations, email addresses, etc. from the paper, and remove personal acknowledgements.
Important Dates:
Full Paper Submission: July 17, 2023, 23:59:59 PDT
Acceptance Notice: August 17, 2023, 23:59:59 PDT
Camera-Ready Paper: August 21, 2023, 23:59:59 PDT
Organizing Committee:
Abhijit Das, BITS Pilani, India
Aritra Mukherjee, BITS Pilani, India
Xiangyu Zhu, CAS, China
Recent Advances in Detecting Manipulation Attacks on Biometric Systems
(ADMA-2023) IJCB 2023 - Special Session
Manipulation attacks on biometrics via modified images/videos and other material-based techniques, such as presentation attacks and deepfakes, have become a tremendous threat to the security world owing to increasingly realistic spoofing methods. Such manipulations have therefore focused research attention on robust and reliable methods for detecting biometric manipulation attacks. The recent use of manipulation/generation methods such as auto-encoders and generative adversarial networks, combined with accurate localisation and perceptual learning objectives, has added an extra challenge to manipulation detection tasks. As a result, the performance of existing state-of-the-art manipulation detection methods degrades significantly in unknown scenarios. In addition, real-time processing, manipulation of low-quality media, limited availability of data, and the use of these manipulation detection techniques in forensic investigation are yet to be widely explored. Hence, this special session aims to profile recent developments and push the boundaries of digital manipulation detection techniques for biometric systems.
We invite practitioners, researchers and engineers from biometrics, signal processing, material science, mathematics, computer vision and machine learning to contribute their expertise to address the highlighted challenges. Further, this special session promotes cross-disciplinary research by inviting practitioners from the field of psychology, where human observer (or super-recogniser) analyses can be performed to detect attacks.
Topics of interest include but are not limited to:
Deepfake manipulation and detection techniques
Novel generalised PAD for unknown attacks
Datasets of image manipulation techniques
Databases of image and video manipulations and attacks
Privacy-preserving techniques in digital manipulation attack detection
Image and video synthesis in PAD
Image and video manipulation generation and detection
Human observer analysis in detecting manipulated biometric images
Novel sensors for detecting manipulation attacks
Bias analyses and mitigation in attack detection algorithms
Submission Guidelines:
Submit your papers at https://cmt3.research.microsoft.com/IJCB2023 under the special session track.
Papers presented at ADMA-2023 will be published as part of IJCB 2023 and should therefore follow the same guidelines as the main conference.
Page limit: a paper can be up to 8 pages including figures and tables, plus additional pages for references only.
Papers will be double-blind peer-reviewed by at least three reviewers. Please remove author names, affiliations, email addresses, etc. from the paper, and remove personal acknowledgements.
Important Dates:
Full Paper Submission: July 17, 2023, 23:59:59 PDT
Acceptance Notice: August 17, 2023, 23:59:59 PDT
Camera-Ready Paper: August 21, 2023, 23:59:59 PDT
Organizing Committee:
Abhijit Das, BITS Pilani, India
Raghavendra Ramachandra, NTNU, Norway
Meiling Fang, Fraunhofer IGD, Germany
Dear colleagues, I've written a short note relating to a simulation I ran to clarify my muddy thinking about the effects of bias (towards match or mismatch) in face matching experiments and the way that principal components analysis separates match and mismatch items into different components. I can't see it making a published paper, but I figure others may find it useful, so I've put it up on PsyArXiv: https://psyarxiv.com/f2a9j Bottom line: when participants vary in bias and ability independently, PCA tends to separate match and mismatch trials, especially after varimax rotation.
The (not very elegant) Matlab simulation code is on OSF, linked from the paper.
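For a quick feel for this result without fetching the OSF code, here is a minimal Python sketch along the same lines (an illustrative reconstruction, not the Matlab code itself; the sample sizes, effect structure, and varimax routine below are assumptions):

import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    # Kaiser varimax rotation of an items-by-components loading matrix.
    p, k = loadings.shape
    R = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < total * (1 + tol):
            break
        total = s.sum()
    return loadings @ R

rng = np.random.default_rng(1)
n_sub, n_item = 200, 40
ability = rng.normal(size=n_sub)                  # general matching skill
bias = rng.normal(size=n_sub)                     # independent tendency to say "match"
is_match = np.repeat([True, False], n_item // 2)  # half match, half mismatch items

# Bias helps on match items and hurts on mismatch items; ability helps on both.
sign = np.where(is_match, 1.0, -1.0)
latent = ability[:, None] + bias[:, None] * sign[None, :]
correct = (latent + rng.normal(size=(n_sub, n_item)) > 0).astype(float)

# PCA on the participant-by-item accuracy matrix, then varimax on the item loadings.
X = correct - correct.mean(axis=0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
loadings = vt[:2].T * s[:2]
rotated = varimax(loadings)

for label, Lmat in (("unrotated", loadings), ("varimax", rotated)):
    print(label)
    print("  match items   ", np.abs(Lmat[is_match]).mean(axis=0).round(2))
    print("  mismatch items", np.abs(Lmat[~is_match]).mean(axis=0).round(2))

Under these assumptions the unrotated solution mixes an overall-ability component with a bias component (match items loading one way, mismatch items the other), and varimax rotates them so that match and mismatch items load on largely separate components, which is the pattern described above.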
Comments welcome, to me rather than the whole list.
Peter
Peter Hancock (he/him)
Professor
Psychology, School of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
http://rms.stir.ac.uk/converis-stirling/person/11587
@pjbhancock
Latest paper:
Simulated automated facial recognition systems as decision-aids in forensic face matching tasks.<https://psycnet.apa.org/record/2023-24366-001?doi=1>
https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxge0001310
My messages may arrive outside of the working day but this does not imply any expectation that you should reply outside of your normal working hours. If you wish to respond, please do so when convenient.
________________________________
Scotland's University for Sporting Excellence
The University of Stirling is a charity registered in Scotland, number SC 011159
Apologies for cross-posting
***********************************************************************************
AAP 2023: CALL FOR PAPERS
International Workshop on Automated Assessment of Pain
http://aap-workshop.net/
Submission Deadline: July 20th, 2023
***********************************************************************************
The International Workshop on Automated Assessment of Pain (AAP 2023) will be held in conjunction with ICMI 2023 on October 9th-13th, 2023, in Paris, France.
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
http://aap-workshop.net/
Best regards,
Zakia Hammal, Steffen Walter, Nadia Berthouze
The Center for Brain and Health at New York University Abu Dhabi seeks to
recruit two postdoctoral associates for two projects: 1) a project on the
mechanisms underlying rapid perception and cognition, and 2) a project on
the neural mechanisms underlying interactions of visual and conceptual
systems.
*Project 1: Mechanisms underlying rapid perception and cognition*
*(PI: Prof. David Melcher, Perception and Active Cognition Laboratory)*
Attention, perception, working memory and other aspects of cognition are
limited by time constraints that are linked to the temporal scales of
neural activity. On the one hand, we can find general principles linking
ongoing brain rhythms to the temporal unfolding of thought, from the
sampling rate of sensory perception to the maintenance of active
representations in memory. However, there are also large individual
differences in processing speed within the healthy adult population, across
the developmental lifespan, and when considering clinical and neurological
patient groups. The successful applicant will drive a fascinating project
on the neural correlates of these individual and clinical differences in
speed of information processing.
*Project 2: Neural mechanisms underlying interactions of visual and
conceptual systems*
*(PI: Prof. Olivia Cheung, Objects And Knowledge Laboratory)*
High-level vision, which involves transforming visual inputs into
meaningful concepts such as faces, words, animals, human-made objects, and
scenes, is essential for humans to understand and interact with their
environment. This process relies on a cortical network that supports
perception, learning, memory, and prediction. The study of high-level
vision provides a window into how learning and experience impact the human
brain. The successful applicant will lead a project investigating the complex influences of semantic associations and image statistics on category selectivity, using machine learning and multivariate pattern analysis
techniques. To distinguish the cortical networks and behavioral markers
that are common across categories or unique to specific categories, the
project involves characterizing the similarities and differences in the
processing of multiple categories in healthy and clinical populations.
The positions are funded for two years with the possibility of renewal.
Required expertise includes strong knowledge of cognitive neuroscience and
expertise in at least one of the neuroimaging methodologies involved in the
project (fMRI, EEG or MEG). For a competitive application at the
postdoctoral level, candidates should demonstrate experience in leading
neuroimaging studies, as shown by publications in international scientific
journals. The successful candidates will work in a multidisciplinary Center environment with world-class research infrastructure, alongside PhD-level scientists, graduate students and undergraduate students.
The terms of employment are extremely competitive and include housing and
educational subsidies for children. Applications will be accepted
immediately and candidates will be considered until the positions are
filled.
For more information and to apply via Interfolio:
https://apply.interfolio.com/120844 (Project 1)
https://apply.interfolio.com/122830 (Project 2)
Hello,
The Social Perception Lab at Dartmouth has an open lab manager / RA
position. The position is funded by an NIH grant to investigate the
cognitive and neural basis of developmental prosopagnosia. It requires strong programming skills and an interest in perception, neuroscience, and neuropsychology, and can provide a good springboard into a PhD program.
If you're interested, here's a link to the ad:
https://searchjobs.dartmouth.edu/postings/67520
Please contact me if you have questions.
Thanks,
Brad
Apologies for cross-posting
***********************************************************************************
FGAHI 2019: CALL FOR PAPERS
2nd International Workshop on Face and Gesture Analysis for Health Informatics
Accepted papers will be published at the CVF open access archive.
Submission Deadline Extended: May 1st, 2019.
Camera-Ready Deadline: May 15th, 2019.
***********************************************************************************
The 2nd International Workshop on Face and Gesture Analysis for Health Informatics (FGAHI 2019) will be held in conjunction with IEEE CVPR 2019 on June 16th - June 21st, 2019, in Long Beach, CA.
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
http://fgahi2019.isir.upmc.fr/
Best regards,
Zakia Hammal
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/