Dear Colleagues,
I am writing to you regarding the Society for Affective Science (SAS) and its annual meeting, which will be held in person from Friday, March 1 to Sunday, March 3, 2024 in New Orleans, Louisiana, USA. You can learn more about the 2024 SAS conference here: https://society-for-affective-science.org/conferences/2024-sas-annual-confe…
SAS is an excellent venue to learn about what’s new in affective science. The interdisciplinary nature of the program is also uniquely beneficial.
The submission deadline is Wednesday, November 15, 2023. Don't miss out on the opportunity to present your science to the affective science community! https://society-for-affective-science.org/conferences/2024-sas-annual-confe…
Please let me know if you have any questions. I hope you will join us in New Orleans, LA at the 2024 SAS meeting!
All the best,
Anthony Atkinson
Department of Psychology
Durham University
Durham, UK
https://www.durham.ac.uk/staff/a-p-atkinson/
https://atkinsonap.github.io/
******************************************************************************
SAS 2024 Call for Abstract Submissions Open
The Society for Affective Science (SAS) is delighted to open its call for abstracts to be considered for the 2024 Annual Conference. The conference will be held in person from Friday, March 1 to Sunday, March 3, 2024 in New Orleans, Louisiana, USA. The conference will once again feature Preconference sessions, which will take place on Friday, March 1, 2024, prior to the opening of the Conference.
We encourage submissions from authors at all career stages.
There is no fee to submit an abstract. The conference registration fees will be announced in November.
Abstract Submissions – Four Submission Types
1. Poster: New Idea
2. Poster: New Results
3. Flash Talk
4. Symposium
Each presenting author may submit a maximum of TWO abstracts total, across all tracks (Poster: New Idea, Poster: New Results, Flash Talk, and Symposium). Abstracts submitted by the same presenting author must reflect two different research projects. There is no abstract limit for non-presenting authors.
CLICK HERE FOR FULL DETAILS AND THE SUBMISSION PORTALS<https://urldefense.com/v3/__https:/t.e2ma.net/click/nm5xnk/rdx0y3ub/36f8rac…>
Advancing Interdisciplinary Science
In line with our goal to facilitate interdisciplinarity, we welcome submissions from across all domains of affective science, including anthropology, business, computer science, cultural studies, economics, education, geography, history, integrative medicine, law, linguistics, literature, neuroscience, philosophy, political science, psychiatry, psychology, public health, sociology, theater, and more.
Important Dates
Submission Deadline
Abstracts must be submitted by Wednesday, November 15, 2023 at 11:59 p.m. Baker Island Time (BIT; UTC-12, the last time zone on Earth) to be considered for inclusion in the program.
Please note: due to the earlier conference dates, there will be no extension to the submission deadline as in past years. Plan to submit early.
Submission Review Process
Abstracts will be evaluated on scholarly merit through a double-blind peer review process by our Abstract Review Board.
Notification of acceptance or rejection of abstracts will be e-mailed to the corresponding author by early January 2024.
Presenting authors must be the first author on the submitted abstract. All presenters must register and pay to attend the meeting.
Questions?
For any SAS 2024 conference-related questions, please email sas@podiumconferences.com.
For more updates, watch our website!<https://society-for-affective-science.org/conferences/2024-sas-annual-confe…>
******************************************************************************
Two funded PhD positions are available at the Psychology and Neuroscience
of Cognition (PsyNCog) research unit
<https://www.psyncog.uliege.be/cms/c_5016065/en/about> (University of
Liège, Belgium), under the supervision of Dr. Christel Devue (Cognitive
Psychology research group). We are seeking two highly motivated candidates
to work on two different research projects.
*Position 1 – A cost-efficient mechanism of face learning - Interactions
between stability in appearance and learning conditions (3 years funding)*
The aim of the project is to test a new integrative theory of human face
learning (introduced in a recent paper here
<https://www.sciencedirect.com/science/article/pii/S0010027723002032?via%3Di…>)
that explains how recognition performance changes as familiarity with faces
develops. We hypothesise that the relative stability of a given face’s
appearance interacts with learning demands to determine the level of detail
stored in memory over time and the quality of facial
representations. In that framework, recognition errors are viewed as the
flip side of an otherwise efficient and economical mechanism.
This theory will be tested with (online) behavioural and eye-tracking
experiments that will track the development of facial representations. A
new understanding of human face learning will help address limits of facial
recognition technologies and contribute to the improvement of the treatment
of people with debilitating face recognition difficulties.
*Position 2 -* *Spatio-temporal compression in memory for real-world events
(4 years funding)*
Most of the current knowledge on episodic memory comes from laboratory
studies in which participants memorize stimuli under artificial conditions.
Yet, a new line of research suggests that information processing can
manifest in dramatically different ways in the lab and in the real world.
Here, we aim to determine how real-life events, and people and objects that
populate these events, are represented parsimoniously to deal with storage
limitations inherent to the human cognitive system. More specifically, we
will investigate how the complexity of real-world events is summarized and
compressed in episodic memory along the two crucial dimensions of space and
time.
This question will be examined using a novel experimental paradigm that
leverages information gathered by wearable camera technology and mobile
eye-tracking. This project is part of a broader collaboration with
Dr. Arnaud D’Argembeau and will involve collaborating with another PhD
student. The candidate will focus on the spatial aspects and on person
recognition (including face processing).
*Profile*
We are seeking two highly motivated candidates with:
§ A Master's degree in experimental/cognitive psychology, cognitive
neuroscience, or equivalent.
§ A strong affinity with or interest in episodic memory and/or face
processing.
§ Excellent academic records.
§ Strong research skills, including experimental design and statistical
analyses.
§ Experience with experiment programming software (e.g. OpenSesame,
E-prime, PsychoPy).
§ Coding skills (e.g., R, Matlab, or Python).
§ Excellent writing and oral communication skills.
§ A good command of English.
§ Organisational and time management skills.
§ Enthusiasm, self-motivation, team spirit and benevolence.
§ Experience with eye-tracking is a plus.
§ Experience with image and/or video editing software is a plus.
§ A command of or willingness to learn French is a plus for Position #2.
*Environment*
The Psychology and Neuroscience of Cognition (PsyNCog) research unit
<https://www.psyncog.uliege.be/cms/c_10112686/en/core-members> is
recognized internationally for its research on human memory. It includes
several research groups that investigate different aspects of memory and
perception, creating a dynamic research environment. The Psychology
department is located on a wooded campus (Sart Tilman
<https://www.campus.uliege.be/cms/c_9038317/en/liege-sart-tilman>) about 15
minutes’ drive from the centre of Liège and well connected by public
transport.
*Procedure*
To apply, please send the following to cdevue@uliege.be with email subject
“Application for PhD position #1 – Face learning” or “Application for PhD
position #2 – Spatio-temporal compression”:
§ A cover letter detailing your background and motivations.
§ A curriculum vitae, including a link to a copy of your master thesis and
a list of research projects in which you were involved, with a brief
description of your contribution.
§ Transcripts and diplomas for bachelor's and master's degrees.
§ Contact details of at least two academic references who have agreed to be
contacted.
Applications will be accepted immediately and candidates will be considered
until the positions are filled. Selected candidates will be invited for an
interview online. Please contact Christel Devue (cdevue@uliege.be) for more
information or informal inquiries.
*Expected starting date: *as soon as possible (negotiable but no later than
December 2023).
Hello,
My lab is searching for a postdoctoral fellow who will contribute to an NSF
project investigating eye movements and retinotopic face tuning in
adults, children, and individuals with developmental prosopagnosia.
If you're interested, please see the ad below or click here
<https://apply.interfolio.com/130310>. If you have any questions, I'd be
happy to answer them.
Thanks,
Brad
The Social Perception Lab in Psychological and Brain Sciences at Dartmouth
invites applications for a postdoctoral fellow. We welcome applications
from creative scientists who are eager to develop a research program
involving psychophysics, neuropsychology, perceptual development, and
individual differences. The postdoctoral fellow will play a central role in
an NSF-funded project investigating eye movements and retinotopic face
tuning. The project will examine preferred fixation locations and face
tuning in children, adults at a variety of ages, and individuals with
developmental prosopagnosia.
This is a collaborative project between Brad Duchaine at Dartmouth and
Miguel Eckstein at UC-Santa Barbara. The postdoctoral fellow will be based
at Dartmouth but will regularly interact with Professor Eckstein and will
travel to Santa Barbara to collect data. Both supervisors are committed to
the training and career development of the fellow. For more information on
our work, please visit the Social Perception Lab and the Prosopagnosia
Research Center.
The Department of Psychological and Brain Sciences at Dartmouth offers the
best of a well-resourced, externally funded research university environment
along with the integrative and cross-disciplinary nature of a liberal arts
institution. In particular, our state-of-the-art research and teaching
facility houses human cognitive/social neuroscience and small-animal
behavioral/systems neuroscience in the same building. We have a
concentration of laboratories working on vision, so the postdoctoral fellow
will be part of a supportive community of vision researchers. Beyond the
department, postdoctoral scholars are supported by the Guarini School for
Graduate and Advanced Studies, including their diversity and inclusion
initiatives. The broader neuroscience community includes research programs
in the Department of Biological Sciences, Geisel School of Medicine, Thayer
School of Engineering, and the cross-departmental Integrative Neuroscience
at Dartmouth (IND) graduate program.
The Department of Psychological and Brain Sciences and Dartmouth are
committed to fostering a diverse, equitable, and inclusive population of
students, faculty, and staff. Dartmouth recently launched a new initiative,
Toward Equity, that embraces shared definitions of diversity, equity,
inclusion, and belonging as a foundation for our success in institutional
transformation. We are especially interested in applicants who are able to
work effectively with students, faculty, and staff from all backgrounds and
with different identities and attributes. Our labs regularly host students
participating in undergraduate diversity initiatives in STEM research, such
as our Women in Science Program, E. E. Just STEM Scholars Program, and the
Academic Summer Undergraduate Research Experience (ASURE).
Qualifications
Applicants should have a PhD in Psychology, Neuroscience, or a closely
related field, or be ABD with a degree received before the start of the
appointment. Qualified candidates should have experience with perception
research, substantial programming experience, and an interest in individual
differences and development. We also encourage enquiries from applicants
with other backgrounds.
Application Instructions
Please submit all materials electronically via Interfolio:
Cover Letter that outlines your research interests and qualifications
CV, including contact information for two references.
Review of applications will begin on October 1, 2023 and continue until the
position is filled. The anticipated start date is negotiable. For
enquiries, please contact Professor Brad Duchaine,
bradley.c.duchaine@dartmouth.edu.
Dear colleagues
*Please share; excuse any cross-posting*
We are delighted to announce three full-time *tenured* positions in the Centre for Social, Cognitive and Affective Neuroscience (cSCAN), School of Psychology & Neuroscience, University of Glasgow, Scotland. cSCAN members operate in a research-rich capacity, with teaching-rich staff leading in the innovation and delivery of education.
* Full Professor/Associate Prof (Senior Lecturer)/Assistant Prof (Lecturer)
* Assistant Prof (Lecturer)
We are seeking interdisciplinary researchers with internationally competitive research using innovative approaches to the computational modelling of social perception, cognition, interaction, and/or communication, with a focus on dynamic signalling, dyadic interactions, and/or dialogue in human-human and/or human-agent interactions.
* Research Fellow
We are seeking a researcher who will make a leading contribution to developing social interaction and communication technologies, including technologies to generate 3D dynamic human social signals (such as facial expressions, body movements, and voices), multimodal signals and dyadic interactions, and 3D scenes and other socially relevant multimodal signals, along with the use and development of AI-related technologies to support this work.
See our Nature Careers listing: https://www.nature.com/naturecareers/job/12803261/professor-senior-lecturer… and Twitter post: https://twitter.com/UofG_cSCAN/status/1683499306479755276?s=20
Closing date: 31 October 2023
Questions? Get in touch!
Prof. Rachael E. Jack, Ph.D.
Professor of Computational Social Cognition
School of Psychology & Neuroscience
University of Glasgow
Scotland, G12 8QB
+44 (0) 141 330 5087
Dear All,
I would appreciate it if you'd share the following opportunity with
prospective students.
The *Institute of Psychology at the University of Pecs, Hungary*, has
started a *PhD program for international students*. During the program,
among other possibilities, students can join research that aims to extend
our knowledge about the cognitive and neural background of *face perception*.
We're particularly interested in how semantic knowledge about a person
interacts with affective processes during recognition. The students will
have access to the following equipment in our lab:
- device for accurate reaction time measurements (Cedrus)
- eye-tracker (Tobii TX300)
- physiological measurements (BIOPAC modules: EDA, heart rate, respiration
rate, EMG, etc.)
- EEG (Brain Products, 64 channels)
- Noldus Observer
For a limited number of students who are *EU citizens*, we can provide a
*scholarship* that covers the tuition fee and housing costs.
For citizens of other countries there is a tuition fee (3500 euros per
semester in the first and second year, and 2500 euros per semester in the
third and fourth year).
The application deadline is 15 June 2023 (with a possibility of
extension); prior informal inquiries are advised.
Details about the program:
https://international.pte.hu/study-programs/phd-psychology
Ferenc Kocsor, PhD, habil.
senior researcher
head of the international doctoral program
e-mail: kocsor.ferenc@pte.hu
Digital biometrics has become immensely important in all spheres of life.
Most recent advances are in 3D biometrics, and the face is the most
commonly used body part. Although face biometrics is currently one of the
most widely used modalities after fingerprints, it is vulnerable to many
kinds of presentation attack instruments: mainly videos, photographs, or
masks, and sometimes expert impersonators with prosthetic makeup. 3D face
biometrics is sometimes strengthened with the ear, and in many cases the
ear alone is sufficient to recognize individuals. The ear is agnostic to
expressions and thus easy to recognize, but a plastic ear is also much
easier to forge than a face; 3D ear recognition mitigates this risk to a
considerable extent. 3D vascular and palm-based biometrics have also
gained momentum recently. Thus, in many forms of human biometrics, 3D
information is crucial, but the need for sophisticated and expensive
hardware deters widespread adoption. To record and promote research in
this area, we plan to host this special session. We invite practitioners,
researchers, and engineers from the biometrics, signal processing,
computer vision, and machine learning fields to contribute their expertise
and advance the state of the art.
Topics of interest include but are not limited to
• 3D shape capturing and reconstruction for the human body or body parts
from monocular vision
• 3D vasculature and palm-based biometrics from monocular vision
• 3D ear biometrics from monocular vision
• 3D air signature from monocular vision
• Passive 3D gait-based biometric recognition from monocular vision
• 3D face by the monocular vision for biometric application
• Emotion and artifact agnostic 3D biometrics by monocular vision
• Multimodal sensors for real-time 3D shape capturing
• 3D face estimation with high occlusion and monocular camera
• 3D information capture under low lighting conditions from the monocular
camera
• 3D biometrics from short videos
• Advancement in inexpensive single-shot sensor technology for 3D
biometrics capture
Submission Guidelines:
Submit your papers at:
https://cmt3.research.microsoft.com/IJCB2023 in a special session track.
Papers presented at this session will be published as part of the
IJCB 2023 proceedings and should therefore follow the same guidelines as
the main conference.
Page limit: A paper can be up to 8 pages including figures
and tables, plus additional pages for references only.
Papers will be double-blind peer-reviewed by at least three
reviewers. Please remove author names, affiliations, email addresses, etc.
from the paper. Remove personal acknowledgements.
Important Dates:
Full Paper Submission: July 17, 2023, 23:59:59 PDT
Acceptance Notice: August 17, 2023, 23:59:59 PDT
Camera-Ready Paper: August 21, 2023, 23:59:59 PDT
Organizing Committee:
Abhijit Das, BITS Pilani, India
Aritra Mukherjee, BITS Pilani, India
Xiangyu Zhu, CAS, China
Recent Advances in Detecting Manipulation Attacks on Biometric Systems
(ADMA-2023) IJCB 2023 - Special Session
Manipulation attacks on biometrics via modified images/videos and other
material-based techniques, such as presentation attacks and deepfakes,
have become a tremendous threat to the security world owing to
increasingly realistic spoofing methods. Such manipulations have triggered
the need for research attention towards robust and reliable methods for
detecting biometric manipulation attacks. The recent adoption of
manipulation/generation methods such as auto-encoders and generative
adversarial networks, combined with accurate localisation and perceptual
learning objectives, has added an extra challenge to manipulation
detection. As a result, the performance of existing state-of-the-art
manipulation detection methods degrades significantly in unknown
scenarios. In addition, real-time processing, manipulation of low-quality
media, the limited availability of data, and the use of these manipulation
detection techniques in forensic investigation are yet to be widely
explored. Hence, this special session aims to profile recent developments
and push the boundaries of digital manipulation detection techniques for
biometric systems.
We invite practitioners, researchers and engineers from biometrics, signal
processing, material science, mathematics, computer vision and machine
learning to contribute their expertise to underpin the highlighted
challenges. Further, this special session promotes cross-disciplinary
research by inviting practitioners in the field of psychology, where one
can perform human observer (or super-recogniser) analyses to detect
attacks.
Topics of interest include but are not limited to:
Deepfake manipulation and detection technique
Novel generalised PAD to unknown attacks
Image manipulation techniques datasets
Database in image and video manipulation, and attacks
Privacy-preserving techniques in digital manipulation attack
detection
Image and video synthesis in PAD
Image and video manipulation generation and detection
Human observer analysis in detecting the manipulated
biometric images
Novel sensors for detecting manipulated attacks
Bias analyses and mitigation in attack detection algorithms
Submission Guidelines:
Submit your papers at:
https://cmt3.research.microsoft.com/IJCB2023 in a special session track.
Papers presented at ADMA-2023 will be published as part of the IJCB 2023
proceedings and should therefore follow the same guidelines as the main
conference.
Page limit: A paper can be up to 8 pages including figures
and tables, plus additional pages for references only.
Papers will be double-blind peer-reviewed by at least three
reviewers. Please remove author names, affiliations, email addresses, etc.
from the paper. Remove personal acknowledgements.
Important Dates:
Full Paper Submission: July 17, 2023, 23:59:59 PDT
Acceptance Notice: August 17, 2023, 23:59:59 PDT
Camera-Ready Paper: August 21, 2023, 23:59:59 PDT
Organizing Committee:
Abhijit Das, BITS Pilani, India
Raghavendra Ramachandra, NTNU, Norway
Meiling Fang, Fraunhofer IGD, Germany
Dear colleagues, I've written a short note relating to a simulation I ran to clarify my muddy thinking about the effects of bias (towards match or mismatch) in face matching experiments and the way that principal components analysis separates match and mismatch items into different components. I can't see it making a published paper but I figure others may find it useful, so I've put it up on psyarxiv: https://psyarxiv.com/f2a9j Bottom line: when participants vary in bias and ability independently, PCA tends to separate match and mismatch trials, especially after varimax rotation.
The (not very elegant) Matlab simulation code is on OSF, linked from the paper.
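The mechanism the note describes can be sketched with a toy simulation (a hypothetical Python analogue of the idea, not the Matlab code from the paper; the participant count, trial counts, and effect sizes below are invented for illustration). Each simulated participant has an independent ability and bias; bias inflates scores on match trials and deflates them on mismatch trials, so the leading principal component picks up ability while the second splits match items from mismatch items:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 500                    # hypothetical number of participants
n_match = n_mis = 10           # hypothetical trials per type

ability = rng.normal(0, 1.5, n_sub)   # general matching skill
bias = rng.normal(0, 1.0, n_sub)      # independent "say match" tendency

# Bias raises scores on match trials and lowers them on mismatch trials
match = ability[:, None] + bias[:, None] + rng.normal(0, 1, (n_sub, n_match))
mismatch = ability[:, None] - bias[:, None] + rng.normal(0, 1, (n_sub, n_mis))
data = np.hstack([match, mismatch])

# PCA via the eigendecomposition of the item correlation matrix
corr = np.corrcoef(data, rowvar=False)
evals, evecs = np.linalg.eigh(corr)         # eigenvalues in ascending order
pc1, pc2 = evecs[:, -1], evecs[:, -2]

# PC1 loads the same way on every trial (ability);
# PC2 loads with opposite signs on match vs mismatch trials (bias)
print(np.sign(np.mean(pc1[:n_match]) * np.mean(pc1[n_match:])))
print(np.sign(np.mean(pc2[:n_match]) * np.mean(pc2[n_match:])))
```

With ability given more variance than bias, the two leading eigenvalues are well separated, so the match/mismatch split shows up in the raw loadings even before varimax rotation; rotation would only sharpen it.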
Comments welcome, to me rather than the whole list.
Peter
Peter Hancock (he/him)
Professor
Psychology, School of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
http://rms.stir.ac.uk/converis-stirling/person/11587
@pjbhancock
Latest paper:
Simulated automated facial recognition systems as decision-aids in forensic face matching tasks.<https://psycnet.apa.org/record/2023-24366-001?doi=1>
https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxge0001310
My messages may arrive outside of the working day but this does not imply any expectation that you should reply outside of your normal working hours. If you wish to respond, please do so when convenient.
________________________________
Scotland's University for Sporting Excellence
The University of Stirling is a charity registered in Scotland, number SC 011159
Apologies for cross-posting
***********************************************************************************
AAP 2023: CALL FOR PAPERS
International Workshop on Automated Assessment of Pain
http://aap-workshop.net/
Submission Deadline: July 20th, 2023
***********************************************************************************
The International Workshop on Automated Assessment of Pain (AAP
2023) will be held in conjunction with ICMI 2023 on October 9th–13th, 2023, in Paris, France.
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
http://aap-workshop.net/
Best regards,
Zakia Hammal, Steffen Walter, Nadia Berthouze