*Final Call for Papers (no further extensions)*
*ICPR 2024: 2nd Workshop on Fairness in Biometric Systems*
Biometric systems have spread worldwide and are therefore increasingly involved in critical decision-making processes, including finance, public security, and forensics. Despite their increasing impact on everybody's daily life, many biometric solutions perform highly divergently for different groups of individuals, as previous work has shown. Consequently, the recognition performance of such systems is significantly affected by demographic and non-demographic attributes of users. This brings to the fore the discriminatory and unfair treatment of users of such systems.
At the same time, several legal instruments, such as Article 7 of the Universal Declaration of Human Rights and Recital 71 of the General Data Protection Regulation (GDPR), have highlighted the importance of the right to non-discrimination. These efforts underline the pressing need for analyzing and mitigating equability concerns in biometric systems. Given the increasing impact on everybody's daily life, as well as the associated social interest, research on fairness in biometric solutions is urgently needed.
Topics of interest include:
• Developing and analyzing biometric datasets
• Proposing metrics related to equability in biometrics (an illustrative sketch follows this list)
• Demographic and non-demographic factors in biometric systems
• Investigating and mitigating equability concerns in biometric algorithms, including:
  o Identity verification and identification
  o Soft-biometric attribute estimation
  o Presentation attack detection
  o Template protection
  o Biometric image generation
  o Quality assessment
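To make the metrics topic above concrete, here is a minimal, purely illustrative Python sketch (not part of the official call) of how per-group verification error rates and a simple disparity measure might be computed; the score threshold, toy data, and the max-gap measure are assumptions for illustration only.

```python
# Illustrative sketch only (not from the CFP): comparing verification error
# rates across demographic groups. Scores, group labels, and the 0.5
# decision threshold are hypothetical placeholders.
from collections import defaultdict

def per_group_error_rates(scores, same_identity, groups, threshold=0.5):
    """False match rate (FMR) and false non-match rate (FNMR) per group."""
    stats = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for score, same, group in zip(scores, same_identity, groups):
        if same:
            stats[group]["gen"] += 1
            if score < threshold:        # genuine pair rejected
                stats[group]["fnm"] += 1
        else:
            stats[group]["imp"] += 1
            if score >= threshold:       # impostor pair accepted
                stats[group]["fm"] += 1
    return {
        g: {
            "FMR": d["fm"] / d["imp"] if d["imp"] else 0.0,
            "FNMR": d["fnm"] / d["gen"] if d["gen"] else 0.0,
        }
        for g, d in stats.items()
    }

def max_gap(rates, key):
    """One simple (illustrative) disparity measure: the largest gap in an error rate across groups."""
    values = [r[key] for r in rates.values()]
    return max(values) - min(values)

# Hypothetical usage with toy data:
rates = per_group_error_rates(
    scores=[0.9, 0.2, 0.8, 0.4],
    same_identity=[True, False, True, False],
    groups=["A", "A", "B", "B"],
)
print(rates, "FMR gap:", max_gap(rates, "FMR"))
```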
*Important Dates*
--------------------------------
Full Paper Submission: September 8, 2024
Acceptance Notice: September 20, 2024
Camera-Ready Paper: September 24, 2024
Workshop: December 01, 2024
Best regards
Abhijit
------------------ --------------------- ---- --------------------
*Prof. Abhijit Das*, PhD, SMIEEE, LMIUPRAI, Member APPCAIR
<https://appcair.com/applied-ai-faculty.html>
Assistant Professor
Machine Intelligence Group
<https://sites.google.com/hyderabad.bits-pilani.ac.in/mig/home>
Dept. of Computer Science and Information Systems,
BITS Pilani, Hyderabad Campus.
Contact no: +914066303744 (O)
Web: https://sites.google.com/site/dasabhijit2048/home
A postdoctoral researcher position is available at the Objects and
Knowledge Laboratory, headed by Dr. Olivia Cheung, at NYU Abu Dhabi (
https://www.oliviacheunglab.org). The postdoctoral researcher will carry
out experiments on high-level vision (e.g., object, face, letter, scene
recognition) in humans using behavioral, fMRI, and computational methods.
Potential research projects include, but are not limited to, investigations
of the influences of experience and conceptual knowledge on visual
recognition.
Apply here: https://apply.interfolio.com/145342
Applicants must have a Ph.D. in Psychology, Cognitive Neuroscience,
Cognitive Science, or a related field, and should possess strong
programming skills (e.g., R, Matlab or Python) and a strong publication
record in topics of high-level vision. Prior experience with fMRI,
computational, or psychophysical techniques is highly preferred. Initial
appointment is for two years, with the possibility of renewal. The start
date is flexible, and the position is available from August 15, 2024.
The Objects and Knowledge Laboratory is part of the rapidly growing Cognitive Neuroscience community at NYU Abu Dhabi. The lab has access to state-of-the-art neuroimaging and behavioral facilities (including MRI, MEG, EEG, and eye tracking).
For consideration, applicants should submit a cover letter, curriculum vitae, statement of research interests, expected date of availability, and at least two letters of recommendation here <https://apply.interfolio.com/145342>. Informal inquiries regarding the position, university, or area are encouraged. If you have any questions, please email Dr. Olivia Cheung (olivia.cheung@nyu.edu). Applications will be accepted immediately, and candidates will be considered until the position is filled.
The terms of employment are very competitive, including relocation and housing costs and other benefits, among which are educational subsidies for children. The NYU Abu Dhabi campus is located on Saadiyat Island (Abu Dhabi's cultural hub), minutes away from white-sand beaches as well as the world-class entertainment, big-city, and nature activities that have made the area one of the top ten tourist destinations in the world. More information about living in Abu Dhabi can be found here:
https://nyuad.nyu.edu/en/campus-life/living-in-abu-dhabi.html
*About NYUAD:*
NYU Abu Dhabi is a degree-granting research university with a fully
integrated liberal arts and science undergraduate program in the Arts,
Sciences, Social Sciences, Humanities, and Engineering. NYU Abu Dhabi, NYU New York, and NYU Shanghai form the backbone of NYU's global network university, an interconnected network of portal campuses and academic centers across six continents that enables seamless international mobility
of students and faculty in their pursuit of academic and scholarly
activity. This global university represents a transformative shift in
higher education, one in which the intellectual and creative endeavors of
academia are shaped and examined through an international and multicultural
perspective. As a major intellectual hub at the crossroads of the Arab
world, NYU Abu Dhabi serves as a center for scholarly thought, advanced research, and knowledge creation and sharing through its academic, research, and creative activities.
Dear all
We're excited to announce two EPSRC-funded PhD scholarships on CS+Psych projects hosted in the School of Psychology and Neuroscience in collaboration with the School of Computing Science at the University of Glasgow. Please share widely. Deadline: 17 June.
Project 1: The unconscious effect of physical beauty in human social interactions
The aim of the project is to investigate how "physical beauty" can bias the outcomes of social decisions (e.g., job interviews). To do this, we aim to create an algorithm able to transform the "physical beauty" of participants in real time and use that algorithm during negotiations to see how it influences social outcomes and non-verbal behavior. We are searching for a multidisciplinary candidate who is interested in real-time computer vision (e.g., voice/face transformation and analysis) as well as social cognition (e.g., social interactions, non-verbal data analysis, social biases).
Project description: https://www.gla.ac.uk/postgraduate/doctoraltraining/mvls-epsrc/projects/pab…
Project team members: Pablo Arias Sarah (primary supervisor), Alessandro Vinciarelli (co-supervisor), Mathieu Chollet (co-supervisor)
Questions? Contact: pablo.arias@glasgow.ac.uk
Project 2: High-fidelity 3D facial reconstruction for social signal understanding
Human faces convey a wealth of rich social and emotional information: facial expressions often convey our internal emotional states, while the shape, colour, and texture of faces can betray our age, sex, and ethnicity. As a highly salient source of social information, human faces are integral to shaping social communication and interactions. The faces in a video can be viewed as a temporal sequence of facial images with intrinsic dynamic changes, and establishing correlations between faces in different frames is important for tracking and reconstructing faces from video. Jointly modelling fine facial geometry and appearance in a data-driven manner enables a model to learn the relationship between a single 2D face image and the corresponding 3D face model, and thus to reconstruct a high-quality 3D face model by leveraging the high capacity of deep neural networks. This project will investigate computational methods for high-fidelity 3D facial tracking in video for social signal analysis in social interaction scenarios. It involves developing computational models for the reconstruction of 3D facial details that capture geometric facial expression changes, and for analysing social signals.
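As a purely illustrative sketch of the data-driven 2D-to-3D mapping described above (not the project's actual model), the following PyTorch snippet regresses morphable-model style shape, expression, and texture coefficients from a single face image; the architecture, coefficient dimensions, and input size are assumptions for illustration.

```python
# Illustrative sketch only (not the project's actual method): a small CNN that
# regresses 3D morphable-model style coefficients (shape, expression, texture)
# from a single 2D face crop, as in typical data-driven 2D-to-3D reconstruction.
import torch
import torch.nn as nn

class FaceCoeffRegressor(nn.Module):
    def __init__(self, n_shape=100, n_expr=50, n_tex=50):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, n_shape + n_expr + n_tex)
        self.sizes = [n_shape, n_expr, n_tex]

    def forward(self, image):                 # image: (B, 3, H, W) face crop
        coeffs = self.head(self.backbone(image))
        # Split into shape, expression, and texture coefficient vectors.
        return torch.split(coeffs, self.sizes, dim=1)

# Hypothetical usage: per-frame coefficients for a short video clip.
model = FaceCoeffRegressor()
frames = torch.randn(8, 3, 224, 224)          # 8 placeholder video frames
shape, expr, tex = model(frames)              # per-frame 3D face parameters
```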
Project description: https://www.gla.ac.uk/postgraduate/doctoraltraining/mvls-epsrc/projects/hui…
Project team members: Hui Yu (primary supervisor), Rachael Jack (co-supervisor), Tanaya Guha (co-supervisor)
Questions? Contact: Hui.Yu@glasgow.ac.uk
Prof. Rachael E. Jack, Ph.D.
Professor of Computational Social Cognition
School of Psychology & Neuroscience
University of Glasgow
Scotland, G12 8QB
+44 (0) 141 330 5087
A nearly final program for our workshop here, July 25-26th, is now available. The posters are still a bit fluid: if you'd like to bring one, please let me know.
In keeping with the workshop format, we're allowing extended time for discussion after each set of talks.
To book attendance, which is free: https://faceresearch.stir.ac.uk/july-workshop/
Peter
Program
Thursday 25th July
9:00 Registration
Session 1 Face representations
9:30 How the learning of unfamiliar faces is affected by their resemblance to familiar faces
Katie L.H. Gray, Maddie Atkinson, Kay Ritchie, Peter Hancock
9:50 How Does Increased Familiarity Change Face Representation in Memory?
Mintao Zhao, Isabelle Bülthoff
10:10 The contribution of distinctive features to cost-efficient facial representations
Christel Devue and Mathieu Blondel
10:30 Discussion
10:50 Coffee break
11:30 Keynote 1: Meike Ramon: Unique traits, computational insights: studying Super-Recognizers for societal applications
12:30 Lunch
Session 2: Decision making
13:30 Human computer teaming with low mismatch incidence,
Anna Bobak, Melina Muller, Peter Hancock
13:50 Unfamiliar face matching and metacognitive efficiency
Robin Kramer, Robert McIntosh
14:10 Distinct criterion placement for intermixed face matching tasks
Kristen A. Baker, Markus Bindemann
14:30 Discussion
14:50 Break and posters
16:00 Keynote 2: Alice O’Toole: Dissecting Face Representations in Deep Neural Networks: Implications for Rethinking Neural Codes
17:00 Break
18:00 Public Lecture: Peter Hancock: Face recognition by humans and computers: criminal injustice?
19:30 Dinner
Friday 26th July
Session 3: Factors affecting face recognition
9:30 Face masks and fake masks: Have we been underestimating the problem of face masks in face identity perception?
Kay L Ritchie, Daniel J Carragher, Josh P Davis, Katie Read, Ryan E Jenkins, Eilidh Noyes, Katie LH Gray, Peter JB Hancock
9:50 Identification of masked faces: typical observers, super-recognisers, forensic examiners and algorithms.
Eilidh Noyes, Reuben Moreton, Peter Hancock, Kay Ritchie, Sergio Castro Martinez, Katie Gray, and Josh Davis
10:10 Individual variation, socio-emotional functioning and face perception
Karen Lander, Grace Talbot, Anastasia Murphy & Richard Brown
10:30 Discussion
10:50 Coffee
Session 4: Identification of suspects
11:20 Identity Recognition of Composites Constructed of Unfamiliar Faces
Charlie Frowd
11:40 Inverse caricature effects in eyewitness identification performance and deep learning models of face recognition
Gaia Giampietro, Ryan McKay, Thora Bjornsdottir, Laura Mickes, Nicholas Furl
12:00 Implicit markers of concealed face recognition
Ailsa Millen
12:20 Discussion
13:00 Workshop end
Posters
As good as it gets? Computer-enhanced recognition of single-view faces does not improve performance across matching or recognition tasks. Scott P Jones, Peter Hancock
"They're just not my cup of tea": random preferences are more important than random effects in modelling facial attractiveness ratings. Thomas Hancock, Peter Hancock, Anthony Lee, Morgan Sidari, Amy Zhao, Brendan Zietsch
Investigating the modulatory effects of emotional expressions on short-term face familiarity. Constantin-Iulian Chiță, Simon Paul Liversedge, Philipp Ruhnau
Human-computer teaming with low quality images. Dan Carragher, Peter Hancock, David White
Wisdom of the crowds, within and between individuals, Dan Carragher and Peter Hancock
Islands of Expertise and face matching. Emily Cunningham, Anna Bobak, Peter Hancock
Investigating Face Recognition Ability in Neurodiverse Individuals. Caelan Dow, Anna Bobak, Jud Lowes
The Heterogeneity of Face Processing in Developmental Prosopagnosia from a Single Case Analysis Approach, Benjamin Armstrong, Anna Bobak, Jud Lowes
The effects of age on face recognition. Zsofi Kovacs-Bodo, Stephen Langton, Peter Hancock & Anna Bobak
Seeing through the lies: effectiveness of eye-tracking measures for the detection of concealed recognition of newly familiar faces and objects. Amir Shapira and Ailsa Millen
Peter Hancock (he/him)
Professor
Psychology, School of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
http://rms.stir.ac.uk/converis-stirling/person/11587
@pjbhancock
Latest papers:
Face masks and fake masks: the effect of real and superimposed masks on face matching with super-recognisers, typical observers, and algorithms https://rdcu.be/dxAIR
Balanced Integration Score: A new way of classifying Developmental Prosopagnosia
https://www.sciencedirect.com/science/article/abs/pii/S0010945224000054
My messages may arrive outside of the working day but this does not imply any expectation that you should reply outside of your normal working hours. If you wish to respond, please do so when convenient.
Web: www.stir.ac.uk
Dear Colleagues,
Please find below the invitation to contribute to the 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) to be held in conjunction with the IEEE Computer Vision and Pattern Recognition Conference (CVPR), 2023.
(1): The Competition is split into the below four Challenges:
* Valence-Arousal Estimation Challenge
* Expression Classification Challenge
* Action Unit Detection Challenge
* Emotional Reaction Intensity Estimation Challenge
The first three Challenges are based on an augmented version of the Aff-Wild2 database, an audiovisual in-the-wild database of 594 videos of 584 subjects with around 3M frames; it contains annotations in terms of valence-arousal, expressions, and action units.
The last Challenge is based on the Hume-Reaction dataset, a multimodal dataset of about 75 hours of video recordings of 2,222 subjects; it contains continuous annotations for the intensity of 7 emotional experiences.
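For orientation, valence-arousal estimation in the ABAW series has typically been evaluated with the Concordance Correlation Coefficient (CCC); the minimal sketch below shows the standard formula (the exact protocol for this edition is defined on the competition page, so treat the metric choice here as an assumption).

```python
# Minimal sketch of the Concordance Correlation Coefficient (CCC), the metric
# commonly used for valence-arousal estimation in the ABAW series (see the
# competition page for this edition's exact evaluation protocol).
import numpy as np

def ccc(predictions, targets):
    """CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x = np.asarray(predictions, dtype=float)
    y = np.asarray(targets, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Perfect agreement gives CCC = 1.0:
print(ccc([0.1, -0.3, 0.5], [0.1, -0.3, 0.5]))
```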
Teams are invited to participate in at least one of these Challenges.
There will be one winner per Challenge. The top-3 performing teams of each Challenge will be required to contribute papers describing their approach, methodology, and results to our Workshop; all other teams are also encouraged to submit papers describing their solutions and final results. All accepted papers will be part of the CVPR 2023 proceedings.
More information about the Competition can be found at https://ibug.doc.ic.ac.uk/resources/cvpr-2023-5th-abaw/.
Important Dates:
* Call for participation announced, team registration begins, data available: 13 January, 2023
* Final submission deadline: 18 March, 2023
* Winners announcement: 19 March, 2023
* Final paper submission deadline: 24 March, 2023
* Review decisions sent to authors; notification of acceptance: 3 April, 2023
* Camera-ready version deadline: 8 April, 2023
Chairs:
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Panagiotis Tzirakis, Hume AI
Alice Baird, Hume AI
Alan Cowen, Hume AI
(2): The Workshop solicits contributions on recent progress in the recognition, analysis, generation, and modelling of face, body, and gesture, while embracing the most advanced systems available for face and gesture analysis, particularly in-the-wild (i.e., in unconstrained environments) and across modalities such as face to voice. In parallel, this Workshop will solicit contributions towards building fair models that perform well on all subgroups and improve in-the-wild generalisation.
Original high-quality contributions, including:
- databases or
- surveys and comparative studies or
- Artificial Intelligence / Machine Learning / Deep Learning / AutoML / (Data-driven or physics-based) Generative Modelling Methodologies (either Uni-Modal or Multi-Modal; Uni-Task or Multi-Task ones)
are solicited on the following topics:
i) "in-the-wild" facial expression or micro-expression analysis,
ii) "in-the-wild" facial action unit detection,
iii) "in-the-wild" valence-arousal estimation,
iv) "in-the-wild" physiological-based (e.g.,EEG, EDA) affect analysis,
v) domain adaptation for affect recognition in the previous 4 cases
vi) "in-the-wild" face recognition, detection or tracking,
vii) "in-the-wild" body recognition, detection or tracking,
viii) "in-the-wild" gesture recognition or detection,
ix) "in-the-wild" pose estimation or tracking,
x) "in-the-wild" activity recognition or tracking,
xi) "in-the-wild" lip reading and voice understanding,
xii) "in-the-wild" face and body characterization (e.g., behavioral understanding),
xiii) "in-the-wild" characteristic analysis (e.g., gait, age, gender, ethnicity recognition),
xiv) "in-the-wild" group understanding via social cues (e.g., kinship, non-blood relationships, personality)
xv) subgroup distribution shift analysis in affect recognition
xvi) subgroup distribution shift analysis in face and body behaviour
xvii) subgroup distribution shift analysis in characteristic analysis
Accepted workshop papers will appear at CVPR 2023 proceedings.
Important Dates:
Paper Submission Deadline: 24 March, 2023
Review decisions sent to authors; Notification of acceptance: 3 April, 2023
Camera-ready version: 8 April, 2023
Chairs:
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Panagiotis Tzirakis, Hume AI
Alice Baird, Hume AI
Alan Cowen, Hume AI
In case of any queries, please contact d.kollias@qmul.ac.uk
Kind Regards,
Dimitrios Kollias,
on behalf of the organising committee
========================================================================
Dr Dimitrios Kollias, PhD, MIEEE, FHEA
Lecturer (Assistant Professor) in Artificial Intelligence
Member of Multimedia and Vision (MMV) research group
Member of Queen Mary Computer Vision Group
Associate Member of Centre for Advanced Robotics (ARQ)
Academic Fellow of Digital Environment Research Institute (DERI)
School of EECS
Queen Mary University of London
========================================================================
Hello,
I hope this message finds you well.
We are excited to announce an upcoming workshop that aims to break new ground in the realm of affective computing. The workshop, titled “From Lab to Life: Realising the Potential of Affective Computing”, will encourage discussion between academic and industry experts and promises to be an enriching experience for professionals and researchers alike.
Machine capabilities are on the rise. New advances in AI and Robotics have enabled the creation of ever more competent artificial systems that have the potential to contribute to various types of human activities. However, for this potential to result in a step-change in how humans and machines interact and work with each other, machines also need to be competent at understanding their human counterparts. How can task-competent machines become competent teammates, assistants, and companions for human users? How can technology make sense of human behaviour, responses, and experiences?
Affective Computing research has been spearheading the effort to answer these questions and resolve challenges of human-machine interaction. To take the unique insights and innovations developed in Affective Computing from lab prototypes to robust and reliable technology solutions for human users, there is a need for academic and industry researchers to come together. In this workshop, we create a forum for this conversation structured around three specific themes: (1) ethics and regulations, (2) industry perspectives, and (3) academic perspectives.
Here’s what you can expect from the workshop:
1. Expert insights: gain valuable insights from renowned experts in academia and industry who will share their experiences, perspectives, and ethical considerations on affective computing.
2. Interactive discussions: participate in discussions with experts by submitting a 2-page perspective piece and sharing your perspectives during a moderated panel where you and the speakers can discuss them.
3. Networking opportunities: connect with fellow participants, industry professionals, and researchers to exchange ideas, forge new partnerships, and explore potential collaborations.
Whether you are a seasoned researcher, an industry professional, or someone in between who is interested in the latest developments in affective computing, in the ethical concerns of bringing affective computing to industry, or in how affective computing products can safely reach consumers, this workshop offers a unique opportunity to expand your knowledge, broaden your network, and contribute to the advancement of this exciting field.
We invite you to share your insights by submitting your work to our workshop by June 12th, 2024. For more information on what to submit and how, please visit https://www.cambridgeconsultants.com/acii2024-fromlabtolife/.
Thank you for taking the time to consider our invitation and we hope to see you at our workshop this September.
Kind Regards,
Emma Hughson
Senior Affective Computing Engineer, Human Machine Understanding
Cambridge Consultants Ltd
29 Science Park, Milton Road,
Cambridge, CB4 0DW, UK
www.cambridgeconsultants.com
Sign up for our newsletter: https://www.cambridgeconsultants.com/newsletter