Dear Colleagues,
Please find below the invitation to contribute to the 8th Workshop and Competition on Affective & Behavior Analysis in-the-wild (ABAW)<https://affective-behavior-analysis-in-the-wild.github.io/8th/>, to be held in conjunction with the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025.
(1): The Competition is split into the following six Challenges:
* Valence-Arousal Estimation Challenge
* Expression Recognition Challenge
* Action Unit Detection Challenge
* Compound Expression Recognition Challenge
* Emotional Mimicry Intensity Estimation Challenge
* Ambivalence/Hesitancy (AH) Recognition Challenge
The first three Challenges are based on an augmented version of the Aff-Wild2 database, an audiovisual (A/V) in-the-wild database of 594 videos (around 3M frames) of 584 subjects; it is annotated in terms of valence-arousal, expressions and action units.
The 4th Challenge is based on C-EXPR-DB, an A/V in-the-wild database consisting of 400 videos (around 200K frames in total); each frame is annotated in terms of compound expressions.
The 5th Challenge is based on the Hume-Vidmimic2 dataset, which is a multimodal dataset of about 75 hours of video recordings of 2222 subjects; it contains continuous annotations for the intensity of 7 emotional experiences.
The last Challenge is based on the Behavioural Ambivalence/Hesitancy dataset, an A/V dataset of 630 videos (around 5 hours and 430K frames in total); it is annotated in terms of the presence or absence of ambivalence/hesitancy.
Teams are invited to take part in at least one of these Challenges.
There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop; all other teams are also encouraged to submit paper(s) describing their solutions and final results. All accepted papers will be part of the CVPR 2025 proceedings.
More information about the Competition can be found here<https://affective-behavior-analysis-in-the-wild.github.io/8th/#clients>.
Important Dates:
* Call for participation announced, team registration begins, data available: 22 January, 2025
* Final submission deadline: 12 March, 2025
* Winners announcement: 17 March, 2025
* Final paper submission deadline: 21 March, 2025
* Review decisions sent to authors; notification of acceptance: 3 April, 2025
* Camera-ready version deadline: 7 April, 2025
Chairs:
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Panagiotis Tzirakis, Hume AI
Alan Cowen, Hume AI
Eric Granger, École de technologie supérieure, Canada
Marco Pedersoli, École de technologie supérieure, Canada
Simon Bacon, Concordia University, Canada
(2): The Workshop solicits contributions on cutting-edge advancements in analyzing, generating, modeling, and understanding human affect and behavior across multiple modalities, including facial expressions, body movements, gestures and speech. A special emphasis is placed on the integration of state-of-the-art systems designed for in-the-wild analysis, enabling research and applications in unconstrained environments. In parallel, this Workshop will solicit contributions towards building fair, explainable, trustworthy and privacy-aware models that perform well on all subgroups and improve in-the-wild generalisation.
Original high-quality contributions, in terms of databases, surveys, studies, foundation models, techniques and methodologies (either uni-modal or multi-modal; uni-task or multi-task), are solicited on, but not limited to, the following topics:
i) facial expression (basic, compound or other) or micro-expression analysis
ii) facial action unit detection
iii) valence-arousal estimation
iv) physiology-based (e.g., EEG, EDA) affect analysis
v) face recognition, detection or tracking
vi) body recognition, detection or tracking
vii) gesture recognition or detection
viii) pose estimation or tracking
ix) activity recognition or tracking
x) lip reading and voice understanding
xi) face and body characterization (e.g., behavioral understanding)
xii) characteristic analysis (e.g., gait, age, gender, ethnicity recognition)
xiii) group understanding via social cues (e.g., kinship, non-blood relationships, personality)
xiv) video, action and event understanding
xv) digital human modeling
xvi) violence detection
xvii) autonomous driving
xviii) domain adaptation, domain generalisation, few- or zero-shot learning for the above cases
xix) fairness, explainability, interpretability, trustworthiness, privacy-awareness, bias mitigation and/or subgroup distribution shift analysis for the above cases
xx) editing, manipulation, image-to-image translation, style mixing, interpolation, inversion and semantic diffusion for all aforementioned cases
Accepted workshop papers will appear in the CVPR 2025 proceedings.
Important Dates:
Paper submission deadline: 21 March, 2025
Review decisions sent to authors; notification of acceptance: 3 April, 2025
Camera-ready version deadline: 7 April, 2025
Chairs:
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Panagiotis Tzirakis, Hume AI
Alan Cowen, Hume AI
Eric Granger, École de technologie supérieure, Canada
Marco Pedersoli, École de technologie supérieure, Canada
Simon Bacon, Concordia University, Canada
In case of any queries, please contact d.kollias@qmul.ac.uk<mailto:d.kollias@qmul.ac.uk>
Kind Regards,
Dimitrios Kollias,
on behalf of the organising committee
========================================================================
Dr Dimitrios Kollias, PhD, FHEA, M-IEEE, M-BMVA, M-AAAI, M-TCPAMI, AM-IAPR
Lecturer (Assistant Professor) in Artificial Intelligence
Member of Centre for Multimodal AI
Affiliate Member of Centre for Human Centred Computing
Member of Multimedia and Vision Group
Member of Queen Mary Computer Vision Group
Associate Member of Centre for Advanced Robotics
Academic Fellow of Digital Environment Research Institute
School of EECS
Queen Mary University of London
========================================================================
Research Overview
The Wipro WIN Research Fellowship supports PhD enrolment at BITS Pilani to pursue cutting-edge research, drive technological innovation and enhance WIN's competitive edge in smart technologies, sustainability, and cross-domain solutions.
Research Topic/Project 1: Sim2Real and Vision-Language-Action (VLA) Foundation Models for Robot Control and Factory Coordination
Research Topic/Project 2: Development of SIL Safety-Rated Vision-Based Autonomous Mobile Robot (AMR) Navigation Without LiDAR
Fellowship Details
Fellowship Amount: The fellowship provides a stipend of INR 37,000/- plus 27% HRA per month; the stipend will be enhanced after the completion of 2 years. Research scholars will also receive domestic and international travel support and contingency funds for project work.
Eligibility Criteria: Experience in Python and PyTorch; exposure to robotics/computer vision will be a plus.
Deadline: open until the position is filled
How to apply: Interested candidates with the above-mentioned qualifications and relevant experience can send their CV, detailing their educational qualifications, research experience and published research papers (if any), to abhijit.das@hyderabad.bits-pilani.ac.in.
Best regards
Abhijit
------------------------------------------------------------
Dr. Abhijit Das, PhD, SMIEEE, LMIUPRAI, Member APPCAIR<https://appcair.com/applied-ai-faculty.html>
Lead Investigator, Machine Intelligence Group<https://sites.google.com/hyderabad.bits-pilani.ac.in/mig/home>
Assistant Professor
H112, Dept. of Computer Science and Information Systems,
BITS Pilani, Hyderabad Campus.
Organising Chair: 1st Workshop on Computer Vision for Biometrics, Identity and Behaviour (CV4BIB 2025) at ICCV 2025
Competition Co-Chair: IEEE International Joint Conference on Biometrics (IJCB 2026)
Sponsorship Co-Chair: 6th Indian Symposium on Machine Learning (IndoML 2025)
Area Chair: 23rd IEEE Winter Conference on Applications of Computer Vision (WACV 2026)
Contact no: +914066303744 (O)
Website: https://sites.google.com/site/dasabhijit2048/home
Hi Everyone,
I’m getting in touch as we're recruiting two PhD students for two fully funded, four-year Centre-UB studentships<https://www.centre-ub.org/> at the University of Birmingham<https://www.birmingham.ac.uk/>. I’ve included the details below.
If you know anyone who might be interested, I’d be very grateful if you could share this with them. Alternatively, would you be able to forward the information more broadly to students on your UG or Master’s courses?
Many thanks in advance, and please let me know if you have any questions.
Best wishes,
Melissa
Human-Aligned Super-Resolution for Facial Identification: Behavioural Evaluation, Bias Analysis, and Explainable AI – with VisionMetric Ltd
Project supervised by Dr Melissa Colloff<https://www.birmingham.ac.uk/staff/profiles/psychology/colloff-melissa> (Psychology), Professor Howard Bowman<https://www.birmingham.ac.uk/staff/profiles/psychology/bowman-howard> (Computer Science & Psychology) and Professor Heather Flowe<https://www.birmingham.ac.uk/staff/profiles/psychology/flowe-heather> (Psychology), together with Dr Stuart Gibson from VisionMetric Ltd<https://visionmetric.com/>.
Application deadline: Tuesday February 17th 2026, 5pm
Projects now available to view on the Centre-UB website: https://www.centre-ub.org/studentships/new-opportunities/
Background
CCTV footage is a predominant and crucial source of evidence in policing, with an estimated 21 million cameras operating in the UK. Yet more than 80% of real-world footage is too poor in quality to support reliable person identification. This severely limits investigative success and leaves offenders unidentified.
Generative AI–based super-resolution (SR) technologies—such as VisionMetric’s iREVEAL—promise transformative gains by enhancing low-quality facial images. However, there is little scientific evidence on whether these tools improve human accuracy, how they affect machine recognition, and whether they introduce demographic biases.
This interdisciplinary PhD (Psychology with Computer Science) will investigate how generative AI-based super-resolution (SR) technologies influence human and machine-based facial identification. The PhD will combine behavioural experiments, machine learning, and explainable-AI methods to answer the following questions:
1. Do SR techniques improve human face identification accuracy?
2. How do SR-enhanced images affect machine-based facial recognition, and where do human and machine decisions diverge?
3. Do SR methods perform equitably across demographic groups?
4. Can SR models be improved using human perceptual insights?
This project provides extensive interdisciplinary training from subject experts and industry, including in behavioural experimental design and statistical modelling; computer vision and AI techniques; explainable AI and human–machine comparison methods; and responsible innovation.
The student will work closely with VisionMetric<https://visionmetric.com/>, which is a leading SME supplying facial software to police forces in over 30 countries. Two placements at VisionMetric will provide hands-on experience with AI development pipelines and product development.
This is an exceptional opportunity to build a skillset spanning psychology, AI, fairness, and forensic technology, positioning the candidate for careers in academia, applied behavioural science, AI research, technology, or policy.
The project addresses both the societal risks and potential benefits of AI in high-stakes environments.
Candidate
We are looking for a highly talented and dedicated student with a 1st class or 2:1 undergraduate degree in Psychology, Cognitive Science, Computer Science, Neuroscience, Data Science, or a related field. An MSc degree in a relevant area is desirable though not necessary. Experience in coding (e.g., Python/R/Matlab) and experience in behavioural experimentation, statistics, or machine learning is desirable but full training will be provided. Applicants with an interest in human perception, AI ethics, or forensic science are especially encouraged.
Interviews for this studentship are expected to take place on 16th March 2026.
To apply for this studentship, please submit your application using this link<https://app.geckoform.com/public/#/modern/21FO00gwt7o1i40093m7p30d6e>: https://app.geckoform.com/public/#/modern/21FO00gwt7o1i40093m7p30d6e
Further details on the application process are available here: https://www.centre-ub.org/studentships/application-process/
Informal enquiries about the project prior to application can be directed to Dr Melissa Colloff (m.colloff@bham.ac.uk).
————————————————
Enhanced Eyewitness ID: Predicting and Optimising ID Accuracy Through Behavioural Analysis – with Promat
Project supervised by Prof Heather Flowe (Psychology), Dr Jizheng Wan (Computer Science), and Dr Melissa Colloff (Psychology), together with Mr Matt Whitwam from Promat<http://promaps.software/Identification-Parade.aspx>.
Application deadline: Tuesday February 17th 2026, 5pm
Projects now available to view on the Centre-UB website: https://www.centre-ub.org/studentships/new-opportunities/
Background
Accurate eyewitness identification is critical for criminal investigations and public safety. Yet, despite major advances in psychological science, police lineup procedures have changed little in over a century. Most forces still rely on static photographs and on methods that struggle to capture the conditions under which crimes actually occur, such as poor lighting, variable viewpoints, and the use of disguises.
This interdisciplinary PhD offers an exciting opportunity to help modernise eyewitness identification by combining cognitive psychology, immersive technology, and artificial intelligence. The project will test participant witnesses using a mock witness paradigm. Witnesses will be able to adjust lighting, toggle disguise features, and control viewing angle during lineups, creating a memory-congruent identification environment. The project will examine whether these reinstatement opportunities improve accuracy relative to standard, non-adaptive lineups, and how witnesses naturally explore faces under these conditions.
A core innovation of the project is the integration of behavioural data with AI. The student will analyse eye movements, exploration patterns, and verbal reports to develop computational models that predict identification reliability. They will learn to design interpretable, legally robust AI systems, including attention-based deep learning models and reinforcement learning approaches that adapt lineup presentation in real time based on witness behaviour.
A defining feature of the project is close collaboration with Promat, the leading provider of police lineup software in the UK. Through this partnership, the student will gain first-hand experience working with real operational systems, understanding industry constraints, and contributing to research with direct pathways to deployment in policing practice. Joint supervision from Psychology and Computer Science will ensure strong interdisciplinary support while bridging academic research and industry innovation.
Candidate
We are looking for a highly talented and dedicated PhD student with a 1st class or 2:1 degree in the field of Psychology, Cognitive Science, Computer Science, Neuroscience, Data Science, or an allied field. An MSc degree in a relevant area is desirable though not necessary. Experience in coding (e.g., Python/R/Matlab) and experience in behavioural experimentation, statistics, or machine learning is desirable but full training will be provided.
Interviews for this studentship are expected to take place on 20th March 2026.
To apply for this studentship, please submit your application using this link<https://app.geckoform.com/public/#/modern/21FO00gwt7o1i40093m7p30d6e>: https://app.geckoform.com/public/#/modern/21FO00gwt7o1i40093m7p30d6e
Informal enquiries about the project prior to application can be directed to Professor Heather Flowe (h.flowe@bham.ac.uk).
Dr Melissa Colloff (she/her)
Associate Professor of Forensic Psychology
University of Birmingham
School of Psychology
Edgbaston, Birmingham, B15 2TT, UK
office: 324, 52 Pritchatts Road.
phone: +44 (0)121 4144925
email: m.colloff@bham.ac.uk<mailto:m.colloff@bham.ac.uk>
www.birmingham.ac.uk<http://www.birmingham.ac.uk>
Lab Website<https://www.appliedmemorylab.co.uk/> / Lab LinkedIn<https://www.linkedin.com/company/applied-memory-labs/>
Personal Website<https://www.melissacolloff.com/> / Personal LinkedIn<https://www.linkedin.com/in/melissa-colloff-a5363a58/>
I work full-time, but my non-working day is Wednesday. I work flexibly and sometimes send emails at odd times. Please do not feel obliged to reply to this email outside of your normal working hours.