Dear face researchers,
I hope you are doing well.
I am Haiyang Jin, a researcher working on face perception and cognition, currently based in Hangzhou, the host city of the upcoming Joint ESCoP–CoPM Meeting 2026 (31 August – 4 September 2026 in Hangzhou, China). This meeting is a joint conference of the European Society for Cognitive Psychology (ESCoP) and the Cognitive Processes and Modelling Division of the Chinese Psychological Society — bringing together cognitive researchers from Europe, China, and beyond.
Conference website: https://www.escop-copm-2026.com/
More about Hangzhou: https://www.escop-copm-2026.com/cd_hz
As face processing is a vibrant and cross-disciplinary area spanning perception, attention, memory, social cognition, and neurocognition, I am interested in reaching out to see whether there are researchers who would be interested in participating in (or co-organizing) a symposium on face processing at the ESCoP–CoPM meeting.
Possible aims of the symposium could include:
• Bringing together work on face perception, face recognition, social signals, neural mechanisms, and/or computational models of face processing,
• Highlighting cross-cultural perspectives or methodological advances,
• Providing a platform for European and Chinese researchers to meet and share research within this domain.
If you are planning to attend ESCoP–CoPM 2026 and would be interested in contributing to a face processing–related symposium (as speaker, chair, or discussant), please reply to this email by early March. I would be happy to coordinate a short planning call or email thread to prepare a symposium proposal together for submission before the deadline.
Of course, if you have specific themes within face processing that you would like to propose, that is also very welcome.
Looking forward to hearing from you and hopefully seeing many of you in Hangzhou!
Best regards,
Haiyang
————————————————————
Haiyang Jin Ph.D.
Distinguished Associate Professor
Department of Psychology
Zhejiang Sci-Tech University
928 No. 2 Street, Qiantang District
Hangzhou, Zhejiang, China
E-mail: haiyang.jin@outlook.com
Phone: (+86) 17280159066
Website: https://haiyangjin.github.io/
ORCID: https://orcid.org/0000-0003-3290-3901
Github: https://github.com/HaiyangJin
Please do not feel obligated to reply to my email outside your normal working hours.
Dear Colleagues,
Please find below the invitation to contribute to the 8th Workshop and Competition on Affective & Behavior Analysis in-the-wild (ABAW) (https://affective-behavior-analysis-in-the-wild.github.io/8th/), to be held in conjunction with the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025.
(1): The Competition is split into the following six Challenges:
* Valence-Arousal Estimation Challenge
* Expression Recognition Challenge
* Action Unit Detection Challenge
* Compound Expression Recognition Challenge
* Emotional Mimicry Intensity Estimation Challenge
* Ambivalence/Hesitancy (AH) Recognition Challenge
The first three Challenges are based on an augmented version of the Aff-Wild2 database, an audiovisual (A/V) in-the-wild database of 594 videos of 584 subjects, totalling around 3M frames; it contains annotations in terms of valence-arousal, expressions and action units.
The 4th Challenge is based on C-EXPR-DB, an A/V in-the-wild database consisting of 400 videos totalling around 200K frames; each frame is annotated in terms of compound expressions.
The 5th Challenge is based on the Hume-Vidmimic2 dataset, a multimodal dataset of about 75 hours of video recordings of 2,222 subjects; it contains continuous annotations for the intensity of seven emotional experiences.
The last Challenge is based on the Behavioural Ambivalence/Hesitancy dataset, an A/V dataset of 630 videos (about 5 hours, around 430K frames); it contains annotations in terms of the presence or absence of ambivalence/hesitancy.
Teams are invited to participate in at least one of these Challenges.
There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute papers describing their approach, methodology and results to our Workshop; all other teams are also encouraged to submit papers describing their solutions and final results. All accepted papers will be part of the CVPR 2025 proceedings.
More information about the Competition can be found at https://affective-behavior-analysis-in-the-wild.github.io/8th/#clients.
Important Dates:
* Call for participation announced, team registration begins, data available:
22 January, 2025
* Final submission deadline:
12 March, 2025
* Winners Announcement:
17 March, 2025
* Final paper submission deadline:
21 March, 2025
* Review decisions sent to authors; Notification of acceptance:
3 April, 2025
* Camera ready version deadline:
7 April, 2025
Chairs:
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Panagiotis Tzirakis, Hume AI
Alan Cowen, Hume AI
Eric Granger, École de technologie supérieure, Canada
Marco Pedersoli, École de technologie supérieure, Canada
Simon Bacon, Concordia University, Canada
(2): The Workshop solicits contributions on cutting-edge advancements in analyzing, generating, modeling, and understanding human affect and behavior across multiple modalities, including facial expressions, body movements, gestures and speech. A special emphasis is placed on the integration of state-of-the-art systems designed for in-the-wild analysis, enabling research and applications in unconstrained environments. In parallel, this Workshop will solicit contributions towards building fair, explainable, trustworthy and privacy-aware models that perform well on all subgroups and improve in-the-wild generalisation.
Original high-quality contributions, in terms of databases, surveys, studies, foundation models, techniques and methodologies (either uni-modal or multi-modal; uni-task or multi-task), are solicited on, but are not limited to, the following topics:
i) facial expression (basic, compound or other) or micro-expression analysis
ii) facial action unit detection
iii) valence-arousal estimation
iv) physiology-based (e.g., EEG, EDA) affect analysis
v) face recognition, detection or tracking
vi) body recognition, detection or tracking
vii) gesture recognition or detection
viii) pose estimation or tracking
ix) activity recognition or tracking
x) lip reading and voice understanding
xi) face and body characterization (e.g., behavioral understanding)
xii) characteristic analysis (e.g., gait, age, gender, ethnicity recognition)
xiii) group understanding via social cues (e.g., kinship, non-blood relationships, personality)
xiv) video, action and event understanding
xv) digital human modeling
xvi) violence detection
xvii) autonomous driving
xviii) domain adaptation, domain generalisation, few- or zero-shot learning for the above cases
xix) fairness, explainability, interpretability, trustworthiness, privacy-awareness, bias mitigation and/or subgroup distribution shift analysis for the above cases
xx) editing, manipulation, image-to-image translation, style mixing, interpolation, inversion and semantic diffusion for all aforementioned cases
Accepted workshop papers will appear in the CVPR 2025 proceedings.
Important Dates:
Paper Submission Deadline: 21 March, 2025
Review decisions sent to authors; Notification of acceptance: 3 April, 2025
Camera ready version deadline: 7 April, 2025
Chairs:
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Panagiotis Tzirakis, Hume AI
Alan Cowen, Hume AI
Eric Granger, École de technologie supérieure, Canada
Marco Pedersoli, École de technologie supérieure, Canada
Simon Bacon, Concordia University, Canada
In case of any queries, please contact d.kollias@qmul.ac.uk
Kind Regards,
Dimitrios Kollias,
on behalf of the organising committee
========================================================================
Dr Dimitrios Kollias, PhD, FHEA, M-IEEE, M-BMVA, M-AAAI, M-TCPAMI, AM-IAPR
Lecturer (Assistant Professor) in Artificial Intelligence
Member of Centre for Multimodal AI
Affiliate Member of Centre for Human Centred Computing
Member of Multimedia and Vision Group
Member of Queen Mary Computer Vision Group
Associate Member of Centre for Advanced Robotics
Academic Fellow of Digital Environment Research Institute
School of EECS
Queen Mary University of London
========================================================================