The first three Challenges are based on an augmented version of the Aff-Wild2 database, an audiovisual (A/V) in-the-wild database of 594 videos (around 3M frames) of 584 subjects; it contains annotations in terms of valence-arousal, expressions and action units.
The 4th Challenge is based on C-EXPR-DB, an A/V in-the-wild database consisting of 400 videos totalling around 200K frames; each frame is annotated in terms of compound expressions.
The 5th Challenge is based on the Hume-Vidmimic2 dataset, a multimodal dataset of about 75 hours of video recordings of 2,222 subjects; it contains continuous annotations for the intensity of 7 emotional experiences.
Important dates:
22 January, 2025
12 March, 2025
17 March, 2025
21 March, 2025
3 April, 2025
7 April, 2025
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Panagiotis Tzirakis, Hume AI
Alan Cowen, Hume AI
Eric Granger, École de technologie supérieure, Canada
Marco Pedersoli, École de technologie supérieure, Canada
Simon Bacon, Concordia University, Canada
(2) The Workshop solicits contributions on cutting-edge advancements in analyzing, generating, modeling, and understanding human affect and behavior across multiple modalities, including facial expressions, body movements, gestures and speech. A special emphasis is placed on the integration of state-of-the-art systems designed for in-the-wild analysis, enabling research and applications in unconstrained environments. In parallel, the Workshop solicits contributions towards building fair, explainable, trustworthy and privacy-aware models that perform well on all subgroups and improve in-the-wild generalisation.
Original high-quality contributions, in terms of databases, surveys, studies, foundation models, techniques and methodologies (either uni-modal or multi-modal; uni-task or multi-task), are solicited on, but are not limited to, the following topics:
i) facial expression (basic, compound or other) or micro-expression analysis
ii) facial action unit detection
iii) valence-arousal estimation
iv) physiology-based (e.g., EEG, EDA) affect analysis
v) face recognition, detection or tracking
vi) body recognition, detection or tracking
vii) gesture recognition or detection
viii) pose estimation or tracking
ix) activity recognition or tracking
x) lip reading and voice understanding
xi) face and body characterization (e.g., behavioral understanding)
xii) characteristic analysis (e.g., gait, age, gender, ethnicity recognition)
xiii) group understanding via social cues (e.g., kinship, non-blood relationships, personality)
xiv) video, action and event understanding
xv) digital human modeling
xvi) violence detection
xvii) autonomous driving
xviii) domain adaptation, domain generalisation, few- or zero-shot learning for the above cases
xix) fairness, explainability, interpretability, trustworthiness, privacy-awareness, bias mitigation and/or subgroup distribution shift analysis for the above cases
xx) editing, manipulation, image-to-image translation, style mixing, interpolation, inversion and semantic diffusion for all aforementioned cases
Accepted workshop papers will appear in the CVPR 2025 proceedings.
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Panagiotis Tzirakis, Hume AI
Alan Cowen, Hume AI
Eric Granger, École de technologie supérieure, Canada
Marco Pedersoli, École de technologie supérieure, Canada
Simon Bacon, Concordia University, Canada