Dear Colleagues,

Please find below an invitation to contribute to the 6th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW), to be held in conjunction with the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024.


 
(1): The Competition comprises the following five Challenges:

  • Valence-Arousal Estimation Challenge
  • Expression Recognition Challenge
  • Action Unit Detection Challenge
  • Compound Expression Recognition Challenge
  • Emotional Mimicry Intensity Estimation Challenge


The first three Challenges are based on an augmented version of the Aff-Wild2 database, an audiovisual in-the-wild database of 594 videos of 584 subjects, totalling around 3M frames; it is annotated in terms of valence-arousal, expressions and action units.
The fourth Challenge is based on C-EXPR-DB, an audiovisual in-the-wild database consisting of 400 videos totalling around 200K frames; each frame is annotated in terms of compound expressions.
The last Challenge is based on the Hume-Vidmimic2 dataset, a multimodal dataset of about 75 hours of video recordings of 2,222 subjects; it contains continuous annotations for the intensity of seven emotional experiences.

Participants are invited to take part in at least one of these Challenges.
There will be one winner per Challenge. The top-3 performing teams of each Challenge will be required to contribute paper(s) describing their approach, methodology and results to our Workshop; all other teams are also encouraged to submit paper(s) describing their solutions and final results. All accepted papers will be part of the CVPR 2024 proceedings.

More information about the Competition can be found here.



Important Dates:

  • Call for participation announced, team registration begins, data available: 13 January, 2024
  • Final submission deadline: 18 March, 2024
  • Winners announcement: 25 March, 2024
  • Final paper submission deadline: 30 March, 2024
  • Review decisions sent to authors; notification of acceptance: 10 April, 2024
  • Camera-ready version deadline: 14 April, 2024



Chairs:

Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Panagiotis Tzirakis, Hume AI
Alan Cowen, Hume AI




(2): The Workshop solicits contributions on recent progress in the recognition, analysis, generation-synthesis and modelling of face, body, gesture, speech, audio, text and language, embracing the most advanced systems available for such in-the-wild (i.e., unconstrained-environment) analysis, including cross-modal settings such as face-to-voice. In parallel, the Workshop solicits contributions towards building fair, explainable, trustworthy and privacy-aware models that perform well on all subgroups and generalise in-the-wild.


Original high-quality contributions, in terms of databases, surveys, studies, foundation models, techniques and methodologies (uni-modal or multi-modal; uni-task or multi-task), are solicited on, but are not limited to, the following topics:

i) facial expression (basic, compound or other) or micro-expression analysis
ii) facial action unit detection
iii) valence-arousal estimation
iv) physiological-based (e.g., EEG, EDA) affect analysis
v) face recognition, detection or tracking
vi) body recognition, detection or tracking
vii) gesture recognition or detection
viii) pose estimation or tracking
ix) activity recognition or tracking
x) lip reading and voice understanding
xi) face and body characterisation (e.g., behavioural understanding)
xii) characteristic analysis (e.g., gait, age, gender, ethnicity recognition)
xiii) group understanding via social cues (e.g., kinship, non-blood relationships, personality)
xiv) video, action and event understanding
xv) digital human modelling
xvi) violence detection
xvii) autonomous driving
xviii) domain adaptation, domain generalisation, few- or zero-shot learning for the above cases
xix) fairness, explainability, interpretability, trustworthiness, privacy-awareness, bias mitigation and/or subgroup distribution shift analysis for the above cases
xx) editing, manipulation, image-to-image translation, style mixing, interpolation, inversion and semantic diffusion for all aforementioned cases

 

Accepted workshop papers will appear in the CVPR 2024 proceedings.



Important Dates:


  • Paper submission deadline: 30 March, 2024
  • Review decisions sent to authors; notification of acceptance: 10 April, 2024
  • Camera-ready version deadline: 14 April, 2024
 



Chairs:

Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Panagiotis Tzirakis, Hume AI
Alan Cowen, Hume AI

In case of any queries, please contact d.kollias@qmul.ac.uk.


Kind Regards,
Dimitrios Kollias,
on behalf of the organising committee



========================================================================

Dr Dimitrios Kollias, PhD, MIEEE, MBMVA, MAAAI, AMIARP, FHEA

Lecturer (Assistant Professor) in Artificial Intelligence

Member of Multimedia and Vision (MMV) research group 

Member of Queen Mary Computer Vision Group 

Member of Health Data in Practice (HDiP) Theme

Associate Member of Centre for Advanced Robotics (ARQ) 
Academic Fellow of Digital Environment Research Institute (DERI) 

School of EECS

Queen Mary University of London

========================================================================