Dear Colleagues,

Please find below the invitation to contribute to the 7th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) <https://affective-behavior-analysis-in-the-wild.github.io/7th>, to be held in conjunction with the European Conference on Computer Vision (ECCV), 2024.

(1): The Competition is split into the following two Challenges:

* Multi-Task-Learning Challenge (three tasks are considered: i) valence-arousal estimation, ii) expression recognition and iii) action unit detection; a static version of the Aff-Wild2 database is used)
* Compound Expression Recognition Challenge (a part of C-EXPR-DB is used)

Participants are invited to take part in at least one of these Challenges. There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop; all other teams are also encouraged to submit paper(s) describing their solutions and final results. All accepted papers will be part of the ECCV 2024 proceedings.

More information about the Competition can be found here: <https://affective-behavior-analysis-in-the-wild.github.io/7th/>
Important Dates:

* Call for participation announced, team registration begins, data available: 09 May
* Final submission deadline: 18 July
* Winners announcement: 24 July
* Final paper submission deadline: 29 July
* Notification of acceptance: 25 August
* Camera-ready version deadline: 31 August

Chairs:

Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Abhinav Dhall, Flinders University, Australia
Shreya Ghosh, Curtin University, Australia

(2): The Workshop solicits contributions on the recent progress of recognition, analysis, generation-synthesis and modelling of face, body, gesture, speech, audio, text and language, embracing the most advanced systems available for such in-the-wild (i.e., in unconstrained environments) analysis, and across modalities such as face to voice. In parallel, the Workshop solicits contributions towards building fair, explainable, trustworthy and privacy-aware models that perform well on all subgroups and improve in-the-wild generalisation.
Original high-quality contributions, in terms of databases, surveys, studies, foundation models, techniques and methodologies (either uni-modal or multi-modal; uni-task or multi-task), are solicited on, but are not limited to, the following topics:

i) facial expression (basic, compound or other) or micro-expression analysis
ii) facial action unit detection
iii) valence-arousal estimation
iv) physiological-based (e.g., EEG, EDA) affect analysis
v) face recognition, detection or tracking
vi) body recognition, detection or tracking
vii) gesture recognition or detection
viii) pose estimation or tracking
ix) activity recognition or tracking
x) lip reading and voice understanding
xi) face and body characterization (e.g., behavioral understanding)
xii) characteristic analysis (e.g., gait, age, gender, ethnicity recognition)
xiii) group understanding via social cues (e.g., kinship, non-blood relationships, personality)
xiv) video, action and event understanding
xv) digital human modeling
xvi) violence detection
xvii) autonomous driving
xviii) domain adaptation, domain generalisation, few- or zero-shot learning for the above cases
xix) fairness, explainability, interpretability, trustworthiness, privacy-awareness, bias mitigation and/or subgroup distribution shift analysis for the above cases
xx) editing, manipulation, image-to-image translation, style mixing, interpolation, inversion and semantic diffusion for all aforementioned cases

Accepted workshop papers will appear in the ECCV 2024 proceedings.
Important Dates:

* Paper submission deadline: 29 July
* Review decisions sent to authors; notification of acceptance: 25 August
* Camera-ready version deadline: 31 August

Chairs:

Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Irene Kotsia, Cogitat Ltd, UK
Abhinav Dhall, Flinders University, Australia
Shreya Ghosh, Curtin University, Australia

In case of any queries, please contact d.kollias@qmul.ac.uk

Kind Regards,
Dimitrios Kollias, on behalf of the organising committee

========================================================================
Dr Dimitrios Kollias, PhD, FHEA, M-IEEE, M-BMVA, M-AAAI, M-TCPAMI, AM-IAPR
Lecturer (Assistant Professor) in Artificial Intelligence
Member of Centre for Multimodal AI
Affiliate Member of Centre for Human Centred Computing
Member of Multimedia and Vision Group
Member of Queen Mary Computer Vision Group
Associate Member of Centre for Advanced Robotics
Academic Fellow of Digital Environment Research Institute
School of EECS
Queen Mary University of London
========================================================================