Dear Colleagues,
Please find below the invitation to contribute to the 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) to be held in conjunction with the IEEE Computer Vision and Pattern Recognition Conference (CVPR), 2023.
(1): The Competition is split into the following four Challenges:
* Valence-Arousal Estimation Challenge
* Expression Classification Challenge
* Action Unit Detection Challenge
* Emotional Reaction Intensity Estimation Challenge
The first three Challenges are based on an augmented version of the Aff-Wild2 database, an audiovisual in-the-wild database of 594 videos of 584 subjects, totalling around 3M frames; it is annotated in terms of valence-arousal, expressions and action units.
The last Challenge is based on the Hume-Reaction dataset, which is a multimodal dataset of about 75 hours of video recordings of 2222 subjects; it contains continuous annotations for the intensity of 7 emotional experiences.
Teams are invited to participate in at least one of these Challenges.
There will be one winner per Challenge. The top-3 performing teams of each Challenge must contribute paper(s) describing their approach, methodology and results to our Workshop; all other teams are also encouraged to submit paper(s) describing their solutions and final results. All accepted papers will be part of the CVPR 2023 proceedings.
More information about the Competition can be found at https://ibug.doc.ic.ac.uk/resources/cvpr-2023-5th-abaw/
Important Dates:
* Call for participation announced, team registration begins, data available:
13 January, 2023
* Final submission deadline:
18 March, 2023
* Winners Announcement:
19 March, 2023
* Final paper submission deadline:
24 March, 2023
* Review decisions sent to authors; Notification of acceptance:
3 April, 2023
* Camera ready version deadline:
8 April, 2023
Chairs:
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Panagiotis Tzirakis, Hume AI
Alice Baird, Hume AI
Alan Cowen, Hume AI
(2): The Workshop solicits contributions on the recent progress of recognition, analysis, generation and modelling of face, body, and gesture, while embracing the most advanced systems available for face and gesture analysis, particularly in-the-wild (i.e., in unconstrained environments) and across modalities such as face to voice. In parallel, this Workshop will solicit contributions towards building fair models that perform well on all subgroups and improve in-the-wild generalisation.
Original high-quality contributions, including:
- databases or
- surveys and comparative studies or
- Artificial Intelligence / Machine Learning / Deep Learning / AutoML / (Data-driven or physics-based) Generative Modelling Methodologies (either Uni-Modal or Multi-Modal; Uni-Task or Multi-Task ones)
are solicited on the following topics:
i) "in-the-wild" facial expression or micro-expression analysis,
ii) "in-the-wild" facial action unit detection,
iii) "in-the-wild" valence-arousal estimation,
iv) "in-the-wild" physiological-based (e.g., EEG, EDA) affect analysis,
v) domain adaptation for affect recognition in the previous 4 cases,
vi) "in-the-wild" face recognition, detection or tracking,
vii) "in-the-wild" body recognition, detection or tracking,
viii) "in-the-wild" gesture recognition or detection,
ix) "in-the-wild" pose estimation or tracking,
x) "in-the-wild" activity recognition or tracking,
xi) "in-the-wild" lip reading and voice understanding,
xii) "in-the-wild" face and body characterization (e.g., behavioral understanding),
xiii) "in-the-wild" characteristic analysis (e.g., gait, age, gender, ethnicity recognition),
xiv) "in-the-wild" group understanding via social cues (e.g., kinship, non-blood relationships, personality),
xv) subgroup distribution shift analysis in affect recognition,
xvi) subgroup distribution shift analysis in face and body behaviour,
xvii) subgroup distribution shift analysis in characteristic analysis
Accepted workshop papers will appear in the CVPR 2023 proceedings.
Important Dates:
Paper Submission Deadline: 24 March, 2023
Review decisions sent to authors; Notification of acceptance: 3 April, 2023
Camera ready version deadline: 8 April, 2023
Chairs:
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Panagiotis Tzirakis, Hume AI
Alice Baird, Hume AI
Alan Cowen, Hume AI
In case of any queries, please contact d.kollias@qmul.ac.uk
Kind Regards,
Dimitrios Kollias,
on behalf of the organising committee
========================================================================
Dr Dimitrios Kollias, PhD, MIEEE, FHEA
Lecturer (Assistant Professor) in Artificial Intelligence
Member of Multimedia and Vision (MMV) research group
Member of Queen Mary Computer Vision Group
Associate Member of Centre for Advanced Robotics (ARQ)
Academic Fellow of Digital Environment Research Institute (DERI)
School of EECS
Queen Mary University of London
========================================================================
Two funded PhD positions are available at the Psychology and Neuroscience
of Cognition (PsyNCog) research unit
<https://www.psyncog.uliege.be/cms/c_5016065/en/about> (University of
Liège, Belgium), under the supervision of Dr. Christel Devue (Cognitive
Psychology research group). We are seeking two highly motivated candidates
to work on two different research projects.
*Position 1 – A cost-efficient mechanism of face learning - Interactions
between stability in appearance and learning conditions (3 years funding)*
The aim of the project is to test a new integrative theory of human face
learning (introduced in a recent paper here
<https://www.sciencedirect.com/science/article/pii/S0010027723002032?via%3Di…>)
that explains how recognition performance changes as familiarity with faces
develops. We hypothesise that the relative stability of a given face’s
appearance interact with learning demands to determine the level of details
that are stored in memory over time and the quality of facial
representations. In that framework, recognition errors are viewed as the
flip side of an otherwise efficient and economical mechanism.
This theory will be tested with (online) behavioural and eye-tracking
experiments that will track the development of facial representations. A
new understanding of human face learning will help address the limits of
facial recognition technologies and contribute to improving the treatment
of people with debilitating face recognition difficulties.
*Position 2 – Spatio-temporal compression in memory for real-world events
(4 years funding)*
Most of the current knowledge on episodic memory comes from laboratory
studies in which participants memorize stimuli under artificial conditions.
Yet, a new line of research suggests that information processing can
manifest in dramatically different ways in the lab and in the real world.
Here, we aim to determine how real-life events, and people and objects that
populate these events, are represented parsimoniously to deal with storage
limitations inherent to the human cognitive system. More specifically, we
will investigate how the complexity of real-world events is summarized and
compressed in episodic memory along the two crucial dimensions of space and
time.
This question will be examined using a novel experimental paradigm that
leverages information gathered by wearable camera technology and mobile
eye-tracking. This project is part of a broader collaboration with Dr.
Arnaud D’Argembeau and will involve working with another PhD student. The
candidate will focus on the spatial aspects and on person recognition
(including face processing).
*Profile*
We are seeking two highly motivated candidates with:
§ A Master's degree in experimental/cognitive psychology, cognitive
neuroscience or equivalent.
§ A strong affinity with or interest in episodic memory and/or face
processing.
§ Excellent academic records.
§ Strong research skills, including experimental design and statistical
analyses.
§ Experience with experiment programming software (e.g., OpenSesame,
E-prime, PsychoPy).
§ Coding skills (e.g., R, Matlab, or Python).
§ Excellent writing and oral communication skills.
§ A good command of English.
§ Organisational and time management skills.
§ Enthusiasm, self-motivation, team spirit and benevolence.
§ Experience with eye-tracking is a plus.
§ Experience with image and/or video editing software is a plus.
§ A command of or willingness to learn French is a plus for Position #2.
*Environment*
The Psychology and Neuroscience of Cognition (PsyNCog) research unit
<https://www.psyncog.uliege.be/cms/c_10112686/en/core-members> is
recognized internationally for its research on human memory. It includes
several research groups that investigate different aspects of memory and
perception, creating a dynamic research environment. The Psychology
department is located on a wooded campus (Sart Tilman
<https://www.campus.uliege.be/cms/c_9038317/en/liege-sart-tilman>) about 15
minutes’ drive from the centre of Liège and well connected by public
transport.
*Procedure*
To apply, please send the following to cdevue@uliege.be with email subject
“Application for PhD position #1 – Face learning” or “Application for PhD
position #2 – Spatio-temporal compression”:
§ A cover letter detailing your background and motivations.
§ A curriculum vitae, including a link to a copy of your master thesis and
a list of research projects in which you were involved, with a brief
description of your contribution.
§ Transcripts and diplomas for bachelor's and master's degrees.
§ Contact details of at least two academic references who agreed to be
contacted.
Applications will be accepted immediately, and candidates will be considered
until the positions are filled. Selected candidates will be invited for an
online interview. Please contact Christel Devue (cdevue@uliege.be) for more
information or informal inquiries.
*Expected starting date: *as soon as possible (negotiable but no later than
December 2023).