********************************************************************
CALL FOR PAPERS - HBU 2021
11th International Workshop on Human Behavior Understanding (HBU)
Focus theme: Multi-source aspects of behavioral understanding
Held in conjunction with WACV 2021
https://lmi.fe.uni-lj.si/hbu2021
Paper submission deadline: November 2nd, 2020
Notifications: November 18th, 2020
*********************************************************************
ORGANIZERS
Abhijit Das, Indian Statistical Institute, Kolkata, India
Qiang Ji, Rensselaer Polytechnic Institute, United States
Umapada Pal, Indian Statistical Institute, Kolkata, India
Albert Ali Salah, Utrecht University, The Netherlands
Vitomir Štruc, University of Ljubljana, Slovenia
ABOUT
Domains for human behaviour understanding (e.g.,
multimedia, human-computer interaction, robotics, affective computing
and social signal processing) predominantly rely on advanced pattern
recognition techniques to automatically interpret the complex
behavioural patterns generated when humans interact with machines or
with other agents. This
is a challenging research area where many issues are still open,
including the joint modelling of behavioural cues taking place at
different time scales, the inherent uncertainty of machine detectable
evidence of human behaviour, the mutual influence of people involved in
interactions, the presence of long term dependencies in observations
extracted from human behaviour, and the important role of dynamics in
human behaviour understanding. Computer vision is a key technology for
analysis and synthesis of human behaviour but stands to gain much from
multi-modality and multi-source processing, in terms of improving
accuracy, resource use, robustness, and contextualization.
This workshop, organized as part of WACV 2021, will gather researchers
dealing with the problem of modelling human behaviour under its multiple
facets (expression of emotions, display of relational attitudes, the
performance of an individual or joint actions, etc.), with particular
attention to multi-source aspects, including multi-sensor,
multi-participant and multi-modal settings. Example challenges are the
additional resource and robustness constraints, explorations in
information fusion, social and contextual aspects of interactions, and
building multi-source representations of social and affective signals
with the goal of advancing the state-of-the-art.
The HBU workshops, previously organized as satellite events to major
conferences in different disciplines (ICPR’10, AMI’11, IROS’12,
ACMMM’13, ECCV’14, UBICOMP’15, ACMMM’16, FG’18, ECCV’18, ICCV’19), have
the unique aspect of fostering cross-pollination of disciplines, bringing
together researchers from a variety of fields, such as computer vision,
HCI, artificial intelligence, pattern recognition, interaction design,
ambient intelligence, psychology and robotics. The diversity of human
behaviour, the richness of multimodal data that arises from its
analysis, and the multitude of applications that demand rapid progress
in this area ensure that the HBU Workshops provide a timely and relevant
discussion and dissemination platform. For HBU@WACV, we particularly
solicit contributions on human behaviour understanding that combine
multiple sources of information, be it across modalities, sensors, or
subjects under observation. The workshop solicits papers on general
topics related to human behaviour understanding, but with a distinct
focus on multi-source solutions.
TOPICS OF INTEREST
Topics of interest include, but are not limited to:
+ Multimodal solutions for human behaviour modelling and analysis
+ Multimodal solutions towards behavioural biometrics (gait,
handwriting, keystroke dynamics, etc.)
+ Methods for multi-instance learning in behavioural understanding
+ Analysis of multi-participant settings and of social interactions
+ Multi-instance representations for characterizing human health and
empathy
+ Deep learning for multi-party interactions
+ Multimodal deep learning for behaviour understanding
+ Adversarial learning approaches
+ Related sensor technologies
+ Information fusion approaches for behaviour analysis
+ Realistic behaviour synthesis in multiple modalities and for
multi-party settings
+ Mobile and wearable systems for behaviour monitoring
+ Datasets and benchmarks
+ Related applications
PAPER SUBMISSION
Submission instructions can be found at
https://lmi.fe.uni-lj.si/hbu2021/paper-submission/
Please feel free to contact us for any further details.
Abhijit Das, Qiang Ji, Umapada Pal, Albert Ali Salah, Vitomir Štruc
HBU 2021 Organizers
Hello,
I have a student interested in doing a face identity-matching project with Korean faces, both with and without make-up, male and female (specifically looking at the K-Pop style of make-up). I would be really grateful if anyone knows of, or can point me towards, this type of face database.
Best wishes
Carrie
Dear All,
I'm looking for facial images of white men with neutral expression between
18 and 60 years. I already have a set from the Chicago Face Database, but I
would need additional 200-250 images in similar quality. It is important
that there should be no mask around the face, and the whole face should be
visible (from chin to the top of the hair). Coloured pictures are
preferred. They will be used in a go-nogo task in our lab, later possibly
in an online experiment as well. CFD faces will be the "go" trials, these
are already selected and rated, so the quality of the "no-go" stimuli
really needs to be similar to the selected set. Any suggestions?
Thanks in advance
Ferenc Kocsor, PhD
research associate
Institute of Psychology, University of Pécs, Hungary
Dear all,
I am working with Professor Colin Tredoux to analyse verbal or written
descriptions of perpetrators / faces and statements; unfortunately, we do
not have enough diversity in the descriptions that we have collected so
far. Specifically, our sample sizes are small compared to studies that
have used MTurk, and when we have found other researchers’ data on OSF
(e.g., the Alogna et al. replication of the VOE study), our dataset is
overwhelmingly tilted towards descriptions of the ‘same’ perpetrator
(i.e., the moustached bank robber in the original video used by Schooler).
For these reasons, we are reaching out to other researchers for help. We
often collect witness statements in our experiments as part of our
laboratory procedure (encode – recall – recognise), but we do not normally
analyse these statement data. We are hoping that other researchers follow a
similar procedure and have description/statement data available. We’d like
to ask if you would be willing to share your data where you have collected
descriptions of the perpetrator/s or statements about the encoding event, and
also have corresponding lineup identification data.
The data does not need to be processed – we can process it on our side and
recode the variables (and I can send this back to you). To do this, we just
need to know stimuli characteristics (e.g., tp/ta, position of
suspect/target, confidence scale, lineup choice made etc).
Please let me know if you are able to assist us. Apologies again for the
unsolicited e-mail, but any assistance would be much appreciated.
Thanks again! Hope to hear from you.
Kind regards,
Alicia Nortje and Colin Tredoux
--
Alicia Nortje, PhD
Postdoctoral Research Fellow, PhD (Applied Psychology; Psychology and Law)
Eyewitness Laboratory,
Applied Cognitive Science and Experimental Neuropsychology Team (ACSENT) laboratory,
Department of Psychology,
University of Cape Town
Skype: lichza
Dear all,
This is a friendly reminder about the 8th Face Science Symposium, which
will take place on Tuesday, 17 November 2020, opening at 08h30, starting
at 09h00 and ending at 13h00.
The Face Science Symposium is a joint research and support initiative
between the Eyewitness Research Group (UCT, Department of Psychology), the
South African Police Service, and Facial Identification Researchers and
Practitioners. The initiative was formed in 2014, and we have held a
conference every year since.
The format is different this year because the symposium will be hosted
online.
We aim to have between 5 and 7 speakers, depending on the number of keynote
speakers. All presentations will be 20 minutes long, with 5-10 minutes for
questions/discussion.
There is no cost involved: it is free to attend and to present.
To register for the conference, as a delegate or a speaker, please go
here: https://forms.gle/XFKMpw8ZUK91MjV79
Registration as a delegate closes on 12 November. The deadline for
abstract submissions is 30 October.
We want to ensure a range of topics, so submission of an abstract does
not guarantee a presentation slot. We will confirm with you (between 30
October and 5 November) whether your abstract submission was successful.
Registration as a delegate is immediately successful.
Previous topics include police interviewing techniques, eyewitness memory
for multiple perpetrator crimes, interviewing witnesses, reconstruction of
faces from skulls, drawing techniques for faces, different tools for
constructing composites, and person recognition from CCTV footage.
Please share this e-mail far and wide. We are not limited by physical
borders this year, so we can accommodate international delegates and
attendees. Our original intention for this collaboration was to foster
discussion and research between researchers and practitioners who are
interested in questions and issues about faces. Therefore, as always, we
would like a mix of presentations, ranging from academic topics (from
different departments), in-field experience/case studies/cases, and
practical presentations.
Please do not RSVP to me personally; instead, RSVP through the link above.
If you have questions or issues, then you are welcome to e-mail me.
Thanks in advance, and looking forward to spending the day with you at the
virtual symposium.
Kind regards,
Alicia Nortje
--
Alicia Nortje, PhD
Postdoctoral Research Fellow, PhD (Applied Psychology; Psychology and Law)
Eyewitness Laboratory,
Applied Cognitive Science and Experimental Neuropsychology Team (ACSENT) laboratory,
Department of Psychology,
University of Cape Town
Skype: lichza
Dear colleagues
See below for the Intelligent Virtual Agents 2020 program (October 20th-22nd), including this year’s special topic: Exploring connections between computer science, robotics, and psychology.
All welcome! See registration links below.
Sharing widely is much appreciated.
Prof. Rachael E. Jack, Ph.D.
Professor of Computational Social Cognition
Institute of Neuroscience & Psychology
School of Psychology
University of Glasgow
Scotland, G12 8QB
+44 (0) 141 330 5087
Call for Participation
ACM 20th International Conference on Intelligent Virtual Agents
Online Conference, October 20th-22nd 2020
https://iva2020.gla.ac.uk/
---------------------------------------------
2020 Intelligent Virtual Agents
---------------------------------------------
SPECIAL IVA 2020 TOPIC:
Exploring Connections between Computer Science, Robotics and Psychology.
Register now for the Intelligent Virtual Agents (IVA) Annual Conference!
https://iva2020.psy.gla.ac.uk/registration/
*** In light of Covid-19, this year the IVA 2020 conference will be delivered online via Virtual Chair and GatherTown. Our goal is to preserve the spontaneous interactions that invigorate physical conferences while keeping attendance costs low.
Please see https://iva2020.psy.gla.ac.uk/attendees/ for more details ***
IVA is the premier international event for interdisciplinary research on the design, application, and evaluation of Intelligent Virtual Agents (IVAs) – interactive characters that exhibit human-like qualities including communicating using natural human modalities such as facial expressions, speech and gesture. IVA 2020 will showcase cutting-edge research on the design, application, and evaluation of IVAs, including basic research underlying the technology that supports human-agent interaction such as social perception, dialog modeling, and social behavior planning, as well as work on central theoretical issues, uses of virtual agents in psychological research and showcases of working applications.
IVA 2020’s Special Topic was an invitation to researchers and developers across the disciplines of computer science, robotics, psychology and the commercial world to share their work on the challenges and uses of social agent research, in the hope of furthering trans-disciplinary collaboration.
The invited talks this year will include a social roboticist, a neuroscientist and an artist panel:
- Professor Jodi Forlizzi: “HRI and HAI: Merging Perspectives from Two Fields”
- Professor Lars Muckli: “Emotions and other contextual signals in early visual cortex and the computational role for AI”
- Behnaz Farahi, Ph.D.: “Emotive Matter: Affective Computing from Fashion to Architecture”
- Güvenç Özel: “Persuasion Machines: Networked Architectures in the Age of Surveillance Capitalism”
See https://iva2020.psy.gla.ac.uk/program/invited-speakers/ for more details.
CONFERENCE ORGANIZERS
Conference Chairs:
* Stacy Marsella, University of Glasgow
* Rachael Jack, University of Glasgow
Program Chairs:
* Hannes Vilhjalmsson, Reykjavik University
* Pedro Sequeira, SRI International
* Emily Cross, University of Glasgow
* Contact: iva20@easychair.org
Workshop/Demonstration Organization Chairs:
* Lucile Callebert, University of Glasgow
* Florian Pecune, University of Glasgow
* Contact: workshopsdemos.iva2020@gmail.com
Web Site:
* Amol Deshmukh, University of Glasgow
Doctoral Consortium and Volunteer Coordinator:
* Carolyn Saund, University of Glasgow
Call For Abstracts - Workshop on Affective Shared Perception (WASP)
Dear all,
We are delighted to invite you to submit an abstract, and to participate in our Workshop on Affective Shared Perception (WASP), hosted by the International Conference on Developmental Learning and Epigenetic Robotics (ICDL-EPIROB2020).
Please find below all the information!
Cheers,
Pablo
I. Aim and Scope
Our perception of the world, particularly where it is influenced by affective understanding, depends both on sensory perception and on prior knowledge. Most current research on modeling affective behavior as computer models grounds its contribution either in pre-trained, purely data-driven learning models, or in reproducing existing human behavior. Such approaches allow for easily reproducible solutions, but these fail when applied to complex social scenarios.
Understanding shared perception as part of the affective processing mechanisms will allow us to tackle this problem and to provide the next step towards a real-world affective computing system. The goal of this workshop is to present and discuss new findings, theories, systems, and trends in computational models of affective shared perception. The workshop will feature a multidisciplinary list of invited speakers with experience in different aspects of social interaction, which will allow a rich and diverse debate about our overarching theme of affective shared perception.
The workshop will take place virtually on 30 October 2020. It will start at 15h00 CET and last a total of 3.5 hours.
Participation is free, and a Zoom room will be made available shortly before the workshop starts.
II. Registration
Registration for the workshop is free, but to guarantee a place, please register here: shorturl.at/ciCQX
III. Potential Topics
Topics include, but are not limited to:
- Affective perception and learning
- Affective modulation and decision making
- Developmental perspectives of shared perception
- Machine learning for shared perception
- Bio-inspired approaches for affective shared perception
- Affective processing for embodied and cognitive robots
- Multisensory modeling for conflict resolution in shared perception
- New psychological findings on shared perception
- Assistive aspects and applications of shared affective perception
IV. Invited Speakers
Prof. Dr. Ginevra Castellano - Uppsala University, Sweden
Prof. Dr. Ellen Souza - Federal Rural University of Pernambuco, Brazil
Prof. Dr. Yukie Nagai - The University of Tokyo, Japan
V. Submission
Prospective participants in the workshop are invited to submit a contribution as an abstract with a maximum of 350 words.
Submission information: https://www.whisperproject.eu/wasp2020
The abstracts will be peer-reviewed by experts from all over the world.
To encourage the integration with the local affective computing communities, we will allow student abstracts to be submitted in English, Spanish, and Portuguese.
Each accepted abstract will be presented as a 5-minute video (in English!) that will be shared on the workshop's social media. During the workshop, all the videos will be streamed, and the authors will have a joint live Q&A with the audience for 60 minutes.
Participants can also opt-in for participating in our Frontiers Research Topic on Affective Shared Perception ( https://www.frontiersin.org/research-topics/16086/affective-shared-percepti…). The same abstract sent to the workshop can be sent as an abstract submission to the research topic. If you want to opt-in for the research topic, only English submissions will be accepted.
- Abstract submission deadline: 12th of October
- Notification of acceptance: 17th of October
- Frontiers Research Topic Abstract Deadline: 24th of October
- Video submission: 24th of October
VI. Organizers
Pablo Vinicius Alves De Barros, Italian Institute of Technology (IIT), Genova, Italy
Alessandra Sciutti, Italian Institute of Technology (IIT), Genova, Italy
----------------------------------------
Dr. Pablo Barros
Postdoctoral Researcher - CONTACT Unit
Istituto Italiano di Tecnologia – Center for Human Technologies
Via Enrico Melen 83, Building B 16152 Genova, Italy
email: pablo.alvesdebarros@iit.it
website: https://www.pablobarros.net
twitter: @PBarros_br
PhD and Postdoc position in Cognitive Computational Neuroscience Lab at Justus-Liebig University in Giessen, Germany
The newly established research group for Cognitive Computational Neuroscience of Katharina Dobs at Justus-Liebig University Giessen (JLU), Germany, is recruiting a PhD student and a Postdoc starting from October 1st, 2020 or later.
Our lab is based at the Psychology Department at JLU and works in the field of cognitive computational neuroscience at the intersection of human cognitive science, human neuroscience and computational modeling. We are particularly interested in visual neuroscience, and potential projects include human behavioral (e.g., psychophysics, large-scale crowdsourcing) and neuroimaging (e.g., fMRI, EEG) experiments and computational modeling (e.g., CNNs). For more information on the lab’s research, please visit katharinadobs.com or see https://www.youtube.com/watch?v=iycpoi-GX2Y&t.
Our lab and the psychology department at JLU (http://www.allpsych.uni-giessen.de) have excellent links to scientists within Europe and worldwide, and offer a stimulating, multi-national and interdisciplinary research environment. The student/postdoc will be integrated into a thriving research community under the umbrella of the Center for Mind, Brain and Behavior, with strong regional, national and international connections, including numerous collaborative research endeavors (e.g., the SFB on Cardinal Mechanisms of Perception, IRTG 1901 The Brain in Action). Giessen is a university town located in the center of Germany, around 60 km north of Frankfurt.
We encourage applications from candidates who
- are highly motivated to do research on human visual neuroscience and computational modeling
- are familiar with at least one of the methods we use (see above)
- have experience in programming (e.g., Python, Matlab) and strong quantitative skills
- have completed or are close to completing a master’s degree in psychology, computer science, neuroscience or a related field
- enjoy working independently as well as in a team
The PhD position is funded for at least 3 years, with a salary corresponding to 65% of payment group E13 according to the collective agreement of the State of Hesse (TV-H). The postdoc position depends on a successful fellowship application (e.g., DFG, Humboldt Foundation) and international students are particularly encouraged to apply.
To apply, please email your CV, a brief statement of research interests, and the names of up to three references to Katharina Dobs (katharina.dobs@gmail.com).
--
Katharina Dobs
www.katharinadobs.com
katharina.dobs@gmail.com