Postdoctoral Research Assistant/Associate in
Computing the Face Syntax of Social Communication
Grade 6/7
Dr. Rachael Jack is delighted to announce the opening of a 3-year ERC-funded postdoctoral researcher position on the project Computing the Face Syntax of Social Communication at the Institute of Neuroscience & Psychology and School of Psychology at the University of Glasgow, Scotland, UK.
The Project. This ambitious project aims to mathematically model the human face as an algebraic generator of dynamic social signals and build a psychologically and culturally valid generative model of social face signalling that is transferable to social robots. The project will use a multidisciplinary approach that combines social and cultural psychology with dynamic 3D structural face computer graphics, vision science psychophysical methods, and mathematical psychology. Given that the project involves interdisciplinary knowledge and skills, the ideal candidate would have experience of both computational (e.g., programming) and social psychology, for example via a joint degree or research experience/interests.
Research Environment. The successful applicant will experience a unique and intellectually stimulating research environment within the Institute of Neuroscience & Psychology, undertake a specific programme of specialist research skill development, and contribute to progressing an internationally competitive and strategic research agenda. The applicant will have access to (1) a unique, state-of-the-art 4D structural face imaging technology and dynamic face movement generator; (2) specialist in-house training on advanced quantitative methods and statistical analyses (e.g., 4D image processing, model fitting); (3) postdoctoral communities; (4) a dedicated full-time Research Technologist specializing in 3D and 4D computer graphics; (5) a dedicated full-time computing support team who provide data storage (>5 petabytes), high-security data management systems, and high-performance equipment and software; (6) a secure online Subject Pool (7,000+ members, 106 nationalities); (7) international collaborators; and (8) a full suite of brain imaging facilities including 7T fMRI, MEG, EEG, and TMS.
The Team
Principal Investigator: Dr. Rachael E. Jack
http://www.gla.ac.uk/schools/psychology/staff/rachaeljack/
The successful applicant will join an internationally renowned, high-performance interdisciplinary research team and receive regular, close mentorship and collegial interaction from PI Jack and other lab members via lab meetings. The successful applicant will develop and apply state-of-the-art specialist skills and knowledge of social face perception and face signalling, including 3D & 4D face capture and generation, advanced MATLAB programming, lab testing booth preparation, high-volume data collection, mathematically modelling 3D dynamic face signals, analyzing high-dimensional data, scientific writing, and producing high-quality data visualizations for presentations and high-profile publications. The successful applicant will also have the opportunity to present at national and international academic conferences, participate in public engagement activities, and submit their work to high-impact and specialist peer-reviewed academic journals. Successful applicants may also have the opportunity to work with other interested parties (e.g., social robotics designers).
Affiliate labs. The Jack lab regularly interacts with and has joint lab meetings with the following labs:
Prof. Stacy Marsella
https://www.gla.ac.uk/researchinstitutes/neurosciencepsychology/staff/stacy…
Prof. Philippe G. Schyns
http://www.gla.ac.uk/researchinstitutes/neurosciencepsychology/staff/philip…
Start date: May 2020 (negotiable)
Closing date: 20th February 2020
Reference number: 032999
Apply here: https://www.jobs.ac.uk/job/BYD853/research-assistant-associate
Full-time Permanent Faculty Positions: Centre for Social, Cognitive and Affective Neuroscience (cSCAN) at the Institute of Neuroscience & Psychology (INP), University of Glasgow, Scotland
The Centre for Social, Cognitive and Affective Neuroscience (cSCAN) is actively pursuing multiple research appointments at all levels (Lecturer/Assistant Professor, Senior Lecturer/Associate Professor, Reader/Associate Professor, Professor/Full Professor) to enrich our internationally connected, research-intensive environment and world-leading reputation in social perception, social cognition, social neuroscience, social interaction and communication, with an emerging leadership in behaviour change. cSCAN members operate in research-rich roles, with teaching-rich staff leading the innovation and delivery of education.
Research focus. cSCAN is a uniquely interdisciplinary research environment that brings together international researchers across a range of complementary disciplines including Psychology, Neuroscience, Cognitive Science, Computer Science and Engineering who work together to address central questions in social, cognitive, and affective science.
Interdisciplinary links. Our researchers have close links with centres and departments across the university, including:
* Computing Science. Our interdisciplinary Glasgow Social Robotics group was recently awarded a UKRI Centre for Doctoral Training (CDT) in Socially Intelligent Artificial Agents, with funding for 50 PhD students over the next 8 years (https://socialcdt.org/)
* Institute of Health and Well-Being. Many of our members work closely with top researchers in this world-leading interdisciplinary research centre focused on improving population health and reducing inequalities.
Facilities and in-house expertise. A wide variety of state-of-the-art methods and technologies are available:
* Full, on-site brain-imaging suite comprising 7T fMRI, 3T fMRI, MEG, EEG and TMS via the Centre for Cognitive Neuroimaging (CCNi), with expertise in combining technologies (e.g., fMRI+EEG, MEG+eye tracking, EEG+TMS). Affiliate facilities include ECoG at The Glasgow Epilepsy Centre.
* A range of technologies used for studying human behavior as well as for the design of applications, including virtual reality environments and cSCAN's own state-of-the-art tools for creating virtual and robotic social agents, specifically for generating dialogue, posture, gestures, facial actions and facial morphology.
* Multiple technologies for capturing behaviour, including eye trackers, 3D facial morphology, motion capture systems and voice capture.
Researchers also have the following dedicated research support facilities:
* Online, secure Subject Pool (1000+ members, 16-80+ years, 100+ nationalities)
* In-house computing support team providing 6+ petabytes of storage; High-Performance Computing (HPC: 5k cores, 5 TB RAM, 20+ GPU cards); high-security data management systems; advice on, sourcing, and installation of high-performance computing equipment and software; and development of online research facilities
Links to cSCAN and Institute of Neuroscience & Psychology
https://www.gla.ac.uk/researchinstitutes/neurosciencepsychology/research/cs…
http://cscan.gla.ac.uk
https://www.gla.ac.uk/researchinstitutes/neurosciencepsychology/
Please contact cSCAN-vacancies@glasgow.ac.uk for further information.
It is the University of Glasgow's mission to foster an inclusive climate, which ensures equality in our working, learning, research, and teaching environment. We strongly endorse the principles of Athena SWAN, including a supportive and flexible working environment, with commitment from all levels of the organization in promoting gender equity.
The University of Glasgow, charity number SC004401
CALL FOR PAPERS:
10th IEEE International Conference on Development and Learning (ICDL)
(This is the successor to the previous ICDL-EpiRob conference series)
7th-10th September 2020, Valparaiso, Chile
https://cdstc.gitlab.io/icdl-2020/
An IEEE Computational Intelligence Society-sponsored conference
==== Important Dates ====
Submission deadline: 15th March 2020
Author notification: 15th May 2020
Camera ready due: 1st July 2020
Conference: 7th-10th September 2020
==== Overview ====
The IEEE International Conference on Development and Learning (ICDL),
previously referred to as ICDL-EpiRob, is the premier gathering of
professionals dedicated to the advancement of cognitive and
developmental learning. As such, ICDL is a unique conference gathering
researchers from computer science, robotics, psychology and
developmental studies.
ICDL is a highly selective annual international conference that aims to
showcase and share the very best interdisciplinary and multidisciplinary
research on how humans and animals develop sensing, reasoning and actions.
The ICDL community focuses on understanding how biological agents take
advantage of interaction with social and physical environments to develop
their cognitive capabilities, and on how this knowledge can be used to
improve future computing and robotic systems.
We invite submissions for the conference to explore, extend, and
consolidate the interdisciplinary boundaries of this exciting research
field.
==== Scope and Topics ====
The primary list of topics of interest includes, but is not limited to:
- principles and theories of development and learning;
- development of skills in biological systems and robots;
- nature vs nurture, developmental stages;
- models on the contributions of interaction to learning;
- verbal, non-verbal and multi-modal interaction;
- models on active learning;
- architectures for lifelong learning;
- emergence of body and affordance perception;
- analysis and modelling of human motion and state;
- models for prediction, planning and problem solving;
- models of human-human and human-robot interaction;
- emergence of verbal and non-verbal communication;
- epistemological foundations and philosophical issues;
- robot prototyping of human and animal skills;
- ethics and trust in computational intelligence and robotics;
- social learning in humans, animals, and robots.
==== Submissions ====
Authors are invited to submit original and unpublished papers of six
pages, with the possibility of two extra pages for a fee. Submissions must
use the IEEE conference template. Submissions will be selected for either
oral or poster presentation based on the reviews. Accepted and presented
regular six-page papers will be included in the conference proceedings,
published in IEEE Xplore after the conference.
For more information about submissions, travel grants, social events,
etc., visit the conference webpage at https://cdstc.gitlab.io/icdl-2020/
==== Organizing committee ====
General Chairs: Giulio Sandini and Javier Ruiz-del-Solar
Program and Finance Chairs: Nicolás Navarro-Guerrero and María-José Escobar
Bridge Chairs: Minoru Asada, Frédéric Alexandre, and Linda Smith
Publicity Chairs: Carmelo Bastos, Maya Cakmak, Angelo Cangelosi, Yukie Nagai, and Emre Ugur
Publication Chairs: Pablo Barros and Haian Wu
Tutorials and Workshops Chair: Miguel Solis
Travel and Registration Awards Chair: Francisco Cruz
Local Chair: Mauricio Araya
Webpage Chairs: Cristóbal Nettle and Patricio Castillo
Graphics: Camila Angel Alfaro
Best regards from the organizing committee,
Nicolás Navarro-Guerrero
https://nicolas-navarro-guerrero.github.io/
***********************************************************************************
ICMI 2020: Call for Workshops (deadline extended to 17 Feb)
https://icmi.acm.org/2020/index.php?id=CfW
***********************************************************************************
The International Conference on Multimodal Interaction (ICMI 2020) will be
held in Utrecht, the Netherlands, October 25-29, 2020. ICMI is the premier
international conference for multidisciplinary research on multimodal
human-human and human-computer interaction analysis, interface design, and
system development. The theme of the ICMI 2020 conference is Art, Culture,
and Society. ICMI has developed a tradition of hosting workshops in
conjunction with the main conference to foster discourse on new research,
technologies, social science models and applications. Examples of recent
workshops include:
- Media Analytics for Societal Trends
- Neuromanagement and Intelligent Computing
- Multi-sensorial Approaches to Human-Food Interaction
- Group Interaction Frontiers in Technology
- Modeling Cognitive Processes from Multimodal Data
- Human-Habitat for Health
- Multimodal Analyses enabling Artificial Agents in Human-Machine
Interaction
- Investigating Social Interactions with Artificial Agents
- Child Computer Interaction
- Multimodal Interaction for Education
We are seeking workshop proposals on emerging research areas related to the
main conference topics, and those that focus on multi-disciplinary
research. We would also strongly encourage workshops that will include a
diverse set of keynote speakers (factors to consider include: gender,
ethnic background, institutions, years of experience, geography, etc.).
The content of accepted workshops is under the control of the workshop
organizers. Workshops may be half-day or full-day in duration. Workshop
organizers will be expected to manage the workshop content, solicit
submissions, be present to moderate the discussion and panels, invite
experts in the domain, conduct the reviewing process, and maintain a
website for the workshop. Workshop papers will be indexed by the ACM
Digital Library in an adjunct proceedings volume, and a short workshop
summary by the organizers will be published in the main conference
proceedings.
Submission
Prospective workshop organizers are invited to submit proposals in PDF
format (max. 3 pages). Please email proposals to the workshop chairs:
Yukiko Nakano (y.nakano@st.seikei.ac.jp) and Albert Ali Salah
(a.a.salah@uu.nl). The proposal should include the following:
- Workshop title
- List of organizers including affiliation, email address, and short
biographies
- Workshop motivation, expected outcomes and impact
- Tentative list of keynote speakers
- Workshop format (by invitation only, call for papers, etc.), anticipated
number of talks/posters, workshop duration (half-day or full-day) including
tentative program
- Planned advertisement means, website hosting, and estimated participation
- Paper review procedure (single/double-blind, internal/external,
solicited/invited-only, pool of reviewers, etc.)
- Paper submission and acceptance deadlines
- Special space and equipment requests, if any
Important Dates:
*Workshop proposal submission: Monday, February 17, 2020*
Notification of acceptance: Monday, February 24, 2020
Workshop papers due: End of July, 2020 (suggested)
Workshop dates: October 25 or 29, 2020
***************************************
ICMI 2020: 2nd Call for Long and Short Papers
http://icmi.acm.org/2020/index.php?id=cfp
25-29 Oct 2020, Utrecht, The Netherlands
***************************************
Call for Long and Short Papers
The 22nd International Conference on Multimodal Interaction (ICMI 2020)
will be held in Utrecht, the Netherlands. ICMI is the premier
international forum for multidisciplinary research on multimodal
human-human and human-computer interaction, interfaces, and system
development. The conference focuses on theoretical and empirical
foundations, component technologies, and combined multimodal processing
techniques that define the field of multimodal interaction analysis,
interface design, and system development.
We are keen to showcase novel input and output modalities and interactions
to the ICMI community. ICMI 2020 will feature a single-track main
conference which includes: keynote speakers, technical full and short
papers (including oral and poster presentations), demonstrations, exhibits
and doctoral spotlight papers. The conference will also feature workshops
and grand challenges. The proceedings of ICMI 2020 will be published by
ACM as part of its International Conference Proceedings series and in the
ACM Digital Library.
We also want to welcome conference papers from behavioral and social
sciences. These papers allow us to understand how technology can be used to
increase our scientific knowledge and may focus less on presenting
technical or algorithmic novelty. For this reason, the "novelty" criterion
used during the ICMI 2020 review will be based on two sub-criteria (i.e.,
scientific novelty and technical novelty, as described below). Accepted
papers at ICMI 2020 need only be novel on one of these sub-criteria. In
other words, a paper that is strong on scientific knowledge contribution
but low on algorithmic novelty should be ranked similarly to a paper that
is high on algorithmic novelty but low on knowledge discovery.
- Scientific Novelty: Papers should bring some new knowledge to the
scientific community; for example, discovering new behavioral markers that
are predictive of mental health, or showing how new behavioral patterns
relate to children’s interactions during learning. It is the
responsibility of the authors to perform a proper literature review and
clearly discuss the novelty of the scientific discoveries made in their
paper.
- Technical Novelty: Papers reviewed with this sub-criterion should include
novelty in their computational approach for recognizing, generating or
modeling data. Examples include: novelty in the learning and prediction
algorithms, in the neural architecture, or in the data representation.
Novelty can also be associated with a new usage of an existing approach.
Please see the Submission Guidelines for Authors
(https://icmi.acm.org/2020/index.php?id=authors) for detailed submission
instructions.
This year’s conference theme: In this information age, technological
innovation is at the core of our lives, rapidly transforming and impacting
the state of the world in art, culture, society, and science alike; the
borders between classical disciplines such as the humanities and computer
science are fading. In particular, we ask how multimodal processing of
human behavioural data can create meaningful impact in art, culture, and
society practices. And, vice versa, how do art, culture, and society
influence our approaches and techniques in multimodal processing? As such,
this year ICMI welcomes contributions on our theme of Multimodal
Processing and Representation of Human Behaviour in Art, Culture, and
Society.
Additional topics of interest include but are not limited to:
- Affective computing and interaction
- Cognitive modeling and multimodal interaction
- Gesture, touch and haptics
- Healthcare, assistive technologies
- Human communication dynamics
- Human-robot/agent multimodal interaction
- Interaction with smart environments
- Machine learning for multimodal interaction
- Mobile multimodal systems
- Multimodal behavior generation
- Multimodal datasets and validation
- Multimodal dialogue modeling
- Multimodal fusion and representation
- Multimodal interactive applications
- Speech behaviors in social interaction
- System components and multimodal platforms
- Visual behaviours in social interaction
- Virtual/augmented reality and multimodal interaction
Important Dates
*Paper Submission: May 4, 2020 (11:59pm GMT-7)*
Reviews to authors: July 3, 2020
Rebuttal due: July 10, 2020 (11:59pm GMT-7)
Paper notification: July 20, 2020
Camera ready paper: August 17, 2020
Presenting at main conference: October 25-29, 2020
Apologies for cross-posting
********************************
Dear Colleagues,
Please find below the invitation to contribute to the
Affect Recognition in-the-wild: Uni/Multi-Modal Analysis Workshop,
which will be held in conjunction with the IEEE Automatic Face and Gesture Recognition (FG) Conference 2020 in Buenos Aires, Argentina, 18-22 May 2020.
Website: https://ibug.doc.ic.ac.uk/resources/affect-recognition-wild-unimulti-modal-…
SCOPE:
------------
The workshop aims at advancing the state of the art in the analysis of human affective behavior in-the-wild. It solicits original papers on databases, benchmarks, and technical contributions related to affect recognition in unconstrained conditions, using audio, visual, or other modalities (e.g., EEG). Both uni-modal and multi-modal approaches will be considered. Of particular interest are methodologies that detect action units from audio data, enabled by the recent development of the largest in-the-wild audiovisual database, Aff-Wild2, annotated in terms of three behavior tasks: valence-arousal, basic expressions, and action units.
The scope of the workshop includes but is not limited to contributions in the following topics:
(a) databases for spontaneous and naturalistic:
i) facial expression or micro-expression analysis, or
ii) facial action unit detection, or
iii) valence-arousal estimation, "in-the-wild";
(b) methodologies for:
i) facial expression or micro-expression analysis, or
ii) facial action unit detection, or
iii) valence-arousal estimation, "in-the-wild".
The methodologies can use uni-modal or multi-modal information (e.g., visual data, audio data, hand and body gestures, EEG, EDA);
(c) methodologies for action unit prediction from audio (e.g., using Aff-Wild2, which is an audiovisual database);
(d) domain-adaptation techniques for the above-described behavior tasks (either using Aff-Wild2, any other database, or a combination of databases).
IMPORTANT DATES:
-----------------------------
• 14 February 2020 - Paper submission deadline
• 21 February 2020 - Review decisions sent to authors; Notification of acceptance
• 28 February 2020 - Camera ready version deadline
Accepted workshop papers will appear in the IEEE FG 2020 proceedings.
SUBMISSION INFORMATION:
--------------------------------------------
The paper format should adhere to the paper submission guidelines for FG2020.
The submission process will be handled through CMT: https://cmt3.research.microsoft.com/ABAW2020
KEYNOTE SPEAKERS:
----------------------------------
Aleix M. Martinez, Ohio State University, United States
Pablo Barros, Italian Institute of Technology, Italy
CHAIRS:
-------------
Stefanos Zafeiriou, Imperial College London, UK
Dimitrios Kollias, Imperial College London, UK
Attila Schulc, Realeyes - Emotional Intelligence
In case of any queries, please contact dimitrios.kollias15@imperial.ac.uk.
Kind Regards,
Dimitrios Kollias,
on behalf of the organising committee
iBUG Group
Department of Computing
Imperial College London