Open Positions: 1 Ph.D. student and 2 Postdocs in the area of Computer Vision and Deep Learning at INRIA Sophia Antipolis, France
----------------------------------------------------------------------
Positions are offered within the frameworks of the prestigious grants
- ANR JCJC Grant *ENVISION*: "Computer Vision for Automated Holistic Analysis for Humans" and the
- INRIA - CAS grant *FER4HM* "Facial expression recognition with application in health monitoring"
and are ideally located in the heart of the French Riviera, inside the multi-cultural Silicon Valley of Europe.
Full announcements:
- Open Ph.D. Position in Computer Vision / Deep Learning (M/F) *ENVISION*: http://antitza.com/ANR_phd.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F) *FER4HM*: http://antitza.com/INRIA_CAS_postdoc.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F) (advanced level) *ENVISION*: http://antitza.com/ANR_postdoc.pdf
To apply, please email a full application to Antitza Dantcheva ( antitza.dantcheva(a)inria.fr ), indicating the position in the e-mail subject line.
I am wondering whether anyone knows (or has) a database of faces with the same individual's face shown at multiple times across the lifespan - even just childhood and adulthood (and if there is a third point, this would be even better).
Many thanks, Marlene Behrmann
--
Marlene Behrmann, Ph.D.
George A. and Helen Dunham Cowan Professor of Cognitive Neuroscience
Center for the Neural Basis of Cognition and
Department of Psychology
Carnegie Mellon University, Pittsburgh, USA
(412) 268-2790
behrmann(a)cmu.edu
Apologies for cross-postings
Call for challenge participation
Sixth Emotion Recognition in the Wild (EmotiW) Challenge 2018
https://sites.google.com/view/emotiw2018
@ ACM International Conference on Multimodal Interaction 2018, Boulder,
Colorado.
---------------------------------------------------------------------
The sixth Emotion Recognition in the Wild (EmotiW) 2018 Grand Challenge
consists of an all-day event with a focus on affective sensing in
unconstrained conditions. There are three sub-challenges: engagement in the
wild prediction sub-challenge, audio-video based emotion classification
sub-challenge and image based group emotion recognition sub-challenge.
*Challenge website*: https://sites.google.com/view/emotiw2018
*Contact email*: emotiw2014[AT]gmail.com
*Timeline*
Challenge website - January 2018
Train and validate data available - March 2018
Test data available - 8 June 2018
Last date for uploading the results - 23 June 2018
Paper submission deadline - 1 July 2018
Paper notification - 30 July 2018
Camera-ready papers - 8 August 2018
*Organizers*
Abhinav Dhall (Indian Institute of Technology Ropar, India)
Roland Goecke (University of Canberra, Australia)
Jyoti Joshi
Tom Gedeon (Australian National University, Australia)
--
Abhinav Dhall, PhD
Assistant Professor,
Indian Institute of Technology, Ropar
Webpage: https://goo.gl/5LrRB7
Google Scholar: https://goo.gl/iDwNTx
Hi everyone,
Could I please ask you to pass on this PhD bursary opportunity to any students you think might be interested?
Queen Margaret University (Edinburgh, UK) now invites applications to its PhD bursary competition. One of the bursaries available may be awarded to a student interested in studying eyewitness identification. Dr Jamal Mansour (https://www.qmu.ac.uk/schools-and-divisions/psychology-and-sociology/psycho… ) welcomes applications from competitive students with an honours undergraduate or masters degree. The bursary covers tuition and provides an annual living stipend and a small research budget.

The deadline for applications is Friday, March 30. The details of the eyewitness identification project can be found here: https://www.qmu.ac.uk/media/4209/cass-phd-bursary-topics-2018.pdf (BUR18-03). Further details about the competition can be found here: https://www.qmu.ac.uk/study-here/postgraduate-research-study/graduate-schoo…. Jamal would like to encourage anyone who is considering applying to email her directly at jmansour(a)qmu.ac.uk.
Thanks!
Jamal.
---------------------------------------------------------------------------------
Jamal K. Mansour, PhD
Senior Lecturer in Psychology
Psychology & Sociology
Queen Margaret University
Edinburgh, UK
EH21 6UU
Email: jmansour(a)qmu.ac.uk
Phone: +44 (0) 131 474 0000 and say my name (Jam-el Man-sir) when prompted
Fax: +44 (0) 131 474 0001
Web: https://www.qmu.ac.uk/schools-and-divisions/psychology-and-sociology/psycho…
Memory Research Group Web site: https://memoryresearchgroup.wordpress.com/
Twitter: @EyewitnessIDUp
Check out my recent paper on conducting multiple-trial lineup experiments: https://link.springer.com/article/10.3758/s13428-017-0855-0
Participate in our study on legal attitudes! https://www.psytoolkit.org/cgi-bin/psy2.4.0/survey?s=Z8jMR
Thanks Peter, that's very helpful!
And thanks to Lisa De Bruine for the other email about WebMorph; I had
already applied for an account, so I will check it out. :)
I will also send a separate email to you about template files.
Regards,
Rachel
>
>
> Today's Topics:
>
> 1. Re: PsychoMorph Questions (Peter Hancock)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 25 Jan 2018 09:22:57 +0000
> From: Peter Hancock <p.j.b.hancock(a)stir.ac.uk>
> To: face-research-list Mailing List
> <face-research-list(a)lists.stir.ac.uk>
> Subject: Re: [Face-research-list] PsychoMorph Questions
> Message-ID: <4d6140ad51e44576905d8da9be1b6cb6(a)havra.ad.stir.ac.uk>
> Content-Type: text/plain; charset="utf-8"
>
> The lines in Psychomorph are also for helping with placement, so far as I
> know. The end of the file does just tell you which points are joined
> together. Here’s the start of a ‘standard’ template file line section:
>
> 39 # 39 line definitions
> 0 # move on
> 2 #first line has two points
> 0 0 #they are both point 0 ## I don’t know the significance of defining
> a zero length line, this is the pupil
> 0 # move on
> 2 #next line has two points
> 1 1 #other pupil
> 0
> 9 # a proper line with 9 points!
> 2 3 4 5 6 7 8 9 2 # the line forms a ring, starting and ending at point 2
> 0
> 9
> 10 11 12 13 14 15 16 17 10
>
> You can define the lines from within Psychomorph from the delineate menu.
> I’ve attached a superbatch file for caricaturing, with some comments
>
> Peter
>
>
> From: Face-research-list [mailto:face-research-list-
> bounces(a)lists.stir.ac.uk] On Behalf Of Rachel Robbins
> Sent: 16 January 2018 23:30
> To: face-research-list Mailing List <face-research-list(a)lists.stir.ac.uk>
> Subject: [Face-research-list] PsychoMorph Questions
>
> Hi everyone,
> I am trying to learn PsychoMorph having previously used Fantamorph. I have
> read through Clare Sutherland's basic guide, but I need some help with more
> detailed questions and I can't find anything on the Wiki site. If anyone can provide
> advice on any of these questions I would be very grateful!
>
> In Fantamorph the lines are purely for visual grouping and don't do
> anything, morphing is all to do with matched dot placement, and you can
> check that dots are correctly matched by looking at the triangles. Do the
> lines in PsychoMorph do anything, or are they just guides?
>
> Part of the reason I need to know is that I am trying to import Fantamorph
> information into PsychoMorph. I have managed to import my point dot
> information by taking the lines of paired dot position information from the
> .fmd files and putting them into a .tem file with the number of dots in the
> first line corrected. However, I couldn't figure out exactly what the info
> at the end of the original .tem files generated by PsychoMorph is and
> whether I need it or something equivalent. It SEEMS to be the information
> about lines - does anyone know about this?
>
> I would also love to be able to batch import and/or make caricatures
> if I can get the .tem files set up properly. It seems like I might be able
> to do this with SuperBatchTransform, but from this page
> http://cherry.dcs.aber.ac.uk:8080/wiki/batch
> I can't figure out exactly what needs to go in my input file. Does anyone
> have an example they would be willing to share?
>
> Thanks!
> Rachel
>
> --
> You make a living by what you get; you make a life by what you give.
> -Winston Churchill.
>
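
As an aside for anyone wrestling with the same Fantamorph-to-PsychoMorph import problem: the line-definition section Peter annotates above can be parsed mechanically. Below is a minimal Python sketch, assuming (based only on his excerpt, not on any official format specification) that each definition consists of a "0  # move on" separator, a point count, and then that many point indices; the function name and token layout are illustrative.

```python
# Minimal sketch of a parser for the line-definition section of a
# PsychoMorph .tem file, based on the annotated excerpt above.
# Assumption: each definition is a separator line ("0  # move on"),
# a point count, then that many point indices. Not an official spec.

def parse_line_definitions(tokens):
    """tokens: flat list of ints from the line section of a .tem file.
    Returns one list of point indices per defined line."""
    it = iter(tokens)
    n_lines = next(it)            # e.g. "39  # 39 line definitions"
    lines = []
    for _ in range(n_lines):
        next(it)                  # skip the "0  # move on" separator
        n_points = next(it)       # number of points in this line
        lines.append([next(it) for _ in range(n_points)])
    return lines

# The four definitions from the excerpt: two zero-length pupil "lines"
# and two 9-point rings that start and end on the same point.
tokens = [4,
          0, 2, 0, 0,
          0, 2, 1, 1,
          0, 9, 2, 3, 4, 5, 6, 7, 8, 9, 2,
          0, 9, 10, 11, 12, 13, 14, 15, 16, 17, 10]
print(parse_line_definitions(tokens))
```

Running this on the excerpt's tokens recovers the two pupil points and the two closed 9-point rings, which matches Peter's reading that the section just records which points are joined.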