Open Positions: 1 Ph.D. student and 2 Postdocs in the area of Computer Vision and Deep Learning at INRIA Sophia Antipolis, France
----------------------------------------------------------------------
Positions are offered within the framework of the prestigious grants
- ANR JCJC grant *ENVISION*: "Computer Vision for Automated Holistic Analysis for Humans" and
- INRIA-CAS grant *FER4HM*: "Facial expression recognition with application in health monitoring",
and are ideally located in the heart of the French Riviera, inside the multi-cultural Silicon Valley of Europe.
Full announcements:
- Open Ph.D. Position in Computer Vision / Deep Learning (M/F) *ENVISION*: http://antitza.com/ANR_phd.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F) *FER4HM*: http://antitza.com/INRIA_CAS_postdoc.pdf
- Open Postdoc Position in Computer Vision / Deep Learning (M/F) (advanced level) *ENVISION*: http://antitza.com/ANR_postdoc.pdf
To apply, please email a full application to Antitza Dantcheva ( antitza.dantcheva(a)inria.fr ), indicating the position in the e-mail subject line.
I have two funded studentships available to start in October (or earlier, if convenient), to work in the general area of human face perception and recognition. I'd be interested in working with someone on computational modelling for one of them, for which some facility in a programming language such as Matlab or Python would be helpful. One is funded by the Dylis Crabtree Scholarship, for which preference will be given to female applicants. Both are funded for UK/EU citizens only. The studentships will be based in the Face Lab at Stirling and will join two postdocs working on the EPSRC-funded FACER2VM project (https://facer2vm.org/). Please get in touch with me directly in the first instance to discuss your interests: pjbh1(a)stir.ac.uk. I have no formal closing date but hope to make decisions by early May.
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
CALL FOR PARTICIPATION
The One-Minute Gradual-Emotion Recognition (OMG-Emotion) Challenge,
held in partnership with WCCI/IJCNN 2018 in Rio de Janeiro, Brazil.
https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/
I. Aim and Scope
Our One-Minute Gradual-Emotion Dataset (OMG-Emotion Dataset) is composed
of 420 relatively long emotion videos with an average length of one
minute, collected from a variety of YouTube channels. The videos were
selected automatically based on specific search terms related to the
term "monologue". Using monologue videos allows different emotional
behaviors to be presented in one context while changing gradually over
time. Videos were separated into clips based on utterances, and each
utterance was annotated by at least five independent subjects using the
Amazon Mechanical Turk tool. To maintain the contextual information for
each video, each annotator watched the clips of a video in sequence and
annotated each clip using an arousal/valence scale and a categorical
emotion based on Ekman's universal emotions.
Participants are encouraged to use crossmodal information in their
models, as the videos were labeled by human annotators without
distinction of any modality.
II. How to Participate
To participate, please send an email to
barros(a)informatik.uni-hamburg.de with the subject "OMG-Emotion Recognition
Team Registration". This e-mail must contain the following information:
Team Name
Team Members
Affiliation
Each team can have a maximum of 5 participants. You will then receive
from us access to the dataset and all the important information about
how to train and evaluate your models.
For the final submission, each team will have to send us a .csv file
containing the final arousal/valence values for each of the utterances
in the test dataset. We also request a link to a GitHub repository where
your solution is stored, and a link to an arXiv paper of 4-6 pages
describing your model and results. The authors of the best papers will
be invited to submit their detailed research to a journal yet to be
specified. The best-performing teams will also give an oral presentation
on their solution during the WCCI/IJCNN 2018 conference.
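As an illustration of the expected deliverable, below is a minimal Python sketch that writes per-utterance arousal/valence predictions to a .csv file. The utterance identifiers, column names, and file name are assumptions made for this example only; the exact submission layout is sent to registered teams together with the dataset.

import csv

# Hypothetical predictions: one (arousal, valence) pair per test utterance.
# Identifiers and column names below are illustrative assumptions; follow
# the official layout distributed with the dataset.
predictions = {
    "utterance_0001": (0.35, -0.10),
    "utterance_0002": (0.72, 0.55),
}

with open("omg_emotion_submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["utterance", "arousal", "valence"])  # assumed header
    for utterance, (arousal, valence) in sorted(predictions.items()):
        writer.writerow([utterance, f"{arousal:.3f}", f"{valence:.3f}"])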
III. Important Dates
Release of training and validation data with annotations: March 14, 2018.
Release of the test data and opening of the online submission: April 11, 2018.
Closing of the submission portal: April 13, 2018.
Announcement of the winner through the submission portal: April 18, 2018.
IV. Organization
Pablo Barros, University of Hamburg, Germany
Egor Lakomkin, University of Hamburg, Germany
Henrique Siqueira, University of Hamburg, Germany
Alexander Sutherland, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
--------------------------
Database update:
Our face database at http://pics.stir.ac.uk/ESRC/index.htm has been updated. There are now 64 male and 71 female identities in most of the image types. I'm still updating a few, such as the stereo images and the conformed 3D models. The unedited 3D models are mostly of higher quality than previously.
While there are still relatively few identities, the variety of imagery provided for each person remains the widest that I know of.
Peter
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
I am wondering whether anyone knows (or has) a database of faces with the same individual's face shown at multiple times across the lifespan - even just childhood and adulthood (and if there is a third point, this would be even better).
Many thanks, Marlene Behrmann
--
Marlene Behrmann, Ph.D
George A. and Helen Dunham Cowan Professor of Cognitive Neuroscience
Center for the Neural Basis of Cognition and
Department of Psychology
Carnegie Mellon University, Pittsburgh, USA
(412) 268-2790
behrmann(a)cmu.edu
If you want unfamiliar faces for use with an American audience, there are many soccer players who have childhood and adult images available online.
Edwin.
----------------------------------------------------------------------
Message: 1
Date: Thu, 1 Mar 2018 12:29:18 +0000
From: Jodie Davies-Thompson <davies.jodie(a)gmail.com>
Subject: Re: [Face-research-list] Database of faces across lifespan (Marlene Behrmann)
Dear Marlene,
I don’t know of any databases per se, but I recently wondered the same thing and started pulling various links together. Below are a few instances I’m aware of where individuals or groups have taken photos every year (not ideal, but depending on what you’re after, they could suffice).
There is also a BBC documentary by Robert Winston (‘Child of Our Time') which follows 25 children born in 2000 - you could probably get some good images from there.
If you ever pull together a database though, that would be a brilliant resource!
- http://diply.com/same-family-photo-taken-22-years?publisher=trendyjoe
- https://petapixel.com/2015/08/03/father-and-son-take-the-same-picture-every…
- http://www.news.com.au/lifestyle/real-life/true-stories/five-friends-recrea…
- https://www.nytimes.com/interactive/2014/10/03/magazine/01-brown-sisters-fo…
Sorry to not be able to supply anything better!
All the best,
Jodie