Dear colleagues,
Final call! The deadline for early registration and abstract submission to APCV & CVSC 2018 has been extended:
Abstract submission: Extended to April 15th
Early registration: Extended to April 29th
-----------------------------------------------------------------------------------------------------------------------------------------------------
APCV & CVSC 2018 will take place from Friday July 13 through to Monday July 16, 2018.
Information about the conference can be found at: http://apcv2018.org/
The conference will be held at the Lake View Hotel, Hangzhou, China (http://www.lake-view-hotel-hangzhou.008h.com/introduction-en.html)
Hangzhou is the capital and most populous city of Zhejiang Province in eastern China. Renowned for its historic relics and natural beauty, it ranks among the most scenic cities in China.
We are now accepting abstracts. APCV & CVSC welcomes original research work on, but not limited to, visual psychophysics, visual physiology/anatomy, visual memory, perception and attention, computational vision, social perception, brain imaging, eye movements, multisensory perception, visual development, artificial vision, reading and word recognition, face and object perception, art and visual science, and clinical vision.
Confirmed keynote speakers include:
· Professor Brian Scholl (Department of Psychology, Chair of the Cognitive Science Program, Yale University, USA)
· Professor Edward Awh (Department of Psychology and Institute for Mind and Biology, University of Chicago, USA)
· Professor Zhaoping Li (Department of Computer Science, University College London, UK)
· Professor Tony Movshon (Center for Neural Science, New York University, USA)
· Professor Takeo Watanabe (Department of Cognitive, Linguistic & Psychological Sciences, Brown University, USA)
· Professor Hongjing Lu (Departments of Psychology & Statistics, University of California, Los Angeles, USA)
· Professor William Hayward (Faculty of Social Sciences, University of Hong Kong, Hong Kong, China)
Confirmed symposium presentations:
1. Vision in nonhuman primates, Organizer: Dajun Xing, Beijing Normal University
2. Translational and clinical vision, Organizer: Changbing Huang, CAS
3. Visual attention, Organizer: Liqiang Huang, The Chinese University of Hong Kong
4. Multisensory processing, Organizers: Xiongjie Yu / Anna Wang Roe, Zhejiang University
5. Binocular depth perception, Organizers: Gang Chen / Anna Wang Roe, Zhejiang University
6. Culture shapes face processing, Organizer: Roberto Caldara, University of Fribourg
7. Changes in statistical regularity as people age, Organizer: Su-Ling Yeh, National Taiwan University
8. Linking objects from vision and beyond: Development in early life, Organizer: Chia-huei Tseng, Tohoku University
We hope to see you in Hangzhou this July 13-16!
APCV & CVSC Organizing Committee
http://apcv2018.org/
I have two funded studentships available to start in October (or earlier, if convenient), to work in the general area of human face perception and recognition. For one of them I'd be interested in working with someone on computational modelling, for which some facility in a programming language such as Matlab or Python would be helpful. One is funded by the Dylis Crabtree Scholarship, for which preference will be given to female applicants. Both are funded for UK/EU citizens only. The studentships will be based in the Face Lab at Stirling and will join two postdocs working on the EPSRC-funded FACER2VM project (https://facer2vm.org/). Please get in touch with me directly in the first instance to discuss your interests: pjbh1(a)stir.ac.uk. I have no formal closing date but hope to make decisions by early May.
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
CALL FOR PARTICIPATION
The One-Minute Gradual-Emotion Recognition (OMG-Emotion) Challenge,
held in partnership with WCCI/IJCNN 2018 in Rio de Janeiro, Brazil.
https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/
I. Aim and Scope
Our One-Minute-Gradual Emotion Dataset (OMG-Emotion Dataset) is composed
of 420 relatively long emotion videos with an average length of one
minute, collected from a variety of YouTube channels. The videos were
selected automatically based on specific search terms related to the
term "monologue". Using monologue videos allows different emotional
behaviors to be presented in one context, changing gradually over time.
Videos were separated into clips based on utterances, and each utterance
was annotated by at least five independent subjects using Amazon
Mechanical Turk. To maintain the contextual information for each video,
each annotator watched the clips of a video in sequence and annotated
each clip using an arousal/valence scale and a categorical emotion based
on Ekman's universal emotions.
Participants are encouraged to use crossmodal information in their
models, as the videos were labeled by humans without distinction of any
modality.
II. How to Participate
To participate, please send an email to
barros(a)informatik.uni-hamburg.de with the subject "OMG-Emotion
Recognition Team Registration". This e-mail must contain the following
information:
Team Name
Team Members
Affiliation
Each team can have a maximum of 5 participants. You will then receive
access to the dataset and all the information you need to train and
evaluate your models.
For the final submission, each team must send us a .csv file containing
the final arousal/valence values for each of the utterances in the test
dataset. We also request a link to a GitHub repository where your
solution is stored, and a link to an arXiv paper of 4-6 pages describing
your model and results. The authors of the best papers will be invited
to submit their detailed research to a journal yet to be specified. In
addition, the best participating teams will give an oral presentation on
their solution during the WCCI/IJCNN 2018 conference.
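As a concrete illustration, here is a minimal Python sketch of writing such a submission file. The column names and layout are assumptions (the call does not specify the exact format), so treat this as a sketch only and confirm against the instructions sent after registration.

import csv

# predictions: (video_id, utterance_id) -> (arousal, valence), as produced
# by your model on the test set. The IDs below are made-up placeholders.
predictions = {
    ("video_01", "utterance_1"): (0.32, -0.15),
    ("video_01", "utterance_2"): (0.41, -0.02),
}

with open("omg_emotion_submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Assumed header row -- not specified in the call.
    writer.writerow(["video", "utterance", "arousal", "valence"])
    for (video, utterance), (arousal, valence) in sorted(predictions.items()):
        writer.writerow([video, utterance, f"{arousal:.4f}", f"{valence:.4f}"])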
III. Important Dates
Publishing of training and validation data with annotations: March 14,
2018.
Publishing of the test data and opening of the online submission:
April 11, 2018.
Closing of the submission portal: April 13, 2018.
Announcement of the winner through the submission portal: April 18, 2018.
IV. Organization
Pablo Barros, University of Hamburg, Germany
Egor Lakomkin, University of Hamburg, Germany
Henrique Siqueira, University of Hamburg, Germany
Alexander Sutherland, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
--------------------------
Database update:
Our face database at http://pics.stir.ac.uk/ESRC/index.htm has been updated. There are now 64 male and 71 female identities in most of the image types. I'm still updating a few, such as the stereo images and the conformed 3D models. The unedited 3D models are mostly of higher quality than previously.
While there are still relatively few identities, the variety of imagery provided for each person is the widest that I know of.
Peter
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
I am wondering whether anyone knows (or has) a database of faces with the
same individual's face shown at multiple times across the lifespan - even
just childhood and adulthood (and if there is a third point, this would be
even better).
Many thanks, Marlene Behrmann
--
Marlene Behrmann, Ph.D
George A. and Helen Dunham Cowan Professor of Cognitive Neuroscience
Center for the Neural Basis of Cognition and
Department of Psychology
Carnegie Mellon University, Pittsburgh, USA
(412) 268-2790
behrmann(a)cmu.edu
If you wanted unfamiliar faces for use with an American audience, there are many soccer players with childhood and adult images available online.
Edwin.
________________________________________
From: Face-research-list [face-research-list-bounces(a)lists.stir.ac.uk] on behalf of face-research-list-request(a)lists.stir.ac.uk [face-research-list-request(a)lists.stir.ac.uk]
Sent: Friday, 2 March, 2018 8:00:01 PM
To: face-research-list(a)lists.stir.ac.uk
Subject: Face-research-list Digest, Vol 84, Issue 2
Today's Topics:
1. Re: Database of faces across lifespan (Marlene Behrmann)
(Jodie Davies-Thompson)
----------------------------------------------------------------------
Message: 1
Date: Thu, 1 Mar 2018 12:29:18 +0000
From: Jodie Davies-Thompson <davies.jodie(a)gmail.com>
To: face-research-list(a)lists.stir.ac.uk
Subject: Re: [Face-research-list] Database of faces across lifespan
(Marlene Behrmann)
Dear Marlene,
I don’t know of any databases per se, but I recently wondered the same thing and started pulling various links together. Below are a few instances I’m aware of where individuals or groups have taken photos every year (not ideal, but depending on what you’re after, they could suffice).
There is also a BBC documentary by Robert Winston (‘Child of Our Time’) which follows 25 children born in 2000 - you could probably get some good images from there.
If you ever pull together a database though, that would be a brilliant resource!
- http://diply.com/same-family-photo-taken-22-years?publisher=trendyjoe
- https://petapixel.com/2015/08/03/father-and-son-take-the-same-picture-every…
- http://www.news.com.au/lifestyle/real-life/true-stories/five-friends-recrea…
- https://www.nytimes.com/interactive/2014/10/03/magazine/01-brown-sisters-fo…
Sorry to not be able to supply anything better!
All the best,
Jodie
------------------------------
Apologies for cross-postings
Call for challenge participation
Sixth Emotion Recognition in the Wild (EmotiW) Challenge 2018
https://sites.google.com/view/emotiw2018
@ ACM International Conference on Multimodal Interaction 2018, Boulder,
Colorado.
---------------------------------------------------------------------
The sixth Emotion Recognition in the Wild (EmotiW) 2018 Grand Challenge
consists of an all-day event with a focus on affective sensing in
unconstrained conditions. There are three sub-challenges: an
engagement-in-the-wild prediction sub-challenge, an audio-video-based
emotion classification sub-challenge, and an image-based group emotion
recognition sub-challenge.
*Challenge website*: https://sites.google.com/view/emotiw2018
*Contact email*: emotiw2014[AT]gmail.com
*Timeline*
Challenge website - January 2018
Training and validation data available - March 2018
Test data available - 8 June 2018
Last date for uploading the results - 23 June 2018
Paper submission deadline - 1 July 2018
Paper notification - 30 July 2018
Camera-ready papers - 8 August 2018
*Organizers*
Abhinav Dhall (Indian Institute of Technology Ropar, India)
Roland Goecke (University of Canberra, Australia)
Jyoti Joshi
Tom Gedeon (Australian National University, Australia)
--
Abhinav Dhall, PhD
Assistant Professor,
Indian Institute of Technology, Ropar
Webpage: https://goo.gl/5LrRB7
Google Scholar: https://goo.gl/iDwNTx
Hi everyone,
Could I please ask you to pass on this PhD bursary opportunity to any students you think might be interested?
Queen Margaret University (Edinburgh, UK) now invites applications to its PhD bursary competition. One of the bursaries available may be awarded to a student interested in studying eyewitness identification. Dr Jamal Mansour (https://www.qmu.ac.uk/schools-and-divisions/psychology-and-sociology/psycho…) welcomes applications from competitive students with an honours undergraduate or master's degree. The bursary covers tuition and provides an annual living stipend and a small research budget. The deadline for applications is Friday, March 30. The details of the eyewitness identification project can be found here: https://www.qmu.ac.uk/media/4209/cass-phd-bursary-topics-2018.pdf (BUR18-03). Further details about the competition can be found here: https://www.qmu.ac.uk/study-here/postgraduate-research-study/graduate-schoo…. Jamal would like to encourage anyone who is considering applying to email her directly at jmansour(a)qmu.ac.uk.
Thanks!
Jamal.
---------------------------------------------------------------------------------
Jamal K. Mansour, PhD
Senior Lecturer in Psychology
Psychology & Sociology
Queen Margaret University
Edinburgh, UK
EH21 6UU
Email: jmansour(a)qmu.ac.uk
Phone: +44 (0) 131 474 0000 and say my name (Jam-el Man-sir) when prompted
Fax: +44 (0) 131 474 0001
Web: https://www.qmu.ac.uk/schools-and-divisions/psychology-and-sociology/psycho…
Memory Research Group Web site: https://memoryresearchgroup.wordpress.com/
Twitter: @EyewitnessIDUp
Check out my recent paper on conducting multiple-trial lineup experiments: https://link.springer.com/article/10.3758/s13428-017-0855-0
Participate in our study on legal attitudes! https://www.psytoolkit.org/cgi-bin/psy2.4.0/survey?s=Z8jMR
Thanks Peter, that's very helpful!
And thanks Lisa De Bruine for the other email about WebMorph; I had already
applied for an account, so I will check it out. :)
I will also send a separate email to you about template files.
Regards,
Rachel
>
>
> Today's Topics:
>
> 1. Re: PsychoMorph Questions (Peter Hancock)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 25 Jan 2018 09:22:57 +0000
> From: Peter Hancock <p.j.b.hancock(a)stir.ac.uk>
> To: face-research-list Mailing List
> <face-research-list(a)lists.stir.ac.uk>
> Subject: Re: [Face-research-list] PsychoMorph Questions
>
> The lines in Psychomorph are also for helping with placement, so far as I
> know. The end of the file just tells you which points are joined together.
> Here’s the start of a ‘standard’ template file line section:
>
> 39 # 39 line definitions
> 0 # move on
> 2 # first line has two points
> 0 0 # they are both point 0 ## I don’t know the significance of defining a zero-length line; this is the pupil
> 0 # move on
> 2 # next line has two points
> 1 1 # other pupil
> 0
> 9 # a proper line with 9 points!
> 2 3 4 5 6 7 8 9 2 # the line forms a ring, starting and ending at point 2
> 0
> 9
> 10 11 12 13 14 15 16 17 10
>
> You can define the lines from within Psychomorph from the delineate menu.
> I’ve attached a superbatch file for caricaturing, with some comments
>
> Peter
>
>
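Putting Peter's description together with the structure mentioned in this thread (the number of dots on the first line, then the paired x/y positions, then the line definitions), a minimal Python sketch of a .tem reader might look like this. The '#' comments in the excerpt above are Peter's annotations, not part of a real file, and the file name below is hypothetical.

def read_tem(path):
    """Read a PsychoMorph .tem file: point count, x/y pairs, line definitions."""
    with open(path) as f:
        tokens = iter(f.read().split())  # whitespace-separated numbers
    n_points = int(next(tokens))
    points = [(float(next(tokens)), float(next(tokens))) for _ in range(n_points)]
    lines = []
    n_lines = int(next(tokens))  # e.g. 39 line definitions
    for _ in range(n_lines):
        next(tokens)  # the 0 ("move on") record; its significance is unclear per the thread
        n = int(next(tokens))  # number of points in this line
        lines.append([int(next(tokens)) for _ in range(n)])
    return points, lines

# Hypothetical usage: points, lines = read_tem("face01.tem")
# A definition like [2, 3, 4, 5, 6, 7, 8, 9, 2] is a closed ring that starts
# and ends at point 2, as in the excerpt above.

Writing a .tem (e.g. when importing Fantamorph points, as in the question quoted below) would presumably emit the same layout in reverse: the point count, one x y pair per line, then the line-definition records.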
> From: Face-research-list [mailto:face-research-list-
> bounces(a)lists.stir.ac.uk] On Behalf Of Rachel Robbins
> Sent: 16 January 2018 23:30
> To: face-research-list Mailing List <face-research-list(a)lists.stir.ac.uk>
> Subject: [Face-research-list] PsychoMorph Questions
>
> Hi everyone,
> I am trying to learn PsychoMorph, having previously used Fantamorph. I have
> read through Clare Sutherland's basic guide, but I need some help with more
> detailed questions and I can't find anything on the Wiki site. If anyone can
> provide advice on any of these questions I would be very grateful!
>
> In Fantamorph the lines are purely for visual grouping and don't do
> anything; morphing is all to do with matched dot placement, and you can
> check that dots are correctly matched by looking at the triangles. Do the
> lines in PsychoMorph do anything, or are they just guides?
>
> Part of the reason I need to know is that I am trying to import Fantamorph
> information into PsychoMorph. I have managed to import my dot position
> information by taking the lines of paired dot positions from the .fmd files
> and putting them into a .tem file with the number of dots in the first line
> corrected. However, I couldn't figure out exactly what the information at
> the end of the original .tem files generated by PsychoMorph is, and whether
> I need it or something equivalent. It SEEMS to be the information about
> lines; does anyone know about this?
>
> I would also love to be able to batch import and/or make caricatures
> if I can get the .tem files set up properly. It seems like I might be able
> to do this with SuperBatchTransform, but from this page
> http://cherry.dcs.aber.ac.uk:8080/wiki/batch
> I can't figure out exactly what needs to go in my input file. Does anyone
> have an example they would be willing to share?
>
> Thanks!
> Rachel
>
> --
> You make a living by what you get; you make a life by what you give.
> -Winston Churchill.
>