Dear all,
I would like to share with you the results of our first OMG-Emotion
Recognition Challenge.
Our challenge is based on the One-Minute-Gradual Emotion Dataset
(OMG-Emotion Dataset), which is composed of 567 emotion videos with an
average length of 1 minute, collected from a variety of YouTube
channels. Each team's task was to describe each video in a continuous
arousal/valence space.
A total of 34 teams registered for the challenge, of which 11 made
final submissions. Each final submission consisted of a short paper
describing the solution and a link to the code repository.
The solutions used different modalities (ranging from unimodal audio and
vision to multimodal audio, vision, and text), and thus provided us with
a very rich evaluation scenario. All the submissions were based on
neural network models.
We split the results into arousal and valence. For arousal, the best
results came from the GammaLab team: their three submissions are our top
three by CCC, followed by the three submissions from the audEERING team
and the two submissions from the HKUST-NISL2018 team.
For valence, the GammaLab team again takes first place (with their three
submissions), followed by the two submissions from the ADSC team and the
three submissions from the iBug team.
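For reference, the ranking metric above is the Concordance Correlation
Coefficient (CCC) between predicted and annotated values. A minimal
sketch in Python of the standard Lin (1989) formulation (the official
evaluation script may differ in detail):

import numpy as np

def ccc(y_true, y_pred):
    # Concordance Correlation Coefficient (Lin, 1989):
    # 2*cov / (var_true + var_pred + (mean_true - mean_pred)**2)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (y_true.var() + y_pred.var() + (mean_t - mean_p) ** 2)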
Congratulations to you all!
We provide a leaderboard on our website (
https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/ ),
which will be permanently stored. This way, everyone can see the final
results of the challenge and have quick access to a formal description
of each solution and to its code. This will help disseminate the
knowledge even further and improve the reproducibility of your solutions.
We also provide a general leaderboard, which will be updated continually
with new submissions. If you are interested in having your score on the
general leaderboard, just send us an e-mail following the instructions
on our website.
I would also like to invite you all to the presentation of the challenge
summary during WCCI/IJCNN 2018 in Rio de Janeiro, Brazil.
Best Regards,
Pablo
--
Dr. Pablo Barros
Postdoctoral Research Associate - Crossmodal Learning Project (CML)
Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Phone: +49 40 42883 2535
Fax: +49 40 42883 2515
barros at informatik.uni-hamburg.de
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/people/barros.html
https://www.inf.uni-hamburg.de/en/inst/ab/wtm/
Please see attached an advert for a Research Associate position working with Dr Rachael Jack within the School of Psychology/Institute of Neuroscience & Psychology.
Informal enquiries can be directed to Rachael
Email: Rachael.Jack(a)glasgow.ac.uk
This year's Automatic Face and Gesture Recognition conference has not yet happened, but we already have the call for next year's, which will be in France. There is a desire to get more cross-fertilisation between psychology and computer science in this field; I think it would be great to have a session on the latest advances in understanding the psychology of face perception. Please consider whether you might have something useful to say.
Thanks, Peter
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
Apologies for cross-postings
---------------------------------------------
2nd Call for challenge participation
----------------------------------------------
Training and validation data are available!
Sixth Emotion Recognition in the Wild (EmotiW) Challenge 2018
https://sites.google.com/view/emotiw2018
@ ACM International Conference on Multimodal Interaction 2018, Boulder,
Colorado.
---------------------------------------------------------------------
The sixth Emotion Recognition in the Wild (EmotiW) 2018 Grand Challenge
consists of an all-day event with a focus on affective sensing in
unconstrained conditions. There are three sub-challenges: engagement in the
wild prediction sub-challenge, audio-video based emotion classification
sub-challenge and image based group emotion recognition sub-challenge.
*Challenge website*: https://sites.google.com/view/emotiw2018
*Contact email*: emotiw2014[AT]gmail.com
*Timeline*
Challenge website - January 2018
Training and validation data available - March 2018
Test data available - 8 June 2018
Last date for uploading the results - 23 June 2018
Paper submission deadline - 1 July 2018
Paper notification - 30 July 2018
Camera-ready papers - 8 August 2018
*Organizers*
Abhinav Dhall (Indian Institute of Technology Ropar, India)
Roland Goecke (University of Canberra, Australia)
Jyoti Joshi
Tom Gedeon (Australian National University, Australia)
--
Abhinav Dhall, PhD
Assistant Professor,
Indian Institute of Technology, Ropar
Webpage: https://goo.gl/5LrRB7
Google Scholar: https://goo.gl/iDwNTx
Dear fellow face researchers,
The 34th Annual BPS Cognitive Psychology Section Conference will be held at Liverpool Hope University, 29th-31st August 2018. Abstract submissions are now open and close on 1st May 2018. We invite members of the face research list to consider submitting an abstract.
Our conference is always well attended by face researchers from the UK and beyond, with face perception research always well represented in the programme and a particular highlight of the conference for many. This year there will be a symposium on 'Recent Developments in Person Perception', chaired by Dr Andrew Dunn. Keynote speakers will be Prof Fernand Gobet and Prof James Nairne, and the final keynote will be delivered by the winner of the 2018 Cognitive Section Award.
We also offer bursaries<https://www1.bps.org.uk/networks-and-communities/member-microsite/cognitive…> to assist postgraduate students to attend and present posters / papers at the conference, up to £400 per applicant.
To submit an abstract or for more information about the conference please visit the conference webpage<http://www.hope.ac.uk/cognitiveconference/>.
Best wishes
Natalie Butcher
BPS Cognitive Section Honorary Treasurer, Social Media Officer, & Assistant Editor of The Cognitive Psychology Bulletin
Dr Natalie Butcher
Senior Lecturer in Psychology
School of Social Sciences, Business & Law
Teesside University
TS13BX
Tel: +44(0)1642 342385
Twitter: @TeesPysch @Dr_N_Butcher
Dear colleagues,
Final call! The deadline for early registration and abstract submission to APCV & CVSC 2018 has been extended:
Abstract submission: Extended to April 15th
Early registration: Extended to April 29th
-----------------------------------------------------------------------------------------------------------------------------------------------------
APCV & CVSC 2018 will take place from Friday July 13 through to Monday July 16, 2018.
Information about the conference can be found at: http://apcv2018.org/
The conference will be held at the Lake View Hotel, Hangzhou, China (http://www.lake-view-hotel-hangzhou.008h.com/introduction-en.html)
Hangzhou is the capital and most populous city of Zhejiang Province in east China, renowned for its historic relics and natural beauty, and widely regarded as one of the most beautiful and scenic cities in China.
We are now accepting abstracts. APCV & CVSC welcomes original research work on, but not limited to: visual psychophysics, visual physiology/anatomy, visual memory, perception and attention, computational vision, social perception, brain imaging, eye movements, multisensory perception, visual development, artificial vision, reading and word recognition, face and object perception, art and visual science, and clinical vision.
Confirmed keynote speakers include
· Professor Brian Scholl (Department of Psychology, Chair of the Cognitive Science Program, Yale University, USA)
· Professor Edward Awh (Department of Psychology and Institute for Mind and Biology, University of Chicago, USA)
· Professor Zhaoping Li (Department of Computer Science, University College London, UK)
· Professor Tony Movshon (Center for Neural Science, New York University, USA)
· Professor Takeo Watanabe (Department of Cognitive, Linguistic & Psychological Sciences, Brown University, USA)
· Professor Hongjing Lu (Departments of Psychology & Statistics, University of California, Los Angeles, USA)
· Professor William Hayward (Faculty of Social Sciences, University of Hong Kong, Hong Kong, China)
Confirmed symposium presentations:
1. Vision in nonhuman primates, Organizer: Dajun Xing, Beijing Normal University
2. Translational and clinical vision, Organizer: Changbing Huang, CAS
3. Visual attention, Organizer: Liqiang Huang, The Chinese University of Hong Kong
4. Multisensory processing, Organizer: Xiongjie Yu / Anna Wang Roe, Zhejiang University
5. Binocular depth perception, Organizer: Gang Chen / Anna Wang Roe, Zhejiang University
6. Culture shapes face processing, Organizer: Roberto Caldara, University of Fribourg
7. Changes in statistical regularity as people age, Organizer: Su-Ling Yeh, National Taiwan University
8. Linking objects from vision and beyond: Development in early life, Organizer: Chia-huei Tseng, Tohoku University
We hope to see you in Hangzhou this July 13-16!
APCV & CVSC Organizing Committee
http://apcv2018.org/
I have two funded studentships available to start in October (or earlier, if convenient), to work in the general area of human face perception and recognition. I'd be interested in working with someone on computational modelling for one of them, for which some facility in a programming language such as Matlab or Python would be helpful. One is funded by the Dylis Crabtree Scholarship, for which preference will be given to female applicants. Both are funded for UK/EU citizens only. The studentships will be based in the Face Lab at Stirling and will join two postdocs working on the EPSRC-funded FACER2VM project (https://facer2vm.org/). Please get in touch with me directly in the first instance to discuss your interests: pjbh1(a)stir.ac.uk. I have no formal closing date but hope to make decisions by early May.
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
CALL FOR PARTICIPATION
The One-Minute Gradual-Emotion Recognition (OMG-Emotion) Challenge,
held in partnership with WCCI/IJCNN 2018 in Rio de Janeiro, Brazil.
https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/
I. Aim and Scope
Our One-Minute-Gradual Emotion Dataset (OMG-Emotion Dataset) is composed
of 420 relatively long emotion videos with an average length of 1
minute, collected from a variety of YouTube channels. The videos were
selected automatically based on specific search terms related to the
term "monologue". Using monologue videos allowed different emotional
behaviors to be presented in one context, changing gradually over time.
Videos were separated into clips based on utterances, and each utterance
was annotated by at least five independent subjects using the Amazon
Mechanical Turk tool. To maintain the contextual information of each
video, each annotator watched the clips of a video in sequence and
annotated each clip using an arousal/valence scale and a categorical
emotion based on Ekman's universal emotions.
Participants are encouraged to use crossmodal information in their
models, as the videos were labeled by human annotators without
restriction to any single modality.
II. How to Participate
To participate, please send an email to
barros(a)informatik.uni-hamburg.de with the title "OMG-Emotion Recognition
Team Registration". This e-mail must contain the following information:
Team Name
Team Members
Affiliation
Each team can have a maximum of 5 participants. You will receive from us
access to the dataset and all the important information about how to
train and evaluate your models.
For the final submission, each team will have to send us a .csv file
containing the final arousal/valence values for each of the utterances
in the test dataset. We also request a link to a GitHub repository where
your solution is stored, and a link to an arXiv paper of 4-6 pages
describing your model and results. The authors of the best papers will
be invited to submit their detailed research to a journal yet to be
specified.
Also, the best participating teams will hold an oral presentation about
their solution during the WCCI/IJCNN 2018 conference.
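To illustrate the expected submission format, here is a minimal sketch
in Python that writes such a .csv file. The column names and utterance
identifiers below are only placeholders; use the exact layout given in
the instructions sent to registered teams:

import csv

# Placeholder predictions: (video id, utterance id, arousal, valence).
predictions = [
    ("video_01", "utterance_1.mp4", 0.12, -0.34),
    ("video_01", "utterance_2.mp4", 0.25, -0.10),
]

with open("omg_submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["video", "utterance", "arousal", "valence"])
    writer.writerows(predictions)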
III. Important Dates
Publishing of training and validation data with annotations: March 14,
2018.
Publishing of the test data and opening of the online submission:
April 11, 2018.
Closing of the submission portal: April 13, 2018.
Announcement of the winner through the submission portal: April 18, 2018.
IV. Organization
Pablo Barros, University of Hamburg, Germany
Egor Lakomkin, University of Hamburg, Germany
Henrique Siqueira, University of Hamburg, Germany
Alexander Sutherland, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany
--------------------------
Database update:
Our face database at http://pics.stir.ac.uk/ESRC/index.htm has been updated. There are now 64 male and 71 female identities in most of the image types. I'm still updating a few, such as the stereo images and the conformed 3D models. The unedited 3D models are mostly of higher quality than previously.
While there are still relatively few identities, the variety of imagery provided for each person remains the widest that I know of.
Peter
Peter Hancock
Professor,
Deputy Head of Psychology,
Faculty of Natural Sciences
University of Stirling
FK9 4LA, UK
phone 01786 467675
fax 01786 467641
http://stir.ac.uk/190
http://orcid.org/0000-0001-6025-7068
http://www.researcherid.com/rid/A-4633-2009
Psychology at Stirling: 100% 4* Impact, REF2014
Come and study Face Perception at the University of Stirling! Our unique MSc in the Psychology of Faces is open for applications. For more information see http://www.stir.ac.uk/postgraduate/programme-information/prospectus/psychol…
I am wondering whether anyone knows of (or has) a database of faces with
the same individual's face shown at multiple times across the lifespan -
even just childhood and adulthood (and if there is a third time point,
this would be even better).
Many thanks, Marlene Behrmann
--
Marlene Behrmann, Ph.D
George A. and Helen Dunham Cowan Professor of Cognitive Neuroscience
Center for the Neural Basis of Cognition and
Department of Psychology
Carnegie Mellon University, Pittsburgh, USA
(412) 268-2790
behrmann(a)cmu.edu
If you wanted unfamiliar faces for use with an American audience, then there are many soccer players who have childhood and adult images available online.
Edwin.
From: Jodie Davies-Thompson <davies.jodie(a)gmail.com>
Subject: Re: [Face-research-list] Database of faces across lifespan
Dear Marlene,
I don't know of any databases per se, but I recently wondered the same thing and started pulling various links together. Below are a few instances I'm aware of where individuals or groups have taken photos every year (not ideal, but depending on what you're after, they could suffice).
There is also a BBC documentary by Robert Winston (‘Child of Our Time') which follows 25 children born in 2000 - you could probably get some good images from there.
If you ever pull together a database though, that would be a brilliant resource!
- http://diply.com/same-family-photo-taken-22-years?publisher=trendyjoe
- https://petapixel.com/2015/08/03/father-and-son-take-the-same-picture-every…
- http://www.news.com.au/lifestyle/real-life/true-stories/five-friends-recrea…
- https://www.nytimes.com/interactive/2014/10/03/magazine/01-brown-sisters-fo…
Sorry to not be able to supply anything better!
All the best,
Jodie
Hi everyone,
Could I please ask you to pass on this PhD bursary opportunity to any students you think might be interested?
Queen Margaret University (Edinburgh, UK) now invites applications to its PhD bursary competition. One of the bursaries available may be awarded to a student interested in studying eyewitness identification. Dr Jamal Mansour (https://www.qmu.ac.uk/schools-and-divisions/psychology-and-sociology/psycho… ) welcomes applications from competitive students with an honours undergraduate or masters degree. The bursary covers tuition and provides an annual living stipend and a small research budget. The deadline for applications is Friday, March 30. The details of the eyewitness identification project can be found here: https://www.qmu.ac.uk/media/4209/cass-phd-bursary-topics-2018.pdf (BUR18-03). Further details about the competition can be found here: https://www.qmu.ac.uk/study-here/postgraduate-research-study/graduate-schoo…. Jamal would like to encourage anyone who is considering applying to email her directly at jmansour(a)qmu.ac.uk.
Thanks!
Jamal.
---------------------------------------------------------------------------------
Jamal K. Mansour, PhD
Senior Lecturer in Psychology
Psychology & Sociology
Queen Margaret University
Edinburgh, UK
EH21 6UU
Email: jmansour(a)qmu.ac.uk
Phone: +44 (0) 131 474 0000 and say my name (Jam-el Man-sir) when prompted
Fax: +44 (0) 131 474 0001
Web: https://www.qmu.ac.uk/schools-and-divisions/psychology-and-sociology/psycho…
Memory Research Group Web site: https://memoryresearchgroup.wordpress.com/
Twitter: @EyewitnessIDUp
Check out my recent paper on conducting multiple-trial lineup experiments: https://link.springer.com/article/10.3758/s13428-017-0855-0
Participate in our study on legal attitudes! https://www.psytoolkit.org/cgi-bin/psy2.4.0/survey?s=Z8jMR
Thanks Peter, that's very helpful!
And thanks Lisa DeBruine for the other email about WebMorph - I had
already applied for an account, so I will check it out. :)
I will also send a separate email to you about template files.
Regards,
Rachel
Hi Rachel,
You might find Webmorph.org useful (it's a web-based version of Psychomorph). It has a lot of extra batch functions that are easier to use than Psychomorph's.
Send me your email and I’ll sign you up for a beta testing account.
I’d also be keen to add a function to webmorph to import FantaMorph templates. If you have any examples of template files you could send me, I can have a bash at writing a conversion script.
Cheers,
Lisa
----------------------------------------------------------
Dr Lisa M DeBruine
Institute of Neuroscience and Psychology
University of Glasgow
58 Hillhead Street
G12 8QB
lisa.debruine(a)glasgow.ac.uk
http://facelab.org
0141 330 5351
----------------------------------------------------------
On 25 Jan 2018, at 09:22, Peter Hancock <p.j.b.hancock(a)stir.ac.uk> wrote:
Subject: Re: [Face-research-list] PsychoMorph Questions
The lines in Psychomorph are also for helping with placement, so far as I know. The end of the file does just tell you which points are joined together. Here’s the start of a ‘standard’ template file line section:
39 # 39 line definitions
0 # move on
2 #first line has two points
0 0 #they are both point 0 ## I don’t know the significance of defining a zero length line, this is the pupil
0 # move on
2 #next line has two points
1 1 #other pupil
0
9 # a proper line with 9 points!
2 3 4 5 6 7 8 9 2 # the line forms a ring, starting and ending at point 2
0
9
10 11 12 13 14 15 16 17 10
You can define the lines from within Psychomorph from the delineate menu.
I’ve attached a superbatch file for caricaturing, with some comments
Peter
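For anyone who wants to read this section programmatically, here is a
minimal sketch in Python based on the layout described above (point
count on the first line, x y pairs, then the line section); it is an
illustration of Peter's description, not the official PsychoMorph
reader:

def parse_tem_lines(tokens):
    # Line section: a count of line definitions, then for each line a
    # flag (the '0 # move on' entry), a point count, and the indices.
    it = iter(tokens)
    n_lines = int(next(it))
    lines = []
    for _ in range(n_lines):
        int(next(it))  # flag; '0' throughout the example above
        n_points = int(next(it))
        lines.append([int(next(it)) for _ in range(n_points)])
    return lines

with open("face.tem") as f:
    tokens = f.read().split()
n_points = int(tokens[0])  # first line: number of points
lines = parse_tem_lines(tokens[1 + 2 * n_points:])  # skip the x y pairs
print(lines[:3])  # e.g. [[0, 0], [1, 1], [2, 3, 4, 5, 6, 7, 8, 9, 2]]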
Hi everyone,
I am trying to learn PsychoMorph having previously used FantaMorph. I have
read through Clare Sutherland's basic guide, but I need some help with more
detailed questions and I can't find anything on the Wiki site. If anyone can
provide advice on any of these questions I would be very grateful!
In FantaMorph the lines are purely for visual grouping and don't do
anything; morphing is all to do with matched dot placement, and you can
check that dots are correctly matched by looking at the triangles. Do the
lines in PsychoMorph do anything, or are they just guides?
Part of the reason I need to know is that I am trying to import FantaMorph
information into PsychoMorph. I have managed to import my dot
information by taking the lines of paired dot position information from the
.fmd files and putting them into a .tem file with the number of dots in the
first line corrected. However, I couldn't figure out exactly what the info
at the end of the original .tem files generated by PsychoMorph is, and
whether I need it or something equivalent. It SEEMS to be the information
about lines - does anyone know about this?
I would also love to be able to batch import and/or make caricatures
if I can get the .tem files set up properly. It seems like I might be able
to do this with SuperBatchTransform, but from this page
http://cherry.dcs.aber.ac.uk:8080/wiki/batch
I can't figure out exactly what needs to go in my input file. Does anyone
have an example they would be willing to share?
Thanks!
Rachel
--
You make a living by what you get; you make a life by what you give.
-Winston Churchill.
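For what it's worth, the import Rachel describes can be sketched in a
few lines of Python: write the number of dots, then the x y pairs taken
from the .fmd data, then a line section declaring zero lines. Whether
PsychoMorph accepts a template with no line definitions is an
assumption to verify:

def write_tem(points, path):
    # points: list of (x, y) pairs extracted from a FantaMorph .fmd file.
    with open(path, "w") as f:
        f.write(f"{len(points)}\n")  # first line: number of dots
        for x, y in points:
            f.write(f"{x} {y}\n")
        f.write("0\n")  # line section: zero line definitions (assumed OK)

write_tem([(120.5, 88.0), (210.25, 90.5)], "face.tem")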
** Apologies for cross-posting **
**********************************************
CALL FOR PAPERS - FG 2018 WORKSHOPS
Submission deadlines approaching
May 15th and May 19th, 2018
Xi'an, China
Visit: https://fg2018.cse.sc.edu/Workshop.html
**********************************************
The paper submission deadline for several workshops held in conjunction
with the 2018 edition of the IEEE
International Conference on Automatic Face and Gesture Recognition is
approaching. Prospective authors are
invited to submit a contribution.
** Workshops **
1. 8th Int. Workshop on Human Behavior Understanding in conjunction
with the 2nd Int. Workshop on Automatic Face Analytics for Human
Behavior Understanding
Organizers: Carlos Busso, Xiaohua Huang, Takatsugu Hirayama, Guoying Zhao,
Albert Ali Salah, Matti Pietikäinen, Roberto Vezzani, Wenming Zheng,
Abhinav Dhall
2. Latest developments of FG technologies in China
Organizers: Qingshan Liu, Shiqi Yu, Zhen Lei
3. First Workshop on Large-scale Emotion Recognition and Analysis
Organizers: Abhinav Dhall, Yelin Kim, Qiang Ji
4. Workshop on Dense 3D Reconstruction of 2D Face Images in the Wild
Organizers: Zhenhua Feng, Patrik Huber, Josef Kittler, Xiaojun Wu
5. Face and Gesture Analysis for Health Informatics (FGAHI)
Organizers: Kévin Bailly, Liming Chen, Mohamed Daoudi, Arnaud Dapogny,
Zakia Hammal, Di Huang
6. Facial Micro-Expression Grand Challenge (MEGC): Methods and Datasets
Organizers: Moi Hoon Yap, Sujing Wang, John See, Xiaopeng Hong, Stefanos
Zafeiriou
7. The 1st International Workshop on Real-World Face and Object
Recognition from Low-Quality Images (FOR-LQ)
Organizers: Dong Liu, Weisheng Dong, Zhangyang Wang, Ding Liu
** Additional Information **
For more information on the workshops please visit:
https://fg2018.cse.sc.edu/Workshop.html
--
Assoc. Prof. Vitomir Štruc, PhD
Laboratory for Machine Intelligence
Faculty of Electrical Engineering
University of Ljubljana
Slovenia
Tel: +386 1 4768 839
Fax: +386 1 4768 316
URL: luks.fe.uni-lj.si/nluks/people/vitomir-struc/
Workshop and Tutorial Co-Chair: Automatic Face and Gesture Recognition 2018
http://www.fg2018.org/
Finance Chair: Automatic Face and Gesture Recognition 2019
Guest editor:
Image and Vision Computing SI: Biometrics in the Wild
Several researcher positions (Postdocs and PhD students) are available at the Human Communication Research Group, led by Katharina von Kriegstein. The group is currently based at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig (MPI CBS; http://www.cbs.mpg.de/independent-research-groups/human-communication ) and will transfer to the Psychology Faculty of the TU Dresden in 2018.
The positions are funded by the ERC consolidator grant SENSOCOM. The aim of the SENSOCOM project is to investigate the role of auditory and visual subcortical sensory structures in analysing human communication signals and to specify how their dysfunction contributes to human communication disorders such as developmental dyslexia and autism spectrum disorders. For examples of our work on these topics, see von Kriegstein et al., 2008, Current Biology; Diaz et al., 2012, PNAS; Müller-Axt et al., 2017, Current Biology. The projects include experiments using cognitive neuroscience methods to understand the basic mechanisms of cortico-subcortical interactions, as well as the development of training programmes aimed at behavioural interventions for communication deficits (for a brief description see http://cordis.europa.eu/project/rcn/199655_en.html ).
The positions will be based at the TU Dresden. Research will be performed at the Neuroimaging Centre at the TU Dresden ( http://www.nic-tud.de ) and MPI CBS in Leipzig. The centres offer cutting-edge infrastructure with 3-Tesla MRI, 7-Tesla MRI, a Connectom scanner, MRI-compatible eye-tracking, several EEG systems, 306-channel MEG, and neurostimulation units including neuronavigation, TMS and tDCS devices. Besides an excellent infrastructure, the centres offer an international and friendly environment with researchers from diverse backgrounds. All experimental facilities are supported by experienced physics and IT staff. For analyses with high computational demands, there is access to high-performance computing clusters.
Candidates should have a strong interest in perceptual aspects of human communication and experience with experimental methods of cognitive neuroscience, such as psychophysics, functional or structural MRI, TMS, diffusion-weighted imaging, brainstem recordings or EEG/MEG. Experience with clinical populations (e.g. developmental dyslexia) would be an asset but is not essential. PhD student candidates must have a Master’s degree (or equivalent) in neuroscience, clinical linguistics, psychology, cognitive science, biology, or a related field. Postdoc candidates must have a PhD in similar fields and should be able to demonstrate a consistently outstanding academic record, including publications.
The position starting date is flexible. Initially for two (postdocs) or three (PhD) years, the positions offer the possibility of an extension. Remuneration depends on experience and is based on regulations of the Max Planck Society payscale. MPI CBS is an equal opportunities employer, committed to the advancement of individuals without regard to ethnicity, religion, gender, or disability. PhD students will have the opportunity to participate in the TU Dresden graduate academy (https://tu-dresden.de/ga?set_language=en). TU Dresden is one of eleven German Universities of Excellence and offers an interdisciplinary scientific environment.
To apply, please submit a CV, contact information for two references, a brief personal statement describing your qualifications and future research interests, and copies of up to two of your publications. Please submit your application via our online system at http://www.cbs.mpg.de/vacancies (using subject heading “ERC 01/18”). The deadline for application submission is 15th February 2018. Contact for informal enquiries regarding the post: Prof. Dr. Katharina von Kriegstein (katharina.von_kriegstein(a)tu-dresden.de).
---
Prof. Dr. Katharina von Kriegstein
Max Planck Institute for Human Cognitive and Brain Sciences
Stephanstr. 1A, 04103 Leipzig, Germany
Technische Universität Dresden
Bamberger Str. 7, 01187 Dresden, Germany
Phone +49 (0) 341-9940-2476
http://www.cbs.mpg.de/independent-research-groups/human-communication
https://twitter.com/kvonkriegstein
Apologies for cross-posting
***********************************************************************************
FGAHI 2018: CALL FOR PAPERS
1st International Workshop on Face and Gesture Analysis for Health Informatics
http://fgahi.isir.upmc.fr
Submission Deadline: January 28th, 2018
***********************************************************************************
The 1st International Workshop on Face and Gesture Analysis for Health Informatics (FGAHI
2018) will be held in conjunction with IEEE FG 2018 on May 15-19, 2018, Xi’an, China – https://fg2018.cse.sc.edu/
For details concerning the workshop program, paper submission, and
guidelines please visit our workshop website at:
http://fgahi.isir.upmc.fr
Best regards,
Zakia Hammal
Organising committee
Kevin Bailly, Liming Chen, Mohamed Daoudi, Arnaud Dapogny, Zakia Hammal, and Di Huang
Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/
http://ri.cmu.edu/personal-pages/ZakiaHammal/
Hi, all
I have a funded PhD studentship on the influence of social contexts and social motivation on face memory, face recognition and first impression formation. The closing date is 26th February. I would be very grateful if you would circulate this advert around your contacts, and send it to any students who you think might be interested.
The School of Psychology at the University of Lincoln has recently moved into a purpose-built building, and is expanding its research expertise in the area of person perception.
https://www.findaphd.com/search/ProjectDetails.aspx?PJID=94153
Thanks a lot
Kay
Dr. Kay Ritchie | Lecturer in Cognitive Psychology
School of Psychology, College of Social Science
University of Lincoln. Brayford Pool, Lincoln, Lincolnshire. LN6 7TS
tel: +44 (0)1522 835463
lincoln.ac.uk/psychology<http://www.lincoln.ac.uk/home/psychology/> | @PsychLincoln<http://twitter.com/PsychLincoln> | @kayritchiepsych<http://twitter.com/kayritchiepsych> | Website<https://kayritchie87.wixsite.com/kayritchiepsychology>
Postdoctoral Fellowships in Model-based Cognitive Neuroscience at Vanderbilt
Department of Psychology
Vanderbilt Vision Research Center
We eagerly seek postdoctoral fellows to join research projects in model-based cognitive neuroscience, developing and testing computational models of visual cognition that connect behavior and brain data.
Fellows will join an ongoing collaboration of Thomas Palmeri, Jeffrey Schall, and Gordon Logan at Vanderbilt using cognitive and neural models to understand perceptual decision making, cognitive control, and visual attention. Successful models predict details of observed behavior and are constrained by or predict neurophysiological, electrophysiological, and brain imaging data in human and non-human primates. Fellows also have opportunities to join a collaboration of Thomas Palmeri, Isabel Gauthier, and their colleagues using combinations of cognitive, psychometric, and deep learning models to understand object recognition, categorization, visual learning, and perceptual expertise and to explain individual differences in behavior and brain data.
Research facilities include high-end laboratory workstations, behavioral testing stations, a web-based server infrastructure for online experiments, eye trackers, a shared 7000+ core CPU cluster and large-scale GPU cluster at Vanderbilt’s ACCRE, and state-of-the-art facilities for neurophysiology, electrophysiology, and brain imaging. Postdoctoral fellows will also take advantage of the facilities and support provided by the Department of Psychology (www.vanderbilt.edu/psychological_sciences/) and the Vanderbilt Vision Research Center (vvrc.vanderbilt.edu). And as Dave Grohl of the Foo Fighters said, “Everybody now thinks that Nashville is ... the coolest city in America”.
Candidates can hold a Ph.D. in psychology, neuroscience, computer science, mathematics, engineering, or related disciplines. Candidates should have demonstrated skills in computer programming and statistical analyses. Some background and/or strong interest in computational modeling is desired. For those with significant modeling expertise, a knowledge of basic neuroscience is desired but not required. Start date is negotiable, but preference will be given to candidates who can begin soon. Applications will be reviewed on a rolling basis as they arrive. Salary will be based on the NIH postdoctoral scale.
Please forward to potential applicants.
Applicants should send a cover letter with a brief research statement, a CV, and names and email addresses of three references to:
Thomas Palmeri
Department of Psychology
Vanderbilt Vision Research Center
Vanderbilt University
Nashville, TN 37240
thomas.j.palmeri(a)vanderbilt.edu
catlab.psy.vanderbilt.edu
*apologies for cross-posting*
---------------------------
Thomas Palmeri
Professor of Psychology
co-Director of Scientific Computing
Department of Psychology
507 Wilson Hall
Vanderbilt University
111 21st Avenue South
Nashville, Tennessee 37240
thomas.j.palmeri(a)vanderbilt.edu
http://catlab.psy.vanderbilt.edu
Dear colleagues,
We would be grateful if you could share this job advert within your institutions.
University of Glasgow has THREE posts available for Knowledge Exchange Associates. Link to advert is below
https://www22.i-grasp.com/fe/tpl_glasgow01.asp?s=4A515F4E5A565B1A&jobid=939…
Closing date is 4th Feb.
Best,
Dr. Rachael E. Jack, Ph.D.
Lecturer
Chair of Athena Swan SAT
Institute of Neuroscience & Psychology
School of Psychology
+44 (0) 141 330 5087
Dear All,
There is still time to send abstracts to the annual conference of the
European Human Behaviour and Evolution Association.
For more information, follow our Twitter account, like our Facebook page
and check our website!
https://twitter.com/ehbea2018
https://www.facebook.com/EHBEA2018/
http://psychology.pte.hu/ehbea2018
All the best
The EHBEA 2018 Organizing Committee
Institute of Psychology, University of Pécs, Hungary
Postdoctoral fellowship openings are available in the lab of Dr. Charles Or (http://research.ntu.edu.sg/expertise/academicprofile/Pages/StaffProfile.asp…) at Nanyang Technological University (NTU), Singapore to perform original research in the areas of face perception, visual motion perception, and form perception. Our work focuses on understanding (1) the human brain’s rapid and holistic nature of face detection, categorization, and recognition, and (2) the neural interaction of visual motion and form information in pattern and object recognition. We use a combination of approaches including psychophysics, eye tracking, EEG, and computational modelling.
The lab is housed in the Psychology Division of the School of Social Sciences, with a rapidly expanding group of researchers interested in experimental psychology and cognitive neuroscience. We are also part of the multidisciplinary Cognition and Neuroscience research cluster and the Neuroscience, Society, and Governance research cluster. We have access to state-of-the-art facilities for psychophysics, eye tracking, EEG, MEG, fMRI, fNIRS, TMS, and tDCS.
Applicants should have a doctoral degree in cognitive science, experimental psychology, cognitive neuroscience, computer science, or related fields, obtained after 1st January 2015 and conferred by July 2018 at the latest. The ideal candidate is driven by scientific curiosity and self-motivated, with awareness of the relevant literature, an excellent command of written and spoken English, and a strong background in statistical data analysis (e.g., SPSS, R). Aptitude and interest in mathematics and computer programming (e.g. MATLAB, Python) are preferred.
NTU is a young and research-intensive university, consistently ranked amongst the top 10 in Asia and 1st amongst young universities under 50 years old. It has been ranked within the top 100 universities in the world by the Times Higher Education since 2013, rising to 52nd in 2017/2018. Singapore is a fascinating, dynamic multi-cultural city in Southeast Asia with a large expat community, and a great hub for exploring neighbouring travel destinations.
Funding is available through application to NTU’s Centre for Liberal Arts and Social Sciences (CLASS), College of Humanities, Arts, and Social Sciences. Salary is competitive and includes extra benefits such as conference travel grants and relocation allowance. The postdoctoral fellowships are for one year, renewable for a second year, subject to satisfactory performance.
Applicants must state the research theme for which they are applying on the application form and demonstrate in their research plan how their expertise crosses different disciplines and relates to that theme. Successful candidates are expected to be co-supervised by a second professor, in keeping with the interdisciplinary aim of the scheme. Completed application forms and TWO reference letters should reach the College by December 31, 2017.
Details are available at:
http://class.cohass.ntu.edu.sg/Research/Pages/Postdoctoral-Fellowship-Schem…
Owing to the tight deadline, interested candidates are invited to contact Charles Or (charlesor(a)ntu.edu.sg) as soon as possible, with curriculum vitae and a brief research statement provided.
=======================
Charles C.-F. Or, PhD
Assistant Professor
Division of Psychology
School of Social Sciences
Nanyang Technological University
14 Nanyang Drive, Singapore 637332