Monkeys and Humans Share a Common Computation for Face/Voice Integration

dc.contributor.author: Chandrasekaran, Chandramouli
dc.contributor.author: Lemus, Luis
dc.contributor.author: Trubanova, Andrea
dc.contributor.author: Gondan, Matthias
dc.contributor.author: Ghazanfar, Asif A.
dc.contributor.department: Psychology
dc.date.accessioned: 2019-05-15T18:16:35Z
dc.date.available: 2019-05-15T18:16:35Z
dc.date.issued: 2011-09-29
dc.description.abstract: Speech production involves the movement of the mouth and other regions of the face, resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition model", positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
dc.description.sponsorship: AAG is supported by the National Institute of Neurological Disorders and Stroke (NINDS, R01NS054898), the National Science Foundation CAREER award (BCS-0547760), and the James S. McDonnell Scholar Award. CC was supported by the Charlotte Elizabeth Procter and Centennial Fellowships from Princeton University.
dc.identifier.doi: https://doi.org/10.1371/journal.pcbi.1002165
dc.identifier.issue: 9
dc.identifier.uri: http://hdl.handle.net/10919/89535
dc.identifier.volume: 7
dc.language.iso: en_US
dc.publisher: PLOS
dc.rights: Creative Commons Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.title: Monkeys and Humans Share a Common Computation for Face/Voice Integration
dc.title.serial: PLoS Computational Biology
dc.type: Article - Refereed

Files

Original bundle
Name: journal.pcbi.1002165.PDF
Size: 835.77 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.5 KB
Description: Item-specific license agreed to upon submission