
dc.contributor.author: Chandrasekaran, Chandramouli
dc.contributor.author: Lemus, Luis
dc.contributor.author: Trubanova, Andrea
dc.contributor.author: Gondan, Matthias
dc.contributor.author: Ghazanfar, Asif A.
dc.date.accessioned: 2019-05-15T18:16:35Z
dc.date.available: 2019-05-15T18:16:35Z
dc.date.issued: 2011-09-29
dc.identifier.uri: http://hdl.handle.net/10919/89535
dc.description.abstract: Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition model", positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
dc.description.sponsorship: AAG is supported by the National Institute of Neurological Disorders and Stroke (NINDS, R01NS054898), the National Science Foundation CAREER award (BCS-0547760) and the James S. McDonnell Scholar Award. CC was supported by the Charlotte Elizabeth Procter and Centennial Fellowships from Princeton University.
dc.language.iso: en_US
dc.publisher: PLOS
dc.rights: Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.title: Monkeys and Humans Share a Common Computation for Face/Voice Integration
dc.type: Article
dc.title.serial: PLoS Computational Biology
dc.identifier.doi: https://doi.org/10.1371/journal.pcbi.1002165
dc.identifier.volume: 7
dc.identifier.issue: 9
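The abstract contrasts a "race" model, in which independent auditory and visual channels compete and the faster one triggers detection, with a "superposition" model, in which the two activity patterns sum linearly in a single accumulator. The following is a minimal illustrative sketch of that distinction, not the paper's fitted model: all parameters (thresholds, drift rates, noise) are hypothetical, and each channel is modeled as a simple noisy linear accumulator.

```python
# Illustrative sketch (hypothetical parameters): mean detection times under a
# "race" model vs. a "superposition" (linear summation) model.
import random

random.seed(0)

THRESHOLD = 1.0                  # evidence needed to trigger a detection
DRIFT_A, DRIFT_V = 0.004, 0.003  # per-step evidence rates (made up)
NOISE = 0.001                    # per-step Gaussian noise s.d.
N_TRIALS = 2000

def detection_time(drift):
    """Steps for a noisy linear accumulator to reach THRESHOLD."""
    evidence, t = 0.0, 0
    while evidence < THRESHOLD:
        evidence += drift + random.gauss(0.0, NOISE)
        t += 1
    return t

def mean(xs):
    return sum(xs) / len(xs)

# Race model: the auditory and visual channels accumulate independently,
# and the faster channel determines the multisensory response time.
race = [min(detection_time(DRIFT_A), detection_time(DRIFT_V))
        for _ in range(N_TRIALS)]

# Superposition model: the two activity patterns sum linearly into one
# accumulator, so the drift rates add and threshold is reached sooner.
superpos = [detection_time(DRIFT_A + DRIFT_V) for _ in range(N_TRIALS)]

print(f"race mean: {mean(race):.1f} steps")
print(f"superposition mean: {mean(superpos):.1f} steps")
```

With these toy settings, superposition yields a shorter mean detection time than the race of the two unisensory channels, which is the qualitative signature the abstract describes: linear summation predicts multisensory facilitation beyond what channel competition alone allows.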

