dc.contributor.author: Chandrasekaran, Chandramouli
dc.contributor.author: Trubanova, Andrea
dc.contributor.author: Stillitano, Sébastien
dc.contributor.author: Caplier, Alice
dc.contributor.author: Ghazanfar, Asif A.
dc.date.accessioned: 2019-05-15T18:17:20Z
dc.date.available: 2019-05-15T18:17:20Z
dc.date.issued: 2009-07-17
dc.identifier.uri: http://hdl.handle.net/10919/89536
dc.description.abstract: Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it’s been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
dc.description.sponsorship: This work was supported by the National Institutes of Health (NINDS) R01NS054898 (AAG), the National Science Foundation BCS-0547760 CAREER Award (AAG), and Princeton Neuroscience Institute Quantitative and Computational Neuroscience training grant NIH R90 DA023419-02 (CC). The Wisconsin x-ray facility is supported in part by NIH NIDCD R01 DC00820 (John Westbury and Carl Johnson).
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: PLOS
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.title: The Natural Statistics of Audiovisual Speech
dc.type: Article - Refereed
dc.title.serial: PLoS Computational Biology
dc.identifier.doi: https://doi.org/10.1371/journal.pcbi.1000436
dc.identifier.volume: 5
dc.identifier.issue: 7
dc.type.dcmitype: Text
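
The abstract above describes measuring correlations between the area of the mouth opening and the acoustic amplitude envelope, and inspecting temporal modulation in the 2–7 Hz range. The sketch below is not the authors' published pipeline; it is a minimal illustration, assuming Python with NumPy/SciPy, of how such measurements could be set up. The helper names (envelope_mouth_correlation, modulation_spectrum), the Hilbert-transform envelope, the Welch spectrum, and the synthetic toy signals are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import hilbert, resample, welch

def envelope_mouth_correlation(audio, mouth_area):
    """Correlate the acoustic amplitude envelope with mouth-opening area.

    Illustrative helper (not the published pipeline): the envelope is taken
    as the magnitude of the analytic signal (Hilbert transform), resampled
    to the video frame rate so both series share one time base.
    """
    envelope = np.abs(hilbert(audio))               # amplitude envelope
    envelope = resample(envelope, len(mouth_area))  # align to video frames
    return np.corrcoef(envelope, mouth_area)[0, 1]  # Pearson correlation

def modulation_spectrum(series, fs):
    """Welch power spectrum of a slow time series (e.g., mouth area sampled
    at the video frame rate), for inspecting energy in the 2-7 Hz band."""
    return welch(series, fs=fs, nperseg=min(256, len(series)))

if __name__ == "__main__":
    # Synthetic toy signals: a 4 Hz "syllable" rhythm shared by the mouth
    # area and the amplitude of a noise carrier standing in for the voice.
    video_fps, audio_fs, duration = 30.0, 16000, 10.0
    t_video = np.arange(int(video_fps * duration)) / video_fps
    t_audio = np.arange(int(audio_fs * duration)) / audio_fs
    rng = np.random.default_rng(0)
    mouth_area = 1.0 + np.sin(2 * np.pi * 4.0 * t_video)
    audio = (1.0 + np.sin(2 * np.pi * 4.0 * t_audio)) * rng.standard_normal(t_audio.size)

    r = envelope_mouth_correlation(audio, mouth_area)
    freqs, power = modulation_spectrum(mouth_area, video_fps)
    peak_hz = freqs[1:][np.argmax(power[1:])]       # skip the DC bin
    print(f"envelope/mouth correlation r = {r:.2f}")
    print(f"mouth-area modulation peak ~ {peak_hz:.1f} Hz")
```

Resampling the envelope to the video frame rate is just one simple way to put the two series on a common time base; the analyses reported in the article (e.g., how temporal correspondence and the 100–300 ms mouth-to-voice lag were quantified) may use different alignment and spectral methods.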

