Auditory grouping mechanisms reflect a sound's relative position in a sequence

dc.contributor.author: Hill, Kevin T.
dc.contributor.author: Bishop, Christopher W.
dc.contributor.author: Miller, Lee M.
dc.date.accessioned: 2019-11-05T13:49:52Z
dc.date.available: 2019-11-05T13:49:52Z
dc.date.issued: 2012-06-08
dc.description.abstract: The human brain uses acoustic cues to decompose a complex auditory scene into its components. For instance, to improve communication, a listener can select an individual "stream," such as a talker in a crowded room, based on cues such as pitch or location. Despite numerous investigations into auditory streaming, few have demonstrated clear correlates of perception; instead, in many studies perception covaries with changes in physical stimulus properties (e.g., frequency separation). In the current report, we employ a classic ABA streaming paradigm and human electroencephalography (EEG) to disentangle the individual contributions of stimulus properties from changes in auditory perception. We find that changes in perceptual state (that is, the perception of one versus two auditory streams with physically identical stimuli) and changes in physical stimulus properties are reflected independently in the event-related potential (ERP) during overlapping time windows. These findings emphasize the necessity of controlling for stimulus properties when studying perceptual effects of streaming. Furthermore, the independence of the perceptual effect from stimulus properties suggests that the neural correlates of streaming reflect a tone's relative position within a larger sequence (1st, 2nd, 3rd) rather than its acoustics. By clarifying the role of stimulus attributes along with perceptual changes, this study helps explain precisely how the brain is able to distinguish a sound source of interest in an auditory scene.
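
For readers unfamiliar with the ABA streaming paradigm the abstract refers to, the following Python sketch generates a typical ABA_ triplet tone sequence. All parameter values (frequencies, semitone separation, tone and gap durations, sample rate) and function names are illustrative assumptions for demonstration, not the stimulus values used in this study.

# Minimal sketch of an ABA_ "streaming" tone sequence; parameters are
# illustrative, not the study's actual stimulus values.
import numpy as np

FS = 44100  # sample rate (Hz)

def tone(freq_hz, dur_s, ramp_s=0.01):
    """Pure tone with brief raised-cosine on/off ramps to avoid clicks."""
    t = np.arange(int(FS * dur_s)) / FS
    y = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(FS * ramp_s)
    env = np.ones_like(y)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env[:n_ramp] = ramp
    env[-n_ramp:] = ramp[::-1]
    return y * env

def aba_sequence(f_a=500.0, semitone_sep=7, tone_dur=0.05,
                 gap_dur=0.05, n_triplets=10):
    """Concatenate ABA_ triplets: A and B tones separated in frequency.

    Small frequency separations tend to be heard as one stream (a
    galloping rhythm); larger separations tend to split into two.
    """
    f_b = f_a * 2 ** (semitone_sep / 12)  # B sits semitone_sep above A
    silence = np.zeros(int(FS * gap_dur))
    triplet = np.concatenate([tone(f_a, tone_dur), silence,
                              tone(f_b, tone_dur), silence,
                              tone(f_a, tone_dur), silence,
                              silence])  # extra gap is the trailing "_"
    return np.tile(triplet, n_triplets)

seq = aba_sequence()
print(f"{len(seq) / FS:.2f} s of audio, peak amplitude {seq.max():.2f}")

Holding such a sequence physically constant while listeners report hearing one versus two streams is what lets the study separate perceptual effects from stimulus effects in the ERP.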
dc.description.notes: This work was supported by the National Institutes of Health: National Institute on Deafness and Other Communication Disorders [R01-DC8171 to Lee M. Miller, T32-DC8072-01A2 to Kevin T. Hill, and F31-DC011429 to Christopher W. Bishop].
dc.description.sponsorship: United States Department of Health & Human Services; National Institutes of Health (NIH); National Institute on Deafness and Other Communication Disorders (NIDCD) [R01-DC8171, T32-DC8072-01A2, F31-DC011429]
dc.format.mimetype: application/pdf
dc.identifier.doi: https://doi.org/10.3389/fnhum.2012.00158
dc.identifier.issn: 1662-5161
dc.identifier.other: 158
dc.identifier.pmid: 22701410
dc.identifier.uri: http://hdl.handle.net/10919/95252
dc.identifier.volume: 6
dc.language.iso: en
dc.publisher: Frontiers
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: auditory
dc.subject: grouping
dc.subject: streaming
dc.subject: EEG
dc.subject: perception
dc.subject: bistable
dc.title: Auditory grouping mechanisms reflect a sound's relative position in a sequence
dc.title.serial: Frontiers in Human Neuroscience
dc.type: Article - Refereed
dc.type.dcmitype: Text
dc.type.dcmitype: StillImage

Files

Original bundle
Name: fnhum-06-00158.pdf
Size: 1.8 MB
Format: Adobe Portable Document Format