dc.contributor.author: Rusch, C (en_US)
dc.contributor.author: Roth, E (en_US)
dc.contributor.author: Vinauger, C (en_US)
dc.contributor.author: Riffell, JA (en_US)
dc.date.accessioned: 2018-01-06T20:27:43Z
dc.date.available: 2018-01-06T20:27:43Z
dc.date.issued: 2017-12-15 (en_US)
dc.identifier.issn: 0022-0949 (en_US)
dc.identifier.uri: http://hdl.handle.net/10919/81555
dc.description.abstract: Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a nonadditive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape–colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. (en)
dc.format.extent: 4746 - 4746 (1) page(s) (en_US)
dc.format.mimetype: application/pdf
dc.language: English (en_US)
dc.publisher: Company Of Biologists Ltd (en_US)
dc.relation.uri: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000417822800026&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=930d57c9ac61a043676db62af60056c1 (en_US)
dc.subject: Honeybees (en_US)
dc.subject: Visual associative learning (en_US)
dc.subject: Locomotion compensator (en)
dc.subject: Virtual environment (en)
dc.title: Honeybees in a virtual reality environment learn unique combinations of colour and shape (en_US)
dc.type: Article - Refereed
dc.description.version: Published (Publication status) (en_US)
dc.title.serial: JOURNAL OF EXPERIMENTAL BIOLOGY (en_US)
dc.identifier.doi: https://doi.org/10.1242/jeb.173062
dc.type.other: Correction (en_US)
dc.identifier.volume: 220 (en_US)
dc.identifier.issue: 24 (en_US)
dc.identifier.orcid: Vinauger, C [0000-0002-3704-5427] (en_US)
dc.type.dcmitype: Text
dc.identifier.eissn: 1477-9145 (en_US)
pubs.organisational-group: /Virginia Tech
pubs.organisational-group: /Virginia Tech/Agriculture & Life Sciences
pubs.organisational-group: /Virginia Tech/Agriculture & Life Sciences/Biochemistry
pubs.organisational-group: /Virginia Tech/Agriculture & Life Sciences/CALS T&R Faculty
pubs.organisational-group: /Virginia Tech/All T&R Faculty