
dc.contributor.author: Lorenzi, Jill Elizabeth (en_US)
dc.date.accessioned: 2014-03-14T21:36:08Z
dc.date.available: 2014-03-14T21:36:08Z
dc.date.issued: 2012-05-01 (en_US)
dc.identifier.other: etd-05122012-135557 (en_US)
dc.identifier.uri: http://hdl.handle.net/10919/42642
dc.description.abstract: Previous research on emotion identification in Autism Spectrum Disorders (ASD) has yielded inconsistent results. While some studies have found a deficit in emotion identification for individuals with ASD compared to controls, others have failed to find a difference. Many studies have used static photographs that do not capture the subtle details of dynamic, real-life facial expressions that characterize authentic social interactions, and therefore have not been able to provide complete information regarding emotion identification. The current study aimed to build upon prior research by using dynamic, talking videos in which the speaker expresses happiness, sadness, fear, anger, and excitement, both with and without a voice track. Participants included 10 children with ASD between the ages of four and 12, and 10 gender- and mental age-matched children with typical development between six and 12. Overall, the ASD and typically developing groups performed with similar accuracy, though the group with typical development benefited more from the addition of voice. Eye tracking analyses considered the eye region and mouth as areas of interest (AOIs). Eye tracking data from accurately identified trials showed significant main effects of group (longer and more fixations for participants with typical development) and condition (longer and more fixations on voiced emotions), and a significant condition by AOI interaction: participants fixated longer and more on the eye region in the voiced condition than in the silent condition, but fixated on the mouth approximately equally in both conditions. Treatment implications and directions for future research are discussed. (en_US)
dc.publisher: Virginia Tech (en_US)
dc.relation.haspart: Lorenzi_JE_T_2012.pdf (en_US)
dc.rights: I hereby certify that, if appropriate, I have obtained and attached hereto a written permission statement from the owner(s) of each third party copyrighted matter to be included in my thesis, dissertation, or project report, allowing distribution as specified below. I certify that the version I submitted is the same as that approved by my advisory committee. I hereby grant to Virginia Tech or its agents the non-exclusive license to archive and make accessible, under the conditions specified below, my thesis, dissertation, or project report in whole or in part in all forms of media, now or hereafter known. I retain all other ownership rights to the copyright of the thesis, dissertation or project report. I also retain the right to use in future works (such as articles or books) all or part of this thesis, dissertation, or project report. (en_US)
dc.subject: Children (en_US)
dc.subject: Emotion Identification (en_US)
dc.subject: Autism Spectrum Disorders (en_US)
dc.subject: Eye Tracking (en_US)
dc.subject: Audiovisual Integration (en_US)
dc.title: Ability of Children with Autism Spectrum Disorders to Identify Emotional Facial Expressions (en_US)
dc.type: Thesis (en_US)
dc.contributor.department: Psychology (en_US)
thesis.degree.name: Master of Science (en_US)
thesis.degree.level: masters (en_US)
thesis.degree.grantor: Virginia Polytechnic Institute and State University (en_US)
dc.contributor.committeechair: Scarpa-Friedman, Angela (en_US)
dc.contributor.committeemember: Cooper, Robin K. Panneton (en_US)
dc.contributor.committeemember: White, Susan W. (en_US)
dc.identifier.sourceurl: http://scholar.lib.vt.edu/theses/available/etd-05122012-135557/ (en_US)
dc.date.sdate: 2012-05-12 (en_US)
dc.date.rdate: 2012-06-05
dc.date.adate: 2012-06-05 (en_US)

