dc.contributor.author: Barnett, Matthew
dc.contributor.author: Evans, Thomas
dc.contributor.author: Islam, Fuadul
dc.contributor.author: Zhang, Yibing
dc.date.accessioned: 2018-05-11T01:32:18Z
dc.date.available: 2018-05-11T01:32:18Z
dc.date.issued: 2018-05-02
dc.identifier.uri: http://hdl.handle.net/10919/83212
dc.description.abstract: This report describes the EmoViz project, completed for the Multimedia, Hypertext, and Information Access capstone at Virginia Tech during the Spring 2018 semester. The goal of the EmoViz project is to develop a tool that generates and displays visualizations built from Facial Action Coding System (FACS) emotion data. The client, Dr. Steven D. Sheetz, is a Professor of Accounting and Information Systems at Virginia Tech. Dr. Sheetz conducted a research project in 2009 to determine how human emotions are affected when a subject analyzes a business audit. In the study, an actor was hired to record a five-minute video of a simulated business audit, reading a pre-written script containing specific visual cues at highlighted points throughout the audit. Participants were divided into two groups, each of which was given a distinct set of accounting data to review before watching the simulation video. The first group received accounting data that had been purposely altered to indicate the actor was committing fraud by lying to the auditor. The second group received accounting data that correctly corresponded to the actor's script, so that it appeared no fraud had been committed. All participants watched the simulation video while their facial movements were tracked using the Noldus FaceReader software to catalog emotional states. FaceReader samples data points on the face every 33 milliseconds and uses a proprietary algorithm to quantify six emotions at each sample: neutral, happy, sad, angry, surprise, and disgust. After cataloging roughly 9,000 data rows per participant, Dr. Sheetz adjusted the data and exported each set to a .csv file. The EmoViz team uploaded these files into the newly developed system, where the data was processed using Apache Spark. Using Spark's cluster computing, the .csv data was transformed into DataFrames, mapping each emotion to a named column. These named columns were then queried to generate visualizations displaying selected emotions over time, and to compare and contrast different data sets so that the client could analyze the visualizations and draw conclusions about how human emotions are affected when a subject is confronted with a business audit. (An illustrative sketch of this Spark pipeline follows the record below.)
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.subject: Visualization
dc.subject: Business Audit
dc.subject: Emotion
dc.subject: Facial Recognition
dc.subject: Python
dc.subject: Spark
dc.title: EmoViz - Facial Expression Analysis & Emotion Data Visualization
dc.type: Presentation
dc.type: Report
dc.type: Software
dc.description.notes:
  EmoViz_Final_Presentation.pdf - PDF of final presentation
  EmoViz_Final_Presentation.pptx - PowerPoint of final presentation
  EmoViz_Final_Report.docx - Word document of final report
  EmoViz_Final_Report.pdf - PDF of final report
  EmoViz.tar - Compressed archive of Python scripts
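
The abstract above outlines a CSV-to-DataFrame-to-visualization pipeline. The sketch below is a minimal, hypothetical illustration of that flow in PySpark, not the team's actual code from EmoViz.tar: the file name (participant_01.csv), the millisecond timestamp column, and the per-emotion column names are assumptions based on the six emotions the abstract lists.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical sketch of the pipeline the abstract describes; the real
# FaceReader export schema and the team's scripts may differ.
spark = SparkSession.builder.appName("EmoViz-sketch").getOrCreate()

# Load one participant's exported .csv into a DataFrame. header=True maps
# each emotion to a named column; inferSchema=True types the scores as doubles.
df = spark.read.csv("participant_01.csv", header=True, inferSchema=True)

# Assumed columns: a millisecond timestamp plus one score per emotion.
emotions = ["neutral", "happy", "sad", "angry", "surprise", "disgust"]

# Query a named column over time: samples where the surprise score spikes.
spikes = df.select("timestamp", "surprise").where(F.col("surprise") > 0.5)

# Average each emotion per second so a five-minute recording (~30 samples
# per second at 33 ms intervals) collapses into ~300 plottable points.
per_second = (
    df.withColumn("second", (F.col("timestamp") / 1000).cast("int"))
      .groupBy("second")
      .agg(*[F.avg(e).alias(e) for e in emotions])
      .orderBy("second")
)

# Collect the aggregates to the driver for charting; repeating this for a
# second participant group and joining on "second" would support the
# fraud-vs-no-fraud comparisons the abstract mentions.
rows = per_second.collect()

From rows, any plotting library could then draw emotion-over-time curves for the two participant groups side by side.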

