EmoViz - Facial Expression Analysis & Emotion Data Visualization

dc.contributor.author: Barnett, Matthew
dc.contributor.author: Evans, Thomas
dc.contributor.author: Islam, Fuadul
dc.contributor.author: Zhang, Yibing
dc.date.accessioned: 2018-05-11T01:32:18Z
dc.date.available: 2018-05-11T01:32:18Z
dc.date.issued: 2018-05-02
dc.description.abstract: This report describes the EmoViz project for the Multimedia, Hypertext, and Information Access Capstone at Virginia Tech during the Spring 2018 semester. The goal of the EmoViz project is to develop a tool that generates and displays visualizations of Facial Action Coding System (FACS) emotion data. The client, Dr. Steven D. Sheetz, is a Professor of Accounting and Information Systems at Virginia Tech. Dr. Sheetz conducted a research project in 2009 to determine how human emotions are affected when a subject analyzes a business audit. In the study, an actor was hired to record a five-minute video of a simulated business audit, reading a pre-written script containing specific visual cues at highlighted points throughout the audit. Study participants were divided into two groups, each of which was given a distinct set of accounting data to review before watching the simulation video. The first group received accounting data that had been purposely altered to indicate the actor was committing fraud by lying to the auditor. The second group received accounting data that correctly corresponded to the actor's script, so it would appear no fraud had been committed. All participants watched the simulation video while their facial movements were tracked using the Noldus FaceReader software to catalog emotional states. FaceReader samples data points on the face every 33 milliseconds and uses a proprietary algorithm to quantify the following emotions at each sample: neutral, happy, sad, angry, surprise, and disgust. After cataloging roughly 9,000 data rows per participant, Dr. Sheetz adjusted the data and exported each set to .csv files. From there, the EmoViz team uploaded these files into the newly developed system, where the data was processed using Apache Spark.
Using Spark's cluster computing, the .csv data was transformed into DataFrames, which map each emotion to a named column. These named columns were then queried to generate visualizations displaying particular emotions over time. The queries also allowed different data sets to be compared and contrasted, so that after analyzing the visualizations the client could draw conclusions about how human emotions are affected when confronted with a business audit.
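The CSV-to-named-columns step described in the abstract can be sketched briefly. Since a Spark installation may not be at hand, this is a minimal pure-Python analogue of the same transformation: FaceReader-style .csv rows are loaded so each emotion maps to a named column, and one named column is then queried and averaged over the samples. The column names and score values here are illustrative assumptions, not the actual FaceReader export schema.

```python
import csv
import io

# Hypothetical FaceReader-style export: one row every 33 ms, one named
# column per scored emotion (names and values are illustrative only).
SAMPLE_CSV = """\
time_ms,neutral,happy,sad,angry,surprise,disgust
0,0.80,0.10,0.02,0.03,0.03,0.02
33,0.70,0.20,0.02,0.03,0.03,0.02
66,0.60,0.30,0.02,0.03,0.03,0.02
"""

def load_rows(csv_text):
    """Map each emotion to a named column: one dict per 33 ms sample."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def mean_emotion(rows, emotion):
    """Query one named emotion column and average it over all samples."""
    return sum(float(row[emotion]) for row in rows) / len(rows)

rows = load_rows(SAMPLE_CSV)
print(f"mean happy: {mean_emotion(rows, 'happy'):.2f}")  # mean happy: 0.20
```

With Spark available, the equivalent would be roughly `spark.read.csv(path, header=True, inferSchema=True)` followed by a query such as `df.agg({"happy": "avg"})`; the exact column names the project used are not given here.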
dc.description.notes:
- EmoViz_Final_Presentation.pdf - PDF of final presentation
- EmoViz_Final_Presentation.pptx - PowerPoint of final presentation
- EmoViz_Final_Report.docx - Word document of final report
- EmoViz_Final_Report.pdf - PDF of final report
- EmoViz.tar - Compressed archive of Python scripts
dc.identifier.uri: http://hdl.handle.net/10919/83212
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: Creative Commons Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.subject: Visualization
dc.subject: Business Audit
dc.subject: Emotion
dc.subject: Facial Recognition
dc.subject: Python
dc.subject: Spark
dc.title: EmoViz - Facial Expression Analysis & Emotion Data Visualization
dc.type: Presentation
dc.type: Report
dc.type: Software

Files

Original bundle (5 files):
- EmoViz_Final_Presentation.pptx (41.89 MB, Microsoft PowerPoint XML)
- EmoViz_Final_Presentation.pdf (1.68 MB, Adobe Portable Document Format)
- EmoViz_Final_Report.pdf (1.06 MB, Adobe Portable Document Format)
- EmoViz.tar (2.19 MB)
- EmoViz_Final_Report.docx (895.15 KB, Microsoft Word XML)

License bundle (1 file):
- license.txt (1.5 KB, item-specific license agreed upon at submission)