Comparative Analysis of Facial Affect Detection Algorithms
Thomas, Ashin Marin
There has been much research on facial affect detection, but many existing algorithms fall short of accurately identifying expressions because of changes in illumination, occlusion, or noise in uncontrolled environments. Moreover, little research has implemented these algorithms across multiple datasets while varying the size of the dataset and the dimensions of each image. My ultimate goal is to develop an optimized algorithm for real-time affect detection in automated vehicles. In this study, I implemented facial affect detection algorithms on various datasets and conducted a comparative analysis of their performance. The algorithms implemented in the study included a Convolutional Neural Network (CNN) in TensorFlow, FaceNet using transfer learning, and a Capsule Network. Each of these algorithms was trained on three datasets (FER2013, CK+, and Ohio) to obtain predicted results. The Capsule Network showed the best detection accuracy (99.3%) on the CK+ dataset. Results are discussed along with implications and future work.
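As a concrete illustration of the CNN baseline mentioned above, a minimal TensorFlow/Keras classifier for FER2013-style 48×48 grayscale images might look like the sketch below. The thesis does not specify its exact architecture, so all layer sizes and hyperparameters here are assumptions; FER2013's seven emotion classes (angry, disgust, fear, happy, sad, surprise, neutral) fix the output dimension.

```python
# Hypothetical sketch, NOT the thesis's actual architecture: a small CNN
# for 7-class facial expression recognition on 48x48 grayscale images.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # FER2013 emotion categories

def build_cnn(input_shape=(48, 48, 1), num_classes=NUM_CLASSES):
    """Build an illustrative CNN; layer widths are assumptions."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # mitigates overfitting on small sets like CK+
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
# One random "image" to confirm the model emits a 7-way probability vector.
probs = model.predict(np.random.rand(1, 48, 48, 1).astype("float32"), verbose=0)
```

In practice, training on FER2013, CK+, and the Ohio dataset would only require resizing each dataset's images to the chosen input shape and calling `model.fit` with the corresponding labels.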