Development of a Protocol to Classify Drivers’ Emotional Conversation
Russell, Sheldon M.
McClafferty, Julie A.
To facilitate future analyses of emotion in naturalistic driving study (NDS) data, a protocol was developed to rate the emotional content of video samples collected during NDS. The protocol required data reductionists to observe video footage of the driver’s face and rate the driver’s emotional demeanor in a reasonable amount of time. The Facial Action Coding System (FACS; Ekman & Friesen, 1978) was used to guide the development of the emotion reduction protocol. Similar to FACS, the protocol instructed reductionists to classify the driver’s emotion into one of six categories: Neutral/No Emotion Shown, Happy, Angry/Frustrated, Sad, Surprised, and Other. Once reductionists rated the type of emotion expressed by a driver, they then indicated the intensity of the emotional expression using a four-point scale derived from the five-point scale used in FACS. Although FACS guided development, the protocol was designed to capture the overall emotion of the driver, not specific facial muscle activations on a frame-by-frame basis.

Seventy-two cases for reduction were selected from previously collected NDS data drawn from studies of light vehicle drivers and heavy-truck drivers (Blanco et al., in press; Fitch et al., 2013; Hanowski et al., 2008). Each case was categorized by the experimenters for its specific emotion and intensity level. The protocol was applied by two groups of reductionists, experienced and novice, to determine whether training level would affect ratings.

Results showed that experienced and novice reductionists rated cases with similar levels of reliability, and both groups exhibited inter-rater reliability significantly different from chance for all rating types. For both groups, accuracy was moderate to good; however, there was evidence of confusion for certain cases, specifically when a driver exhibited low-intensity emotion.
Rescoring the accuracy results to estimate whether emotional content was present (originally rated as marked or severe emotion) or absent (originally rated as no emotion or slight emotion) further improved the reductionists’ accuracy. Accuracy using the rescored data was 85%, suggesting a high degree of accuracy for detecting the presence of emotion. It is expected that future iterations of the protocol will show improved accuracy with slight modifications. Future work applying the protocol to other NDS data sets can support the investigation of emotional cell phone conversation while driving. With further development, the protocol could ultimately be used to shed additional insight into the safety-critical event (SCE) risk of cell phone conversations while driving, and it has the potential to become a generic, standardized means of classifying the emotions experienced by drivers not only in naturalistic driving studies, but also in driving studies using other methods, including simulation.
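The binary rescoring described above can be sketched as follows. This is a minimal illustration, not the protocol's actual implementation: the intensity labels and function names are assumptions inferred from the scale terms mentioned in the abstract ("no emotion", "slight", "marked", "severe").

```python
# Hypothetical sketch of the binary rescoring: collapse the four-point
# intensity scale into an emotion-present / emotion-absent judgment.
# Label strings are assumptions based on the terms used in the abstract.
INTENSITY_TO_PRESENT = {
    "no emotion": False,
    "slight": False,
    "marked": True,
    "severe": True,
}

def rescore(rating: str) -> bool:
    """Map a four-point intensity rating to a binary emotion-present flag."""
    return INTENSITY_TO_PRESENT[rating.lower()]

def binary_accuracy(rater_ratings, reference_ratings) -> float:
    """Proportion of cases where the rescored rating matches the
    rescored reference categorization (simple percent agreement)."""
    pairs = list(zip(rater_ratings, reference_ratings))
    hits = sum(rescore(r) == rescore(ref) for r, ref in pairs)
    return hits / len(pairs)
```

Under this collapse, a reductionist who rates "slight" where the reference is "no emotion" still counts as agreeing, which is consistent with the reported improvement in accuracy once low-intensity distinctions are set aside.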