The Effects of a Humanoid Robot's Non-lexical Vocalization on Emotion Recognition and Robot Perception

dc.contributor.author: Liu, Xiaozhen
dc.contributor.committeechair: Jeon, Myounghoon
dc.contributor.committeemember: Lim, Sol Ie
dc.contributor.committeemember: Cheon, EunJeong
dc.contributor.department: Industrial and Systems Engineering
dc.date.accessioned: 2023-07-01T08:00:18Z
dc.date.available: 2023-07-01T08:00:18Z
dc.date.issued: 2023-06-30
dc.description.abstract: As robots have become more pervasive in our everyday lives, the social aspects of robots have attracted researchers' attention. Because emotions play a key role in social interactions, research has been conducted on conveying emotions via speech, whereas little research has focused on the effects of non-speech sounds on users' robot perception. We conducted a within-subjects exploratory study with 40 young adults to investigate the effects of non-speech sounds (regular voice, characterized voice, musical sound, and no sound) and basic emotions (anger, fear, happiness, sadness, and surprise) on user perception. While listening to a fairy tale together with each participant, a humanoid robot (Pepper) responded to the story with a recorded emotional sound accompanied by a gesture. Participants showed significantly higher emotion recognition accuracy for the regular voice than for the other sounds. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. The regular voice also induced higher trust, naturalness, and preference than the other sounds. Interestingly, the musical sound mostly yielded lower perception ratings than no sound at all. A further exploratory study was conducted with an additional 49 young adults to investigate the effects of regular non-verbal voices (female and male) and basic emotions (happiness, sadness, anger, and relief) on user perception. We also explored the impact of participants' gender on emotion recognition and social perception toward the robot Pepper. While listening to a fairy tale with the participants, the humanoid robot responded to the story with gestures and emotional voices. Participants showed significantly higher emotion recognition accuracy and social perception in the voice + gesture condition than in the gesture-only condition. The confusion matrix again showed that happiness and sadness had the highest emotion recognition accuracy, aligning with previous research. Interestingly, participants reported more discomfort with, and attributed more anthropomorphism to, male voices than female voices. Male participants were more likely to feel uncomfortable when interacting with Pepper, whereas female participants were more likely to perceive warmth. However, neither the gender of the robot's voice nor the gender of the participant affected emotion recognition accuracy. Results are discussed along with social robot design guidelines for emotional cues and future research directions.
dc.description.abstractgeneral: As robots increasingly appear in people's lives as functional assistants or entertainment, there are more and more scenarios in which people interact with robots, and more research on human-robot interaction is being conducted to develop more natural ways of interacting. Our study focuses on the effects of emotions conveyed by a humanoid robot's non-speech sounds on people's perception of the robot and its emotions. The results of our experiments show that emotion recognition accuracy for regular voices is significantly higher than for musical sounds and robot-like voices, and that regular voices elicit higher trust, naturalness, and preference. Neither the gender of the robot's voice nor the gender of the participant affected emotion recognition accuracy. People no longer favor the traditional stereotype of robotic voices (e.g., as in old movies), and expressing emotions with music and gestures mostly yielded lower perception ratings. Happiness and sadness were identified with the highest accuracy among the emotions we studied. Participants felt more discomfort, but also more human-likeness, with male voices than with female voices. Male participants were more likely to feel uncomfortable when interacting with the humanoid robot, while female participants were more likely to perceive warmth. Our study discusses design guidelines and future research directions for emotional cues in social robots.
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:37875
dc.identifier.uri: http://hdl.handle.net/10919/115612
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Human-centered computing
dc.subject: Human robot interaction (HRI)
dc.subject: Interaction techniques
dc.subject: Auditory feedback
dc.subject: Robotics
dc.subject: Robotic components
dc.title: The Effects of a Humanoid Robot's Non-lexical Vocalization on Emotion Recognition and Robot Perception
dc.type: Thesis
thesis.degree.discipline: Industrial and Systems Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle (2 files)

Name: Liu_X_T_2023.pdf
Size: 3.58 MB
Format: Adobe Portable Document Format

Name: Liu_X_T_2023_support_1.pdf
Size: 560.79 KB
Format: Adobe Portable Document Format
Description: Supporting documents
