Human-Guided Learning for Personalizing Robot Behaviors

dc.contributor.author: Ramirez Sanchez, Robert Javier
dc.contributor.committeechair: Losey, Dylan Patrick
dc.contributor.committeemember: Southward, Steve C.
dc.contributor.committeemember: Akbari Hamed, Kaveh
dc.contributor.department: Mechanical Engineering
dc.date.accessioned: 2025-06-06T08:03:45Z
dc.date.available: 2025-06-06T08:03:45Z
dc.date.issued: 2025-06-05
dc.description.abstract: The presence of robots performing tasks in real-world environments is rapidly growing. These robots will interact with humans who have different personal preferences, highlighting the need for robots that adapt their behavior accordingly. In this thesis, we develop tools and interfaces to convey task-critical information and personalize robot behavior. First, we explore settings where humans provide demonstrations for multiple tasks. For this setting, we introduce PECAN (Personalizing Robot Behavior through a Learned Canonical Space), a learning and interface-based approach that enables users to directly select their desired style. PECAN learns a continuous canonical space from demonstrations, where each point in the space corresponds to a style that is consistent across tasks. Our simulation experiments and user studies indicate that humans prefer using PECAN to personalize robot behavior compared to existing methods. We then examine scenarios where robots complete a task in dynamic environments. A fundamental limitation when learning from demonstrations is causal confusion caused by observations that contain both task-relevant and extraneous information. Because the robot does not know a priori which aspects of its observations are important, it may fail to learn the intended task. We propose RECON (Reducing Causal Confusion with Human-Placed Markers), a framework that leverages beacons (UWB trackers) attached by the human to task-relevant objects before providing demonstrations. RECON learns a compact observation embedding correlated with the beacon information and autonomously filters out extraneous information. Our experiments indicate that RECON significantly reduces the number of demonstrations required to teach the robot a task.
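
To make the embedding idea concrete, the following is a minimal sketch of one way a beacon-correlated embedding could be trained alongside behavior cloning. It assumes a PyTorch-style setup with placeholder dimensions, module names, and loss weighting; it is an illustrative sketch, not the thesis's actual RECON architecture.

# Hypothetical illustration only: behavior cloning plus an auxiliary loss that
# ties a compact observation embedding to beacon positions, discouraging the
# policy from relying on extraneous observation features.
import torch
import torch.nn as nn

OBS_DIM, BEACON_DIM, ACT_DIM, EMB_DIM = 32, 3, 7, 8

encoder = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, EMB_DIM))
policy = nn.Sequential(nn.Linear(EMB_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM))
beacon_head = nn.Linear(EMB_DIM, BEACON_DIM)  # predicts beacon position from the embedding

params = [*encoder.parameters(), *policy.parameters(), *beacon_head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

# Stand-in demonstration batch: observations, expert actions, and beacon readings.
obs = torch.randn(256, OBS_DIM)
expert_actions = torch.randn(256, ACT_DIM)
beacon_pos = torch.randn(256, BEACON_DIM)

for _ in range(200):
    z = encoder(obs)
    bc_loss = nn.functional.mse_loss(policy(z), expert_actions)       # imitate the demonstrations
    beacon_loss = nn.functional.mse_loss(beacon_head(z), beacon_pos)  # keep z tied to the marked objects
    loss = bc_loss + 0.5 * beacon_loss                                 # 0.5 weight is an arbitrary choice
    opt.zero_grad()
    loss.backward()
    opt.step()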
dc.description.abstractgeneral: Robotic systems are finding their place in public spaces and our homes. As a result, robots will interact with many different humans, each with a unique set of preferences on how robots should assist them. In this thesis, we develop user-friendly tools and interfaces to guide the robot and personalize its behavior. First, we explore settings where humans provide the robot with examples of multiple tasks. In this setup, we introduce PECAN (Personalizing Robot Behavior through a Learned Canonical Space), an intuitive interface that enables users to directly select how the robot performs previously demonstrated tasks. For example, consider a robot in a kitchen preparing breakfast: some users may prefer the robot to move as fast as possible, while others prioritize accuracy. The results from our simulation and user studies show that users prefer PECAN for personalizing the robot's behavior compared to existing methods. We then consider scenarios where robots must learn a task in unpredictable environments like our homes. Typical methods for learning from demonstrations require a large collection of examples because the robot learner does not know which aspects of its observations are important. Returning to our kitchen example, the robot could learn that a plate in the background is important and only perform the task when the plate is present. To address this issue, we propose RECON (Reducing Causal Confusion with Human-Placed Markers), a framework that enables humans to attach beacons (tools like Apple AirTags) to task-relevant objects before providing demonstrations. Beacons allow the robot to identify the position of marked objects and ignore the rest. Our experiments show that RECON significantly reduces the number of demonstrations required for the robot to learn a task.
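
For a concrete picture of how a single "style" setting could carry over between tasks, here is a minimal sketch of a policy conditioned on both a task label and a point in a low-dimensional canonical style space. The dimensions, network shape, and function names are assumptions made for this sketch; it is not PECAN's actual implementation.

# Hypothetical illustration only: the same user-chosen style point conditions
# the policy across different tasks, so one selection personalizes all of them.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM, NUM_TASKS, STYLE_DIM = 10, 7, 3, 2

policy = nn.Sequential(
    nn.Linear(STATE_DIM + NUM_TASKS + STYLE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, ACT_DIM),
)

def act(state, task_id, style):
    # One-hot encode the task and append the user's chosen style point.
    task = nn.functional.one_hot(torch.tensor(task_id), NUM_TASKS).float()
    return policy(torch.cat([state, task, style]))

# A user picks one point in the canonical space (e.g., leaning "fast" rather
# than "precise"), and that same point is reused when the task changes.
style = torch.tensor([0.9, 0.1])
action_task0 = act(torch.randn(STATE_DIM), task_id=0, style=style)
action_task1 = act(torch.randn(STATE_DIM), task_id=1, style=style)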
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:44051
dc.identifier.uri: https://hdl.handle.net/10919/135096
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.subject: Human-Robot Interaction
dc.subject: Imitation Learning
dc.subject: Representation Learning
dc.title: Human-Guided Learning for Personalizing Robot Behaviors
dc.type: Thesis
thesis.degree.discipline: Mechanical Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle
Name: Ramirez_Sanchez_RJ_T_2025.pdf
Size: 7.98 MB
Format: Adobe Portable Document Format

Name: Ramirez_Sanchez_RJ_T_2025_support_1.pdf
Size: 61.19 KB
Format: Adobe Portable Document Format
Description: Supporting documents