Teaching Robots using Interactive Imitation Learning

dc.contributor.author: Jonnavittula, Ananth
dc.contributor.committeechair: Losey, Dylan Patrick
dc.contributor.committeemember: Akbari Hamed, Kaveh
dc.contributor.committeemember: Williams, Ryan K.
dc.contributor.committeemember: Leonessa, Alexander
dc.contributor.department: Mechanical Engineering
dc.date.accessioned: 2024-06-29T08:00:12Z
dc.date.available: 2024-06-29T08:00:12Z
dc.date.issued: 2024-06-28
dc.description.abstract: As robots transition from controlled environments, such as industrial settings, to more dynamic and unpredictable real-world applications, the need for adaptable and robust learning methods becomes paramount. In this dissertation, we develop methods based on Interactive Imitation Learning (IIL) that allow robots to learn from imperfect demonstrations. We achieve this by incorporating human factors such as the quality of their demonstrations and the level of effort they are willing to invest in teaching the robot. Our research is structured around three key contributions. First, we examine scenarios where robots have access to high-quality human demonstrations and abundant corrective feedback. In this setup, we introduce an algorithm called SARI (Shared Autonomy across Repeated Interactions) that leverages repeated human-robot interactions to learn from humans. Through extensive simulations and real-world experiments, we demonstrate that SARI significantly enhances the robot's ability to perform complex tasks by iteratively improving its understanding and responses based on human feedback. Second, we explore scenarios where human demonstrations are suboptimal and no additional corrective feedback is provided. This approach acknowledges the inherent imperfections in human teaching and aims to develop robots that can learn effectively under such conditions. We accomplish this by allowing the robot to adopt a risk-averse strategy that underestimates the human's abilities. This method is particularly valuable in household environments where users may not have the expertise or patience to provide perfect demonstrations. Finally, we address the challenge of learning from a single video demonstration. This is particularly relevant for enabling robots to learn tasks without extensive human involvement. We present VIEW (Visual Imitation lEarning with Waypoints), a method that focuses on extracting critical waypoints from video demonstrations. By identifying key positions and movements, VIEW allows robots to efficiently replicate tasks with minimal training data. Our experiments show that VIEW can significantly reduce both the number of trials required and the time needed for the robot to learn new tasks. The findings from this research highlight the importance of incorporating advanced learning algorithms and interactive methods to enhance the robot's ability to operate autonomously in diverse environments. By addressing the variability in human teaching and leveraging innovative learning strategies, this dissertation contributes to the development of more adaptable, efficient, and user-friendly robotic systems.
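
To picture the shared-autonomy idea behind the first contribution, the sketch below shows a generic confidence-weighted arbitration between a human command and a learned robot action. This is a minimal illustration of the general concept, not the SARI implementation from the dissertation; the function name, the linear blending rule, and the example commands are assumptions made for illustration.

```python
import numpy as np

def blend_actions(human_action, robot_action, confidence):
    """Confidence-weighted arbitration between human input and learned assistance.

    confidence near 0 -> defer to the human; near 1 -> let the learned behavior
    dominate. All names here are illustrative, not the SARI API.
    """
    c = float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - c) * np.asarray(human_action, dtype=float) \
           + c * np.asarray(robot_action, dtype=float)

# Example: early in training the robot is unsure, so the human's command
# dominates; after repeated interactions its confidence (and influence) grows.
human_cmd = np.array([0.10, 0.00, -0.05])    # commanded end-effector velocity
learned_cmd = np.array([0.08, 0.02, -0.04])  # robot's learned suggestion
print(blend_actions(human_cmd, learned_cmd, confidence=0.2))
print(blend_actions(human_cmd, learned_cmd, confidence=0.9))
```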
dc.description.abstractgeneral: Robots are becoming increasingly common outside manufacturing facilities. In these unstructured environments, people might not always be able to give perfect instructions or might make mistakes. This dissertation explores methods that allow robots to learn tasks by observing human demonstrations, even when those demonstrations are imperfect. First, we look at scenarios where humans can provide high-quality demonstrations and corrections. We introduce an algorithm called SARI (Shared Autonomy across Repeated Interactions). SARI helps robots get better at tasks by learning from repeated interactions with humans. Through various experiments, we found that SARI significantly improves the robot's ability to perform complex tasks, making it more reliable and efficient. Next, we explore scenarios where the human demonstrations are not perfect and no additional corrections are given. This approach accounts for everyday scenarios where people might not have the time or expertise to provide perfect instructions. By designing a method that assumes humans might make mistakes, we can create robots that learn safely and effectively. This makes the robots more adaptable and easier to use for a diverse group of people. Finally, we tackle the challenge of teaching robots from a single video demonstration. This method is particularly useful because it requires less involvement from humans. We developed VIEW (Visual Imitation lEarning with Waypoints), a method that helps robots learn tasks by focusing on the most important parts of a video demonstration. By identifying key points and movements, VIEW allows robots to quickly and efficiently replicate tasks with minimal training. This method significantly reduces the time and effort needed for robots to learn new tasks. Overall, this research shows that by using advanced learning techniques and interactive methods, we can create robots that are more adaptable, efficient, and user-friendly. These robots can learn from humans in various environments and become valuable assistants in our daily lives.
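
To make the waypoint idea behind VIEW (described in the abstracts above) concrete, here is a minimal sketch of one standard way to distill a dense demonstration trajectory into a few key waypoints: a Ramer-Douglas-Peucker-style simplification that keeps only the points deviating from a straight-line segment by more than a tolerance. This is an illustrative assumption, not the VIEW pipeline from the dissertation; the function name, tolerance, and example trajectory are hypothetical.

```python
import numpy as np

def extract_waypoints(trajectory, tol=0.02):
    """Recursively keep only points that deviate from the start-end chord by
    more than `tol` (Ramer-Douglas-Peucker style simplification)."""
    trajectory = np.asarray(trajectory, dtype=float)
    if len(trajectory) < 3:
        return trajectory

    start, end = trajectory[0], trajectory[-1]
    chord = end - start
    chord_len = np.linalg.norm(chord)
    diffs = trajectory - start
    if chord_len < 1e-9:
        # Degenerate chord (start == end): use distance from the start point.
        dists = np.linalg.norm(diffs, axis=1)
    else:
        # Perpendicular distance of every point from the start-end chord.
        proj = np.outer(diffs @ chord / chord_len**2, chord)
        dists = np.linalg.norm(diffs - proj, axis=1)

    split = int(np.argmax(dists))
    if dists[split] <= tol:
        # Everything in between is nearly a straight line: keep endpoints only.
        return np.vstack([start, end])

    # Otherwise keep the most-deviating point and recurse on both halves.
    left = extract_waypoints(trajectory[: split + 1], tol)
    right = extract_waypoints(trajectory[split:], tol)
    return np.vstack([left[:-1], right])

# Example: a dense, curved 3-D path collapses to a handful of waypoints.
t = np.linspace(0, 1, 200)
demo = np.stack([t, 0.1 * np.sin(2 * np.pi * t), 0.3 * t], axis=1)
waypoints = extract_waypoints(demo, tol=0.01)
print(f"{len(demo)} demo frames -> {len(waypoints)} waypoints")
```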
dc.description.degree: Doctor of Philosophy
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:41118
dc.identifier.uri: https://hdl.handle.net/10919/120552
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Imitation Learning
dc.subject: Robotics
dc.subject: Artificial Intelligence
dc.subject: Machine Learning
dc.title: Teaching Robots using Interactive Imitation Learning
dc.type: Dissertation
thesis.degree.discipline: Mechanical Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle (2 files)

Name: Jonnavittula_A_D_2024.pdf
Size: 4.38 MB
Format: Adobe Portable Document Format

Name: Jonnavittula_A_D_2024_support_1.pdf
Size: 57.57 KB
Format: Adobe Portable Document Format
Description: Supporting documents