Browsing by Author "Losey, Dylan Patrick"
Now showing 1 - 7 of 7
- Adaptive Communication Interfaces for Human-Robot Collaboration. Christie, Benjamin Alexander (Virginia Tech, 2024-05-07). Robots can use a collection of auditory, visual, or haptic interfaces to convey information to human collaborators. The way these interfaces select signals typically depends on the task the human is trying to complete: for instance, a haptic wristband may vibrate when the human is moving quickly and stop when the user is stationary. But people interpret the same signals in different ways, so what one user finds intuitive another may not understand. In the absence of task knowledge, conveying signals is even more difficult: without knowing what the human wants to do, how should the robot select signals that help them accomplish their task? Paired with the seemingly infinite ways that humans can interpret signals, designing one optimal interface for all users seems impossible. This thesis presents an information-theoretic approach to communication in task-agnostic settings: a unified algorithmic formalism for learning co-adaptive interfaces from scratch, without task knowledge. The resulting approach is user-specific and not tied to any interface modality. The method is further improved by introducing symmetrical properties using priors on communication: although we cannot anticipate how a given human will interpret signals, we can anticipate interface properties that humans may like. By integrating these functional priors into the learning scheme, we achieve performance far better than baselines that have access to task knowledge. The results indicate that users subjectively prefer interfaces generated by this learning scheme, which also enable better performance and more efficient interactions.
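The information-theoretic selection idea above can be illustrated with a toy sketch: given a model of how a human maps signals to percepts, an interface should prefer the signal channel that carries the most information about the underlying state. The two-state setup, the candidate channels, and the uniform prior below are invented for illustration; the thesis learns such mappings from interaction rather than assuming them.

```python
import numpy as np

def mutual_information(p_state, channel):
    """I(state; percept) in nats, where channel[s, y] = p(percept y | state s)."""
    joint = p_state[:, None] * channel                  # p(s, y)
    p_y = joint.sum(axis=0)                             # marginal p(y)
    ratio = np.divide(joint, p_state[:, None] * p_y[None, :],
                      out=np.ones_like(joint), where=joint > 0)
    return float(np.sum(joint * np.log(ratio)))

# Two hypothetical channels: an unambiguous mapping where each state
# produces a distinct percept, and a confusing one where every state
# looks the same to the user.
p_state = np.array([0.5, 0.5])
clear = np.eye(2)
ambiguous = np.full((2, 2), 0.5)

scores = {"clear": mutual_information(p_state, clear),
          "ambiguous": mutual_information(p_state, ambiguous)}
best = max(scores, key=scores.get)
```

Under this criterion the informative channel scores log 2 nats while the ambiguous one scores zero, so the interface would favor the former.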
- Adaptive Predictive Controllers for Agile Quadrupedal Locomotion with Unknown Payloads. Amanzadeh, Leila (Virginia Tech, 2024-07-12). Quadrupedal robots play a vital role in various applications, from search and rescue operations to exploration in challenging terrains. However, locomotion tasks involving unknown payload transportation on rough terrains pose significant challenges, requiring adaptive control strategies to ensure stability and performance. This dissertation contributes to the advancement of adaptive motion planning and control solutions that enable quadrupedal robots to traverse unknown rough environments while transporting unknown payloads. In the first project, a novel hierarchical planning and control framework for robust payload transportation by quadrupedal robots is developed. This framework integrates an adaptive model predictive control (AMPC) algorithm with a gradient-descent-based adaptive updating law applied to reduced-order locomotion (i.e., template) models. At the high level of the control hierarchy, an indirect adaptive law estimates unknown parameters of the reduced-order locomotion model under varying payloads, ensuring stability during trajectory planning. The optimal trajectories generated by the AMPC are then passed to a low-level, full-order nonlinear whole-body controller (WBC) for tracking. Extensive numerical investigations and hardware experiments on the A1 quadrupedal robot validate the framework's capabilities, showcasing significant improvements in payload transportation on both flat and rough terrains compared to conventional MPC strategies. Specifically, the robot demonstrates proficiency in transporting unmodeled, unknown static payloads up to 109% of its own mass in experiments on flat terrains and 91% on rough experimental terrains. Moreover, the robot successfully manages dynamic payloads with 73% of its mass on rough terrains.
Adaptive controllers must also address external disturbances inherent in real-world environments. Therefore, the second project introduces a hierarchical planning and control scheme with an adaptive L1 nonlinear model predictive control (ANMPC) at the high level, which integrates nonlinear MPC (NMPC) with an L1 adaptive controller. The prescribed optimal state and control input profiles generated by the ANMPC are then fed to the low-level nonlinear WBC. This approach aims to stabilize locomotion gaits in the presence of parametric uncertainties and external disturbances, and the proposed controller is analyzed for its ability to accommodate both. Comprehensive numerical simulations and experimental validations on the A1 quadrupedal robot demonstrate its effectiveness on rough terrains. Numerical results suggest that ANMPC significantly improves the stability of the gaits in the presence of uncertainties and external disturbances compared to NMPC and AMPC. The robot can carry payloads up to 109% of its own mass on its trunk on flat and rough terrains. Simulation results show that the robot achieves a maximum payload capacity of 26.3 kg, equivalent to 211% of its own mass, on rough terrains with uncertainties and disturbances.
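The gradient-descent-based indirect adaptive law described above can be sketched on a toy template model: a single-DOF system whose payload mass is estimated online from acceleration prediction error. The dynamics, gains, and excitation signal here are illustrative assumptions, not the dissertation's actual reduced-order locomotion model.

```python
import numpy as np

def estimate_payload(p_true=3.0, m_nominal=5.0, steps=500, lr=1.0):
    """Gradient-descent estimate of an unknown payload mass from the
    mismatch between measured and template-model-predicted acceleration."""
    p_hat = 0.0                                    # initial payload estimate (kg)
    for k in range(steps):
        force = 10.0 + 2.0 * np.sin(0.1 * k)       # persistently exciting input
        a_meas = force / (m_nominal + p_true)      # "measured" acceleration
        a_pred = force / (m_nominal + p_hat)       # template-model prediction
        # gradient of 0.5 * (a_pred - a_meas)^2 with respect to p_hat
        grad = (a_pred - a_meas) * (-force / (m_nominal + p_hat) ** 2)
        p_hat -= lr * grad
    return p_hat
```

With a persistently exciting input the estimate converges to the true payload, after which the (A)MPC layer can plan with the corrected template model.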
- Autonomous Mobile Robot Navigation in Dynamic Real-World Environments Without Maps With Zero-Shot Deep Reinforcement Learning. Sivashangaran, Shathushan (Virginia Tech, 2024-06-04). Operation of Autonomous Mobile Robots (AMRs) of all forms, including wheeled ground vehicles, quadrupeds, and humanoids, in dynamically changing, GPS-denied environments without a priori maps, using exclusively onboard sensors, is an unsolved problem. Solving it has the potential to transform the economy and vastly expand humanity's capabilities, with improvements to agriculture, manufacturing, disaster response, military operations, and space exploration. Conventional AMR automation approaches are modularized into perception, motion planning, and control; this pipeline is computationally inefficient and requires explicit feature extraction and engineering, which inhibits generalization and deployment at scale. Few works have focused on real-world end-to-end approaches that directly map sensor inputs to control outputs, because supervised Deep Learning (DL) requires large amounts of well-curated training data that are time-consuming and labor-intensive to collect and label, while Deep Reinforcement Learning (DRL) suffers from sample inefficiency and the challenge of bridging the simulation-to-reality gap. This dissertation presents a novel method that efficiently trains DRL with significantly fewer samples in a constrained racetrack environment at physical limits in simulation, then transfers the policy zero-shot to the real world for robust end-to-end AMR navigation. The representation, learned in a compact parameter space with two fully connected layers of 64 nodes each, is demonstrated to exhibit emergent behavior and Out-of-Distribution (OOD) generalization to navigation in new environments, including unstructured terrain without maps, dynamic obstacle avoidance, and navigation to objects of interest from vision input, encompassing low-light scenarios with the addition of a night vision camera.
The learned policy outperforms conventional navigation algorithms while consuming a fraction of the computational resources, enabling execution on a range of AMR forms with varying embedded computer payloads.
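The abstract specifies the policy's parameter budget (two fully connected hidden layers of 64 nodes); a minimal sketch of such a network is below. The observation and action dimensions, tanh activations, and random initialization are illustrative assumptions, not the dissertation's actual architecture details.

```python
import numpy as np

class CompactPolicy:
    """A small MLP with two hidden layers of 64 units each, mapping
    sensor observations directly to bounded control outputs."""

    def __init__(self, obs_dim=24, act_dim=2, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        def layer(n_in, n_out):
            return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)
        self.W1, self.b1 = layer(obs_dim, hidden)
        self.W2, self.b2 = layer(hidden, hidden)
        self.W3, self.b3 = layer(hidden, act_dim)

    def __call__(self, obs):
        h = np.tanh(obs @ self.W1 + self.b1)
        h = np.tanh(h @ self.W2 + self.b2)
        return np.tanh(h @ self.W3 + self.b3)    # bounded actuator commands

    def n_params(self):
        return sum(w.size for w in (self.W1, self.b1, self.W2, self.b2,
                                    self.W3, self.b3))
```

At these assumed sizes the policy has only a few thousand parameters, consistent with the claim that it runs on modest embedded computers.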
- Design and Simulation of a Model Reference Adaptive Control System Employing Reproducing Kernel Hilbert Space for Enhanced Flight Control of a Quadcopter. Scurlock, Brian Patrick (Virginia Tech, 2024-06-04). This thesis presents the integration of reproducing kernel Hilbert spaces (RKHSs) into the model reference adaptive control (MRAC) framework to enhance the control systems of quadcopters. Traditional MRAC systems, while robust under predictable conditions, can struggle with the dynamic uncertainties typical of unmanned aerial vehicle (UAV) operations, such as wind gusts and payload variations. By incorporating an RKHS, we introduce a non-parametric, data-driven approach that significantly enhances system adaptability to in-flight dynamics changes. The research focuses on the design, simulation, and analysis of an RKHS-enhanced MRAC system applied to quadcopters. Through theoretical developments and simulation results, the thesis demonstrates how RKHSs can be used to improve the precision, adaptability, and error handling of MRAC systems, especially in managing the complexities of UAV flight dynamics under various disturbances. The simulations validate the improved performance of the RKHS-MRAC system compared to traditional MRAC, showing finer control over trajectory tracking and adaptive gains. Further contributions of this work include the exploration of the computational impact and the relationship between the configuration of basis centers and system performance. Detailed analysis reveals that the number and distribution of basis centers critically influence the system's computational efficiency and adaptive capability, demonstrating a significant trade-off between efficiency and performance.
The thesis concludes with potential future research directions, emphasizing the need for further tests and implementations in real-world scenarios to explore the full potential of RKHS in adaptive UAV control, especially in critical applications requiring high precision and reliability. This work lays the groundwork for future explorations into scalable RKHS applications in MRAC systems, aiming to optimize computational resources while maximizing control system performance.
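The basis-center trade-off discussed above can be made concrete with a toy sketch: a function approximant built from Gaussian kernel evaluations at a set of centers, fit to an unknown 1-D signal. The kernel width, the sine target (a stand-in for an unmodeled disturbance), and the least-squares fit are illustrative assumptions; the thesis uses RKHS elements inside an online MRAC adaptive law on quadcopter dynamics, not offline fitting.

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    # Gaussian kernel evaluations k(x, c) = exp(-(x - c)^2 / (2 width^2)),
    # the building blocks of an RKHS function approximant.
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

def fit_error(n_centers):
    """Worst-case approximation error of sin(x) using n_centers
    evenly spaced Gaussian basis centers on [-pi, pi]."""
    x = np.linspace(-np.pi, np.pi, 200)
    target = np.sin(x)                    # stand-in for an unknown disturbance
    centers = np.linspace(-np.pi, np.pi, n_centers)
    Phi = rbf_features(x, centers)
    w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return float(np.max(np.abs(Phi @ w - target)))
```

More centers yield a smaller approximation error but a larger weight vector to update at each control step, which is exactly the efficiency-versus-performance trade-off the analysis identifies.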
- Inferring the Human's Objective in Human Robot Interaction. Hoegerman, Joshua Thomas (Virginia Tech, 2024-05-03). This thesis discusses the use of Bayesian inference over the human's objective in Human-Robot Interaction; specifically, it focuses on adapting methods to better use the available information when inferring the human's objective in Reward Learning and Communicative Shared Autonomy settings. To accomplish this, we first examine state-of-the-art methods for Bayesian Inverse Reinforcement Learning, exploring the strengths and weaknesses of current approaches. We then explore alternative methods, borrowing approaches from the statistics community to improve the sampling process over the human's belief. I then move to a discussion of Shared Autonomy in the presence and absence of communication. These differences are explored in our method for inference in an environment where the human is aware of the robot's intention, and we show how this awareness can dramatically improve the robot's ability to cooperate and to infer the human's objective. In total, I conclude that using these methods to better infer the human's objective significantly improves performance and cohesion between the human and robot agents in these settings.
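A minimal sketch of the kind of Bayesian objective inference described above: the robot maintains a belief over a discrete set of candidate goals and updates it from observed human actions under a Boltzmann-rational likelihood. The goal positions, the rationality coefficient beta, and the straight-line action model are invented for illustration; the thesis studies richer reward and shared-autonomy settings.

```python
import numpy as np

GOALS = np.array([[1.0, 0.0], [0.0, 1.0]])    # two hypothetical targets

def update_belief(belief, position, action, beta=5.0):
    """One Bayesian update: p(g | a) proportional to p(a | g) p(g), with a
    Boltzmann-rational likelihood favoring goals the human moves toward."""
    directions = GOALS - position
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    alignment = directions @ (action / np.linalg.norm(action))
    likelihood = np.exp(beta * alignment)
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])                  # uniform prior over goals
for _ in range(5):                             # human repeatedly moves right
    belief = update_belief(belief, np.zeros(2), np.array([1.0, 0.0]))
```

After a few consistent motions toward the first goal, nearly all posterior mass concentrates on it, which is what lets the robot begin assisting confidently.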
- Multi-Objective Control for Physical and Cognitive Human-Exoskeleton Interaction. Beiter, Benjamin Christopher (Virginia Tech, 2024-05-09). Powered exoskeletons have the potential to revolutionize the labor workplace across many disciplines, from manufacturing to agriculture. However, many barriers to adoption and widespread implementation of exoskeletons remain. One major research gap for powered exoskeletons is the lack of a control framework that best cooperates with the user. Addressing it requires first understanding the physical and cognitive interaction between the user and exoskeleton, and then designing a controller that handles this interaction in a way that provides both physical assistance toward completing a task and a decrease in the cognitive demand of operating the device. This work demonstrates that multi-objective, optimization-based control can provide a coincident implementation of autonomous robot control and human-input-driven control. A parameter called 'acceptance' can be added to the weights of the cost functions to allow an automatic trade-off in control priority between the user and robot objectives. This is paired with an update function that lets the exoskeleton control objectives track the user objectives over time. The result is a cooperative, powered exoskeleton controller that is responsive to user input, dynamically adjusting control autonomy to allow the user to act to complete a task, learn the control objective, and then offload all effort required to complete the task to the autonomous controller. This reduction in effort is physical assistance directly toward completing the task, and it should also reduce the cognitive load the user experiences when completing the task.
To test the hypothesis that high task assistance lowers the cognitive load of the user, a study is designed and conducted to test the effect of the shared autonomy controller on the user's experience operating the robot. The user operates the robot under zero-, full-, and shared-autonomy control cases. Physical workload, measured through the force the user exerts to complete the task, and cognitive workload, measured through pupil dilation, are evaluated; the results show with statistical significance that high-assistance operation can lower the cognitive load experienced by a user alongside the physical assistance provided. Automatic adjustment in autonomy enables this assistance while keeping the user responsive to changing objectives and disturbances. The controller does not remove all mental effort from operation, but it shows that high acceptance does lead to less mental effort. When implementing this control beyond the simple reaching task used in the study, however, the controller must be able both to track the user's desired objective and to converge to a high-assistance state in order to produce the reduction in cognitive load. To achieve this behavior, a method is first presented to design and enforce Lyapunov stability conditions for individual tasks within a multi-objective controller. Then, under an assumption on the form of the input the user provides to accomplish their intended task, it is shown that the exoskeleton can stably track an acceptance-weighted combination of the user and robot desired objectives. This guarantee of following the proper trajectory at corresponding autonomy levels yields accuracy in tracking a simulated objective comparable to the base shared autonomy approach, but with a much higher acceptance level, indicating a better match between the user and exoskeleton control objectives, as well as a greater decrease in cognitive load.
This process of enforcing stability conditions to shape human-exoskeleton system behavior is shown to be applicable to more tasks, and preparations are underway to validate it with further user studies.
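The acceptance-weighted trade-off described in this abstract can be sketched with quadratic costs, for which the blended command has a closed form. The specific cost structure, the acceptance value, and the first-order update rate below are illustrative assumptions, not the controller's actual formulation on the exoskeleton.

```python
import numpy as np

def blended_target(x_user, x_robot, acceptance):
    """Minimizer of acceptance*||x - x_robot||^2 + (1 - acceptance)*||x - x_user||^2:
    control priority shifts smoothly between the user and robot objectives."""
    return acceptance * x_robot + (1.0 - acceptance) * x_user

def update_robot_objective(x_robot, x_user, rate=0.2):
    # the robot's learned objective tracks the inferred user objective over time
    return x_robot + rate * (x_user - x_robot)

x_user = np.array([1.0, 2.0])     # where the user is trying to reach
x_robot = np.array([0.0, 0.0])    # robot's initial objective estimate
for _ in range(30):
    x_robot = update_robot_objective(x_robot, x_user)
```

Once the robot's objective has converged to the user's, acceptance can rise and the blended target stays on the user's intended trajectory while the autonomous controller carries the effort.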
- Teaching Robots using Interactive Imitation Learning. Jonnavittula, Ananth (Virginia Tech, 2024-06-28). As robots transition from controlled environments, such as industrial settings, to more dynamic and unpredictable real-world applications, the need for adaptable and robust learning methods becomes paramount. In this dissertation, we develop Interactive Imitation Learning (IIL) based methods that allow robots to learn from imperfect demonstrations. We achieve this by incorporating human factors such as the quality of the demonstrations and the level of effort humans are willing to invest in teaching the robot. Our research is structured around three key contributions. First, we examine scenarios where robots have access to high-quality human demonstrations and abundant corrective feedback. In this setup, we introduce an algorithm called SARI (Shared Autonomy across Repeated Interactions), which leverages repeated human-robot interactions to learn from humans. Through extensive simulations and real-world experiments, we demonstrate that SARI significantly enhances the robot's ability to perform complex tasks by iteratively improving its understanding and responses based on human feedback. Second, we explore scenarios where human demonstrations are suboptimal and no additional corrective feedback is provided. This approach acknowledges the inherent imperfections in human teaching and aims to develop robots that can learn effectively under such conditions. We accomplish this by allowing the robot to adopt a risk-averse strategy that underestimates the human's abilities. This method is particularly valuable in household environments where users may not have the expertise or patience to provide perfect demonstrations. Finally, we address the challenge of learning from a single video demonstration, which is particularly relevant for enabling robots to learn tasks without extensive human involvement.
We present VIEW (Visual Imitation lEarning with Waypoints), a method that focuses on extracting critical waypoints from video demonstrations. By identifying key positions and movements, VIEW allows robots to efficiently replicate tasks with minimal training data. Our experiments show that VIEW can significantly reduce both the number of trials required and the time needed for the robot to learn new tasks. The findings from this research highlight the importance of incorporating advanced learning algorithms and interactive methods to enhance the robot's ability to operate autonomously in diverse environments. By addressing the variability in human teaching and leveraging innovative learning strategies, this dissertation contributes to the development of more adaptable, efficient, and user-friendly robotic systems.
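The waypoint-extraction idea behind VIEW can be illustrated with a simplified stand-in: keep a trajectory's endpoints plus any point where the motion direction turns sharply. VIEW operates on video demonstrations; here a tracked 2-D path is assumed to be given, and the turn-angle threshold is an invented parameter rather than the method's actual criterion.

```python
import numpy as np

def extract_waypoints(traj, angle_thresh=0.3):
    """Keep the endpoints and any interior point where the heading
    changes by more than angle_thresh radians."""
    waypoints = [traj[0]]
    for i in range(1, len(traj) - 1):
        v_in, v_out = traj[i] - traj[i - 1], traj[i + 1] - traj[i]
        cos_turn = v_in @ v_out / (np.linalg.norm(v_in) * np.linalg.norm(v_out))
        if np.arccos(np.clip(cos_turn, -1.0, 1.0)) > angle_thresh:
            waypoints.append(traj[i])
    waypoints.append(traj[-1])
    return np.array(waypoints)

# An L-shaped reach: straight along x, a 90-degree turn, then straight along y.
path = np.array([[i, 0.0] for i in range(6)] + [[5.0, j] for j in range(1, 6)])
wps = extract_waypoints(path)    # start, corner, end
```

Reducing an 11-point path to 3 waypoints shows why waypoint-based imitation needs so few trials: the robot only has to reach a handful of key positions rather than reproduce every frame of the demonstration.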