Author: Khursheed, Shahwar Atiq
Dates: 2024-08-21; 2024-08-20
Identifier: vt_gsexam:41310
URI: https://hdl.handle.net/10919/120971
Abstract: We propose a Model-Based Deep Reinforcement Learning (MBDRL) framework for collaborative payload transportation using Unmanned Aerial Vehicles (UAVs) in Search and Rescue (SAR) missions, enabling heavier payload conveyance while maintaining vehicle agility. Our approach extends the single-drone application to a novel multi-drone setting, using the Probabilistic Ensembles with Trajectory Sampling (PETS) algorithm to model the unknown stochastic system dynamics and their uncertainty. We adopt a Multi-Agent Reinforcement Learning (MARL) framework with a centralized controller in a leader-follower configuration. The agents use the approximated transition function in a Model Predictive Controller (MPC) configured to maximize the reward function for waypoint navigation, while a position-based formation controller ensures stable flight of the physically linked UAVs. We also developed an Unreal Engine (UE) simulation connected to an offboard planner and controller via a Robot Operating System (ROS) framework that is transferable to real robots. This work achieves stable waypoint navigation in a stochastic environment with sample efficiency comparable to that reported in single-UAV work. This work has been funded by the National Science Foundation (NSF) under Award No. 2046770.
Type: ETD; Thesis
Language: en
Rights: Creative Commons Attribution 4.0 International
Subjects: Unmanned aerial vehicles; Unreal Engine; Model-based deep reinforcement learning; Cooperative multi-agent systems; Motion planning; Payload transportation
Title: Cooperative Payload Transportation by UAVs: A Model-Based Deep Reinforcement Learning (MBDRL) Application
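The planning loop the abstract describes (a learned probabilistic ensemble queried by a sampling-based MPC that maximizes a waypoint-navigation reward) can be illustrated with a minimal, hypothetical sketch. This is not the thesis code: the 1-D single-integrator dynamics, the bias-perturbed "ensemble" standing in for learned models, and all names are illustrative assumptions; it shows only the TS-style pattern of sampling one ensemble member per rollout and replanning each step.

```python
import numpy as np

rng = np.random.default_rng(0)
DT = 0.1  # control timestep (illustrative)

def make_model(bias):
    # Hypothetical "learned" single-integrator model; the bias term
    # stands in for per-member model error in a probabilistic ensemble.
    return lambda x, a: x + (a + bias) * DT

# Stand-in for a PETS-style ensemble of learned transition models.
ensemble = [make_model(b) for b in rng.normal(0.0, 0.05, size=5)]

def mpc_action(x, goal, horizon=10, n_candidates=100):
    # Random-shooting MPC: score candidate action sequences under one
    # ensemble member sampled per rollout (trajectory sampling).
    best_first, best_reward = 0.0, -np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        model = ensemble[rng.integers(len(ensemble))]  # one model per rollout
        s, reward = x, 0.0
        for a in seq:
            s = model(s, a)
            reward -= abs(s - goal)  # waypoint-tracking reward
        if reward > best_reward:
            best_reward, best_first = reward, seq[0]
    return best_first  # execute only the first action, then replan

x, goal = 0.0, 2.0
true_dynamics = make_model(0.0)  # "real" system the planner acts on
for _ in range(100):
    x = true_dynamics(x, mpc_action(x, goal))
```

Executing only the first action of the best sequence and replanning at every step is what makes this receding-horizon control; in the thesis setting the same loop would run on the multi-UAV state with the PETS ensemble as the transition model.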