Efficient Reinforcement Learning for Control
dc.contributor.author | Baddam, Vasanth Reddy | en |
dc.contributor.committeechair | Eldardiry, Hoda Mohamed | en |
dc.contributor.committeechair | Boker, Almuatazbellah M. | en |
dc.contributor.committeemember | Gumussoy, Suat | en |
dc.contributor.committeemember | Cho, Jin-Hee | en |
dc.contributor.committeemember | Watson, Layne T. | en |
dc.contributor.department | Computer Science & Applications | en |
dc.date.accessioned | 2025-07-02T08:00:34Z | en |
dc.date.available | 2025-07-02T08:00:34Z | en |
dc.date.issued | 2025-07-01 | en |
dc.description.abstract | The landscape of control systems has evolved rapidly with the emergence of Reinforcement Learning (RL), offering promising solutions to a wide range of dynamic decision-making problems. However, the application of RL to real-world control systems is often hindered by computational inefficiencies, scalability issues, and a lack of structure in learning mechanisms. This thesis explores a central question: How can we design reinforcement learning algorithms that are not only effective but also computationally efficient and scalable for control systems of increasing complexity? To address this, we present a progression of approaches, starting with time-scale decomposition in small-scale systems and moving towards structured and adaptive learning strategies for large-scale, multi-agent control problems. Each chapter builds upon the previous one by introducing new methods tailored to the complexity and scale of the environment, culminating in a unified framework for efficient RL-driven control. | en |
dc.description.abstractgeneral | The landscape of control systems has evolved rapidly with the emergence of Reinforcement Learning (RL), offering promising solutions to a wide range of dynamic decision-making problems. However, the application of RL to real-world control systems is often hindered by computational inefficiencies, scalability issues, and a lack of structure in learning mechanisms. This thesis explores a central question: How can we design reinforcement learning algorithms that are not only effective but also computationally efficient and scalable for control systems of increasing complexity? To address this, we present a progression of approaches, starting with time-scale decomposition in small-scale systems and moving towards structured and adaptive learning strategies for large-scale, multi-agent control problems. Each chapter builds upon the previous one by introducing new methods tailored to the complexity and scale of the environment, culminating in a unified framework for efficient RL-driven control. | en |
dc.description.degree | Doctor of Philosophy | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:43548 | en |
dc.identifier.uri | https://hdl.handle.net/10919/135747 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | Creative Commons Attribution-NonCommercial 4.0 International | en |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | en |
dc.subject | Optimal Control; Reinforcement Learning | en |
dc.title | Efficient Reinforcement Learning for Control | en |
dc.type | Dissertation | en |
thesis.degree.discipline | Computer Science & Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | doctoral | en |
thesis.degree.name | Doctor of Philosophy | en |