A Neuro-Symbolic Reinforcement Learning Architecture: Integrating Perception, Reasoning, and Control
dc.contributor.author | Ellis, Hunter Wayne | en |
dc.contributor.committeechair | Doan, Thinh Thanh | en |
dc.contributor.committeechair | Hsiao, Michael S. | en |
dc.contributor.committeemember | Williams, Ryan K. | en |
dc.contributor.department | Electrical and Computer Engineering | en |
dc.date.accessioned | 2025-06-04T08:03:26Z | en |
dc.date.available | 2025-06-04T08:03:26Z | en |
dc.date.issued | 2025-06-03 | en |
dc.description.abstract | In recent years, neuro-symbolic learning methods have demonstrated promise in tasks requiring a semantic understanding that can often be missed by traditional deep learning techniques. By integrating symbolic reasoning with deep learning, neuro-symbolic architectures aim to be both interpretable and flexible. This thesis aims to apply neuro-symbolic learning to the domain of reinforcement learning. First, a simulation environment for robotic manipulation tasks is presented. In this environment, an analysis of policy-gradient-based reinforcement learning algorithms is given. Then, by leveraging the performance of deep learning with the semantic reasoning and interpretability of symbolically defined programming, a novel neuro-symbolic learning method is proposed to generalize task and motion planning for robotics applications using natural language. This novel neuro-symbolic architecture can be seen as an adaptation of the Neuro-Symbolic Concept Learner[1] developed by the MIT-IBM Watson AI Lab, in which images and natural language are first processed by convolutional and residual neural networks, respectively, and then interpreted by a symbolic reasoning program. Where the architecture proposed in this thesis differs is in its use of the Neuro-Symbolic Concept Learner for preprocessing of a given input task, which then informs a reinforcement learning agent of how to act in a given environment. Finally, the novel adaptation of the Neuro-Symbolic Concept Learner is introduced as a method of demonstrating generalizable behavior through symbolic preprocessing. | en |
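The following is a minimal illustrative sketch, not the thesis implementation, of the pipeline the abstract describes: a hypothetical NSCL-style preprocessor turns an image/instruction pair into a symbolic concept vector, which is concatenated onto the state observed by a REINFORCE-style policy-gradient agent. All names here (SymbolicPreprocessor, ToyReachEnv, the concept encoding) are assumptions introduced only for illustration.

    # Sketch: policy-gradient agent conditioned on a symbolic concept vector.
    # SymbolicPreprocessor and ToyReachEnv are hypothetical stand-ins, not the author's code.
    import torch
    import torch.nn as nn

    class SymbolicPreprocessor:
        """Stand-in for an NSCL-style front end: maps an image/instruction pair
        to a fixed-length symbolic concept vector (e.g. target object, relation)."""
        def __init__(self, n_concepts=8):
            self.n_concepts = n_concepts

        def encode(self, image, instruction):
            # Placeholder: a real system would run perception + program parsing here.
            return torch.zeros(self.n_concepts)

    class ToyReachEnv:
        """Trivial 1-D 'reach the goal' environment, included only to make the sketch runnable."""
        def reset(self):
            self.state = torch.randn(1)
            return self.state.clone()

        def step(self, action):                      # action in {0: move left, 1: move right}
            self.state += 0.1 if action == 1 else -0.1
            reward = -abs(self.state.item())         # closer to the origin is better
            done = abs(self.state.item()) < 0.05
            return self.state.clone(), reward, done

    pre = SymbolicPreprocessor()
    env = ToyReachEnv()
    policy = nn.Sequential(nn.Linear(1 + pre.n_concepts, 32), nn.Tanh(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
    concepts = pre.encode(image=None, instruction="reach the red block")  # fixed per task

    for episode in range(200):
        obs, log_probs, rewards, done = env.reset(), [], [], False
        for _ in range(50):
            # Policy sees the raw state augmented with the symbolic concept vector.
            logits = policy(torch.cat([obs, concepts]))
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, r, done = env.step(action.item())
            rewards.append(r)
            if done:
                break
        # REINFORCE update: maximize sum_t log pi(a_t | s_t, concepts) * return-to-go
        returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
        loss = -(torch.stack(log_probs) * returns).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The point of the sketch is only the interface: the symbolic front end produces a task representation once per instruction, and the learning agent treats it as additional observation, which is the separation of preprocessing and control the abstract outlines.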
dc.description.abstractgeneral | Robots are becoming more capable, but teaching them to perform complex tasks in changing environments remains a major challenge. Traditional learning systems, like deep learning, are powerful, but they are often seen as black boxes. This project explores a new approach that combines the strengths of deep learning with symbolic reasoning, which allows robots to reason about their actions and goals in a more human-interpretable way. In this thesis, a simulated environment was built for training and testing a robotic arm on object manipulation tasks. A reinforcement learning system was developed to allow the robot to learn through trial and error, improving its performance over time. To improve generalization and task understanding, a new hybrid model was proposed that combines deep learning with symbolic logic. Inspired by the MIT-IBM Watson AI Lab's Neuro-Symbolic Concept Learner, this model uses visual and language inputs to guide the robot's behavior based on symbolic representations. Unlike the original Concept Learner, this version is adapted to help a reinforcement learning agent decide what to do in a specific situation based on these symbolic cues. This research shows that combining symbolic reasoning with modern learning techniques could make robots more flexible, explainable, and capable of handling a wider variety of real-world tasks. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:43971 | en |
dc.identifier.uri | https://hdl.handle.net/10919/135030 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | Creative Commons Attribution 4.0 International | en |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en |
dc.subject | Reinforcement Learning | en |
dc.subject | Neuro-Symbolic | en |
dc.subject | Concept Learner | en |
dc.subject | Robotics | en |
dc.title | A Neuro-Symbolic Reinforcement Learning Architecture: Integrating Perception, Reasoning, and Control | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Engineering | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |