A Neuro-Symbolic Reinforcement Learning Architecture: Integrating Perception, Reasoning, and Control
Abstract
In recent years, neuro-symbolic learning methods have shown promise on tasks requiring a level of semantic understanding that traditional deep learning techniques often miss. By integrating symbolic reasoning with deep learning, neuro-symbolic architectures aim to be both interpretable and flexible. This thesis applies neuro-symbolic learning to the domain of reinforcement learning. First, a simulation environment for robotic manipulation tasks is presented, and within this environment an analysis of policy-gradient-based reinforcement learning algorithms is given. Then, by combining the performance of deep learning with the semantic reasoning and interpretability of symbolically defined programs, a novel neuro-symbolic learning method is proposed to generalize task and motion planning for robotics applications using natural language. This novel neuro-symbolic architecture can be seen as an adaptation of the Neuro-Symbolic Concept Learner [1] developed at the MIT-IBM Watson AI Lab, in which images and natural language are first processed by convolutional and residual neural networks, respectively, and then parsed by a symbolically reasoned program. Where the architecture proposed in this thesis differs is in its use of the Neuro-Symbolic Concept Learner to preprocess a given input task, which then informs a reinforcement learning agent of how to act in its environment. Finally, this adaptation of the Neuro-Symbolic Concept Learner is demonstrated as a method of achieving generalizable behavior through symbolic preprocessing.
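The pipeline the abstract describes — a symbolic program parsed from a natural-language task, used to condition a policy-gradient agent — can be sketched as follows. This is a minimal illustration, not the thesis's implementation: `parse_task` is a hypothetical rule-based stand-in for the neural semantic parser, the environment is reduced to a one-step bandit, and the agent is a plain REINFORCE update on softmax logits.

```python
import math
import random

def parse_task(instruction):
    """Toy stand-in for the symbolic parser: maps a natural-language
    task to a small symbolic program (hypothetical mini-DSL)."""
    words = instruction.lower().split()
    color = next(w for w in words if w in {"red", "blue"})
    direction = next(w for w in words if w in {"left", "right"})
    return [("filter", color), ("move", direction)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train(program, episodes=2000, lr=0.1, seed=0):
    """REINFORCE on a one-step task: reward 1 iff the sampled action
    matches the direction extracted by the symbolic program."""
    rng = random.Random(seed)
    target = 0 if dict(program)["move"] == "left" else 1
    logits = [0.0, 0.0]  # actions: 0 = move left, 1 = move right
    for _ in range(episodes):
        probs = softmax(logits)
        action = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if action == target else 0.0
        # policy-gradient step: grad of log pi(a) is onehot(a) - probs
        for a in range(2):
            grad = (1.0 if a == action else 0.0) - probs[a]
            logits[a] += lr * reward * grad
    return softmax(logits)

program = parse_task("move the red block left")
probs = train(program)
```

After training, the policy concentrates probability on the action the symbolic program names, illustrating how symbolic preprocessing can steer a learned controller; the thesis's actual architecture operates on images and full manipulation tasks rather than this two-action toy.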