Title: A Distributed Q-learning Classifier System for task decomposition in real robot learning problems
Author: Chapman, Kevin L.
Type: Thesis
Date issued: 1996-09-05
Date accessioned/available: 2014-03-14
Identifier: etd-03042009-041449
URI: http://hdl.handle.net/10919/41435
Related URL: http://scholar.lib.vt.edu/theses/available/etd-03042009-041449/

Abstract: A distributed reinforcement-learning system is designed and implemented on a mobile robot for the study of complex task decomposition in real robot learning environments. The Distributed Q-learning Classifier System (DQLCS) is an evolution of the standard Learning Classifier System (LCS) proposed by J.H. Holland. Two limitations of the standard LCS are its monolithic nature and its complex apportionment-of-credit scheme, the bucket brigade algorithm (BBA). The DQLCS addresses both of these problems, as well as the inherent difficulties faced by learning systems operating in real environments. We introduce Q-learning as the apportionment-of-credit component of the DQLCS, and we develop a distributed learning architecture to facilitate complex task decomposition. The Q-learning update equation is derived from dynamic programming, and its advantages over the complex BBA are discussed. The distributed architecture provides for faster learning by allowing the system to effectively reduce the size of the problem space it must explore. Holistic and monolithic shaping approaches are used to distribute reward among the learning modules of the DQLCS in a variety of real robot learning experiments. The results of these experiments support the DQLCS as a useful reinforcement learning paradigm and suggest future areas of study in distributed learning systems.

Extent: ix, 110 leaves
Format: application/pdf
Description: BTD
Language: en
Rights: In Copyright
Subjects: Q-learning; Learning Classifier Systems; artificial intelligence; mobile robots; task decomposition
Call number: LD5655.V855 1996.C437
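Note: the abstract refers to the Q-learning update equation derived from dynamic programming but does not reproduce it, and the thesis's exact formulation is not shown in this record. For reference, the standard one-step Q-learning update (Watkins) on which such systems build is, in conventional notation (the symbols below are the usual ones, not taken from the record):

    Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]

where \alpha is the learning rate, \gamma the discount factor, and r_{t+1} the reward received on reaching state s_{t+1}. Unlike the bucket brigade algorithm, which propagates credit backward one classifier at a time along activation chains, this update bootstraps directly on the estimated value of the successor state, which is one of the advantages the abstract alludes to.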