A Distributed Q-learning Classifier System for task decomposition in real robot learning problems

dc.contributor.author: Chapman, Kevin L.
dc.contributor.committeechair: Bay, John S.
dc.contributor.committeemember: Abbott, A. Lynn
dc.contributor.committeemember: VanLandingham, Hugh F.
dc.contributor.department: Electrical Engineering
dc.date.accessioned: 2014-03-14T21:31:02Z
dc.date.adate: 2009-03-04
dc.date.available: 2014-03-14T21:31:02Z
dc.date.issued: 1996-09-05
dc.date.rdate: 2009-03-04
dc.date.sdate: 2009-03-04
dc.description.abstract: A distributed reinforcement-learning system is designed and implemented on a mobile robot for the study of complex task decomposition in real robot learning environments. The Distributed Q-learning Classifier System (DQLCS) is evolved from the standard Learning Classifier System (LCS) proposed by J.H. Holland. Two of the limitations of the standard LCS are its monolithic nature and its complex apportionment of credit scheme, the bucket brigade algorithm (BBA). The DQLCS addresses both of these problems as well as the inherent difficulties faced by learning systems operating in real environments. We introduce Q-learning as the apportionment of credit component of the DQLCS, and we develop a distributed learning architecture to facilitate complex task decomposition. Based upon dynamic programming, the Q-learning update equation is derived and its advantages over the complex BBA are discussed. The distributed architecture is implemented to provide for faster learning by allowing the system to effectively decrease the size of the problem space it must explore. Holistic and monolithic shaping approaches are used to distribute reward among the learning modules of the DQLCS in a variety of real robot learning experiments. The results of these experiments support the DQLCS as a useful reinforcement learning paradigm and suggest future areas of study in distributed learning systems.
dc.description.degree: Master of Science
dc.format.extent: ix, 110 leaves
dc.format.medium: BTD
dc.format.mimetype: application/pdf
dc.identifier.other: etd-03042009-041449
dc.identifier.sourceurl: http://scholar.lib.vt.edu/theses/available/etd-03042009-041449/
dc.identifier.uri: http://hdl.handle.net/10919/41435
dc.language.iso: en
dc.publisher: Virginia Tech
dc.relation.haspart: LD5655.V855_1996.C437.pdf
dc.relation.isformatof: OCLC# 36114106
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Q-learning
dc.subject: Learning Classifier Systems
dc.subject: artificial intelligence
dc.subject: mobile robots
dc.subject: task decomposition
dc.subject.lcc: LD5655.V855 1996.C437
dc.title: A Distributed Q-learning Classifier System for task decomposition in real robot learning problems
dc.type: Thesis
dc.type.dcmitype: Text
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science
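
The abstract notes that the DQLCS replaces the bucket brigade algorithm with a Q-learning update derived from dynamic programming. For orientation only, below is a minimal Python sketch of the standard one-step tabular Q-learning update, Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]. This is the textbook form, not necessarily the exact variant derived in the thesis, and all names here (q_table, alpha, gamma, q_update) are illustrative assumptions rather than identifiers from the work.

from collections import defaultdict

# Q-value table: maps (state, action) pairs to estimated discounted return.
q_table = defaultdict(float)
alpha = 0.1   # learning rate (assumed value, for illustration)
gamma = 0.9   # discount factor (assumed value, for illustration)

def q_update(state, action, reward, next_state, actions):
    """One-step Q-learning: move Q(s, a) toward r + gamma * max over a' of Q(s', a')."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    target = reward + gamma * best_next
    q_table[(state, action)] += alpha * (target - q_table[(state, action)])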

Files

Original bundle
Name: LD5655.V855_1996.C437.pdf
Size: 12.21 MB
Format: Adobe Portable Document Format

Collections