Architecture design and simulation for distributed learning classifier systems
In this thesis, we introduce the Distributed Learning Classifier System (DLCS) as a novel extension of J. H. Holland's standard learning classifier system (LCS). While the standard LCS offers effective real-time control and learning, it provides no mechanism for communication between LCS agents in a multiple-agent scenario. Multiple agents are often used to solve large tasks collectively by subdividing a task into smaller parts; they can also be used to solve a task in parallel so that a solution is reached more rapidly. With the DLCS, we introduce mechanisms that support both of these cases while remaining compatible with the standard LCS.
We introduce three types of messages that can be passed between DLCS agents. The first, the classifier message, lets agents share learned classifiers with one another, so that each agent can benefit from the others' successes. The second, the action message, lets agents "talk" to one another directly. The third, the bucket brigade payoff message, extends the chain-of-payoff reward scheme of the standard LCS across multiple DLCS agents.
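The three message types above can be pictured as simple typed records. The following is a minimal illustrative sketch, not the thesis's actual implementation; all field and class names are assumptions introduced here for clarity (the thesis's own data formats may differ).

```python
from dataclasses import dataclass

# Hypothetical sketch of the three DLCS message types.
# All names and fields are illustrative assumptions, not drawn from the thesis.

@dataclass
class ClassifierMessage:
    """Shares a learned classifier so other agents can reuse it."""
    condition: str   # ternary condition string, e.g. "01#1" ('#' = don't care)
    action: str      # action bit string the classifier emits when matched
    strength: float  # strength/fitness of the shared classifier

@dataclass
class ActionMessage:
    """Lets one agent 'talk' to another by posting to its message list."""
    sender_id: int
    payload: str     # content posted onto the receiving agent's message list

@dataclass
class PayoffMessage:
    """Extends the bucket brigade chain of payoffs across agent boundaries."""
    sender_id: int
    amount: float    # payoff passed back along the inter-agent classifier chain
```

In this framing, a classifier message copies knowledge, an action message coordinates behavior, and a payoff message lets credit assignment flow between agents just as the bucket brigade passes strength back along a classifier chain within a single LCS.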
Finally, we present simulation results for both the standard LCS and the DLCS. Our LCS simulations examine several important aspects of learning classifier system operation and illustrate some of its shortcomings. The DLCS simulations justify the distributed architecture and suggest future directions for achieving learning among multiple agents.
- Masters Theses