Show simple item record

dc.contributor.author: Cunningham, Bryan (en_US)
dc.contributor.author: Cao, Yong (en_US)
dc.date.accessioned: 2013-05-28T20:43:43Z (en_US)
dc.date.accessioned: 2013-06-19T14:36:27Z
dc.date.available: 2013-05-28T20:43:43Z (en_US)
dc.date.available: 2013-06-19T14:36:27Z
dc.date.issued: 2012-06-01
dc.identifier: http://eprints.cs.vt.edu/archive/00001200/ (en_US)
dc.identifier.uri: http://hdl.handle.net/10919/19430
dc.description: Past research on multiagent simulation with cooperative reinforcement learning (RL) focuses on developing sharing strategies that are adopted and used by all agents in the environment. In this paper, we target situations where this assumption of a single sharing strategy employed by all agents is not valid. We seek to address how agents with no predetermined sharing partners can exploit groups of cooperatively learning agents to improve learning performance when compared to independent learning. Specifically, we propose three intra-agent methods that do not assume a reciprocating sharing relationship and leverage the pre-existing agent interface associated with Q-Learning to expedite learning. (en_US)
dc.format: pdf http://eprints.cs.vt.edu/archive/00001200/01/bare_conf.pdf (en_US)
dc.publisher: Department of Computer Science, Virginia Polytechnic Institute & State University (en_US)
dc.subject: Artificial intelligence (en_US)
dc.title: Nonreciprocating Sharing Methods in Cooperative Q-Learning Environments (en_US)
dc.type: Technical report - Departmental (en_US)
dc.identifier.trnumber: TR-12-15 (en_US)
dc.type.dcmitype: Text (en_US)
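The abstract describes agents that use the standard Q-Learning interface to pull value estimates from cooperating peers without reciprocating. The report's three specific methods are not detailed here, so the sketch below is only a hypothetical illustration of the general idea: a tabular Q-Learning agent with an assumed one-way `absorb` method that reads a peer's Q-table (the class name, parameters `alpha`, `gamma`, `epsilon`, and the max-merge rule are illustrative choices, not the paper's).

```python
import random
from collections import defaultdict


class QLearner:
    """Tabular Q-Learning agent with a hypothetical one-way sharing hook."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions          # available actions in every state
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor
        self.epsilon = epsilon          # exploration probability
        self.q = defaultdict(float)     # Q-table keyed by (state, action)

    def choose(self, state):
        """Epsilon-greedy action selection over the current Q-table."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-Learning update."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

    def absorb(self, peer):
        """Nonreciprocating share: pull the peer's Q-values without giving
        anything back. Max-merge is one illustrative rule, not the paper's."""
        for key, value in peer.q.items():
            self.q[key] = max(self.q[key], value)
```

Because `absorb` only reads the peer's table, the peer need not adopt any sharing strategy itself, which is the nonreciprocating property the abstract emphasizes.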
