SMART: Situationally-Aware Multi-Agent Reinforcement Learning-Based Transmissions

dc.contributor.author: Jiang, Zhiyuan
dc.contributor.author: Liu, Yan
dc.contributor.author: Hribar, Jernej
dc.contributor.author: DaSilva, Luiz A.
dc.contributor.author: Zhou, Sheng
dc.contributor.author: Niu, Zhisheng
dc.date.accessioned: 2022-01-04T19:38:10Z
dc.date.available: 2022-01-04T19:38:10Z
dc.date.issued: 2021-12-01
dc.date.updated: 2022-01-04T19:38:08Z
dc.description.abstract: In future wireless systems, the latency of information needs to be minimized to satisfy the requirements of many mission-critical applications. Meanwhile, not all terminals carry equally urgent packets, given their distinct situations, e.g., status freshness. Leveraging this feature, we propose an on-demand Medium Access Control (MAC) scheme, whereby each terminal transmits with dynamically adjusted aggressiveness based on its situation, which is modeled as a Markov state. A Multi-Agent Reinforcement Learning (MARL) framework is adopted, and each agent is trained with a Deep Deterministic Policy Gradient (DDPG) network. A notorious issue for MARL is slow and non-scalable convergence; to address this, a new Situationally-aware MARL-based Transmissions (SMART) scheme is proposed. It is shown that SMART can significantly shorten the convergence time, and the converged performance is also dramatically improved compared with state-of-the-art DDPG-based MARL schemes, at the expense of an additional offline training stage. SMART also significantly outperforms conventional MAC schemes, e.g., Carrier Sense Multiple Access (CSMA), in terms of average and peak Age of Information (AoI). In addition, SMART has the advantage of versatility: different Quality-of-Service (QoS) metrics, and hence various state-space definitions, are tested in extensive simulations, where SMART shows robustness and scalability in all considered scenarios.
dc.description.version: Accepted version
dc.format.extent: Pages 1430-1443
dc.format.extent: 14 page(s)
dc.format.mimetype: application/pdf
dc.identifier.doi: https://doi.org/10.1109/TCCN.2021.3068740
dc.identifier.eissn: 2332-7731
dc.identifier.issn: 2332-7731
dc.identifier.issue: 4
dc.identifier.uri: http://hdl.handle.net/10919/107354
dc.identifier.volume: 7
dc.language.iso: en
dc.publisher: IEEE
dc.relation.uri: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000728144400034&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=930d57c9ac61a043676db62af60056c1
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Technology
dc.subject: Telecommunications
dc.subject: Internet-of-Things
dc.subject: medium access control
dc.subject: multi-agent reinforcement learning
dc.subject: contention-based random access
dc.subject: Markov decision process
dc.subject: MULTIPLE-ACCESS
dc.subject: WIRELESS
dc.title: SMART: Situationally-Aware Multi-Agent Reinforcement Learning-Based Transmissions
dc.title.serial: IEEE Transactions on Cognitive Communications and Networking
dc.type: Article - Refereed
dc.type.dcmitype: Text
dc.type.other: Article
dc.type.other: Journal
pubs.organisational-group: /Virginia Tech
pubs.organisational-group: /Virginia Tech/University Research Institutes
pubs.organisational-group: /Virginia Tech/All T&R Faculty

Files

Original bundle
Name: Jiang_SMART.pdf
Size: 657.47 KB
Format: Adobe Portable Document Format
Description: Accepted version