An Assessment of Multistage Reward Function Design for Deep Reinforcement Learning-Based Microgrid Energy Management
dc.contributor.author | Goh, Hui Hwang | en |
dc.contributor.author | Huang, Yifeng | en |
dc.contributor.author | Lim, Chee Shen | en |
dc.contributor.author | Zhang, Dongdong | en |
dc.contributor.author | Liu, Hui | en |
dc.contributor.author | Dai, Wei | en |
dc.contributor.author | Kurniawan, Tonni Agustiono | en |
dc.contributor.author | Rahman, Saifur | en |
dc.date.accessioned | 2024-03-27T19:10:38Z | en |
dc.date.available | 2024-03-27T19:10:38Z | en |
dc.date.issued | 2022-06-01 | en |
dc.description.abstract | Reinforcement-learning-based energy management strategies have been an active research subject in the past few years. In contrast to the baseline reward function (BRF), this work proposes and investigates a multi-stage reward mechanism (MSRM) that scores both the agent's per-step and final performance during training and returns the scores to the agent in real time as rewards. MSRM also improves the agent's training through expert intervention, which aims to prevent the agent from being trapped in sub-optimal strategies. The energy management performance considered by the MSRM-based algorithms covers energy balance, economic cost, and reliability. The reward function is assessed in conjunction with two deep reinforcement learning algorithms: the double deep Q-network (DDQN) and policy gradient (PG). Benchmarked against the BRF, numerical simulation shows that MSRM tends to improve the convergence characteristics, reduce the explained variance, and reduce the tendency of the agent to become trapped in sub-optimal strategies. In addition, the methods have been assessed against MPC-based energy management strategies in terms of relative cost, self-balancing rate, and computational time. The assessment concludes that, in the given context, PG-MSRM offers the best overall performance. | en |
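The abstract describes the multi-stage reward mechanism only at a high level. The Python sketch below illustrates one plausible shape of such a reward: a dense per-step score covering energy balance, cost, and reliability, a sparse end-of-episode score tied to the self-balancing rate, and an expert-intervention correction. All function names, fields, weights, and thresholds here are illustrative assumptions, not the formulation published in the paper.

```python
# A minimal sketch of a multi-stage reward mechanism (MSRM) for microgrid
# energy management. All names, weights, and thresholds are illustrative
# assumptions; the paper's exact scoring rules differ.
from dataclasses import dataclass
from typing import List

@dataclass
class StepOutcome:
    demand_kwh: float     # load demand at this time step
    imbalance_kwh: float  # |generation - demand| after dispatch
    cost: float           # operating cost incurred at this step
    unserved_kwh: float   # load shed at this step (reliability proxy)

def step_reward(o: StepOutcome) -> float:
    """Stage 1: dense per-step score returned to the agent in real time.
    It penalizes energy imbalance, economic cost, and lost load."""
    return -(1.0 * o.imbalance_kwh + 0.1 * o.cost + 2.0 * o.unserved_kwh)

def final_reward(episode: List[StepOutcome],
                 target_rate: float = 0.95, bonus: float = 50.0) -> float:
    """Stage 2: sparse end-of-episode score. Here, a bonus is granted
    when the episode's self-balancing rate reaches a target."""
    served = sum(o.demand_kwh - o.unserved_kwh for o in episode)
    demand = sum(o.demand_kwh for o in episode) or 1.0  # guard div-by-zero
    return bonus if served / demand >= target_rate else 0.0

def expert_intervention(episode_costs: List[float],
                        expert_cost: float, penalty: float = 25.0) -> float:
    """Stage 3: expert correction. If the learned policy's episode cost is
    far above an expert baseline (e.g., a rule-based or MPC schedule), a
    penalty steers the agent away from sub-optimal strategies."""
    return -penalty if sum(episode_costs) > 1.5 * expert_cost else 0.0
```

In training, the stage-1 score would be returned at every environment step, while the stage-2 bonus and stage-3 expert correction would be folded into the reward of the terminal transition.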
dc.description.version | Accepted version | en |
dc.format.extent | Pages 4300-4311 | en |
dc.format.extent | 12 page(s) | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.doi | https://doi.org/10.1109/TSG.2022.3179567 | en |
dc.identifier.eissn | 1949-3061 | en |
dc.identifier.issn | 1949-3053 | en |
dc.identifier.issue | 6 | en |
dc.identifier.orcid | Rahman, Saifur [0000-0001-6226-8406] | en |
dc.identifier.uri | https://hdl.handle.net/10919/118472 | en |
dc.identifier.volume | 13 | en |
dc.language.iso | en | en |
dc.publisher | IEEE | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Microgrids | en |
dc.subject | Energy management | en |
dc.subject | Real-time systems | en |
dc.subject | Costs | en |
dc.subject | Prediction algorithms | en |
dc.subject | Training | en |
dc.subject | Convergence | en |
dc.subject | Microgrid energy management | en |
dc.subject | deep reinforcement learning | en |
dc.subject | reward function | en |
dc.subject | optimal scheduling | en |
dc.title | An Assessment of Multistage Reward Function Design for Deep Reinforcement Learning-Based Microgrid Energy Management | en |
dc.title.serial | IEEE Transactions on Smart Grid | en |
dc.type | Article - Refereed | en |
dc.type.dcmitype | Text | en |
dc.type.other | Article | en |
dc.type.other | Journal | en |
pubs.organisational-group | /Virginia Tech | en |
pubs.organisational-group | /Virginia Tech/Engineering | en |
pubs.organisational-group | /Virginia Tech/Engineering/Advanced Research Institute | en |
pubs.organisational-group | /Virginia Tech/Engineering/Electrical and Computer Engineering | en |
pubs.organisational-group | /Virginia Tech/All T&R Faculty | en |
pubs.organisational-group | /Virginia Tech/Engineering/COE T&R Faculty | en |