Title: An Assessment of Multistage Reward Function Design for Deep Reinforcement Learning-Based Microgrid Energy Management
Authors: Goh, Hui Hwang; Huang, Yifeng; Lim, Chee Shen; Zhang, Dongdong; Liu, Hui; Dai, Wei; Kurniawan, Tonni Agustiono; Rahman, Saifur
Journal: IEEE Transactions on Smart Grid, vol. 13, no. 6, pp. 4300-4311 (12 pages)
Date issued: 2022-06-01
Date available: 2024-03-27
Type: Article - Refereed
Language: en
Format: application/pdf
Rights: In Copyright
ISSN: 1949-3053 (print); 1949-3061 (electronic)
DOI: https://doi.org/10.1109/TSG.2022.3179567
Handle: https://hdl.handle.net/10919/118472
ORCID: Rahman, Saifur [0000-0001-6226-8406]
Keywords: Microgrids; Energy management; Real-time systems; Costs; Prediction algorithms; Training; Convergence; Microgrid energy management; deep reinforcement learning; reward function; optimal scheduling

Abstract: Reinforcement-learning-based energy management strategies have been an active research subject in the past few years. Different from the baseline reward function (BRF), this work proposes and investigates a multi-stage reward mechanism (MSRM) that scores the agent's step and final performance during training and returns the score to the agent in real time as a reward. MSRM also improves the agent's training through expert intervention, which aims to prevent the agent from becoming trapped in sub-optimal strategies. The energy management performance considered by the MSRM-based algorithm includes energy balance, economic cost, and reliability. The reward function is assessed in conjunction with two deep reinforcement learning algorithms: the double deep Q-learning network (DDQN) and policy gradient (PG). Upon benchmarking against BRF, numerical simulation shows that MSRM tends to improve the convergence characteristics, reduce the explained variance, and reduce the tendency of the agent to become trapped in sub-optimal strategies. In addition, the methods have been assessed against MPC-based energy management strategies in terms of relative cost, self-balancing rate, and computational time. The assessment concludes that, in the given context, PG-MSRM has the best overall performance.
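
The abstract's description of MSRM (a per-step score plus an end-of-episode score, with expert intervention steering the agent away from sub-optimal strategies) can be illustrated with a minimal sketch. The Python fragment below is a hypothetical illustration only: the component functions, weights, and penalty values are invented assumptions, not the authors' formulation, which is given in the full text.

# Hypothetical sketch of a multi-stage reward mechanism (MSRM) as outlined
# in the abstract. All weights and component functions here are
# illustrative assumptions, not the paper's actual reward design.

def step_reward(energy_imbalance, step_cost, reliability_penalty,
                w_balance=1.0, w_cost=0.5, w_rel=0.5):
    """Stage 1: score each control step on energy balance, economic
    cost, and reliability, returned to the agent in real time."""
    return -(w_balance * abs(energy_imbalance)
             + w_cost * step_cost
             + w_rel * reliability_penalty)

def final_reward(episode_cost, self_balancing_rate, bonus=10.0):
    """Stage 2: score the agent's final performance once the
    scheduling episode terminates."""
    return bonus * self_balancing_rate - episode_cost

def expert_intervention(action, safe_action, penalty=-5.0):
    """Replace an action an expert rule flags as infeasible and
    penalize the agent, discouraging sub-optimal strategies."""
    if action != safe_action:
        return safe_action, penalty
    return action, 0.0

A training loop would apply expert_intervention before each action is executed, add step_reward at every control interval, and add final_reward at episode termination, so the agent receives both stages of the score.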