From Static to Adaptive: Dynamic Cost Function Weight Adaptation in Hierarchical Reinforcement Learning for Sustainable 6G Radio Access Networks

Date

2025-12-19

Publisher

Virginia Tech

Abstract

The rapid growth of mobile network traffic and the densification required for 6G networks significantly increase energy consumption, with base stations (BSs) accounting for up to 70% of total network energy use. Energy-efficient BS switching has therefore become a critical research focus. Traditional solutions rely on static thresholds or fixed cost function weights, which limits their adaptability in dynamic environments. This thesis investigates how cost function design and weight adaptation influence the trade-off between energy consumption and Quality of Service (QoS) degradation in Deep Reinforcement Learning (DRL)-based BS switching. Using a realistic spatio-temporal dataset, we show that static cost weights lead to suboptimal performance under varying traffic conditions. To address this, we propose a Hierarchical Reinforcement Learning (HRL) architecture in which a high-level controller dynamically selects among low-level policies trained with different cost function weights. Experimental results demonstrate that the proposed HRL approach achieves up to 64% energy reduction, a 5% improvement over the static DRL baseline, while maintaining acceptable QoS levels. These findings highlight the potential of hierarchical control and adaptive weighting for scalable, sustainable 6G Radio Access Network operations.

Keywords

Energy efficiency, Base station switching, Hierarchical reinforcement learning
