
TLDR

Cloud Radio Access Networks enable 5G by handling massive growth in data traffic. The study aims to improve resource allocation in cloud RANs so as to minimize power consumption while satisfying user demands over a long operational period. A deep reinforcement learning framework is proposed: it defines state, action, and reward spaces, uses a deep neural network to approximate the action-value function, and formulates each decision epoch as a convex optimization problem. Evaluated against two widely used baselines in simulation, the framework achieves substantial power savings, meets user demands, and adapts effectively to highly dynamic scenarios.

Abstract

Cloud Radio Access Networks (RANs) have become a key enabling technology for next-generation (5G) wireless communications, as they can meet the requirements of massively growing wireless data traffic. However, resource allocation in cloud RANs still needs further improvement to reach the objective of minimizing power consumption while meeting the demands of wireless users over a long operational period. Inspired by the success of Deep Reinforcement Learning (DRL) in solving complicated control problems, we present a novel DRL-based framework for power-efficient resource allocation in cloud RANs. Specifically, we define the state space, action space, and reward function for the DRL agent, apply a Deep Neural Network (DNN) to approximate the action-value function, and formulate the resource allocation problem in each decision epoch as a convex optimization problem. We evaluate the proposed framework by comparing it against two widely used baselines via simulation. The results show that it achieves significant power savings while meeting user demands, and that it handles highly dynamic cases well.
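The abstract does not disclose the algorithmic details of the framework, so the following is only an illustrative sketch of the general idea it describes: an agent observes a state (RRH on/off configuration plus user demand), picks an action (toggling a remote radio head), and learns an action-value function whose reward penalizes both power consumption and unmet demand. A linear approximator stands in for the paper's DNN, and all constants (number of RRHs, per-RRH power and capacity, penalty weights) are assumptions for the toy environment, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RRH = 4                   # assumed number of remote radio heads
STATE_DIM = N_RRH + 1       # per-RRH on/off status + aggregate user demand
N_ACTIONS = 2 * N_RRH + 1   # turn one RRH on, turn one off, or keep config
GAMMA = 0.9                 # discount factor
ALPHA = 0.01                # learning rate
EPSILON = 0.1               # exploration rate

# Linear action-value approximator standing in for the paper's DNN:
# Q(s, a) = W[a] . s, one weight vector per discrete action.
W = np.zeros((N_ACTIONS, STATE_DIM))

def q_values(state):
    """Q(state, a) for every action a."""
    return W @ state

def step(state, action):
    """Toy environment: active RRHs consume power but serve demand."""
    next_state = state.copy()
    if action < N_RRH:                        # switch RRH `action` on
        next_state[action] = 1.0
    elif action < 2 * N_RRH:                  # switch RRH `action - N_RRH` off
        next_state[action - N_RRH] = 0.0
    active = next_state[:N_RRH].sum()
    power = 1.0 * active                      # assumed power per active RRH
    unmet = max(0.0, state[-1] - 0.5 * active)  # assumed capacity per RRH
    reward = -(power + 10.0 * unmet)          # penalize power and unmet demand
    next_state[-1] = rng.uniform(0.0, 2.0)    # demand for the next epoch
    return next_state, reward

state = np.array([1.0] * N_RRH + [1.0])
for t in range(2000):
    # epsilon-greedy action selection
    if rng.random() < EPSILON:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(q_values(state)))
    next_state, reward = step(state, action)
    # Q-learning temporal-difference update on the linear weights
    td_target = reward + GAMMA * np.max(q_values(next_state))
    td_error = td_target - q_values(state)[action]
    W[action] += ALPHA * td_error * state
    state = next_state
```

In the paper's actual setting, the inner per-epoch allocation (e.g. beamforming under demand constraints) is solved as a convex optimization problem rather than folded into the environment as it is in this sketch.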

