Publication | Closed Access

IEDQN: Information Exchange DQN with a Centralized Coordinator for Traffic Signal Control

Citations: 18 | References: 30 | Year: 2020

Abstract

Finding the optimal control strategy for traffic signals, especially across multiple intersections, remains a difficult task. The application of reinforcement learning (RL) algorithms to this problem is greatly limited by the partially observable and nonstationary environment. In this paper, we study how to mitigate these environmental effects through communication among agents. The proposed method, Information Exchange Deep Q-Network (IEDQN), uses a learned communication protocol that lets each local agent pay unequal, asymmetric attention to the other agents' information. In addition to the protocol, each agent abstracts the information it exchanges from its own local history rather than from instantaneous observations, which makes the communication robust to potential transmission delays. By alleviating the effects of partial observability, IEDQN also restores the effectiveness of experience replay. We evaluate IEDQN via simulation experiments on a traffic grid in the Simulation of Urban MObility (SUMO), where it outperforms comparative multi-agent RL (MARL) methods in both efficiency and effectiveness.
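The abstract describes, but does not fully specify, the communication mechanism. The following is a minimal NumPy sketch of attention-weighted message aggregation of the kind the abstract suggests: each agent scores the other agents' messages against its own query vector, so attention is unequal across peers and asymmetric between agent pairs. All names (`attend`, `softmax`), dimensions, and the toy data are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(query, messages):
    """Aggregate peers' messages, weighted by dot-product similarity
    to the agent's own query. Because each agent has its own query,
    agent i's weight on agent j need not equal j's weight on i
    (asymmetric attention)."""
    scores = np.array([query @ m for m in messages])
    weights = softmax(scores)
    return weights @ np.stack(messages), weights

# Toy setup: 3 intersections, each summarizing its local history
# (not its instantaneous observation) as a 4-d message vector.
rng = np.random.default_rng(0)
messages = [rng.normal(size=4) for _ in range(3)]
queries = [rng.normal(size=4) for _ in range(3)]

# Each agent aggregates the others' messages into one context vector,
# which would then be combined with its local observation before the
# Q-network computes action values.
for i in range(3):
    peers = [m for j, m in enumerate(messages) if j != i]
    context, w = attend(queries[i], peers)
```

In an actual IEDQN-style architecture the query and message encoders would be learned end-to-end with the Q-network; the fixed random vectors here only illustrate the aggregation step.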
