Concepedia

TLDR

Deep Reinforcement Learning has produced proficient controllers for complex tasks, yet these controllers have limited memory and depend on observing the full game screen at each decision point. This study investigates adding recurrency to a Deep Q‑Network (DQN) by replacing its first post‑convolutional fully‑connected layer with an LSTM, yielding a Deep Recurrent Q‑Network (DRQN). DRQN matches DQN's performance on standard and partially observed Atari games, scales with observability, and degrades less than DQN when evaluated with partial observations, showing that recurrency is a viable alternative to frame stacking, though it offers no systematic learning advantage.

Abstract

Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. The resulting Deep Recurrent Q-Network (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and on partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer. While recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes.
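The "flickering game screens" used to induce partial observability can be illustrated with a small sketch: at each timestep the frame is either fully revealed or fully obscured with some probability, so a memoryless agent loses all information on obscured steps. The function name `flicker`, the list-based frame representation, and the default probability are illustrative assumptions, not the paper's actual code.

```python
import random

def flicker(frame, p=0.5, rng=random):
    """All-or-nothing observation model: with probability p return a
    fully obscured (zeroed) frame, otherwise the frame unchanged.
    Illustrative sketch only."""
    if rng.random() < p:
        return [0] * len(frame)  # screen obscured: no information this step
    return frame                 # screen fully observed

# Example: a short episode in which some frames may be lost.
rng = random.Random(0)  # seeded for reproducibility
frames = [[1, 2], [3, 4], [5, 6]]
observations = [flicker(f, p=0.5, rng=rng) for f in frames]
```

Under this model, a single-frame feedforward network sees nothing on obscured steps, whereas a recurrent network can carry information across them in its hidden state.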
