Publication | Open Access
Dueling Network Architectures for Deep Reinforcement Learning
Citations: 1.8K
References: 16
Year: 2015
Artificial Intelligence · Deep Representations · Conventional Architectures · Model-free Learning · Engineering · Machine Learning · Data Science · Stochastic Game · Deep Reinforcement Learning · Network Architectures · Action Model Learning · Computer Science · Multi-agent Learning · Robot Learning · Learning Control · Deep Learning · World Model
Deep reinforcement learning has achieved many successes, yet most methods still rely on conventional architectures such as CNNs, LSTMs, or auto-encoders. This work introduces a novel neural network architecture for model-free reinforcement learning. The proposed dueling network separates the estimation of the state value function from the state-dependent action advantage function. Factoring the value and advantage functions improves policy evaluation when many actions have similar values, and enables the agent to outperform the state of the art on the Atari 2600 domain.
In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.
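The factoring described in the abstract can be sketched concretely. In the dueling architecture, a shared feature layer splits into a scalar state-value stream V(s) and a per-action advantage stream A(s, a), which are then aggregated into Q-values; the paper subtracts the mean advantage so the decomposition is identifiable. The sketch below uses simple linear streams over a feature vector purely for illustration (the weight shapes and names are hypothetical, not from the paper, which uses convolutional and fully connected layers):

```python
import numpy as np

def dueling_q_values(features, w_value, w_adv):
    """Combine a state-value stream and an advantage stream into Q-values
    using the dueling aggregation:
        Q(s, a) = V(s) + (A(s, a) - mean_a' A(s, a'))
    Subtracting the mean advantage makes V and A identifiable.
    Linear streams here are a toy stand-in for the paper's network heads."""
    value = features @ w_value        # scalar V(s)
    advantages = features @ w_adv     # shape (num_actions,)
    return value + (advantages - advantages.mean())

# Toy example: 2-dim features, 3 actions (numbers are illustrative).
features = np.array([1.0, 2.0])
w_value = np.array([0.5, 0.25])               # V(s) = 1.0
w_adv = np.array([[1.0, 0.0, -1.0],
                  [0.0, 1.0, 0.0]])           # A(s, .) = [1, 2, -1]
q = dueling_q_values(features, w_value, w_adv)
```

A useful sanity check of this aggregation: because the advantages are mean-centered, the mean of the Q-values equals V(s), so the value stream alone determines the overall level of the Q-function while the advantage stream only ranks actions.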