Publication | Open Access
Prioritized Experience Replay
2015 · 2K citations · 23 references
Keywords: Artificial Intelligence, Engineering, Machine Learning, Sequential Learning, Education, Reinforcement Learning (Educational Psychology), Intelligent Systems, Multi-agent Learning, Lifelong Reinforcement Learning, Uniform Replay, Stochastic Game, Experience Replay, Systems Engineering, Robot Learning, Sequential Decision Making, Computer Science, Exploration vs. Exploitation, Reward Hacking, Deep Reinforcement Learning, Replay Memory
Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.
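The abstract's core idea — sampling important transitions more often, rather than uniformly — can be sketched as a proportional prioritized replay buffer. This is a minimal illustration, not the paper's implementation: the class name, the `alpha`/`beta` hyperparameters (prioritization strength and importance-sampling correction), and the use of absolute TD error as the priority are assumptions about the standard proportional variant; the paper's efficient sum-tree data structure is replaced here by plain lists for clarity.

```python
import random

class PrioritizedReplayBuffer:
    """Sketch of proportional prioritized replay (illustrative, not the paper's code).

    Transitions are sampled with probability proportional to priority**alpha,
    where priority is assumed to be |TD error| + eps. Importance-sampling
    weights correct the bias introduced by non-uniform sampling.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha  # prioritization strength (0 recovers uniform replay)
        self.beta = beta    # importance-sampling correction strength
        self.eps = eps      # keeps priorities strictly positive
        self.data = []
        self.priorities = []
        self.pos = 0        # next slot to overwrite once the buffer is full

    def add(self, transition):
        # New transitions get the current max priority, so each is replayed
        # at least once before its priority is refined by a TD-error update.
        max_p = max(self.priorities, default=1.0)
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(max_p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # P(i) = p_i**alpha / sum_k p_k**alpha
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        n = len(self.data)
        # w_i = (N * P(i))**(-beta), normalized by the max weight for stability.
        weights = [(n * probs[i]) ** (-self.beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return idxs, [self.data[i] for i in idxs], weights

    def update_priorities(self, idxs, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

In practice the linear-time sampling above is the bottleneck; the proportional variant is usually backed by a sum-tree so that sampling and priority updates cost O(log N).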