Publication | Open Access
Conservative Q-Learning for Offline Reinforcement Learning
Citations: 534 | References: 48 | Year: 2020
Keywords: Artificial Intelligence · Engineering · Machine Learning · Data Science · Offline RL · Deep Reinforcement Learning · Sequential Learning · Lower Bound · Action Model Learning · Sequential Decision Making · Computer Science · Robot Learning · Learning Control · Deep Learning · Offline Reinforcement Learning
Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, and standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.
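To make the abstract's description concrete, the following is a minimal sketch of what the CQL regularizer added to a standard Bellman error objective can look like for a discrete-action Q-network. This is an illustration, not the paper's reference implementation: the names `q_net`, `target_net`, the batch format, and the default `alpha` are assumptions, and the Bellman backup shown is a plain DQN-style target.

```python
# Sketch of a CQL-style loss for discrete actions (PyTorch), assuming:
#   q_net(states)      -> (B, |A|) Q-values, trainable
#   target_net(states) -> (B, |A|) Q-values, frozen target copy
#   batch = (states, actions, rewards, next_states, dones)
#   with `actions` a LongTensor of shape (B,) and `dones` a float mask.
import torch
import torch.nn.functional as F

def cql_loss(q_net, target_net, batch, gamma=0.99, alpha=1.0):
    """Conservative regularizer + standard Bellman error."""
    states, actions, rewards, next_states, dones = batch

    q_values = q_net(states)                                   # (B, |A|)
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)

    # Standard Bellman target, computed without gradients.
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * next_q
    bellman_error = F.mse_loss(q_taken, td_target)

    # Conservative regularizer: push down a soft maximum of Q over all
    # actions (logsumexp) while pushing Q back up at the actions actually
    # seen in the dataset. This is the term that makes the learned values
    # lower-bound the policy's true value.
    conservative = (torch.logsumexp(q_values, dim=1) - q_taken).mean()

    return alpha * conservative + bellman_error
```

The weight `alpha` trades off conservatism against fit to the Bellman backup: larger values push the learned Q-values down more aggressively, which tightens the pessimism at the cost of underestimating well-supported actions.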