Publication | Closed Access

Near-optimal Regret Bounds for Reinforcement Learning

Citations: 208

Year: 2010

TLDR

The paper studies the total regret of learning algorithms for undiscounted reinforcement learning in Markov decision processes. The authors introduce a diameter parameter D for MDPs and design an algorithm whose regret after T steps is \(O(DS\sqrt{AT})\), complemented by a lower bound of \(\Omega(\sqrt{DSAT})\) that holds for any learning algorithm. They further give a sample-complexity bound on the number of suboptimal steps, which yields a gap-dependent regret bound logarithmic in T, and extend the algorithm to MDPs that are allowed to change up to \(l\) times, with regret \(O(l^{1/3}T^{2/3}DS\sqrt{A})\).
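
The regret and the diameter that the TLDR refers to can be stated compactly as follows. This is the standard formalization for the undiscounted setting; the page itself does not define the notation, so the symbols below are assumptions:

\[
\Delta(M, \mathfrak{A}, s, T) \;=\; T\rho^*(M) \;-\; \sum_{t=1}^{T} r_t,
\qquad
D(M) \;=\; \max_{s \neq s'} \, \min_{\pi} \, \mathbb{E}\!\left[\,T(s' \mid M, \pi, s)\,\right],
\]

where \(\rho^*(M)\) is the optimal average reward of the MDP \(M\), \(r_t\) is the reward collected by the learning algorithm \(\mathfrak{A}\) at step t when started in state s, and \(T(s' \mid M, \pi, s)\) is the first time step at which \(s'\) is reached when following policy \(\pi\) from \(s\).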

Abstract

For undiscounted reinforcement learning in Markov decision processes (MDPs) we consider the total regret of a learning algorithm with respect to an optimal policy. In order to describe the transition structure of an MDP we propose a new parameter: an MDP has diameter D if for any pair of states s, s' there is a policy which moves from s to s' in at most D steps (on average). We present a reinforcement learning algorithm with total regret \(O(DS\sqrt{AT})\) after T steps for any unknown MDP with S states, A actions per state, and diameter D. A corresponding lower bound of \(\Omega(\sqrt{DSAT})\) on the total regret of any learning algorithm is given as well. These results are complemented by a sample-complexity bound on the number of suboptimal steps taken by our algorithm. This bound can be used to achieve a (gap-dependent) regret bound that is logarithmic in T. Finally, we also consider a setting where the MDP is allowed to change a fixed number \(l\) of times. We present a modification of our algorithm that is able to deal with this setting and show a regret bound of \(O(l^{1/3}T^{2/3}DS\sqrt{A})\).
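
When the MDP is known, the diameter defined in the abstract is directly computable: for each target state, the minimal expected hitting times satisfy a Bellman equation and can be found by value iteration. Below is a minimal sketch in Python/NumPy, assuming a communicating MDP given as a dense transition tensor; the function name, tolerance, and toy example are illustrative, not from the paper:

```python
import numpy as np

def mdp_diameter(P, tol=1e-9, max_iter=1_000_000):
    """Diameter D of a communicating MDP with transition tensor P.

    P has shape (S, A, S) with P[s, a, s2] = Pr(next = s2 | state = s, action = a).
    D is the maximum over ordered pairs s != s' of the minimal expected number of
    steps to move from s to s'. Minimal hitting times are computed by value
    iteration on h(s) = 1 + min_a sum_{s2} P(s2 | s, a) * h(s2), with h(goal) = 0.
    """
    S = P.shape[0]
    D = 0.0
    for goal in range(S):
        h = np.zeros(S)  # h[s]: current estimate of min expected hitting time of `goal`
        for _ in range(max_iter):
            h_new = 1.0 + (P @ h).min(axis=1)  # one step plus best expected remainder
            h_new[goal] = 0.0                  # the goal state absorbs
            if np.max(np.abs(h_new - h)) < tol:
                h = h_new
                break
            h = h_new
        D = max(D, float(h.max()))
    return D

if __name__ == "__main__":
    # Toy two-state MDP: action 0 moves/stays left, action 1 drifts right w.p. 0.6.
    P = np.array([
        [[1.0, 0.0], [0.4, 0.6]],  # from state 0
        [[1.0, 0.0], [0.4, 0.6]],  # from state 1 (action 0 jumps back to state 0)
    ])
    print(mdp_diameter(P))  # worst pair is 0 -> 1: expected 1/0.6 ≈ 1.67 steps
```

The inner loop is plain value iteration for shortest expected hitting times, which converges for communicating MDPs since every state can reach the goal under some policy; for large S this per-goal sweep costs O(S^2 A) per iteration, so the sketch is meant for small models.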