Policy Gradient Methods for Reinforcement Learning with Function Approximation
Citations: 5K | References: 10 | Year: 1999 | Venue: Advances in Neural Information Processing Systems 12 (NIPS 1999)
Function approximation is essential to reinforcement learning, but standard value-function-based methods have proven theoretically intractable. This paper instead represents the policy with its own function approximator and updates it along the gradient of expected reward with respect to the policy parameters, an approach exemplified by REINFORCE and actor-critic methods. It shows that this gradient can be estimated from experience with the aid of an approximate action-value or advantage function, and proves that a form of policy iteration with arbitrary differentiable function approximation converges to a locally optimal policy.
Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.
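The abstract describes updating policy parameters along the gradient of expected reward, with Williams's REINFORCE as one instance of the approach. The following is a minimal REINFORCE-style sketch on a made-up two-state MDP, intended only as an illustration of that update rule; the toy environment, step size, discount factor, horizon, and episode count are assumptions for the example, not details from the paper, and the sketch does not include the paper's extension to approximate action-value or advantage functions.

```python
import numpy as np

# Hedged sketch: REINFORCE-style policy-gradient updates with a tabular
# softmax policy on an illustrative 2-state, 2-action MDP.
# Update rule: theta[s] += alpha * G_t * grad log pi(a_t | s_t; theta).

N_STATES, N_ACTIONS = 2, 2
rng = np.random.default_rng(0)

def softmax_policy(theta, s):
    """Action probabilities in state s under a tabular softmax policy."""
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

def step(s, a):
    """Toy dynamics (assumed for the example): action 0 keeps the state,
    action 1 flips it; reward +1 only for action 1 taken in state 0."""
    r = 1.0 if (s == 0 and a == 1) else 0.0
    s_next = s if a == 0 else 1 - s
    return s_next, r

def run_episode(theta, horizon=10):
    s, traj = 0, []
    for _ in range(horizon):
        p = softmax_policy(theta, s)
        a = rng.choice(N_ACTIONS, p=p)
        s_next, r = step(s, a)
        traj.append((s, a, r))
        s = s_next
    return traj

theta = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma = 0.1, 0.95

for episode in range(500):
    traj = run_episode(theta)
    G = 0.0
    # Walk backwards so G is the discounted return from each step onward.
    for s, a, r in reversed(traj):
        G = r + gamma * G
        p = softmax_policy(theta, s)
        grad_log = -p          # d/d theta[s, :] of log pi(a | s; theta)
        grad_log[a] += 1.0
        theta[s] += alpha * G * grad_log

print("Learned action probabilities per state:")
for s in range(N_STATES):
    print(f"  state {s}: {softmax_policy(theta, s)}")
```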