Concepedia

Concept: value function approximation

1.1K Publications · 79.6K Citations · 2.3K Authors · 721 Institutions

About

Value function approximation is a methodological approach within reinforcement learning and dynamic programming that estimates the value of states or state-action pairs under a policy using a parameterized function. It is crucial for problems with large or continuous state and action spaces, where an exact tabular representation of the value function is computationally infeasible. The field studies techniques for generalizing value estimates from observed states to unseen ones, employing function approximators such as linear models, neural networks, and basis-function expansions. Its primary significance is enabling reinforcement learning algorithms to scale to complex, high-dimensional environments by providing a compact, learnable representation of expected future rewards.
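The idea can be sketched with one of the simplest instances: semi-gradient TD(0) with a linear approximator v(s) = w · φ(s). The environment below is a hypothetical five-state random walk chosen for illustration (it is not from this page), and the one-hot features stand in for whatever feature map a real application would use; replacing them with a coarser featurization is what makes distinct states share weights and generalize.

```python
import numpy as np

def features(state, n_states):
    """One-hot feature vector. In practice these would be coarser
    features (e.g. tile coding or a neural encoder), which is what
    lets value estimates generalize across states."""
    phi = np.zeros(n_states)
    phi[state] = 1.0
    return phi

def td0_linear(n_states=5, episodes=200, alpha=0.1, gamma=1.0, seed=0):
    """Semi-gradient TD(0) with a linear value function v(s) = w @ phi(s)
    on a random walk over states 0..n_states-1: episodes start in the
    middle, terminate off either end, and pay reward +1 only when
    exiting to the right."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n_states)  # weight vector of the linear approximator
    for _ in range(episodes):
        s = n_states // 2
        while True:
            s_next = s + rng.choice([-1, 1])
            if s_next < 0:                      # fell off the left end
                r, v_next, done = 0.0, 0.0, True
            elif s_next >= n_states:            # exited to the right
                r, v_next, done = 1.0, 0.0, True
            else:
                r, v_next, done = 0.0, w @ features(s_next, n_states), False
            # TD error, then a semi-gradient step on the weights only
            delta = r + gamma * v_next - w @ features(s, n_states)
            w += alpha * delta * features(s, n_states)
            if done:
                break
            s = s_next
    return w
```

With one-hot features this reduces to tabular TD(0), so the learned weights approach the true state values of this chain (1/6, 2/6, ..., 5/6); swapping in a coarser feature map keeps the update rule identical while trading exactness for generalization.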

Top Authors (rankings based on concept H-Index):

DS, Google DeepMind (United Kingdom)
DL, Chinese Academy of Sciences
SM, Technion – Israel Institute of Technology
ZY, Princeton University
HV, Google DeepMind (United Kingdom)

Top Institutions (rankings based on concept H-Index):

University of California, Berkeley (Berkeley, United States)
Tsinghua University (Beijing, China)
Princeton University (Princeton, United States)