Concepedia

Publication | Closed Access

Discrete-Time Nonlinear HJB Solution Using Approximate Dynamic Programming: Convergence Proof

1.1K Citations · 25 References · 2008

TLDR

The study proves that the value-iteration-based heuristic dynamic programming (HDP) algorithm converges to the optimal control and the optimal value function solving the Hamilton-Jacobi-Bellman equation for discrete-time nonlinear systems. The algorithm assumes that the value and action update equations can be solved exactly at each iteration, and employs a critic neural network to approximate the value function together with an action network to approximate the optimal control policy, allowing implementation without knowledge of the internal system dynamics. The exact-solution assumption holds for some classes of nonlinear systems, notably the discrete-time linear quadratic regulator (LQR), where the action is linear, the value is quadratic in the states, and the neural networks have zero approximation error; for the LQR, two neural networks suffice to implement HDP without knowing the system A matrix, an insight not widely recognized in the literature.

Abstract

Convergence of the value-iteration-based heuristic dynamic programming (HDP) algorithm is proven in the case of general nonlinear systems. That is, it is shown that HDP converges to the optimal control and the optimal value function that solves the Hamilton-Jacobi-Bellman equation appearing in infinite-horizon discrete-time (DT) nonlinear optimal control. It is assumed that, at each iteration, the value and action update equations can be exactly solved. Two standard neural networks (NNs) are used: a critic NN approximates the value function, whereas an action NN approximates the optimal control policy. It is stressed that this approach allows the implementation of HDP without knowing the internal dynamics of the system. The exact-solution assumption holds for some classes of nonlinear systems and, in particular, for the DT linear quadratic regulator (LQR), where the action is linear, the value is quadratic in the states, and the NNs have zero approximation error. Notably, for the LQR, HDP may be implemented without knowing the system A matrix by using two NNs. This fact is not generally appreciated in the folklore of HDP for the DT LQR, where only one critic NN is generally used.
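As a concrete illustration of the LQR special case the abstract describes, the sketch below runs the value iteration with exactly solved action and value updates: the minimizing control is linear in the state (gain K) and the value kernel P stays quadratic, so no NN approximation error arises. The system matrices A, B and costs Q, R here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical 2-state, 1-input DT system (illustrative, not from the paper)
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state cost x'Qx
R = np.array([[1.0]])  # control cost u'Ru

def hdp_lqr(A, B, Q, R, iters=200):
    """Value iteration for the DT LQR, mirroring the paper's
    exact-solvability assumption: each sweep solves the action
    update (gain K_j) and value update (kernel P_{j+1}) in closed form."""
    n = A.shape[0]
    P = np.zeros((n, n))  # V_0 = 0 initialization
    for _ in range(iters):
        # action update: argmin_u [x'Qx + u'Ru + (Ax+Bu)'P(Ax+Bu)] gives u = -Kx
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # value update: quadratic kernel recursion (Riccati difference equation)
        P = Q + A.T @ P @ (A - B @ K)
    return P, K

P, K = hdp_lqr(A, B, Q, R)
```

On convergence, P is a fixed point of the recursion, i.e. the solution of the discrete algebraic Riccati equation, and the closed-loop matrix A - BK is stable.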
