Publication | Closed Access

Constrained Undiscounted Stochastic Dynamic Programming

109 citations · 16 references · Published 1984

Abstract

In this paper we investigate the computation of optimal policies in constrained discrete stochastic dynamic programming with the average reward as the utility function. The state space and action sets are assumed to be finite. Constraints that are linear functions of the state-action frequencies are allowed. In the general multichain case, an optimal policy will be a randomized nonstationary policy. An algorithm to compute such an optimal policy is presented. Furthermore, sufficient conditions for optimal policies to be stationary are derived. Constrained undiscounted stochastic dynamic programming has many applications, e.g., in multiple-objective Markovian decision models.
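The linear-programming formulation over state-action frequencies that underlies this line of work can be sketched in the simpler unichain case, where a randomized stationary policy suffices (the paper's contribution concerns the general multichain case, where nonstationary policies may be needed). The MDP data below are invented for illustration, not taken from the paper; `scipy.optimize.linprog` solves the frequency LP, and a randomized stationary policy is recovered by normalizing the optimal frequencies within each state.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny constrained average-reward MDP (illustrative data, not from the paper).
# States s in {0, 1}, actions a in {0, 1}.
S, A = 2, 2
# P[s, a, j] = probability of moving from state s to state j under action a.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.1, 0.9]],
])
r = np.array([[1.0, 0.0], [0.0, 2.0]])  # rewards r(s, a)
c = np.array([[0.0, 1.0], [1.0, 0.0]])  # costs c(s, a), constrained below
b = 0.5                                 # budget on the average cost

# Decision variables: state-action frequencies x(s, a), flattened to length S*A.
# Objective: maximize sum_{s,a} x(s,a) r(s,a); linprog minimizes, so negate.
obj = -r.flatten()

# Equality constraints: stationarity of the frequencies (one row per state),
#   sum_a x(j, a) - sum_{s,a} x(s, a) P(j | s, a) = 0,
# plus the normalization sum_{s,a} x(s, a) = 1.
A_eq = np.zeros((S + 1, S * A))
for j in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[j, s * A + a] = (1.0 if s == j else 0.0) - P[s, a, j]
A_eq[S, :] = 1.0
b_eq = np.zeros(S + 1)
b_eq[S] = 1.0

# Linear constraint on the frequencies: average cost <= b.
A_ub = c.flatten()[None, :]
b_ub = np.array([b])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (S * A))
x = res.x.reshape(S, A)

# Recover a (possibly randomized) stationary policy: pi(a | s) is the
# conditional frequency of action a given state s.
policy = x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)
print(policy)
```

The randomization in the recovered policy is exactly what the constraint buys: without the cost constraint, some basic optimal solution of the LP is deterministic, but a binding linear constraint can force the optimal frequencies to mix actions within a state.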
