Publication | Closed Access
Constrained Undiscounted Stochastic Dynamic Programming
Citations: 109
References: 16
Year: 1984
Keywords: Mathematical Programming, Engineering, Stochastic Game, Optimal Policies, Dynamic Programming, Probability Theory, Mechanism Design, Stochastic Dynamics, Markov Decision Process, Average Reward, Dynamic Optimization, Operations Research
In this paper we investigate the computation of optimal policies in constrained discrete stochastic dynamic programming with the average reward as the utility function. The state space and action sets are assumed to be finite. Constraints that are linear functions of the state-action frequencies are allowed. In the general multichain case, an optimal policy will be a randomized nonstationary policy. An algorithm to compute such an optimal policy is presented. Furthermore, sufficient conditions for optimal policies to be stationary are derived. There are many applications of constrained undiscounted stochastic dynamic programming, e.g., in multiple-objective Markovian decision models.
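The abstract's notion of constraints that are linear in the state-action frequencies can be made concrete with the standard frequency linear program for a constrained average-reward MDP. The sketch below is illustrative only: it uses an invented 2-state, 2-action MDP and cost bound, solves the unichain-style LP with SciPy, and recovers a stationary randomized policy from the optimal frequencies; the paper's algorithm for the general multichain case is more involved.

```python
# Hedged sketch: state-action frequency LP for a constrained
# average-reward MDP. The MDP data and the cost bound 0.4 are
# invented for illustration; this is the unichain formulation,
# not the paper's general multichain algorithm.
import numpy as np
from scipy.optimize import linprog

# P[s, a, s'] = transition probability; r, cost are per (s, a) pair.
P = np.zeros((2, 2, 2))
P[0, 0] = [1.0, 0.0]   # state 0, action 0: stay (reward 1, cost 1)
P[0, 1] = [0.0, 1.0]   # state 0, action 1: move to state 1
P[1, 0] = [1.0, 0.0]   # state 1, action 0: move to state 0
P[1, 1] = [0.5, 0.5]   # state 1, action 1: stay w.p. 0.5 (reward 0.5)
r = np.array([[1.0, 0.0], [0.0, 0.5]])
cost = np.array([[1.0, 0.0], [0.0, 0.0]])
cost_bound = 0.4       # constraint: long-run average cost <= 0.4

S, A = r.shape
n = S * A              # one variable x[s, a] per state-action pair

# Flow-balance rows: sum_a x[s,a] - sum_{s',a} P[s',a,s] x[s',a] = 0,
# plus the normalization row sum_{s,a} x[s,a] = 1.
A_eq = np.zeros((S + 1, n))
for s in range(S):
    for sp in range(S):
        for a in range(A):
            A_eq[s, sp * A + a] = (sp == s) - P[sp, a, s]
A_eq[S, :] = 1.0
b_eq = np.append(np.zeros(S), 1.0)

# Linear cost constraint on the frequencies; objective = average reward
# (linprog minimizes, so negate the rewards).
res = linprog(-r.ravel(), A_ub=[cost.ravel()], b_ub=[cost_bound],
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
x = res.x.reshape(S, A)

# Recover a stationary randomized policy pi(a|s) = x[s,a] / sum_a x[s,a].
policy = x / x.sum(axis=1, keepdims=True)
print("average reward:", -res.fun)
print("policy:\n", policy)
```

Note how the constraint forces randomization: the unconstrained optimum would always take action 0 in state 0 (average reward 1, but average cost 1), while the constrained optimum mixes between the two actions in state 0 to keep the average cost at the bound.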