Publication | Closed Access
Performability analysis using semi-Markov reward processes
Citations: 151
References: 17
Year: 1990
Engineering, Markov Decision Processes, Stochastic Analysis, Stochastic Simulation, Markov Chains, Stochastic Processes, Semi-Markov Reward Process, Systems Engineering, Performability Analysis, Stochastic Dynamics, Quantitative Management, Stochastic Systems, Stochastic Petri Nets, Stochastic Networks, Sequential Decision Making, Computer Science, Probability Theory, Markov Decision Process, Queueing Systems, Automated Reasoning, Probabilistic Verification, Markov Reward Process
M.D. Beaudry (1978) proposed a simple method for computing the distribution of performability in a Markov reward process. Two extensions of Beaudry's approach are presented. The authors generalize the method to a semi-Markov reward process by removing the restriction that zero reward may be associated only with absorbing states. The algorithm proceeds by replacing each zero-reward nonabsorbing state with a probabilistic switch; it is therefore related to the elimination of vanishing states from the reachability graph of a generalized stochastic Petri net and to the elimination of fast transient states in a decomposition approach to stiff Markov chains. The use of the approach is illustrated with three applications.
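The "probabilistic switch" idea described in the abstract is the same reduction used to eliminate vanishing states from a GSPN: visits to zero-reward states are folded into the transition probabilities between the remaining states. A minimal sketch of that reduction for a discrete-time chain, not the authors' exact algorithm, partitions the transition matrix into tangible (T, nonzero reward) and vanishing (V, zero reward) blocks and computes P' = P_TT + P_TV (I - P_VV)^{-1} P_VT; the function name and example chain are hypothetical.

```python
import numpy as np

def eliminate_zero_reward_states(P, zero_reward):
    """Reduce a DTMC by eliminating zero-reward ("vanishing") states.

    P           : (n, n) one-step transition probability matrix.
    zero_reward : length-n boolean mask marking zero-reward states.

    Returns the transition matrix over the remaining states, obtained
    by routing every excursion through vanishing states directly to
    its eventual tangible successor (the "probabilistic switch"):
        P' = P_TT + P_TV (I - P_VV)^{-1} P_VT
    """
    zr = np.asarray(zero_reward, dtype=bool)
    T, V = ~zr, zr
    P_TT = P[np.ix_(T, T)]
    P_TV = P[np.ix_(T, V)]
    P_VV = P[np.ix_(V, V)]
    P_VT = P[np.ix_(V, T)]
    # (I - P_VV)^{-1} sums the geometric series over all finite sojourns
    # among vanishing states; it exists when the chain cannot be trapped
    # forever inside the zero-reward subset.
    switch = np.linalg.solve(np.eye(int(V.sum())) - P_VV, P_VT)
    return P_TT + P_TV @ switch

# Hypothetical 3-state example: state 1 carries zero reward.
P = np.array([[0.0, 0.5, 0.5],
              [0.4, 0.0, 0.6],
              [0.0, 0.0, 1.0]])
P_red = eliminate_zero_reward_states(P, [False, True, False])
# Each row of P_red is still a probability distribution
# over the two remaining states.
```

In the semi-Markov setting treated in the paper, the sojourn-time distributions of the surviving states must also be adjusted, since time spent in the eliminated states accrues no reward and is removed from the performability clock; the matrix reduction above captures only the routing part.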