Publication | Closed Access
State learning and mixing in entropy of hidden Markov processes and the Gilbert-Elliott channel
Citations: 20 | References: 9 | Year: 1999
Keywords: Engineering, Channel Capacity Calculations, State Learning, Computational Complexity, Channel Characterization, Statistical Signal Processing, Channel Capacity Estimation, Hidden Markov Model, Stochastic Processes, Hidden Markov, Hidden Markov Processes, Information Theory, Computer Science, Probability Theory, Gilbert-Elliott Channel, Signal Processing, Markov Decision Process, Entropy, Infinite Past, Markov Kernel, Multi-terminal Information Theory
Hidden Markov processes such as the Gilbert-Elliott (1960) channel have an infinite dependency structure. Therefore, entropy and channel capacity calculations require knowledge of the infinite past. In practice, such calculations are often approximated with a finite past. It is commonly assumed that the approximations require an unbounded amount of the past as the memory in the underlying Markov chain increases. We show that this is not necessarily true. We derive an exponentially decreasing upper bound on the accuracy of the finite-past approximation that is much tighter than existing upper bounds when the Markov chain mixes well. We also derive an exponentially decreasing upper bound that applies when the Markov chain does not mix at all. Our methods are demonstrated on the Gilbert-Elliott channel, where we prove that a prescribed finite-past accuracy is quickly reached, independently of the Markovian memory. We conclude that the past can be used either to learn the channel state when the memory is high, or to wait until the states mix when the memory is low. Implications for computing and achieving capacity on the Gilbert-Elliott channel are discussed.
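The finite-past quantity discussed in the abstract is the conditional entropy H(Z_n | Z_1, ..., Z_{n-1}) of the Gilbert-Elliott noise process, which decreases to the entropy rate as n grows. The sketch below is not the paper's method; it is a minimal Monte Carlo illustration, assuming the standard two-state parameterization (crossover probabilities pG, pB in the good/bad state, transition probabilities b = P(good to bad) and g = P(bad to good)), with the channel-state posterior tracked by the usual forward (Bayes) recursion.

```python
import math
import random

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def finite_past_entropy(pG, pB, b, g, n, trials=20000, seed=1):
    """Monte Carlo estimate of H(Z_n | Z_1, ..., Z_{n-1}) for the
    Gilbert-Elliott noise process (illustrative sketch, not the
    paper's bound).  pG/pB: crossover probability in the good/bad
    state; b = P(good -> bad), g = P(bad -> good)."""
    rng = random.Random(seed)
    pi_bad = b / (b + g)                # stationary P(state = bad)
    acc = 0.0
    for _ in range(trials):
        bad = rng.random() < pi_bad     # true hidden state S_1
        q = pi_bad                      # posterior P(S_k = bad | Z^{k-1})
        for _ in range(n - 1):
            # emit a noise bit from the true hidden state
            z = rng.random() < (pB if bad else pG)
            # Bayes update of the posterior given the observed bit
            lB = pB if z else 1.0 - pB
            lG = pG if z else 1.0 - pG
            q = q * lB / (q * lB + (1.0 - q) * lG)
            # propagate posterior and true state through the chain
            q = q * (1.0 - g) + (1.0 - q) * b
            bad = (rng.random() >= g) if bad else (rng.random() < b)
        # entropy of the predictive distribution of the n-th bit
        acc += h2(q * pB + (1.0 - q) * pG)
    return acc / trials
```

Running this for increasing n shows the behavior the abstract describes: the estimate settles near its limit after only a few past symbols, whether the chain mixes quickly (the posterior forgets its start) or slowly (the past bits pin down the channel state).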