Concepedia

Publication | Closed Access

Dispersion Entropy: A Measure for Time-Series Analysis

Citations: 750
References: 20
Year: 2016

TLDR

Entropy measures are powerful for time‑series analysis, but sample entropy is slow for long signals and permutation entropy discards amplitude information. The authors propose dispersion entropy (DE) to quantify time‑series regularity, addressing speed and amplitude‑information limitations of existing entropy measures. DE is computed by mapping amplitudes to dispersion patterns and counting their frequencies; its dependence on signal‑processing parameters was examined with synthetic series and applied to three public datasets. DE detects noise bandwidth and simultaneous frequency‑amplitude changes, outperforms PE in discriminating groups in real datasets, and runs faster than SE and PE.

Abstract

One of the most powerful tools to assess the dynamical characteristics of time series is entropy. Sample entropy (SE), though powerful, is not fast enough, especially for long signals. Permutation entropy (PE), as a broadly used irregularity indicator, considers only the order of the amplitude values, and hence some information regarding the amplitudes may be discarded. To tackle these problems, we introduce a new method, termed dispersion entropy (DE), to quantify the regularity of time series. We gain insight into the dependency of DE on several straightforward signal-processing concepts via a set of synthetic time series. The results show that DE, unlike PE, can detect the noise bandwidth and simultaneous frequency and amplitude changes. We also apply DE to three publicly available real datasets. The simulations on real-valued signals show that DE considerably outperforms PE in discriminating the different groups of each dataset. In addition, the computation time of DE is significantly less than that of SE and PE.
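The computation outlined above (map amplitudes to classes via the normal CDF, extract dispersion patterns, take the Shannon entropy of their frequencies) can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' reference implementation; the parameter defaults (m=2, c=3, d=1) and all names are illustrative.

```python
import math
from collections import Counter

import numpy as np


def dispersion_entropy(x, m=2, c=3, d=1):
    """Sketch of dispersion entropy (DE).

    m: embedding dimension, c: number of classes, d: time delay.
    Parameter names and defaults are illustrative, not the paper's notation.
    """
    x = np.asarray(x, dtype=float)
    # 1. Map amplitudes into (0, 1) with the normal CDF of the signal itself.
    mu, sigma = x.mean(), x.std()
    y = np.array([0.5 * (1 + math.erf((v - mu) / (sigma * math.sqrt(2)))) for v in x])
    # 2. Quantise each value into one of c classes, z_i in {1, ..., c}.
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # 3. Slide an embedding window of length m (delay d) to collect dispersion patterns.
    n = len(z) - (m - 1) * d
    patterns = [tuple(z[i:i + m * d:d]) for i in range(n)]
    # 4. Shannon entropy of the relative frequencies of the observed patterns
    #    (at most c**m distinct patterns are possible).
    p = np.array(list(Counter(patterns).values())) / n
    return float(-(p * np.log(p)).sum())
```

As a sanity check, white noise should score near the maximum ln(c^m), since all dispersion patterns are roughly equally likely, while a regular signal such as a sine wave scores lower.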
