Concepedia

Publication | Open Access

Temporal Fusion Transformers for Interpretable Multi-horizon Time Series Forecasting

Citations: 82 | References: 0 | Year: 2019

Abstract

Multi-horizon forecasting problems often contain a complex mix of inputs -- including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed historically -- without any prior information on how they interact with the target. While several deep learning models have been proposed for multi-step prediction, they typically comprise black-box models which do not account for the full range of inputs present in common scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) -- a novel attention-based architecture which combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, the TFT utilizes recurrent layers for local processing and interpretable self-attention layers for learning long-term dependencies. The TFT also uses specialized components for the judicious selection of relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of regimes. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and showcase three practical interpretability use-cases of TFT.
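To make the gating idea in the abstract concrete, below is a minimal PyTorch sketch of a gated residual block in the spirit of the TFT's gated residual network: a GLU-style sigmoid gate scales a nonlinear transformation of the input, and a residual connection with layer normalization lets the block reduce to a near-identity mapping when the extra processing is unnecessary. This is an illustrative sketch rather than the paper's exact formulation; the class and parameter names (GatedResidualNetwork, d_model, d_hidden) are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualNetwork(nn.Module):
    """Illustrative gated residual block (not the paper's exact spec).

    A sigmoid gate (GLU-style) controls how much of the transformed
    signal passes through; the residual connection plus LayerNorm lets
    the block fall back to (near) identity when gating is closed.
    """

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)
        # One linear produces both GLU halves: value and gate.
        self.glu = nn.Linear(d_model, 2 * d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.fc2(F.elu(self.fc1(x)))          # nonlinear transform
        value, gate = self.glu(h).chunk(2, dim=-1)  # split GLU halves
        return self.norm(x + value * torch.sigmoid(gate))  # gated residual

# Usage: gate a batch of hidden states of width 32.
block = GatedResidualNetwork(d_model=32, d_hidden=64)
out = block(torch.randn(8, 32))  # shape preserved: (8, 32)
```

In the full architecture, gating blocks of this kind appear throughout the network, which is one way the model can suppress components that are not useful for a given dataset or regime.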