Publication | Open Access

Transformer-XL: Attentive Language Models beyond a Fixed-Length Context

Citations: 3.1K | References: 49 | Year: 2019

TLDR

Transformers can learn long-term dependencies but are limited by a fixed-length context in language modeling. The authors propose Transformer-XL, a neural architecture that learns dependencies beyond a fixed length while preserving temporal coherence, combining a segment-level recurrence mechanism with a novel positional encoding scheme; code and pretrained models are released in TensorFlow and PyTorch. Transformer-XL captures longer-term dependencies and resolves context fragmentation, learning dependencies 80% longer than RNNs and 450% longer than vanilla Transformers, runs up to 1,800+ times faster than vanilla Transformers during evaluation, achieves state-of-the-art bpc/perplexity on multiple benchmarks, and can generate coherent long-form text.

Abstract

Transformers have the potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both TensorFlow and PyTorch.
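To make the segment-level recurrence mechanism concrete, below is a minimal PyTorch sketch, not the authors' released implementation: hidden states from the previous segment are cached as a memory, detached from the computation graph, and prepended to the keys and values of the current segment so attention can reach beyond the segment boundary. The class and variable names (RecurrentSegmentLayer, mem_len, and so on) are hypothetical, and the paper's relative positional encodings and causal masking are omitted for brevity.

# Minimal sketch of segment-level recurrence (illustrative only; names are hypothetical,
# and the paper's relative positional encodings and causal masking are omitted).
import torch
import torch.nn as nn

class RecurrentSegmentLayer(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, seg: torch.Tensor, mem: torch.Tensor) -> torch.Tensor:
        # seg: (batch, seg_len, d_model); mem: (batch, mem_len, d_model), cached states
        context = torch.cat([mem.detach(), seg], dim=1)   # no gradient flows into the memory
        out, _ = self.attn(query=seg, key=context, value=context)
        seg = self.norm1(seg + out)
        return self.norm2(seg + self.ff(seg))

# Slide over a long sequence segment by segment, carrying the cached memory forward.
layer, mem_len, d_model = RecurrentSegmentLayer(64, 4), 32, 64
mem = torch.zeros(1, mem_len, d_model)
for seg in torch.randn(1, 256, d_model).split(64, dim=1):   # four segments of length 64
    out = layer(seg, mem)
    mem = out[:, -mem_len:].detach()                         # reuse as memory for the next segment

In the full model each layer caches its own memory, and the cached length controls how far back attention can reach; this, together with the relative positional encodings, is what lets Transformer-XL capture dependencies well beyond a single fixed-length segment.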
