Concepedia

TLDR

Current state-of-the-art sequence transduction relies on complex recurrent or convolutional encoder-decoder networks that use attention to connect the encoder and decoder. The authors introduce the Transformer, a simpler architecture that dispenses with recurrence and convolutions entirely, building each layer from multi-head self-attention and position-wise feed-forward sub-layers. Experiments show that the Transformer outperforms existing models on WMT 2014 English-to-German and English-to-French translation, reaching 28.4 and 41.8 BLEU respectively, while training faster and generalizing to English constituency parsing.
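The mechanism the TLDR names is the paper's scaled dot-product attention: each query is compared against all keys, and the resulting softmax weights mix the values. Below is a minimal NumPy sketch; the function and variable names and the toy shapes are ours for illustration, and multi-head projections, masking, and dropout are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (seq_q, d_k), K: (seq_k, d_k), V: (seq_k, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_q, seq_k) similarity scores
    weights = softmax(scores, axis=-1)   # each query attends over all keys
    return weights @ V                   # (seq_q, d_v) weighted sum of values

# Toy usage: 4-token sequence, width 8; self-attention uses X as Q, K, and V.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (4, 8)
```

Because every position attends to every other position in a single matrix product, the whole sequence is processed in parallel, which is where the training-speed advantage over recurrent models comes from.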

Abstract

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
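To make "dispensing with recurrence and convolutions entirely" concrete, here is a single-head sketch of one encoder layer: a self-attention sub-layer followed by a position-wise feed-forward sub-layer, each wrapped in the paper's residual connection plus layer normalization. This is a simplified illustration, not the paper's implementation: multi-head attention, dropout, positional encodings, and the learned layer-norm gain and bias are omitted, and the parameter names and toy dimensions (d_model = 8, d_ff = 32 instead of the paper's 512 and 2048) are ours.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's features to zero mean, unit variance.
    mu = x.mean(-1, keepdims=True)
    sigma = x.std(-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def attention(X, Wq, Wk, Wv):
    # Single-head self-attention: softmax(Q K^T / sqrt(d_k)) V.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)
    return w @ V

def encoder_layer(X, p):
    # Sub-layer 1: self-attention with residual connection and layer norm.
    X = layer_norm(X + attention(X, p["Wq"], p["Wk"], p["Wv"]))
    # Sub-layer 2: position-wise feed-forward (two linear maps with ReLU),
    # again with residual connection and layer norm.
    h = np.maximum(0, X @ p["W1"] + p["b1"])
    return layer_norm(X + h @ p["W2"] + p["b2"])

# Toy usage: 4 tokens, d_model = 8, d_ff = 32.
rng = np.random.default_rng(0)
d, f = 8, 32
shapes = {"Wq": (d, d), "Wk": (d, d), "Wv": (d, d),
          "W1": (d, f), "b1": (f,), "W2": (f, d), "b2": (d,)}
params = {k: rng.normal(size=s, scale=0.1) for k, s in shapes.items()}
print(encoder_layer(rng.normal(size=(4, d)), params).shape)  # (4, 8)
```

The paper stacks six such layers in both the encoder and decoder; the decoder adds a masked self-attention sub-layer and an attention sub-layer over the encoder output.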
