Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
Recurrent neural networks, long short-term memory [13] and gated recurrent [7] neural networks in particular, have been firmly established as state-of-the-art approaches in sequence modeling and transduction problems such as language modeling and machine translation [35, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [38, 24, 15]. Recurrent models typically factor computation along the symbol positions of the input and output sequences, generating a sequence of hidden states $h_t$ as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention.
Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 35]. Here, the encoder maps an input sequence of symbol representations $(x_1, \ldots, x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, \ldots, z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1, \ldots, y_m)$ of symbols one element at a time. At each step the model is auto-regressive [10], consuming the previously generated symbols as additional input when generating the next.
Encoder: The encoder is composed of a stack of $N = 6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $d_{\text{model}} = 512$. Decoder: The decoder is also composed of a stack of $N = 6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. As in the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
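The following is a minimal NumPy sketch of the residual sub-layer wrapper described above; the function names and the simplified layer normalization (no learned gain or bias) are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's feature vector to zero mean, unit variance.
    # (Layer normalization [1] also has learned gain and bias parameters,
    # omitted here for brevity.)
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_sublayer(x, sublayer):
    # The output of each sub-layer is LayerNorm(x + Sublayer(x)).
    return layer_norm(x + sublayer(x))
```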
We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values. In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$
We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
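As a concrete illustration, here is a minimal NumPy sketch of the formula above; the function names, the unbatched 2-D shapes, and the use of a large negative constant in place of $-\infty$ for masking are assumptions of this sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (..., n_q, d_k), K: (..., n_k, d_k), V: (..., n_k, d_v).
    d_k = Q.shape[-1]
    scores = Q @ np.swapaxes(K, -1, -2) / np.sqrt(d_k)  # QK^T / sqrt(d_k)
    if mask is not None:
        # Illegal connections get a large negative score, i.e. ~zero weight.
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ V
```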
Instead of performing a single attention function with $d_{\text{model}}$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values:
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^O, \quad \text{where } \mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$$
In this work we employ $h = 8$ parallel attention layers, or heads. For each of these we use $d_k = d_v = d_{\text{model}}/h = 64$. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
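Building on the previous sketch, a hedged illustration of the multi-head computation; the explicit projection matrices passed as arguments and the single-sequence (unbatched) shapes are simplifications of this sketch, not the paper's API.

```python
def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o, h=8):
    # Q, K, V: (n, d_model); W_q, W_k, W_v, W_o: (d_model, d_model).
    d_model = Q.shape[-1]
    d_k = d_model // h  # with h = 8 and d_model = 512, d_k = d_v = 64

    def split_heads(x):
        # (n, d_model) -> (h, n, d_k): one slice per attention head.
        return x.reshape(x.shape[0], h, d_k).transpose(1, 0, 2)

    q = split_heads(Q @ W_q)
    k = split_heads(K @ W_k)
    v = split_heads(V @ W_v)
    heads = scaled_dot_product_attention(q, k, v)         # (h, n_q, d_k)
    concat = heads.transpose(1, 0, 2).reshape(-1, d_model)
    return concat @ W_o                                   # final projection W^O
```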
The Transformer uses multi-head attention in three different ways:
• In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence, mimicking the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9].
• The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections; a mask sketch follows this list.
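A small sketch of the decoder-side mask, usable with the scaled_dot_product_attention sketch above; the boolean-mask convention (True means attention is allowed) is an assumption of these sketches.

```python
def causal_mask(n):
    # True where attention is allowed: position i may attend to j <= i only.
    return np.tril(np.ones((n, n), dtype=bool))

# Example: decoder self-attention over a sequence of length n.
# out = scaled_dot_product_attention(Q, K, V, mask=causal_mask(Q.shape[-2]))
```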
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between:
$$\mathrm{FFN}(x) = \max(0,\, xW_1 + b_1)W_2 + b_2$$
While the linear transformations are the same across different positions, they use different parameters from layer to layer. The dimensionality of input and output is $d_{\text{model}} = 512$, and the inner-layer has dimensionality $d_{ff} = 2048$.
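The FFN formula translates directly into code; a sketch under the same unbatched conventions as the earlier sketches:

```python
def position_wise_ffn(x, W1, b1, W2, b2):
    # x: (n, d_model); W1: (d_model, d_ff); W2: (d_ff, d_model).
    # The same two linear maps are applied independently at every position.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2  # max(0, xW1 + b1)W2 + b2
```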
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $d_{\text{model}}$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30]. In the embedding layers, we multiply those weights by $\sqrt{d_{\text{model}}}$.
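A brief sketch of the weight sharing and scaling; the helper names are illustrative assumptions.

```python
def embed(tokens, E, d_model):
    # E: (vocab, d_model). Embedding weights are scaled by sqrt(d_model).
    return E[tokens] * np.sqrt(d_model)

def output_logits(x, E):
    # The pre-softmax linear transformation reuses the embedding matrix E.
    return x @ E.T
```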
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d_{\text{model}}$ as the embeddings, so that the two can be summed. In this work, we use sine and cosine functions of different frequencies:
$$PE_{(pos, 2i)} = \sin\!\left(pos/10000^{2i/d_{\text{model}}}\right), \qquad PE_{(pos, 2i+1)} = \cos\!\left(pos/10000^{2i/d_{\text{model}}}\right)$$
where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid, with wavelengths forming a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
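A sketch of the sinusoidal encoding, assuming an even $d_{\text{model}}$:

```python
def positional_encoding(n_positions, d_model):
    # PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...).
    pos = np.arange(n_positions)[:, None]          # (n_positions, 1)
    i = np.arange(0, d_model, 2)[None, :]          # even dimension indices
    angles = pos / np.power(10000.0, i / d_model)  # (n_positions, d_model/2)
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```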
In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations $(x_1, \ldots, x_n)$ to another sequence of equal length $(z_1, \ldots, z_n)$, with $x_i, z_i \in \mathbb{R}^d$, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata: the total computational complexity per layer, the amount of computation that can be parallelized (as measured by the minimum number of sequential operations required), and the path length between long-range dependencies in the network. A self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires $O(n)$ sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length $n$ is smaller than the representation dimensionality $d$, which is most often the case with sentence representations used by state-of-the-art models in machine translation.
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [38]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models, step time was 1.0 seconds; the big models were trained for 300,000 steps (3.5 days).
We used the Adam optimizer [20] with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 10^{-9}$. We varied the learning rate over the course of training, according to the formula:
$$lrate = d_{\text{model}}^{-0.5} \cdot \min\!\left(step\_num^{-0.5},\ step\_num \cdot warmup\_steps^{-1.5}\right)$$
This corresponds to increasing the learning rate linearly for the first $warmup\_steps$ training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used $warmup\_steps = 4000$.
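The schedule is a one-liner in code; a minimal sketch:

```python
def transformer_lrate(step, d_model=512, warmup_steps=4000):
    # Linear warmup for warmup_steps steps, then inverse-square-root decay.
    step = max(step, 1)  # guard against division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```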
We employ three types of regularization during training. Residual Dropout: We apply dropout [33] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of $P_{drop} = 0.1$. Label Smoothing: During training, we employed label smoothing of value $\epsilon_{ls} = 0.1$ [36]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
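For illustration, a minimal sketch of label smoothing over one-hot targets; the uniform redistribution of mass over all classes is the standard formulation and an assumption of this sketch.

```python
def smooth_labels(one_hot, eps=0.1):
    # Keep 1 - eps on the true class, spread eps uniformly over all classes.
    n_classes = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / n_classes
```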
On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models. On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.8, outperforming all previously published single models at less than 1/4 the training cost of the previous state-of-the-art model.
To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013. Varying the number of attention heads while keeping the amount of computation constant, we found that single-head attention is 0.9 BLEU worse than the best setting, while quality also drops off with too many heads. We further observed that reducing the attention key size $d_k$ hurts model quality, that bigger models are better, and that dropout is very helpful in avoiding over-fitting. Replacing our sinusoidal positional encoding with learned positional embeddings [9] yields nearly identical results to the base model.
To evaluate if the Transformer can generalize to other tasks we performed experiments on English constituency parsing. This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes [37]. We trained a 4-layer Transformer with $d_{\text{model}} = 1024$ on the Wall Street Journal (WSJ) portion of the Penn Treebank [25], about 40K training sentences. Despite the lack of task-specific tuning, our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar [8].
In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles. We are excited about the future of attention-based models and plan to apply them to other tasks.