The Transformer has been on a lot of
people's minds over the last five years.
This post presents an annotated version of the paper in the
form of a line-by-line implementation. It reorders and deletes
some sections from the original paper and adds comments
throughout. This document itself is a working notebook, and should
be a completely usable implementation.
Code is available
here.
import os
from os.path import exists
import torch
import torch.nn as nn
from torch.nn.functional import log_softmax, pad
import math
import copy
import time
from torch.optim.lr_scheduler import LambdaLR
import pandas as pd
import altair as alt
from torchtext.data.functional import to_map_style_dataset
from torch.utils.data import DataLoader
from torchtext.vocab import build_vocab_from_iterator
import torchtext.datasets as datasets
import spacy
import GPUtil
import warnings
from torch.utils.data.distributed import DistributedSampler
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
# Set to False to skip notebook execution (e.g. for debugging)
warnings.filterwarnings("ignore")
RUN_EXAMPLES = True
My comments are blockquoted. The main text is all from the paper itself.
Background
The goal of reducing sequential computation also forms the
foundation of the Extended Neural GPU, ByteNet and ConvS2S, all of
which use convolutional neural networks as their basic building block,
computing hidden representations in parallel for all input and
output positions. In these models, the number of operations required
to relate signals from two arbitrary input or output positions grows
with the distance between positions, linearly for ConvS2S and
logarithmically for ByteNet. This makes it more difficult to learn
dependencies between distant positions. In the Transformer this is
reduced to a constant number of operations, albeit at the cost of
reduced effective resolution due to averaging attention-weighted
positions, an effect we counteract with Multi-Head Attention.
Self-attention, sometimes called intra-attention, is an attention
mechanism relating different positions of a single sequence in order
to compute a representation of the sequence. Self-attention has been
used successfully in a variety of tasks including reading
comprehension, abstractive summarization, textual entailment and
learning task-independent sentence representations. End-to-end
memory networks are based on a recurrent attention mechanism instead
of sequence-aligned recurrence and have been shown to perform well on
simple-language question answering and language modeling tasks.
To the best of our knowledge, however, the Transformer is the first
transduction model relying entirely on self-attention to compute
representations of its input and output without using sequence-aligned
RNNs or convolution.
Part 1: Model Architecture
Model Architecture
Most competitive neural sequence transduction models have an
encoder-decoder structure
(cite). Here, the encoder maps an
input sequence of symbol representations $(x_1, \ldots, x_n)$ to a
sequence of continuous representations $\mathbf{z} = (z_1, \ldots, z_n)$.
Given $\mathbf{z}$, the decoder then generates an output
sequence $(y_1, \ldots, y_m)$ of symbols one element at a time. At each
step the model is auto-regressive
(cite), consuming the previously
generated symbols as additional input when generating the next.
    # forward method of the EncoderDecoder module (full module sketched below)
    def forward(self, src, tgt, src_mask, tgt_mask):
        "Take in and process masked src and target sequences."
        return self.decode(self.encode(src, src_mask), src_mask, tgt, tgt_mask)
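Only the forward method appears above; the rest of the EncoderDecoder module is not shown here. The following is a sketch of the full module, reconstructed from how make_model constructs and calls it later in this post; treat it as a sketch rather than the canonical listing.

class EncoderDecoder(nn.Module):
    """
    A standard Encoder-Decoder architecture. Base for this and many
    other models.
    """

    def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = src_embed
        self.tgt_embed = tgt_embed
        self.generator = generator

    def forward(self, src, tgt, src_mask, tgt_mask):
        "Take in and process masked src and target sequences."
        return self.decode(self.encode(src, src_mask), src_mask, tgt, tgt_mask)

    def encode(self, src, src_mask):
        return self.encoder(self.src_embed(src), src_mask)

    def decode(self, memory, src_mask, tgt, tgt_mask):
        return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)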
The Transformer follows this overall architecture using stacked
self-attention and point-wise, fully connected layers for both the
encoder and decoder, shown in the left and right halves of Figure 1,
respectively.
Encoder and Decoder Stacks
Encoder
The encoder is composed of a stack of $N = 6$ identical layers.
def clones(module, N):
    "Produce N identical layers."
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])
class Encoder(nn.Module):
    "Core encoder is a stack of N layers"
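Only the class header of the Encoder is shown above, and the LayerNorm module it relies on does not appear in this post as extracted. Here is a sketch of both, consistent with how they are used by the layers that follow:

class LayerNorm(nn.Module):
    "Construct a layernorm module (see citation for details)."

    def __init__(self, features, eps=1e-6):
        super(LayerNorm, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(features))
        self.b_2 = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2


class Encoder(nn.Module):
    "Core encoder is a stack of N layers"

    def __init__(self, layer, N):
        super(Encoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, mask):
        "Pass the input (and mask) through each layer in turn."
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)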
That is, the output of each sub-layer is
$\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where
$\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer
itself. We apply dropout (cite) to the output of each sub-layer,
before it is added to the sub-layer input and normalized.
To facilitate these residual connections, all sub-layers in the
model, as well as the embedding layers, produce outputs of dimension
$d_{\text{model}} = 512$.
class SublayerConnection(nn.Module):
    """
    A residual connection followed by a layer norm.
    Note for code simplicity the norm is first as opposed to last.
    """

    def __init__(self, size, dropout):
        super(SublayerConnection, self).__init__()
        self.norm = LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        "Apply residual connection to any sublayer with the same size."
        return x + self.dropout(sublayer(self.norm(x)))
Each layer has two sub-layers. The first is a multi-head
self-attention mechanism, and the second is a simple, position-wise
fully connected feed-forward network.
class EncoderLayer(nn.Module):
    "Encoder is made up of self-attn and feed forward (defined below)"
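The constructor and forward pass of EncoderLayer are not shown above. Here is a sketch that wires the two sub-layers through SublayerConnection as described, with a signature matching the EncoderLayer(d_model, c(attn), c(ff), dropout) call in make_model later in this post:

class EncoderLayer(nn.Module):
    "Encoder is made up of self-attn and feed forward (defined below)"

    def __init__(self, size, self_attn, feed_forward, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = self_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 2)
        self.size = size

    def forward(self, x, mask):
        "Follow Figure 1 (left) for connections."
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
        return self.sublayer[1](x, self.feed_forward)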
Decoder

The decoder is also composed of a stack of $N = 6$ identical layers.

class Decoder(nn.Module):
    "Generic N layer decoder with masking."

    # __init__ mirrors the Encoder above:
    #   self.layers = clones(layer, N); self.norm = LayerNorm(layer.size)
    def forward(self, x, memory, src_mask, tgt_mask):
        for layer in self.layers:
            x = layer(x, memory, src_mask, tgt_mask)
        return self.norm(x)
In addition to the two sub-layers in each encoder layer, the decoder
inserts a third sub-layer, which performs multi-head attention over
the output of the encoder stack. Similar to the encoder, we employ
residual connections around each of the sub-layers, followed by
layer normalization.
class DecoderLayer(nn.Module):
    "Decoder is made of self-attn, src-attn, and feed forward (defined below)"

    def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
        super(DecoderLayer, self).__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 3)

    def forward(self, x, memory, src_mask, tgt_mask):
        "Follow Figure 1 (right) for connections."
        m = memory
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
        x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
        return self.sublayer[2](x, self.feed_forward)
We also modify the self-attention sub-layer in the decoder stack to
prevent positions from attending to subsequent positions. This
masking, combined with the fact that the output embeddings are offset
by one position, ensures that the predictions for position $i$ can
depend only on the known outputs at positions less than $i$.
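The subsequent_mask helper used by the visualization below (and by the Batch object in Part 2) is not shown in this post as extracted. A sketch that builds the upper-triangular mask just described:

def subsequent_mask(size):
    "Mask out subsequent positions."
    attn_shape = (1, size, size)
    subsequent_mask = torch.triu(torch.ones(attn_shape), diagonal=1).type(
        torch.uint8
    )
    return subsequent_mask == 0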
Below the attention mask shows the position each tgt word (row) is
allowed to look at (column). Words are blocked from attending to
future words during training.
def example_mask():
    LS_data = pd.concat(
        [
            pd.DataFrame(
                {
                    "Subsequent Mask": subsequent_mask(20)[0][x, y].flatten(),
                    "Window": y,
                    "Masking": x,
                }
            )
            for y in range(20)
            for x in range(20)
        ]
    )
Attention

An attention function can be described as mapping a query and a set
of key-value pairs to an output, where the query, keys, values, and
output are all vectors. The output is computed as a weighted sum of
the values, where the weight assigned to each value is computed by a
compatibility function of the query with the corresponding key.
We call our particular attention "Scaled Dot-Product Attention".
The input consists of queries and keys of dimension $d_k$, and
values of dimension $d_v$. We compute the dot products of the query
with all keys, divide each by $\sqrt{d_k}$, and apply a softmax
function to obtain the weights on the values.
In practice, we compute the attention function on a set of queries
simultaneously, packed together into a matrix $Q$. The keys and
values are also packed together into matrices $K$ and $V$. We
compute the matrix of outputs as:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$
The two most commonly used attention functions are additive
attention (cite), and dot-product
(multiplicative) attention. Dot-product attention is identical to
our algorithm, except for the scaling factor of
$\frac{1}{\sqrt{d_k}}$. Additive attention computes the
compatibility function using a feed-forward network with a single
hidden layer. While the two are similar in theoretical complexity,
dot-product attention is much faster and more space-efficient in
practice, since it can be implemented using highly optimized matrix
multiplication code.
While for small values of $d_k$ the two mechanisms perform
similarly, additive attention outperforms dot product attention
without scaling for larger values of $d_k$
(cite). We suspect that for
large values of $d_k$, the dot products grow large in magnitude,
pushing the softmax function into regions where it has extremely
small gradients (To illustrate why the dot products get large,
assume that the components of $q$ and $k$ are independent random
variables with mean $0$ and variance $1$. Then their dot product,
$q \cdot k = \sum_{i=1}^{d_k} q_i k_i$, has mean $0$ and variance
$d_k$.). To counteract this effect, we scale the dot products by
$\frac{1}{\sqrt{d_k}}$.
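The attention helper that MultiHeadedAttention calls below is not shown in this post as extracted. A sketch implementing the scaled dot-product formula above, where a masked_fill value of -1e9 stands in for $-\infty$:

def attention(query, key, value, mask=None, dropout=None):
    "Compute 'Scaled Dot Product Attention'"
    d_k = query.size(-1)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, -1e9)
    p_attn = scores.softmax(dim=-1)
    if dropout is not None:
        p_attn = dropout(p_attn)
    return torch.matmul(p_attn, value), p_attn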
Multi-Head Attention

Multi-head attention allows the model to jointly attend to
information from different representation subspaces at different
positions. With a single attention head, averaging inhibits this.

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^O$$

$$\text{where } \mathrm{head}_i = \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)$$

Where the projections are parameter matrices
$W^Q_i \in \mathbb{R}^{d_{\text{model}} \times d_k}$,
$W^K_i \in \mathbb{R}^{d_{\text{model}} \times d_k}$,
$W^V_i \in \mathbb{R}^{d_{\text{model}} \times d_v}$ and
$W^O \in \mathbb{R}^{hd_v \times d_{\text{model}}}$.
In this work we employ $h = 8$ parallel attention layers, or
heads. For each of these we use $d_k = d_v = d_{\text{model}}/h = 64$. Due
to the reduced dimension of each head, the total computational cost
is similar to that of single-head attention with full
dimensionality.
class MultiHeadedAttention(nn.Module):
    def __init__(self, h, d_model, dropout=0.1):
        "Take in model size and number of heads."
        super(MultiHeadedAttention, self).__init__()
        assert d_model % h == 0
        # We assume d_v always equals d_k
        self.d_k = d_model // h
        self.h = h
        self.linears = clones(nn.Linear(d_model, d_model), 4)
        self.attn = None
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, query, key, value, mask=None):
        "Implements Figure 2"
        if mask is not None:
            # Same mask applied to all h heads.
            mask = mask.unsqueeze(1)
        nbatches = query.size(0)

        # 1) Do all the linear projections in batch from d_model => h x d_k
        query, key, value = [
            lin(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
            for lin, x in zip(self.linears, (query, key, value))
        ]

        # 2) Apply attention on all the projected vectors in batch.
        x, self.attn = attention(
            query, key, value, mask=mask, dropout=self.dropout
        )

        # 3) "Concat" using a view and apply a final linear.
        x = (
            x.transpose(1, 2)
            .contiguous()
            .view(nbatches, -1, self.h * self.d_k)
        )
        del query
        del key
        del value
        return self.linears[-1](x)
Applications of Attention in our Model
The Transformer uses multi-head attention in three different ways:
In "encoder-decoder attention" layers, the queries come from the
previous decoder layer, and the memory keys and values come from the
output of the encoder. This allows every position in the decoder to
attend over all positions in the input sequence. This mimics the
typical encoder-decoder attention mechanisms in sequence-to-sequence
models such as (cite).
The encoder contains self-attention layers. In a self-attention
layer all of the keys, values and queries come from the same place,
in this case, the output of the previous layer in the encoder. Each
position in the encoder can attend to all positions in the previous
layer of the encoder.
Similarly, self-attention layers in the decoder allow each
position in the decoder to attend to all positions in the decoder up
to and including that position. We need to prevent leftward
information flow in the decoder to preserve the auto-regressive
property. We implement this inside of scaled dot-product attention
by masking out (setting to $-\infty$) all values in the input of the
softmax which correspond to illegal connections.
Position-wise Feed-Forward Networks
In addition to attention sub-layers, each of the layers in our
encoder and decoder contains a fully connected feed-forward network,
which is applied to each position separately and identically. This
consists of two linear transformations with a ReLU activation in
between.

$$\mathrm{FFN}(x) = \max(0, xW_1 + b_1)W_2 + b_2$$
While the linear transformations are the same across different
positions, they use different parameters from layer to
layer. Another way of describing this is as two convolutions with
kernel size 1. The dimensionality of input and output is
$d_{\text{model}} = 512$, and the inner-layer has dimensionality
$d_{ff} = 2048$.
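The PositionwiseFeedForward module referenced by make_model below is not shown in this post as extracted; a sketch implementing the FFN equation above:

class PositionwiseFeedForward(nn.Module):
    "Implements FFN equation."

    def __init__(self, d_model, d_ff, dropout=0.1):
        super(PositionwiseFeedForward, self).__init__()
        self.w_1 = nn.Linear(d_model, d_ff)
        self.w_2 = nn.Linear(d_ff, d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return self.w_2(self.dropout(self.w_1(x).relu()))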
Embeddings and Softmax

Similarly to other sequence transduction models, we use learned
embeddings to convert the input tokens and output tokens to vectors
of dimension $d_{\text{model}}$. We also use the usual learned
linear transformation and softmax function to convert the decoder
output to predicted next-token probabilities. In our model, we
share the same weight matrix between the two embedding layers and
the pre-softmax linear transformation, similar to
(cite). In the embedding layers,
we multiply those weights by $\sqrt{d_{\text{model}}}$.
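Neither the Embeddings module nor the Generator (the linear + softmax generation step) appears in this post as extracted, though both are constructed in make_model below. A sketch of each, consistent with how they are used:

class Embeddings(nn.Module):
    def __init__(self, d_model, vocab):
        super(Embeddings, self).__init__()
        self.lut = nn.Embedding(vocab, d_model)
        self.d_model = d_model

    def forward(self, x):
        # Scale the embedding weights by sqrt(d_model), as described above.
        return self.lut(x) * math.sqrt(self.d_model)


class Generator(nn.Module):
    "Define standard linear + softmax generation step."

    def __init__(self, d_model, vocab):
        super(Generator, self).__init__()
        self.proj = nn.Linear(d_model, vocab)

    def forward(self, x):
        return log_softmax(self.proj(x), dim=-1)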
Positional Encoding

Since our model contains no recurrence and no convolution, in order
for the model to make use of the order of the sequence, we must
inject some information about the relative or absolute position of
the tokens in the sequence. To this end, we add "positional
encodings" to the input embeddings at the bottoms of the encoder and
decoder stacks. The positional encodings have the same dimension
$d_{\text{model}}$ as the embeddings, so that the two can be summed.
There are many choices of positional encodings, learned and fixed
(cite).
In this work, we use sine and cosine functions of different frequencies:

$$PE_{(pos, 2i)} = \sin(pos / 10000^{2i / d_{\text{model}}})$$

$$PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i / d_{\text{model}}})$$

where $pos$ is the position and $i$ is the dimension. That is, each
dimension of the positional encoding corresponds to a sinusoid. The
wavelengths form a geometric progression from $2\pi$ to
$10000 \cdot 2\pi$. We chose this function because we hypothesized it would
allow the model to easily learn to attend by relative positions,
since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a
linear function of $PE_{pos}$.
In addition, we apply dropout to the sums of the embeddings and the
positional encodings in both the encoder and decoder stacks. For
the base model, we use a rate of $P_{drop} = 0.1$.
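The PositionalEncoding module itself is not shown in this post as extracted. A sketch that precomputes the sinusoids above once and adds them (with dropout) to the embeddings:

class PositionalEncoding(nn.Module):
    "Implement the PE function."

    def __init__(self, d_model, dropout, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)

        # Compute the positional encodings once in log space.
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model)
        )
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        self.register_buffer("pe", pe)

    def forward(self, x):
        x = x + self.pe[:, : x.size(1)].requires_grad_(False)
        return self.dropout(x)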
We also experimented with using learned positional embeddings
(cite) instead, and found
that the two versions produced nearly identical results. We chose
the sinusoidal version because it may allow the model to extrapolate
to sequence lengths longer than the ones encountered during
training.
Full Model
Here we define a function from hyperparameters to a full model.
def make_model(
    src_vocab, tgt_vocab, N=6, d_model=512, d_ff=2048, h=8, dropout=0.1
):
    "Helper: Construct a model from hyperparameters."
    c = copy.deepcopy
    attn = MultiHeadedAttention(h, d_model)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    position = PositionalEncoding(d_model, dropout)
    model = EncoderDecoder(
        Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
        Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),
        nn.Sequential(Embeddings(d_model, src_vocab), c(position)),
        nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),
        Generator(d_model, tgt_vocab),
    )

    # This was important from their code.
    # Initialize parameters with Glorot / fan_avg.
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    return model
Inference:
Here we make a forward step to generate a prediction with the
model. We try to use our Transformer to memorize the input. As you
will see, the output is essentially random because the model is not
yet trained. In the next tutorial we will build the training
function and try to train our model to memorize the numbers
from 1 to 10.
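The inference_test function called by run_tests below is not shown in this post as extracted. A sketch that builds a small untrained model and greedily decodes from it, consistent with the example outputs that follow:

def inference_test():
    test_model = make_model(11, 11, 2)
    test_model.eval()
    src = torch.LongTensor([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])
    src_mask = torch.ones(1, 1, 10)

    memory = test_model.encode(src, src_mask)
    ys = torch.zeros(1, 1).type_as(src)

    for i in range(9):
        out = test_model.decode(
            memory, src_mask, ys, subsequent_mask(ys.size(1)).type_as(src.data)
        )
        prob = test_model.generator(out[:, -1])
        _, next_word = torch.max(prob, dim=1)
        next_word = next_word.data[0]
        ys = torch.cat(
            [ys, torch.empty(1, 1).type_as(src.data).fill_(next_word)], dim=1
        )

    print("Example Untrained Model Prediction:", ys)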
def run_tests():
    for _ in range(10):
        inference_test()
show_example(run_tests)
Example Untrained Model Prediction: tensor([[ 0, 10, 0, 10, 0, 0, 0, 0, 0, 10]])
Example Untrained Model Prediction: tensor([[ 0, 8, 1, 10, 0, 8, 1, 10, 0, 8]])
Example Untrained Model Prediction: tensor([[ 0, 9, 0, 10, 4, 5, 3, 2, 4, 3]])
Example Untrained Model Prediction: tensor([[0, 5, 5, 5, 5, 5, 5, 5, 5, 5]])
Example Untrained Model Prediction: tensor([[0, 2, 8, 3, 8, 5, 0, 4, 0, 4]])
Example Untrained Model Prediction: tensor([[ 0, 10, 3, 10, 2, 9, 0, 3, 10, 3]])
Example Untrained Model Prediction: tensor([[0, 3, 3, 3, 3, 3, 3, 3, 3, 3]])
Example Untrained Model Prediction: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
Example Untrained Model Prediction: tensor([[0, 3, 2, 2, 2, 4, 0, 3, 1, 3]])
Example Untrained Model Prediction: tensor([[0, 6, 6, 6, 6, 6, 6, 6, 6, 6]])
Part 2: Model Training
Training
This section describes the training regime for our models.
We stop for a quick interlude to introduce some of the tools
needed to train a standard encoder-decoder model. First we define a
batch object that holds the src and target sentences for training
and constructs the masks.
    @staticmethod
    def make_std_mask(tgt, pad):
        "Create a mask to hide padding and future words."
        tgt_mask = (tgt != pad).unsqueeze(-2)
        tgt_mask = tgt_mask & subsequent_mask(tgt.size(-1)).type_as(
            tgt_mask.data
        )
        return tgt_mask
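make_std_mask above is a @staticmethod of the Batch object mentioned in the text, the rest of which is not shown here. A sketch of its constructor, assuming a padding index of 2 for the <blank> token used later in this post (the copy-task data_gen passes 0 instead):

class Batch:
    """Object for holding a batch of data with mask during training."""

    def __init__(self, src, tgt=None, pad=2):  # 2 = <blank>
        self.src = src
        self.src_mask = (src != pad).unsqueeze(-2)
        if tgt is not None:
            self.tgt = tgt[:, :-1]
            self.tgt_y = tgt[:, 1:]
            self.tgt_mask = self.make_std_mask(self.tgt, pad)
            self.ntokens = (self.tgt_y != pad).data.sum()

    # make_std_mask (shown above) is the other member of this class.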
Next we create a generic training and scoring function to keep
track of loss. We pass in a generic loss compute function that
also handles parameter updates.
Training Loop
class TrainState:
    """Track number of steps, examples, and tokens processed"""

    step: int = 0  # Steps in the current epoch
    accum_step: int = 0  # Number of gradient accumulation steps
    samples: int = 0  # total # of examples used
    tokens: int = 0  # total # of tokens processed
# (body of the run_epoch training loop: accumulate loss/token counts
#  and log progress every 40 steps)
        total_loss += loss
        total_tokens += batch.ntokens
        tokens += batch.ntokens
        if i % 40 == 1 and (mode == "train" or mode == "train+log"):
            lr = optimizer.param_groups[0]["lr"]
            elapsed = time.time() - start
            print(
                (
                    "Epoch Step: %6d | Accumulation Step: %3d | Loss: %6.2f "
                    + "| Tokens / Sec: %7.1f | Learning Rate: %6.1e"
                )
                % (i, n_accum, loss / batch.ntokens, tokens / elapsed, lr)
            )
            start = time.time()
            tokens = 0
        del loss
        del loss_node
    return total_loss / total_tokens, train_state
Training Data and Batching
We trained on the standard WMT 2014 English-German dataset
consisting of about 4.5 million sentence pairs. Sentences were
encoded using byte-pair encoding, which has a shared source-target
vocabulary of about 37000 tokens. For English-French, we used the
significantly larger WMT 2014 English-French dataset consisting of
36M sentences and split tokens into a 32000 word-piece vocabulary.
Sentence pairs were batched together by approximate sequence length.
Each training batch contained a set of sentence pairs containing
approximately 25000 source tokens and 25000 target tokens.
Hardware and Schedule
We trained our models on one machine with 8 NVIDIA P100 GPUs. For
our base models using the hyperparameters described throughout the
paper, each training step took about 0.4 seconds. We trained the
base models for a total of 100,000 steps or 12 hours. For our big
models, step time was 1.0 seconds. The big models were trained for
300,000 steps (3.5 days).
Optimizer
We used the Adam optimizer (cite)
with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 10^{-9}$. We
varied the learning rate over the course of training, according to
the formula:

$$lrate = d_{\text{model}}^{-0.5} \cdot \min(step\_num^{-0.5},\ step\_num \cdot warmup\_steps^{-1.5})$$

This corresponds to increasing the learning rate linearly for the
first $warmup\_steps$ training steps, and decreasing it thereafter
proportionally to the inverse square root of the step number. We
used $warmup\_steps = 4000$.
Note: this part is very important. The model needs to be trained
with this learning-rate schedule.

Below are example learning-rate curves for different model sizes and
optimization hyperparameters.
def rate(step, model_size, factor, warmup):
    """
    we have to default the step to 1 for LambdaLR function
    to avoid zero raising to negative power.
    """
    if step == 0:
        step = 1
    return factor * (
        model_size ** (-0.5) * min(step ** (-0.5), step * warmup ** (-1.5))
    )
    # (inner loop of the learning-rate schedule example: opts holds
    #  (model_size, factor, warmup) settings and dummy_model is a small
    #  placeholder model)
    # we have 3 examples in opts list.
    for idx, example in enumerate(opts):
        # run 20000 epoch for each example
        optimizer = torch.optim.Adam(
            dummy_model.parameters(), lr=1, betas=(0.9, 0.98), eps=1e-9
        )
        lr_scheduler = LambdaLR(
            optimizer=optimizer, lr_lambda=lambda step: rate(step, *example)
        )
        tmp = []
        # take 20K dummy training steps, save the learning rate at each step
        for step in range(20000):
            tmp.append(optimizer.param_groups[0]["lr"])
            optimizer.step()
            lr_scheduler.step()
        learning_rates.append(tmp)

    learning_rates = torch.tensor(learning_rates)

    # Enable altair to handle more than 5000 rows
    alt.data_transformers.disable_max_rows()
Label Smoothing

During training, we employed label smoothing of value
$\epsilon_{ls} = 0.1$ (cite).
This hurts perplexity, as the model learns to be more unsure, but
improves accuracy and BLEU score.

We implement label smoothing using the KL div loss. Instead of
using a one-hot target distribution, we create a distribution that
has confidence of the correct word and the rest of the
smoothing mass distributed throughout the vocabulary.
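The LabelSmoothing module described here is not shown in this post as extracted. A sketch that builds the smoothed target distribution and scores it with KLDivLoss, zeroing the padding rows so padded positions contribute no loss:

class LabelSmoothing(nn.Module):
    "Implement label smoothing."

    def __init__(self, size, padding_idx, smoothing=0.0):
        super(LabelSmoothing, self).__init__()
        self.criterion = nn.KLDivLoss(reduction="sum")
        self.padding_idx = padding_idx
        self.confidence = 1.0 - smoothing
        self.smoothing = smoothing
        self.size = size
        self.true_dist = None

    def forward(self, x, target):
        assert x.size(1) == self.size
        true_dist = x.data.clone()
        # Spread the smoothing mass over the non-target, non-padding entries.
        true_dist.fill_(self.smoothing / (self.size - 2))
        true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence)
        true_dist[:, self.padding_idx] = 0
        mask = torch.nonzero(target.data == self.padding_idx)
        if mask.dim() > 0:
            true_dist.index_fill_(0, mask.squeeze(), 0.0)
        self.true_dist = true_dist
        return self.criterion(x, true_dist.clone().detach())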
We can begin by trying out a simple copy-task. Given a random set
of input symbols from a small vocabulary, the goal is to generate
back those same symbols.
Synthetic Data
def data_gen(V, batch_size, nbatches):
    "Generate random data for a src-tgt copy task."
    for i in range(nbatches):
        data = torch.randint(1, V, size=(batch_size, 10))
        data[:, 0] = 1
        src = data.requires_grad_(False).clone().detach()
        tgt = data.requires_grad_(False).clone().detach()
        yield Batch(src, tgt, 0)
Loss Computation
class SimpleLossCompute:
    "A simple loss compute and train function."
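Only the class header of SimpleLossCompute appears above. A sketch of the complete class, which applies the generator to the decoder output and normalizes the criterion by the token count:

class SimpleLossCompute:
    "A simple loss compute and train function."

    def __init__(self, generator, criterion):
        self.generator = generator
        self.criterion = criterion

    def __call__(self, x, y, norm):
        x = self.generator(x)
        sloss = (
            self.criterion(
                x.contiguous().view(-1, x.size(-1)), y.contiguous().view(-1)
            )
            / norm
        )
        return sloss.data * norm, sloss / norm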
Now we consider a real-world example using the Multi30k
German-English Translation task. This task is much smaller than
the WMT task considered in the paper, but it illustrates the whole
system. We also show how to use multi-GPU processing to make it
really fast.
Data Loading
We will load the dataset using torchtext and spacy for
tokenization.
# Load spacy tokenizer models, download them if they haven't been
# downloaded already
if is_interactive_notebook():
    # global variables used later in the script
    spacy_de, spacy_en = show_example(load_tokenizers)
    vocab_src, vocab_tgt = show_example(load_vocab, args=[spacy_de, spacy_en])
Finished.
Vocabulary sizes:
59981
36745
Batching matters a ton for speed. We want to have very evenly
divided batches, with absolutely minimal padding. To do this we
have to hack a bit around the default torchtext batching. This
code patches their default batching to make sure we search over
enough sentences to find tight batches.
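The collate and dataloader code this note refers to is not shown here. As a toy illustration of the idea (not the post's actual code), one can sort sentence pairs by length and greedily fill batches up to a token budget, so that similarly sized sentences land in the same batch and padding stays small:

def batch_by_length(pairs, max_tokens=25000):
    """
    Toy illustration: group (src, tgt) token-list pairs into batches whose
    padded size stays under a token budget.
    """
    key = lambda p: max(len(p[0]), len(p[1]))
    batches, batch, batch_max = [], [], 0
    for pair in sorted(pairs, key=key):
        batch_max = max(batch_max, key(pair))
        # token cost if every pair in the batch is padded to the longest one
        if batch and (len(batch) + 1) * batch_max > max_tokens:
            batches.append(batch)
            batch, batch_max = [], key(pair)
        batch.append(pair)
    if batch:
        batches.append(batch)
    return batches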
    # (end of load_trained_model: build the model and load the trained weights)
    model = make_model(len(vocab_src), len(vocab_tgt), N=6)
    model.load_state_dict(torch.load("multi30k_model_final.pt"))
    return model
if is_interactive_notebook():
    model = load_trained_model()
Once trained we can decode the model to produce a set of
translations. Here we simply translate the first sentence in the
validation set. This dataset is pretty small so the translations
with greedy search are reasonably accurate.
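The greedy_decode function used here (and by the check_outputs code further down) is not shown in this post as extracted. A sketch that encodes the source once and then feeds the argmax token back in step by step:

def greedy_decode(model, src, src_mask, max_len, start_symbol):
    "Decode by always choosing the highest-probability next token."
    memory = model.encode(src, src_mask)
    ys = torch.zeros(1, 1).fill_(start_symbol).type_as(src.data)
    for i in range(max_len - 1):
        out = model.decode(
            memory, src_mask, ys, subsequent_mask(ys.size(1)).type_as(src.data)
        )
        prob = model.generator(out[:, -1])
        _, next_word = torch.max(prob, dim=1)
        next_word = next_word.data[0]
        ys = torch.cat(
            [ys, torch.zeros(1, 1).type_as(src.data).fill_(next_word)], dim=1
        )
    return ys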
Additional Components: BPE, Search, Averaging
So this mostly covers the transformer model itself. There are four
aspects that we didn't cover explicitly. We also have all these
additional features implemented in
OpenNMT-py.
BPE / Word-piece: We can use a library to first preprocess the
data into subword units. See Rico Sennrich's
subword-nmt
implementation. These models will transform the training data to
look like this:
▁Die ▁Protokoll datei ▁kann ▁ heimlich ▁per ▁E - Mail ▁oder ▁FTP
▁an ▁einen ▁bestimmte n ▁Empfänger ▁gesendet ▁werden .
Shared Embeddings: When using BPE with shared vocabulary we can
share the same weight vectors between the source / target /
generator. See (cite) for
details. To add this to the model, simply do this:
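The code block that followed this sentence is not shown here. A sketch of the weight tying, using the src_embed / tgt_embed / generator attributes of the EncoderDecoder built by make_model (only valid when source and target share a vocabulary):

# Tie source embeddings, target embeddings, and the pre-softmax projection.
model.src_embed[0].lut.weight = model.tgt_embed[0].lut.weight
model.generator.proj.weight = model.tgt_embed[0].lut.weight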
Beam Search: This is a bit too complicated to cover here. See
OpenNMT-py
for a PyTorch implementation.
Model Averaging: The paper averages the last k checkpoints to
create an ensembling effect. We can do this after the fact if we
have a bunch of models:
def average(model, models):
    "Average the parameters of models into model (checkpoint averaging)."
    for ps in zip(model.parameters(), *[m.parameters() for m in models]):
        # element-wise mean of the corresponding parameter in each checkpoint
        ps[0].data.copy_(sum(p.data for p in ps[1:]) / len(ps[1:]))
Results
On the WMT 2014 English-to-German translation task, the big
transformer model (Transformer (big) in Table 2) outperforms the
best previously reported models (including ensembles) by more than
2.0 BLEU, establishing a new state-of-the-art BLEU score of
28.4. The configuration of this model is listed in the bottom line
of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base
model surpasses all previously published models and ensembles, at a
fraction of the training cost of any of the competitive models.
On the WMT 2014 English-to-French translation task, our big model
achieves a BLEU score of 41.0, outperforming all of the previously
published single models, at less than 1/4 the training cost of the
previous state-of-the-art model. The Transformer (big) model trained
for English-to-French used dropout rate $P_{drop} = 0.1$, instead of 0.3.
With the additional extensions in the last section, the OpenNMT-py
replication gets to 26.9 on EN-DE WMT. Here I have loaded in those
parameters to our reimplementation.
        # (inside check_outputs: decode one validation example and print
        #  the source, target, and model output)
        src_tokens = [
            vocab_src.get_itos()[x] for x in rb.src[0] if x != pad_idx
        ]
        tgt_tokens = [
            vocab_tgt.get_itos()[x] for x in rb.tgt[0] if x != pad_idx
        ]

        print(
            "Source Text (Input)        : "
            + " ".join(src_tokens).replace("\n", "")
        )
        print(
            "Target Text (Ground Truth) : "
            + " ".join(tgt_tokens).replace("\n", "")
        )
        model_out = greedy_decode(model, rb.src, rb.src_mask, 72, 0)[0]
        model_txt = (
            " ".join(
                [vocab_tgt.get_itos()[x] for x in model_out if x != pad_idx]
            ).split(eos_string, 1)[0]
            + eos_string
        )
        print("Model Output               : " + model_txt.replace("\n", ""))
        results[idx] = (rb, src_tokens, tgt_tokens, model_out, model_txt)
    return results
def run_model_example(n_examples=5):
    global vocab_src, vocab_tgt, spacy_de, spacy_en
def mtx2df(m, max_row, max_col, row_tokens, col_tokens):
    "convert a dense matrix to a data frame with row and column indices"
    return pd.DataFrame(
        [
            (
                r,
                c,
                float(m[r, c]),
                "%.3d %s"
                % (r, row_tokens[r] if len(row_tokens) > r else "<blank>"),
                "%.3d %s"
                % (c, col_tokens[c] if len(col_tokens) > c else "<blank>"),
            )
            for r in range(m.shape[0])
            for c in range(m.shape[1])
            if r < max_row and c < max_col
        ],
        # if float(m[r,c]) != 0 and r < max_row and c < max_col],
        columns=["row", "column", "value", "row_token", "col_token"],
    )
def viz_encoder_self():
    model, example_data = run_model_example(n_examples=1)
    example = example_data[
        len(example_data) - 1
    ]  # batch object for the final example
Preparing Data ...
Loading Trained Model ...
Checking Model Outputs:
Example 0 ========
Source Text (Input) : <s> Mehrere Kinder heben die Hände , während sie auf einem bunten Teppich in einem Klassenzimmer sitzen . </s>
Target Text (Ground Truth) : <s> Several children are raising their hands while sitting on a colorful rug in a classroom . </s>
Model Output : <s> A group of children are in their hands while sitting on a colorful carpet . </s>
Preparing Data ...
Loading Trained Model ...
Checking Model Outputs:
Example 0 ========
Source Text (Input) : <s> Drei Menschen wandern auf einem stark verschneiten Weg . </s>
Target Text (Ground Truth) : <s> A <unk> of people are hiking throughout a heavily snowed path . </s>
Model Output : <s> Three people hiking on a busy <unk> . </s>
Preparing Data ...
Loading Trained Model ...
Checking Model Outputs:
Example 0 ========
Source Text (Input) : <s> Baby sieht sich die Blätter am Zweig eines Baumes an . </s>
Target Text (Ground Truth) : <s> Baby looking at the leaves on a branch of a tree . </s>
Model Output : <s> A baby is looking at the leaves at a tree . </s>
Conclusion
Hopefully this code is useful for future research. Please reach
out if you have any issues.