{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "s_qNSzzyaCbD" }, "source": [ "##### Copyright 2019 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "cellView": "form", "colab": {}, "colab_type": "code", "id": "jmjh290raIky" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "J0Qjg6vuaHNt" }, "source": [ "# Neural machine translation with attention" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "AOpGoE2T-YXS" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", " View on TensorFlow.org\n", " \n", " \n", " \n", " Run in Google Colab\n", " \n", " \n", " \n", " View source on GitHub\n", " \n", " Download notebook\n", "
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "CiwtNgENbx2g" }, "source": [ "This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation. This is an advanced example that assumes some knowledge of sequence to sequence models.\n", "\n", "After training the model in this notebook, you will be able to input a Spanish sentence, such as *\"¿todavia estan en casa?\"*, and return the English translation: *\"are you still at home?\"*\n", "\n", "The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence has the model's attention while translating:\n", "\n", "\"spanish-english\n", "\n", "Note: This example takes approximately 10 minutes to run on a single P100 GPU." ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "tnxXKDjq3jEL" }, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "import matplotlib.pyplot as plt\n", "import matplotlib.ticker as ticker\n", "from sklearn.model_selection import train_test_split\n", "\n", "import unicodedata\n", "import re\n", "import numpy as np\n", "import os\n", "import io\n", "import time" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "wfodePkj3jEa" }, "source": [ "## Download and prepare the dataset\n", "\n", "We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:\n", "\n", "```\n", "May I borrow this book?\t¿Puedo tomar prestado este libro?\n", "```\n", "\n", "There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:\n", "\n", "1. Add a *start* and *end* token to each sentence.\n", "2. Clean the sentences by removing special characters.\n", "3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).\n", "4. Pad each sentence to a maximum length." 
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "kRVATYOgJs1b" }, "outputs": [], "source": [ "# Download the file\n", "path_to_zip = tf.keras.utils.get_file(\n", "    'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',\n", "    extract=True)\n", "\n", "path_to_file = os.path.dirname(path_to_zip)+\"/spa-eng/spa.txt\"" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "rd0jw-eC3jEh" }, "outputs": [], "source": [ "# Converts the unicode file to ascii\n", "def unicode_to_ascii(s):\n", "  return ''.join(c for c in unicodedata.normalize('NFD', s)\n", "                 if unicodedata.category(c) != 'Mn')\n", "\n", "\n", "def preprocess_sentence(w):\n", "  w = unicode_to_ascii(w.lower().strip())\n", "\n", "  # creating a space between a word and the punctuation following it\n", "  # eg: \"he is a boy.\" => \"he is a boy .\"\n", "  # Reference: https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation\n", "  w = re.sub(r\"([?.!,¿])\", r\" \\1 \", w)\n", "  w = re.sub(r'[\" \"]+', \" \", w)\n", "\n", "  # replacing everything with space except (a-z, A-Z, \".\", \"?\", \"!\", \",\")\n", "  w = re.sub(r\"[^a-zA-Z?.!,¿]+\", \" \", w)\n", "\n", "  w = w.strip()\n", "\n", "  # adding a start and an end token to the sentence\n", "  # so that the model knows when to start and stop predicting.\n", "  w = '<start> ' + w + ' <end>'\n", "  return w" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "opI2GzOt479E" }, "outputs": [], "source": [ "en_sentence = u\"May I borrow this book?\"\n", "sp_sentence = u\"¿Puedo tomar prestado este libro?\"\n", "print(preprocess_sentence(en_sentence))\n", "print(preprocess_sentence(sp_sentence).encode('utf-8'))" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "OHn4Dct23jEm" }, "outputs": [], "source": [ "# 1. Remove the accents\n", "# 2. Clean the sentences\n", "# 3. 
Return word pairs in the format: [ENGLISH, SPANISH]\n", "def create_dataset(path, num_examples):\n", " lines = io.open(path, encoding='UTF-8').read().strip().split('\\n')\n", "\n", " word_pairs = [[preprocess_sentence(w) for w in l.split('\\t')] for l in lines[:num_examples]]\n", "\n", " return zip(*word_pairs)" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "cTbSbBz55QtF" }, "outputs": [], "source": [ "en, sp = create_dataset(path_to_file, None)\n", "print(en[-1])\n", "print(sp[-1])" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "bIOn8RCNDJXG" }, "outputs": [], "source": [ "def tokenize(lang):\n", " lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(\n", " filters='')\n", " lang_tokenizer.fit_on_texts(lang)\n", "\n", " tensor = lang_tokenizer.texts_to_sequences(lang)\n", "\n", " tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,\n", " padding='post')\n", "\n", " return tensor, lang_tokenizer" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "eAY9k49G3jE_" }, "outputs": [], "source": [ "def load_dataset(path, num_examples=None):\n", " # creating cleaned input, output pairs\n", " targ_lang, inp_lang = create_dataset(path, num_examples)\n", "\n", " input_tensor, inp_lang_tokenizer = tokenize(inp_lang)\n", " target_tensor, targ_lang_tokenizer = tokenize(targ_lang)\n", "\n", " return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "GOi42V79Ydlr" }, "source": [ "### Limit the size of the dataset to experiment faster (optional)\n", "\n", "Training on the complete dataset of >100,000 sentences will take a long time. 
To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "cnxC7q-j3jFD" }, "outputs": [], "source": [ "# Try experimenting with the size of the dataset\n", "num_examples = 30000\n", "input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)\n", "\n", "# Calculate max_length of the input and target tensors\n", "max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "4QILQkOs3jFG" }, "outputs": [], "source": [ "# Creating training and validation sets using an 80-20 split\n", "input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)\n", "\n", "# Show length\n", "print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "lJPmLZGMeD5q" }, "outputs": [], "source": [ "def convert(lang, tensor):\n", "  for t in tensor:\n", "    if t != 0:\n", "      print(\"%d ----> %s\" % (t, lang.index_word[t]))" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "VXukARTDd7MT" }, "outputs": [], "source": [ "print(\"Input Language; index to word mapping\")\n", "convert(inp_lang, input_tensor_train[0])\n", "print()\n", "print(\"Target Language; index to word mapping\")\n", "convert(targ_lang, target_tensor_train[0])" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "rgCLkfv5uO3d" }, "source": [ "### Create a tf.data dataset" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "TqHsArVZ3jFS" }, "outputs": [], "source": [ "BUFFER_SIZE = len(input_tensor_train)\n", "BATCH_SIZE = 64\n", "steps_per_epoch = len(input_tensor_train)//BATCH_SIZE\n", "embedding_dim = 256\n", "units = 1024\n", "vocab_inp_size = len(inp_lang.word_index)+1\n", "vocab_tar_size = len(targ_lang.word_index)+1\n", "\n", "dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)\n", "dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "qc6-NK1GtWQt" }, "outputs": [], "source": [ "example_input_batch, example_target_batch = next(iter(dataset))\n", "example_input_batch.shape, example_target_batch.shape" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "TNfHIF71ulLu" }, "source": [ "## Write the encoder and decoder model\n", "\n", "Implement an encoder-decoder model with attention, which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs. This notebook implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The attention mechanism assigns each input word a weight, which is then used by the decoder to predict the next word in the sentence. 
The formulas below are an example of an attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5).\n", "\n", "*(Diagram: attention mechanism.)*\n", "\n", "The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.\n", "\n", "Here are the equations that are implemented:\n", "\n", "$$\\mathrm{score}(h_t, \\bar{h}_s) = v_a^\\top \\tanh(W_1 h_t + W_2 \\bar{h}_s)$$\n", "\n", "$$\\alpha_{ts} = \\frac{\\exp(\\mathrm{score}(h_t, \\bar{h}_s))}{\\sum_{s'} \\exp(\\mathrm{score}(h_t, \\bar{h}_{s'}))}$$\n", "\n", "$$c_t = \\sum_s \\alpha_{ts} \\bar{h}_s$$\n", "\n", "where $h_t$ is the decoder hidden state (the query) and $\\bar{h}_s$ are the encoder outputs (the values).\n", "\n", "This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) over the encoder output. Let's decide on notation before writing the simplified form:\n", "\n", "* FC = Fully connected (dense) layer\n", "* EO = Encoder output\n", "* H = hidden state\n", "* X = input to the decoder\n", "\n", "And the pseudo-code:\n", "\n", "* `score = FC(tanh(FC(EO) + FC(H)))`\n", "* `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis, but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, 1)*. `Max_length` is the length of our input. Since we are trying to assign a weight to each input word, softmax should be applied on that axis.\n", "* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.\n", "* `embedding output` = The input to the decoder X is passed through an embedding layer.\n", "* `merged vector = concat(embedding output, context vector)`\n", "* This merged vector is then given to the GRU.\n", "\n", "The shapes of all the vectors at each step have been specified in the comments in the code:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "nZ2rI24i3jFg" }, "outputs": [], "source": [ "class Encoder(tf.keras.Model):\n", "  def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):\n", "    super(Encoder, self).__init__()\n", "    self.batch_sz = batch_sz\n", "    self.enc_units = enc_units\n", "    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n", "    self.gru = tf.keras.layers.GRU(self.enc_units,\n", "                                   return_sequences=True,\n", "                                   return_state=True,\n", "                                   recurrent_initializer='glorot_uniform')\n", "\n", "  def call(self, x, hidden):\n", "    x = self.embedding(x)\n", "    output, state = self.gru(x, initial_state=hidden)\n", "    return output, state\n", "\n", "  def initialize_hidden_state(self):\n", "    return tf.zeros((self.batch_sz, self.enc_units))" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "60gSVh05Jl6l" }, "outputs": [], "source": [ "encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)\n", "\n", "# sample input\n", "sample_hidden = encoder.initialize_hidden_state()\n", "sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)\n", "print('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))\n", "print('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "umohpBN2OM94" }, "outputs": [], "source": [ "class BahdanauAttention(tf.keras.layers.Layer):\n", "  def __init__(self, units):\n", "    super(BahdanauAttention, self).__init__()\n", "    self.W1 = tf.keras.layers.Dense(units)\n", "    self.W2 = tf.keras.layers.Dense(units)\n", "    self.V = tf.keras.layers.Dense(1)\n", "\n", "  def call(self, query, values):\n", "    # query hidden state shape == (batch_size, hidden size)\n", "    # 
query_with_time_axis shape == (batch_size, 1, hidden size)\n", "    # values shape == (batch_size, max_len, hidden size)\n", "    # we are doing this to broadcast addition along the time axis to calculate the score\n", "    query_with_time_axis = tf.expand_dims(query, 1)\n", "\n", "    # score shape == (batch_size, max_length, 1)\n", "    # we get 1 at the last axis because self.V has a single output unit\n", "    # the shape of the tensor before applying self.V is (batch_size, max_length, units)\n", "    score = self.V(tf.nn.tanh(\n", "        self.W1(query_with_time_axis) + self.W2(values)))\n", "\n", "    # attention_weights shape == (batch_size, max_length, 1)\n", "    attention_weights = tf.nn.softmax(score, axis=1)\n", "\n", "    # context_vector shape after sum == (batch_size, hidden_size)\n", "    context_vector = attention_weights * values\n", "    context_vector = tf.reduce_sum(context_vector, axis=1)\n", "\n", "    return context_vector, attention_weights" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "k534zTHiDjQU" }, "outputs": [], "source": [ "attention_layer = BahdanauAttention(10)\n", "attention_result, attention_weights = attention_layer(sample_hidden, sample_output)\n", "\n", "print(\"Attention result shape: (batch size, units) {}\".format(attention_result.shape))\n", "print(\"Attention weights shape: (batch_size, sequence_length, 1) {}\".format(attention_weights.shape))" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "yJ_B3mhW3jFk" }, "outputs": [], "source": [ "class Decoder(tf.keras.Model):\n", "  def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):\n", "    super(Decoder, self).__init__()\n", "    self.batch_sz = batch_sz\n", "    self.dec_units = dec_units\n", "    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n", "    self.gru = tf.keras.layers.GRU(self.dec_units,\n", "                                   return_sequences=True,\n", "                                   return_state=True,\n", "                                   recurrent_initializer='glorot_uniform')\n", "    self.fc = tf.keras.layers.Dense(vocab_size)\n", "\n", "    # used for attention\n", "    self.attention = BahdanauAttention(self.dec_units)\n", "\n", "  def call(self, x, hidden, enc_output):\n", "    # enc_output shape == (batch_size, max_length, hidden_size)\n", "    context_vector, attention_weights = self.attention(hidden, enc_output)\n", "\n", "    # x shape after passing through embedding == (batch_size, 1, embedding_dim)\n", "    x = self.embedding(x)\n", "\n", "    # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)\n", "    x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)\n", "\n", "    # passing the concatenated vector to the GRU\n", "    output, state = self.gru(x)\n", "\n", "    # output shape == (batch_size * 1, hidden_size)\n", "    output = tf.reshape(output, (-1, output.shape[2]))\n", "\n", "    # output shape == (batch_size, vocab)\n", "    x = self.fc(output)\n", "\n", "    return x, state, attention_weights" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "P5UY8wko3jFp" }, "outputs": [], "source": [ "decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)\n", "\n", "sample_decoder_output, _, _ = decoder(tf.random.uniform((BATCH_SIZE, 1)),\n", "                                      sample_hidden, sample_output)\n", "\n", "print('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "_ch_71VbIRfK" }, "source": [ "## Define the optimizer and the loss function"
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "WmTHr5iV3jFr" }, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.Adam()\n", "loss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n", "    from_logits=True, reduction='none')\n", "\n", "def loss_function(real, pred):\n", "  mask = tf.math.logical_not(tf.math.equal(real, 0))\n", "  loss_ = loss_object(real, pred)\n", "\n", "  mask = tf.cast(mask, dtype=loss_.dtype)\n", "  loss_ *= mask\n", "\n", "  return tf.reduce_mean(loss_)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "DMVWzzsfNl4e" }, "source": [ "## Checkpoints (Object-based saving)" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "Zj8bXQTgNwrF" }, "outputs": [], "source": [ "checkpoint_dir = './training_checkpoints'\n", "checkpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")\n", "checkpoint = tf.train.Checkpoint(optimizer=optimizer,\n", "                                 encoder=encoder,\n", "                                 decoder=decoder)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "hpObfY22IddU" }, "source": [ "## Training\n", "\n", "1. Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*.\n", "2. The encoder output, the encoder hidden state and the decoder input (which is the *start token*) are passed to the decoder.\n", "3. The decoder returns the *predictions* and the *decoder hidden state*.\n", "4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.\n", "5. Use *teacher forcing* to decide the next input to the decoder.\n", "6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.\n", "7. The final step is to calculate the gradients and apply them to the model's variables using the optimizer."
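, "\n", "Before running the training step, the masked `loss_function` defined above can be sanity-checked on a tiny made-up batch (the `real`/`pred` values below are invented purely for illustration): positions that hold the padding id 0 should contribute nothing to the loss." ] }, { "cell_type": "code", "execution_count": 0, "metadata": {}, "outputs": [], "source": [ "# Optional sanity check of the masked loss on a made-up batch (illustration only):\n", "# `real` holds two target ids followed by a padding 0, `pred` holds random logits\n", "# over a tiny vocabulary of 5 words.\n", "real = tf.constant([[3, 1, 0]])\n", "pred = tf.random.uniform((1, 3, 5))\n", "\n", "print(loss_object(real, pred).numpy())    # per-position loss, shape (1, 3)\n", "print(loss_function(real, pred).numpy())  # the padded position is zeroed out before the mean"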
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "sC9ArXSsVfqn" }, "outputs": [], "source": [ "@tf.function\n", "def train_step(inp, targ, enc_hidden):\n", "  loss = 0\n", "\n", "  with tf.GradientTape() as tape:\n", "    enc_output, enc_hidden = encoder(inp, enc_hidden)\n", "\n", "    dec_hidden = enc_hidden\n", "\n", "    dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)\n", "\n", "    # Teacher forcing - feeding the target as the next input\n", "    for t in range(1, targ.shape[1]):\n", "      # passing enc_output to the decoder\n", "      predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)\n", "\n", "      loss += loss_function(targ[:, t], predictions)\n", "\n", "      # using teacher forcing\n", "      dec_input = tf.expand_dims(targ[:, t], 1)\n", "\n", "  batch_loss = (loss / int(targ.shape[1]))\n", "\n", "  variables = encoder.trainable_variables + decoder.trainable_variables\n", "\n", "  gradients = tape.gradient(loss, variables)\n", "\n", "  optimizer.apply_gradients(zip(gradients, variables))\n", "\n", "  return batch_loss" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "ddefjBMa3jF0" }, "outputs": [], "source": [ "EPOCHS = 10\n", "\n", "for epoch in range(EPOCHS):\n", "  start = time.time()\n", "\n", "  enc_hidden = encoder.initialize_hidden_state()\n", "  total_loss = 0\n", "\n", "  for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):\n", "    batch_loss = train_step(inp, targ, enc_hidden)\n", "    total_loss += batch_loss\n", "\n", "    if batch % 100 == 0:\n", "      print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,\n", "                                                   batch,\n", "                                                   batch_loss.numpy()))\n", "  # saving (checkpoint) the model every 2 epochs\n", "  if (epoch + 1) % 2 == 0:\n", "    checkpoint.save(file_prefix=checkpoint_prefix)\n", "\n", "  print('Epoch {} Loss {:.4f}'.format(epoch + 1,\n", "                                      total_loss / steps_per_epoch))\n", "  print('Time taken for 1 epoch {} sec\\n'.format(time.time() - start))" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "mU3Ce8M6I3rz" }, "source": [ "## Translate\n", "\n", "* The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous prediction along with the hidden state and the encoder output.\n", "* Stop predicting when the model predicts the *end token*.\n", "* Store the *attention weights for every time step*.\n", "\n", "Note: The encoder output is calculated only once for one input."
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "EbQpyYs13jF_" }, "outputs": [], "source": [ "def evaluate(sentence):\n", "  attention_plot = np.zeros((max_length_targ, max_length_inp))\n", "\n", "  sentence = preprocess_sentence(sentence)\n", "\n", "  inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]\n", "  inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],\n", "                                                         maxlen=max_length_inp,\n", "                                                         padding='post')\n", "  inputs = tf.convert_to_tensor(inputs)\n", "\n", "  result = ''\n", "\n", "  hidden = [tf.zeros((1, units))]\n", "  enc_out, enc_hidden = encoder(inputs, hidden)\n", "\n", "  dec_hidden = enc_hidden\n", "  dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)\n", "\n", "  for t in range(max_length_targ):\n", "    predictions, dec_hidden, attention_weights = decoder(dec_input,\n", "                                                         dec_hidden,\n", "                                                         enc_out)\n", "\n", "    # storing the attention weights to plot later on\n", "    attention_weights = tf.reshape(attention_weights, (-1, ))\n", "    attention_plot[t] = attention_weights.numpy()\n", "\n", "    predicted_id = tf.argmax(predictions[0]).numpy()\n", "\n", "    result += targ_lang.index_word[predicted_id] + ' '\n", "\n", "    if targ_lang.index_word[predicted_id] == '<end>':\n", "      return result, sentence, attention_plot\n", "\n", "    # the predicted ID is fed back into the model\n", "    dec_input = tf.expand_dims([predicted_id], 0)\n", "\n", "  return result, sentence, attention_plot" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "s5hQWlbN3jGF" }, "outputs": [], "source": [ "# function for plotting the attention weights\n", "def plot_attention(attention, sentence, predicted_sentence):\n", "  fig = plt.figure(figsize=(10, 10))\n", "  ax = fig.add_subplot(1, 1, 1)\n", "  ax.matshow(attention, cmap='viridis')\n", "\n", "  fontdict = {'fontsize': 14}\n", "\n", "  ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)\n", "  ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)\n", "\n", "  ax.xaxis.set_major_locator(ticker.MultipleLocator(1))\n", "  ax.yaxis.set_major_locator(ticker.MultipleLocator(1))\n", "\n", "  plt.show()" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "sl9zUHzg3jGI" }, "outputs": [], "source": [ "def translate(sentence):\n", "  result, sentence, attention_plot = evaluate(sentence)\n", "\n", "  print('Input: %s' % (sentence))\n", "  print('Predicted translation: {}'.format(result))\n", "\n", "  attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]\n", "  plot_attention(attention_plot, sentence.split(' '), result.split(' '))" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "n250XbnjOaqP" }, "source": [ "## Restore the latest checkpoint and test" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "UJpT9D5_OgP6" }, "outputs": [], "source": [ "# restoring the latest checkpoint in checkpoint_dir\n", "checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "WrAM0FDomq3E" }, "outputs": [], "source": [ "translate(u'hace mucho frio aqui.')" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "zSx2iM36EZQZ" }, "outputs": [], "source": [ "translate(u'esta es mi vida.')" ] }, { "cell_type": "code", "execution_count": 0, 
"metadata": { "colab": {}, "colab_type": "code", "id": "A3LLCx3ZE0Ls" }, "outputs": [], "source": [ "translate(u'¿todavia estan en casa?')" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "DUQVLVqUE1YW" }, "outputs": [], "source": [ "# wrong translation\n", "translate(u'trata de averiguarlo.')" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "RTe5P5ioMJwN" }, "source": [ "## Next steps\n", "\n", "* [Download a different dataset](http://www.manythings.org/anki/) to experiment with translations, for example, English to German, or English to French.\n", "* Experiment with training on a larger dataset, or using more epochs\n" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "nmt_with_attention.ipynb", "private_outputs": true, "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }