{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "X4cRE8IbIrIV" }, "source": [ "If you're opening this notebook on Colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Uncomment the following cell and run it. We also use the `sacrebleu` and `sentencepiece` libraries - you may need to install these even if you already have 🤗 Transformers!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "MOsHUjgdIrIW", "outputId": "f84a093e-147f-470e-aad9-80fb51193c8e" }, "outputs": [], "source": [ "#! pip install transformers[sentencepiece] datasets\n", "#! pip install sacrebleu sentencepiece\n", "#! pip install huggingface_hub" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're opening this notebook locally, make sure your environment has an up-to-date install of those libraries.\n", "\n", "To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.\n", "\n", "First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!), then run the following cell and input your token:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from huggingface_hub import notebook_login\n", "\n", "notebook_login()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then you need to install Git-LFS and set up Git if you haven't already. Uncomment the following instructions and adapt them with your name and email:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !apt install git-lfs\n", "# !git config --global user.email \"you@example.com\"\n", "# !git config --global user.name \"Your Name\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make sure your version of Transformers is at least 4.16.0 since some of the functionality we use was introduced in that version:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4.21.0.dev0\n" ] } ], "source": [ "import transformers\n", "\n", "print(transformers.__version__)" ] }, { "cell_type": "markdown", "metadata": { "id": "HFASsisvIrIb" }, "source": [ "You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers.utils import send_example_telemetry\n", "\n", "send_example_telemetry(\"translation_notebook\", framework=\"tensorflow\")" ] }, { "cell_type": "markdown", "metadata": { "id": "rEJBSTyZIrIb" }, "source": [ "# Fine-tuning a model on a translation task" ] }, { "cell_type": "markdown", "metadata": { "id": "kTCFado4IrIc" }, "source": [ "In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models for a translation task. We will use the [WMT dataset](http://www.statmt.org/wmt16/), a machine translation dataset composed of various sources, including news commentaries and parliament proceedings.\n", "\n", "\n", "\n", "We will see how to easily load the dataset for this task using 🤗 Datasets and how to fine-tune a model on it using Keras." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "model_checkpoint = \"Helsinki-NLP/opus-mt-en-ROMANCE\"" ] }, { "cell_type": "markdown", "metadata": { "id": "4RRkXuteIrIh" }, "source": [ "This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we picked the [`Helsinki-NLP/opus-mt-en-ROMANCE`](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) checkpoint." ] }, { "cell_type": "markdown", "metadata": { "id": "whPRbBNbIrIl" }, "source": [ "## Loading the dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "W7QYTpxXIrIl" }, "source": [ "We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the `datasets` function `load_dataset` and the `evaluate` function `load`. We use the English/Romanian part of the WMT dataset here."
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "IreSlFmlIrIm" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Reusing dataset wmt16 (/home/matt/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/28ebdf8cf22106c2f1e58b2083d4b103608acd7bfdb6b14313ccd9e5bc8c313a)\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3cead726a3164d67847ade363d465233", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/3 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from datasets import load_dataset\n", "from evaluate import load\n", "\n", "raw_datasets = load_dataset(\"wmt16\", \"ro-en\")\n", "metric = load(\"sacrebleu\")" ] }, { "cell_type": "markdown", "metadata": { "id": "RzfPtOMoIrIu" }, "source": [ "The `dataset` object itself is [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for the training, validation and test set:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "id": "GWiVUF0jIrIv", "outputId": "35e3ea43-f397-4a54-c90c-f2cf8d36873e" }, "outputs": [ { "data": { "text/plain": [ "DatasetDict({\n", " train: Dataset({\n", " features: ['translation'],\n", " num_rows: 610320\n", " })\n", " validation: Dataset({\n", " features: ['translation'],\n", " num_rows: 1999\n", " })\n", " test: Dataset({\n", " features: ['translation'],\n", " num_rows: 1999\n", " })\n", "})" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "raw_datasets" ] }, { "cell_type": "markdown", "metadata": { "id": "u3EtYfeHIrIz" }, "source": [ "To access an actual element, you need to select a split first, then give an index:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "X6HrpprwIrIz", "outputId": "d7670bc0-42e4-4c09-8a6a-5c018ded7d95" }, "outputs": [ { "data": { "text/plain": [ "{'translation': {'en': 'Membership of Parliament: see Minutes',\n", " 'ro': 'Componenţa Parlamentului: a se vedea procesul-verbal'}}" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "raw_datasets[\"train\"][0]" ] }, { "cell_type": "markdown", "metadata": { "id": "WHUmphG3IrI3" }, "source": [ "To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "i3j8APAoIrI3" }, "outputs": [], "source": [ "import datasets\n", "import random\n", "import pandas as pd\n", "from IPython.display import display, HTML\n", "\n", "\n", "def show_random_elements(dataset, num_examples=5):\n", " assert num_examples <= len(\n", " dataset\n", " ), \"Can't pick more elements than there are in the dataset.\"\n", " picks = []\n", " for _ in range(num_examples):\n", " pick = random.randint(0, len(dataset) - 1)\n", " while pick in picks:\n", " pick = random.randint(0, len(dataset) - 1)\n", " picks.append(pick)\n", "\n", " df = pd.DataFrame(dataset[picks])\n", " for column, typ in dataset.features.items():\n", " if isinstance(typ, datasets.ClassLabel):\n", " df[column] = df[column].transform(lambda i: typ.names[i])\n", " display(HTML(df.to_html()))" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "SZy5tRB_IrI7", "outputId": "ba8f2124-e485-488f-8c0c-254f34f24f13" }, "outputs": [ { "data": { "text/html": [ "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>translation</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>{'en': '\"Kosovo does not have not a positive image mainly because (the media portrays) the Serbs living in ghettoes ... and NATO helping Albanians displace the Serbs from Kosovo,\" Chukov said.', 'ro': '\"Kosovo nu are o imagine pozitivă mai ales din cauza faptului că (presa arată) că sârbii trăiesc în ghetto-uri ... şi NATO îi ajută pe albanezi să strămute sârbii din Kosovo\", a spus Chukov.'}</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>{'en': 'They also signed a memorandum of understanding on diplomatic consultations.', 'ro': 'Aceştia au semnat de asemenea un protocol de acord cu privire la consultaţiile diplomatice.'}</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>{'en': 'EU Commissioner for Home Affairs Cecilia Malmstrom said on Monday (September 20th) that Albania has made significant progress in meeting requirements for visa-free travel.', 'ro': 'Comisarul UE pentru afaceri interne, Cecilia Malmstrom, a declarat luni (20 septembrie) că Albania a făcut progrese semnificative în întrunirea condiţiilor pentru liberalizarea vizelor.'}</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>{'en': '13.', 'ro': '13.'}</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>{'en': 'But, in principle, thank you very much for what was, for me, too, a very interesting debate, and all the best.', 'ro': 'Dar, în principiu, vă mulţumesc foarte mult pentru această dezbatere care a fost foarte interesantă pentru mine şi vă urez toate cele bune.'}</td>\n", " </tr>\n", " </tbody>\n", "</table>" ], "text/plain": [ "<IPython.core.display.HTML object>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_random_elements(raw_datasets[\"train\"])" ] }, { "cell_type": "markdown", "metadata": { "id": "lnjDIuQ3IrI-" }, "source": [ "The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "5o4rUteaIrI_", "outputId": "18038ef5-554c-45c5-e00a-133b02ec10f1" }, "outputs": [ { "data": { "text/plain": [ "EvaluationModule(name: \"sacrebleu\", module_type: \"metric\", features: [{'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Value(dtype='string', id='sequence'), 
length=-1, id='references')}, {'predictions': Value(dtype='string', id='sequence'), 'references': Value(dtype='string', id='sequence')}], usage: \"\"\"\n", "Produces BLEU scores along with its sufficient statistics\n", "from a source against one or more references.\n", "\n", "Args:\n", " predictions (`list` of `str`): list of translations to score. Each translation should be tokenized into a list of tokens.\n", " references (`list` of `list` of `str`): A list of lists of references. The contents of the first sub-list are the references for the first prediction, the contents of the second sub-list are for the second prediction, etc. Note that there must be the same number of references for each prediction (i.e. all sub-lists must be of the same length).\n", " smooth_method (`str`): The smoothing method to use, defaults to `'exp'`. Possible values are:\n", " - `'none'`: no smoothing\n", " - `'floor'`: increment zero counts\n", " - `'add-k'`: increment num/denom by k for n>1\n", " - `'exp'`: exponential decay\n", " smooth_value (`float`): The smoothing value. Only valid when `smooth_method='floor'` (in which case `smooth_value` defaults to `0.1`) or `smooth_method='add-k'` (in which case `smooth_value` defaults to `1`).\n", " tokenize (`str`): Tokenization method to use for BLEU. If not provided, defaults to `'zh'` for Chinese, `'ja-mecab'` for Japanese and `'13a'` (mteval) otherwise. Possible values are:\n", " - `'none'`: No tokenization.\n", " - `'zh'`: Chinese tokenization.\n", " - `'13a'`: mimics the `mteval-v13a` script from Moses.\n", " - `'intl'`: International tokenization, mimics the `mteval-v14` script from Moses\n", " - `'char'`: Language-agnostic character-level tokenization.\n", " - `'ja-mecab'`: Japanese tokenization. Uses the [MeCab tokenizer](https://pypi.org/project/mecab-python3).\n", " lowercase (`bool`): If `True`, lowercases the input, enabling case-insensitivity. Defaults to `False`.\n", " force (`bool`): If `True`, insists that your tokenized input is actually detokenized. Defaults to `False`.\n", " use_effective_order (`bool`): If `True`, stops including n-gram orders for which precision is 0. This should be `True`, if sentence-level BLEU will be computed. Defaults to `False`.\n", "\n", "Returns:\n", " 'score': BLEU score,\n", " 'counts': Counts,\n", " 'totals': Totals,\n", " 'precisions': Precisions,\n", " 'bp': Brevity penalty,\n", " 'sys_len': predictions length,\n", " 'ref_len': reference length,\n", "\n", "Examples:\n", "\n", " Example 1:\n", " >>> predictions = [\"hello there general kenobi\", \"foo bar foobar\"]\n", " >>> references = [[\"hello there general kenobi\", \"hello there !\"], [\"foo bar foobar\", \"foo bar foobar\"]]\n", " >>> sacrebleu = evaluate.load(\"sacrebleu\")\n", " >>> results = sacrebleu.compute(predictions=predictions, references=references)\n", " >>> print(list(results.keys()))\n", " ['score', 'counts', 'totals', 'precisions', 'bp', 'sys_len', 'ref_len']\n", " >>> print(round(results[\"score\"], 1))\n", " 100.0\n", "\n", " Example 2:\n", " >>> predictions = [\"hello there general kenobi\",\n", " ... \"on our way to ankh morpork\"]\n", " >>> references = [[\"hello there general kenobi\", \"hello there !\"],\n", " ... [\"goodbye ankh morpork\", \"ankh morpork\"]]\n", " >>> sacrebleu = evaluate.load(\"sacrebleu\")\n", " >>> results = sacrebleu.compute(predictions=predictions,\n", " ... 
references=references)\n", " >>> print(list(results.keys()))\n", " ['score', 'counts', 'totals', 'precisions', 'bp', 'sys_len', 'ref_len']\n", " >>> print(round(results[\"score\"], 1))\n", " 39.8\n", "\"\"\", stored examples: 0)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "metric" ] }, { "cell_type": "markdown", "metadata": { "id": "jAWdqcUBIrJC" }, "source": [ "You can call its `compute` method with your predictions and labels, which need to be lists of decoded strings (a list of lists for the labels):" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "6XN1Rq0aIrJC", "outputId": "a4405435-a8a9-41ff-9f79-a13077b587c7" }, "outputs": [ { "data": { "text/plain": [ "{'score': 0.0,\n", " 'counts': [4, 2, 0, 0],\n", " 'totals': [4, 2, 0, 0],\n", " 'precisions': [100.0, 100.0, 0.0, 0.0],\n", " 'bp': 1.0,\n", " 'sys_len': 4,\n", " 'ref_len': 4}" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fake_preds = [\"hello there\", \"general kenobi\"]\n", "fake_labels = [[\"hello there\"], [\"general kenobi\"]]\n", "metric.compute(predictions=fake_preds, references=fake_labels)" ] }, { "cell_type": "markdown", "metadata": { "id": "n9qywopnIrJH" }, "source": [ "## Preprocessing the data" ] }, { "cell_type": "markdown", "metadata": { "id": "YVx71GdAIrJH" }, "source": [ "Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs that the model requires.\n", "\n", "To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:\n", "\n", "- we get a tokenizer that corresponds to the model architecture we want to use,\n", "- we download the vocabulary used when pretraining this specific checkpoint.\n", "\n", "That vocabulary will be cached, so it's not downloaded again the next time we run the cell." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "id": "eXNLu_-nIrJI" }, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "\n", "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you are using an mBART checkpoint (not the case here, since we picked a Marian model), you need to set the source and target languages so the texts are preprocessed properly. You can check the language codes [here](https://huggingface.co/facebook/mbart-large-cc25) if you are using this notebook on a different pair of languages." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "if \"mbart\" in model_checkpoint:\n", " tokenizer.src_lang = \"en_XX\"\n", " tokenizer.tgt_lang = \"ro_RO\"" ] }, { "cell_type": "markdown", "metadata": { "id": "Vl6IidfdIrJK" }, "source": [ "By default, the call above will use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library."
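] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (this cell is optional and safe to skip), the tokenizer's `is_fast` attribute tells us whether we actually got one of those fast tokenizers:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Should be True for a fast (Rust-backed) tokenizer\n", "tokenizer.is_fast"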
] }, { "cell_type": "markdown", "metadata": { "id": "rowT4iCLIrJK" }, "source": [ "You can directly call this tokenizer on one sentence or a pair of sentences:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "id": "a5hBlsrHIrJL", "outputId": "acdaa98a-a8cd-4a20-89b8-cc26437bbe90" }, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [4708, 2, 69, 28, 9, 8662, 84, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer(\"Hello, this is a sentence!\")" ] }, { "cell_type": "markdown", "metadata": { "id": "qo_0B1M2IrJM" }, "source": [ "Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later); you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.\n", "\n", "Instead of one sentence, we can pass along a list of sentences:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [[4708, 2, 69, 28, 9, 8662, 84, 0], [188, 28, 823, 8662, 3, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]}" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer([\"Hello, this is a sentence!\", \"This is another sentence.\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To prepare the targets for our model, we need to tokenize them inside the `as_target_tokenizer` context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'input_ids': [[14232, 244, 2, 69, 160, 6, 9, 10513, 1101, 84, 0], [13486, 6, 160, 6, 3778, 4853, 10513, 1101, 3, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}\n" ] } ], "source": [ "with tokenizer.as_target_tokenizer():\n", " print(tokenizer([\"Hello, this is a sentence!\", \"This is another sentence.\"]))" ] }, { "cell_type": "markdown", "metadata": { "id": "2C0hcmp9IrJQ" }, "source": [ "If you are using one of the five T5 checkpoints that require a special prefix to put before the inputs, you should adapt the following cell." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "if model_checkpoint in [\"t5-small\", \"t5-base\", \"t5-large\", \"t5-3b\", \"t5-11b\"]:\n", " prefix = \"translate English to Romanian: \"\n", "else:\n", " prefix = \"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that any input longer than what the selected model can handle is truncated to the maximum length accepted by the model. The padding will be dealt with later on (in a data collator) so we pad examples to the longest length in the batch and not the whole dataset."
] }, { "cell_type": "code", "execution_count": 16, "metadata": { "id": "vc0BSBLIIrJQ" }, "outputs": [], "source": [ "max_input_length = 128\n", "max_target_length = 128\n", "source_lang = \"en\"\n", "target_lang = \"ro\"\n", "\n", "\n", "def preprocess_function(examples):\n", " inputs = [prefix + ex[source_lang] for ex in examples[\"translation\"]]\n", " targets = [ex[target_lang] for ex in examples[\"translation\"]]\n", " model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)\n", "\n", " # Setup the tokenizer for targets\n", " with tokenizer.as_target_tokenizer():\n", " labels = tokenizer(targets, max_length=max_target_length, truncation=True)\n", "\n", " model_inputs[\"labels\"] = labels[\"input_ids\"]\n", " return model_inputs" ] }, { "cell_type": "markdown", "metadata": { "id": "0lm8ozrJIrJR" }, "source": [ "This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "id": "-b70jh26IrJS", "outputId": "acd3a42d-985b-44ee-9daa-af5d944ce1d9" }, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [[37284, 8, 949, 37, 358, 31483, 0], [32818, 8, 31483, 8, 2541, 7910, 37, 358, 31483, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], 'labels': [[1163, 8008, 7037, 26971, 37, 9, 56, 16836, 9026, 226, 15, 33834, 0], [67, 16852, 791, 9026, 896, 15, 33834, 111, 10795, 9351, 26549, 11114, 37, 9, 56, 16836, 9026, 226, 15, 33834, 0]]}" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "preprocess_function(raw_datasets[\"train\"][:2])" ] }, { "cell_type": "markdown", "metadata": { "id": "zS-6iXTkIrJT" }, "source": [ "To apply this function on all the pairs of sentences in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "id": "DDtsaJeVIrJT", "outputId": "aa4734bf-4ef5-4437-9948-2c16363da719" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/28ebdf8cf22106c2f1e58b2083d4b103608acd7bfdb6b14313ccd9e5bc8c313a/cache-f1b4cc7f6a817a09.arrow\n", "Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/28ebdf8cf22106c2f1e58b2083d4b103608acd7bfdb6b14313ccd9e5bc8c313a/cache-2dcbdf92c911af2a.arrow\n", "Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/28ebdf8cf22106c2f1e58b2083d4b103608acd7bfdb6b14313ccd9e5bc8c313a/cache-34490b3ad1e70b86.arrow\n" ] } ], "source": [ "tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "voWiw8C7IrJV" }, "source": [ "Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus requires to not use the cache data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 
🤗 Datasets warns you when it uses cached files, you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.\n", "\n", "Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently." ] }, { "cell_type": "markdown", "metadata": { "id": "545PP3o8IrJV" }, "source": [ "## Fine-tuning the model" ] }, { "cell_type": "markdown", "metadata": { "id": "FBiW8UpKIrJW" }, "source": [ "Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is of the sequence-to-sequence kind, we use the `AutoModelForSeq2SeqLM` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "id": "TlqNaB8jIrJW", "outputId": "84916cf3-6e6c-47f3-d081-032ec30a4132" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2022-07-25 17:49:51.571462: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-25 17:49:51.577820: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-25 17:49:51.578841: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-25 17:49:51.580434: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n", "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2022-07-25 17:49:51.583246: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-25 17:49:51.583929: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-25 17:49:51.584582: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-25 17:49:51.938374: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-25 17:49:51.939080: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-25 17:49:51.939739: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-25 17:49:51.940364: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21659 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:21:00.0, compute capability: 8.6\n", "2022-07-25 17:49:53.116600: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.\n", "All model checkpoint layers were used when initializing TFMarianMTModel.\n", "\n", "All the layers of TFMarianMTModel were initialized from the model checkpoint at Helsinki-NLP/opus-mt-en-ROMANCE.\n", "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFMarianMTModel for predictions without further training.\n" ] } ], "source": [ "from transformers import TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq\n", "\n", "model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)" ] }, { "cell_type": "markdown", "metadata": { "id": "CczA5lJlIrJX" }, "source": [ "Note that we don't get a warning like in our classification example. This means we used all the weights of the pretrained model and there is no randomly initialized head in this case." ] }, { "cell_type": "markdown", "metadata": { "id": "_N8urzhyIrJY" }, "source": [ "Next we set some parameters like the learning rate and the `batch_size` and customize the weight decay.\n", "\n", "The last two variables are there to set everything up so we can push the model to the [Hub](https://huggingface.co/models) at the end of training. Remove them if you didn't follow the installation steps at the top of the notebook; otherwise you can change the value of `push_to_hub_model_id` to something you would prefer." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "id": "Bliy8zgjIrJY" }, "outputs": [], "source": [ "batch_size = 16\n", "learning_rate = 2e-5\n", "weight_decay = 0.01\n", "num_train_epochs = 1\n", "\n", "model_name = model_checkpoint.split(\"/\")[-1]\n", "push_to_hub_model_id = f\"{model_name}-finetuned-{source_lang}-to-{target_lang}\"" ] }, { "cell_type": "markdown", "metadata": { "id": "km3pGVdTIrJc" }, "source": [ "Then, we need a special kind of data collator, which will not only pad the inputs to the maximum length in the batch, but also the labels. Note that our data collators are designed to work for multiple frameworks, so ensure you set the `return_tensors='np'` argument to get NumPy arrays out - you don't want to accidentally get a load of `torch.Tensor` objects in the middle of your nice TF code! You could also use `return_tensors='tf'` to get TensorFlow tensors, but our TF dataset pipeline actually uses a NumPy loader internally, which is wrapped at the end with a `tf.data.Dataset`. As a result, `np` is usually more reliable and performant when you're using it!"
] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors=\"np\")\n", "\n", "generation_data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors=\"np\", pad_to_multiple_of=128)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we convert our datasets to `tf.data.Dataset`, which Keras understands natively. There are two ways to do this - we can use the slightly more low-level [`Dataset.to_tf_dataset()`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.to_tf_dataset) method, or we can use [`Model.prepare_tf_dataset()`](https://huggingface.co/docs/transformers/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset). The main difference between these two is that the `Model` method can inspect the model to determine which column names it can use as input, which means you don't need to specify them yourself. Make sure to specify the collator we just created as our `collate_fn`!\n", "\n", "We also want to compute `BLEU` metrics, which will require us to generate text from our model. To speed things up, we can compile our generation loop with XLA. This results in a *huge* speedup - up to 100X! The downside of XLA generation, though, is that it doesn't like variable input shapes, because it needs to run a new compilation for each new input shape! To compensate for that, let's use `pad_to_multiple_of` for the dataset we use for text generation. This will reduce the number of unique input shapes a lot, meaning we can get the benefits of XLA generation with only a few compilations." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "train_dataset = model.prepare_tf_dataset(\n", " tokenized_datasets[\"train\"],\n", " batch_size=batch_size,\n", " shuffle=True,\n", " collate_fn=data_collator,\n", ")\n", "\n", "validation_dataset = model.prepare_tf_dataset(\n", " tokenized_datasets[\"validation\"],\n", " batch_size=batch_size,\n", " shuffle=False,\n", " collate_fn=data_collator,\n", ")\n", "\n", "generation_dataset = model.prepare_tf_dataset(\n", " tokenized_datasets[\"validation\"],\n", " batch_size=8,\n", " shuffle=False,\n", " collate_fn=generation_data_collator,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we initialize our loss and optimizer and compile the model. 
Note that most Transformers models compute loss internally, so we can just leave the loss argument blank to use the internal loss instead. For the optimizer, we can use the `AdamWeightDecay` optimizer in the Transformer library." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.\n" ] } ], "source": [ "from transformers import AdamWeightDecay\n", "import tensorflow as tf\n", "\n", "optimizer = AdamWeightDecay(learning_rate=learning_rate, weight_decay_rate=weight_decay)\n", "model.compile(optimizer=optimizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can train our model. We can also add a few optional callbacks here, which you can remove if they aren't useful to you. In no particular order, these are:\n", "- PushToHubCallback will sync up our model with the Hub - this allows us to resume training from other machines, share the model after training is finished, and even test the model's inference quality midway through training!\n", "- TensorBoard is a built-in Keras callback that logs TensorBoard metrics.\n", "- KerasMetricCallback is a callback for computing advanced metrics. There are a number of common metrics in NLP like ROUGE which are hard to fit into your compiled training loop because they depend on decoding predictions and labels back to strings with the tokenizer, and calling arbitrary Python functions to compute the metric. The KerasMetricCallback will wrap a metric function, outputting metrics as training progresses.\n", "\n", "If this is the first time you've seen `KerasMetricCallback`, it's worth explaining what exactly is going on here. The callback takes two main arguments - a `metric_fn` and an `eval_dataset`. It then iterates over the `eval_dataset` and collects the model's outputs for each sample, before passing the `list` of predictions and the associated `list` of labels to the user-defined `metric_fn`. If the `predict_with_generate` argument is `True`, then it will call `model.generate()` for each input sample instead of `model.predict()` - this is useful for metrics that expect generated text from the model, like `ROUGE` and `BLEU`.\n", "\n", "This callback allows complex metrics to be computed each epoch that would not function as a standard Keras Metric. Metric values are printed each epoch, and can be used by other callbacks like `TensorBoard` or `EarlyStopping`." 
] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "from transformers.keras_callbacks import KerasMetricCallback\n", "import numpy as np\n", "\n", "\n", "def metric_fn(eval_predictions):\n", " preds, labels = eval_predictions\n", " prediction_lens = [\n", " np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds\n", " ]\n", " decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)\n", "\n", " # We use -100 to mask labels - replace it with the tokenizer pad token when decoding\n", " # so that no output is emitted for these\n", " labels = np.where(labels != -100, labels, tokenizer.pad_token_id)\n", " decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\n", "\n", " # Some simple post-processing\n", " decoded_preds = [pred.strip() for pred in decoded_preds]\n", " decoded_labels = [[label.strip()] for label in decoded_labels]\n", "\n", " result = metric.compute(predictions=decoded_preds, references=decoded_labels)\n", " result = {\"bleu\": result[\"score\"]}\n", " result[\"gen_len\"] = np.mean(prediction_lens)\n", " return result\n", "\n", "\n", "metric_callback = KerasMetricCallback(\n", " metric_fn=metric_fn, eval_dataset=generation_dataset, predict_with_generate=True, use_xla_generation=True, \n", " generate_kwargs={\"max_length\": 128}\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the metric callback ready, now we can specify the other callbacks and fit our model:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "scrolled": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/matt/PycharmProjects/notebooks/examples/translation_model_save is already a clone of https://huggingface.co/Rocketknight1/opus-mt-en-ROMANCE-finetuned-en-to-ro. Make sure you pull the latest changes with `repo.git_pull()`.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 6/38145 [..............................] - ETA: 56:25 - loss: 5.2187WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0788s vs `on_train_batch_end` time: 0.1046s). Check your callbacks.\n", "38145/38145 [==============================] - ETA: 0s - loss: 0.7140" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2022-07-25 18:43:16.811498: I tensorflow/compiler/xla/service/service.cc:170] XLA service 0x5633dc97b3a0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n", "2022-07-25 18:43:16.811529: I tensorflow/compiler/xla/service/service.cc:178] StreamExecutor device (0): NVIDIA GeForce RTX 3090, Compute Capability 8.6\n", "2022-07-25 18:43:16.943241: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:263] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\n", "2022-07-25 18:43:17.816234: I tensorflow/compiler/xla/service/dynamic_dimension_inference.cc:965] Reshaping a dynamic dimension into a scalar, which has undefined behavior when input size is 0. The offending instruction is: %reshape.41 = s32[] reshape(s32[<=1]{0} %set-dimension-size.3), metadata={op_type=\"Equal\" op_name=\"cond/while/map/while/map/while/cond/cond/Equal\" source_file=\"/home/matt/PycharmProjects/transformers/src/transformers/generation_tf_logits_process.py\" source_line=351}\n", "2022-07-25 18:43:25.655895: I tensorflow/compiler/jit/xla_compilation_cache.cc:478] Compiled cluster using XLA! 
This line is logged at most once for the lifetime of the process.\n", "2022-07-25 18:45:51.416864: I tensorflow/compiler/xla/service/dynamic_dimension_inference.cc:965] Reshaping a dynamic dimension into a scalar, which has undefined behavior when input size is 0. The offending instruction is: %reshape.41 = s32[] reshape(s32[<=1]{0} %set-dimension-size.3), metadata={op_type=\"Equal\" op_name=\"cond/while/map/while/map/while/cond/cond/Equal\" source_file=\"/home/matt/PycharmProjects/transformers/src/transformers/generation_tf_logits_process.py\" source_line=351}\n", "Several commits (2) will be pushed upstream.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r", "38145/38145 [==============================] - 3382s 88ms/step - loss: 0.7140 - val_loss: 1.2757 - bleu: 26.7914 - gen_len: 41.4932\n" ] }, { "data": { "text/plain": [ "<keras.callbacks.History at 0x7f4af02c52d0>" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from transformers.keras_callbacks import PushToHubCallback\n", "from tensorflow.keras.callbacks import TensorBoard\n", "\n", "tensorboard_callback = TensorBoard(log_dir=\"./translation_model_save/logs\")\n", "\n", "push_to_hub_callback = PushToHubCallback(\n", " output_dir=\"./translation_model_save\",\n", " tokenizer=tokenizer,\n", " hub_model_id=push_to_hub_model_id,\n", ")\n", "\n", "callbacks = [metric_callback, tensorboard_callback, push_to_hub_callback]\n", "\n", "model.fit(\n", " train_dataset, validation_data=validation_dataset, epochs=1, callbacks=callbacks\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you used the callback above, you can now share this model with all your friends, family or favorite pets: they can all load it with the identifier `\"your-username/the-name-you-picked\"` so for instance:\n", "\n", "```python\n", "from transformers import TFAutoModelForSeq2SeqLM\n", "\n", "model = TFAutoModelForSeq2SeqLM.from_pretrained(\"your-username/my-awesome-model\")\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Inference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we've trained our model, let's see how we could load it and use it to translate text in future! First, let's load it from the hub. This means we can resume the code from here without needing to rerun everything above every time." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "2ad4b75e5c8f442d8f1492d46d934506", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading tokenizer_config.json: 0%| | 0.00/551 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a32e7310f642425991fa40022620c199", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading source.spm: 0%| | 0.00/761k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "d852fec625c7450fb8d284ca6518f413", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading target.spm: 0%| | 0.00/780k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "f45af3274b5c4e50a68ee2c77f6af94a", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading vocab.json: 0%| | 0.00/1.51M [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "bb06b7aa33f54896950a8d1eb1245c21", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading special_tokens_map.json: 0%| | 0.00/74.0 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "26017a377a3d4a04add74085af38ebf4", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading config.json: 0%| | 0.00/1.47k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "5ab4277b7cc24e9d8ebbdc0280c6bda8", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading tf_model.h5: 0%| | 0.00/298M [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "2022-07-26 17:56:38.238360: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-26 17:56:38.275342: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-26 17:56:38.276357: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-26 17:56:38.278209: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n", "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2022-07-26 17:56:38.309314: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-26 17:56:38.310572: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA 
node zero\n", "2022-07-26 17:56:38.311790: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-26 17:56:39.033228: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-26 17:56:39.033908: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-26 17:56:39.034535: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-26 17:56:39.035152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21719 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:21:00.0, compute capability: 8.6\n", "2022-07-26 17:56:40.631257: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.\n", "All model checkpoint layers were used when initializing TFMarianMTModel.\n", "\n", "All the layers of TFMarianMTModel were initialized from the model checkpoint at Rocketknight1/opus-mt-en-ROMANCE-finetuned-en-to-ro.\n", "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFMarianMTModel for predictions without further training.\n" ] } ], "source": [ "from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM\n", "\n", "# You can of course substitute your own username and model here if you've trained and uploaded it!\n", "model_name = 'Rocketknight1/opus-mt-en-ROMANCE-finetuned-en-to-ro'\n", "tokenizer = AutoTokenizer.from_pretrained(model_name)\n", "model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's try tokenizing some text and passing it to the model to generate a translation. Don't forget to add the \"translate: \" string at the start if you're using a `T5` model." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[65000 642 1204 5 12648 35 26792 415 36773 5031 11008 208\n", " 2 1019 203 2836 600 229 15032 3796 13286 226 3 0\n", " 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000\n", " 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000\n", " 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000\n", " 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000\n", " 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000\n", " 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000\n", " 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000\n", " 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000 65000\n", " 65000 65000 65000 65000 65000 65000 65000 65000]], shape=(1, 128), dtype=int32)\n" ] } ], "source": [ "input_text = \"I'm not actually a very competent Romanian speaker, but let's try our best.\"\n", "if 't5' in model_name: \n", " input_text = \"translate English to Romanian: \" + input_text\n", "tokenized = tokenizer([input_text], return_tensors='np')\n", "out = model.generate(**tokenized, max_length=128)\n", "print(out)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, that's some tokens and a lot of padding! Let's decode those to see what it says, using the `skip_special_tokens` argument to skip those padding tokens:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Nu sunt de fapt un vorbitor român foarte competent, dar haideţi să facem tot posibilul.\n" ] } ], "source": [ "with tokenizer.as_target_tokenizer():\n", " print(tokenizer.decode(out[0], skip_special_tokens=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is the point where I start wishing I'd done this example in a language I actually speak. Still, it looks good! Probably!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using XLA in inference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you just want to generate a few translations, the code above is all you need. However, generation can be **much** faster if you use XLA, and if you want to generate data in bulk, you should probably use it! If you're using XLA, though, remember that you'll need to do a new XLA compilation for every input size you pass to the model. This means that you should keep your batch size constant, and consider padding inputs to the same length, or using `pad_to_multiple_of` in your tokenizer to reduce the number of different input shapes you pass. Let's show an example of that:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2022-07-26 18:19:44.757209: I tensorflow/compiler/xla/service/dynamic_dimension_inference.cc:965] Reshaping a dynamic dimension into a scalar, which has undefined behavior when input size is 0. 
The offending instruction is: %reshape.41 = s32[] reshape(s32[<=1]{0} %set-dimension-size.3), metadata={op_type=\"Equal\" op_name=\"cond/while/map/while/map/while/cond/cond/Equal\" source_file=\"/home/matt/PycharmProjects/transformers/src/transformers/generation_tf_logits_process.py\" source_line=351}\n" ] } ], "source": [ "import tensorflow as tf\n", "\n", "@tf.function(jit_compile=True)\n", "def generate(inputs):\n", " return model.generate(**inputs, max_length=128)\n", "\n", "tokenized_data = tokenizer([input_text], return_tensors=\"np\", pad_to_multiple_of=128)\n", "out = generate(tokenized_data)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Nu sunt de fapt un vorbitor român foarte competent, dar haideţi să facem tot posibilul.\n" ] } ], "source": [ "with tokenizer.as_target_tokenizer():\n", " print(tokenizer.decode(out[0], skip_special_tokens=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pipeline API" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The pipeline API offers a convenient shortcut for all of this, but doesn't (yet!) support XLA generation:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "All model checkpoint layers were used when initializing TFMarianMTModel.\n", "\n", "All the layers of TFMarianMTModel were initialized from the model checkpoint at Rocketknight1/opus-mt-en-ROMANCE-finetuned-en-to-ro.\n", "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFMarianMTModel for predictions without further training.\n" ] } ], "source": [ "from transformers import pipeline\n", "\n", "translator = pipeline('text2text-generation', model_name, framework=\"tf\")" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'generated_text': 'Nu sunt de fapt un vorbitor român foarte competent, dar haideţi să facem tot posibilul.'}]" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "translator(input_text, max_length=128)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Easy!" ] } ], "metadata": { "colab": { "name": "Translation", "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.3" } }, "nbformat": 4, "nbformat_minor": 1 }