{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "X4cRE8IbIrIV" }, "source": [ "If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets as well as other dependencies. Uncomment the following cell and run it. Note the `rouge-score` and `nltk` dependencies - even if you've used 🤗 Transformers before, you may not have these installed!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "MOsHUjgdIrIW", "outputId": "f84a093e-147f-470e-aad9-80fb51193c8e" }, "outputs": [], "source": [ "#! pip install transformers datasets\n", "#! pip install rouge-score nltk\n", "#! pip install huggingface_hub" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're opening this notebook locally, make sure your environment has an install from the last version of those libraries.\n", "\n", "To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.\n", "\n", "First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then run the following cell and input your token:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from huggingface_hub import notebook_login\n", "\n", "notebook_login()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then you need to install Git-LFS and setup Git if you haven't already. Uncomment the following instructions and adapt with your name and email:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !apt install git-lfs\n", "# !git config --global user.email \"you@example.com\"\n", "# !git config --global user.name \"Your Name\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make sure your version of Transformers is at least 4.16.0 since some of the functionality we use was introduced in that version:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4.21.0.dev0\n" ] } ], "source": [ "import transformers\n", "\n", "print(transformers.__version__)" ] }, { "cell_type": "markdown", "metadata": { "id": "HFASsisvIrIb" }, "source": [ "You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers.utils import send_example_telemetry\n", "\n", "send_example_telemetry(\"summarization_notebook\", framework=\"tensorflow\")" ] }, { "cell_type": "markdown", "metadata": { "id": "rEJBSTyZIrIb" }, "source": [ "# Fine-tuning a model on a summarization task" ] }, { "cell_type": "markdown", "metadata": { "id": "kTCFado4IrIc" }, "source": [ "In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) model for a summarization task. We will use the [XSum dataset](https://arxiv.org/pdf/1808.08745.pdf) (for extreme summarization) which contains BBC articles accompanied with single-sentence summaries.\n", "\n", "![Widget inference on a summarization task](images/summarization.png)\n", "\n", "We will see how to easily load the dataset for this task using 🤗 Datasets and how to fine-tune a model on it using Keras." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "model_checkpoint = \"t5-small\"" ] }, { "cell_type": "markdown", "metadata": { "id": "4RRkXuteIrIh" }, "source": [ "This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we pick the [`t5-small`](https://huggingface.co/t5-small) checkpoint. " ] }, { "cell_type": "markdown", "metadata": { "id": "whPRbBNbIrIl" }, "source": [ "## Loading the dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "W7QYTpxXIrIl" }, "source": [ "We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`. 
" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "IreSlFmlIrIm" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using custom data configuration default\n", "Reusing dataset xsum (/home/matt/.cache/huggingface/datasets/xsum/default/1.2.0/32c23220eadddb1149b16ed2e9430a05293768cfffbdfd151058697d4c11f934)\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "060b5fe0604e4646aee58b876ad50d70", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/3 [00:00\n", " \n", " \n", " \n", " document\n", " summary\n", " id\n", " \n", " \n", " \n", " \n", " 0\n", " South Wales Police alleged Detective Sergeant Andrew Whelan failed to investigate Crimestoppers and witness complaints about Watkins in 2010.\\nWatkins was sentenced to 35 years in 2013 after admitting a string of child sex offences.\\nA tribunal panel found it \"inappropriate\" to make any findings of misconduct against the officer.\\nDet Sgt Whelan was accused of failing to act on information about Watkins given to Crimestoppers three times between March and October 2010.\\nJonathan Walters, for South Wales Police, told the tribunal: \"You failed to take any action in relation to those logs.\"\\nThe second allegation was that Det Sgt Whelan failed to react to a complaint from an unnamed member of the public in October 2010 that Watkins had \"child pornography on his computer and that he fantasised about abusing children\".\\nJohn Beggs QC, representing Det Sgt Whelan, described him as an \"exemplary officer\", who was dedicated to his duty and had an \"unblemished\" 25-year career.\\n\"He had six years of unrelenting hell where he worked many more hours than he was being paid, and was frequently working 18 hour days,\" he said.\\n\"This was an isolated out-of-character incident in difficult circumstances.\"\\nTribunal panel chairman Robert Vernon said: \"It was an uncharacteristic lapse of judgement from an officer who was otherwise carrying out his duties in a diligent and professional manner.\\n\"It was a decision taken in very difficult circumstances, bearing in mind his workload and professional and personal circumstances at that time.\"\n", " A detective who investigated paedophile Lostprophets singer Ian Watkins has been cleared of misconduct.\n", " 39862615\n", " \n", " \n", " 1\n", " Extensive discussions between leading Fleet Street executives over the past several months have been aimed at combating the structural decline of the market, as annual display advertising diminishes by around 20% and circulations continue their downward trajectory.\\nThe talks have been hampered by personality clashes, the competing priorities of different groups, and the sheer novelty of companies whose commercial operations had hitherto been aggressive rivals, trying instead to co-operate for mutual advantage.\\nProject Rio will continue, with News UK, Telegraph Media Group, and the owner of The Guardian still trying to collaborate.\\nDMGT, the publisher of the Daily Mail, pulled out of Project Rio in January, saying it had \"stepped back\" to pursue \"broader commercial priorities\" in 2017. 
These include the monetisation of Mail Online, which has expanded quickly into America and is a priority of Paul Zwillenberg, chief executive of DMGT.\\nOn 10 January I revealed that Trinity was in talks with Northern & Shell, the newspaper group run by Richard Desmond, and David Montgomery, the former newspaper executive and investor, about back-office consolidations.\\nMy recent conversations with very senior sources in the industry make clear that the failure to progress this work on consolidation, which could potentially reap huge savings, is a source of immense frustration to the parties involved.\\nFor Trinity, focusing on that consolidation is a higher priority than Project Rio.\\nBut with the flight of advertising from print to digital accelerating, and Facebook and Google tightening their grip on that money, newspapers are struggling to make enough money from their websites to offset the loss of money from print, due to structural decline.\\nEven after the closure of the print Independent (of which I was editor) last year, Britain's newspaper sector is very full - and arguably over-supplied - for a country with our population size.\\nAs this blog has repeatedly argued, bloated sectors facing structural decline are bound to consolidate. For Fleet Street, it's a question not of when, but how.\\nI've been sent a statement on behalf of Telegraph Media Group, Guardian Media Group and News UK.\\nIt says: \"Telegraph Media Group, Guardian Media Group and News UK today confirmed their continued commitment to working together to significantly improve the commercial value and perception of the news brands in the UK.\\n\"Trinity Mirror have confirmed that they will step aside from involvement in the next phase of the project, whilst wishing it well and reserving an option to rejoin at a later stage.\\n\"The three partners are working with market-leading consultancies on building the right approach to ensure that the industry continues to evolve to service key client and agency needs.\"\n", " I can reveal that Trinity Mirror, publisher of the Mirror titles, has pulled out of talks to create a joint advertising initiative across Britain's national newspaper industry.\n", " 39081377\n", " \n", " \n", " 2\n", " The pair are both available for Saturday's FA Cup quarter-final against 12-times winners Arsenal.\\nCalder, 21, spent the first half of the season on loan at Doncaster, scoring once in 20 appearances.\\nEtheridge, 22, started his career at Derby and has made eight appearances for the League Two leaders this season.\n", " Lincoln City have signed Aston Villa midfielder Riccardo Calder and Doncaster goalkeeper Ross Etheridge on loan until the end of the season.\n", " 39233337\n", " \n", " \n", " 3\n", " Westbound traffic has been diverted via the A48 Southern Distributor Road between M4 junctions 24 and 28.\\nThe westbound carriageway, between Junction 25a and Junction 26 of the M4, will reopen at 06:00 BST on Monday.\\nMotorists have been warned there is \"stop-start traffic\" on M4 Westbound between the Coldra (junction 24) and Caerleon (junction 25).\\nInrix traffic sensors are also showing slow traffic on the A48 Southern Distributor Road to the south of Newport.\\nBoth tunnels will be closed during the night occasionally as engineers upgrade the mechanical and electrical systems.\\nScheduled weekend closures for the Brynglas Tunnels on the M4:\\nWestbound:\\nEastbound:\n", " The westbound side of the M4's Brynglas Tunnels at Newport is closed on Sunday as its 18-month upgrade 
continues.\n", " 40317857\n", " \n", " \n", " 4\n", " The 28-year-old former Salisbury player joined Rovers in 2014 and has since made 20 first-team appearances.\\nHe played in the promotion final win against Grimsby which earned Rovers promotion back to the Football League.\\nPuddy joins Braintree, 22nd in non-league's top flight, as competition for Sam Beasant.\\nFind all the latest football transfers on our dedicated page.\n", " National League side Braintree Town have signed goalkeeper Will Puddy on a one-month loan deal from League One club Bristol Rovers.\n", " 37455713\n", " \n", " \n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_random_elements(raw_datasets[\"train\"])" ] }, { "cell_type": "markdown", "metadata": { "id": "lnjDIuQ3IrI-" }, "source": [ "The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "5o4rUteaIrI_", "outputId": "18038ef5-554c-45c5-e00a-133b02ec10f1" }, "outputs": [ { "data": { "text/plain": [ "Metric(name: \"rouge\", features: {'predictions': Value(dtype='string', id='sequence'), 'references': Value(dtype='string', id='sequence')}, usage: \"\"\"\n", "Calculates average rouge scores for a list of hypotheses and references\n", "Args:\n", " predictions: list of predictions to score. Each prediction\n", " should be a string with tokens separated by spaces.\n", " references: list of reference for each prediction. Each\n", " reference should be a string with tokens separated by spaces.\n", " rouge_types: A list of rouge types to calculate.\n", " Valid names:\n", " `\"rouge{n}\"` (e.g. `\"rouge1\"`, `\"rouge2\"`) where: {n} is the n-gram based scoring,\n", " `\"rougeL\"`: Longest common subsequence based scoring.\n", " `\"rougeLSum\"`: rougeLsum splits text using `\"\n", "\"`.\n", " See details in https://github.com/huggingface/datasets/issues/617\n", " use_stemmer: Bool indicating whether Porter stemmer should be used to strip word suffixes.\n", " use_aggregator: Return aggregates if this is set to True\n", "Returns:\n", " rouge1: rouge_1 (precision, recall, f1),\n", " rouge2: rouge_2 (precision, recall, f1),\n", " rougeL: rouge_l (precision, recall, f1),\n", " rougeLsum: rouge_lsum (precision, recall, f1)\n", "Examples:\n", "\n", " >>> rouge = datasets.load_metric('rouge')\n", " >>> predictions = [\"hello there\", \"general kenobi\"]\n", " >>> references = [\"hello there\", \"general kenobi\"]\n", " >>> results = rouge.compute(predictions=predictions, references=references)\n", " >>> print(list(results.keys()))\n", " ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']\n", " >>> print(results[\"rouge1\"])\n", " AggregateScore(low=Score(precision=1.0, recall=1.0, fmeasure=1.0), mid=Score(precision=1.0, recall=1.0, fmeasure=1.0), high=Score(precision=1.0, recall=1.0, fmeasure=1.0))\n", " >>> print(results[\"rouge1\"].mid.fmeasure)\n", " 1.0\n", "\"\"\", stored examples: 0)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "metric" ] }, { "cell_type": "markdown", "metadata": { "id": "jAWdqcUBIrJC" }, "source": [ "You can call its `compute` method with your predictions and labels, which need to be list of decoded strings:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "6XN1Rq0aIrJC", "outputId": "a4405435-a8a9-41ff-9f79-a13077b587c7" }, "outputs": [ { "data": { "text/plain": [ "{'rouge1': 
AggregateScore(low=Score(precision=1.0, recall=1.0, fmeasure=1.0), mid=Score(precision=1.0, recall=1.0, fmeasure=1.0), high=Score(precision=1.0, recall=1.0, fmeasure=1.0)),\n", " 'rouge2': AggregateScore(low=Score(precision=1.0, recall=1.0, fmeasure=1.0), mid=Score(precision=1.0, recall=1.0, fmeasure=1.0), high=Score(precision=1.0, recall=1.0, fmeasure=1.0)),\n", " 'rougeL': AggregateScore(low=Score(precision=1.0, recall=1.0, fmeasure=1.0), mid=Score(precision=1.0, recall=1.0, fmeasure=1.0), high=Score(precision=1.0, recall=1.0, fmeasure=1.0)),\n", " 'rougeLsum': AggregateScore(low=Score(precision=1.0, recall=1.0, fmeasure=1.0), mid=Score(precision=1.0, recall=1.0, fmeasure=1.0), high=Score(precision=1.0, recall=1.0, fmeasure=1.0))}" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fake_preds = [\"hello there\", \"general kenobi\"]\n", "fake_labels = [\"hello there\", \"general kenobi\"]\n", "metric.compute(predictions=fake_preds, references=fake_labels)" ] }, { "cell_type": "markdown", "metadata": { "id": "n9qywopnIrJH" }, "source": [ "## Preprocessing the data" ] }, { "cell_type": "markdown", "metadata": { "id": "YVx71GdAIrJH" }, "source": [ "Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer`, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs that the model requires.\n", "\n", "To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:\n", "\n", "- we get a tokenizer that corresponds to the model architecture we want to use,\n", "- we download the vocabulary used when pretraining this specific checkpoint.\n", "\n", "That vocabulary will be cached, so it's not downloaded again the next time we run the cell." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "id": "eXNLu_-nIrJI" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/matt/PycharmProjects/transformers/src/transformers/models/t5/tokenization_t5_fast.py:156: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.\n", "For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.\n", "- Be aware that you SHOULD NOT rely on t5-small automatically truncating your input to 512 when padding/encoding.\n", "- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.\n", "- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.\n", " warnings.warn(\n" ] } ], "source": [ "from transformers import AutoTokenizer\n", "\n", "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)" ] }, { "cell_type": "markdown", "metadata": { "id": "Vl6IidfdIrJK" }, "source": [ "By default, the call above will use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library."
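, "\n\nYou can check this yourself: tokenizers expose an `is_fast` attribute, so a quick optional sanity check looks like this:\n\n```python\n# Should print True for a Rust-backed \"fast\" tokenizer\nprint(tokenizer.is_fast)\n```"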
] }, { "cell_type": "markdown", "metadata": { "id": "rowT4iCLIrJK" }, "source": [ "You can directly call this tokenizer on one sentence or a pair of sentences:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "id": "a5hBlsrHIrJL", "outputId": "acdaa98a-a8cd-4a20-89b8-cc26437bbe90" }, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [8774, 6, 48, 19, 3, 9, 7142, 55, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer(\"Hello, this is a sentence!\")" ] }, { "cell_type": "markdown", "metadata": { "id": "qo_0B1M2IrJM" }, "source": [ "Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.\n", "\n", "Instead of one sentence, we can pass along a list of sentences:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [[8774, 6, 48, 19, 3, 9, 7142, 55, 1], [100, 19, 430, 7142, 5, 1]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]}" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer([\"Hello, this is a sentence!\", \"This is another sentence.\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To prepare the targets for our model, we need to tokenize them inside the `as_target_tokenizer` context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'input_ids': [[8774, 6, 48, 19, 3, 9, 7142, 55, 1], [100, 19, 430, 7142, 5, 1]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]}\n" ] } ], "source": [ "with tokenizer.as_target_tokenizer():\n", " print(tokenizer([\"Hello, this is a sentence!\", \"This is another sentence.\"]))" ] }, { "cell_type": "markdown", "metadata": { "id": "2C0hcmp9IrJQ" }, "source": [ "If you are using one of the five T5 checkpoints we have to prefix the inputs with \"summarize:\" (the model can also translate and it needs the prefix to know which task it has to perform)." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "if model_checkpoint in [\"t5-small\", \"t5-base\", \"t5-large\", \"t5-3b\", \"t5-11b\"]:\n", " prefix = \"summarize: \"\n", "else:\n", " prefix = \"\"" ] }, { "cell_type": "markdown", "metadata": { "id": "2C0hcmp9IrJQ" }, "source": [ "We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that an input longer that what the model selected can handle will be truncated to the maximum length accepted by the model. The padding will be dealt with later on (in a data collator) so we pad examples to the longest length in the batch and not the whole dataset." 
] }, { "cell_type": "code", "execution_count": 15, "metadata": { "id": "vc0BSBLIIrJQ" }, "outputs": [], "source": [ "max_input_length = 1024\n", "max_target_length = 128\n", "\n", "\n", "def preprocess_function(examples):\n", " inputs = [prefix + doc for doc in examples[\"document\"]]\n", " model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)\n", "\n", " # Setup the tokenizer for targets\n", " with tokenizer.as_target_tokenizer():\n", " labels = tokenizer(\n", " examples[\"summary\"], max_length=max_target_length, truncation=True\n", " )\n", "\n", " model_inputs[\"labels\"] = labels[\"input_ids\"]\n", " return model_inputs" ] }, { "cell_type": "markdown", "metadata": { "id": "0lm8ozrJIrJR" }, "source": [ "This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "id": "-b70jh26IrJS", "outputId": "acd3a42d-985b-44ee-9daa-af5d944ce1d9" }, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [[21603, 10, 37, 423, 583, 13, 1783, 16, 20126, 16496, 6, 80, 13, 8, 844, 6025, 4161, 6, 19, 341, 271, 14841, 5, 7057, 161, 19, 4912, 16, 1626, 5981, 11, 186, 7540, 16, 1276, 15, 2296, 7, 5718, 2367, 14621, 4161, 57, 4125, 387, 5, 15059, 7, 30, 8, 4653, 4939, 711, 747, 522, 17879, 788, 12, 1783, 44, 8, 15763, 6029, 1813, 9, 7472, 5, 1404, 1623, 11, 5699, 277, 130, 4161, 57, 18368, 16, 20126, 16496, 227, 8, 2473, 5895, 15, 147, 89, 22411, 139, 8, 1511, 5, 1485, 3271, 3, 21926, 9, 472, 19623, 5251, 8, 616, 12, 15614, 8, 1783, 5, 37, 13818, 10564, 15, 26, 3, 9, 3, 19513, 1481, 6, 18368, 186, 1328, 2605, 30, 7488, 1887, 3, 18, 8, 711, 2309, 9517, 89, 355, 5, 3966, 1954, 9233, 15, 6, 113, 293, 7, 8, 16548, 13363, 106, 14022, 84, 47, 14621, 4161, 6, 243, 255, 228, 59, 7828, 8, 1249, 18, 545, 11298, 1773, 728, 8, 8347, 1560, 5, 611, 6, 255, 243, 72, 1709, 1528, 161, 228, 43, 118, 4006, 91, 12, 766, 8, 3, 19513, 1481, 410, 59, 5124, 5, 96, 196, 17, 19, 1256, 68, 27, 103, 317, 132, 19, 78, 231, 23546, 21, 970, 51, 89, 2593, 11, 8, 2504, 189, 3, 18, 11, 27, 3536, 3653, 24, 3, 18, 68, 34, 19, 966, 114, 62, 31, 60, 23708, 42, 11821, 976, 255, 243, 5, 96, 11880, 164, 59, 36, 1176, 68, 34, 19, 2361, 82, 3503, 147, 8, 336, 360, 477, 5, 96, 17891, 130, 25, 59, 1065, 12, 199, 178, 3, 9, 720, 72, 116, 8, 6337, 11, 8, 6196, 5685, 7, 141, 2767, 91, 4609, 7940, 6, 3, 9, 8347, 5685, 3048, 16, 286, 640, 8, 17600, 7, 250, 13, 8, 3917, 3412, 5, 1276, 15, 2296, 7, 47, 14621, 1560, 57, 982, 6, 13233, 53, 3088, 12, 4277, 72, 13613, 7, 16, 8, 616, 5, 12580, 17600, 7, 2063, 65, 474, 3, 9, 570, 30, 165, 475, 13, 8, 7540, 6025, 4161, 11, 3863, 43, 118, 3, 19492, 59, 12, 9751, 12493, 3957, 5, 37, 16117, 3450, 31, 7, 21108, 12580, 2488, 5104, 11768, 1306, 47, 16, 1626, 5981, 30, 2089, 12, 217, 8, 1419, 166, 609, 5, 216, 243, 34, 47, 359, 12, 129, 8, 8347, 1711, 515, 269, 68, 3, 9485, 3088, 12, 1634, 95, 8, 433, 5, 96, 196, 47, 882, 1026, 3, 9, 1549, 57, 8, 866, 13, 1783, 24, 65, 118, 612, 976, 3, 88, 243, 5, 96, 14116, 34, 19, 842, 18, 18087, 21, 151, 113, 43, 118, 5241, 91, 13, 70, 2503, 11, 8, 1113, 30, 1623, 535, 216, 243, 34, 47, 359, 24, 96, 603, 5700, 342, 2245, 121, 130, 1026, 12, 1822, 8, 844, 167, 9930, 11, 3, 9, 964, 97, 3869, 474, 16, 286, 21, 8347, 9793, 1390, 5, 2114, 25, 118, 4161, 57, 18368, 16, 970, 51, 89, 2593, 11, 10987, 32, 1343, 42, 8, 17600, 7, 58, 8779, 178, 81, 39, 351, 13, 8, 1419, 11, 149, 34, 47, 10298, 5, 8601, 178, 30, 142, 40, 157, 
12546, 5, 15808, 1741, 115, 115, 75, 5, 509, 5, 1598, 42, 146, 51, 89, 2593, 1741, 115, 115, 75, 5, 509, 5, 1598, 5, 1], [21603, 10, 71, 1472, 6196, 877, 326, 44, 8, 9108, 86, 29, 16, 6000, 1887, 44, 81, 11484, 10, 1755, 272, 4209, 30, 1856, 11, 2554, 130, 1380, 12, 1175, 8, 1595, 5, 282, 79, 3, 9094, 1067, 79, 1509, 8, 192, 14264, 6, 3, 16669, 596, 18, 969, 18, 1583, 16, 8, 443, 2447, 6, 3, 35, 6106, 19565, 57, 12314, 7, 5, 555, 13, 8, 1552, 1637, 19, 45, 3434, 6, 8, 119, 45, 1473, 11, 14441, 5, 94, 47, 70, 166, 706, 16, 5961, 5316, 5, 37, 2535, 13, 80, 13, 8, 14264, 243, 186, 13, 8, 9234, 141, 646, 525, 12770, 7, 30, 1476, 11, 175, 141, 118, 10932, 5, 2867, 1637, 43, 13666, 3709, 11210, 11, 56, 1731, 70, 1552, 13, 8, 3457, 4939, 865, 145, 79, 141, 4355, 5, 5076, 43, 3958, 15, 26, 21, 251, 81, 8, 3211, 5, 86, 7, 102, 1955, 24723, 243, 10, 96, 196, 17, 3475, 38, 713, 8, 1472, 708, 365, 80, 13, 8, 14264, 274, 16436, 12, 8, 511, 5, 96, 27674, 8, 2883, 1137, 19, 341, 365, 4962, 6, 34, 19, 816, 24, 8, 1472, 47, 708, 24067, 535, 1]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], 'labels': [[7433, 18, 413, 2673, 33, 6168, 640, 8, 12580, 17600, 7, 11, 970, 51, 89, 2593, 11, 10987, 32, 1343, 227, 18368, 2953, 57, 16133, 4937, 5, 1], [2759, 8548, 14264, 43, 118, 10932, 57, 1472, 16, 3, 9, 18024, 1584, 739, 3211, 16, 27874, 690, 2050, 5, 1]]}" ] }, 
"execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "preprocess_function(raw_datasets[\"train\"][:2])" ] }, { "cell_type": "markdown", "metadata": { "id": "zS-6iXTkIrJT" }, "source": [ "To apply this function on all the pairs of sentences in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "id": "DDtsaJeVIrJT", "outputId": "aa4734bf-4ef5-4437-9948-2c16363da719" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/xsum/default/1.2.0/32c23220eadddb1149b16ed2e9430a05293768cfffbdfd151058697d4c11f934/cache-6884faaa11a4f3fa.arrow\n", "Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/xsum/default/1.2.0/32c23220eadddb1149b16ed2e9430a05293768cfffbdfd151058697d4c11f934/cache-891fbe6db8d3607d.arrow\n", "Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/xsum/default/1.2.0/32c23220eadddb1149b16ed2e9430a05293768cfffbdfd151058697d4c11f934/cache-e814de7328c711ab.arrow\n" ] } ], "source": [ "tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "voWiw8C7IrJV" }, "source": [ "Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus requires to not use the cache data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files, you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.\n", "\n", "Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently." ] }, { "cell_type": "markdown", "metadata": { "id": "545PP3o8IrJV" }, "source": [ "## Fine-tuning the model" ] }, { "cell_type": "markdown", "metadata": { "id": "FBiW8UpKIrJW" }, "source": [ "Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is sequence-to-sequence (both the input and output are text sequences), we use the `AutoModelForSeq2SeqLM` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us." 
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "id": "TlqNaB8jIrJW", "outputId": "84916cf3-6e6c-47f3-d081-032ec30a4132" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2022-07-22 18:59:10.209197: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-22 18:59:10.216531: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-22 18:59:10.217844: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-22 18:59:10.219660: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n", "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2022-07-22 18:59:10.222448: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-22 18:59:10.223132: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-22 18:59:10.223789: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-22 18:59:10.573580: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-22 18:59:10.574270: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-22 18:59:10.574920: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-22 18:59:10.575544: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21610 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:21:00.0, compute capability: 8.6\n", "2022-07-22 18:59:11.230832: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. 
This will only be logged once.\n", "All model checkpoint layers were used when initializing TFT5ForConditionalGeneration.\n", "\n", "All the layers of TFT5ForConditionalGeneration were initialized from the model checkpoint at t5-small.\n", "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.\n" ] } ], "source": [ "from transformers import TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq\n", "\n", "model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)" ] }, { "cell_type": "markdown", "metadata": { "id": "CczA5lJlIrJX" }, "source": [ "Note that we don't get a warning like in our classification example. This means we used all the weights of the pretrained model and there is no randomly initialized head in this case." ] }, { "cell_type": "markdown", "metadata": { "id": "_N8urzhyIrJY" }, "source": [ "Next, we set some parameters like the learning rate and the `batch_size`, and customize the weight decay.\n", "\n", "The last two variables set everything up so we can push the model to the [Hub](https://huggingface.co/models) at the end of training. Remove both of them if you didn't follow the installation steps at the top of the notebook; otherwise, you can change the value of `push_to_hub_model_id` to something you'd prefer." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "id": "Bliy8zgjIrJY" }, "outputs": [], "source": [ "batch_size = 8\n", "learning_rate = 2e-5\n", "weight_decay = 0.01\n", "num_train_epochs = 1\n", "\n", "model_name = model_checkpoint.split(\"/\")[-1]\n", "push_to_hub_model_id = f\"{model_name}-finetuned-xsum\"" ] }, { "cell_type": "markdown", "metadata": { "id": "km3pGVdTIrJc" }, "source": [ "Then, we need a special kind of data collator, which will not only pad the inputs to the maximum length in the batch, but also the labels. Note that our data collators are designed to work for multiple frameworks, so ensure you set the `return_tensors='np'` argument to get NumPy arrays out - you don't want to accidentally get a load of `torch.Tensor` objects in the middle of your nice TF code! You could also use `return_tensors='tf'` to get TensorFlow tensors, but our TF dataset pipeline actually uses a NumPy loader internally, which is wrapped at the end with a `tf.data.Dataset`. As a result, `np` is usually the more reliable and performant choice here.\n", "\n", "We also want to compute `ROUGE` metrics, which will require us to generate text from our model. To speed things up, we can compile our generation loop with XLA. This results in a *huge* speedup - up to 100X! The downside of XLA generation, though, is that it doesn't like variable input shapes, because it needs to run a new compilation for each new input shape! To compensate for that, let's use `pad_to_multiple_of` for the dataset we use for text generation. This will reduce the number of unique input shapes a lot, meaning we can get the benefits of XLA generation with only a few compilations."
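, "\n\nTo make the shape bucketing concrete: with `pad_to_multiple_of=128`, any batch whose longest sequence is between 1 and 128 tokens is padded to 128, between 129 and 256 to 256, and so on, so XLA only ever sees a handful of distinct shapes. A tiny sketch of the arithmetic (illustrative only, not part of the training code):\n\n```python\ndef padded_length(longest: int, multiple: int = 128) -> int:\n    # Round the longest sequence length in a batch up to the next multiple\n    return ((longest + multiple - 1) // multiple) * multiple\n\nprint(padded_length(97))   # 128\nprint(padded_length(200))  # 256\n```"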
] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors=\"np\")\n", "\n", "generation_data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors=\"np\", pad_to_multiple_of=128)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Dataset({\n", " features: ['document', 'summary', 'id', 'input_ids', 'attention_mask', 'labels'],\n", " num_rows: 204045\n", "})" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenized_datasets[\"train\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we convert our datasets to `tf.data.Dataset`, which Keras understands natively. There are two ways to do this - we can use the slightly more low-level [`Dataset.to_tf_dataset()`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.to_tf_dataset) method, or we can use [`Model.prepare_tf_dataset()`](https://huggingface.co/docs/transformers/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset). The main difference between these two is that the `Model` method can inspect the model to determine which column names it can use as input, which means you don't need to specify them yourself. Make sure to specify the collator we just created as our `collate_fn`!" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "train_dataset = model.prepare_tf_dataset(\n", " tokenized_datasets[\"train\"],\n", " batch_size=batch_size,\n", " shuffle=True,\n", " collate_fn=data_collator,\n", ")\n", "\n", "validation_dataset = model.prepare_tf_dataset(\n", " tokenized_datasets[\"validation\"],\n", " batch_size=batch_size,\n", " shuffle=False,\n", " collate_fn=data_collator,\n", ")\n", "\n", "generation_dataset = model.prepare_tf_dataset(\n", " tokenized_datasets[\"validation\"],\n", " batch_size=8,\n", " shuffle=False,\n", " collate_fn=generation_data_collator\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we initialize our loss and optimizer and compile the model. Note that most Transformers models compute loss internally - we can train on this as our loss value simply by not specifying a loss when we `compile()`." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.\n" ] } ], "source": [ "from transformers import AdamWeightDecay\n", "import tensorflow as tf\n", "\n", "optimizer = AdamWeightDecay(learning_rate=learning_rate, weight_decay_rate=weight_decay)\n", "model.compile(optimizer=optimizer)" ] }, { "cell_type": "markdown", "metadata": { "id": "7sZOdRlRIrJd" }, "source": [ "Now we can train our model. We can also add a few optional callbacks here, which you can remove if they aren't useful to you. 
In no particular order, these are:\n", "- `PushToHubCallback` will sync up our model with the Hub - this allows us to resume training from other machines, share the model after training is finished, and even test the model's inference quality midway through training!\n", "- `TensorBoard` is a built-in Keras callback that logs TensorBoard metrics.\n", "- `KerasMetricCallback` is a callback for computing advanced metrics. There are a number of common metrics in NLP, like ROUGE, which are hard to fit into your compiled training loop because they depend on decoding predictions and labels back to strings with the tokenizer and calling arbitrary Python functions to compute the metric. The `KerasMetricCallback` will wrap a metric function, outputting metrics as training progresses.\n", "\n", "If this is the first time you've seen `KerasMetricCallback`, it's worth explaining what exactly is going on here. The callback takes two main arguments - a `metric_fn` and an `eval_dataset`. It then iterates over the `eval_dataset` and collects the model's outputs for each sample, before passing the `list` of predictions and the associated `list` of labels to the user-defined `metric_fn`. If the `predict_with_generate` argument is `True`, then it will call `model.generate()` for each input sample instead of `model.predict()` - this is useful for metrics that expect generated text from the model, like `ROUGE`.\n", "\n", "This callback allows metrics that would be too complex to work as standard Keras metrics to be computed every epoch. Metric values are printed each epoch, and can be used by other callbacks like `TensorBoard` or `EarlyStopping`." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import nltk\n", "\n", "\n", "def metric_fn(eval_predictions):\n", " predictions, labels = eval_predictions\n", " decoded_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True)\n", " for label in labels:\n", " label[label < 0] = tokenizer.pad_token_id # Replace masked label tokens\n", " decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\n", " # Rouge expects a newline after each sentence\n", " decoded_predictions = [\n", " \"\\n\".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_predictions\n", " ]\n", " decoded_labels = [\n", " \"\\n\".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels\n", " ]\n", " result = metric.compute(\n", " predictions=decoded_predictions, references=decoded_labels, use_stemmer=True\n", " )\n", " # Extract a few results\n", " result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\n", " # Add mean generated length\n", " prediction_lens = [\n", " np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions\n", " ]\n", " result[\"gen_len\"] = np.mean(prediction_lens)\n", "\n", " return result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now we can try training our model. By default, we only do a single epoch of training here, as the inputs are very long, which means training is quite slow. However, you may wish to experiment with larger pre-trained models and longer training runs if you want to maximize the quality of your summaries."
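, "\n\nOne practical note before training: `metric_fn` relies on `nltk.sent_tokenize`, which needs NLTK's `punkt` tokenizer data. If you hit a `LookupError` during evaluation, a one-time download fixes it:\n\n```python\nimport nltk\n\nnltk.download(\"punkt\")\n```"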
] }, { "cell_type": "code", "execution_count": 25, "metadata": { "scrolled": false }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/matt/PycharmProjects/notebooks/examples/summarization_model_save is already a clone of https://huggingface.co/Rocketknight1/t5-small-finetuned-xsum. Make sure you pull the latest changes with `repo.git_pull()`.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 6/25505 [..............................] - ETA: 1:18:56 - loss: 3.5683WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.1032s vs `on_train_batch_end` time: 0.1321s). Check your callbacks.\n", "25505/25505 [==============================] - ETA: 0s - loss: 2.7178" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2022-07-22 20:17:16.649201: I tensorflow/compiler/xla/service/service.cc:170] XLA service 0x56017d743390 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n", "2022-07-22 20:17:16.649234: I tensorflow/compiler/xla/service/service.cc:178] StreamExecutor device (0): NVIDIA GeForce RTX 3090, Compute Capability 8.6\n", "2022-07-22 20:17:16.730241: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:263] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\n", "2022-07-22 20:17:22.792069: I tensorflow/compiler/jit/xla_compilation_cache.cc:478] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:5 out of the last 7 calls to .generation_function at 0x7f79cc4bc1f0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. 
For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Several commits (2) will be pushed upstream.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r", "25505/25505 [==============================] - 4900s 192ms/step - loss: 2.7178 - val_loss: 2.3966 - rouge1: 29.5589 - rouge2: 8.6381 - rougeL: 23.3911 - rougeLsum: 23.3926 - gen_len: 18.8211\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from transformers.keras_callbacks import PushToHubCallback, KerasMetricCallback\n", "from tensorflow.keras.callbacks import TensorBoard\n", "\n", "tensorboard_callback = TensorBoard(log_dir=\"./summarization_model_save/logs\")\n", "\n", "push_to_hub_callback = PushToHubCallback(\n", " output_dir=\"./summarization_model_save\",\n", " tokenizer=tokenizer,\n", " hub_model_id=push_to_hub_model_id,\n", ")\n", "\n", "metric_callback = KerasMetricCallback(\n", " metric_fn, eval_dataset=generation_dataset, predict_with_generate=True, use_xla_generation=True\n", ")\n", "\n", "callbacks = [metric_callback, tensorboard_callback, push_to_hub_callback]\n", "\n", "model.fit(\n", " train_dataset, validation_data=validation_dataset, epochs=1, callbacks=callbacks\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you used the callback above, you can now share this model with all your friends, family, favorite pets: they can all load it with the identifier `\"your-username/the-name-you-picked\"` so for instance:\n", "\n", "```python\n", "from transformers import TFAutoModelForSeq2SeqLM\n", "\n", "model = TFAutoModelForSeq2SeqLM.from_pretrained(\"your-username/my-awesome-model\")\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Inference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we've trained our model, let's see how we could load it and use it to summarize text in future! First, let's load it from the hub. This means we can resume the code from here without needing to rerun everything above every time." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "8a200bdd219e41648eedcf46d092f4bf", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading tokenizer_config.json: 0%| | 0.00/1.88k [00:00 device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:21:00.0, compute capability: 8.6\n", "2022-07-25 14:24:33.560594: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. 
This will only be logged once.\n", "All model checkpoint layers were used when initializing TFT5ForConditionalGeneration.\n", "\n", "All the layers of TFT5ForConditionalGeneration were initialized from the model checkpoint at Rocketknight1/t5-small-finetuned-xsum.\n", "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.\n" ] } ], "source": [ "from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM\n", "\n", "# You can of course substitute your own username and model here if you've trained and uploaded it!\n", "model_name = 'Rocketknight1/t5-small-finetuned-xsum'\n", "tokenizer = AutoTokenizer.from_pretrained(model_name)\n", "model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's try tokenizing a document from the training set. Don't forget to add 'summarize:' at the start if you're using a `T5` model." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "document = 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\\n\"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\\'re neglected or forgotten,\" she said.\\n\"That may not be true but it is perhaps my perspective over the last few days.\\n\"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?\"\\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\\nThe Labour Party\\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\\n\"I was quite taken aback by the amount of damage that has been done,\" he said.\\n\"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses.\"\\nHe said it was important that \"immediate steps\" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. 
Email us on selkirk.news@bbc.co.uk or dumfries@bbc.co.uk.'\n", "if 't5' in model_name: \n", " document = \"summarize: \" + document\n", "tokenized = tokenizer([document], return_tensors='np')\n", "out = model.generate(**tokenized, max_length=128)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " A flood warning has been issued to the people affected by flooding in Dumfries and the Nith, the Scottish Borders Council has said.\n" ] } ], "source": [ "with tokenizer.as_target_tokenizer():\n", " print(tokenizer.decode(out[0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Not bad for a single epoch of training! Of course, the flood warning isn't much use to them after they've been flooded, but the summary correctly identified flooding in Dumfries and the Nith as the key event. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using XLA in inference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you just want to generate a few summaries, the code above is all you need. However, generation can be **much** faster if you use XLA, and if you want to generate data in bulk, you should probably use it! If you're using XLA, though, remember that you'll need to do a new XLA compilation for every input size you pass to the model. This means that you should keep your batch size constant, and consider padding inputs to the same length, or using `pad_to_multiple_of` in your tokenizer to reduce the number of different input shapes you pass. Let's show an example of that:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "@tf.function(jit_compile=True)\n", "def generate(inputs):\n", " return model.generate(**inputs, max_length=128)\n", "\n", "tokenized_data = tokenizer([document], return_tensors=\"np\", pad_to_multiple_of=128)\n", "out = generate(tokenized_data)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " A flood warning has been issued to the people affected by flooding in Dumfries and the Nith, the Scottish Borders Council has said.\n" ] } ], "source": [ "with tokenizer.as_target_tokenizer():\n", " print(tokenizer.decode(out[0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When using XLA generation, you'll notice that the first call to generate with a new input shape takes a long time because XLA has to compile your function, but subsequent calls are extremely quick. Also, XLA always generates to the maximum length, which can lead to a lot of padding tokens in your output! These are easy to remove, however:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A flood warning has been issued to the people affected by flooding in Dumfries and the Nith, the Scottish Borders Council has said.\n" ] } ], "source": [ "with tokenizer.as_target_tokenizer():\n", " print(tokenizer.decode(out[0], skip_special_tokens=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pipeline API" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The pipeline API offers a convenient shortcut for all of this, but doesn't (yet!) 
support XLA generation:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "All model checkpoint layers were used when initializing TFT5ForConditionalGeneration.\n", "\n", "All the layers of TFT5ForConditionalGeneration were initialized from the model checkpoint at Rocketknight1/t5-small-finetuned-xsum.\n", "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.\n" ] } ], "source": [ "from transformers import pipeline\n", "\n", "summarizer = pipeline('text2text-generation', model_name, framework=\"tf\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remember that if we're using a T5 model then we appended \"summarize: \" to the start of our input above. Don't forget to do that when you're getting summaries for new texts!" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Token indices sequence length is longer than the specified maximum sequence length for this model (541 > 512). Running this sequence through the model will result in indexing errors\n", "2022-07-25 15:05:51.802359: I tensorflow/compiler/xla/service/service.cc:170] XLA service 0x563f5be5ff70 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n", "2022-07-25 15:05:51.802390: I tensorflow/compiler/xla/service/service.cc:178] StreamExecutor device (0): Host, Default Version\n" ] }, { "data": { "text/plain": [ "[{'generated_text': 'A flood warning has been issued to the people affected by flooding in Dumfries and the Nith, the Scottish Borders Council has said.'}]" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "summarizer(document, max_length=128)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Easy!" ] } ], "metadata": { "colab": { "name": "Summarization", "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.3" } }, "nbformat": 4, "nbformat_minor": 1 }