{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "X4cRE8IbIrIV" }, "source": [ "If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. We will also use the `seqeval` library to compute some evaluation metrics. Uncomment the following cell and run it." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "MOsHUjgdIrIW", "outputId": "f84a093e-147f-470e-aad9-80fb51193c8e" }, "outputs": [], "source": [ "#! pip install transformers\n", "#! pip install datasets\n", "#! pip install seqeval\n", "#! pip install huggingface_hub" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're opening this notebook locally, make sure your environment has an install from the last version of those libraries.\n", "\n", "To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.\n", "\n", "First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then uncomment the following cell and input it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from huggingface_hub import notebook_login\n", "\n", "notebook_login()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then you need to install Git-LFS and setup Git if you haven't already. Uncomment the following instructions and adapt with your name and email:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !apt install git-lfs\n", "# !git config --global user.email \"you@example.com\"\n", "# !git config --global user.name \"Your Name\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make sure your version of Transformers is at least 4.16.0 since some of the functionality we use was introduced in that version:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4.21.0.dev0\n" ] } ], "source": [ "import transformers\n", "\n", "print(transformers.__version__)" ] }, { "cell_type": "markdown", "metadata": { "id": "HFASsisvIrIb" }, "source": [ "You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers.utils import send_example_telemetry\n", "\n", "send_example_telemetry(\"token_classification_notebook\", framework=\"tensorflow\")" ] }, { "cell_type": "markdown", "metadata": { "id": "rEJBSTyZIrIb" }, "source": [ "# Fine-tuning a model on a token classification task" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) model to a token classification task, which is the task of predicting a label for each token.\n", "\n", "\n", "\n", "The most common token classification tasks are:\n", "\n", "- NER (Named-entity recognition) Classify the entities in the text (person, organization, location...).\n", "- POS (Part-of-speech tagging) Grammatically classify the tokens (noun, verb, adjective...)\n", "- Chunk (Chunking) Grammatically classify the tokens and group them into \"chunks\" that go together\n", "\n", "We will see how to easily load a dataset for these kinds of tasks and use Keras to fine-tune a model on it." ] }, { "cell_type": "markdown", "metadata": { "id": "4RRkXuteIrIh" }, "source": [ "This notebook is built to run on any token classification task, with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a version with a token classification head and a fast tokenizer (check on [this table](https://huggingface.co/transformers/index.html#bigtable) if this is the case). It might just need some small adjustments if you decide to use a different dataset than the one used here. Depending on you model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those three parameters, then the rest of the notebook should run smoothly:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "zVvslsfMIrIh" }, "outputs": [], "source": [ "task = \"ner\" # Should be one of \"ner\", \"pos\" or \"chunk\"\n", "model_checkpoint = \"distilbert-base-uncased\"\n", "batch_size = 16" ] }, { "cell_type": "markdown", "metadata": { "id": "whPRbBNbIrIl" }, "source": [ "## Loading the dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "W7QYTpxXIrIl" }, "source": [ "We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`. " ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "IreSlFmlIrIm" }, "outputs": [], "source": [ "from datasets import load_dataset, load_metric" ] }, { "cell_type": "markdown", "metadata": { "id": "CKx2zKs5IrIq" }, "source": [ "For our example here, we'll use the [CONLL 2003 dataset](https://www.aclweb.org/anthology/W03-0419.pdf). The notebook should work with any token classification dataset provided by the 🤗 Datasets library. If you're using your own dataset defined from a JSON or csv file (see the [Datasets documentation](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) on how to load them), it might need some adjustments in the names of the columns used." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 270, "referenced_widgets": [ "69caab03d6264fef9fc5649bffff5e20", "3f74532faa86412293d90d3952f38c4a", "50615aa59c7247c4804ca5cbc7945bd7", "fe962391292a413ca55dc932c4279fa7", "299f4b4c07654e53a25f8192bd1d7bbd", "ad04ed1038154081bbb0c1444784dcc2", "7c667ad22b5740d5a6319f1b1e3a8097", "46c2b043c0f84806978784a45a4e203b", "80e2943be35f46eeb24c8ab13faa6578", "de5956b5008d4fdba807bae57509c393", "931db1f7a42f4b46b7ff8c2e1262b994", "6c1db72efff5476e842c1386fadbbdba", "ccd2f37647c547abb4c719b75a26f2de", "d30a66df5c0145e79693e09789d96b81", "5fa26fc336274073abbd1d550542ee33", "2b34de08115d49d285def9269a53f484", "d426be871b424affb455aeb7db5e822e", "160bf88485f44f5cb6eaeecba5e0901f", "745c0d47d672477b9bb0dae77b926364", "d22ab78269cd4ccfbcf70c707057c31b", "d298eb19eeff453cba51c2804629d3f4", "a7204ade36314c86907c562e0a2158b8", "e35d42b2d352498ca3fc8530393786b2", "75103f83538d44abada79b51a1cec09e", "f6253931d90543e9b5fd0bb2d615f73a", "051aa783ff9e47e28d1f9584043815f5", "0984b2a14115454bbb009df71c1cf36f", "8ab9dfce29854049912178941ef1b289", "c9de740e007141958545e269372780a4", "cbea68b25d6d4ba09b2ce0f27b1726d5", "5781fc45cf8d486cb06ed68853b2c644", "d2a92143a08a4951b55bab9bc0a6d0d3", "a14c3e40e5254d61ba146f6ec88eae25", "c4ffe6f624ce4e978a0d9b864544941a", "1aca01c1d8c940dfadd3e7144bb35718", "9fbbaae50e6743f2aa19342152398186", "fea27ca6c9504fc896181bc1ff5730e5", "940d00556cb849b3a689d56e274041c2", "5cdf9ed939fb42d4bf77301c80b8afca", "94b39ccfef0b4b08bf2fb61bb0a657c1", "9a55087c85b74ea08b3e952ac1d73cbe", "2361ab124daf47cc885ff61f2899b2af", "1a65887eb37747ddb75dc4a40f7285f2", "3c946e2260704e6c98593136bd32d921", "50d325cdb9844f62a9ecc98e768cb5af", "aa781f0cfe454e9da5b53b93e9baabd8", "6bb68d3887ef43809eb23feb467f9723", "7e29a8b952cf4f4ea42833c8bf55342f", "dd5997d01d8947e4b1c211433969b89b", "2ace4dc78e2f4f1492a181bcd63304e7", "bbee008c2791443d8610371d1f16b62b", "31b1c8a2e3334b72b45b083688c1a20c", "7fb7c36adc624f7dbbcb4a831c1e4f63", "0b7c8f1939074794b3d9221244b1344d", "a71908883b064e1fbdddb547a8c41743", "2f5223f26c8541fc87e91d2205c39995" ] }, "id": "s_AY1ATSIrIq", "outputId": "fd0578d1-8895-443d-b56f-5908de9f1b6b" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Reusing dataset conll2003 (/home/matt/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98)\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "c5d18992de2649609cc82185d781412d", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/3 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "datasets = load_dataset(\"conll2003\")" ] }, { "cell_type": "markdown", "metadata": { "id": "RzfPtOMoIrIu" }, "source": [ "The `datasets` object itself is [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for the training, validation and test set." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "GWiVUF0jIrIv", "outputId": "35e3ea43-f397-4a54-c90c-f2cf8d36873e" }, "outputs": [ { "data": { "text/plain": [ "DatasetDict({\n", " train: Dataset({\n", " features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\n", " num_rows: 14041\n", " })\n", " validation: Dataset({\n", " features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\n", " num_rows: 3250\n", " })\n", " test: Dataset({\n", " features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\n", " num_rows: 3453\n", " })\n", "})" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "datasets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see the training, validation and test sets all have a column for the tokens (the input texts split into words) and one column of labels for each kind of task we introduced before." ] }, { "cell_type": "markdown", "metadata": { "id": "u3EtYfeHIrIz" }, "source": [ "To access an actual element, you need to select a split first, then give an index:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "X6HrpprwIrIz", "outputId": "d7670bc0-42e4-4c09-8a6a-5c018ded7d95" }, "outputs": [ { "data": { "text/plain": [ "{'id': '0',\n", " 'tokens': ['EU',\n", " 'rejects',\n", " 'German',\n", " 'call',\n", " 'to',\n", " 'boycott',\n", " 'British',\n", " 'lamb',\n", " '.'],\n", " 'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],\n", " 'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],\n", " 'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0]}" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "datasets[\"train\"][0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The labels are already coded as integer ids to be easily usable by our model, but the correspondence with the actual categories is stored in the `features` of the dataset:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Sequence(feature=ClassLabel(num_classes=9, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC'], id=None), length=-1, id=None)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "datasets[\"train\"].features[f\"ner_tags\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So for the NER tags, 0 corresponds to 'O', 1 to 'B-PER' etc... 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since the labels are lists of `ClassLabel`, the actual names of the labels are nested in the `feature` attribute of the object above:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "label_list = datasets[\"train\"].features[f\"{task}_tags\"].feature.names\n", "label_list" ] }, { "cell_type": "markdown", "metadata": { "id": "WHUmphG3IrI3" }, "source": [ "To get a sense of what the data looks like, the following function will show some examples picked at random from the dataset (decoding the labels to their names as it goes)." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "i3j8APAoIrI3" }, "outputs": [], "source": [ "from datasets import ClassLabel, Sequence\n", "import random\n", "import pandas as pd\n", "from IPython.display import display, HTML\n", "\n", "\n", "def show_random_elements(dataset, num_examples=10):\n", "    assert num_examples <= len(\n", "        dataset\n", "    ), \"Can't pick more elements than there are in the dataset.\"\n", "    picks = []\n", "    for _ in range(num_examples):\n", "        pick = random.randint(0, len(dataset) - 1)\n", "        while pick in picks:\n", "            pick = random.randint(0, len(dataset) - 1)\n", "        picks.append(pick)\n", "\n", "    df = pd.DataFrame(dataset[picks])\n", "    for column, typ in dataset.features.items():\n", "        if isinstance(typ, ClassLabel):\n", "            df[column] = df[column].transform(lambda i: typ.names[i])\n", "        elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):\n", "            df[column] = df[column].transform(\n", "                lambda x: [typ.feature.names[i] for i in x]\n", "            )\n", "    display(HTML(df.to_html()))" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "id": "SZy5tRB_IrI7", "outputId": "ba8f2124-e485-488f-8c0c-254f34f24f13", "scrolled": true }, "outputs": [ { "data": { "text/html": [ "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\">\n", "      <th></th>\n", "      <th>id</th>\n", "      <th>tokens</th>\n", "      <th>pos_tags</th>\n", "      <th>chunk_tags</th>\n", "      <th>ner_tags</th>\n", "    </tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr>\n", "      <th>0</th>\n", "      <td>6496</td>\n", "      <td>[Total, (, for, one, wicket, ), 48]</td>\n", "      <td>[JJ, (, IN, CD, NN, ), CD]</td>\n", "      <td>[B-NP, O, B-PP, B-NP, I-NP, O, B-NP]</td>\n", "      <td>[O, O, O, O, O, O, O]</td>\n", "    </tr>\n", "    <tr>\n", "      <th>1</th>\n", "      <td>11665</td>\n", "      <td>[The, BOJ, sought, to, put, the, best, face, on, the, data, which, defied, economists, ', predictions, of, improving, sentiment, and, was, the, first, decline, in, business, sentiment, in, a, year, .]</td>\n", "      <td>[DT, NNP, VBD, TO, VB, DT, JJS, NN, IN, DT, NNS, WDT, VBD, NNS, POS, NNS, IN, VBG, NN, CC, VBD, DT, JJ, NN, IN, NN, NN, IN, DT, NN, .]</td>\n", "      <td>[B-NP, I-NP, B-VP, I-VP, I-VP, B-NP, I-NP, I-NP, B-PP, B-NP, I-NP, B-NP, B-VP, B-NP, B-NP, I-NP, B-PP, B-NP, I-NP, O, B-VP, B-NP, I-NP, I-NP, B-PP, B-NP, I-NP, B-PP, B-NP, I-NP, O]</td>\n", "      
<td>[O, B-ORG, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O]</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>9882</td>\n", " <td>[Palestinians, to, strike, over, Jerusalem, demolition, .]</td>\n", " <td>[NNPS, TO, VB, RP, NNP, NN, .]</td>\n", " <td>[B-NP, B-VP, I-VP, B-PRT, B-NP, I-NP, O]</td>\n", " <td>[B-MISC, O, O, O, B-LOC, O, O]</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>7824</td>\n", " <td>[REPRICING, OF, THE, BALANCE, OF, THE, BONDS, IN, THE, ACCOUNT, .]</td>\n", " <td>[VBG, IN, DT, NN, IN, DT, NNS, IN, DT, NN, .]</td>\n", " <td>[B-VP, B-PP, B-NP, I-NP, B-PP, B-NP, I-NP, B-PP, B-NP, I-NP, O]</td>\n", " <td>[O, O, O, O, O, O, O, O, O, O, O]</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>13690</td>\n", " <td>[Hong, Kong, Financial, Secretary, Donald, Tsang, said, on, Thursday, he, expected, the, territory, 's, economy, to, keep, growing, at, around, five, percent, but, with, some, fluctuations, from, year, to, year, .]</td>\n", " <td>[NNP, NNP, NNP, NNP, NNP, NNP, VBD, IN, NNP, PRP, VBD, DT, NN, POS, NN, TO, VB, VBG, IN, IN, CD, NN, CC, IN, DT, NNS, IN, NN, TO, NN, .]</td>\n", " <td>[B-NP, I-NP, I-NP, I-NP, I-NP, I-NP, B-VP, B-PP, B-NP, B-NP, B-VP, B-NP, I-NP, B-NP, I-NP, B-VP, I-VP, I-VP, B-PP, B-NP, I-NP, I-NP, O, B-PP, B-NP, I-NP, B-PP, B-NP, B-PP, B-NP, O]</td>\n", " <td>[B-LOC, I-LOC, O, O, B-PER, I-PER, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O]</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>6749</td>\n", " <td>[Atlante, 1, Atlas, 1]</td>\n", " <td>[NNP, CD, NNP, CD]</td>\n", " <td>[B-NP, I-NP, I-NP, I-NP]</td>\n", " <td>[B-ORG, O, B-ORG, O]</td>\n", " </tr>\n", " <tr>\n", " <th>6</th>\n", " <td>9964</td>\n", " <td>[1., Osmond, Ezinwa, (, Nigeria, ), 10.13, seconds]</td>\n", " <td>[CD, NNP, NNP, (, NNP, ), CD, NNS]</td>\n", " <td>[B-NP, I-NP, I-NP, O, B-NP, O, B-NP, I-NP]</td>\n", " <td>[O, B-PER, I-PER, O, B-LOC, O, O, O]</td>\n", " </tr>\n", " <tr>\n", " <th>7</th>\n", " <td>13665</td>\n", " <td>[In, an, interview, following, its, first-half, results, ,, which, included, a, less, optimistic, forecast, for, the, second, half, of, this, year, than, it, had, made, in, the, past, ,, Sir, Colin, Hope, said, T&N, had, taken, defensive, action, to, protect, it, from, patchy, markets, .]</td>\n", " <td>[IN, DT, NN, VBG, PRP$, JJ, NNS, ,, WDT, VBD, DT, RBR, JJ, NN, IN, DT, JJ, NN, IN, DT, NN, IN, PRP, VBD, VBN, IN, DT, NN, ,, NNP, NNP, NNP, VBD, NNP, VBD, VBN, JJ, NN, TO, VB, PRP, IN, JJ, NNS, .]</td>\n", " <td>[B-PP, B-NP, I-NP, B-PP, B-NP, I-NP, I-NP, O, B-NP, B-VP, B-NP, I-NP, I-NP, I-NP, B-PP, B-NP, I-NP, I-NP, B-PP, B-NP, I-NP, B-SBAR, B-NP, B-VP, I-VP, B-PP, B-NP, I-NP, O, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, I-VP, B-NP, I-NP, B-VP, I-VP, B-NP, B-PP, B-NP, I-NP, O]</td>\n", " <td>[O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, B-PER, I-PER, O, B-ORG, O, O, O, O, O, O, O, O, O, O, O]</td>\n", " </tr>\n", " <tr>\n", " <th>8</th>\n", " <td>6232</td>\n", " <td>[The, rand, was, last, bid, at, 4.5350, against, the, dollar, .]</td>\n", " <td>[DT, NN, VBD, JJ, NN, IN, CD, IN, DT, NN, .]</td>\n", " <td>[B-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, B-PP, B-NP, I-NP, O]</td>\n", " <td>[O, O, O, O, O, O, O, O, O, O, O]</td>\n", " </tr>\n", " <tr>\n", " <th>9</th>\n", " <td>13340</td>\n", " <td>[Liam, Gallagher, ,, singer, of, Britain, 's, top, rock, group, Oasis, ,, flew, out, on, Thursday, to, join, the, band, three, days, after, the, start, of, 
its, U.S., tour, .]</td>\n", "      <td>[NNP, NNP, ,, NN, IN, NNP, POS, JJ, NN, NN, NNP, ,, VBD, RP, IN, NNP, TO, VB, DT, NN, CD, NNS, IN, DT, NN, IN, PRP$, NNP, NN, .]</td>\n", "      <td>[B-NP, I-NP, O, B-NP, B-PP, B-NP, B-NP, I-NP, I-NP, I-NP, I-NP, O, B-VP, B-PRT, B-PP, B-NP, B-VP, I-VP, B-NP, I-NP, B-NP, I-NP, B-PP, B-NP, I-NP, B-PP, B-NP, I-NP, I-NP, O]</td>\n", "      <td>[B-PER, I-PER, O, O, O, B-LOC, O, O, O, O, B-ORG, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, B-LOC, O, O]</td>\n", "    </tr>\n", "  </tbody>\n", "</table>" ], "text/plain": [ "<IPython.core.display.HTML object>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_random_elements(datasets[\"train\"])" ] }, { "cell_type": "markdown", "metadata": { "id": "n9qywopnIrJH" }, "source": [ "## Preprocessing the data" ] }, { "cell_type": "markdown", "metadata": { "id": "YVx71GdAIrJH" }, "source": [ "Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer`, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs the model requires.\n", "\n", "To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:\n", "\n", "- we get a tokenizer that corresponds to the model architecture we want to use,\n", "- we download the vocabulary used when pretraining this specific checkpoint.\n", "\n", "That vocabulary will be cached, so it's not downloaded again the next time we run the cell." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "id": "eXNLu_-nIrJI" }, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "\n", "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)" ] }, { "cell_type": "markdown", "metadata": { "id": "Vl6IidfdIrJK" }, "source": [ "The following assertion ensures that our tokenizer is a fast tokenizer (backed by Rust) from the 🤗 Tokenizers library. Those fast tokenizers are available for almost all models, and we will need some of the special features they have for our preprocessing." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "import transformers\n", "\n", "assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can check which types of models have a fast tokenizer available and which don't on the [big table of models](https://huggingface.co/transformers/index.html#bigtable)." ] }, { "cell_type": "markdown", "metadata": { "id": "rowT4iCLIrJK" }, "source": [ "You can directly call this tokenizer on one sentence:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "id": "a5hBlsrHIrJL", "outputId": "acdaa98a-a8cd-4a20-89b8-cc26437bbe90" }, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [101, 7592, 1010, 2023, 2003, 1037, 6251, 999, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer(\"Hello, this is a sentence!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. 
They don't matter much for what we're doing here (just know they are required by the model we will instantiate later); you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.\n", "\n", "If, as is the case here, your inputs have already been split into words, you should pass the list of words to your tokenizer with the argument `is_split_into_words=True`:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [101, 7592, 1010, 2023, 2003, 2028, 6251, 3975, 2046, 2616, 1012, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer(\n", "    [\"Hello\", \",\", \"this\", \"is\", \"one\", \"sentence\", \"split\", \"into\", \"words\", \".\"],\n", "    is_split_into_words=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that transformers are often pretrained with subword tokenizers, meaning that even if your inputs have been split into words already, each of those words could be split again by the tokenizer. Let's look at an example of that:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['Germany', \"'s\", 'representative', 'to', 'the', 'European', 'Union', \"'s\", 'veterinary', 'committee', 'Werner', 'Zwingmann', 'said', 'on', 'Wednesday', 'consumers', 'should', 'buy', 'sheepmeat', 'from', 'countries', 'other', 'than', 'Britain', 'until', 'the', 'scientific', 'advice', 'was', 'clearer', '.']\n" ] } ], "source": [ "example = datasets[\"train\"][4]\n", "print(example[\"tokens\"])" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['[CLS]', 'germany', \"'\", 's', 'representative', 'to', 'the', 'european', 'union', \"'\", 's', 'veterinary', 'committee', 'werner', 'z', '##wing', '##mann', 'said', 'on', 'wednesday', 'consumers', 'should', 'buy', 'sheep', '##me', '##at', 'from', 'countries', 'other', 'than', 'britain', 'until', 'the', 'scientific', 'advice', 'was', 'clearer', '.', '[SEP]']\n" ] } ], "source": [ "tokenized_input = tokenizer(example[\"tokens\"], is_split_into_words=True)\n", "tokens = tokenizer.convert_ids_to_tokens(tokenized_input[\"input_ids\"])\n", "print(tokens)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here the words \"Zwingmann\" and \"sheepmeat\" have each been split into three subtokens.\n", "\n", "This means that we need to do some processing on our labels, as the input ids returned by the tokenizer are longer than the lists of labels our dataset contains: first because some special tokens might be added (we can see a `[CLS]` and a `[SEP]` above), and then because of those possible splits of words into multiple tokens:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(31, 39)" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(example[f\"{task}_tags\"]), len(tokenized_input[\"input_ids\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thankfully, the tokenizer returns outputs that have a `word_ids` method which can help us."
] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[None, 0, 1, 1, 2, 3, 4, 5, 6, 7, 7, 8, 9, 10, 11, 11, 11, 12, 13, 14, 15, 16, 17, 18, 18, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, None]\n" ] } ], "source": [ "print(tokenized_input.word_ids())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, it returns a list with the same number of elements as our processed input ids, mapping special tokens to `None` and all other tokens to their respective word. This way, we can align the labels with the processed input ids." ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "39 39\n" ] } ], "source": [ "word_ids = tokenized_input.word_ids()\n", "aligned_labels = [-100 if i is None else example[f\"{task}_tags\"][i] for i in word_ids]\n", "print(len(aligned_labels), len(tokenized_input[\"input_ids\"]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we set the labels of all special tokens to -100 (the index that is ignored by the loss function) and the labels of all other tokens to the label of the word they come from. Another strategy is to set the label only on the first token obtained from a given word, and give a label of -100 to the other subtokens from the same word. Both strategies are possible with this script; simply change the value of this flag to swap between them." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "label_all_tokens = True" ] }, { "cell_type": "markdown", "metadata": { "id": "2C0hcmp9IrJQ" }, "source": [ "We're now ready to write the function that will preprocess our samples. We feed them to the `tokenizer` with the argument `truncation=True` (to truncate texts that are bigger than the maximum size allowed by the model) and `is_split_into_words=True` (as seen above). Then we align the labels with the token ids using the strategy we picked:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "id": "vc0BSBLIIrJQ" }, "outputs": [], "source": [ "def tokenize_and_align_labels(examples):\n", " tokenized_inputs = tokenizer(\n", " examples[\"tokens\"], truncation=True, is_split_into_words=True\n", " )\n", "\n", " labels = []\n", " for i, label in enumerate(examples[f\"{task}_tags\"]):\n", " word_ids = tokenized_inputs.word_ids(batch_index=i)\n", " previous_word_idx = None\n", " label_ids = []\n", " for word_idx in word_ids:\n", " # Special tokens have a word id that is None. We set the label to -100 so they are automatically\n", " # ignored in the loss function.\n", " if word_idx is None:\n", " label_ids.append(-100)\n", " # We set the label for the first token of each word.\n", " elif word_idx != previous_word_idx:\n", " label_ids.append(label[word_idx])\n", " # For the other tokens in a word, we set the label to either the current label or -100, depending on\n", " # the label_all_tokens flag.\n", " else:\n", " label_ids.append(label[word_idx] if label_all_tokens else -100)\n", " previous_word_idx = word_idx\n", "\n", " labels.append(label_ids)\n", "\n", " tokenized_inputs[\"labels\"] = labels\n", " return tokenized_inputs" ] }, { "cell_type": "markdown", "metadata": { "id": "0lm8ozrJIrJR" }, "source": [ "This function works with one or several examples. 
In the case of several examples, the tokenizer will return a list of lists for each key:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "id": "-b70jh26IrJS", "outputId": "acd3a42d-985b-44ee-9daa-af5d944ce1d9" }, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [[101, 7327, 19164, 2446, 2655, 2000, 17757, 2329, 12559, 1012, 102], [101, 2848, 13934, 102], [101, 9371, 2727, 1011, 5511, 1011, 2570, 102], [101, 1996, 2647, 3222, 2056, 2006, 9432, 2009, 18335, 2007, 2446, 6040, 2000, 10390, 2000, 18454, 2078, 2329, 12559, 2127, 6529, 5646, 3251, 5506, 11190, 4295, 2064, 2022, 11860, 2000, 8351, 1012, 102], [101, 2762, 1005, 1055, 4387, 2000, 1996, 2647, 2586, 1005, 1055, 15651, 2837, 14121, 1062, 9328, 5804, 2056, 2006, 9317, 10390, 2323, 4965, 8351, 4168, 4017, 2013, 3032, 2060, 2084, 3725, 2127, 1996, 4045, 6040, 2001, 24509, 1012, 102]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], 'labels': [[-100, 3, 0, 7, 0, 0, 0, 7, 0, 0, -100], [-100, 1, 2, -100], [-100, 5, 0, 0, 0, 0, 0, -100], [-100, 0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -100], [-100, 5, 0, 0, 0, 0, 0, 3, 4, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, -100]]}" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenize_and_align_labels(datasets[\"train\"][:5])" ] }, { "cell_type": "markdown", "metadata": { "id": "zS-6iXTkIrJT" }, "source": [ "To apply this function to all the sentences (or pairs of sentences) in our dataset, we just use the `map` method of the `dataset` object we created earlier. This will apply the function to all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in a single command." ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "id": "DDtsaJeVIrJT", "outputId": "aa4734bf-4ef5-4437-9948-2c16363da719" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98/cache-7a41e503ed258cd6.arrow\n", "Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98/cache-8eb7bcede9dedbe4.arrow\n", "Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98/cache-cffd29036188a9aa.arrow\n" ] } ], "source": [ "tokenized_datasets = datasets.map(tokenize_and_align_labels, batched=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "voWiw8C7IrJV" }, "source": [ "Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to `map` has changed (and thus that the cached data should not be used). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 
🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to skip the cache and force the preprocessing to be applied again.\n", "\n", "Note that we passed `batched=True` to encode the texts in batches. This lets us take full advantage of the fast tokenizer we loaded earlier, which will use multi-threading to process the texts in a batch concurrently." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[-100, 3, 0, 7, 0, 0, 0, 7, 0, 0, -100]" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenized_datasets[\"train\"][\"labels\"][0]" ] }, { "cell_type": "markdown", "metadata": { "id": "545PP3o8IrJV" }, "source": [ "## Fine-tuning the model" ] }, { "cell_type": "markdown", "metadata": { "id": "FBiW8UpKIrJW" }, "source": [ "Now that our data is ready, we can download the pretrained model and fine-tune it. Since all our tasks are about token classification, we use the `TFAutoModelForTokenClassification` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem (which we can get from the features, as seen before). We can also set the `id2label` and `label2id` properties for our model - this is optional, but will give us nicer, cleaner outputs later." ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "id": "TlqNaB8jIrJW", "outputId": "84916cf3-6e6c-47f3-d081-032ec30a4132" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2022-07-21 17:09:54.903083: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-21 17:09:54.907230: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-21 17:09:54.907935: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-21 17:09:54.909185: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n", "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2022-07-21 17:09:54.911983: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-21 17:09:54.912676: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-21 17:09:54.913357: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-21 17:09:55.246108: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least 
one NUMA node, so returning NUMA node zero\n", "2022-07-21 17:09:55.246816: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-21 17:09:55.247466: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n", "2022-07-21 17:09:55.248099: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21656 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:21:00.0, compute capability: 8.6\n", "2022-07-21 17:09:55.964037: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.\n", "Some layers from the model checkpoint at distilbert-base-uncased were not used when initializing TFDistilBertForTokenClassification: ['activation_13', 'vocab_transform', 'vocab_projector', 'vocab_layer_norm']\n", "- This IS expected if you are initializing TFDistilBertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing TFDistilBertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", "Some layers of TFDistilBertForTokenClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['classifier', 'dropout_19']\n", "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n" ] } ], "source": [ "from transformers import TFAutoModelForTokenClassification\n", "\n", "id2label = {i: label for i, label in enumerate(label_list)}\n", "label2id = {label: i for i, label in enumerate(label_list)}\n", "\n", "model = TFAutoModelForTokenClassification.from_pretrained(\n", "    model_checkpoint, num_labels=len(label_list), id2label=id2label, label2id=label2id\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "CczA5lJlIrJX" }, "source": [ "After all of the TensorFlow initialization messages, we see a warning telling us we are throwing away some weights (the `vocab_transform`, `vocab_projector` and `vocab_layer_norm` layers) and randomly initializing some others (the `classifier` layer). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new head for which we don't have pretrained weights, so the library warns us that we should fine-tune this model before using it for inference, which is exactly what we are going to do." ] }, { "cell_type": "markdown", "metadata": { "id": "_N8urzhyIrJY" }, "source": [ "To compile a Keras `Model`, we will need to set an optimizer and our loss function. We can use the `create_optimizer` function from Transformers. Here we tweak the learning rate, use the `batch_size` defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay. This function will also create and apply a learning rate decay schedule for us."
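] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To picture what that schedule does, here is a rough equivalent in plain Keras (just an illustration, assuming linear decay from the initial learning rate to zero with no warmup, matching the arguments we pass below - it is not the exact implementation `create_optimizer` uses):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "# A sketch of the learning rate schedule: linear decay from the initial\n", "# learning rate down to zero over the total number of training steps.\n", "# 2631 = (14041 // 16) * 3, matching num_train_steps computed below.\n", "sketch_schedule = tf.keras.optimizers.schedules.PolynomialDecay(\n", "    initial_learning_rate=2e-5, decay_steps=2631, end_learning_rate=0.0, power=1.0\n", ")\n", "print(sketch_schedule(0).numpy(), sketch_schedule(2631).numpy())"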
] }, { "cell_type": "code", "execution_count": 26, "metadata": { "id": "Bliy8zgjIrJY" }, "outputs": [], "source": [ "from transformers import create_optimizer\n", "\n", "num_train_epochs = 3\n", "num_train_steps = (len(tokenized_datasets[\"train\"]) // batch_size) * num_train_epochs\n", "optimizer, lr_schedule = create_optimizer(\n", " init_lr=2e-5,\n", " num_train_steps=num_train_steps,\n", " weight_decay_rate=0.01,\n", " num_warmup_steps=0,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that most Transformers models compute loss internally, so we actually don't have to specify anything there! You can of course set your own loss function if you want, but by default our models will choose the 'obvious' loss that matches their task, such as cross-entropy in the case of language modelling. The built-in loss will also correctly handle things like masking the loss on padding tokens, or unlabelled tokens in the case of masked language modelling, so we recommend using it unless you're an advanced user!\n", "\n", "In some of our other examples, we use `jit_compile` to compile the model with [XLA](https://www.tensorflow.org/xla). In this case, we should be careful about that - because our inputs have variable sequence lengths, we may end up having to do a new XLA compilation for each possible length, because XLA compilation expects a static input shape! For small datasets, this will probably result in spending more time on XLA compilation than actually training, which isn't very helpful.\n", "\n", "If you really want to use XLA without these problems (for example, if you're training on TPU), you can create a tokenizer with `padding=\"max_length\"`. This will pad all of your samples to the same length, ensuring that a single XLA compilation will suffice for your entire dataset. Note that depending on the nature of your dataset, this may result in a lot of wasted computation on padding tokens!" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.\n" ] } ], "source": [ "import tensorflow as tf\n", "\n", "model.compile(optimizer=optimizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will need a data collator that will batch our processed examples together while applying padding to make them all the same size (each pad will be padded to the length of its longest example). There is a data collator for this task in the Transformers library, that not only pads the inputs, but also the labels. Note that our data collators are designed to work for multiple frameworks, so ensure you set the `return_tensors='np'` argument to get NumPy arrays out - you don't want to accidentally get a load of `torch.Tensor` objects in the middle of your nice TF code! You could also use `return_tensors='tf'` to get TensorFlow tensors, but our TF dataset pipeline actually uses a NumPy loader internally, which is wrapped at the end with a `tf.data.Dataset`. As a result, `np` is usually more reliable and performant when you're using it!" 
] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "from transformers import DataCollatorForTokenClassification\n", "\n", "data_collator = DataCollatorForTokenClassification(tokenizer, return_tensors=\"np\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we convert our datasets to `tf.data.Dataset`, which Keras understands natively. There are two ways to do this - we can use the slightly more low-level [`Dataset.to_tf_dataset()`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.to_tf_dataset) method, or we can use [`Model.prepare_tf_dataset()`](https://huggingface.co/docs/transformers/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset). The main difference between these two is that the `Model` method can inspect the model to determine which column names it can use as input, which means you don't need to specify them yourself." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/matt/PycharmProjects/transformers/src/transformers/tokenization_utils_base.py:719: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.\n", " tensor = as_tensor(value)\n" ] } ], "source": [ "train_set = model.prepare_tf_dataset(\n", " tokenized_datasets[\"train\"],\n", " shuffle=True,\n", " batch_size=batch_size,\n", " collate_fn=data_collator,\n", ")\n", "\n", "validation_set = model.prepare_tf_dataset(\n", " tokenized_datasets[\"validation\"],\n", " shuffle=False,\n", " batch_size=batch_size,\n", " collate_fn=data_collator,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's think about metrics. The `seqeval` framework will give us a nice set of metrics like accuracy, precision, recall and F1 score, and all we need to do is pass it some predictions and labels. But if these aren't Keras Metric objects, how can we use them in the middle of our training loop? The answer is the `KerasMetricCallback`. This callback takes a function and computes it on the predictions from the validation set each epoch, printing it and logging the returned value(s) for other callbacks like `TensorBoard` and `EarlyStopping`. \n", "\n", "This allows much more flexibility with the metric computation functions - you can even include Python code that would be impossible to compile into your model, such as using the `tokenizer` (which is backed by Rust!) to decode model outputs to strings and compare them to the labels. We won't be doing that here, but it's essential for metrics like `BLEU` and `ROUGE` used in tasks like translation.\n", "\n", "So how do we use `KerasMetricCallback`? It's straightforward - we simply define a function that takes the `predictions` and `labels` from the validation set and computes a dict of one or more metrics, then pass that function and the validation data to the callback. Note that we discard all predictions where the label is `-100` - this indicates a missing label, either because there's no label for that token, or the token is a padding token. The built-in Keras metrics are not aware of this masking system and so will produce erroneous values if they are used instead." 
] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from transformers.keras_callbacks import KerasMetricCallback\n", "\n", "metric = load_metric(\"seqeval\")\n", "labels = [label_list[i] for i in example[f\"{task}_tags\"]]\n", "metric.compute(predictions=[labels], references=[labels])\n", "\n", "\n", "def compute_metrics(p):\n", " predictions, labels = p\n", " predictions = np.argmax(predictions, axis=2)\n", "\n", " # Remove ignored index (special tokens)\n", " true_predictions = [\n", " [label_list[p] for (p, l) in zip(prediction, label) if l != -100]\n", " for prediction, label in zip(predictions, labels)\n", " ]\n", " true_labels = [\n", " [label_list[l] for (p, l) in zip(prediction, label) if l != -100]\n", " for prediction, label in zip(predictions, labels)\n", " ]\n", "\n", " results = metric.compute(predictions=true_predictions, references=true_labels)\n", " return {\n", " \"precision\": results[\"overall_precision\"],\n", " \"recall\": results[\"overall_recall\"],\n", " \"f1\": results[\"overall_f1\"],\n", " \"accuracy\": results[\"overall_accuracy\"],\n", " }\n", "\n", "\n", "metric_callback = KerasMetricCallback(\n", " metric_fn=compute_metrics, eval_dataset=validation_set\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can train our model. We can also add a callback to sync up our model with the Hub - this allows us to resume training from other machines and even test the model's inference quality midway through training! If you don't want to do this, simply remove the callbacks argument in the call to `fit()`." ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/matt/PycharmProjects/notebooks/examples/tc_model_save is already a clone of https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-ner. Make sure you pull the latest changes with `repo.git_pull()`.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 1/3\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/matt/PycharmProjects/transformers/src/transformers/tokenization_utils_base.py:719: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.\n", " tensor = as_tensor(value)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "876/877 [============================>.] - ETA: 0s - loss: 0.2028" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Several commits (2) will be pushed upstream.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "877/877 [==============================] - 60s 62ms/step - loss: 0.2026 - val_loss: 0.0726 - precision: 0.8945 - recall: 0.9220 - f1: 0.9081 - accuracy: 0.9793\n", "Epoch 2/3\n", " 5/877 [..............................] - ETA: 30s - loss: 0.0512" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/matt/PycharmProjects/transformers/src/transformers/tokenization_utils_base.py:719: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. 
If you meant to do this, you must specify 'dtype=object' when creating the ndarray.\n", "  tensor = as_tensor(value)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "877/877 [==============================] - 38s 43ms/step - loss: 0.0553 - val_loss: 0.0632 - precision: 0.9194 - recall: 0.9302 - f1: 0.9248 - accuracy: 0.9823\n", "Epoch 3/3\n", "877/877 [==============================] - 50s 57ms/step - loss: 0.0347 - val_loss: 0.0609 - precision: 0.9214 - recall: 0.9343 - f1: 0.9278 - accuracy: 0.9830\n" ] }, { "data": { "text/plain": [ "<keras.callbacks.History at 0x7f1b2c175bd0>" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from transformers.keras_callbacks import PushToHubCallback\n", "from tensorflow.keras.callbacks import TensorBoard\n", "\n", "model_name = model_checkpoint.split(\"/\")[-1]\n", "push_to_hub_model_id = f\"{model_name}-finetuned-{task}\"\n", "\n", "tensorboard_callback = TensorBoard(log_dir=\"./tc_model_save/logs\")\n", "\n", "push_to_hub_callback = PushToHubCallback(\n", "    output_dir=\"./tc_model_save\",\n", "    tokenizer=tokenizer,\n", "    hub_model_id=push_to_hub_model_id,\n", ")\n", "\n", "callbacks = [metric_callback, tensorboard_callback, push_to_hub_callback]\n", "\n", "model.fit(\n", "    train_set,\n", "    validation_data=validation_set,\n", "    epochs=num_train_epochs,\n", "    callbacks=callbacks,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you used the callback above, you can now share this model with all your friends, family, favorite pets: they can all load it with the identifier `\"your-username/the-name-you-picked\"`, so for instance:\n", "\n", "```python\n", "from transformers import TFAutoModelForTokenClassification\n", "\n", "model = TFAutoModelForTokenClassification.from_pretrained(\"your-username/my-awesome-model\")\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Inference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've finished training our model, let's look at how we can get predictions from it. First, let's load the model from the hub so that we can resume from here without needing to rerun everything above." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "30cf1a22f21a44b4825212d0e6605f0b", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading config.json:   0%|          | 0.00/882 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "5aeebb0550464669a5b86400872059da", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading tf_model.h5:   0%|          | 0.00/253M [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "Some layers from the model checkpoint at Rocketknight1/distilbert-base-uncased-finetuned-ner were not used when initializing TFDistilBertForTokenClassification: ['dropout_19']\n", "- This IS expected if you are initializing TFDistilBertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing TFDistilBertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", "Some layers of TFDistilBertForTokenClassification were not initialized from the model checkpoint at Rocketknight1/distilbert-base-uncased-finetuned-ner and are newly initialized: ['dropout_39']\n", "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n" ] } ], "source": [ "from transformers import AutoTokenizer, TFAutoModelForTokenClassification\n", "\n", "# You can, of course, use your own username and model name here\n", "# once you've pushed your model using the code above!\n", "checkpoint = \"Rocketknight1/distilbert-base-uncased-finetuned-ner\"\n", "model = TFAutoModelForTokenClassification.from_pretrained(checkpoint)\n", "tokenizer = AutoTokenizer.from_pretrained(checkpoint)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's see how to use this model for inference. We'll take a sample sentence from a recent news story and tokenize it." ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "sample_sentence = \"Britain will send scores of artillery guns and more than 1,600 anti-tank weapons to Ukraine in the latest supply of western arms to help bolster its defence against Russia, the UK defence secretary, Ben Wallace, said on Thursday.\"\n", "tokenized = tokenizer([sample_sentence], return_tensors=\"np\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we pass this to the model. The outputs from the model will be logits, so let's use argmax to get the model's predicted class for each token." ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 7 0 0 0 0 0 0 0 0 5 0 0\n", " 5 0 0 0 1 2 0 0 0 0 0 0]\n" ] } ], "source": [ "import numpy as np\n", "\n", "outputs = model(tokenized).logits\n", "classes = np.argmax(outputs, axis=-1)[0]\n", "print(classes)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, there's our answer, but it's not exactly readable like that. Let's do two things: first, pair each prediction with the token it comes from, and second, convert the classes from label IDs to names."
] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[('[CLS]', 'O'), ('britain', 'B-LOC'), ('will', 'O'), ('send', 'O'), ('scores', 'O'), ('of', 'O'), ('artillery', 'O'), ('guns', 'O'), ('and', 'O'), ('more', 'O'), ('than', 'O'), ('1', 'O'), (',', 'O'), ('600', 'O'), ('anti', 'O'), ('-', 'O'), ('tank', 'O'), ('weapons', 'O'), ('to', 'O'), ('ukraine', 'B-LOC'), ('in', 'O'), ('the', 'O'), ('latest', 'O'), ('supply', 'O'), ('of', 'O'), ('western', 'B-MISC'), ('arms', 'O'), ('to', 'O'), ('help', 'O'), ('bo', 'O'), ('##lster', 'O'), ('its', 'O'), ('defence', 'O'), ('against', 'O'), ('russia', 'B-LOC'), (',', 'O'), ('the', 'O'), ('uk', 'B-LOC'), ('defence', 'O'), ('secretary', 'O'), (',', 'O'), ('ben', 'B-PER'), ('wallace', 'I-PER'), (',', 'O'), ('said', 'O'), ('on', 'O'), ('thursday', 'O'), ('.', 'O'), ('[SEP]', 'O')]\n" ] } ], "source": [ "outputs = [(tokenizer.decode(token), model.config.id2label[id]) for token, id in zip(tokenized[\"input_ids\"][0], classes)]\n", "print(outputs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the model has correctly identified a named person as well as several locations in this text!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pipeline API" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An alternative way to quickly perform inference with any model on the hub is to use the [Pipeline API](https://huggingface.co/docs/transformers/main_classes/pipelines), which abstracts away all the steps we did manually above. It will perform the preprocessing, forward pass and postprocessing all in a single object.\n", "\n", "Let's showcase this for our trained model:" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Some layers from the model checkpoint at Rocketknight1/distilbert-base-uncased-finetuned-ner were not used when initializing TFDistilBertForTokenClassification: ['dropout_19']\n", "- This IS expected if you are initializing TFDistilBertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing TFDistilBertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", "Some layers of TFDistilBertForTokenClassification were not initialized from the model checkpoint at Rocketknight1/distilbert-base-uncased-finetuned-ner and are newly initialized: ['dropout_79']\n", "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n" ] } ], "source": [ "from transformers import pipeline\n", "\n", "token_classifier = pipeline(\n", "    \"token-classification\",\n", "    \"Rocketknight1/distilbert-base-uncased-finetuned-ner\",\n", "    framework=\"tf\",\n", ")" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'entity': 'B-LOC',\n", "  'score': 0.9885372,\n", "  'index': 1,\n", "  'word': 'britain',\n", "  'start': 0,\n", "  'end': 7},\n", " {'entity': 'B-LOC',\n", "  'score': 0.99329174,\n", "  'index': 19,\n", "  'word': 'ukraine',\n", "  'start': 84,\n", "  'end': 91},\n", " {'entity': 'B-MISC',\n", "  'score': 0.6854797,\n", "  'index': 25,\n", "  'word': 'western',\n", "  'start': 116,\n", "  'end': 123},\n", " {'entity': 'B-LOC',\n", "  'score': 0.9881704,\n", "  'index': 34,\n", "  'word': 'russia',\n", "  'start': 165,\n", "  'end': 171},\n", " {'entity': 'B-LOC',\n", "  'score': 0.5458358,\n", "  'index': 37,\n", "  'word': 'uk',\n", "  'start': 177,\n", "  'end': 179},\n", " {'entity': 'B-PER',\n", "  'score': 0.98485094,\n", "  'index': 41,\n", "  'word': 'ben',\n", "  'start': 199,\n", "  'end': 202},\n", " {'entity': 'I-PER',\n", "  'score': 0.9929636,\n", "  'index': 42,\n", "  'word': 'wallace',\n", "  'start': 203,\n", "  'end': 210}]" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "token_classifier(sample_sentence)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to handling tokenization and running the model, the pipeline API saved us a lot of post-processing work and cleaned up our output too. Nice!" ] } ], "metadata": { "colab": { "name": "Token Classification", "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.3" } }, "nbformat": 4, "nbformat_minor": 1 }