{ "cells": [ { "cell_type": "markdown", "id": "75bcf2ee-3fb0-4c3a-94dc-e0fc867099b2", "metadata": {}, "source": [ "# How to fine-tune a DistilBERT model with ONNX Runtime\n", "\n", "This notebook is largely inspired by the text classification [notebook of Transformers](https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb), which uses PyTorch as the backend for fine-tuning. \n", "\n", "Here, instead of `Trainer`, you will use the `ORTTrainer` class from the [🏎️ Optimum](https://github.com/huggingface/optimum) library, with [ONNX Runtime](https://microsoft.github.io/onnxruntime/) as the backend to accelerate training. " ] }, { "cell_type": "markdown", "id": "a9408e05-175b-4573-8314-9ae4ba7d3453", "metadata": {}, "source": [ "__Dependencies__\n", "\n", "To use ONNX Runtime for training, you need a machine with at least one NVIDIA GPU.\n", "\n", "__The ONNX Runtime training module needs to be properly installed before launching the notebook! Please follow the instructions in [Optimum's documentation](https://huggingface.co/docs/optimum/onnxruntime/trainer) to set up your environment.__\n", "\n", "Check your GPU:" ] }, { "cell_type": "code", "execution_count": 1, "id": "74c87883-595d-47cb-acf9-091340705b33", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Fri Sep 16 09:45:13 2022 \n", "+-----------------------------------------------------------------------------+\n", "| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 11.3 |\n", "|-------------------------------+----------------------+----------------------+\n", "| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n", "| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n", "|===============================+======================+======================|\n", "| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |\n", "| N/A 27C P8 8W / 70W | 0MiB / 15109MiB | 0% Default |\n", "+-------------------------------+----------------------+----------------------+\n", " \n", "+-----------------------------------------------------------------------------+\n", "| Processes: GPU Memory |\n", "| GPU PID Type Process name Usage |\n", "|=============================================================================|\n", "| No running processes found |\n", "+-----------------------------------------------------------------------------+\n" ] } ], "source": [ "!nvidia-smi" ] }, { "cell_type": "markdown", "id": "e379d8ab-9495-4c2a-ace9-f1e605ea9adf", "metadata": {}, "source": [ "If you're opening this notebook on Colab, you will probably need to install 🤗 Optimum, 🤗 Transformers, 🤗 Datasets and 🤗 Evaluate. Uncomment the following cell and run it." ] }, { "cell_type": "code", "execution_count": 3, "id": "0cef7c68-fd55-4ba2-a627-3dcbe93e1d81", "metadata": {}, "outputs": [], "source": [ "# !pip install optimum transformers datasets evaluate " ] }, { "cell_type": "markdown", "id": "039cbbd8-4794-4982-abb8-141586c14e16", "metadata": {}, "source": [ "__[Optional]__ If you want to share your model with the community and generate an inference API, there are a few more steps to follow.\n", "\n", "First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/welcome) if you haven't already!), 
then execute the following cell and input your username and password:" ] }, { "cell_type": "code", "execution_count": 4, "id": "855d1dd9-6f5c-4563-be82-c4a908cbfeab", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!\n", " warnings.warn(\"urllib3 ({}) or chardet ({}) doesn't match a supported \"\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "11aefee5f23142b48cc829d6487f32df", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HTML(value='
\n", " \n", " \n", " \n", " sentence\n", " label\n", " idx\n", " \n", " \n", " \n", " \n", " 0\n", " I didn't help him because I have any sympathy for urban guerillas.\n", " acceptable\n", " 5242\n", " \n", " \n", " 1\n", " I worked on Sunday in the city on that project without a break.\n", " acceptable\n", " 7692\n", " \n", " \n", " 2\n", " In general that he understands what's going on is surprising.\n", " unacceptable\n", " 431\n", " \n", " \n", " 3\n", " Did Bill eat his dinner?\n", " acceptable\n", " 5991\n", " \n", " \n", " 4\n", " The more he eats, the poorer he knows a woman that gets.\n", " unacceptable\n", " 197\n", " \n", " \n", " 5\n", " No reading Shakespeare satisfied me\n", " unacceptable\n", " 7894\n", " \n", " \n", " 6\n", " Amanda drove the package from Boston to New York.\n", " acceptable\n", " 2699\n", " \n", " \n", " 7\n", " I only eat fish drunk raw.\n", " unacceptable\n", " 843\n", " \n", " \n", " 8\n", " I loved the policeman intensely with all my heart.\n", " acceptable\n", " 5795\n", " \n", " \n", " 9\n", " I wanted him to leave.\n", " acceptable\n", " 6043\n", " \n", " \n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_random_elements(dataset[\"train\"])" ] }, { "cell_type": "markdown", "id": "30921055-5430-4017-bc97-99997f95117e", "metadata": {}, "source": [ "The metric is an instance of [`evaluate.EvaluationModule`](https://huggingface.co/docs/evaluate/package_reference/main_classes#evaluate.EvaluationModule):" ] }, { "cell_type": "code", "execution_count": 20, "id": "7dad26a1-edc4-4945-b5cb-4fc0d20b8976", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "EvaluationModule(name: \"glue\", module_type: \"metric\", features: {'predictions': Value(dtype='int64', id=None), 'references': Value(dtype='int64', id=None)}, usage: \"\"\"\n", "Compute GLUE evaluation metric associated to each GLUE dataset.\n", "Args:\n", " predictions: list of predictions to score.\n", " Each translation should be tokenized into a list of tokens.\n", " references: list of lists of references for each translation.\n", " Each reference should be tokenized into a list of tokens.\n", "Returns: depending on the GLUE subset, one or several of:\n", " \"accuracy\": Accuracy\n", " \"f1\": F1 score\n", " \"pearson\": Pearson Correlation\n", " \"spearmanr\": Spearman Correlation\n", " \"matthews_correlation\": Matthew Correlation\n", "Examples:\n", "\n", " >>> glue_metric = evaluate.load('glue', 'sst2') # 'sst2' or any of [\"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]\n", " >>> references = [0, 1]\n", " >>> predictions = [0, 1]\n", " >>> results = glue_metric.compute(predictions=predictions, references=references)\n", " >>> print(results)\n", " {'accuracy': 1.0}\n", "\n", " >>> glue_metric = evaluate.load('glue', 'mrpc') # 'mrpc' or 'qqp'\n", " >>> references = [0, 1]\n", " >>> predictions = [0, 1]\n", " >>> results = glue_metric.compute(predictions=predictions, references=references)\n", " >>> print(results)\n", " {'accuracy': 1.0, 'f1': 1.0}\n", "\n", " >>> glue_metric = evaluate.load('glue', 'stsb')\n", " >>> references = [0., 1., 2., 3., 4., 5.]\n", " >>> predictions = [0., 1., 2., 3., 4., 5.]\n", " >>> results = glue_metric.compute(predictions=predictions, references=references)\n", " >>> print({\"pearson\": round(results[\"pearson\"], 2), \"spearmanr\": round(results[\"spearmanr\"], 2)})\n", " {'pearson': 1.0, 'spearmanr': 1.0}\n", "\n", " >>> glue_metric = evaluate.load('glue', 'cola')\n", " 
>>> references = [0, 1]\n", " >>> predictions = [0, 1]\n", " >>> results = glue_metric.compute(predictions=predictions, references=references)\n", " >>> print(results)\n", " {'matthews_correlation': 1.0}\n", "\"\"\", stored examples: 0)" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "metric" ] }, { "cell_type": "markdown", "id": "c3a4f7ac-9b3b-4fa4-aa1f-62f2aed232fe", "metadata": {}, "source": [ "You can call its `compute` method with your predictions and labels directly and it will return a dictionary with the metric(s) value:" ] }, { "cell_type": "code", "execution_count": 21, "id": "77cedff5-696f-45aa-8ccc-3c002b41feb2", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'matthews_correlation': 0.025861699363244256}" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "\n", "fake_preds = np.random.randint(0, 2, size=(64,))\n", "fake_labels = np.random.randint(0, 2, size=(64,))\n", "metric.compute(predictions=fake_preds, references=fake_labels)" ] }, { "cell_type": "markdown", "id": "f8de6285-2daa-41d5-b6ac-ef16f5b4a59f", "metadata": {}, "source": [ "Note that `evaluate.load` has loaded the proper metric associated with your task, which is:\n", "\n", "- for CoLA: [Matthews Correlation Coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)\n", "- for MNLI (matched or mismatched): Accuracy\n", "- for MRPC: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n", "- for QNLI: Accuracy\n", "- for QQP: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n", "- for RTE: Accuracy\n", "- for SST-2: Accuracy\n", "- for STS-B: [Pearson Correlation Coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) and [Spearman's Rank Correlation Coefficient](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient)\n", "- for WNLI: Accuracy\n", "\n", "so the metric object only computes the one(s) needed for your task." ] }, { "cell_type": "markdown", "id": "b900700b-6a74-4617-ac91-bb22c4a9796a", "metadata": {}, "source": [ "## Preprocessing the data" ] }, { "cell_type": "markdown", "id": "da42948b-d235-46ff-8db7-aef2e31c3588", "metadata": {}, "source": [ "Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer`, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs that the model requires.\n", "\n", "To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:\n", "\n", "- we get a tokenizer that corresponds to the model architecture we want to use,\n", "- we download the vocabulary used when pretraining this specific checkpoint.\n", "\n", "That vocabulary will be cached, so it's not downloaded again the next time we run the cell." ] }, { "cell_type": "code", "execution_count": 24, "id": "be76655e-6b80-43c5-910d-fe506f575825", "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", " \n", "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)" ] }, { "cell_type": "markdown", "id": "06fda37b-67cb-4050-9d0d-29f2a9cd8576", "metadata": {}, "source": [ "We pass along `use_fast=True` to the call above to use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library. Those fast tokenizers are available for almost all models, but if you got an error with the previous call, remove that argument." ] },
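{ "cell_type": "markdown", "id": "c91e0a5d-1111-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "source": [ "A quick sanity check, if you like: the `is_fast` attribute tells you whether a Rust-backed fast tokenizer was actually loaded." ] }, { "cell_type": "code", "execution_count": null, "id": "c91e0a5d-2222-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "outputs": [], "source": [ "# Should be True when a fast (Rust-backed) tokenizer is in use.\n", "tokenizer.is_fast" ] },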
{ "cell_type": "markdown", "id": "ac1c5f4c-34fb-4adf-8f4a-cda1569fc6de", "metadata": {}, "source": [ "You can directly call this tokenizer on one sentence or a pair of sentences:" ] }, { "cell_type": "code", "execution_count": 25, "id": "bced65fa-5808-4aa7-bdfa-a06d8f8f3a4f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [101, 7592, 1010, 2023, 2028, 6251, 999, 102, 1998, 2023, 6251, 3632, 2007, 2009, 1012, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer(\"Hello, this one sentence!\", \"And this sentence goes with it.\")" ] }, { "cell_type": "markdown", "id": "88c5a356-ebcd-4247-bc71-1d4c759cef4b", "metadata": {}, "source": [ "Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later); you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.\n", "\n", "To preprocess our dataset, we will thus need the names of the columns containing the sentence(s). The following dictionary keeps track of the correspondence from task to column names:" ] }, { "cell_type": "code", "execution_count": 26, "id": "1e27933b-23ee-41dd-af38-dd0f68cdf84c", "metadata": {}, "outputs": [], "source": [ "task_to_keys = {\n", " \"cola\": (\"sentence\", None),\n", " \"mnli\": (\"premise\", \"hypothesis\"),\n", " \"mnli-mm\": (\"premise\", \"hypothesis\"),\n", " \"mrpc\": (\"sentence1\", \"sentence2\"),\n", " \"qnli\": (\"question\", \"sentence\"),\n", " \"qqp\": (\"question1\", \"question2\"),\n", " \"rte\": (\"sentence1\", \"sentence2\"),\n", " \"sst2\": (\"sentence\", None),\n", " \"stsb\": (\"sentence1\", \"sentence2\"),\n", " \"wnli\": (\"sentence1\", \"sentence2\"),\n", "}" ] }, { "cell_type": "markdown", "id": "ab9aea16-b01d-4a62-9fca-6f9cec83c509", "metadata": {}, "source": [ "We can double-check that it works on our current dataset:" ] }, { "cell_type": "code", "execution_count": 27, "id": "cc1c8db3-7340-43b2-b693-a0642c1e4de3", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sentence: Our friends won't buy this analysis, let alone the next one we propose.\n" ] } ], "source": [ "sentence1_key, sentence2_key = task_to_keys[task]\n", "if sentence2_key is None:\n", " print(f\"Sentence: {dataset['train'][0][sentence1_key]}\")\n", "else:\n", " print(f\"Sentence 1: {dataset['train'][0][sentence1_key]}\")\n", " print(f\"Sentence 2: {dataset['train'][0][sentence2_key]}\")" ] }, { "cell_type": "markdown", "id": "99efdd2e-49c0-4d93-8f7d-e211a1f2e271", "metadata": {}, "source": [ "We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that an input longer than what the selected model can handle will be truncated to the maximum length accepted by the model."
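] }, { "cell_type": "markdown", "id": "c91e0a5d-3333-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "source": [ "That maximum length is stored on the tokenizer itself; if you are curious, you can check it (512 tokens for `distilbert-base-uncased`):" ] }, { "cell_type": "code", "execution_count": null, "id": "c91e0a5d-4444-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "outputs": [], "source": [ "# Inputs longer than this number of tokens will be truncated.\n", "tokenizer.model_max_length"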
] }, { "cell_type": "code", "execution_count": 28, "id": "b5c423d0-2520-47f3-8592-fb2852e2f2c3", "metadata": {}, "outputs": [], "source": [ "def preprocess_function(examples):\n", " if sentence2_key is None:\n", " return tokenizer(examples[sentence1_key], truncation=True)\n", " return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)" ] }, { "cell_type": "markdown", "id": "8fc0d003-39c4-453d-af44-9ef53362e1e8", "metadata": {}, "source": [ "This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:" ] }, { "cell_type": "code", "execution_count": 29, "id": "6de42bf9-5ed0-4a4d-a6b5-54103a0637ad", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [[101, 2256, 2814, 2180, 1005, 1056, 4965, 2023, 4106, 1010, 2292, 2894, 1996, 2279, 2028, 2057, 16599, 1012, 102], [101, 2028, 2062, 18404, 2236, 3989, 1998, 1045, 1005, 1049, 3228, 2039, 1012, 102], [101, 2028, 2062, 18404, 2236, 3989, 2030, 1045, 1005, 1049, 3228, 2039, 1012, 102], [101, 1996, 2062, 2057, 2817, 16025, 1010, 1996, 13675, 16103, 2121, 2027, 2131, 1012, 102], [101, 2154, 2011, 2154, 1996, 8866, 2024, 2893, 14163, 8024, 3771, 1012, 102]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "preprocess_function(dataset['train'][:5])" ] }, { "cell_type": "markdown", "id": "0b5e7585-d18f-481a-bdb8-1a96aac0fee1", "metadata": {}, "source": [ "To apply this function on all the sentences (or pairs of sentences) in our dataset, we just use the `map` method of the `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command." ] }, { "cell_type": "code", "execution_count": 31, "id": "fd5db7e6-a30f-46b4-a84c-2622d6c3f124", "metadata": {}, "outputs": [], "source": [ "encoded_dataset = dataset.map(preprocess_function, batched=True)" ] }, { "cell_type": "markdown", "id": "1ad549b5-404b-488e-a213-b248a2f8dbff", "metadata": {}, "source": [ "Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to `map` has changed (and thus requires not using the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to bypass the cached files and force the preprocessing to be applied again.\n", "\n", "Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to process the texts in a batch concurrently." ] },
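{ "cell_type": "markdown", "id": "c91e0a5d-5555-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "source": [ "If you want to see what the preprocessing added, you can peek at one encoded example; the tokenized columns (`input_ids`, `attention_mask`) now live alongside the original ones:" ] }, { "cell_type": "code", "execution_count": null, "id": "c91e0a5d-6666-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "outputs": [], "source": [ "# The original columns are kept; the tokenizer outputs are added next to them.\n", "encoded_dataset[\"train\"][0].keys()" ] },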
{ "cell_type": "markdown", "id": "e7fdbae4-9b63-4e90-9af6-290023178dd3", "metadata": {}, "source": [ "## Fine-tuning the model" ] }, { "cell_type": "markdown", "id": "97ddcc9e-ce36-414d-9f70-47da3ac94185", "metadata": {}, "source": [ "Now that our data is ready, we can download the pretrained model and fine-tune it. Since all our tasks are about sentence classification, we use the `AutoModelForSequenceClassification` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem (which is always 2, except for STS-B, which is a regression problem, and MNLI, where we have 3 labels):" ] }, { "cell_type": "code", "execution_count": 34, "id": "3d772af8-b698-4176-80a1-8ed1ee21caf3", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_projector.bias', 'vocab_transform.bias', 'vocab_projector.weight', 'vocab_layer_norm.bias', 'vocab_layer_norm.weight', 'vocab_transform.weight']\n", "- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", "Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.bias', 'classifier.weight']\n", "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n" ] } ], "source": [ "from transformers import AutoModelForSequenceClassification\n", "from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments\n", "\n", "num_labels = 3 if task.startswith(\"mnli\") else 1 if task==\"stsb\" else 2\n", "model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)" ] }, { "cell_type": "markdown", "id": "6a1e0489-bf2b-40f1-87bf-74d8693ece02", "metadata": {}, "source": [ "The warning is telling us we are throwing away some weights (the `vocab_transform` and `vocab_layer_norm` layers) and randomly initializing some others (the `pre_classifier` and `classifier` layers). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new head for which we don't have pretrained weights. The library therefore warns us that we should fine-tune this model before using it for inference, which is exactly what we are going to do." ] },
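{ "cell_type": "markdown", "id": "c91e0a5d-7777-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "source": [ "You can inspect the freshly initialized head to confirm it has the right number of outputs (2 for CoLA). The `pre_classifier` and `classifier` attribute names below are specific to DistilBERT:" ] }, { "cell_type": "code", "execution_count": null, "id": "c91e0a5d-8888-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "outputs": [], "source": [ "# The randomly initialized layers mentioned in the warning above:\n", "# a 768 -> 768 projection followed by a 768 -> num_labels output layer.\n", "print(model.pre_classifier)\n", "print(model.classifier)" ] },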
{ "cell_type": "markdown", "id": "e17f5a56-f117-429c-9338-d5b9d5bdb6e1", "metadata": {}, "source": [ "To instantiate an `ORTTrainer`, we will need to define two more things. The most important is the [`ORTTrainingArguments`](https://huggingface.co/docs/optimum/onnxruntime/trainer#optimum.onnxruntime.ORTTrainingArguments), which is a class that contains all the attributes to customize the training. You can also use `TrainingArguments` from Transformers, but `ORTTrainingArguments` enables more optimized features of ONNX Runtime. It requires one folder name, which will be used to save the checkpoints of the model; all other arguments are optional:" ] }, { "cell_type": "code", "execution_count": 35, "id": "f2a2966c-3377-4e22-a85e-fd83c3ae043e", "metadata": {}, "outputs": [], "source": [ "metric_name = \"pearson\" if task == \"stsb\" else \"matthews_correlation\" if task == \"cola\" else \"accuracy\"\n", "model_name = model_checkpoint.split(\"/\")[-1]\n", "\n", "args = ORTTrainingArguments(\n", " f\"{model_name}-finetuned-{task}\",\n", " evaluation_strategy = \"epoch\",\n", " save_strategy = \"epoch\",\n", " learning_rate=2e-5,\n", " per_device_train_batch_size=batch_size,\n", " per_device_eval_batch_size=batch_size,\n", " num_train_epochs=5,\n", " weight_decay=0.01,\n", " load_best_model_at_end=True,\n", " metric_for_best_model=metric_name,\n", " optim=\"adamw_ort_fused\",\n", " # push_to_hub=True,\n", ")" ] }, { "cell_type": "markdown", "id": "54c19a05-4e34-4ce4-9497-5d884a81aabd", "metadata": {}, "source": [ "Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the `batch_size` defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay. Since the best model might not be the one at the end of training, we ask the `ORTTrainer` to load the best model it saved (according to `metric_name`) at the end of training.\n", "\n", "The last argument, `push_to_hub=True` (commented out above), sets up everything needed to push the model to the [Hub](https://huggingface.co/models) regularly during training. Leave it out if you didn't follow the authentication steps at the top of the notebook. If you want to save your model locally under a name that is different from the name of the repository it will be pushed to, or if you want to push your model under an organization and not your namespace, use the `hub_model_id` argument to set the repo name (it needs to be the full name, including your namespace: for instance `\"optimum/bert-finetuned-mrpc\"`)." ] },
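{ "cell_type": "markdown", "id": "c91e0a5d-9999-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "source": [ "As a rough sanity check of these settings (assuming a single GPU and CoLA's 8,551 training examples), you can estimate how many optimizer steps training will take. With the batch size of 16 used here, this works out to 535 steps per epoch, i.e. 2,675 steps in total, which is exactly what the progress bar below will show:" ] }, { "cell_type": "code", "execution_count": null, "id": "c91e0a5d-aaaa-4f2e-8b5a-0a9f3c6d7e81", "metadata": {}, "outputs": [], "source": [ "import math\n", "\n", "# One optimizer step per batch; the last, smaller batch still counts as a step.\n", "steps_per_epoch = math.ceil(len(encoded_dataset[\"train\"]) / batch_size)\n", "print(steps_per_epoch, int(steps_per_epoch * args.num_train_epochs))" ] },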
{ "cell_type": "markdown", "id": "b6e316ab-b96a-4029-b5db-53f9c449c902", "metadata": {}, "source": [ "The last thing to define for our `ORTTrainer` is how to compute the metrics from the predictions. We need to define a function for this that will just use the `metric` we loaded earlier; the only preprocessing we have to do is to take the argmax of our predicted logits (or just squeeze the last axis in the case of STS-B):" ] }, { "cell_type": "code", "execution_count": 36, "id": "feea5dbe-2514-4817-a186-56e3a45adc66", "metadata": {}, "outputs": [], "source": [ "def compute_metrics(eval_pred):\n", " predictions, labels = eval_pred\n", " if task != \"stsb\":\n", " predictions = np.argmax(predictions, axis=1)\n", " else:\n", " predictions = predictions[:, 0]\n", " return metric.compute(predictions=predictions, references=labels)" ] }, { "cell_type": "markdown", "id": "fdc095b4-9831-48d4-9775-3a13b9a2fd9c", "metadata": {}, "source": [ "Then we just need to pass all of this along with our datasets to the `ORTTrainer`:" ] }, { "cell_type": "code", "execution_count": 39, "id": "4e1b105a-1208-4862-bdbd-2959a8c56135", "metadata": {}, "outputs": [], "source": [ "validation_key = \"validation_mismatched\" if task == \"mnli-mm\" else \"validation_matched\" if task == \"mnli\" else \"validation\"\n", "trainer = ORTTrainer(\n", " model=model,\n", " args=args,\n", " train_dataset=encoded_dataset[\"train\"],\n", " eval_dataset=encoded_dataset[validation_key],\n", " compute_metrics=compute_metrics,\n", " tokenizer=tokenizer,\n", " feature=\"sequence-classification\",\n", ")" ] }, { "cell_type": "markdown", "id": "93c9b666-ba55-42bb-8050-1d14fa074630", "metadata": {}, "source": [ "You might wonder why we pass along the `tokenizer` when we already preprocessed our data. This is because we will use it one last time to make all the samples we gather the same length by applying padding, which requires knowing the model's preferences regarding padding (to the left or right? with which token?). The `tokenizer` has a `pad` method that will do all of this right for us, and the `ORTTrainer` will use it. You can customize this part by defining and passing your own `data_collator`, which will receive the samples like the dictionaries seen above and will need to return a dictionary of tensors." ] }, { "cell_type": "markdown", "id": "8e31ff86-10c4-4462-a020-4830810dbec1", "metadata": {}, "source": [ "We can now fine-tune our model by just calling the `train` method:" ] }, { "cell_type": "code", "execution_count": 40, "id": "ebc0b60f-2d46-49b6-bbdd-b19b51f2bd64", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence, idx. If sentence, idx are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.\n", "You're using a DistilBertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\n", "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_training_manager.py:191: UserWarning: Fast path enabled - skipping checks. Rebuild graph: True, Execution agent: True, Device check: True\n", " warnings.warn(\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. 
Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_logger.py:52: UserWarning: There were one or more warnings or errors raised while exporting the PyTorch model. 
Please enable INFO level logging to view all warnings and errors.\n", " warnings.warn(\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n" ] }, { "data": { "text/html": [ "\n", "
<div>\n", "  <progress value='2675' max='2675' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "  [2675/2675 02:03, Epoch 5/5]\n", "</div>\n", "<table border='1' class='dataframe'>\n", "  <thead>\n", "    <tr style='text-align: left;'>\n", "      <th>Epoch</th>\n", "      <th>Training Loss</th>\n", "      <th>Validation Loss</th>\n", "      <th>Matthews Correlation</th>\n", "    </tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr><td>1</td><td>0.523800</td><td>0.540702</td><td>0.406863</td></tr>\n", "    <tr><td>2</td><td>0.348400</td><td>0.495834</td><td>0.510895</td></tr>\n", "    <tr><td>3</td><td>0.240700</td><td>0.572565</td><td>0.529083</td></tr>\n", "    <tr><td>4</td><td>0.185100</td><td>0.765027</td><td>0.523835</td></tr>\n", "    <tr><td>5</td><td>0.137900</td><td>0.807799</td><td>0.532764</td></tr>\n", "  </tbody>\n", "</table><p>
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence, idx. If sentence, idx are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.\n", "[INFO] Evaluating with PyTorch backend. If you want to use ONNX Runtime for the evaluation, set `trainer.evaluate(inference_with_ort=True)`.\n", "***** Running Evaluation *****\n", " Num examples = 1043\n", " Batch size = 16\n", "Saving model checkpoint to distilbert-base-uncased-finetuned-cola/checkpoint-535\n", "Configuration saved in distilbert-base-uncased-finetuned-cola/checkpoint-535/config.json\n", "Model weights saved in distilbert-base-uncased-finetuned-cola/checkpoint-535/pytorch_model.bin\n", "tokenizer config file saved in distilbert-base-uncased-finetuned-cola/checkpoint-535/tokenizer_config.json\n", "Special tokens file saved in distilbert-base-uncased-finetuned-cola/checkpoint-535/special_tokens_map.json\n", "The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence, idx. If sentence, idx are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.\n", "[INFO] Evaluating with PyTorch backend. If you want to use ONNX Runtime for the evaluation, set `trainer.evaluate(inference_with_ort=True)`.\n", "***** Running Evaluation *****\n", " Num examples = 1043\n", " Batch size = 16\n", "Saving model checkpoint to distilbert-base-uncased-finetuned-cola/checkpoint-1070\n", "Configuration saved in distilbert-base-uncased-finetuned-cola/checkpoint-1070/config.json\n", "Model weights saved in distilbert-base-uncased-finetuned-cola/checkpoint-1070/pytorch_model.bin\n", "tokenizer config file saved in distilbert-base-uncased-finetuned-cola/checkpoint-1070/tokenizer_config.json\n", "Special tokens file saved in distilbert-base-uncased-finetuned-cola/checkpoint-1070/special_tokens_map.json\n", "The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence, idx. If sentence, idx are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.\n", "[INFO] Evaluating with PyTorch backend. If you want to use ONNX Runtime for the evaluation, set `trainer.evaluate(inference_with_ort=True)`.\n", "***** Running Evaluation *****\n", " Num examples = 1043\n", " Batch size = 16\n", "Saving model checkpoint to distilbert-base-uncased-finetuned-cola/checkpoint-1605\n", "Configuration saved in distilbert-base-uncased-finetuned-cola/checkpoint-1605/config.json\n", "Model weights saved in distilbert-base-uncased-finetuned-cola/checkpoint-1605/pytorch_model.bin\n", "tokenizer config file saved in distilbert-base-uncased-finetuned-cola/checkpoint-1605/tokenizer_config.json\n", "Special tokens file saved in distilbert-base-uncased-finetuned-cola/checkpoint-1605/special_tokens_map.json\n", "The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence, idx. If sentence, idx are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.\n", "[INFO] Evaluating with PyTorch backend. 
If you want to use ONNX Runtime for the evaluation, set `trainer.evaluate(inference_with_ort=True)`.\n", "***** Running Evaluation *****\n", " Num examples = 1043\n", " Batch size = 16\n", "Saving model checkpoint to distilbert-base-uncased-finetuned-cola/checkpoint-2140\n", "Configuration saved in distilbert-base-uncased-finetuned-cola/checkpoint-2140/config.json\n", "Model weights saved in distilbert-base-uncased-finetuned-cola/checkpoint-2140/pytorch_model.bin\n", "tokenizer config file saved in distilbert-base-uncased-finetuned-cola/checkpoint-2140/tokenizer_config.json\n", "Special tokens file saved in distilbert-base-uncased-finetuned-cola/checkpoint-2140/special_tokens_map.json\n", "The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence, idx. If sentence, idx are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.\n", "[INFO] Evaluating with PyTorch backend. If you want to use ONNX Runtime for the evaluation, set `trainer.evaluate(inference_with_ort=True)`.\n", "***** Running Evaluation *****\n", " Num examples = 1043\n", " Batch size = 16\n", "Saving model checkpoint to distilbert-base-uncased-finetuned-cola/checkpoint-2675\n", "Configuration saved in distilbert-base-uncased-finetuned-cola/checkpoint-2675/config.json\n", "Model weights saved in distilbert-base-uncased-finetuned-cola/checkpoint-2675/pytorch_model.bin\n", "tokenizer config file saved in distilbert-base-uncased-finetuned-cola/checkpoint-2675/tokenizer_config.json\n", "Special tokens file saved in distilbert-base-uncased-finetuned-cola/checkpoint-2675/special_tokens_map.json\n", "Loading best model from distilbert-base-uncased-finetuned-cola/checkpoint-2675 (score: 0.5327637463001902).\n" ] }, { "data": { "text/plain": [ "TrainOutput(global_step=2675, training_loss=0.2764874895042348, metrics={'train_runtime': 129.4511, 'train_samples_per_second': 330.279, 'train_steps_per_second': 20.664, 'total_flos': 229309863736728.0, 'train_loss': 0.2764874895042348, 'epoch': 5.0})" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "trainer.train()" ] }, { "cell_type": "markdown", "id": "a748fb21-3588-4e6e-9de2-0b5aa8bb4402", "metadata": {}, "source": [ "## Evaluating your model" ] }, { "cell_type": "markdown", "id": "130a395e-e1a4-4d94-aa2a-ef649d2eafe5", "metadata": {}, "source": [ "Evaluate the performance of the model that you just fine-tuned on the validation dataset that you passed to `ORTTrainer`, by just calling the `evaluate` method. \n", "\n", "If you set `inference_with_ort=True`, the inference will be done with the ONNX Runtime backend; otherwise, it will run with PyTorch as the backend." ] }, { "cell_type": "code", "execution_count": 41, "id": "42971086-8c8e-4e5f-8cc2-1c90c2ccf46a", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence, idx. If sentence, idx are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.\n", "Using framework PyTorch: 1.11.0+cu113\n", "/usr/local/lib/python3.8/dist-packages/transformers/models/distilbert/modeling_distilbert.py:213: TracerWarning: torch.tensor results are registered as constants in the trace. 
You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\n", " mask, torch.tensor(torch.finfo(scores.dtype).min)\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of org.pytorch.aten::ATen type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "WARNING: The shape inference of com.microsoft::SoftmaxCrossEntropyLossInternal type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "Warning: Checker does not support models with experimental ops: ATen\n", "2022-09-16 13:13:24.566099390 [W:onnxruntime:, graph.cc:106 MergeShapeInfo] Error merging shape info for output. 'loss' source:{} target:{-1,-1}. 
Falling back to lenient merge.\n", "2022-09-16 13:13:24.566263091 [W:onnxruntime:, graph.cc:3472 CleanUnusedInitializersAndNodeArgs] Removing initializer 'org.pytorch.aten::ATen_123'. It is not used by any node and should be removed from the model.\n", "2022-09-16 13:13:24.566270525 [W:onnxruntime:, graph.cc:3472 CleanUnusedInitializersAndNodeArgs] Removing initializer 'org.pytorch.aten::ATen_120'. It is not used by any node and should be removed from the model.\n", "2022-09-16 13:13:24.566274859 [W:onnxruntime:, graph.cc:3472 CleanUnusedInitializersAndNodeArgs] Removing initializer 'org.pytorch.aten::ATen_119'. It is not used by any node and should be removed from the model.\n", "2022-09-16 13:13:24.566278842 [W:onnxruntime:, graph.cc:3472 CleanUnusedInitializersAndNodeArgs] Removing initializer 'org.pytorch.aten::ATen_118'. It is not used by any node and should be removed from the model.\n", "2022-09-16 13:13:24.566289045 [W:onnxruntime:, graph.cc:3472 CleanUnusedInitializersAndNodeArgs] Removing initializer 'org.pytorch.aten::ATen_122'. It is not used by any node and should be removed from the model.\n", "2022-09-16 13:13:24.566329639 [W:onnxruntime:, graph.cc:3472 CleanUnusedInitializersAndNodeArgs] Removing initializer 'org.pytorch.aten::ATen_124'. It is not used by any node and should be removed from the model.\n" ] }, { "data": { "text/html": [ "\n", "
<div>\n", "  <progress value='66' max='66' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "  [66/66 00:00]\n", "</div>
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ "{'eval_loss': 0.8077986240386963,\n", " 'eval_matthews_correlation': 0.5327637463001902,\n", " 'eval_runtime': 5.997,\n", " 'eval_samples_per_second': 173.92,\n", " 'eval_steps_per_second': 11.005,\n", " 'epoch': 5.0}" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "trainer.evaluate(inference_with_ort=True)" ] }, { "cell_type": "markdown", "id": "a32ba2ab-ccc3-4669-b04c-a22bdf08ecd8", "metadata": {}, "source": [ "## __Extended reading__\n", "\n", "Now check your trained ONNX model with [Netron](https://netron.app/), and you might notice that the computation graph is not yet fully optimized. Want to accelerate even more? \n", "\n", "Check the [graph optimizers](https://huggingface.co/docs/optimum/onnxruntime/optimization) and [quantizers](https://huggingface.co/docs/optimum/onnxruntime/quantization) of [Optimum](https://github.com/huggingface/optimum) 🤗! " ] }, { "cell_type": "code", "execution_count": null, "id": "4d2ee6ee-6b52-4e40-a819-872d308a31c1", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.8" } }, "nbformat": 4, "nbformat_minor": 5 }