{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "X4cRE8IbIrIV" }, "source": [ "# Quantizing a model with Intel Neural Compressor (INC) for text classification tasks" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook shows how to apply different quantization approaches such as dynamic, static and aware training quantization, using the [Intel Neural Compressor](https://github.com/intel/neural-compressor) (INC) library, for any tasks of the GLUE benchmark. This is made possible thanks to 🤗 [Optimum](https://github.com/huggingface/optimum), an extension of 🤗 [Transformers](https://github.com/huggingface/transformers), providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardwares. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers, 🤗 Datasets and 🤗 Optimum. Uncomment the following cell and run it." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "MOsHUjgdIrIW", "outputId": "f84a093e-147f-470e-aad9-80fb51193c8e" }, "outputs": [], "source": [ "#! pip install datasets transformers optimum[intel]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make sure your version of 🤗 Optimum is at least 1.2.3 since the functionality was introduced in that version:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1.2.3\n" ] } ], "source": [ "from optimum.intel.version import __version__\n", "\n", "print(__version__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that quantization is currently only supported for CPUs, so we will not be utilizing GPUs / CUDA in this notebook. " ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The GLUE Benchmark is a group of nine classification tasks on sentences or pairs of sentences which are:\n", "\n", "- [CoLA](https://nyu-mll.github.io/CoLA/) (Corpus of Linguistic Acceptability) Determine if a sentence is grammatically correct or not.\n", "- [MNLI](https://arxiv.org/abs/1704.05426) (Multi-Genre Natural Language Inference) Determine if a sentence entails, contradicts or is unrelated to a given hypothesis. This dataset has two versions, one with the validation and test set coming from the same distribution, another called mismatched where the validation and test use out-of-domain data.\n", "- [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) (Microsoft Research Paraphrase Corpus) Determine if two sentences are paraphrases from one another or not.\n", "- [QNLI](https://rajpurkar.github.io/SQuAD-explorer/) (Question-answering Natural Language Inference) Determine if the answer to a question is in the second sentence or not. 
This dataset is built from the SQuAD dataset.\n", "- [QQP](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Quora Question Pairs) Determine if two questions are semantically equivalent or not.\n", "- [RTE](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment) (Recognizing Textual Entailment) Determine if a sentence entails a given hypothesis or not.\n", "- [SST-2](https://nlp.stanford.edu/sentiment/index.html) (Stanford Sentiment Treebank) Determine if the sentence has a positive or negative sentiment.\n", "- [STS-B](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) (Semantic Textual Similarity Benchmark) Determine the similarity of two sentences with a score from 1 to 5.\n", "- [WNLI](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html) (Winograd Natural Language Inference) Determine if a sentence with an anonymous pronoun and a sentence with this pronoun replaced are entailed or not. This dataset is built from the Winograd Schema Challenge dataset.\n", "\n", "We will see how to apply post-training static quantization on a DistilBERT model fine-tuned on the SST-2 task:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "id": "zVvslsfMIrIh" }, "outputs": [], "source": [ "GLUE_TASKS = [\"cola\", \"mnli\", \"mnli-mm\", \"mrpc\", \"qnli\", \"qqp\", \"rte\", \"sst2\", \"stsb\", \"wnli\"]\n", "task = \"sst2\"\n", "model_checkpoint = \"distilbert-base-uncased-finetuned-sst-2-english\"\n", "batch_size = 16\n", "max_train_samples = 100" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can set our `quantization_approach` to either `dynamic`, `static` or `aware_training` in order to apply post-training dynamic quantization, post-training static quantization or quantization aware training, respectively.\n", "- Post-training static quantization: introduces an additional calibration step, where data is fed through the network in order to compute the quantization parameters of the activations.\n", "- Post-training dynamic quantization: computes the quantization parameters of the activations dynamically, based on the data observed at runtime.\n", "- Quantization aware training: simulates the effects of quantization during training in order to alleviate its impact on the model's performance.\n", "\n", "Quantization will be applied to the embeddings and to the linear layers, as well as to their corresponding input activations." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "SUPPORTED_QUANTIZATION_APPROACH = [\"dynamic\", \"static\", \"aware_training\"]\n", "\n", "quantization_approach = \"static\"" ] }, { "cell_type": "markdown", "metadata": { "id": "whPRbBNbIrIl" }, "source": [ "## Loading the dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "W7QYTpxXIrIl" }, "source": [ "We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need for evaluation (to compare our quantized model to the baseline). This can be easily done with the functions `load_dataset` and `load_metric`. " ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "IreSlFmlIrIm" }, "outputs": [], "source": [ "from datasets import load_dataset, load_metric" ] }, { "cell_type": "markdown", "metadata": { "id": "CKx2zKs5IrIq" }, "source": [ "Apart from `mnli-mm`, which is a special code, we can directly pass our task name to those functions. `load_dataset` will cache the dataset to avoid downloading it again the next time you run this cell."
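, "\n", "\n", "To get a feel for how the `metric` object works, you can call its `compute` method directly on some predictions and references. The snippet below is a purely illustrative sanity check (random predictions on 64 made-up examples, not part of the pipeline) that you can run once the next cell has created `metric`:\n", "\n", "```python\n", "import numpy as np\n", "\n", "# Illustrative only: random predictions and labels, not real model outputs.\n", "fake_preds = np.random.randint(0, 2, size=(64,))\n", "fake_labels = np.random.randint(0, 2, size=(64,))\n", "\n", "# For SST-2 this returns the accuracy, e.g. a value close to {'accuracy': 0.5}.\n", "metric.compute(predictions=fake_preds, references=fake_labels)\n", "```"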
] }, { "cell_type": "code", "execution_count": 7, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 270, "referenced_widgets": [ "69caab03d6264fef9fc5649bffff5e20", "3f74532faa86412293d90d3952f38c4a", "50615aa59c7247c4804ca5cbc7945bd7", "fe962391292a413ca55dc932c4279fa7", "299f4b4c07654e53a25f8192bd1d7bbd", "ad04ed1038154081bbb0c1444784dcc2", "7c667ad22b5740d5a6319f1b1e3a8097", "46c2b043c0f84806978784a45a4e203b", "80e2943be35f46eeb24c8ab13faa6578", "de5956b5008d4fdba807bae57509c393", "931db1f7a42f4b46b7ff8c2e1262b994", "6c1db72efff5476e842c1386fadbbdba", "ccd2f37647c547abb4c719b75a26f2de", "d30a66df5c0145e79693e09789d96b81", "5fa26fc336274073abbd1d550542ee33", "2b34de08115d49d285def9269a53f484", "d426be871b424affb455aeb7db5e822e", "160bf88485f44f5cb6eaeecba5e0901f", "745c0d47d672477b9bb0dae77b926364", "d22ab78269cd4ccfbcf70c707057c31b", "d298eb19eeff453cba51c2804629d3f4", "a7204ade36314c86907c562e0a2158b8", "e35d42b2d352498ca3fc8530393786b2", "75103f83538d44abada79b51a1cec09e", "f6253931d90543e9b5fd0bb2d615f73a", "051aa783ff9e47e28d1f9584043815f5", "0984b2a14115454bbb009df71c1cf36f", "8ab9dfce29854049912178941ef1b289", "c9de740e007141958545e269372780a4", "cbea68b25d6d4ba09b2ce0f27b1726d5", "5781fc45cf8d486cb06ed68853b2c644", "d2a92143a08a4951b55bab9bc0a6d0d3", "a14c3e40e5254d61ba146f6ec88eae25", "c4ffe6f624ce4e978a0d9b864544941a", "1aca01c1d8c940dfadd3e7144bb35718", "9fbbaae50e6743f2aa19342152398186", "fea27ca6c9504fc896181bc1ff5730e5", "940d00556cb849b3a689d56e274041c2", "5cdf9ed939fb42d4bf77301c80b8afca", "94b39ccfef0b4b08bf2fb61bb0a657c1", "9a55087c85b74ea08b3e952ac1d73cbe", "2361ab124daf47cc885ff61f2899b2af", "1a65887eb37747ddb75dc4a40f7285f2", "3c946e2260704e6c98593136bd32d921", "50d325cdb9844f62a9ecc98e768cb5af", "aa781f0cfe454e9da5b53b93e9baabd8", "6bb68d3887ef43809eb23feb467f9723", "7e29a8b952cf4f4ea42833c8bf55342f", "dd5997d01d8947e4b1c211433969b89b", "2ace4dc78e2f4f1492a181bcd63304e7", "bbee008c2791443d8610371d1f16b62b", "31b1c8a2e3334b72b45b083688c1a20c", "7fb7c36adc624f7dbbcb4a831c1e4f63", "0b7c8f1939074794b3d9221244b1344d", "a71908883b064e1fbdddb547a8c41743", "2f5223f26c8541fc87e91d2205c39995" ] }, "id": "s_AY1ATSIrIq", "outputId": "fd0578d1-8895-443d-b56f-5908de9f1b6b" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2022-06-14 15:28:50 [WARNING] Reusing dataset glue (/home/ella/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "bc4ed01451ab449c800ca5ffd30dcbc7", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/3 [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "actual_task = \"mnli\" if task == \"mnli-mm\" else task\n", "dataset = load_dataset(\"glue\", actual_task)\n", "metric = load_metric(\"glue\", actual_task)" ] }, { "cell_type": "markdown", "metadata": { "id": "YOCrQwPoIrJG" }, "source": [ "Note that `load_metric` has loaded the proper metric associated to your task, which is:\n", "\n", "- for CoLA: [Matthews Correlation Coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)\n", "- for MNLI (matched or mismatched): Accuracy\n", "- for MRPC: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n", "- for QNLI: Accuracy\n", "- for QQP: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n", "- for RTE: Accuracy\n", "- for SST-2: Accuracy\n", "- for STS-B: [Pearson Correlation 
Coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) and [Spearman's Rank Correlation Coefficient](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient)\n", "- for WNLI: Accuracy\n", "\n", "so the metric object only computes the one(s) needed for your task." ] }, { "cell_type": "markdown", "metadata": { "id": "n9qywopnIrJH" }, "source": [ "## Preprocessing the data" ] }, { "cell_type": "markdown", "metadata": { "id": "YVx71GdAIrJH" }, "source": [ "Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer`, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs that the model requires.\n", "\n", "To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure that:\n", "\n", "- we get a tokenizer that corresponds to the model architecture we want to use\n", "- we download the vocabulary used when pretraining this specific checkpoint\n", "\n", "That vocabulary will be cached, so it's not downloaded again the next time we run the cell." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "eXNLu_-nIrJI" }, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", " \n", "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)" ] }, { "cell_type": "markdown", "metadata": { "id": "qo_0B1M2IrJM" }, "source": [ "To preprocess our dataset, we will thus need the names of the columns containing the sentence(s). The following dictionary keeps track of the correspondence between tasks and column names:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "fyGdtK9oIrJM" }, "outputs": [], "source": [ "task_to_keys = {\n", " \"cola\": (\"sentence\", None),\n", " \"mnli\": (\"premise\", \"hypothesis\"),\n", " \"mnli-mm\": (\"premise\", \"hypothesis\"),\n", " \"mrpc\": (\"sentence1\", \"sentence2\"),\n", " \"qnli\": (\"question\", \"sentence\"),\n", " \"qqp\": (\"question1\", \"question2\"),\n", " \"rte\": (\"sentence1\", \"sentence2\"),\n", " \"sst2\": (\"sentence\", None),\n", " \"stsb\": (\"sentence1\", \"sentence2\"),\n", " \"wnli\": (\"sentence1\", \"sentence2\"),\n", "}" ] }, { "cell_type": "markdown", "metadata": { "id": "xbqtC4MrIrJO" }, "source": [ "We can double-check that it works on our current dataset:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "id": "19GG646uIrJO", "outputId": "0cb4a520-817e-4f92-8de8-bb45df367657" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sentence: hide new secretions from the parental units \n" ] } ], "source": [ "sentence1_key, sentence2_key = task_to_keys[task]\n", "if sentence2_key is None:\n", " print(f\"Sentence: {dataset['train'][0][sentence1_key]}\")\n", "else:\n", " print(f\"Sentence 1: {dataset['train'][0][sentence1_key]}\")\n", " print(f\"Sentence 2: {dataset['train'][0][sentence2_key]}\")" ] }, { "cell_type": "markdown", "metadata": { "id": "2C0hcmp9IrJQ" }, "source": [ "We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that any input longer than what the selected model can handle is truncated to the maximum length the model accepts."
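, "\n", "\n", "To see what the tokenizer returns, you can also call it directly on one or two sentences. This is a purely illustrative example (the sentences below are made up and are not used anywhere else in this notebook):\n", "\n", "```python\n", "# A single sentence returns the token IDs and the attention mask.\n", "tokenizer(\"Hello, this is one sentence!\")\n", "\n", "# A pair of sentences (e.g. for MRPC or MNLI) is passed as two arguments.\n", "tokenizer(\"Hello, this is one sentence!\", \"And this one goes with it.\")\n", "```"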
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "max_seq_length = min(128, tokenizer.model_max_length)\n", "padding = \"max_length\"\n", "\n", "def preprocess_function(examples):\n", " args = (\n", " (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key])\n", " )\n", " return tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "zS-6iXTkIrJT" }, "source": [ "To apply this function on all the sentences (or pairs of sentences) in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "id": "DDtsaJeVIrJT", "outputId": "aa4734bf-4ef5-4437-9948-2c16363da719" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2022-06-14 15:28:55 [WARNING] Loading cached processed dataset at /home/ella/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-fa763904b981dcc5.arrow\n", "2022-06-14 15:28:55 [WARNING] Loading cached processed dataset at /home/ella/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-fdce79612ea077d3.arrow\n", "2022-06-14 15:28:55 [WARNING] Loading cached processed dataset at /home/ella/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-c3c52d2d662a80eb.arrow\n" ] } ], "source": [ "encoded_dataset = dataset.map(preprocess_function, batched=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "voWiw8C7IrJV" }, "source": [ "Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus requires to not use the cache data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files, you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.\n", "\n", "Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently." ] }, { "cell_type": "markdown", "metadata": { "id": "545PP3o8IrJV" }, "source": [ "## Applying quantization on the model" ] }, { "cell_type": "markdown", "metadata": { "id": "FBiW8UpKIrJW" }, "source": [ "Now that our data is ready, we can download the pretrained model and fine-tune it. Since all our tasks are about sentence classification, we use the `AutoModelForSequenceClassification` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us. 
The number of labels for our problem (which is always 2, except for STS-B, which is a regression problem, and MNLI, where we have 3 labels) is already part of the fine-tuned checkpoint's configuration, so we do not need to specify it here:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "id": "TlqNaB8jIrJW", "outputId": "84916cf3-6e6c-47f3-d081-032ec30a4132" }, "outputs": [], "source": [ "from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer\n", "\n", "fp_model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To instantiate a `Trainer`, we will need to define two more things. The most important is the [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), which is a class that contains all the attributes to customize the training. It requires a folder name, which will be used to save the checkpoints of the model; all other arguments are optional:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "id": "Bliy8zgjIrJY" }, "outputs": [], "source": [ "model_name = model_checkpoint.split(\"/\")[-1]\n", "output = f\"{model_name}-finetuned-{task}\"\n", "\n", "args = TrainingArguments(\n", " output,\n", " evaluation_strategy = \"epoch\",\n", " save_strategy = \"epoch\",\n", " per_device_train_batch_size=batch_size,\n", " per_device_eval_batch_size=batch_size,\n", " dataloader_drop_last=False if quantization_approach == \"dynamic\" else True,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "7sZOdRlRIrJd" }, "source": [ "The last thing to define for our `Trainer` is how to compute the metrics from the predictions. We need to define a function for this, which will just use the `metric` we loaded earlier; the only preprocessing we have to do is to take the argmax of our predicted logits (or just squeeze the last axis in the case of STS-B):" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "id": "UmvbnJ9JIrJd" }, "outputs": [], "source": [ "import numpy as np\n", "\n", "def compute_metrics(eval_pred):\n", " predictions, labels = eval_pred\n", " if task != \"stsb\":\n", " predictions = np.argmax(predictions, axis=1)\n", " else:\n", " predictions = predictions[:, 0]\n", " return metric.compute(predictions=predictions, references=labels)" ] }, { "cell_type": "markdown", "metadata": { "id": "rXuFTAzDIrJe" }, "source": [ "Then we just need to pass all of this along with our datasets to the `Trainer`:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "id": "imY1oC3SIrJf" }, "outputs": [], "source": [ "from transformers import default_data_collator\n", "\n", "validation_key = \"validation_mismatched\" if task == \"mnli-mm\" else \"validation_matched\" if task == \"mnli\" else \"validation\"\n", "train_dataset = encoded_dataset[\"train\"].select(range(max_train_samples))\n", "trainer = Trainer(\n", " model=fp_model,\n", " args=args,\n", " train_dataset=train_dataset if quantization_approach != \"dynamic\" else None,\n", " eval_dataset=encoded_dataset[validation_key],\n", " compute_metrics=compute_metrics,\n", " tokenizer=tokenizer,\n", " data_collator=default_data_collator,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "CdzABDVcIrJg" }, "source": [ "In the case where we want to apply quantization aware training, we need to pass a training function to the Intel Neural Compressor (INC) library. 
Note that, as we are using a `Trainer`, we need to set its `model` attribute to the quantized model resulting from the INC library." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "id": "uNx5pyRlIrJh", "outputId": "077e661e-d36c-469b-89b8-7ff7f73541ec" }, "outputs": [], "source": [ "def train_func(model):\n", " trainer.model_wrapped = model\n", " trainer.model = model \n", " train_result = trainer.train()\n", " metrics = train_result.metrics\n", " trainer.save_model() \n", " trainer.save_metrics(\"train\", metrics)\n", " trainer.save_state()" ] }, { "cell_type": "markdown", "metadata": { "id": "CKASz-2vIrJi" }, "source": [ "In order to evaluate the model's performance before and after quantization, we need to define an evaluation function. The metric chosen to evaluate the drop in performance resulting from quantization will be Matthews correlation coefficient (MCC) for CoLA, Pearson correlation coefficient (PCC) for STS-B and accuracy for any other tasks." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "id": "UOUcBkX8IrJi", "outputId": "de5b9dd6-9dc0-4702-cb43-55e9829fde25" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: idx, sentence. If idx, sentence are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.\n", "***** Running Evaluation *****\n", " Num examples = 872\n", " Batch size = 16\n" ] }, { "data": { "text/html": [ "\n", "