{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "X4cRE8IbIrIV" }, "source": [ "# Quantizing a model with ONNX Runtime for text classification tasks" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook shows how to apply different post-training quantization approaches such as static and dynamic quantization using [ONNX Runtime](https://onnxruntime.ai), for any tasks of the GLUE benchmark. This is made possible thanks to 🤗 [Optimum](https://github.com/huggingface/optimum), an extension of 🤗 [Transformers](https://github.com/huggingface/transformers), providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardwares. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers, 🤗 Datasets and 🤗 Optimum. Uncomment the following cell and run it." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "MOsHUjgdIrIW", "outputId": "f84a093e-147f-470e-aad9-80fb51193c8e" }, "outputs": [], "source": [ "#! pip install datasets transformers[sklearn] optimum[onnxruntime]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make sure your version of 🤗 Optimum is at least 1.1.0 since the functionality was introduced in that version:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1.4.0.dev0\n" ] } ], "source": [ "from optimum.version import __version__\n", "\n", "print(__version__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The GLUE Benchmark is a group of nine classification tasks on sentences or pairs of sentences which are:\n", "\n", "- [CoLA](https://nyu-mll.github.io/CoLA/) (Corpus of Linguistic Acceptability) Determine if a sentence is grammatically correct or not.\n", "- [MNLI](https://arxiv.org/abs/1704.05426) (Multi-Genre Natural Language Inference) Determine if a sentence entails, contradicts or is unrelated to a given hypothesis. This dataset has two versions, one with the validation and test set coming from the same distribution, another called mismatched where the validation and test use out-of-domain data.\n", "- [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) (Microsoft Research Paraphrase Corpus) Determine if two sentences are paraphrases from one another or not.\n", "- [QNLI](https://rajpurkar.github.io/SQuAD-explorer/) (Question-answering Natural Language Inference) Determine if the answer to a question is in the second sentence or not. 
This dataset is built from the SQuAD dataset.\n", "- [QQP](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Quora Question Pairs) Determine if two questions are semantically equivalent or not.\n", "- [RTE](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment) (Recognizing Textual Entailment) Determine if a sentence entails a given hypothesis or not.\n", "- [SST-2](https://nlp.stanford.edu/sentiment/index.html) (Stanford Sentiment Treebank) Determine if the sentence has a positive or negative sentiment.\n", "- [STS-B](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) (Semantic Textual Similarity Benchmark) Determine the similarity of two sentences with a score from 0 to 5.\n", "- [WNLI](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html) (Winograd Natural Language Inference) Determine if a sentence with an anonymous pronoun and a sentence with this pronoun replaced are entailed or not. This dataset is built from the Winograd Schema Challenge dataset.\n", "\n", "We will see how to apply post-training static quantization on a BERT model fine-tuned on the SST-2 task:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "zVvslsfMIrIh" }, "outputs": [], "source": [ "GLUE_TASKS = [\"cola\", \"mnli\", \"mnli-mm\", \"mrpc\", \"qnli\", \"qqp\", \"rte\", \"sst2\", \"stsb\", \"wnli\"]\n", "task = \"sst2\"\n", "model_checkpoint = \"textattack/bert-base-uncased-SST-2\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers.utils import send_example_telemetry\n", "\n", "send_example_telemetry(\"text_classification_notebook_ort\", framework=\"none\")" ] }, { "cell_type": "markdown", "metadata": { "id": "whPRbBNbIrIl" }, "source": [ "## Loading the dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "W7QYTpxXIrIl" }, "source": [ "We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the dataset and get the metric we need to use for evaluation. This can be easily done with the functions `load_dataset` and `load_metric`." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "IreSlFmlIrIm" }, "outputs": [], "source": [ "from datasets import load_dataset, load_metric" ] }, { "cell_type": "markdown", "metadata": { "id": "CKx2zKs5IrIq" }, "source": [ "`load_dataset` will cache the dataset to avoid downloading it again the next time you run this cell."
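] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As an aside, the `split` argument of `load_dataset` also accepts slicing syntax, which can come in handy for quick experiments. Below is an illustrative sketch; the resulting `tiny_eval_dataset` is not used in the rest of this notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative only: load the first 10% of the SST-2 validation set using split slicing\n", "tiny_eval_dataset = load_dataset(\"glue\", \"sst2\", split=\"validation[:10%]\")"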
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 270 }, "id": "s_AY1ATSIrIq", "outputId": "fd0578d1-8895-443d-b56f-5908de9f1b6b" }, "outputs": [], "source": [ "actual_task = \"mnli\" if task == \"mnli-mm\" else task\n", "validation_split = \"validation_mismatched\" if task == \"mnli-mm\" else \"validation_matched\" if task == \"mnli\" else \"validation\"\n", "eval_dataset = load_dataset(\"glue\", actual_task, split=validation_split)\n", "metric = load_metric(\"glue\", actual_task)" ] }, { "cell_type": "markdown", "metadata": { "id": "YOCrQwPoIrJG" }, "source": [ "Note that `load_metric` has loaded the proper metric associated with your task, which is:\n", "\n", "- for CoLA: [Matthews Correlation Coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)\n", "- for MNLI (matched or mismatched): Accuracy\n", "- for MRPC: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n", "- for QNLI: Accuracy\n", "- for QQP: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n", "- for RTE: Accuracy\n", "- for SST-2: Accuracy\n", "- for STS-B: [Pearson Correlation Coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) and [Spearman's Rank Correlation Coefficient](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient)\n", "- for WNLI: Accuracy\n", "\n", "so the metric object only computes the one(s) needed for your task."
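] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can call the metric's `compute` method directly on predictions and references. As a quick sanity check, here is an illustrative sketch on random values for a binary classification task like SST-2 (the resulting score is of course meaningless):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Illustrative only: random binary predictions and references\n", "fake_preds = np.random.randint(0, 2, size=(64,))\n", "fake_labels = np.random.randint(0, 2, size=(64,))\n", "metric.compute(predictions=fake_preds, references=fake_labels)"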
] }, { "cell_type": "markdown", "metadata": { "id": "n9qywopnIrJH" }, "source": [ "## Preprocessing the data" ] }, { "cell_type": "markdown", "metadata": { "id": "YVx71GdAIrJH" }, "source": [ "To preprocess our dataset, we will need the names of the columns containing the sentence(s). The following dictionary maps each task to its column names:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "id": "fyGdtK9oIrJM" }, "outputs": [], "source": [ "task_to_keys = {\n", "    \"cola\": (\"sentence\", None),\n", "    \"mnli\": (\"premise\", \"hypothesis\"),\n", "    \"mnli-mm\": (\"premise\", \"hypothesis\"),\n", "    \"mrpc\": (\"sentence1\", \"sentence2\"),\n", "    \"qnli\": (\"question\", \"sentence\"),\n", "    \"qqp\": (\"question1\", \"question2\"),\n", "    \"rte\": (\"sentence1\", \"sentence2\"),\n", "    \"sst2\": (\"sentence\", None),\n", "    \"stsb\": (\"sentence1\", \"sentence2\"),\n", "    \"wnli\": (\"sentence1\", \"sentence2\"),\n", "}" ] }, { "cell_type": "markdown", "metadata": { "id": "2C0hcmp9IrJQ" }, "source": [ "We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This ensures that any input longer than what the selected model can handle will be truncated to the maximum length the model accepts." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "sentence1_key, sentence2_key = task_to_keys[task]\n", "\n", "def preprocess_function(examples, tokenizer):\n", "    args = (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key])\n", "    return tokenizer(*args, padding=\"max_length\", max_length=128, truncation=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Applying quantization on the model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can set `quantization_approach` to either `dynamic` or `static` to apply the corresponding quantization mode:\n", "- Post-training static quantization: introduces an additional calibration step where data is fed through the network in order to compute the activations' quantization parameters.\n", "- Post-training dynamic quantization: computes the activations' quantization parameters dynamically, based on the data observed at runtime." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "QUANTIZATION_APPROACH = [\"dynamic\", \"static\"]\n", "\n", "quantization_approach = \"static\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's create the output directory where the resulting quantized model will be saved." ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "model_name = model_checkpoint.split(\"/\")[-1]\n", "output_dir = f\"{model_name}-{quantization_approach}-quantization\"\n", "os.makedirs(output_dir, exist_ok=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use the [🤗 Optimum](https://github.com/huggingface/optimum) library to instantiate an `ORTQuantizer`, which will take care of the quantization process. To instantiate an `ORTQuantizer`, we need to provide a path to a converted ONNX checkpoint or an instance of an `ORTModelForXXX`."
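] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As an illustrative sketch of the checkpoint route (assuming a hypothetical local directory `onnx-checkpoint`), one could first export and save the ONNX model, then point `ORTQuantizer.from_pretrained` at the resulting checkpoint. The next cell uses the more direct route, instantiating the quantizer from an `ORTModelForSequenceClassification` instance." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer\n", "\n", "# Hypothetical alternative (not used below): export the model to ONNX, save the checkpoint\n", "# locally, then instantiate the quantizer from the saved directory\n", "onnx_dir = \"onnx-checkpoint\"\n", "ORTModelForSequenceClassification.from_pretrained(model_checkpoint, from_transformers=True).save_pretrained(onnx_dir)\n", "quantizer_from_path = ORTQuantizer.from_pretrained(onnx_dir)"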
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from optimum.onnxruntime import ORTQuantizer, ORTModelForSequenceClassification\n", "\n", "# Load the model from the Hub and convert it to ONNX\n", "ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, from_transformers=True)\n", "\n", "# Create a quantizer from an ORTModelForXXX instance\n", "quantizer = ORTQuantizer.from_pretrained(ort_model)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also need to create a `QuantizationConfig` instance, which is the configuration handling the ONNX Runtime quantization-related parameters.\n", "\n", "* We set `per_channel` to `False` in order to apply per-tensor quantization on the weights. As opposed to per-channel quantization, which introduces one set of quantization parameters per channel, per-tensor quantization means that there will be one set of quantization parameters per tensor.\n", "* We set `num_calibration_samples`, the number of samples to use for the calibration step required by static quantization, to `40`.\n", "* `operators_to_quantize` is used to specify the types of operators to quantize; here we want to quantize all the network's fully connected and embedding layers." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "from optimum.onnxruntime.configuration import QuantizationConfig, AutoCalibrationConfig\n", "from onnxruntime.quantization import QuantFormat, QuantizationMode, QuantType\n", "\n", "per_channel = False\n", "num_calibration_samples = 40\n", "operators_to_quantize = [\"MatMul\", \"Add\", \"Gather\"]\n", "apply_static_quantization = quantization_approach == \"static\"\n", "\n", "qconfig = QuantizationConfig(\n", "    is_static=apply_static_quantization,\n", "    format=QuantFormat.QDQ if apply_static_quantization else QuantFormat.QOperator,\n", "    mode=QuantizationMode.QLinearOps if apply_static_quantization else QuantizationMode.IntegerOps,\n", "    activations_dtype=QuantType.QInt8 if apply_static_quantization else QuantType.QUInt8,\n", "    weights_dtype=QuantType.QInt8,\n", "    per_channel=per_channel,\n", "    operators_to_quantize=operators_to_quantize,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When applying static quantization, we need to perform a calibration step where the activations' quantization ranges are computed. This additional step is only needed for static quantization, not for dynamic quantization. Because the quantization of certain nodes often results in a degradation in accuracy, we create an instance of `QuantizationPreprocessor` to determine the nodes to exclude when applying static quantization."
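] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Min-max calibration is used in the next cell. As a side note, `AutoCalibrationConfig` also exposes other calibration methods; the commented sketch below assumes the `entropy` and `percentiles` helpers, so double-check their signatures against your installed version of 🤗 Optimum." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative alternatives to min-max calibration (assumed helpers, left commented out\n", "# because `calibration_dataset` is only created in the next cell):\n", "# calibration_config = AutoCalibrationConfig.entropy(calibration_dataset)\n", "# calibration_config = AutoCalibrationConfig.percentiles(calibration_dataset, percentile=99.99)"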
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from functools import partial\n", "from transformers import AutoTokenizer\n", "from optimum.onnxruntime.preprocessors import QuantizationPreprocessor\n", "from optimum.onnxruntime.preprocessors.passes import (\n", "    ExcludeGeLUNodes,\n", "    ExcludeLayerNormNodes,\n", "    ExcludeNodeAfter,\n", "    ExcludeNodeFollowedBy,\n", ")\n", "\n", "# Load the tokenizer used for preprocessing\n", "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n", "\n", "ranges = None\n", "quantization_preprocessor = None\n", "if apply_static_quantization:\n", "    # Create the calibration dataset used for the calibration step\n", "    calibration_dataset = quantizer.get_calibration_dataset(\n", "        \"glue\",\n", "        dataset_config_name=actual_task,\n", "        preprocess_function=partial(preprocess_function, tokenizer=tokenizer),\n", "        num_samples=num_calibration_samples,\n", "        dataset_split=\"train\",\n", "    )\n", "    calibration_config = AutoCalibrationConfig.minmax(calibration_dataset)\n", "    # Perform the calibration step: computes the activations quantization ranges\n", "    ranges = quantizer.fit(\n", "        dataset=calibration_dataset,\n", "        calibration_config=calibration_config,\n", "        operators_to_quantize=qconfig.operators_to_quantize,\n", "    )\n", "    quantization_preprocessor = QuantizationPreprocessor()\n", "    # Exclude the nodes constituting LayerNorm\n", "    quantization_preprocessor.register_pass(ExcludeLayerNormNodes())\n", "    # Exclude the nodes constituting GELU\n", "    quantization_preprocessor.register_pass(ExcludeGeLUNodes())\n", "    # Exclude the residual connection Add nodes\n", "    quantization_preprocessor.register_pass(ExcludeNodeAfter(\"Add\", \"Add\"))\n", "    # Exclude the Add nodes following the Gather operator\n", "    quantization_preprocessor.register_pass(ExcludeNodeAfter(\"Gather\", \"Add\"))\n", "    # Exclude the Add nodes followed by the Softmax operator\n", "    quantization_preprocessor.register_pass(ExcludeNodeFollowedBy(\"Add\", \"Softmax\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we quantize the model and export it to the output directory." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "quantizer.quantize(\n", "    save_dir=output_dir,\n", "    calibration_tensors_range=ranges,\n", "    quantization_config=qconfig,\n", "    preprocessor=quantization_preprocessor,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To evaluate the resulting quantized model, we need to define how to compute the metrics from the predictions. This function will use the `metric` we loaded earlier; the only preprocessing needed is to take the argmax of our predicted logits (or to squeeze the last axis in the case of STS-B).\n", "\n", "The metric chosen to evaluate the quantized model's performance will be the Matthews correlation coefficient (MCC) for CoLA, the Pearson correlation coefficient (PCC) for STS-B and accuracy for all other tasks."
] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "def compute_metrics(eval_pred):\n", "    predictions, labels = eval_pred\n", "    if task != \"stsb\":\n", "        predictions = np.argmax(predictions, axis=1)\n", "    else:\n", "        predictions = predictions[:, 0]\n", "    return metric.compute(predictions=predictions, references=labels)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, to apply the preprocessing on all the sentences (or pairs of sentences) of our validation dataset, we use the `map` method of the `dataset` object created earlier. This will apply `preprocess_function` to all its elements." ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "83eac4380b0f4c8abe7140ea31db78d0", "version_major": 2, "version_minor": 0 }, "text/plain": [ "  0%|          | 0/1 [00:00<?, ?ba/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "eval_dataset = eval_dataset.map(partial(preprocess_function, tokenizer=tokenizer), batched=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, to estimate the drop in performance resulting from quantization, we are going to perform an evaluation step for both models (before and after quantization). To do so, we will instantiate an `ORTModel`, which needs:\n", "\n", "* The path of the model to evaluate.\n", "* The function `compute_metrics`, defined previously, that will be used to compute the evaluation metrics.\n", "* The name of the dataset column containing the labels (here `label`).\n", "\n", "The dataset to use for the evaluation step is then passed to the model's `evaluation_loop` method."
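] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before running the evaluation loop, a quick illustrative check that the tokenized inputs were added to the validation dataset by the `map` call above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The tokenizer's outputs (e.g. input_ids and attention_mask) should now appear alongside the original columns\n", "print(eval_dataset.column_names)"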
] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "from optimum.onnxruntime import ORTModel\n", "from pathlib import Path\n", "\n", "ort_model = ORTModel(quantizer.onnx_model_path, compute_metrics=compute_metrics, label_names=[\"label\"])\n", "model_output = ort_model.evaluation_loop(eval_dataset)" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'accuracy': 0.9243119266055045}" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model_output.metrics" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "q8_ort_model = ORTModel(Path(output_dir) / \"model_quantized.onnx\", compute_metrics=compute_metrics, label_names=[\"label\"])\n", "q8_model_output = q8_ort_model.evaluation_loop(eval_dataset)" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'accuracy': 0.9002293577981652}" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "q8_model_output.metrics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's compute the respective sizes of the full-precision and quantized models in megabytes (MB):" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "fp_model_size = os.path.getsize(quantizer.onnx_model_path) / (1024*1024)\n", "q_model_size = os.path.getsize(Path(output_dir) / \"model_quantized.onnx\") / (1024*1024)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The compression ratio resulting from quantization (full-precision size divided by quantized size) is:" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3.92" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "round(fp_model_size / q_model_size, 2)" ] } ], "metadata": { "colab": { "name": "Text Classification on GLUE", "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.8" }, "vscode": { "interpreter": { "hash": "bddb99ecda5b40a820d97bf37f3ff3a89fb9dbcf726ae84d28624ac628a665b4" } } }, "nbformat": 4, "nbformat_minor": 1 }