{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "git68adWeq4l" }, "source": [ "# Quantization Aware Training with NNCF, using PyTorch framework\n", "\n", "This notebook is based on [ImageNet training in PyTorch](https://github.com/pytorch/examples/blob/master/imagenet/main.py).\n", "\n", "The goal of this notebook is to demonstrate how to use the Neural Network Compression Framework [NNCF](https://github.com/openvinotoolkit/nncf) 8-bit quantization to optimize a PyTorch model for inference with OpenVINO Toolkit. The optimization process contains the following steps:\n", "\n", "* Transforming the original `FP32` model to `INT8`\n", "* Using fine-tuning to restore the accuracy.\n", "* Exporting optimized and original models to OpenVINO IR\n", "* Measuring and comparing the performance of models.\n", "\n", "For more advanced usage, refer to these [examples](https://github.com/openvinotoolkit/nncf/tree/develop/examples).\n", "\n", "This tutorial uses the ResNet-18 model with the Tiny ImageNet-200 dataset. ResNet-18 is the version of ResNet models that contains the fewest layers (18). Tiny ImageNet-200 is a subset of the larger ImageNet dataset with smaller images. The dataset will be downloaded in the notebook. Using the smaller model and dataset will speed up training and download time. To see other ResNet models, visit [PyTorch hub](https://pytorch.org/hub/pytorch_vision_resnet/).\n", "\n", "> **NOTE**: This notebook requires a C++ compiler.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Table of content:\n", "- [Imports and Settings](#Imports-and-Settings-Uparrow)\n", "- [Pre-train Floating-Point Model](#Pre-train-Floating-Point-Model-Uparrow)\n", " - [Train Function](#Train-Function-Uparrow)\n", " - [Validate Function](#Validate-Function-Uparrow)\n", " - [Helpers](#Helpers-Uparrow)\n", " - [Get a Pre-trained FP32 Model](#Get-a-Pre-trained-FP32-Model-Uparrow)\n", "- [Create and Initialize Quantization](#Create-and-Initialize-Quantization-Uparrow)\n", "- [Fine-tune the Compressed Model](#Fine-tune-the-Compressed-Model-Uparrow)\n", "- [Export INT8 Model to OpenVINO IR](#Export-INT8-Model-to-OpenVINO-IR-Uparrow)\n", "- [Benchmark Model Performance by Computing Inference Time](#Benchmark-Model-Performance-by-Computing-Inference-Time-Uparrow)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "%pip install -q \"openvino>=2023.1.0\" \"torch\" \"torchvision\"\n", "%pip install -q \"git+https://github.com/openvinotoolkit/nncf.git@release_v260\"" ] }, { "cell_type": "markdown", "metadata": { "id": "6M1xndNu-z_2" }, "source": [ "## Imports and Settings [$\\Uparrow$](#Table-of-content:)\n", "\n", "On Windows, add the required C++ directories to the system PATH.\n", "\n", "Import NNCF and all auxiliary packages from your Python code.\n", "Set a name for the model, and the image width and height that will be used for the network. Also define paths where PyTorch and OpenVINO IR versions of the models will be stored. \n", "\n", "> **NOTE**: All NNCF logging messages below ERROR level (INFO and WARNING) are disabled to simplify the tutorial. For production use, it is recommended to enable logging by removing ```set_log_level(logging.ERROR)```." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# On Windows, add the directory that contains cl.exe to the PATH to enable PyTorch to find the\n", "# required C++ tools. 
This code assumes that Visual Studio 2019 is installed in the default\n", "# directory. If you have a different C++ compiler, add the correct path to os.environ[\"PATH\"]\n", "# directly. Note that the C++ Redistributable is not enough to run this notebook.\n", "\n", "# Adding the path to os.environ[\"LIB\"] is not always required - it depends on the system configuration\n", "\n", "import sys\n", "\n", "if sys.platform == \"win32\":\n", " import distutils.command.build_ext\n", " import os\n", " from pathlib import Path\n", "\n", " VS_INSTALL_DIR = r\"C:/Program Files (x86)/Microsoft Visual Studio\"\n", " cl_paths = sorted(list(Path(VS_INSTALL_DIR).glob(\"**/Hostx86/x64/cl.exe\")))\n", " if len(cl_paths) == 0:\n", " raise ValueError(\n", " \"Cannot find Visual Studio. This notebook requires a C++ compiler. If you installed \"\n", " \"a C++ compiler, please add the directory that contains cl.exe to `os.environ['PATH']`.\"\n", " )\n", " else:\n", " # If multiple versions of MSVC are installed, get the most recent one.\n", " cl_path = cl_paths[-1]\n", " vs_dir = str(cl_path.parent)\n", " os.environ[\"PATH\"] += f\"{os.pathsep}{vs_dir}\"\n", " # The code for finding the library dirs is from:\n", " # https://stackoverflow.com/questions/47423246/get-pythons-lib-path\n", " d = distutils.core.Distribution()\n", " b = distutils.command.build_ext.build_ext(d)\n", " b.finalize_options()\n", " os.environ[\"LIB\"] = os.pathsep.join(b.library_dirs)\n", " print(f\"Added {vs_dir} to PATH\")" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "BtaM_i2mEB0z", "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Using cpu device\n", "'model/resnet18_fp32.pth' already exists.\n" ] }, { "data": { "text/plain": [ "PosixPath('/home/ea/work/openvino_notebooks/notebooks/302-pytorch-quantization-aware-training/model/resnet18_fp32.pth')" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import sys\n", "import time\n", "import warnings # To disable warnings on export model\n", "import zipfile\n", "from pathlib import Path\n", "import logging\n", "\n", "import torch\n", "import nncf # Important - should be imported directly after torch.\n", "\n", "import torch.nn as nn\n", "import torch.nn.parallel\n", "import torch.optim\n", "import torch.utils.data\n", "import torch.utils.data.distributed\n", "import torchvision.datasets as datasets\n", "import torchvision.models as models\n", "import torchvision.transforms as transforms\n", "\n", "from nncf.common.logging.logger import set_log_level\n", "set_log_level(logging.ERROR) # Disables all NNCF info and warning messages.\n", "from nncf import NNCFConfig\n", "from nncf.torch import create_compressed_model, register_default_init_args\n", "import openvino as ov\n", "from torch.jit import TracerWarning\n", "\n", "sys.path.append(\"../utils\")\n", "from notebook_utils import download_file\n", "\n", "torch.manual_seed(0)\n", "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "print(f\"Using {device} device\")\n", "\n", "MODEL_DIR = Path(\"model\")\n", "OUTPUT_DIR = Path(\"output\")\n", "DATA_DIR = Path(\"data\")\n", "BASE_MODEL_NAME = \"resnet18\"\n", 
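"# Tiny ImageNet images are 64 x 64 pixels, so the same resolution is used below for training,\n", "# model conversion, and benchmarking.\n",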
"image_size = 64\n", "\n", "OUTPUT_DIR.mkdir(exist_ok=True)\n", "MODEL_DIR.mkdir(exist_ok=True)\n", "DATA_DIR.mkdir(exist_ok=True)\n", "\n", "# Paths where PyTorch and OpenVINO IR models will be stored.\n", "fp32_pth_path = Path(MODEL_DIR / (BASE_MODEL_NAME + \"_fp32\")).with_suffix(\".pth\")\n", "fp32_ir_path = fp32_pth_path.with_suffix(\".xml\")\n", "int8_ir_path = Path(MODEL_DIR / (BASE_MODEL_NAME + \"_int8\")).with_suffix(\".xml\")\n", "\n", "# It is possible to train FP32 model from scratch, but it might be slow. Therefore, the pre-trained weights are downloaded by default.\n", "pretrained_on_tiny_imagenet = True\n", "fp32_pth_url = \"https://storage.openvinotoolkit.org/repositories/nncf/openvino_notebook_ckpts/302_resnet18_fp32_v1.pth\"\n", "download_file(fp32_pth_url, directory=MODEL_DIR, filename=fp32_pth_path.name)" ] }, { "cell_type": "markdown", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "EIo5S145S0Ug", "outputId": "9a2db892-eb38-4863-dfdb-560aa12c8232", "pycharm": { "name": "#%% md\n" } }, "source": [ "Download Tiny ImageNet dataset\n", "\n", "* 100k images of shape 3x64x64\n", "* 200 different classes: snake, spider, cat, truck, grasshopper, gull, etc." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-HxsU71bEbLS", "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "def download_tiny_imagenet_200(\n", " data_dir: Path,\n", " url=\"http://cs231n.stanford.edu/tiny-imagenet-200.zip\",\n", " tarname=\"tiny-imagenet-200.zip\",\n", "):\n", " archive_path = data_dir / tarname\n", " download_file(url, directory=data_dir, filename=tarname)\n", " zip_ref = zipfile.ZipFile(archive_path, \"r\")\n", " zip_ref.extractall(path=data_dir)\n", " zip_ref.close()\n", "\n", "def prepare_tiny_imagenet_200(dataset_dir: Path):\n", " # Format validation set the same way as train set is formatted.\n", " val_data_dir = dataset_dir / 'val'\n", " val_annotations_file = val_data_dir / 'val_annotations.txt'\n", " with open(val_annotations_file, 'r') as f:\n", " val_annotation_data = map(lambda line: line.split('\\t')[:2], f.readlines())\n", " val_images_dir = val_data_dir / 'images'\n", " for image_filename, image_label in val_annotation_data:\n", " from_image_filepath = val_images_dir / image_filename\n", " to_image_dir = val_data_dir / image_label\n", " if not to_image_dir.exists():\n", " to_image_dir.mkdir()\n", " to_image_filepath = to_image_dir / image_filename\n", " from_image_filepath.rename(to_image_filepath)\n", " val_annotations_file.unlink()\n", " val_images_dir.rmdir()\n", " \n", "\n", "DATASET_DIR = DATA_DIR / \"tiny-imagenet-200\"\n", "if not DATASET_DIR.exists():\n", " download_tiny_imagenet_200(DATA_DIR)\n", " prepare_tiny_imagenet_200(DATASET_DIR)\n", " print(f\"Successfully downloaded and prepared dataset at: {DATASET_DIR}\")" ] }, { "cell_type": "markdown", "metadata": { "id": "eZX2GAh3W7ZT", "pycharm": { "name": "#%% md\n" } }, "source": [ "## Pre-train Floating-Point Model [$\\Uparrow$](#Table-of-content:)\n", "Using NNCF for model compression assumes that a pre-trained model and a training pipeline are already in use.\n", "\n", "This tutorial demonstrates one possible training pipeline: a ResNet-18 model pre-trained on 1000 classes from ImageNet is fine-tuned with 200 classes from Tiny-ImageNet. 
\n", "\n", "Subsequently, the training and validation functions will be reused as is for quantization-aware training.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "E01dMaR2_AFL" }, "source": [ "### Train Function [$\\Uparrow$](#Table-of-content:)\n" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "940rcAIyiXml" }, "outputs": [], "source": [ "def train(train_loader, model, criterion, optimizer, epoch):\n", " batch_time = AverageMeter(\"Time\", \":3.3f\")\n", " losses = AverageMeter(\"Loss\", \":2.3f\")\n", " top1 = AverageMeter(\"Acc@1\", \":2.2f\")\n", " top5 = AverageMeter(\"Acc@5\", \":2.2f\")\n", " progress = ProgressMeter(\n", " len(train_loader), [batch_time, losses, top1, top5], prefix=\"Epoch:[{}]\".format(epoch)\n", " )\n", "\n", " # Switch to train mode.\n", " model.train()\n", "\n", " end = time.time()\n", " for i, (images, target) in enumerate(train_loader):\n", " images = images.to(device)\n", " target = target.to(device)\n", "\n", " # Compute output.\n", " output = model(images)\n", " loss = criterion(output, target)\n", "\n", " # Measure accuracy and record loss.\n", " acc1, acc5 = accuracy(output, target, topk=(1, 5))\n", " losses.update(loss.item(), images.size(0))\n", " top1.update(acc1[0], images.size(0))\n", " top5.update(acc5[0], images.size(0))\n", "\n", " # Compute gradient and do opt step.\n", " optimizer.zero_grad()\n", " loss.backward()\n", " optimizer.step()\n", "\n", " # Measure elapsed time.\n", " batch_time.update(time.time() - end)\n", " end = time.time()\n", "\n", " print_frequency = 50\n", " if i % print_frequency == 0:\n", " progress.display(i)" ] }, { "cell_type": "markdown", "metadata": { "id": "CoNr8qwm_El2" }, "source": [ "### Validate Function [$\\Uparrow$](#Table-of-content:)\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "KgnugrWgicWC" }, "outputs": [], "source": [ "def validate(val_loader, model, criterion):\n", " batch_time = AverageMeter(\"Time\", \":3.3f\")\n", " losses = AverageMeter(\"Loss\", \":2.3f\")\n", " top1 = AverageMeter(\"Acc@1\", \":2.2f\")\n", " top5 = AverageMeter(\"Acc@5\", \":2.2f\")\n", " progress = ProgressMeter(len(val_loader), [batch_time, losses, top1, top5], prefix=\"Test: \")\n", "\n", " # Switch to evaluate mode.\n", " model.eval()\n", "\n", " with torch.no_grad():\n", " end = time.time()\n", " for i, (images, target) in enumerate(val_loader):\n", " images = images.to(device)\n", " target = target.to(device)\n", "\n", " # Compute output.\n", " output = model(images)\n", " loss = criterion(output, target)\n", "\n", " # Measure accuracy and record loss.\n", " acc1, acc5 = accuracy(output, target, topk=(1, 5))\n", " losses.update(loss.item(), images.size(0))\n", " top1.update(acc1[0], images.size(0))\n", " top5.update(acc5[0], images.size(0))\n", "\n", " # Measure elapsed time.\n", " batch_time.update(time.time() - end)\n", " end = time.time()\n", "\n", " print_frequency = 10\n", " if i % print_frequency == 0:\n", " progress.display(i)\n", "\n", " print(\" * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}\".format(top1=top1, top5=top5))\n", " return top1.avg" ] }, { "cell_type": "markdown", "metadata": { "id": "qMnYsGo9_MA8" }, "source": [ "### Helpers [$\\Uparrow$](#Table-of-content:)\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "R724tbxcidQE" }, "outputs": [], "source": [ "class AverageMeter(object):\n", " \"\"\"Computes and stores the average and current value\"\"\"\n", "\n", " def __init__(self, name, fmt=\":f\"):\n", " self.name = name\n", " 
self.fmt = fmt\n", "        self.reset()\n", "\n", "    def reset(self):\n", "        self.val = 0\n", "        self.avg = 0\n", "        self.sum = 0\n", "        self.count = 0\n", "\n", "    def update(self, val, n=1):\n", "        self.val = val\n", "        self.sum += val * n\n", "        self.count += n\n", "        self.avg = self.sum / self.count\n", "\n", "    def __str__(self):\n", "        fmtstr = \"{name} {val\" + self.fmt + \"} ({avg\" + self.fmt + \"})\"\n", "        return fmtstr.format(**self.__dict__)\n", "\n", "\n", "class ProgressMeter(object):\n", "    def __init__(self, num_batches, meters, prefix=\"\"):\n", "        self.batch_fmtstr = self._get_batch_fmtstr(num_batches)\n", "        self.meters = meters\n", "        self.prefix = prefix\n", "\n", "    def display(self, batch):\n", "        entries = [self.prefix + self.batch_fmtstr.format(batch)]\n", "        entries += [str(meter) for meter in self.meters]\n", "        print(\"\\t\".join(entries))\n", "\n", "    def _get_batch_fmtstr(self, num_batches):\n", "        num_digits = len(str(num_batches // 1))\n", "        fmt = \"{:\" + str(num_digits) + \"d}\"\n", "        return \"[\" + fmt + \"/\" + fmt.format(num_batches) + \"]\"\n", "\n", "\n", "def accuracy(output, target, topk=(1,)):\n", "    \"\"\"Computes the accuracy over the k top predictions for the specified values of k\"\"\"\n", "    with torch.no_grad():\n", "        maxk = max(topk)\n", "        batch_size = target.size(0)\n", "\n", "        _, pred = output.topk(maxk, 1, True, True)\n", "        pred = pred.t()\n", "        correct = pred.eq(target.view(1, -1).expand_as(pred))\n", "\n", "        res = []\n", "        for k in topk:\n", "            correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)\n", "            res.append(correct_k.mul_(100.0 / batch_size))\n", "        return res" ] }, { "cell_type": "markdown", "metadata": { "id": "kcSjyLBwiqBx", "pycharm": { "name": "#%% md\n" } }, "source": [ "### Get a Pre-trained FP32 Model [$\Uparrow$](#Table-of-content:)\n", "\n", "A pre-trained floating-point model is a prerequisite for quantization. It can be obtained by running the fine-tuning code below, but this usually takes a lot of time. Therefore, the code has already been run, and it produced good enough weights after 4 epochs (for the sake of simplicity, tuning was not continued until the best accuracy was reached). By default, this notebook simply loads these weights without launching training. To fine-tune the ImageNet-pretrained model yourself, set `pretrained_on_tiny_imagenet = False` in the Imports and Settings section at the top of this notebook." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "avCsioUYIaL7", "outputId": "183bdbb6-4016-463c-8d76-636a6b3a9778", "tags": [], "test_replace": { "train_dataset, ": "torch.utils.data.Subset(train_dataset, torch.arange(300)), ", "val_dataset, ": "torch.utils.data.Subset(val_dataset, torch.arange(100)), " } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/ea/work/ov_venv/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.\n", "  warnings.warn(\n", "/home/ea/work/ov_venv/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. 
The current behavior is equivalent to passing `weights=None`.\n", " warnings.warn(msg)\n" ] } ], "source": [ "num_classes = 200 # 200 is for Tiny ImageNet, default is 1000 for ImageNet\n", "init_lr = 1e-4\n", "batch_size = 128\n", "epochs = 4\n", "\n", "model = models.resnet18(pretrained=not pretrained_on_tiny_imagenet)\n", "# Update the last FC layer for Tiny ImageNet number of classes.\n", "model.fc = nn.Linear(in_features=512, out_features=num_classes, bias=True)\n", "model.to(device)\n", "\n", "# Data loading code.\n", "train_dir = DATASET_DIR / \"train\"\n", "val_dir = DATASET_DIR / \"val\"\n", "normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n", "\n", "train_dataset = datasets.ImageFolder(\n", " train_dir,\n", " transforms.Compose(\n", " [\n", " transforms.Resize(image_size),\n", " transforms.RandomHorizontalFlip(),\n", " transforms.ToTensor(),\n", " normalize,\n", " ]\n", " ),\n", ")\n", "val_dataset = datasets.ImageFolder(\n", " val_dir,\n", " transforms.Compose(\n", " [\n", " transforms.Resize(image_size),\n", " transforms.ToTensor(),\n", " normalize,\n", " ]\n", " ),\n", ")\n", "\n", "train_loader = torch.utils.data.DataLoader(\n", " train_dataset, batch_size=batch_size, shuffle=True, num_workers=0, pin_memory=True, sampler=None\n", ")\n", "\n", "val_loader = torch.utils.data.DataLoader(\n", " val_dataset, batch_size=batch_size, shuffle=False, num_workers=0, pin_memory=True\n", ")\n", "\n", "# Define loss function (criterion) and optimizer.\n", "criterion = nn.CrossEntropyLoss().to(device)\n", "optimizer = torch.optim.Adam(model.parameters(), lr=init_lr)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "L0tH9KdwtHhV", "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Accuracy of FP32 model: 55.520\n" ] } ], "source": [ "if pretrained_on_tiny_imagenet:\n", " #\n", " # ** WARNING: The `torch.load` functionality uses Python's pickling module that\n", " # may be used to perform arbitrary code execution during unpickling. Only load data that you\n", " # trust.\n", " #\n", " checkpoint = torch.load(str(fp32_pth_path), map_location=\"cpu\")\n", " model.load_state_dict(checkpoint[\"state_dict\"], strict=True)\n", " acc1_fp32 = checkpoint[\"acc1\"]\n", "else:\n", " best_acc1 = 0\n", " # Training loop.\n", " for epoch in range(0, epochs):\n", " # Run a single training epoch.\n", " train(train_loader, model, criterion, optimizer, epoch)\n", "\n", " # Evaluate on validation set.\n", " acc1 = validate(val_loader, model, criterion)\n", "\n", " is_best = acc1 > best_acc1\n", " best_acc1 = max(acc1, best_acc1)\n", "\n", " if is_best:\n", " checkpoint = {\"state_dict\": model.state_dict(), \"acc1\": acc1}\n", " torch.save(checkpoint, fp32_pth_path)\n", " acc1_fp32 = best_acc1\n", " \n", "print(f\"Accuracy of FP32 model: {acc1_fp32:.3f}\")" ] }, { "cell_type": "markdown", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "pt_xNDDrJKsy", "outputId": "0925c801-0585-4431-98c9-de0decc4ad27", "pycharm": { "name": "#%% md\n" } }, "source": [ "Export the `FP32` model to OpenVINO™ Intermediate Representation, to benchmark it in comparison with the `INT8` model." 
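, "\n", "\n", "`ov.convert_model` returns an in-memory `openvino.Model` object, and `ov.save_model` serializes it to an IR pair (`.xml` + `.bin`). Once the next cell has produced `resnet18_fp32.xml`, the IR can also be loaded and run directly from Python. The snippet below is only an illustrative sketch: it feeds random data and merely checks that the compiled model returns `[1, 200]` class scores.\n", "\n", "```python\n", "import numpy as np\n", "\n", "# Sketch only: load the saved FP32 IR, compile it for CPU, and run one inference on random data.\n", "core = ov.Core()\n", "compiled = core.compile_model(core.read_model(str(fp32_ir_path)), \"CPU\")\n", "scores = compiled(np.random.rand(1, 3, image_size, image_size).astype(np.float32))[compiled.output(0)]\n", "print(scores.shape)  # (1, 200)\n", "```"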
] }, { "cell_type": "code", "execution_count": 11, "metadata": { "id": "9d8LOmKut36x", "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "FP32 model was exported to model/resnet18_fp32.xml.\n" ] } ], "source": [ "dummy_input = torch.randn(1, 3, image_size, image_size).to(device)\n", "\n", "ov_model = ov.convert_model(model, example_input=dummy_input, input=[1, 3, image_size, image_size])\n", "ov.save_model(ov_model, fp32_ir_path, compress_to_fp16=False)\n", "print(f\"FP32 model was exported to {fp32_ir_path}.\")" ] }, { "cell_type": "markdown", "metadata": { "id": "pobVoHEoKcYp" }, "source": [ "## Create and Initialize Quantization [$\\Uparrow$](#Table-of-content:)\n", "\n", "NNCF enables compression-aware training by integrating into regular training pipelines. The framework is designed so that modifications to your original training code are minor.\n", "Quantization is the simplest scenario and requires only 3 modifications." ] }, { "cell_type": "markdown", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "ENAbqFpdWSlE", "outputId": "cd2701e3-e4a2-4a19-86cd-ae37f45cd64a", "pycharm": { "name": "#%% md\n" } }, "source": [ "1. Configure NNCF parameters to specify compression" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "id": "_I_G-g9TPWkl", "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "nncf_config_dict = {\n", " \"input_info\": {\"sample_size\": [1, 3, image_size, image_size]},\n", " \"log_dir\": str(OUTPUT_DIR), # The log directory for NNCF-specific logging outputs.\n", " \"compression\": {\n", " \"algorithm\": \"quantization\", # Specify the algorithm here.\n", " },\n", "}\n", "nncf_config = NNCFConfig.from_dict(nncf_config_dict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "2. Provide a data loader to initialize the values of quantization ranges and determine which activation should be signed or unsigned from the collected statistics, using a given number of samples.\n" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "nncf_config = register_default_init_args(nncf_config, train_loader)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "3. Create a wrapped model ready for compression fine-tuning from a pre-trained `FP32` model and a configuration object." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2023-09-12 23:17:43.430476: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. 
To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n", "2023-09-12 23:17:43.466284: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n", "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2023-09-12 23:17:44.073352: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n" ] } ], "source": [ "compression_ctrl, model = create_compressed_model(model, nncf_config)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Evaluate the new model on the validation set after initialization of quantization. The accuracy should be close to the accuracy of the floating-point `FP32` model for a simple case like the one being demonstrated here." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test: [ 0/79]\tTime 0.148 (0.148)\tLoss 6.413 (6.413)\tAcc@1 33.59 (33.59)\tAcc@5 36.72 (36.72)\n", "Test: [10/79]\tTime 0.165 (0.151)\tLoss 10.535 (9.744)\tAcc@1 0.00 (3.20)\tAcc@5 0.00 (4.83)\n", "Test: [20/79]\tTime 0.166 (0.157)\tLoss 12.081 (10.131)\tAcc@1 0.00 (1.67)\tAcc@5 0.78 (2.98)\n", "Test: [30/79]\tTime 0.167 (0.160)\tLoss 11.036 (10.367)\tAcc@1 0.00 (1.13)\tAcc@5 0.00 (2.09)\n", "Test: [40/79]\tTime 0.146 (0.160)\tLoss 11.325 (10.496)\tAcc@1 0.00 (0.88)\tAcc@5 0.00 (1.66)\n", "Test: [50/79]\tTime 0.142 (0.159)\tLoss 10.805 (10.583)\tAcc@1 0.00 (0.70)\tAcc@5 0.78 (1.35)\n", "Test: [60/79]\tTime 0.147 (0.157)\tLoss 11.237 (10.601)\tAcc@1 0.00 (0.59)\tAcc@5 0.00 (1.17)\n", "Test: [70/79]\tTime 0.147 (0.156)\tLoss 9.264 (10.575)\tAcc@1 0.00 (0.51)\tAcc@5 2.34 (1.08)\n", " * Acc@1 0.480 Acc@5 1.090\n", "Accuracy of initialized INT8 model: 0.480\n" ] } ], "source": [ "acc1 = validate(val_loader, model, criterion)\n", "print(f\"Accuracy of initialized INT8 model: {acc1:.3f}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fine-tune the Compressed Model [$\\Uparrow$](#Table-of-content:)\n", "\n", "At this step, a regular fine-tuning process is applied to further improve quantized model accuracy. Normally, several epochs of tuning are required with a small learning rate, the same that is usually used at the end of the training of the original model. No other changes in the training pipeline are required. Here is a simple example." 
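, "\n", "\n", "The cell below tunes for a single epoch to keep the runtime short. A longer schedule could simply mirror the `FP32` pipeline above; the sketch below reuses the same `train` and `validate` helpers (the epoch count and checkpoint name are illustrative, and restoring such a checkpoint later would require wrapping the model with `create_compressed_model` again before calling `load_state_dict`):\n", "\n", "```python\n", "# Illustrative multi-epoch QAT loop (not executed in this notebook).\n", "best_acc1 = 0\n", "for epoch in range(epochs):\n", "    train(train_loader, model, criterion, optimizer, epoch)\n", "    acc1 = validate(val_loader, model, criterion)\n", "    if acc1 > best_acc1:\n", "        best_acc1 = acc1\n", "        torch.save({\"state_dict\": model.state_dict(), \"acc1\": acc1}, MODEL_DIR / \"resnet18_int8_best.pth\")\n", "```"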
] }, { "cell_type": "code", "execution_count": 16, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch:[0][ 0/782]\tTime 0.399 (0.399)\tLoss 0.790 (0.790)\tAcc@1 81.25 (81.25)\tAcc@5 93.75 (93.75)\n", "Epoch:[0][ 50/782]\tTime 0.423 (0.349)\tLoss 0.756 (0.834)\tAcc@1 83.59 (79.53)\tAcc@5 94.53 (94.16)\n", "Epoch:[0][100/782]\tTime 0.351 (0.352)\tLoss 0.713 (0.797)\tAcc@1 82.81 (80.06)\tAcc@5 96.09 (94.43)\n", "Epoch:[0][150/782]\tTime 0.351 (0.354)\tLoss 0.664 (0.790)\tAcc@1 84.38 (80.28)\tAcc@5 96.09 (94.48)\n", "Epoch:[0][200/782]\tTime 0.349 (0.353)\tLoss 0.789 (0.785)\tAcc@1 82.03 (80.53)\tAcc@5 96.09 (94.51)\n", "Epoch:[0][250/782]\tTime 0.391 (0.353)\tLoss 0.691 (0.779)\tAcc@1 85.94 (80.75)\tAcc@5 92.19 (94.47)\n", "Epoch:[0][300/782]\tTime 0.367 (0.352)\tLoss 0.643 (0.773)\tAcc@1 85.16 (81.04)\tAcc@5 97.66 (94.51)\n", "Epoch:[0][350/782]\tTime 0.352 (0.353)\tLoss 0.834 (0.767)\tAcc@1 79.69 (81.13)\tAcc@5 94.53 (94.63)\n", "Epoch:[0][400/782]\tTime 0.338 (0.353)\tLoss 0.646 (0.761)\tAcc@1 87.50 (81.30)\tAcc@5 96.88 (94.68)\n", "Epoch:[0][450/782]\tTime 0.350 (0.352)\tLoss 0.753 (0.757)\tAcc@1 84.38 (81.47)\tAcc@5 93.75 (94.72)\n", "Epoch:[0][500/782]\tTime 0.348 (0.352)\tLoss 0.722 (0.755)\tAcc@1 80.47 (81.59)\tAcc@5 94.53 (94.71)\n", "Epoch:[0][550/782]\tTime 0.329 (0.355)\tLoss 0.651 (0.753)\tAcc@1 85.94 (81.58)\tAcc@5 96.09 (94.75)\n", "Epoch:[0][600/782]\tTime 0.355 (0.354)\tLoss 0.809 (0.751)\tAcc@1 78.91 (81.66)\tAcc@5 95.31 (94.75)\n", "Epoch:[0][650/782]\tTime 0.342 (0.354)\tLoss 0.800 (0.750)\tAcc@1 80.47 (81.65)\tAcc@5 92.19 (94.73)\n", "Epoch:[0][700/782]\tTime 0.335 (0.354)\tLoss 0.735 (0.748)\tAcc@1 82.81 (81.71)\tAcc@5 92.19 (94.73)\n", "Epoch:[0][750/782]\tTime 0.359 (0.354)\tLoss 0.774 (0.745)\tAcc@1 77.34 (81.78)\tAcc@5 95.31 (94.75)\n", "Test: [ 0/79]\tTime 0.144 (0.144)\tLoss 6.201 (6.201)\tAcc@1 35.16 (35.16)\tAcc@5 36.72 (36.72)\n", "Test: [10/79]\tTime 0.125 (0.129)\tLoss 10.551 (9.781)\tAcc@1 0.00 (3.41)\tAcc@5 0.00 (5.04)\n", "Test: [20/79]\tTime 0.148 (0.130)\tLoss 12.097 (10.172)\tAcc@1 0.00 (1.79)\tAcc@5 0.78 (2.98)\n", "Test: [30/79]\tTime 0.128 (0.130)\tLoss 10.913 (10.431)\tAcc@1 0.00 (1.21)\tAcc@5 0.00 (2.12)\n", "Test: [40/79]\tTime 0.127 (0.130)\tLoss 11.263 (10.554)\tAcc@1 0.00 (0.93)\tAcc@5 0.00 (1.68)\n", "Test: [50/79]\tTime 0.127 (0.129)\tLoss 11.204 (10.660)\tAcc@1 0.78 (0.77)\tAcc@5 0.78 (1.38)\n", "Test: [60/79]\tTime 0.126 (0.129)\tLoss 11.518 (10.682)\tAcc@1 0.00 (0.64)\tAcc@5 0.00 (1.19)\n", "Test: [70/79]\tTime 0.149 (0.132)\tLoss 9.236 (10.661)\tAcc@1 0.00 (0.55)\tAcc@5 3.91 (1.12)\n", " * Acc@1 0.520 Acc@5 1.160\n", "Accuracy of tuned INT8 model: 0.520\n", "Accuracy drop of tuned INT8 model over pre-trained FP32 model: 55.000\n" ] } ], "source": [ "compression_lr = init_lr / 10\n", "optimizer = torch.optim.Adam(model.parameters(), lr=compression_lr)\n", "\n", "# Train for one epoch with NNCF.\n", "train(train_loader, model, criterion, optimizer, epoch=0)\n", "\n", "# Evaluate on validation set after Quantization-Aware Training (QAT case).\n", "acc1_int8 = validate(val_loader, model, criterion)\n", "\n", "print(f\"Accuracy of tuned INT8 model: {acc1_int8:.3f}\")\n", "print(f\"Accuracy drop of tuned INT8 model over pre-trained FP32 model: {acc1_fp32 - acc1_int8:.3f}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Export INT8 Model to OpenVINO IR [$\\Uparrow$](#Table-of-content:)\n" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { 
"pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11.\n", "INT8 Omodel exported to model/resnet18_int8.xml.\n" ] } ], "source": [ "if not int8_ir_path.exists():\n", " warnings.filterwarnings(\"ignore\", category=TracerWarning)\n", " warnings.filterwarnings(\"ignore\", category=UserWarning)\n", " # Export INT8 model to OpenVINO™ IR\n", " ov_model = ov.convert_model(model, example_input=dummy_input, input=[1, 3, image_size, image_size])\n", " ov.save_model(ov_model, int8_ir_path)\n", " print(f\"INT8 Omodel exported to {int8_ir_path}.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Benchmark Model Performance by Computing Inference Time [$\\Uparrow$](#Table-of-content:)\n", "\n", "Finally, measure the inference performance of the `FP32` and `INT8` models, using [Benchmark Tool](https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html) - inference performance measurement tool in OpenVINO. By default, Benchmark Tool runs inference for 60 seconds in asynchronous mode on CPU. It returns inference speed as latency (milliseconds per image) and throughput (frames per second) values.\n", "\n", "> **NOTE**: This notebook runs `benchmark_app` for 15 seconds to give a quick indication of performance. For more accurate performance, it is recommended to run `benchmark_app` in a terminal/command prompt after closing other applications. Run `benchmark_app -m model.xml -d CPU` to benchmark async inference on CPU for one minute. Change CPU to GPU to benchmark on GPU. Run `benchmark_app --help` to see an overview of all command-line options." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Benchmark FP32 model (IR)\n", "[ INFO ] Throughput: 3610.98 FPS\n", "Benchmark INT8 model (IR)\n", "[ INFO ] Throughput: 15613.06 FPS\n" ] } ], "source": [ "def parse_benchmark_output(benchmark_output):\n", " parsed_output = [line for line in benchmark_output if 'FPS' in line]\n", " print(*parsed_output, sep='\\n')\n", "\n", "\n", "print('Benchmark FP32 model (IR)')\n", "benchmark_output = ! benchmark_app -m $fp32_ir_path -d CPU -api async -t 15\n", "parse_benchmark_output(benchmark_output)\n", "\n", "print('Benchmark INT8 model (IR)')\n", "benchmark_output = ! benchmark_app -m $int8_ir_path -d CPU -api async -t 15\n", "parse_benchmark_output(benchmark_output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Show CPU Information for reference." 
] }, { "cell_type": "code", "execution_count": 19, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "data": { "text/plain": [ "'Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz'" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ie = ov.Core()\n", "ie.get_property(\"CPU\", \"FULL_DEVICE_NAME\")" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [ "K5HPrY_d-7cV", "E01dMaR2_AFL", "qMnYsGo9_MA8", "L0tH9KdwtHhV" ], "name": "NNCF Quantization PyTorch Demo (tiny-imagenet/resnet-18)", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "vscode": { "interpreter": { "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1" } }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 4 }