{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "JwEAhQVzkAwA" }, "source": [ "# Convert a TensorFlow Model to OpenVINO™\n", "\n", "This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://docs.openvino.ai/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) image classification model to OpenVINO [Intermediate Representation](https://docs.openvino.ai/latest/openvino_docs_MO_DG_IR_and_opsets.html) (OpenVINO IR) format, using [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After creating the OpenVINO IR, load the model in [OpenVINO Runtime](https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and do inference with a sample image. " ] }, { "cell_type": "markdown", "metadata": { "id": "QB4Yo-rGGLmV" }, "source": [ "## Imports" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2ynWRum4iiTz" }, "outputs": [], "source": [ "import time\n", "from pathlib import Path\n", "\n", "import cv2\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import tensorflow as tf\n", "from IPython.display import Markdown\n", "from openvino.runtime import Core" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Settings" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The paths of the source and converted models.\n", "model_dir = Path(\"model\")\n", "model_dir.mkdir(exist_ok=True)\n", "\n", "model_path = Path(\"model/v3-small_224_1.0_float\")\n", "\n", "ir_path = Path(\"model/v3-small_224_1.0_float.xml\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download model\n", "\n", "Load model using [tf.keras.applications api](https://www.tensorflow.org/api_docs/python/tf/keras/applications/MobileNetV3Small) and save it to the disk." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = tf.keras.applications.MobileNetV3Small()\n", "model.save(model_path)" ] }, { "cell_type": "markdown", "metadata": { "id": "6JSoEIk60uxV" }, "source": [ "## Convert a Model to OpenVINO IR Format\n", "\n", "### Convert a TensorFlow Model to OpenVINO IR Format\n", "\n", "Use Model Optimizer to convert a TensorFlow model to OpenVINO IR with `FP16` precision. The models are saved to the current directory. Add mean values to the model and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network. The original model expects input images in `RGB` format. The converted model also expects images in `RGB` format. If you want the converted model to work with `BGR` images, use the `--reverse-input-channels` option. For more information about Model Optimizer, including a description of the command-line options, see the [Model Optimizer Developer Guide](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). For information about the model, including input shape, expected color order and mean values, refer to the [model documentation](https://docs.openvino.ai/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html).\n", "\n", "First construct the command for Model Optimizer, and then execute this command in the notebook by prepending the command with an `!`. There may be some errors or warnings in the output. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "First construct the command for Model Optimizer, and then execute this command in the notebook by prepending the command with an `!`. There may be some errors or warnings in the output. When model optimization is successful, the last lines of the output will include `[ SUCCESS ] Generated IR version 11 model.`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Construct the command for Model Optimizer.\n", "mo_command = f\"\"\"mo\n", " --saved_model_dir \"{model_path}\"\n", " --input_shape \"[1,224,224,3]\"\n", " --mean_values=\"[127.5,127.5,127.5]\"\n", " --scale_values=\"[127.5]\"\n", " --data_type FP16\n", " --model_name \"{model_path.name}\"\n", " --output_dir \"{model_path.parent}\"\n", " \"\"\"\n", "mo_command = \" \".join(mo_command.split())\n", "print(\"Model Optimizer command to convert TensorFlow to OpenVINO:\")\n", "display(Markdown(f\"`{mo_command}`\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run Model Optimizer if the IR model file does not exist.\n", "if not ir_path.exists():\n", "    print(\"Exporting TensorFlow model to IR... This may take a few minutes.\")\n", "    ! $mo_command\n", "else:\n", "    print(f\"IR model {ir_path} already exists.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Test Inference on the Converted Model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load the Model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ie = Core()\n", "model = ie.read_model(ir_path)\n", "compiled_model = ie.compile_model(model=model, device_name=\"CPU\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get Model Information" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "input_key = compiled_model.input(0)\n", "output_key = compiled_model.output(0)\n", "network_input_shape = input_key.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load an Image\n", "\n", "Load an image, resize it, and convert it to the input shape of the network." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The MobileNet network expects images in RGB format.\n", "image = cv2.cvtColor(cv2.imread(filename=\"data/coco.jpg\"), code=cv2.COLOR_BGR2RGB)\n", "\n", "# Resize the image to the network input shape.\n", "resized_image = cv2.resize(src=image, dsize=(224, 224))\n", "\n", "# Add a batch dimension so the image matches the network input shape.\n", "input_image = np.expand_dims(resized_image, 0)\n", "\n", "plt.imshow(image);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Do Inference" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "result = compiled_model(input_image)[output_key]\n", "\n", "result_index = np.argmax(result)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Convert the inference result to a class name.\n", "imagenet_classes = open(\"utils/imagenet_2012.txt\").read().splitlines()\n", "\n", "# The model description states that for this model, class 0 is background.\n", "# Therefore, add background at the beginning of imagenet_classes.\n", "imagenet_classes = ['background'] + imagenet_classes\n", "\n", "imagenet_classes[result_index]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Timing\n", "\n", "Measure the time it takes to do inference on a thousand images. This gives an indication of performance. For more accurate benchmarking, use the [Benchmark Tool](https://docs.openvino.ai/latest/openvino_inference_engine_tools_benchmark_tool_README.html) in OpenVINO. Keep in mind that many further optimizations are possible to improve performance." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_images = 1000\n", "\n", "start = time.perf_counter()\n", "\n", "for _ in range(num_images):\n", "    compiled_model([input_image])\n", "\n", "end = time.perf_counter()\n", "time_ir = end - start\n", "\n", "print(\n", "    f\"IR model in OpenVINO Runtime/CPU: {time_ir/num_images:.4f} \"\n", "    f\"seconds per image, FPS: {num_images/time_ir:.2f}\"\n", ")" ] },
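{ "cell_type": "markdown", "metadata": {}, "source": [ "As a follow-up, the [Benchmark Tool](https://docs.openvino.ai/latest/openvino_inference_engine_tools_benchmark_tool_README.html) mentioned above gives more reliable numbers than this simple loop. The next cell is a minimal sketch of invoking it on the converted IR model; it assumes the `benchmark_app` command is available on the PATH (it is typically installed together with `mo` by the `openvino-dev` package)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A minimal sketch: benchmark the converted model with the OpenVINO Benchmark Tool.\n", "# -m: path to the model, -d: target device, -t: benchmark duration in seconds.\n", "! benchmark_app -m $ir_path -d CPU -t 15" ] }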
], "metadata": { "colab": { "collapsed_sections": [], "name": "Convert a TensorFlow Model to OpenVINO.ipynb", "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 4 }