{ "cells": [ { "cell_type": "markdown", "id": "b875b31c", "metadata": {}, "source": [ "# OpenVINO API Tutorial" ] }, { "cell_type": "markdown", "id": "09abd9f4", "metadata": {}, "source": [ "This notebook explains the basics of the OpenVINO Inference Engine API. It covers:\n", "\n", "- [Load Inference Engine and Show Info](#Load-Inference-Engine-and-Show-Info)\n", "- [Loading a Model](#Loading-a-Model)\n", " - [IR Model](#IR-Model)\n", " - [ONNX Model](#ONNX-Model)\n", "- [Getting Information about a Model](#Getting-Information-about-a-Model)\n", " - [Model Inputs](#Model-Inputs)\n", " - [Model Outputs](#Model-Outputs)\n", "- [Doing Inference on a Model](#Doing-Inference-on-a-Model)\n", "- [Reshaping and Resizing](#Reshaping-and-Resizing)\n", " - [Change Image Size](#Change-Image-Size)\n", " - [Change Batch Size](#Change-Batch-Size)\n", " - [Caching a Model](#Caching-a-Model)\n", " \n", "The notebook is divided into sections with headers. Each section is standalone and does not depend on previous sections. A segmentation and classification IR model and a segmentation ONNX model are provided as examples. You can replace these model files with your own models. The exact outputs will be different, but the process is the same. " ] }, { "cell_type": "markdown", "id": "9ed058f4", "metadata": {}, "source": [ "## Load Inference Engine and Show Info\n", "\n", "Initialize Inference Engine with IECore()" ] }, { "cell_type": "code", "execution_count": null, "id": "c08b79c4", "metadata": {}, "outputs": [], "source": [ "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()" ] }, { "cell_type": "markdown", "id": "68bc4125", "metadata": {}, "source": [ "Inference Engine can load a network on a device. A device in this context means a CPU, an Intel GPU, a Neural Compute Stick 2, etc. The `available_devices` property shows the devices that are available on your system. The \"FULL_DEVICE_NAME\" option to `ie.get_metric()` shows the name of the device.\n", "\n", "In this notebook the CPU device is used. To use an integrated GPU, use `device_name=\"GPU\"` instead. Note that loading a network on GPU will be slower than loading a network on CPU, but inference will likely be faster." ] }, { "cell_type": "code", "execution_count": null, "id": "d0c94f76", "metadata": {}, "outputs": [], "source": [ "devices = ie.available_devices\n", "for device in devices:\n", " device_name = ie.get_metric(device_name=device, metric_name=\"FULL_DEVICE_NAME\")\n", " print(f\"{device}: {device_name}\")" ] }, { "cell_type": "markdown", "id": "14d62615", "metadata": {}, "source": [ "## Loading a Model\n", "\n", "After initializing Inference Engine, first read the model file with `read_network()`, then load it to the specified device with `load_network()`. \n", "\n", "### IR Model\n", "\n", "An IR (Intermediate Representation) model consists of an .xml file, containing information about network topology, and a .bin file, containing the weights and biases binary data. `read_network()` expects the weights file to be located in the same directory as the xml file, with the same filename, and the extension .bin: `model_weights_file == Path(model_xml).with_suffix(\".bin\")`. If this is the case, specifying the weights file is optional. 
"\n", "See the [tensorflow-to-openvino](../101-tensorflow-to-openvino/101-tensorflow-to-openvino.ipynb) and [pytorch-onnx-to-openvino](../102-pytorch-onnx-to-openvino/102-pytorch-onnx-to-openvino.ipynb) notebooks for information on how to convert your existing TensorFlow, PyTorch, or ONNX model to OpenVINO's IR format with OpenVINO's Model Optimizer. For exporting ONNX models to IR with default settings, the `.serialize()` method can also be used." ] }, { "cell_type": "code", "execution_count": null, "id": "523978fe", "metadata": {}, "outputs": [], "source": [ "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "classification_model_xml = \"model/classification.xml\"\n", "net = ie.read_network(model=classification_model_xml)\n", "exec_net = ie.load_network(network=net, device_name=\"CPU\")" ] }, { "cell_type": "markdown", "id": "e5516e87", "metadata": {}, "source": [ "### ONNX Model\n", "\n", "An ONNX model is a single file. Reading and loading an ONNX model works the same way as reading and loading an IR model. The `model` argument points to the ONNX filename." ] }, { "cell_type": "code", "execution_count": null, "id": "15833f51", "metadata": {}, "outputs": [], "source": [ "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "onnx_model = \"model/segmentation.onnx\"\n", "net_onnx = ie.read_network(model=onnx_model)\n", "exec_net_onnx = ie.load_network(network=net_onnx, device_name=\"CPU\")" ] }, { "cell_type": "markdown", "id": "5e8a38f1-9fa6-41d6-9eae-dfcfac601eb0", "metadata": {}, "source": [ "The ONNX model can be exported to IR with `.serialize()`:" ] }, { "cell_type": "code", "execution_count": null, "id": "49e0f0e9-3a2e-4040-8683-46a3c1088009", "metadata": {}, "outputs": [], "source": [ "net_onnx.serialize(\"model/exported_onnx_model.xml\")" ] }, { "cell_type": "markdown", "id": "8ebee450", "metadata": {}, "source": [ "## Getting Information about a Model\n", "\n", "The OpenVINO `IENetwork` instance stores information about the model. Information about the inputs and outputs of the model is in `net.input_info` and `net.outputs`. These are also properties of the `ExecutableNetwork` instance. Where we use `net.input_info` and `net.outputs` in the cells below, you can also use `exec_net.input_info` and `exec_net.outputs`." ] }, { "cell_type": "markdown", "id": "edc79b32", "metadata": {}, "source": [ "### Model Inputs" ] }, { "cell_type": "code", "execution_count": null, "id": "5571614d", "metadata": {}, "outputs": [], "source": [ "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "classification_model_xml = \"model/classification.xml\"\n", "net = ie.read_network(model=classification_model_xml)\n", "net.input_info" ] }, { "cell_type": "markdown", "id": "4ebb1299", "metadata": {}, "source": [ "The cell above shows that the loaded model expects one input, with the name _input_. If you loaded a different model, you may see a different input layer name, and you may see more inputs.\n", "\n", "It is often useful to have a reference to the name of the first input layer. For a model with one input, `next(iter(net.input_info))` gets this name."
] }, { "cell_type": "code", "execution_count": null, "id": "6cf48354", "metadata": {}, "outputs": [], "source": [ "input_layer = next(iter(net.input_info))\n", "input_layer" ] }, { "cell_type": "markdown", "id": "aea90446", "metadata": {}, "source": [ "Information for this input layer is stored in `input_info`. The next cell prints the input layout, precision and shape." ] }, { "cell_type": "code", "execution_count": null, "id": "bd0bc0c1", "metadata": {}, "outputs": [], "source": [ "print(f\"input layout: {net.input_info[input_layer].layout}\")\n", "print(f\"input precision: {net.input_info[input_layer].precision}\")\n", "print(f\"input shape: {net.input_info[input_layer].tensor_desc.dims}\")" ] }, { "cell_type": "markdown", "id": "d189a73c", "metadata": {}, "source": [ "This cell output tells us that the model expects inputs with a shape of [1,3,224,224], and that this is in NCHW layout. This means that the model expects input data with a batch size (N) of 1, 3 channels (C), and images of a height (H) and width (W) of 224. The input data is expected to be of FP32 (floating point) precision." ] }, { "cell_type": "markdown", "id": "155cf48e", "metadata": {}, "source": [ "### Model Outputs" ] }, { "cell_type": "code", "execution_count": null, "id": "4583eb0e", "metadata": {}, "outputs": [], "source": [ "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "classification_model_xml = \"model/classification.xml\"\n", "net = ie.read_network(model=classification_model_xml)\n", "net.outputs" ] }, { "cell_type": "markdown", "id": "a189c02f", "metadata": {}, "source": [ "Model output info is stored in `net.outputs`. The cell above shows that the model returns one output, with the name _MobilenetV3/Predictions/Softmax_. If you loaded a different model, you will probably see a different output layer name, and you may see more outputs.\n", "\n", "Since this model has one output, follow the same method as for the input layer to get its name." ] }, { "cell_type": "code", "execution_count": null, "id": "88fbbd06", "metadata": {}, "outputs": [], "source": [ "output_layer = next(iter(net.outputs))\n", "output_layer" ] }, { "cell_type": "markdown", "id": "16ad0240", "metadata": {}, "source": [ "Getting the output layout, precision and shape is similar to getting the input layout, precision and shape." ] }, { "cell_type": "code", "execution_count": null, "id": "0ee5e14a", "metadata": {}, "outputs": [], "source": [ "print(f\"output layout: {net.outputs[output_layer].layout}\")\n", "print(f\"output precision: {net.outputs[output_layer].precision}\")\n", "print(f\"output shape: {net.outputs[output_layer].shape}\")" ] }, { "cell_type": "markdown", "id": "2739f5bb", "metadata": {}, "source": [ "This cell output shows that the model returns outputs with a shape of [1, 1001], where 1 is the batch size (N) and 1001 the number of classes (C). The output is returned as 32-bit floating point." ] }, { "cell_type": "markdown", "id": "021708ab", "metadata": {}, "source": [ "## Doing Inference on a Model\n", "\n", "To do inference on a model, call the `infer()` method of the _ExecutableNetwork_, the `exec_net` that we loaded with `load_network()`. `infer()` expects one argument: _inputs_. This is a dictionary, mapping input layer names to input data." 
] }, { "cell_type": "markdown", "id": "3ec2eac8", "metadata": {}, "source": [ "**Preparation: load network**" ] }, { "cell_type": "code", "execution_count": null, "id": "298c80b5", "metadata": {}, "outputs": [], "source": [ "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "classification_model_xml = \"model/classification.xml\"\n", "net = ie.read_network(model=classification_model_xml)\n", "exec_net = ie.load_network(network=net, device_name=\"CPU\")\n", "input_layer = next(iter(net.input_info))\n", "output_layer = next(iter(net.outputs))" ] }, { "cell_type": "markdown", "id": "173cd1c9", "metadata": {}, "source": [ "**Preparation: load image and convert to input shape**\n", "\n", "To propagate an image through the network, it needs to be loaded into an array, resized to the shape that the network expects, and converted to the network's input layout." ] }, { "cell_type": "code", "execution_count": null, "id": "1f23c43a", "metadata": {}, "outputs": [], "source": [ "import cv2\n", "\n", "image_filename = \"data/coco_hollywood.jpg\"\n", "image = cv2.imread(image_filename)\n", "image.shape" ] }, { "cell_type": "markdown", "id": "6bf541d8", "metadata": {}, "source": [ "The image has a shape of (663,994,3). It is 663 pixels in height, 994 pixels in width, and has 3 color channels. We get a reference to the height and width that the network expects and resize the image to that size." ] }, { "cell_type": "code", "execution_count": null, "id": "c9f97da2", "metadata": {}, "outputs": [], "source": [ "# N,C,H,W = batch size, number of channels, height, width\n", "N, C, H, W = net.input_info[input_layer].tensor_desc.dims\n", "# OpenCV resize expects the destination size as (width, height)\n", "resized_image = cv2.resize(src=image, dsize=(W, H))\n", "resized_image.shape" ] }, { "cell_type": "markdown", "id": "436a6d74", "metadata": {}, "source": [ "Now the image has the width and height that the network expects. It is still in H,C,W format. We change it to N,C,H,W format (where N=1) by first calling `np.transpose()` to change to C,H,W and then adding the N dimension by calling `np.expand_dims()`. Convert the data to FP32 with `np.astype()`." ] }, { "cell_type": "code", "execution_count": null, "id": "d09b7275", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "input_data = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0).astype(np.float32)\n", "input_data.shape" ] }, { "cell_type": "markdown", "id": "af110efb", "metadata": {}, "source": [ "**Do inference**\n", "\n", "Now that the input data is in the right shape, doing inference is one simple command:" ] }, { "cell_type": "code", "execution_count": null, "id": "bf94022c", "metadata": {}, "outputs": [], "source": [ "result = exec_net.infer({input_layer: input_data})\n", "result" ] }, { "cell_type": "markdown", "id": "53bf7c61", "metadata": {}, "source": [ "`.infer()` returns a dictionary, mapping output layers to data. Since we know this network returns one output, and we stored the reference to the output layer in the `output_layer` variable, we can get the data with `result[output_layer]`" ] }, { "cell_type": "code", "execution_count": null, "id": "4a0f63b5", "metadata": {}, "outputs": [], "source": [ "output = result[output_layer]\n", "output.shape" ] }, { "cell_type": "markdown", "id": "7e5834ee", "metadata": {}, "source": [ "The output shape is (1,1001), which we saw is the expected shape of the output. This output shape indicates that the network returns probabilities for 1001 classes. 
To transform this into meaningful information, check out the [hello world notebook](../001-hello-world/001-hello-world.ipynb)." ] }, { "cell_type": "markdown", "id": "ec6a9be1", "metadata": {}, "source": [ "## Reshaping and Resizing\n", "\n", "### Change Image Size" ] }, { "cell_type": "markdown", "id": "4239b10c", "metadata": {}, "source": [ "Instead of reshaping the image to fit the model, you can also reshape the model to fit the image. Note that not all models support reshaping, and models that do may not support all input shapes. The model accuracy may also suffer if you reshape the model's input shape.\n", "\n", "We first check the input shape of the model, then reshape it to the new input shape." ] }, { "cell_type": "code", "execution_count": null, "id": "bc1a69f8", "metadata": {}, "outputs": [], "source": [ "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "segmentation_model_xml = \"model/segmentation.xml\"\n", "segmentation_net = ie.read_network(model=segmentation_model_xml)\n", "segmentation_input_layer = next(iter(segmentation_net.input_info))\n", "segmentation_output_layer = next(iter(segmentation_net.outputs))\n", "\n", "print(\"~~~~ ORIGINAL MODEL ~~~~\")\n", "print(f\"input layout: {segmentation_net.input_info[segmentation_input_layer].layout}\")\n", "print(f\"input shape: {segmentation_net.input_info[segmentation_input_layer].tensor_desc.dims}\")\n", "print(f\"output shape: {segmentation_net.outputs[segmentation_output_layer].shape}\")\n", "\n", "new_shape = (1, 3, 544, 544)\n", "segmentation_net.reshape({segmentation_input_layer: new_shape})\n", "segmentation_exec_net = ie.load_network(network=segmentation_net, device_name=\"CPU\")\n", "\n", "print(\"~~~~ RESHAPED MODEL ~~~~\")\n", "print(f\"net input shape: {segmentation_net.input_info[segmentation_input_layer].tensor_desc.dims}\")\n", "print(\n", "    f\"exec_net input shape: \"\n", "    f\"{segmentation_exec_net.input_info[segmentation_input_layer].tensor_desc.dims}\"\n", ")\n", "print(f\"output shape: {segmentation_net.outputs[segmentation_output_layer].shape}\")" ] }, { "cell_type": "markdown", "id": "2104cef6", "metadata": {}, "source": [ "The input shape for the segmentation network is [1,3,512,512], with an NCHW layout: the network expects 3-channel images with a width and height of 512 and a batch size of 1. We reshape the network with the `.reshape()` method of `IENetwork` to make it accept input images with a width and height of 544. This segmentation network always returns arrays with the same width and height as the input width and height, so setting the input dimensions to 544x544 also modifies the output dimensions. After reshaping, load the network to the device again." ] }, { "cell_type": "markdown", "id": "249d697a", "metadata": {}, "source": [ "### Change Batch Size" ] }, { "cell_type": "markdown", "id": "ded79c8f", "metadata": {}, "source": [ "We can also use `.reshape()` to set the batch size by changing the first element of _new_shape_. For example, to set a batch size of two, set `new_shape = (2,3,544,544)` in the cell above. If you only want to change the batch size, you can also set the `batch_size` property directly. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "a49d65c6", "metadata": {}, "outputs": [], "source": [ "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "segmentation_model_xml = \"model/segmentation.xml\"\n", "segmentation_net = ie.read_network(model=segmentation_model_xml)\n", "segmentation_input_layer = next(iter(segmentation_net.input_info))\n", "segmentation_output_layer = next(iter(segmentation_net.outputs))\n", "segmentation_net.batch_size = 2\n", "segmentation_exec_net = ie.load_network(network=segmentation_net, device_name=\"CPU\")\n", "\n", "print(f\"input layout: {segmentation_net.input_info[segmentation_input_layer].layout}\")\n", "print(f\"input shape: {segmentation_net.input_info[segmentation_input_layer].tensor_desc.dims}\")\n", "print(f\"output shape: {segmentation_net.outputs[segmentation_output_layer].shape}\")" ] }, { "cell_type": "markdown", "id": "40c2f3d2", "metadata": {}, "source": [ "The output shows that by setting the batch size to 2, the first element (N) of the input and output shape now has a value of 2. Let's see what happens if we propagate our input image through the network:" ] }, { "cell_type": "code", "execution_count": null, "id": "0eb487fa", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "segmentation_model_xml = \"model/segmentation.xml\"\n", "segmentation_net = ie.read_network(model=segmentation_model_xml)\n", "segmentation_input_layer = next(iter(segmentation_net.input_info))\n", "segmentation_output_layer = next(iter(segmentation_net.outputs))\n", "input_data = np.random.rand(*segmentation_net.input_info[segmentation_input_layer].tensor_desc.dims)\n", "segmentation_net.batch_size = 2\n", "segmentation_exec_net = ie.load_network(network=segmentation_net, device_name=\"CPU\")\n", "result_batch = segmentation_exec_net.infer({segmentation_input_layer: input_data})\n", "\n", "print(f\"input data shape: {input_data.shape}\")\n", "print(f\"result data data shape: {result_batch[segmentation_output_layer].shape}\")" ] }, { "cell_type": "markdown", "id": "7e6c31d1", "metadata": {}, "source": [ "The output shows that if `batch_size` is set to 2, the network output will have a batch size of 2, even if only one image was propagated through the network. Regardless of batch size, you can always do inference on one image. 
In that case, only the first network output contains meaningful information.\n", "\n", "Verify that inference on two images works by creating random data with a batch size of 2:" ] }, { "cell_type": "code", "execution_count": null, "id": "1f24e777", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "segmentation_model_xml = \"model/segmentation.xml\"\n", "segmentation_net = ie.read_network(model=segmentation_model_xml)\n", "segmentation_input_layer = next(iter(segmentation_net.input_info))\n", "segmentation_output_layer = next(iter(segmentation_net.outputs))\n", "segmentation_net.batch_size = 2\n", "# create random input data with the new batch size of 2\n", "input_data = np.random.rand(*segmentation_net.input_info[segmentation_input_layer].tensor_desc.dims)\n", "segmentation_exec_net = ie.load_network(network=segmentation_net, device_name=\"CPU\")\n", "result_batch = segmentation_exec_net.infer({segmentation_input_layer: input_data})\n", "\n", "print(f\"input data shape: {input_data.shape}\")\n", "print(f\"result data shape: {result_batch[segmentation_output_layer].shape}\")" ] }, { "cell_type": "markdown", "id": "a9657dc5-9713-4d2b-a324-c8cd6195e79a", "metadata": {}, "source": [ "## Caching a Model\n", "\n", "For some devices, like GPU, loading a model can take some time. Model Caching solves this issue by storing the loaded model in a cache directory. Caching is enabled by calling `ie.set_config({\"CACHE_DIR\": cache_dir}, device_name=device_name)` before loading the model. With this option set, `load_network()` checks whether the model exists in the cache. If so, it loads the model from the cache. If not, it loads the model regularly and stores it in the cache, so that the next load is served from the cache.\n", "\n", "In the cell below, we create a *model_cache* directory as a subdirectory of *model*, where the model will be cached for the specified device. The model will be loaded to the GPU. After running this cell once, the model will be cached, so subsequent runs of this cell will load the model from the cache. Note: Model Caching is not available on CPU devices." ] }, { "cell_type": "code", "execution_count": null, "id": "1d235185-18f7-4cf0-8cb2-1ecba279318a", "metadata": {}, "outputs": [], "source": [ "import time\n", "from pathlib import Path\n", "\n", "from openvino.inference_engine import IECore\n", "\n", "ie = IECore()\n", "\n", "device_name = \"GPU\"  # Model Caching is not available for CPU\n", "\n", "if device_name in ie.available_devices and device_name != \"CPU\":\n", "    cache_path = Path(\"model/model_cache\")\n", "    cache_path.mkdir(exist_ok=True)\n", "    # Enable caching for Inference Engine. Comment out this line to disable caching\n", "    ie.set_config({\"CACHE_DIR\": str(cache_path)}, device_name=device_name)\n", "\n", "    classification_model_xml = \"model/classification.xml\"\n", "    net = ie.read_network(model=classification_model_xml)\n", "\n", "    start_time = time.perf_counter()\n", "    exec_net = ie.load_network(network=net, device_name=device_name)\n", "    end_time = time.perf_counter()\n", "    print(f\"Loading the network to the {device_name} device took {end_time-start_time:.2f} seconds.\")\n", "else:\n", "    print(\"Model caching is not available on CPU devices.\")" ] }, { "cell_type": "markdown", "id": "48e0a860-c93c-4b93-a684-f53cd66ec2e3", "metadata": {}, "source": [ "After running the previous cell, we know the model exists in the cache directory. We delete `exec_net`, load the network again, and measure the time it takes now."
] }, { "cell_type": "code", "execution_count": null, "id": "b5a7686b-b9e0-44a6-8b6e-5b299d085eac", "metadata": {}, "outputs": [], "source": [ "if device_name in ie.available_devices and device_name != \"CPU\":\n", " del exec_net\n", " start_time = time.perf_counter()\n", " exec_net = ie.load_network(network=net, device_name=device_name)\n", " end_time = time.perf_counter()\n", " print(f\"Loading the network to the {device_name} device took {end_time-start_time:.2f} seconds.\")" ] }, { "cell_type": "code", "execution_count": null, "id": "b21fb052-67df-48b1-adee-a636acf43b6f", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "openvino_env", "language": "python", "name": "openvino_env" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }