{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Example: Convert a PyTorch model to CoreML\n", "\n", "In this example, we will convert a pretrained PyTorch model. For more details on all the different ways to convert these models, see the [coremltools PyTorch documentation](https://coremltools.readme.io/docs/pytorch-conversion)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a PyTorch Model\n", "\n", "For this example, we will load [MobileNetV2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/) with pretrained weights from ImageNet. PyTorch assumes the input image pixel values have a been normalized in a specific way, which coremltools can approximate during the conversion process. We also follow the model with a Softmax layer since we are doing classification." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "input_shape = (3, 224, 224)\n", "\n", "import torch\n", "import torchvision\n", "\n", "torch_model = torch.nn.Sequential(\n", " torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True),\n", " torch.nn.Softmax(dim=1)\n", ")\n", "torch_model.eval();\n", "torch_model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also want the class labels to convert the model predictions to English category names." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Download class labels (from a separate file)\n", "import urllib\n", "label_url = 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'\n", "class_labels = urllib.request.urlopen(label_url).read().decode(\"utf-8\").splitlines()\n", "class_labels = class_labels[1:] # remove the first class which is background\n", "assert len(class_labels) == 1000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Test the PyTorch Model\n", "\n", "As a quick check, let's load an image of a dog ([source, CC BY-SA 4.0](https://commons.wikimedia.org/wiki/File:Dog_Breeds.jpg)) and run it through the PyTorch model to ensure it is working." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import PIL\n", "\n", "img = PIL.Image.open('Dog.jpg').resize(input_shape[1:])\n", "img" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from torchvision import transforms\n", "\n", "preprocess = transforms.Compose([\n", " # This converts the PIL Image to a tensor with the pixel range of [0, 1]\n", " transforms.ToTensor(),\n", " # This computes (image - mean) / std\n", " transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),\n", "])\n", "\n", "input_tensor = preprocess(img)\n", "input_batch = input_tensor.unsqueeze(0)\n", "\n", "with torch.no_grad():\n", " out = torch_model(input_batch)[0].numpy()\n", "\n", "top_5_indices = np.flip(np.argsort(out))[:5]\n", "for i in top_5_indices:\n", " print(class_labels[i], out[i])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Convert to CoreML Model\n", "\n", "The convert() function in coremltools will take a TensorFlow or PyTorch model in one of many forms, analyze it, and generate a CoreML model wrapped in a `ct.models.MLModel` Python object. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import coremltools as ct\n", "\n", "# Trace the model with random example input\n", "example_input = torch.rand(1, *input_shape)\n", "traced_model = torch.jit.trace(torch_model, example_input)\n", "cml_model = ct.convert(traced_model,\n", "                       inputs=[ct.ImageType(color_layout='RGB', scale=1.0/255.0/0.226,\n", "                                            bias=(-0.485/0.226, -0.456/0.226, -0.406/0.226),\n", "                                            shape=example_input.shape)],\n", "                       classifier_config=ct.ClassifierConfig(class_labels)\n", "                      )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see the metadata describing this model by looking at its string representation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cml_model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can write the model to disk in the CoreML format." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cml_model.save('MobileNet_v2_torch.mlmodel')" ] },
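{ "cell_type": "markdown", "metadata": {}, "source": [ "As an optional sanity check, we can reload the saved file and inspect the input and output descriptions recorded in its protobuf spec. This is a minimal sketch using coremltools' `get_spec()` accessor; the input name it prints is the one we pass to predict() below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Reload the saved model and list its declared inputs and outputs.\n", "reloaded_model = ct.models.MLModel('MobileNet_v2_torch.mlmodel')\n", "spec = reloaded_model.get_spec()\n", "\n", "print('Inputs:')\n", "for inp in spec.description.input:\n", "    print('  ', inp.name)\n", "print('Outputs:')\n", "for outp in spec.description.output:\n", "    print('  ', outp.name)\n", "print('Predicted class feature:', spec.description.predictedFeatureName)" ] },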
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Using the CoreML Model\n", "\n", "There are two ways to use the converted CoreML model for inference:\n", "\n", "- Load the model in Python using coremltools and call the predict() method. This requires macOS, as the model is executed by the CoreML framework. The OS will use whatever specialized hardware is available to accelerate execution.\n", "\n", "- Load the model in a compiled macOS or iOS app using the [CoreML APIs](https://developer.apple.com/documentation/coreml/integrating_a_core_ml_model_into_your_app).\n", "\n", "If you are running this notebook on a Mac (not available on Binder), you can test the model directly:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sys\n", "IS_MACOS = sys.platform == 'darwin'\n", "\n", "if IS_MACOS:\n", "    loaded_model = ct.models.MLModel('MobileNet_v2_torch.mlmodel')\n", "    prediction = loaded_model.predict({'input.1': img})\n", "    print('top prediction:', prediction['classLabel'])\n", "else:\n", "    print('Skipping prediction on non-macOS system')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Testing the CoreML Model on iOS\n", "\n", "If you are running this notebook remotely and browsing on an iOS device, you can download the model you just converted and test it out with the CoreMLCompare app ([source code](https://github.com/sivu22/CoreMLCompare)) and your device camera. (Note: the author of CoreMLCompare is not affiliated with coremltools or Apple. It is just a handy app for loading and comparing CoreML image classifiers.)\n", "\n", "Steps:\n", "- [Install CoreMLCompare from the App Store](https://itunes.apple.com/us/app/coremlcompare/id1378662341), or build it from source and load it on your device if you have an Apple developer account.\n", "- Download the CoreML model you just converted onto your device from the Jupyter file browser, or by tapping this link: [MobileNet_v2_torch.mlmodel](MobileNet_v2_torch.mlmodel)\n", "- Open the \"Files\" app in iOS, browse to the file you downloaded, and tap on it. Then tap the share icon and select \"CoreMLCompare\" on the share sheet that appears. This will open the CoreMLCompare app.\n", "- The app will pop up a modal asking for the index of the new model. CoreMLCompare can run up to 4 different image classifiers simultaneously, and includes two by default (MobileNet and SqueezeNet). Select index 3 and tap \"Import\" to put your new model into an empty slot.\n", "- After a brief delay, the image classifications at the bottom of the image will start to update live based on what your camera is pointing at.\n", "\n", "Note that this model uses ImageNet categories, which might not correspond to the types of objects you have nearby. A pen on a white piece of paper provides a good test case. It will likely be classified as \"Ballpoint\" or \"Fountain Pen\"." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }