{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "rOvvWAVTkMR7" }, "source": [ "# Eager Few Shot Object Detection Colab\n", "\n", "Welcome to the Eager Few Shot Object Detection Colab --- in this colab we demonstrate fine tuning of a (TF2 friendly) RetinaNet architecture on very few examples of a novel class after initializing from a pre-trained COCO checkpoint.\n", "Training runs in eager mode.\n", "\n", "Estimated time to run through this colab (with GPU): \u003c 5 minutes." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "vPs64QA1Zdov" }, "source": [ "## Imports" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "LBZ9VWZZFUCT" }, "outputs": [], "source": [ "!pip install -U --pre tensorflow==\"2.2.0\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "oi28cqGGFWnY" }, "outputs": [], "source": [ "import os\n", "import pathlib\n", "\n", "# Clone the tensorflow models repository if it doesn't already exist\n", "if \"models\" in pathlib.Path.cwd().parts:\n", " while \"models\" in pathlib.Path.cwd().parts:\n", " os.chdir('..')\n", "elif not pathlib.Path('models').exists():\n", " !git clone --depth 1 https://github.com/tensorflow/models" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "NwdsBdGhFanc" }, "outputs": [], "source": [ "# Install the Object Detection API\n", "%%bash\n", "cd models/research/\n", "protoc object_detection/protos/*.proto --python_out=.\n", "cp object_detection/packages/tf2/setup.py .\n", "python -m pip install ." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "uZcqD4NLdnf4" }, "outputs": [], "source": [ "import matplotlib\n", "import matplotlib.pyplot as plt\n", "\n", "import os\n", "import random\n", "import io\n", "import imageio\n", "import glob\n", "import scipy.misc\n", "import numpy as np\n", "from six import BytesIO\n", "from PIL import Image, ImageDraw, ImageFont\n", "from IPython.display import display, Javascript\n", "from IPython.display import Image as IPyImage\n", "\n", "import tensorflow as tf\n", "\n", "from object_detection.utils import label_map_util\n", "from object_detection.utils import config_util\n", "from object_detection.utils import visualization_utils as viz_utils\n", "from object_detection.utils import colab_utils\n", "from object_detection.builders import model_builder\n", "\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "IogyryF2lFBL" }, "source": [ "# Utilities" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "-y9R0Xllefec" }, "outputs": [], "source": [ "def load_image_into_numpy_array(path):\n", " \"\"\"Load an image from file into a numpy array.\n", "\n", " Puts image into numpy array to feed into tensorflow graph.\n", " Note that by convention we put it into a numpy array with shape\n", " (height, width, channels), where channels=3 for RGB.\n", "\n", " Args:\n", " path: a file path.\n", "\n", " Returns:\n", " uint8 numpy array with shape (img_height, img_width, 3)\n", " \"\"\"\n", " img_data = tf.io.gfile.GFile(path, 'rb').read()\n", " image = Image.open(BytesIO(img_data))\n", " (im_width, im_height) = image.size\n", " return np.array(image.getdata()).reshape(\n", " (im_height, im_width, 3)).astype(np.uint8)\n", "\n", "def 
plot_detections(image_np,\n", " boxes,\n", " classes,\n", " scores,\n", " category_index,\n", " figsize=(12, 16),\n", " image_name=None):\n", " \"\"\"Wrapper function to visualize detections.\n", "\n", " Args:\n", " image_np: uint8 numpy array with shape (img_height, img_width, 3)\n", " boxes: a numpy array of shape [N, 4]\n", " classes: a numpy array of shape [N]. Note that class indices are 1-based,\n", " and match the keys in the label map.\n", " scores: a numpy array of shape [N] or None. If scores=None, then\n", " this function assumes that the boxes to be plotted are groundtruth\n", " boxes and plots all boxes as black with no classes or scores.\n", " category_index: a dict containing category dictionaries (each holding\n", " category index `id` and category name `name`) keyed by category indices.\n", " figsize: size for the figure.\n", " image_name: a name for the image file.\n", " \"\"\"\n", " image_np_with_annotations = image_np.copy()\n", " viz_utils.visualize_boxes_and_labels_on_image_array(\n", " image_np_with_annotations,\n", " boxes,\n", " classes,\n", " scores,\n", " category_index,\n", " use_normalized_coordinates=True,\n", " min_score_thresh=0.8)\n", " if image_name:\n", " plt.imsave(image_name, image_np_with_annotations)\n", " else:\n", " plt.imshow(image_np_with_annotations)\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "sSaXL28TZfk1" }, "source": [ "# Rubber Ducky data\n", "\n", "We will start with some toy (literally) data consisting of 5 images of a rubber\n", "ducky. Note that the [COCO](https://cocodataset.org/#explore) dataset contains a number of animals, but notably, it does *not* contain rubber duckies (or even ducks, for that matter), so this is a novel class." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "SQy3ND7EpFQM" }, "outputs": [], "source": [ "# Load images and visualize\n", "train_image_dir = 'models/research/object_detection/test_images/ducky/train/'\n", "train_images_np = []\n", "for i in range(1, 6):\n", " image_path = os.path.join(train_image_dir, 'robertducky' + str(i) + '.jpg')\n", " train_images_np.append(load_image_into_numpy_array(image_path))\n", "\n", "plt.rcParams['axes.grid'] = False\n", "plt.rcParams['xtick.labelsize'] = False\n", "plt.rcParams['ytick.labelsize'] = False\n", "plt.rcParams['xtick.top'] = False\n", "plt.rcParams['xtick.bottom'] = False\n", "plt.rcParams['ytick.left'] = False\n", "plt.rcParams['ytick.right'] = False\n", "plt.rcParams['figure.figsize'] = [14, 7]\n", "\n", "for idx, train_image_np in enumerate(train_images_np):\n", " plt.subplot(2, 3, idx+1)\n", " plt.imshow(train_image_np)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "cbKXmQoxcUgE" }, "source": [ "# Annotate images with bounding boxes\n", "\n", "In this cell you will annotate the rubber duckies --- draw a box around the rubber ducky in each image; click `next image` to go to the next image and `submit` when there are no more images.\n", "\n", "If you'd like to skip the manual annotation step, we totally understand. 
In this case, simply skip this cell and run the next cell instead, where we've prepopulated the groundtruth with pre-annotated bounding boxes.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "-nEDRoUEcUgL" }, "outputs": [], "source": [ "gt_boxes = []\n", "colab_utils.annotate(train_images_np, box_storage_pointer=gt_boxes)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "wTP9AFqecUgS" }, "source": [ "# In case you didn't want to label...\n", "\n", "Run this cell only if you didn't annotate anything above and\n", "would prefer to just use our pre-annotated boxes. Don't forget\n", "to uncomment." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "wIAT6ZUmdHOC" }, "outputs": [], "source": [ "# gt_boxes = [\n", "# np.array([[0.436, 0.591, 0.629, 0.712]], dtype=np.float32),\n", "# np.array([[0.539, 0.583, 0.73, 0.71]], dtype=np.float32),\n", "# np.array([[0.464, 0.414, 0.626, 0.548]], dtype=np.float32),\n", "# np.array([[0.313, 0.308, 0.648, 0.526]], dtype=np.float32),\n", "# np.array([[0.256, 0.444, 0.484, 0.629]], dtype=np.float32)\n", "# ]" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Dqb_yjAo3cO_" }, "source": [ "# Prepare data for training\n", "\n", "Below we add the class annotations (for simplicity, we assume a single class in this colab, though it should be straightforward to extend this to handle multiple classes). We also convert everything to the format that the training\n", "loop below expects (e.g., everything converted to tensors, classes converted to one-hot representations, etc.)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "HWBqFVMcweF-" }, "outputs": [], "source": [ "\n", "# By convention, our non-background classes start counting at 1. Given\n", "# that we will be predicting just one class, we will therefore assign it a\n", "# `class id` of 1.\n", "duck_class_id = 1\n", "num_classes = 1\n", "\n", "category_index = {duck_class_id: {'id': duck_class_id, 'name': 'rubber_ducky'}}\n", "\n", "# Convert class labels to one-hot; convert everything to tensors.\n", "# The `label_id_offset` here shifts all classes down by one index;\n", "# we do this here so that the model receives one-hot labels where non-background\n", "# classes start counting at the zeroth index. 
This is ordinarily just handled\n", "# automatically in our training binaries, but we need to reproduce it here.\n", "label_id_offset = 1\n", "train_image_tensors = []\n", "gt_classes_one_hot_tensors = []\n", "gt_box_tensors = []\n", "for (train_image_np, gt_box_np) in zip(\n", " train_images_np, gt_boxes):\n", " train_image_tensors.append(tf.expand_dims(tf.convert_to_tensor(\n", " train_image_np, dtype=tf.float32), axis=0))\n", " gt_box_tensors.append(tf.convert_to_tensor(gt_box_np, dtype=tf.float32))\n", " zero_indexed_groundtruth_classes = tf.convert_to_tensor(\n", " np.ones(shape=[gt_box_np.shape[0]], dtype=np.int32) - label_id_offset)\n", " gt_classes_one_hot_tensors.append(tf.one_hot(\n", " zero_indexed_groundtruth_classes, num_classes))\n", "print('Done prepping data.')\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "b3_Z3mJWN9KJ" }, "source": [ "# Let's just visualize the rubber duckies as a sanity check\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "YBD6l-E4N71y" }, "outputs": [], "source": [ "dummy_scores = np.array([1.0], dtype=np.float32) # give boxes a score of 100%\n", "\n", "plt.figure(figsize=(30, 15))\n", "for idx in range(5):\n", " plt.subplot(2, 3, idx+1)\n", " plot_detections(\n", " train_images_np[idx],\n", " gt_boxes[idx],\n", " np.ones(shape=[gt_boxes[idx].shape[0]], dtype=np.int32),\n", " dummy_scores, category_index)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "ghDAsqfoZvPh" }, "source": [ "# Create model and restore weights for all but last layer\n", "\n", "In this cell we build a single-stage detection architecture (RetinaNet) and restore all but the classification layer at the top (which will be randomly initialized).\n", "\n", "For simplicity, we have hardcoded a number of things in this colab for the specific RetinaNet architecture at hand (including assuming that the image size will always be 640x640); however, it is not difficult to generalize to other model configurations."
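] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "inputSizeSketchMd" }, "source": [ "As an optional aside, the next cell sketches how the expected input size could be read from the pipeline config instead of being hardcoded (for this family of configs it lives under `image_resizer.fixed_shape_resizer`); the variable names below are illustrative only." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "inputSizeSketchCode" }, "outputs": [], "source": [ "# Optional sketch: read the expected input size from the pipeline config\n", "# rather than hardcoding 640x640. Assumes the models repo was cloned above;\n", "# the variable names here are illustrative only.\n", "sketch_config_path = 'models/research/object_detection/configs/tf2/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config'\n", "sketch_configs = config_util.get_configs_from_pipeline_file(sketch_config_path)\n", "sketch_resizer = sketch_configs['model'].ssd.image_resizer.fixed_shape_resizer\n", "print('Expected input size: %dx%d' % (sketch_resizer.height, sketch_resizer.width))"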
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "9J16r3NChD-7" }, "outputs": [], "source": [ "# Download the checkpoint and put it into models/research/object_detection/test_data/\n", "\n", "!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz\n", "!tar -xf ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz\n", "!mv ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint models/research/object_detection/test_data/" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "RyT4BUbaMeG-" }, "outputs": [], "source": [ "tf.keras.backend.clear_session()\n", "\n", "print('Building model and restoring weights for fine-tuning...', flush=True)\n", "num_classes = 1\n", "pipeline_config = 'models/research/object_detection/configs/tf2/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config'\n", "checkpoint_path = 'models/research/object_detection/test_data/checkpoint/ckpt-0'\n", "\n", "# Load pipeline config and build a detection model.\n", "#\n", "# Since we are working off of a COCO architecture which predicts 90\n", "# class slots by default, we override the `num_classes` field here to be just\n", "# one (for our new rubber ducky class).\n", "configs = config_util.get_configs_from_pipeline_file(pipeline_config)\n", "model_config = configs['model']\n", "model_config.ssd.num_classes = num_classes\n", "model_config.ssd.freeze_batchnorm = True\n", "detection_model = model_builder.build(\n", " model_config=model_config, is_training=True)\n", "\n", "# Set up object-based checkpoint restore --- RetinaNet has two prediction\n", "# `heads` --- one for classification, the other for box regression. We will\n", "# restore the box regression head but initialize the classification head\n", "# from scratch (we show the omission below by commenting out the line that\n", "# we would add if we wanted to restore both heads)\n", "fake_box_predictor = tf.compat.v2.train.Checkpoint(\n", " _base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,\n", " # _prediction_heads=detection_model._box_predictor._prediction_heads,\n", " # (i.e., the classification head that we *will not* restore)\n", " _box_prediction_head=detection_model._box_predictor._box_prediction_head,\n", " )\n", "fake_model = tf.compat.v2.train.Checkpoint(\n", " _feature_extractor=detection_model._feature_extractor,\n", " _box_predictor=fake_box_predictor)\n", "ckpt = tf.compat.v2.train.Checkpoint(model=fake_model)\n", "ckpt.restore(checkpoint_path).expect_partial()\n", "\n", "# Run model through a dummy image so that variables are created\n", "image, shapes = detection_model.preprocess(tf.zeros([1, 640, 640, 3]))\n", "prediction_dict = detection_model.predict(image, shapes)\n", "_ = detection_model.postprocess(prediction_dict, shapes)\n", "print('Weights restored!')" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "pCkWmdoZZ0zJ" }, "source": [ "# Eager mode custom training loop\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "nyHoF4mUrv5-" }, "outputs": [], "source": [ "tf.keras.backend.set_learning_phase(True)\n", "\n", "# These parameters can be tuned; since our training set has 5 images\n", "# it doesn't make sense to have a much larger batch size, though we could\n", "# fit more examples in memory if we wanted to.\n", "batch_size = 4\n", "learning_rate = 0.01\n", 
"num_batches = 100\n", "\n", "# Select variables in top layers to fine-tune.\n", "trainable_variables = detection_model.trainable_variables\n", "to_fine_tune = []\n", "prefixes_to_train = [\n", " 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead',\n", " 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead']\n", "for var in trainable_variables:\n", " if any([var.name.startswith(prefix) for prefix in prefixes_to_train]):\n", " to_fine_tune.append(var)\n", "\n", "# Set up forward + backward pass for a single train step.\n", "def get_model_train_step_function(model, optimizer, vars_to_fine_tune):\n", " \"\"\"Get a tf.function for training step.\"\"\"\n", "\n", " # Use tf.function for a bit of speed.\n", " # Comment out the tf.function decorator if you want the inside of the\n", " # function to run eagerly.\n", " @tf.function\n", " def train_step_fn(image_tensors,\n", " groundtruth_boxes_list,\n", " groundtruth_classes_list):\n", " \"\"\"A single training iteration.\n", "\n", " Args:\n", " image_tensors: A list of [1, height, width, 3] Tensor of type tf.float32.\n", " Note that the height and width can vary across images, as they are\n", " reshaped within this function to be 640x640.\n", " groundtruth_boxes_list: A list of Tensors of shape [N_i, 4] with type\n", " tf.float32 representing groundtruth boxes for each image in the batch.\n", " groundtruth_classes_list: A list of Tensors of shape [N_i, num_classes]\n", " with type tf.float32 representing groundtruth boxes for each image in\n", " the batch.\n", "\n", " Returns:\n", " A scalar tensor representing the total loss for the input batch.\n", " \"\"\"\n", " shapes = tf.constant(batch_size * [[640, 640, 3]], dtype=tf.int32)\n", " model.provide_groundtruth(\n", " groundtruth_boxes_list=groundtruth_boxes_list,\n", " groundtruth_classes_list=groundtruth_classes_list)\n", " with tf.GradientTape() as tape:\n", " preprocessed_images = tf.concat(\n", " [detection_model.preprocess(image_tensor)[0]\n", " for image_tensor in image_tensors], axis=0)\n", " prediction_dict = model.predict(preprocessed_images, shapes)\n", " losses_dict = model.loss(prediction_dict, shapes)\n", " total_loss = losses_dict['Loss/localization_loss'] + losses_dict['Loss/classification_loss']\n", " gradients = tape.gradient(total_loss, vars_to_fine_tune)\n", " optimizer.apply_gradients(zip(gradients, vars_to_fine_tune))\n", " return total_loss\n", "\n", " return train_step_fn\n", "\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9)\n", "train_step_fn = get_model_train_step_function(\n", " detection_model, optimizer, to_fine_tune)\n", "\n", "print('Start fine-tuning!', flush=True)\n", "for idx in range(num_batches):\n", " # Grab keys for a random subset of examples\n", " all_keys = list(range(len(train_images_np)))\n", " random.shuffle(all_keys)\n", " example_keys = all_keys[:batch_size]\n", "\n", " # Note that we do not do data augmentation in this demo. 
If you want a\n", " # fun exercise, we recommend experimenting with random horizontal flipping\n", " # and random cropping :)\n", " gt_boxes_list = [gt_box_tensors[key] for key in example_keys]\n", " gt_classes_list = [gt_classes_one_hot_tensors[key] for key in example_keys]\n", " image_tensors = [train_image_tensors[key] for key in example_keys]\n", "\n", " # Training step (forward pass + backward pass)\n", " total_loss = train_step_fn(image_tensors, gt_boxes_list, gt_classes_list)\n", "\n", " if idx % 10 == 0:\n", " print('batch ' + str(idx) + ' of ' + str(num_batches)\n", " + ', loss=' + str(total_loss.numpy()), flush=True)\n", "\n", "print('Done fine-tuning!')" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "WHlXL1x_Z3tc" }, "source": [ "# Load test images and run inference with the new model!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "WcE6OwrHQJya" }, "outputs": [], "source": [ "test_image_dir = 'models/research/object_detection/test_images/ducky/test/'\n", "test_images_np = []\n", "for i in range(1, 50):\n", " image_path = os.path.join(test_image_dir, 'out' + str(i) + '.jpg')\n", " test_images_np.append(np.expand_dims(\n", " load_image_into_numpy_array(image_path), axis=0))\n", "\n", "# Again, comment out this decorator if you want to run inference eagerly\n", "@tf.function\n", "def detect(input_tensor):\n", " \"\"\"Run detection on an input image.\n", "\n", " Args:\n", " input_tensor: A [1, height, width, 3] Tensor of type tf.float32.\n", " Note that height and width can be anything since the image will be\n", " immediately resized according to the needs of the model within this\n", " function.\n", "\n", " Returns:\n", " A dict containing 3 Tensors (`detection_boxes`, `detection_classes`,\n", " and `detection_scores`).\n", " \"\"\"\n", " preprocessed_image, shapes = detection_model.preprocess(input_tensor)\n", " prediction_dict = detection_model.predict(preprocessed_image, shapes)\n", " return detection_model.postprocess(prediction_dict, shapes)\n", "\n", "# Note that the first frame will trigger tracing of the tf.function, which will\n", "# take some time, after which inference should be fast.\n", "\n", "label_id_offset = 1\n", "for i in range(len(test_images_np)):\n", " input_tensor = tf.convert_to_tensor(test_images_np[i], dtype=tf.float32)\n", " detections = detect(input_tensor)\n", "\n", " plot_detections(\n", " test_images_np[i][0],\n", " detections['detection_boxes'][0].numpy(),\n", " detections['detection_classes'][0].numpy().astype(np.uint32)\n", " + label_id_offset,\n", " detections['detection_scores'][0].numpy(),\n", " category_index, figsize=(15, 20), image_name=\"gif_frame_\" + ('%02d' % i) + \".jpg\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "RW1FrT2iNnpy" }, "outputs": [], "source": [ "imageio.plugins.freeimage.download()\n", "\n", "anim_file = 'duckies_test.gif'\n", "\n", "filenames = glob.glob('gif_frame_*.jpg')\n", "filenames = sorted(filenames)\n", "images = []\n", "for filename in filenames:\n", " image = imageio.imread(filename)\n", " images.append(image)\n", "\n", "imageio.mimsave(anim_file, images, 'GIF-FI', fps=5)\n", "\n", "display(IPyImage(open(anim_file, 'rb').read()))" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "interactive_eager_few_shot_od_training_colab.ipynb", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": 
"python3" } }, "nbformat": 4, "nbformat_minor": 0 }