{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", " Try in Google Colab\n", " \n", " \n", " \n", " \n", " Share via nbviewer\n", " \n", " \n", " \n", " \n", " View on GitHub\n", " \n", " \n", " \n", " \n", " Download notebook\n", " \n", "
\n" ] }, { "cell_type": "markdown", "metadata": { "id": "iFmwq9BW1xPr" }, "source": [ "## Digging into COCO\n", "\n", "Notebooks offer a convenient way to analyze visual datasets. Code and visualizations can live in the same place, which is exactly what CV/ML often requires. With that in mind, being able to find problems in visual datasets is the first step towards improving them. This notebook walks us though each \"step\" (i.e., a notebook cell) of digging for problems in an image dataset. First, we'll need to install the `fiftyone` package with `pip`." ] }, { "cell_type": "markdown", "metadata": { "id": "prVJVEIKSJBz" }, "source": [ "*If you're working in Google Colab, be sure to [enable a GPU runtime](https://colab.research.google.com/drive/1P7okDVh6viCIOkii6UAF2O9sTAcKGNWq) before running any cell*" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "anLHD4pYArPt" }, "outputs": [], "source": [ "!pip install fiftyone" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we can download and load our dataset. We will be using the [COCO-2017](https://voxel51.com/docs/fiftyone/user_guide/dataset_zoo/datasets.html#coco-2017) validation split. Let's also take a moment to visualize the ground truth detection labels using the [FiftyOne App](https://voxel51.com/docs/fiftyone/user_guide/app.html). The following two cells will do all of this for us." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Split 'validation' already downloaded\n", "Loading 'coco-2017' split 'validation'\n", " 100% |██████████████████████████| 5000/5000 [27.8s elapsed, 0s remaining, 177.9 samples/s] \n", "Dataset 'coco-2017-validation' created\n" ] } ], "source": [ "import fiftyone as fo\n", "import fiftyone.zoo as foz\n", "\n", "dataset = foz.load_zoo_dataset(\"coco-2017\", split=\"validation\")" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "5GxX7n0PI46r", "outputId": "fb593f45-d527-4084-de12-1147b2bc69e6" }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "
\n", "
\n", " \n", "
\n", "
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] }, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": { "tags": [] }, "output_type": "display_data" } ], "source": [ "session = fo.launch_app(dataset)" ] }, { "cell_type": "markdown", "metadata": { "id": "LrvZTeYKVJ0b" }, "source": [ "We have our [COCO-2017](https://voxel51.com/docs/fiftyone/user_guide/dataset_zoo/datasets.html#coco-2017) validation dataset loaded, now let's download and load our model and apply it to our validation dataset. We will be using the [faster-rcnn-resnet50-fpn-coco-torch](https://voxel51.com/docs/fiftyone/user_guide/model_zoo/models.html#faster-rcnn-resnet50-fpn-coco-torch) pre-trained model from the [FiftyOne Model Zoo](https://voxel51.com/docs/fiftyone/user_guide/model_zoo/index.html). Let's apply the predictions to a new label field predictions, and limit the application to detections with confidence >= 0.6:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "jNf9EFCjI9LA", "outputId": "71696213-7eff-4348-9cf0-f80b0a9f160d" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Downloading model from 'https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth'...\n", " 100% |██████| 1.2Gb/1.2Gb [1.6s elapsed, 0s remaining, 727.7Mb/s] \n", " 100% |█████| 5000/5000 [31.0m elapsed, 0s remaining, 2.7 samples/s] \n" ] } ], "source": [ "model = foz.load_zoo_model(\"faster-rcnn-resnet50-fpn-coco-torch\")\n", "\n", "# This will take some time. If not using a GPU, I recommend reducing\n", "# the dataset size with the below line. 
Results will differ.\n", "# \n", "# dataset = dataset.take(100)\n", "\n", "dataset.apply_model(model, label_field=\"predictions\", confidence_thresh=0.6)" ] }, { "cell_type": "markdown", "metadata": { "id": "tPIUIY2IWp2Q" }, "source": [ "Let's focus on issues related to vehicle detections: we'll treat all buses, cars, and trucks as vehicles and ignore any other detections, in both the ground truth labels and our predictions.\n", "\n", "The following cell filters our dataset down to a view containing only our vehicle detections and renders the view in the App. Because we are in a notebook, you will notice that each time a new App cell is opened, the previously active App cell will be replaced with a screenshot of itself. Neato!" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "id": "mKC2ZPFYlQ31", "outputId": "4b91a773-5536-4972-f097-878f629941b8" }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "
\n", "
\n", " \n", "
\n", "
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] }, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": { "tags": [] }, "output_type": "display_data" } ], "source": [ "from fiftyone import ViewField as F\n", "\n", "vehicle_labels = [\"bus\",\"car\", \"truck\"]\n", "only_vehicles = F(\"label\").is_in(vehicle_labels)\n", "\n", "vehicles = (\n", " dataset\n", " .filter_labels(\"predictions\", only_vehicles)\n", " .filter_labels(\"ground_truth\", only_vehicles)\n", ")\n", "\n", "session.view = vehicles" ] }, { "cell_type": "markdown", "metadata": { "id": "BZ-brN_E7gzu" }, "source": [ "Now that we have our predictions, we can evaluate the model. We'll use the [evaluate_detections()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.collections.html?highlight=evaluate_detections#fiftyone.core.collections.SampleCollection.evaluate_detections) method, which is available on all FiftyOne datasets/views and uses the COCO evaluation methodology by default:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "spYcEnkcoOrD", "outputId": "631e6dd9-6cf1-497a-fc3c-84aaacaa31b9" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Evaluating detections...\n", " 100% |███████| 640/640 [10.1s elapsed, 0s remaining, 67.8 samples/s] \n" ] } ], "source": [ "results = vehicles.evaluate_detections(\n", " \"predictions\",\n", " gt_field=\"ground_truth\",\n", " eval_key=\"eval\",\n", " iou=0.75,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "j7KyDYot730s" }, "source": [ "[evaluate_detections()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.collections.html?highlight=evaluate_detections#fiftyone.core.collections.SampleCollection.evaluate_detections) has populated various pieces of data about the evaluation into our dataset. Of note is information about which predictions were not matched with a ground truth box. 
The following view into the dataset lets us look at only those unmatched predictions, sorted by maximum per-sample confidence in descending order." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "V1MEIebgxcWO", "outputId": "9e1101aa-5d0c-45e5-da56-9b10e81bfe87" }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "
\n", "
\n", " \n", "
\n", "
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] }, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": { "tags": [] }, "output_type": "display_data" } ], "source": [ "session.view = (\n", " vehicles\n", " .filter_labels(\"predictions\", F(\"eval_id\") == \"\")\n", " .sort_by(F(\"predictions.detections\").map(F(\"confidence\")).max(), reverse=True)\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "-j5PpqOpcUSn" }, "source": [ "Double-clicking on a few images, we can see that the most common reason for an unmatched prediction is that there is a label mismatch. It is not surprising, as all three of these classes are in the `vehicle` supercategory. Trucks and cars are often confused in human annotation and model prediction.\n", "\n", "Looking beyond class confusion, though, let's take a look at the first two samples in our unmatched predictions view." ] }, { "cell_type": "markdown", "metadata": { "id": "wOIqlQNXYZDA" }, "source": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
\n", "\n", "\n", "\n", "\n", "\n", "
\n", "The truncated car in the right of the image has too small of a bounding box. The unmatched prediction is far more accurate, but did not meet the IoU threshold.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "dCOucT8BcWb4" }, "source": [ "The very first sample, found in the pictures above, has an annotation mistake. The truncated car in the right of the image has too small of a ground truth bounding box (pink). The unmatched prediction (yellow) is far more accurate, but did not meet the IoU threshold." ] }, { "cell_type": "markdown", "metadata": { "id": "bfwQaXekcYa4" }, "source": [ "\n", "\n", "\n", "\n", "
\n", "\n", "\n", "\n", "
\n", "The predicted box of the car in the shadow of the trees is correct, but it is not labeled in the ground truth." ] }, { "cell_type": "markdown", "metadata": { "id": "9q-QFSQ_8-Fn" }, "source": [ "The second sample found in our unmatched predictions view contains a different kind of annotation error. A more egregious one, in fact. The correctly predicted bounding box (yellow) in the image has no corresponding ground truth. The car in the shade of the trees was simply not annotated." ] }, { "cell_type": "markdown", "metadata": { "id": "dwlyL-Bab29A" }, "source": [ "Manually fixing these mistakes is out of the scope of this example, as it requires a large feedback loop. [FiftyOne](http://fiftyone.ai) is dedicated to making that feedback loop possible (and efficient), but for now let's focus on how we can answer questions about model performance, and confirm the hypothesis that our model does in fact confuse buses, cars, and trucks quite often.\n", "\n", "We'll do this by re-evaluating our predictions with buses, cars, and trucks all merged into a single `vehicle` label. The following creates such a view, clones the view into a separate dataset so we'll have separate evaluation results, and evaluates the merged labels." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "Shb18I6X-_-Q", "outputId": "e27ccdfe-4b91-4ad5-a768-59b5ef4953e8" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Evaluating detections...\n", " 100% |███████| 640/640 [6.6s elapsed, 0s remaining, 102.2 samples/s] \n" ] }, { "data": { "text/html": [ "\n", "\n", "
\n", "
\n", " \n", "
\n", "
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] }, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": { "tags": [] }, "output_type": "display_data" } ], "source": [ "vehicle_labels = {label: \"vehicle\" for label in [\"bus\",\"car\", \"truck\"]}\n", "\n", "merged_vehicles_dataset = (\n", " vehicles\n", " .map_labels(\"ground_truth\", vehicle_labels)\n", " .map_labels(\"predictions\", vehicle_labels)\n", " .select_fields([\"ground_truth\", \"predictions\"])\n", " .clone(\"merged_vehicles_dataset\")\n", ")\n", "\n", "merged_vehicles_dataset.evaluate_detections(\n", " \"predictions\",\n", " gt_field=\"ground_truth\",\n", " eval_key=\"eval\",\n", " iou=0.75,\n", ")\n", "\n", "session.dataset = merged_vehicles_dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "y70WRs_qg0i6" }, "source": [ "Now we have evaluation results for the originally segmented bus, car, and truck detections and the merged detections. We can now simply compare the number of true positives from the original evaluation, to the number of true positives in the merged evaluation." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "uEy6xeVCbHPM", "outputId": "3cab9422-3c15-425f-907a-e0e03438b6a7" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Original Vehicles True Positives: 1431\n", "Merged Vehicles True Positives: 1515\n" ] } ], "source": [ "original_tp_count = vehicles.sum(\"eval_tp\")\n", "merged_tp_count = merged_vehicles_dataset.sum(\"eval_tp\")\n", "\n", "print(\"Original Vehicles True Positives: %d\" % original_tp_count)\n", "print(\"Merged Vehicles True Positives: %d\" % merged_tp_count)" ] }, { "cell_type": "markdown", "metadata": { "id": "KwILBUuxhNNS" }, "source": [ "We can see that before merging the `bus`, `car`, and `truck` labels there were 1,431 true positives. 
Merging the three labels together resulted in 1,515 true positives.\n", "\n", "We were able to confirm our hypothesis, albeit a fairly obvious one. More importantly, we now have a data-backed understanding of a common failure mode of this model, and the entire experiment can be shared with others. The following will screenshot the last active App window, so all outputs can be statically viewed by others." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "dtnA6pNyzcxl" }, "outputs": [], "source": [ "session.freeze() # Screenshot the active App window for sharing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thanks for following along! The FiftyOne project can be found on [GitHub](https://github.com/voxel51/fiftyone). If you agree that the CV/ML community needs an open tool to solve its data problems, give us a star!" ] } ], "metadata": { "accelerator": "GPU", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 1 }