{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Step 1: Basic Evaluation\n", "In our first step, we will be covering how we can perform basic evaluation. One of the great parts of FiftyOne is that once your data and model predictions are in FiftyOne, evaluation becomes easy, no matter if you are coming from different formats. Gone are the days of converting your YOLO styled predictions to COCO styled evaluation, FiftyOne handles all the conversions for you so you can focus on the task at hand.\n", "\n", "Let's take a look first at loading a common a dataset with predictions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Installation\n", "\n", "Here are some packages that are needed to help run some of our demo code:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install fiftyone torch ultralytics pycocotools" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading a Zoo Dataset for Evaluation\n", "We will be loading the [quickstart](https://docs.voxel51.com/api/fiftyone.utils.quickstart.html) dataset from the [Dataset Zoo](https://docs.voxel51.com/dataset_zoo/index.html). This dataset is a slice of MSCOCO and contains some preloaded predictions. If you are unsure how to load your own detection dataset, be sure to checkout our [Getting Started with Detections](<../object_detection/index.html>)\n", "\n", "Once our dataset is loaded, we can start getting ready for model eval!" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Dataset already downloaded\n", "Loading existing dataset 'quickstart'. To reload from disk, either delete the existing dataset or provide a custom `dataset_name` to use\n", "Name: quickstart\n", "Media type: image\n", "Num samples: 200\n", "Persistent: False\n", "Tags: []\n", "Sample fields:\n", " id: fiftyone.core.fields.ObjectIdField\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)\n", " created_at: fiftyone.core.fields.DateTimeField\n", " last_modified_at: fiftyone.core.fields.DateTimeField\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n", " uniqueness: fiftyone.core.fields.FloatField\n", " predictions: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n", " eval_tp: fiftyone.core.fields.IntField\n", " eval_fp: fiftyone.core.fields.IntField\n", " eval_fn: fiftyone.core.fields.IntField\n", " eval_high_conf_tp: fiftyone.core.fields.IntField\n", " eval_high_conf_fp: fiftyone.core.fields.IntField\n", " eval_high_conf_fn: fiftyone.core.fields.IntField\n" ] } ], "source": [ "import fiftyone as fo\n", "import fiftyone.zoo as foz\n", "\n", "dataset = foz.load_zoo_dataset(\"quickstart\")\n", "\n", "# View summary info about the dataset\n", "print(dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we go further, let’s launch the [FiftyOne App](https://docs.voxel51.com/user_guide/app.html) and use the GUI to explore the dataset visually:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Session launched. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluate Detections\n", "\n", "Now that we have samples with ground truth and predicted objects, let’s use FiftyOne to evaluate the quality of the detections.\n", "\n", "FiftyOne provides a powerful [evaluation API](https://docs.voxel51.com/user_guide/evaluation.html) with a collection of methods for evaluating model predictions. Since we’re working with object detections here, we’ll use [detection evaluation](https://docs.voxel51.com/api/fiftyone.core.collections.html#fiftyone.core.collections.SampleCollection.evaluate_detections).\n", "\n", "We can run evaluation on our samples via `evaluate_detections()`. Note that this method is available on both the `Dataset` and `DatasetView` classes, which means that we can run evaluation on subsets of our dataset as well.\n", "\n", "By default, this method uses the COCO evaluation protocol, plus some extra goodies that we will use later." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Evaluating detections...\n", " 100% |█████████████████| 200/200 [7.5s elapsed, 0s remaining, 18.3 samples/s] \n", "Performing IoU sweep...\n", " 100% |█████████████████| 200/200 [2.4s elapsed, 0s remaining, 76.0 samples/s] \n" ] } ], "source": [ "results = dataset.evaluate_detections(\n", " \"predictions\",\n", " gt_field=\"ground_truth\",\n", " eval_key=\"eval\",\n", " compute_mAP=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analyzing Results\n", "\n", "The `results` object returned by the evaluation routine provides a number of convenient methods for analyzing our predictions.\n", "\n", "For example, let’s print a classification report for the top-10 most common classes in the dataset:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " precision recall f1-score support\n", "\n", " person 0.52 0.94 0.67 716\n", " kite 0.59 0.88 0.71 140\n", " car 0.18 0.80 0.29 61\n", " bird 0.65 0.78 0.71 110\n", " carrot 0.09 0.74 0.16 47\n", " boat 0.09 0.46 0.16 37\n", " surfboard 0.17 0.73 0.28 30\n", "traffic light 0.32 0.79 0.45 24\n", " airplane 0.36 0.83 0.50 24\n", " bench 0.17 0.52 0.26 23\n", "\n", " micro avg 0.38 0.87 0.53 1212\n", " macro avg 0.31 0.75 0.42 1212\n", " weighted avg 0.47 0.87 0.60 1212\n", "\n" ] } ], "source": [ "# Get the 10 most common classes in the dataset\n", "counts = dataset.count_values(\"ground_truth.detections.label\")\n", "classes_top10 = sorted(counts, key=counts.get, reverse=True)[:10]\n", "\n", "# Print a classification report for the top-10 classes\n", "results.print_report(classes=classes_top10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also grab the mean average precision (mAP) of our model:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.3957238101325776\n" ] } ], "source": [ "print(results.mAP())" ] },
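{ "cell_type": "markdown", "metadata": {}, "source": [ "Because we passed `eval_key=\"eval\"`, the evaluation also recorded per-detection results: each predicted detection was marked as a true positive (`tp`) or false positive (`fp`), and unmatched ground truth detections were marked as false negatives (`fn`). Here is a minimal sketch, assuming the `eval` key from above, of how we could pull up just the failure cases in the App:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from fiftyone import ViewField as F\n", "\n", "# Each prediction now has an `eval` attribute of \"tp\" or \"fp\";\n", "# this view keeps only the false positive predictions\n", "fp_view = dataset.filter_labels(\"predictions\", F(\"eval\") == \"fp\")\n", "\n", "# Or browse one TP/FP/FN at a time as evaluation patches\n", "eval_patches = dataset.to_evaluation_patches(\"eval\")\n", "session.view = eval_patches" ] },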
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluate Subsets\n", "\n", "As mentioned before, we can evaluate `DatasetView` instances as well! Let's evaluate only where our model is highly confident. First we will create a high-confidence view, then run `evaluate_detections()` again. See [Dataset Views](https://docs.voxel51.com/user_guide/using_views.html) for full details on matching, filtering, and sorting detections." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Dataset: quickstart\n", "Media type: image\n", "Num samples: 200\n", "Sample fields:\n", " id: fiftyone.core.fields.ObjectIdField\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)\n", " created_at: fiftyone.core.fields.DateTimeField\n", " last_modified_at: fiftyone.core.fields.DateTimeField\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n", " uniqueness: fiftyone.core.fields.FloatField\n", " predictions: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n", " eval_high_conf_tp: fiftyone.core.fields.IntField\n", " eval_high_conf_fp: fiftyone.core.fields.IntField\n", " eval_high_conf_fn: fiftyone.core.fields.IntField\n", " eval_tp: fiftyone.core.fields.IntField\n", " eval_fp: fiftyone.core.fields.IntField\n", " eval_fn: fiftyone.core.fields.IntField\n", "View stages:\n", " 1. FilterLabels(field='predictions', filter={'$gt': ['$$this.confidence', 0.75]}, only_matches=False, trajectories=False)\n" ] } ], "source": [ "from fiftyone import ViewField as F\n", "\n", "# Only contains detections with confidence > 0.75\n", "high_conf_view = dataset.filter_labels(\"predictions\", F(\"confidence\") > 0.75, only_matches=False)\n", "\n", "# Print some information about the view\n", "print(high_conf_view)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can check out our new view in the session before we run evaluation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session.view = high_conf_view" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Just like before, let's run evaluation. Be sure to use a new `eval_key` this time!" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Evaluating detections...\n", " 100% |█████████████████| 200/200 [1.4s elapsed, 0s remaining, 113.3 samples/s] \n", "Performing IoU sweep...\n", " 100% |█████████████████| 200/200 [920.9ms elapsed, 0s remaining, 217.2 samples/s] \n", " precision recall f1-score support\n", "\n", " person 0.85 0.72 0.78 412\n", " kite 0.84 0.68 0.75 91\n", " car 0.74 0.51 0.60 61\n", " bird 0.91 0.48 0.63 64\n", " carrot 0.58 0.40 0.47 47\n", " boat 0.62 0.35 0.45 37\n", " surfboard 0.63 0.40 0.49 30\n", "traffic light 0.88 0.62 0.73 24\n", " airplane 0.90 0.79 0.84 24\n", " bench 0.88 0.30 0.45 23\n", "\n", " micro avg 0.82 0.62 0.70 813\n", " macro avg 0.78 0.53 0.62 813\n", " weighted avg 0.81 0.62 0.70 813\n", "\n", "0.3395358471186352\n" ] } ], "source": [ "results = high_conf_view.evaluate_detections(\n", " \"predictions\",\n", " gt_field=\"ground_truth\",\n", " eval_key=\"eval_high_conf\",\n", " compute_mAP=True,\n", ")\n", "\n", "# Print the same report to see the difference\n", "results.print_report(classes=classes_top10)\n", "print(results.mAP())" ] },
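{ "cell_type": "markdown", "metadata": {}, "source": [ "Both runs are now stored on the dataset under their `eval_key`. As a minimal sketch, assuming the `eval` and `eval_high_conf` keys from above, we can list the stored runs and reload the exact view that an evaluation was computed on:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List the evaluation runs stored on this dataset\n", "print(dataset.list_evaluations())\n", "\n", "# Reload the view on which `eval_high_conf` was computed\n", "eval_view = dataset.load_evaluation_view(\"eval_high_conf\")\n", "print(len(eval_view))" ] },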
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluate for Classification\n", "\n", "Evaluation is just as easy for classification tasks. Once your dataset and model predictions are loaded, you can get started with `dataset.evaluate_classifications()`.\n", "\n", "If you need a refresher on how to work with classification datasets, head over to Getting Started with Classifications!" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Split 'test' already downloaded\n", "Loading existing dataset 'cifar10-test'. To reload from disk, either delete the existing dataset or provide a custom `dataset_name` to use\n" ] }, { "data": { "text/html": [ "\n", "\n", "\n", "