{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", " Try in Google Colab\n", " \n", " \n", " \n", " \n", " Share via nbviewer\n", " \n", " \n", " \n", " \n", " View on GitHub\n", " \n", " \n", " \n", " \n", " Download notebook\n", " \n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Find and remove duplicate images with FiftyOne\n", "\n", "Duplicate images in a train or test set can lead to your model learning biases that will impact its ability to generalize to new data. The problem in practice, however, is that large image datasets are difficult to examine so manually finding duplicates can be prohibitive.\n", "\n", "This notebook will guide you through how to find and remove duplicates in your data by:\n", "\n", "* Loading your data into FiftyOne\n", "\n", "* Computing embeddings for your images using the FiftyOne Model Zoo\n", "\n", "* Calculating the similarity of your images\n", "\n", "* Visualizing and automatically removing duplicates\n", "\n", "It should be noted that there are multiple ways that this can be done. The example here is useful for finding near-duplicates pairwise for every image in the dataset.\n", "\n", "[FiftyOne also provides a `uniqueness` function](https://voxel51.com/docs/fiftyone/user_guide/brain.html) that computes a scalar property over the dataset determining the uniqueness of a sample in relation to the rest of the data. It can also be used to manually find near-duplicates, with low uniqueness indicating likely duplicate or near-duplicate images. You can see an example of it at the end of the post.\n", "\n", "Alternatively, if you are only interested in exact duplicates, you can [compute a hash over your files](https://voxel51.com/docs/fiftyone/recipes/image_deduplication.html) to quickly find matches. However, if images vary by only small pixel values, this method will fail to find the duplicates." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup\n", "\n", "Run the following lines to install FiftyOne, Scikit Learn, and PyTorch (to generate embeddings):\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install fiftyone" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install torch torchvision sklearn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading your data\n", "\n", "In this example, we will be using the image classification dataset, CIFAR-100. This dataset is fairly old (2009) but still relevant enough that papers submitted to ICLR 2021 are using it as a baseline.\n", "\n", "CIFAR-100 contains 60,000 images between the train and test split annotated with 100 label classes grouped into 20 \"super classes\". This dataset also exists in the FiftyOne dataset zoo and can be easily loaded." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "import fiftyone as fo\n", "import fiftyone.zoo as foz" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Split 'train' already downloaded\n", "Split 'test' already downloaded\n", "Loading 'cifar100' split 'train'\n", " 100% 50000/50000 [51.2s elapsed, 0s remaining, 923.9 samples/s] \n", "Loading 'cifar100' split 'test'\n", " 100% 10000/10000 [3.4s elapsed, 0s remaining, 3.0K samples/s] \n", "Dataset 'cifar100' created\n" ] } ], "source": [ "# This will download all 60,000 samples\n", "dataset = foz.load_zoo_dataset(\"cifar100\")\n", "\n", "# If you are running this notebook on a machine without a GPU or want to \n", "# try a smaller experiment, run this version instead\n", "\n", "# dataset = foz.load_zoo_dataset(\"cifar100\", shuffle=True, max_samples=1000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Loading your own data](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/index.html) is also easy to accomplish in FiftyOne. For example, the following will let you load an image classification dataset stored in a directory tree:\n", "\n", "```\n", "import fiftyone as fo\n", "\n", "dataset = fo.Dataset.from_dir(\"/path/to/dir\", dataset_type=fo.types.ImageClassificationDirectoryTree)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generate Embeddings\n", "\n", "Images store a lot of information in their pixel values. Comparing images pixel-by-pixel would be an expensive operation and result in poor quality results. \n", "\n", "Instead, we can use a pretrained computer vision model to generate embeddings for each image. An embedding is a result of processing an image through a model and extracting an intermediate representation of the image from within the model in the form of a vector containing a few thousand values distilling the information stored in the millions of pixels.\n", "\n", "For deep learning models, one typically uses the output of a fully-connected layer near the end of the forward pass to generate embeddings.\n", "\n", "The [FiftyOne Model Zoo](https://voxel51.com/docs/fiftyone/user_guide/model_zoo/index.html) contains a host of different pretrained models that we can use for this task. In this example, we will use a [MobileNet v2 model trained on ImageNet](https://voxel51.com/docs/fiftyone/user_guide/model_zoo/models.html#mobilenet-v2-imagenet-torch). This model provides relatively high performance, but most importantly is lightweight and can process our dataset quicker than other models. \n", "\n", "Any off-the-shelf model will be informative, but one can easily experiment with other models that may be more useful for particular datasets.\n", "\n", "We can easily load the model and compute embeddings on our dataset." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": { "scrolled": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [] } ], "source": [ "model = foz.load_zoo_model(\"mobilenet-v2-imagenet-torch\")" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 100% 60000/60000 [18.2m elapsed, 0s remaining, 54.1 samples/s] \n", "(60000, 1280)\n" ] } ], "source": [ "embeddings = dataset.compute_embeddings(model)\n", "\n", "print(embeddings.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Calculate Similarity\n", "\n", "Now that we have significantly reduced the dimensionality of our images, we can use classical similarity algorithms to compute how similar every image embedding is to every other image embedding.\n", "\n", "In this case, we will use [cosine similarity provided by Scikit Learn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html) since this algorithm is simple and works fairly well in high dimensional spaces." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics.pairwise import cosine_similarity\n", "import numpy as np" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(60000, 60000)\n", "[[1. 0.66108999 0.68013095 ... 0.66358586 0.58341951 0.72003263]\n", " [0.66108999 1. 0.6504253 ... 0.5211634 0.56383594 0.62717584]\n", " [0.68013095 0.6504253 1. ... 0.60735498 0.50470889 0.67887746]\n", " ...\n", " [0.66358586 0.5211634 0.60735498 ... 1. 0.46415098 0.54793259]\n", " [0.58341951 0.56383594 0.50470889 ... 0.46415098 1. 0.56771587]\n", " [0.72003263 0.62717584 0.67887746 ... 0.54793259 0.56771587 1. ]]\n" ] } ], "source": [ "similarity_matrix = cosine_similarity(embeddings)\n", "\n", "print(similarity_matrix.shape)\n", "print(similarity_matrix)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, all diagonal values are 1 since every image is identical to itself. We can subtract by the identity matrix (N x N matrix with 1's on the diagonal and 0's elsewhere) in order to zero out the diagonal so those values don't show up when we look for samples with maximum similarity." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "n = len(similarity_matrix)\n", "\n", "similarity_matrix = similarity_matrix - np.identity(n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** Computing cosine similarity on datasets with more than 100,000 images can time and memory intensive. It is recommended to split the embeddings into batches and parallelize the process to speed up this computation." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualize and remove duplicates\n", "\n", "We can now iterate through every sample and find which other samples are the most similar to it.\n", "\n", "\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "id_map = [s.id for s in dataset.select_fields([\"id\"])]\n", "\n", "for idx, sample in enumerate(dataset):\n", " sample[\"max_similarity\"] = similarity_matrix[idx].max()\n", " sample.save()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [FiftyOne App](https://voxel51.com/docs/fiftyone/user_guide/app.html) allows us to visualize and explore our dataset right in this notebook." ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "
\n", " \n", "
\n", " \n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "session = fo.launch_app(dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Visualizing the results and sorting by the samples with the highest similarity shows us the duplicates in the dataset. \n", "\n", "Right off the bat, we can see a lot of duplicates and something even more problematic. Two of the images are duplicates but one is in the train split and one is in the test split... and they are labeled differently (seal vs otter)!!! There are a few things wrong with this:\n", "\n", "* It can't be both a seal and an otter so one of the labels is wrong. Additionally, providing different labels for the train and test versions of the image will undoubtedly cause the model to fail.\n", "\n", "* Test sets that contain duplicates of the training set will lead to false confidence in the generalizability of your model. If your test set is not truly independent of your training set, the apparent performance of your model will likely drop-off when applied to production data.\n", "\n", "By looking through the results, we can find a threshold that we can use as a cutoff for when two images are determined to be duplicated. This threshold will be different for every dataset/model used in this process so the visualization step is crucial." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Further inspection puts a good threshold for guaranteed duplicates around 0.92. Lower values likely also include duplicates but should be verified manually so that we do not remove useful data. We can filter the dataset through code as well to see just how many samples have a max_similarity of > 0.92." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "Dataset: cifar100\n", "Media type: image\n", "Num samples: 4345\n", "Tags: ['test', 'train']\n", "Sample fields:\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.Metadata)\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)\n", " max_similarity: fiftyone.core.fields.FloatField\n", "View stages:\n", " 1. Match(filter={'$expr': {'$gt': [...]}})" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from fiftyone import ViewField as F\n", "\n", "dataset.match(F(\"max_similarity\")>0.92)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "4,345 out of 60,000 samples are conservatively marked as duplicates!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use a threshold of 0.92 and tag all duplicate samples. This is where you would remove them if so desired." 
] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "id_map = [s.id for s in dataset.select_fields([\"id\"])]" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "60000\n" ] } ], "source": [ "thresh = 0.92\n", "samples_to_remove = set()\n", "samples_to_keep = set()\n", "\n", "for idx, sample in enumerate(dataset):\n", " if sample.id not in samples_to_remove:\n", " # Keep the first instance of two duplicates\n", " samples_to_keep.add(sample.id)\n", " \n", " dup_idxs = np.where(similarity_matrix[idx] > thresh)[0]\n", " for dup in dup_idxs:\n", " # We kept the first instance so remove all other duplicates\n", " samples_to_remove.add(id_map[dup])\n", "\n", " if len(dup_idxs) > 0:\n", " sample.tags.append(\"has_duplicates\")\n", " sample.save()\n", "\n", " else:\n", " sample.tags.append(\"duplicate\")\n", " sample.save()\n", "\n", "print(len(samples_to_remove) + len(samples_to_keep))\n", "\n", "# If you want to remove the samples from the dataset entirely, uncomment the following line\n", "# dataset.remove_samples(list(samples_to_remove))" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "
\n", " \n", "
\n", " \n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "session.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how many of these samples have duplicates both in the test and train split and how many are labeled differently." ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "view = dataset.match_tags([\"has_duplicates\",\"duplicate\"])\n", "thresh = 0.92\n", "\n", "for idx, sample in enumerate(dataset):\n", " if sample.id in view:\n", " dup_idxs = np.where(similarity_matrix[idx] > thresh)[0]\n", " dup_splits = []\n", " dup_labels = {sample.ground_truth.label}\n", " for dup in dup_idxs:\n", " dup_sample = dataset[id_map[dup]]\n", " dup_split = \"test\" if \"test\" in dup_sample.tags else \"train\"\n", " dup_splits.append(dup_split)\n", " dup_labels.add(dup_sample.ground_truth.label)\n", " \n", " sample[\"dup_splits\"] = dup_splits\n", " sample[\"dup_labels\"] = list(dup_labels)\n", " sample.save()" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ ",\n", " 'max_similarity': 0.9343109560801919,\n", " 'dup_splits': BaseList(['train']),\n", " 'dup_labels': BaseList(['cup']),\n", "}>" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "view.first()" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "from fiftyone import ViewField as F" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compute how many of the duplicates exist in BOTH the train and test split." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1621\n" ] } ], "source": [ "train_w_test_dups = len(\n", " view\n", " .match(F(\"tags\").contains(\"train\"))\n", " .match(F(\"dup_splits\").contains(\"test\"))\n", ")\n", "\n", "test_w_train_dups = len(\n", " view\n", " .match(F(\"tags\").contains(\"test\"))\n", " .match(F(\"dup_splits\").contains(\"train\"))\n", ")\n", "\n", "print(train_w_test_dups + test_w_train_dups)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compute how many duplicates are labeled differently." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "427\n" ] } ], "source": [ "label_mismatches = len(\n", " view\n", " .match(F(\"dup_labels\").length() > 1)\n", ")\n", "\n", "print(label_mismatches)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Visualize the samples with the most number of duplicates" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "
\n", " \n", "
\n", " \n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "session.view = view.sort_by(F(\"dup_splits\").length(), reverse=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Finding unique images\n", "\n", "FiftyOne also provides a [uniqueness method](https://voxel51.com/docs/fiftyone/user_guide/brain.html#image-uniqueness) that can compute the uniqueness of every image in a dataset. This will result in a score for every image indicating how unique the contents of the image are with respect to all other images.\n", "\n", "We previously computed near-duplicate images in the dataset with cosine similarity, but with uniqueness, we are able to rank the images in the dataset according to their relative uniqueness (i.e., information content) compared to the other images.\n", "\n", "Uniqueness can be helpful when deciding which samples to send to annotators. If you have a limited annotation budget, then you will want to have the most unique samples annotated for training and testing." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "import fiftyone.brain as fob" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loading uniqueness model...\n", "Preparing data...\n", "Generating embeddings...\n", " 100% 2500/2500 [3.6s elapsed, 0s remaining, 734.9 samples/s] \n", "Computing uniqueness...\n", "Saving results...\n", " 100% 2500/2500 [6.0s elapsed, 0s remaining, 384.3 samples/s] \n", "Uniqueness computation complete\n" ] } ], "source": [ "# Process a subset of the dataset to give a taste\n", "fob.compute_uniqueness(dataset.take(2500))" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "uniqueness_view = dataset.exists(\"uniqueness\").sort_by(\"uniqueness\", reverse=True)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Dataset: cifar100\n", "Media type: image\n", "Num samples: 2500\n", "Tags: ['duplicate', 'has_duplicates', 'test', 'train']\n", "Sample fields:\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.Metadata)\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)\n", " max_similarity: fiftyone.core.fields.FloatField\n", " dup_splits: fiftyone.core.fields.ListField\n", " dup_labels: fiftyone.core.fields.ListField\n", " uniqueness: fiftyone.core.fields.FloatField\n", "View stages:\n", " 1. Exists(field='uniqueness', bool=True)\n", " 2. SortBy(field_or_expr='uniqueness', reverse=True)" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "uniqueness_view" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "
\n", " \n", "
\n", " \n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "
\n", " \n", "
\n", " \n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "session.view = uniqueness_view" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "session.freeze()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 2 }