{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## widgets.image_cleaner" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "fastai offers several widgets to support the workflow of a deep learning practitioner. The purpose of the widgets are to help you organize, clean, and prepare your data for your model. Widgets are separated by data type." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.vision import *\n", "from fastai.widgets import DatasetFormatter, ImageCleaner, ImageDownloader, download_google_images\n", "from fastai.gen_doc.nbdoc import *" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "%reload_ext autoreload\n", "%autoreload 2" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.MNIST_SAMPLE)\n", "data = ImageDataBunch.from_folder(path)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn = cnn_learner(data, models.resnet18, metrics=error_rate)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "Total time: 00:17

\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_losserror_rate
10.1676650.1067270.037291
20.1035790.0779360.023553
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "learn.fit_one_cycle(2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.save('stage-1')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We create a databunch with all the data in the training set and no validation set (DatasetFormatter uses only the training set)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "db = (ImageList.from_folder(path)\n", " .split_none()\n", " .label_from_folder()\n", " .databunch())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn = cnn_learner(db, models.resnet18, metrics=[accuracy])\n", "learn.load('stage-1');" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class DatasetFormatter[source][test]

\n", "\n", "> DatasetFormatter()\n", "\n", "

No tests found for DatasetFormatter. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Returns a dataset with the appropriate format and file indices to be displayed. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [`DatasetFormatter`](/widgets.image_cleaner.html#DatasetFormatter) class prepares your image dataset for widgets by returning a formatted [`DatasetTfm`](/vision.data.html#DatasetTfm) based on the [`DatasetType`](/basic_data.html#DatasetType) specified. Use `from_toplosses` to grab the most problematic images directly from your learner. Optionally, you can restrict the formatted dataset returned to `n_imgs`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

from_similars[source][test]

\n", "\n", "> from_similars(**`learn`**, **`layer_ls`**:`list`=***`[0, 7, 2]`***, **\\*\\*`kwargs`**)\n", "\n", "

No tests found for from_similars. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Gets the indices for the most similar images. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter.from_similars)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.widgets.image_cleaner import * " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

from_toplosses[source][test]

\n", "\n", "> from_toplosses(**`learn`**, **`n_imgs`**=***`None`***, **\\*\\*`kwargs`**)\n", "\n", "

No tests found for from_toplosses. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Gets indices with top losses. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter.from_toplosses)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class ImageCleaner[source][test]

\n", "\n", "> ImageCleaner(**`dataset`**, **`fns_idxs`**, **`path`**, **`batch_size`**:`int`=***`5`***, **`duplicates`**=***`False`***)\n", "\n", "

Tests found for ImageCleaner:

  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_cleaner_index_length_mismatch [source]
  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_cleaner_length_correct [source]
  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_cleaner_wrong_input_type [source]

To run tests please refer to this guide.

\n", "\n", "Displays images for relabeling or deletion and saves changes in `path` as 'cleaned.csv'. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) is for cleaning up images that don't belong in your dataset. It renders images in a row and gives you the opportunity to delete the file from your file system. To use [`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) we must first use `DatasetFormatter().from_toplosses` to get the suggested indices for misclassified images." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ds, idxs = DatasetFormatter().from_toplosses(learn)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "c1251de0d9ba41ccb63674dd40c91599", "version_major": 2, "version_minor": 0 }, "text/plain": [ "HBox(children=(VBox(children=(Image(value=b'\\xff\\xd8\\xff\\xe0\\x00\\x10JFIF\\x00\\x01\\x01\\x01\\x00d\\x00d\\x00\\x00\\xff…" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "1d834d97a30046518493bf9c08f1ff0e", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Button(button_style='primary', description='Next Batch', layout=Layout(width='auto'), style=ButtonStyle())" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ "" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ImageCleaner(ds, idxs, path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) does not change anything on disk (neither labels or existence of images). Instead, it creates a 'cleaned.csv' file in your data path from which you need to load your new databunch for the files to changes to be applied. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.read_csv(path/'cleaned.csv', header='infer')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We create a databunch from our csv. We include the data in the training set and we don't use a validation set (DatasetFormatter uses only the training set)\n", "np.random.seed(42)\n", "db = (ImageList.from_df(df, path)\n", " .split_none()\n", " .label_from_df()\n", " .databunch(bs=64))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn = cnn_learner(db, models.resnet18, metrics=error_rate)\n", "learn = learn.load('stage-1')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can then use [`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) again to find duplicates in the dataset. To do this, you can specify `duplicates=True` while calling ImageCleaner after getting the indices and dataset from `.from_similars`. Note that if you are using a layer's output which has dimensions (n_batches, n_features, 1, 1) then you don't need any pooling (this is the case with the last layer). The suggested use of `.from_similars()` with resnets is using the last layer and no pooling, like in the following cell." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Getting activations...\n" ] }, { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " 100.00% [226/226 00:03<00:00]\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Computing similarities...\n" ] } ], "source": [ "ds, idxs = DatasetFormatter().from_similars(learn, layer_ls=[0,7,1], pool=None)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3492d20c679541738c6c7a4a2c9574db", "version_major": 2, "version_minor": 0 }, "text/plain": [ "HBox(children=(VBox(children=(Image(value=b'\\xff\\xd8\\xff\\xe0\\x00\\x10JFIF\\x00\\x01\\x01\\x01\\x00d\\x00d\\x00\\x00\\xff…" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "f6b1d5ea96894ef0ab443192d296ca8e", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Button(button_style='primary', description='Next Batch', layout=Layout(width='auto'), style=ButtonStyle())" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ "" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ImageCleaner(ds, idxs, path, duplicates=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

class ImageDownloader[source][test]

\n", "\n", "> ImageDownloader(**`path`**:`PathOrStr`=***`'data'`***)\n", "\n", "

Tests found for ImageDownloader:

  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_downloader_with_path [source]

To run tests please refer to this guide.

\n", "\n", "Displays a widget that allows searching and downloading images from google images search in a Jupyter Notebook or Lab. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageDownloader)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`ImageDownloader`](/widgets.image_downloader.html#ImageDownloader) widget gives you a way to quickly bootstrap your image dataset without leaving the notebook. It searches and downloads images that match the search criteria and resolution / quality requirements and stores them on your filesystem within the provided `path`.\n", "\n", "Images for each search query (or label) are stored in a separate folder within `path`. For example, if you pupulate `tiger` with a `path` setup to `./data`, you'll get a folder `./data/tiger/` with the tiger images in it.\n", "\n", "[`ImageDownloader`](/widgets.image_downloader.html#ImageDownloader) will automatically clean up and verify the downloaded images with [`verify_images()`](/vision.data.html#verify_images) after downloading them." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "0f3f11652e204b68b8de634aa6ec1484", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HBox(children=(Text(value='', placeholder='What images to search for?'), BoundedIntText(value=1…" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ "" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "path = Config.data_path()/'image_downloader'\n", "os.makedirs(path, exist_ok=True)\n", "ImageDownloader(path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Downloading images in python scripts outside Jupyter notebooks" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " 100.00% [30/30 00:00<00:00]\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " 100.00% [30/30 00:00<00:00]\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ "30" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "path = Config.data_path()/'image_downloader'\n", "files = download_google_images(path, 'aussie shepherd', size='>1024*768', n_images=30)\n", "\n", "len(files)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

download_google_images[source][test]

\n", "\n", "> download_google_images(**`path`**:`PathOrStr`, **`search_term`**:`str`, **`size`**:`str`=***`'>400*300'`***, **`n_images`**:`int`=***`10`***, **`format`**:`str`=***`'jpg'`***, **`max_workers`**:`int`=***`8`***, **`timeout`**:`int`=***`4`***) → `FilePathList`\n", "\n", "

No tests found for download_google_images. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Search for `n_images` images on Google, matching `search_term` and `size` requirements, download them into `path`/`search_term` and verify them, using `max_workers` threads. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(download_google_images)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After populating images with [`ImageDownloader`](/widgets.image_downloader.html#ImageDownloader), you can get a an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) by calling `ImageDataBunch.from_folder(path, size=size)`, or using the data block API." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " 100.00% [50/50 00:00<00:00]\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " 100.00% [50/50 00:01<00:00]\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " 100.00% [50/50 00:00<00:00]\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " 100.00% [50/50 00:01<00:00]\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "Total time: 00:33

\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_lossaccuracy
11.1614910.4246790.807692
20.7512880.0862400.961538
30.5233410.0669931.000000
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Setup path and labels to search for\n", "path = Config.data_path()/'image_downloader'\n", "labels = ['boston terrier', 'french bulldog']\n", "\n", "# Download images\n", "for label in labels: \n", " download_google_images(path, label, size='>400*300', n_images=50)\n", "\n", "# Build a databunch and train! \n", "src = (ImageList.from_folder(path)\n", " .split_by_rand_pct()\n", " .label_from_folder()\n", " .transform(get_transforms(), size=224))\n", "\n", "db = src.databunch(bs=16, num_workers=0)\n", "\n", "learn = cnn_learner(db, models.resnet34, metrics=[accuracy])\n", "learn.fit_one_cycle(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Downloading more than a hundred images\n", "\n", "To fetch more than a hundred images, [`ImageDownloader`](/widgets.image_downloader.html#ImageDownloader) uses `selenium` and `chromedriver` to scroll through the Google Images search results page and scrape image URLs. They're not required as dependencies by default. If you don't have them installed on your system, the widget will show you an error message.\n", "\n", "To install `selenium`, just `pip install selenium` in your fastai environment.\n", "\n", "**On a mac**, you can install `chromedriver` with `brew cask install chromedriver`.\n", "\n", "**On Ubuntu**\n", "Take a look at the latest Chromedriver version available, then something like:\n", "\n", "```\n", "wget https://chromedriver.storage.googleapis.com/2.45/chromedriver_linux64.zip\n", "unzip chromedriver_linux64.zip\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that downloading under 100 images doesn't require any dependencies other than fastai itself, however downloading more than a hundred images [uses `selenium` and `chromedriver`](/widgets.image_cleaner.html#Downloading-more-than-a-hundred-images).\n", "\n", "`size` can be one of:\n", "\n", "```\n", "'>400*300'\n", "'>640*480'\n", "'>800*600'\n", "'>1024*768'\n", "'>2MP'\n", "'>4MP'\n", "'>6MP'\n", "'>8MP'\n", "'>10MP'\n", "'>12MP'\n", "'>15MP'\n", "'>20MP'\n", "'>40MP'\n", "'>70MP'\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

make_dropdown_widget[source][test]

\n", "\n", "> make_dropdown_widget(**`description`**=***`'Description'`***, **`options`**=***`['Label 1', 'Label 2']`***, **`value`**=***`'Label 1'`***, **`file_path`**=***`None`***, **`layout`**=***`Layout()`***, **`handler`**=***`None`***)\n", "\n", "

No tests found for make_dropdown_widget. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return a Dropdown widget with specified `handler`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.make_dropdown_widget)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

next_batch[source][test]

\n", "\n", "> next_batch(**`_`**)\n", "\n", "

No tests found for next_batch. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Handler for 'Next Batch' button click. Delete all flagged images and renders next batch. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.next_batch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

sort_idxs[source][test]

\n", "\n", "> sort_idxs(**`similarities`**)\n", "\n", "

No tests found for sort_idxs. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Sorts `similarities` and return the indexes in pairs ordered by highest similarity. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter.sort_idxs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

make_vertical_box[source][test]

\n", "\n", "> make_vertical_box(**`children`**, **`layout`**=***`Layout()`***, **`duplicates`**=***`False`***)\n", "\n", "

No tests found for make_vertical_box. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Make a vertical box with [`children`](/torch_core.html#children) and `layout`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.make_vertical_box)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

relabel[source][test]

\n", "\n", "> relabel(**`change`**)\n", "\n", "

No tests found for relabel. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Relabel images by moving from parent dir with old label `class_old` to parent dir with new label `class_new`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.relabel)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

largest_indices[source][test]

\n", "\n", "> largest_indices(**`arr`**, **`n`**)\n", "\n", "

No tests found for largest_indices. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Returns the `n` largest indices from a numpy array `arr`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter.largest_indices)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

delete_image[source][test]

\n", "\n", "> delete_image(**`file_path`**)\n", "\n", "

No tests found for delete_image. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.delete_image)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

empty[source][test]

\n", "\n", "> empty()\n", "\n", "

No tests found for empty. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.empty)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

empty_batch[source][test]

\n", "\n", "> empty_batch()\n", "\n", "

No tests found for empty_batch. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.empty_batch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

comb_similarity[source][test]

\n", "\n", "> comb_similarity(**`t1`**:`Tensor`, **`t2`**:`Tensor`, **\\*\\*`kwargs`**)\n", "\n", "

No tests found for comb_similarity. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Computes the similarity function between each embedding of `t1` and `t2` matrices. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter.comb_similarity)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

get_widgets[source][test]

\n", "\n", "> get_widgets(**`duplicates`**)\n", "\n", "

No tests found for get_widgets. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create and format widget set. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.get_widgets)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

write_csv[source][test]

\n", "\n", "> write_csv()\n", "\n", "

No tests found for write_csv. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.write_csv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

create_image_list[source][test]

\n", "\n", "> create_image_list(**`dataset`**, **`fns_idxs`**)\n", "\n", "

No tests found for create_image_list. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create a list of images, filenames and labels but first removing files that are not supposed to be displayed. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.create_image_list)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

render[source][test]

\n", "\n", "> render()\n", "\n", "

No tests found for render. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Re-render Jupyter cell for batch of images. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.render)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

get_similars_idxs[source][test]

\n", "\n", "> get_similars_idxs(**`learn`**, **`layer_ls`**, **\\*\\*`kwargs`**)\n", "\n", "

No tests found for get_similars_idxs. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Gets the indices for the most similar images in `ds_type` dataset " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter.get_similars_idxs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

on_delete[source][test]

\n", "\n", "> on_delete(**`btn`**)\n", "\n", "

No tests found for on_delete. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Flag this image as delete or keep. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.on_delete)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

make_button_widget[source][test]

\n", "\n", "> make_button_widget(**`label`**, **`file_path`**=***`None`***, **`handler`**=***`None`***, **`style`**=***`None`***, **`layout`**=***`Layout(width='auto')`***)\n", "\n", "

No tests found for make_button_widget. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return a Button widget with specified `handler`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.make_button_widget)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

make_img_widget[source][test]

\n", "\n", "> make_img_widget(**`img`**, **`layout`**=***`Layout()`***, **`format`**=***`'jpg'`***)\n", "\n", "

No tests found for make_img_widget. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Returns an image widget for specified file name `img`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.make_img_widget)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

get_actns[source][test]

\n", "\n", "> get_actns(**`learn`**, **`hook`**:[`Hook`](/callbacks.hooks.html#Hook), **`dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), **`pool`**=***`'AdaptiveConcatPool2d'`***, **`pool_dim`**:`int`=***`4`***, **\\*\\*`kwargs`**)\n", "\n", "

No tests found for get_actns. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Gets activations at the layer specified by `hook`, applies `pool` of dim `pool_dim` and concatenates " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter.get_actns)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

batch_contains_deleted[source][test]

\n", "\n", "> batch_contains_deleted()\n", "\n", "

No tests found for batch_contains_deleted. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Check if current batch contains already deleted images. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.batch_contains_deleted)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

make_horizontal_box[source][test]

\n", "\n", "> make_horizontal_box(**`children`**, **`layout`**=***`Layout()`***)\n", "\n", "

No tests found for make_horizontal_box. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Make a horizontal box with [`children`](/torch_core.html#children) and `layout`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ImageCleaner.make_horizontal_box)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

get_toplosses_idxs[source][test]

\n", "\n", "> get_toplosses_idxs(**`learn`**, **`n_imgs`**, **\\*\\*`kwargs`**)\n", "\n", "

No tests found for get_toplosses_idxs. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Sorts `ds_type` dataset by top losses and returns dataset and sorted indices. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter.get_toplosses_idxs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

padded_ds[source][test]

\n", "\n", "> padded_ds(**`ll_input`**, **`size`**=***`(250, 300)`***, **`resize_method`**=***``***, **`padding_mode`**=***`'zeros'`***, **\\*\\*`kwargs`**)\n", "\n", "

No tests found for padded_ds. To contribute a test please refer to this guide and this discussion.

\n", "\n", "For a LabelList `ll_input`, resize each image to `size` using `resize_method` and `padding_mode`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(DatasetFormatter.padded_ds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }