{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "mt9dL5dIir8X" }, "source": [ "##### Copyright 2020 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "colab": {}, "colab_type": "code", "id": "ufPx7EiCiqgR" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License.\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "ucMoYase6URl" }, "source": [ "# Load images" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "_Wwu5SXZmEkB" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " View on TensorFlow.org\n", " \n", " Run in Google Colab\n", " \n", " View source on GitHub\n", " \n", " Download notebook\n", "
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Oxw4WahM7DU9" }, "source": [ "This tutorial shows how to load and preprocess an image dataset in two ways. First, you will use high-level Keras preprocessing [utilities](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) and [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing). Next, you will write your own input pipeline from scratch using [tf.data](https://www.tensorflow.org/guide/data)." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "hoQQiZDB6URn" }, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "wR72sK2D5Bev" }, "outputs": [], "source": [ "!pip install tf-nightly" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "3vhAMaIOBIee" }, "outputs": [], "source": [ "import numpy as np\n", "import os\n", "import PIL\n", "import PIL.Image\n", "import tensorflow as tf" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "Qnp9Z2sT5dWj" }, "outputs": [], "source": [ "print(tf.__version__)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "wO0InzL66URu" }, "source": [ "### Download the flowers dataset\n", "\n", "This tutorial uses a dataset of several thousand photos of flowers. The flowers dataset contains 5 sub-directories, one per class:\n", "\n", "```\n", "flowers_photos/\n", " daisy/\n", " dandelion/\n", " roses/\n", " sunflowers/\n", " tulips/\n", "```" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Ju2yXtdV5YaT" }, "source": [ "Note: all images are licensed CC-BY, creators are listed in the LICENSE.txt file." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "rN-Pc6Zd6awg" }, "outputs": [], "source": [ "import pathlib\n", "dataset_url = \"https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz\"\n", "data_dir = tf.keras.utils.get_file(origin=dataset_url, \n", " fname='flower_photos', \n", " untar=True)\n", "data_dir = pathlib.Path(data_dir)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "rFkFK74oO--g" }, "source": [ "After downloading (218MB), you should now have a copy of the flower photos available. There are 3670 total images:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "QhewYCxhXQBX" }, "outputs": [], "source": [ "image_count = len(list(data_dir.glob('*/*.jpg')))\n", "print(image_count)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "ZUFusk44d9GW" }, "source": [ "Each directory contains images of that type of flower. 
Here are some roses:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "crs7ZjEp60Ot" }, "outputs": [], "source": [ "roses = list(data_dir.glob('roses/*'))\n", "PIL.Image.open(str(roses[0]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "oV9PtjdKKWyI" }, "outputs": [], "source": [ "roses = list(data_dir.glob('roses/*'))\n", "PIL.Image.open(str(roses[1]))" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "9_kge08gSCan" }, "source": [ "## Load using keras.preprocessing\n", "\n", "Let's load these images off disk using [image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory)." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "eRACclAfOPYR" }, "source": [ "Note: The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "6jobDTUs8Wxu" }, "source": [ "### Create a dataset" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "lAmtzsnjDNhB" }, "source": [ "Define some parameters for the loader:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "qJdpyqK541ty" }, "outputs": [], "source": [ "batch_size = 32\n", "img_height = 180\n", "img_width = 180" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "ehhW308g8soJ" }, "source": [ "It's good practice to use a validation split when developing your model. We will use 80% of the images for training and 20% for validation." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "chqakIP14PDm" }, "outputs": [], "source": [ "train_ds = tf.keras.preprocessing.image_dataset_from_directory(\n", " data_dir,\n", " validation_split=0.2,\n", " subset=\"training\",\n", " seed=123,\n", " image_size=(img_height, img_width),\n", " batch_size=batch_size)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "pb2Af2lsUShk" }, "outputs": [], "source": [ "val_ds = tf.keras.preprocessing.image_dataset_from_directory(\n", " data_dir,\n", " validation_split=0.2,\n", " subset=\"validation\",\n", " seed=123,\n", " image_size=(img_height, img_width),\n", " batch_size=batch_size)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Ug3ITsz0b_cF" }, "source": [ "You can find the class names in the `class_names` attribute on these datasets." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "R7z2yKt7VDPJ" }, "outputs": [], "source": [ "class_names = train_ds.class_names\n", "print(class_names)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "bK6CQCqIctCd" }, "source": [ "### Visualize the data\n", "\n", "Here are the first 9 images from the training dataset."
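] }, { "cell_type": "markdown", "metadata": { "colab_type": "text" }, "source": [ "Before plotting them, you can optionally confirm the shapes and dtypes the loader produces by inspecting the dataset's `element_spec` property (a quick sanity check; it is not required for the rest of the tutorial):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code" }, "outputs": [], "source": [ "# Each element is an (images, labels) pair of batched tensors;\n", "# the leading batch dimension is shown as None.\n", "print(train_ds.element_spec)"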
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "AAY3LJN28Kuy" }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "plt.figure(figsize=(10, 10))\n", "for images, labels in train_ds.take(1):\n", " for i in range(9):\n", " ax = plt.subplot(3, 3, i + 1)\n", " plt.imshow(images[i].numpy().astype(\"uint8\"))\n", " plt.title(class_names[labels[i]])\n", " plt.axis(\"off\")" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "jUI0fr7igPtA" }, "source": [ "You can train a model using these datasets by passing them to `model.fit` (shown later in this tutorial). If you like, you can also manually iterate over the dataset and retrieve batches of images:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "BdPHeHXt9sjA" }, "outputs": [], "source": [ "for image_batch, labels_batch in train_ds:\n", " print(image_batch.shape)\n", " print(labels_batch.shape)\n", " break" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "2ZgIZeXaDUsF" }, "source": [ "The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension referes to color channels RGB). The `label_batch` is a tensor of the shape `(32,)`, these are corresponding labels to the 32 images. \n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "LyM2y47W-cxJ" }, "source": [ "Note: you can call `.numpy()` on either of these tensors to convert them to a `numpy.ndarray`." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Ybl6a2YCg1rV" }, "source": [ "### Standardize the data\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "IdogGjM2K6OU" }, "source": [ "The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, we will standardize values to be in the `[0, 1]` by using a Rescaling layer." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "16yNdZXdExyM" }, "outputs": [], "source": [ "from tensorflow.keras import layers\n", "\n", "normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Nd0_enkb8uxZ" }, "source": [ "There are two ways to use this layer. You can apply it to the dataset by calling map:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "QgOnza-U_z5Y" }, "outputs": [], "source": [ "normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))\n", "image_batch, labels_batch = next(iter(normalized_ds))\n", "first_image = image_batch[0]\n", "# Notice the pixels values are now in `[0,1]`.\n", "print(np.min(first_image), np.max(first_image)) " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "z39nXayj9ioS" }, "source": [ "Or, you can include the layer inside your model definition to simplify deployment. We will use the second approach here." 
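] }, { "cell_type": "markdown", "metadata": { "colab_type": "text" }, "source": [ "For example, a minimal sketch of the second approach looks like this: the `Rescaling` layer simply becomes the first layer of the model, so the model can be fed raw images in the `[0, 255]` range (the complete model trained later in this tutorial starts the same way; `model_sketch` is only for illustration):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code" }, "outputs": [], "source": [ "# A minimal sketch of the second approach: rescaling happens inside the model,\n", "# so the model can be fed raw [0, 255] images directly.\n", "model_sketch = tf.keras.Sequential([\n", "    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),\n", "    # ... the Conv2D, pooling and Dense layers of your model would follow here ...\n", "])"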
] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "hXLd3wMpDIkp" }, "source": [ "Note: If you would like to scale pixel values to `[-1,1]` you can instead write `Rescaling(1./127.5, offset=-1)`" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "LeNWVa8qRBGm" }, "source": [ "Note: we previously resized images using the `image_size` argument of `image_dataset_from_directory`. If you want to include the resizing logic in your model, you can use the [Resizing](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) layer instead.\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Ti8avTlLofoJ" }, "source": [ "### Configure the dataset for performance\n", "\n", "Let's make sure to use buffered prefetching so we can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data.\n", "\n", "`.cache()` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.\n", "\n", "`.prefetch()` overlaps data preprocessing and model execution while training. \n", "\n", "Interested readers can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance#prefetching)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "Ea3kbMe-pGDw" }, "outputs": [], "source": [ "AUTOTUNE = tf.data.experimental.AUTOTUNE\n", "\n", "train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)\n", "val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "XqHjIr6cplwY" }, "source": [ "### Train a model\n", "\n", "For completeness, we will show how to train a simple model using the datasets we just prepared. This model has not been tuned in any way - the goal is to show you the mechanics using the datasets you just created. To learn more about image classification, visit this [tutorial](https://www.tensorflow.org/tutorials/images/classification)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "LdR0BzCcqxw0" }, "outputs": [], "source": [ "num_classes = 5\n", "\n", "model = tf.keras.Sequential([\n", " layers.experimental.preprocessing.Rescaling(1./255),\n", " layers.Conv2D(32, 3, activation='relu'),\n", " layers.MaxPooling2D(),\n", " layers.Conv2D(32, 3, activation='relu'),\n", " layers.MaxPooling2D(),\n", " layers.Conv2D(32, 3, activation='relu'),\n", " layers.MaxPooling2D(),\n", " layers.Flatten(),\n", " layers.Dense(128, activation='relu'),\n", " layers.Dense(num_classes)\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "t_BlmsnmsEr4" }, "outputs": [], "source": [ "model.compile(\n", " optimizer='adam',\n", " loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),\n", " metrics=['accuracy'])" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "ffwd44ldNMOE" }, "source": [ "Note: we will only train for a few epochs so this tutorial runs quickly. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "S08ZKKODsnGW" }, "outputs": [], "source": [ "model.fit(\n", " train_ds,\n", " validation_data=val_ds,\n", " epochs=3\n", ")" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "MEtT9YGjSAOK" }, "source": [ "Note: you can also write a custom training loop instead of using `model.fit`. To learn more, visit this [tutorial](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch)." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "BaW4wx5L7hrZ" }, "source": [ "You may notice the validation accuracy is low to the compared to the training accuracy, indicating our model is overfitting. You can learn more about overfitting and how to reduce it in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit)." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "AxS1cLzM8mEp" }, "source": [ "## Using tf.data for finer control" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Ylj9fgkamgWZ" }, "source": [ "The above keras.preprocessing utilities are a convenient way to create a `tf.data.Dataset` from a directory of images. For finer grain control, you can write your own input pipeline using `tf.data`. This section shows how to do just that, beginning with the file paths from the zip we downloaded earlier." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "lAkQp5uxoINu" }, "outputs": [], "source": [ "list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'), shuffle=False)\n", "list_ds = list_ds.shuffle(image_count, reshuffle_each_iteration=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "coORvEH-NGwc" }, "outputs": [], "source": [ "for f in list_ds.take(5):\n", " print(f.numpy())" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "6NLQ_VJhWO4z" }, "source": [ "The tree structure of the files can be used to compile a `class_names` list." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "uRPHzDGhKACK" }, "outputs": [], "source": [ "class_names = np.array(sorted([item.name for item in data_dir.glob('*') if item.name != \"LICENSE.txt\"]))\n", "print(class_names)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "CiptrWmAlmAa" }, "source": [ "Split the dataset into train and validation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "GWHNPzXclpVr" }, "outputs": [], "source": [ "val_size = int(image_count * 0.2)\n", "train_ds = list_ds.skip(val_size)\n", "val_ds = list_ds.take(val_size)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "rkB-IR4-pS3U" }, "source": [ "You can see the length of each dataset as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "SiKQrb9ppS-7" }, "outputs": [], "source": [ "print(tf.data.experimental.cardinality(train_ds).numpy())\n", "print(tf.data.experimental.cardinality(val_ds).numpy())" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "91CPfUUJ_8SZ" }, "source": [ "Write a short function that converts a file path to an `(img, label)` pair:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "arSQzIey-4D4" }, "outputs": [], "source": [ "def get_label(file_path):\n", " # convert the path to a list of path components\n", " parts = tf.strings.split(file_path, os.path.sep)\n", " # The second to last is the class-directory\n", " one_hot = parts[-2] == class_names\n", " # Integer encode the label\n", " return tf.argmax(one_hot)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "MGlq4IP4Aktb" }, "outputs": [], "source": [ "def decode_img(img):\n", " # convert the compressed string to a 3D uint8 tensor\n", " img = tf.image.decode_jpeg(img, channels=3)\n", " # resize the image to the desired size\n", " return tf.image.resize(img, [img_height, img_width])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "-xhBRgvNqRRe" }, "outputs": [], "source": [ "def process_path(file_path):\n", " label = get_label(file_path)\n", " # load the raw data from the file as a string\n", " img = tf.io.read_file(file_path)\n", " img = decode_img(img)\n", " return img, label" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "S9a5GpsUOBx8" }, "source": [ "Use `Dataset.map` to create a dataset of `image, label` pairs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "3SDhbo8lOBQv" }, "outputs": [], "source": [ "# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.\n", "train_ds = train_ds.map(process_path, num_parallel_calls=AUTOTUNE)\n", "val_ds = val_ds.map(process_path, num_parallel_calls=AUTOTUNE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "kxrl0lGdnpRz" }, "outputs": [], "source": [ "for image, label in train_ds.take(1):\n", " print(\"Image shape: \", image.numpy().shape)\n", " print(\"Label: \", label.numpy())" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "vYGCgJuR_9Qp" }, "source": [ "### Configure dataset for performance" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", 
"id": "wwZavzgsIytz" }, "source": [ "To train a model with this dataset you will want the data:\n", "\n", "* To be well shuffled.\n", "* To be batched.\n", "* Batches to be available as soon as possible.\n", "\n", "These features can be added using the `tf.data` API. For more details, see the [Input Pipeline Performance](../../guide/performance/datasets) guide." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "uZmZJx8ePw_5" }, "outputs": [], "source": [ "def configure_for_performance(ds):\n", " ds = ds.cache()\n", " ds = ds.shuffle(buffer_size=1000)\n", " ds = ds.batch(batch_size)\n", " ds = ds.prefetch(buffer_size=AUTOTUNE)\n", " return ds\n", "\n", "train_ds = configure_for_performance(train_ds)\n", "val_ds = configure_for_performance(val_ds)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "45P7OvzRWzOB" }, "source": [ "### Visualize the data\n", "\n", "You can visualize this dataset similarly to the one you created previously." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "UN_Dnl72YNIj" }, "outputs": [], "source": [ "image_batch, label_batch = next(iter(train_ds))\n", "\n", "plt.figure(figsize=(10, 10))\n", "for i in range(9):\n", " ax = plt.subplot(3, 3, i + 1)\n", " plt.imshow(image_batch[i].numpy().astype(\"uint8\"))\n", " label = label_batch[i]\n", " plt.title(class_names[label])\n", " plt.axis(\"off\")" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "fMT8kh_uXPRU" }, "source": [ "### Continue training the model\n", "\n", "You have now manually built a similar `tf.data.Dataset` to the one created by the `keras.preprocessing` above. You can continue training the model with it. As before, we will train for just a few epochs to keep the running time short." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "Vm_bi7NKXOzW" }, "outputs": [], "source": [ "model.fit(\n", " train_ds,\n", " validation_data=val_ds,\n", " epochs=3\n", ")" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "6cqkPenZIaHl" }, "source": [ "## Next steps\n", "\n", "This tutorial showed two ways of loading images off disk. First, you learned how to load and preprocess an image dataset using Keras preprocessing layers and utilities. Next, you learned how to write an input pipeline from scratch using tf.data. As a next step, you can learn how to add data augmentation by visiting this [tutorial](https://www.tensorflow.org/tutorials/images/data_augmentation). To learn more about tf.data, you can visit this [guide](https://www.tensorflow.org/guide/data)." ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "images.ipynb", "private_outputs": true, "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }