{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.\n", "- Author: Sebastian Raschka\n", "- GitHub Repository: https://github.com/rasbt/deeplearning-models" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sebastian Raschka \n", "\n", "CPython 3.6.8\n", "IPython 7.2.0\n", "\n", "torch 1.1.0\n" ] } ], "source": [ "%load_ext watermark\n", "%watermark -a 'Sebastian Raschka' -v -p torch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Using PyTorch Dataset Loading Utilities for Custom Dataset -- Asian Face Dataset (AFAD)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook provides an example of how to prepare a custom dataset for PyTorch's data loading utilities. More in-depth information can be found in the official documentation at:\n", "\n", "- [Data Loading and Processing Tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html)\n", "- [torch.utils.data](http://pytorch.org/docs/master/data.html) API documentation\n", "\n", "In this example, we are using the Asian Face Dataset (AFAD), which is a face image dataset with age labels [1]. There are two versions of this dataset, a smaller Lite version and the full version, which are available at\n", "\n", "- https://github.com/afad-dataset/tarball-lite\n", "- https://github.com/afad-dataset/tarball\n", "\n", "Here, we will be working with the Lite dataset, but the same code can be used for the full dataset as well -- the Lite\n", "dataset is considerably smaller than the full dataset and thus faster to process.\n", "\n", "[1] Niu, Z., Zhou, M., Wang, L., Gao, X., & Hua, G. (2016). Ordinal regression with multiple output CNN for age estimation.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4920-4928)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Imports" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import time\n", "import os\n", "import pandas as pd\n", "import numpy as np\n", "from PIL import Image\n", "from torchvision import datasets\n", "from torchvision import transforms\n", "from torch.utils.data import DataLoader\n", "from torch.utils.data import SubsetRandomSampler\n", "from torch.utils.data import Dataset\n", "import torch.nn.functional as F\n", "import torch\n", "\n", "\n", "if torch.cuda.is_available():\n", " torch.backends.cudnn.deterministic = True" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Downloading the Dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following lines of code (bash commands) will download the dataset from GitHub, join the split archive parts, and extract the images." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cloning into 'tarball-lite'...\n", "remote: Enumerating objects: 37, done.\u001b[K\n", "remote: Total 37 (delta 0), reused 0 (delta 0), pack-reused 37\u001b[K\n", "Unpacking objects: 100% (37/37), done.\n", "Checking out files: 100% (30/30), done.\n" ] } ], "source": [ "# Download\n", "!git clone https://github.com/afad-dataset/tarball-lite.git" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# Join the individual tar parts into a single archive\n", "!cat tarball-lite/AFAD-Lite.tar.xz* > tarball-lite/AFAD-Lite.tar.xz" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# Decompress and extract the archive\n", "!tar xf tarball-lite/AFAD-Lite.tar.xz" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# Get image paths\n", "rootDir = 'AFAD-Lite'\n", "\n", "files = 
[os.path.relpath(os.path.join(dirpath, file), rootDir)\n", " for (dirpath, dirnames, filenames) in os.walk(rootDir) \n", " for file in filenames if file.endswith('.jpg')]" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of images in total: 59344\n" ] } ], "source": [ "print(f'Number of images in total: {len(files)}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating Label Files (CSVs)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "d = {}\n", "\n", "d['age'] = []\n", "d['gender'] = []\n", "d['file'] = []\n", "d['path'] = []\n", "\n", "for f in files:\n", " # Paths have the form age/gender/filename, where the gender\n", " # directory is '111' for male and '112' for female\n", " age, gender, fname = f.split('/')\n", " if gender == '111':\n", " gender = 'male'\n", " else:\n", " gender = 'female'\n", " \n", " d['age'].append(age)\n", " d['gender'].append(gender)\n", " d['file'].append(fname)\n", " d['path'].append(f)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
| | age | gender | file | path |
|---|---|---|---|---|
| 0 | 39 | female | 474596-0.jpg | 39/112/474596-0.jpg |
| 1 | 39 | female | 397477-0.jpg | 39/112/397477-0.jpg |
| 2 | 39 | female | 576466-0.jpg | 39/112/576466-0.jpg |
| 3 | 39 | female | 399405-0.jpg | 39/112/399405-0.jpg |
| 4 | 39 | female | 410524-0.jpg | 39/112/410524-0.jpg |