{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Copyright (c) Microsoft Corporation. All rights reserved.\n", "\n", "Licensed under the MIT License." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Building Models for Accuracy vs. Speed\n", "\n", "The goal of this notebook is to understand how to train a model with different parameters to achieve either a highly accurate but slow during inference model, or a model with fast inference but lower accuracy.\n", "\n", "For example, in IoT settings the inferencing device has limited computational capabilities. This means we need to design our models to have a small memory footprint. In contrast, medical scenarios often require the highest possible accuracy because the cost of mis-classification could impact the well-being of a patient. In this scenario, the accuracy of the model can not be compromised. \n", "\n", "We have conducted various experiments on diverse datasets to find parameters which work well in a wide variety of settings balancing high accuracy or fast inference. In this notebook, we provide these parameters so that your initial models can be trained without any parameter tuning. For most datasets, these parameters are close to optimal. In the second part of the notebook, we provide guidelines on how to fine-tune these parameters based on how they impact the model.\n", "\n", "We recommend first training your model with the default parameters, evaluating the results, and then fine-tuning parameters to achieve better results as necessary." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of Contents:\n", "* [Training a High Accuracy, Fast Inference, or Small Size Classifier](#model)\n", " * [Choosing between two types of models](#choosing)\n", " * [Pre-processing](#preprocessing)\n", " * [Training](#training)\n", " * [Evaluation](#evaluation)\n", "* [Fine tuning our models](#finetuning)\n", " * [DNN architectures](#dnn)\n", " * [Key parameters](#key-parameters)\n", " * [Additional parameters](#other-parameters)\n", " * [Testing parameters](#testing-parameters)\n", "* [Appendix](#appendix)\n", " * [Learning rate](#appendix-learning-rate)\n", " * [Image size](#appendix-imsize)\n", " * [How we found good parameters](#appendix-good-parameters)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Training a High Accuracy, Fast Inference, or Small Size Classifier " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's first verify our fast.ai version:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'1.0.57'" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import fastai\n", "fastai.__version__" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ensure edits to libraries are loaded and plotting is shown in the notebook." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "%reload_ext autoreload\n", "%autoreload 2\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Import all the functions we need." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import sys\n", "sys.path.append(\"../../\")\n", "import os\n", "from pathlib import Path\n", "import scrapbook as sb\n", "\n", "from fastai.metrics import accuracy\n", "from fastai.vision import (\n", " models, ImageList, imagenet_stats, cnn_learner, get_transforms, open_image, partial\n", ")\n", "\n", "from utils_cv.classification.data import Urls, is_data_multilabel\n", "from utils_cv.classification.model import hamming_accuracy, TrainMetricsRecorder\n", "from utils_cv.common.data import unzip_url\n", "from utils_cv.common.gpu import db_num_workers, which_processor" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Fast.ai version = 1.0.57\n", "Torch is using GPU: Tesla V100-PCIE-16GB\n" ] } ], "source": [ "print(f\"Fast.ai version = {fastai.__version__}\")\n", "which_processor()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've set up our notebook, let's set the hyperparameters based on which model type was selected." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Choosing between types of models " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For most scenarios, computer vision practitioners want to create a high accuracy model, a fast-inference model or a small size model. Set your `MODEL_TYPE` variable to one of the following: `\"high_accuracy\"`, `\"fast_inference\"`, or `\"small_size\"`.\n", "\n", "We will use the `FridgeObjects` dataset from a [previous notebook](01_training_introduction.ipynb) again. You can replace the `DATA_PATH` variable with your own data.\n", "\n", "When choosing the batch size, remember that even mid-level GPUs run out of memory when training a deeper ResNet model with larger image resolutions. If you get an _out of memory_ error, try reducing the batch size by a factor of 2." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "tags": [ "parameters" ] }, "outputs": [], "source": [ "# Choose between \"high_accuracy\", \"fast_inference\", or \"small_size\"\n", "MODEL_TYPE = \"fast_inference\"\n", "\n", "# Path to your data\n", "DATA_PATH = unzip_url(Urls.fridge_objects_path, exist_ok=True)\n", "\n", "# Epochs to train for\n", "EPOCHS_HEAD = 4\n", "EPOCHS_BODY = 12\n", "LEARNING_RATE = 1e-4\n", "BATCH_SIZE = 16 \n", "\n", "#Set parameters based on your selected model.\n", "assert MODEL_TYPE in [\"high_accuracy\", \"fast_inference\", \"small_size\"]\n", "if MODEL_TYPE == \"high_accuracy\":\n", " ARCHITECTURE = models.resnet50\n", " IM_SIZE = 500 \n", " \n", "if MODEL_TYPE == \"fast_inference\":\n", " ARCHITECTURE = models.resnet18\n", " IM_SIZE = 300 \n", "\n", "if MODEL_TYPE == \"small_size\":\n", " ARCHITECTURE = models.squeezenet1_1\n", " IM_SIZE = 300 " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll automatically determine if your dataset is a multi-label or traditional (single-label) classification problem. To do so, we'll use the `is_data_multilabel` helper function. In order to detect whether or not a dataset is multi-label, the helper function will check to see if the datapath contains a csv file that has a column 'labels' where the values are space-delimited. 
You can inspect the function by calling `is_data_multilabel??`.\n", "\n", "This function assumes that your multi-label dataset is structured in the recommended format shown in the [multilabel notebook](02_multilabel_classification.ipynb)." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "multilabel = is_data_multilabel(DATA_PATH)\n", "metric = accuracy if not multilabel else hamming_accuracy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pre-processing \n", "\n", "JPEG decoding represents a performance bottleneck on systems with powerful GPUs which can slow down training significantly. We recommend creating a down-sized copy of the dataset if training takes too long, or if you require multiple training runs to evaluate different parameters. \n", "\n", "The following function will automate image downsizing.\n", "```python\n", "from utils_cv.classification.data import downsize_imagelist\n", "\n", "downsize_imagelist(\n", " im_list = ImageList.from_folder(Path(DATA_PATH)),\n", " out_dir = \"downsized_images\", \n", " max_dim = IM_SIZE\n", ")\n", "```\n", "\n", "Once complete, update the `DATA_PATH` variable to point to `out_dir` so that this notebook uses these resized images. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training \n", "\n", "We'll now re-apply the same steps we did in the [01_training_introduction](01_training_introduction.ipynb) notebook here." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Load the data:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "label_list = (\n", " (\n", " ImageList.from_folder(Path(DATA_PATH))\n", " .split_by_rand_pct(valid_pct=0.2, seed=10)\n", " .label_from_folder()\n", " )\n", " if not multilabel\n", " else (\n", " ImageList.from_csv(Path(DATA_PATH), \"labels.csv\", folder=\"images\")\n", " .split_by_rand_pct(valid_pct=0.2, seed=10)\n", " .label_from_df(label_delim=\" \")\n", " )\n", ")" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "data = (\n", " label_list.transform(tfms=get_transforms(), size=IM_SIZE)\n", " .databunch(bs=BATCH_SIZE, num_workers = db_num_workers())\n", " .normalize(imagenet_stats)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create the learner." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "learn = cnn_learner(data, ARCHITECTURE, metrics=metric, \n", " callback_fns=[partial(TrainMetricsRecorder, show_graph=True)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Train the last layer for a few epochs, this can use a larger rate since most of the DNN is fixed." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_losstrain_accuracyvalid_accuracytime
02.4192611.6575030.2500000.30769200:14
11.7786180.7610480.5833330.65384600:09
21.3166560.5234790.7812500.76923100:11
31.0523590.4919360.8333330.80769200:09
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYIAAAEGCAYAAABo25JHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3deXxU1f3/8deHkAWyQgIkECDsEEIIIaJWK+KCgFVaS1XUWq0t1dbWLrZSbUtra8vXtm6t1WqrdvEndanWyqZS6lI3QDBIWGUNBEgCBJKQ/fz+uEMIIYEhyTBJ5v18PHhk7txlPsno+Zx7zrnnmHMOEREJXV2CHYCIiASXEoGISIhTIhARCXFKBCIiIU6JQEQkxHUNdgCnKikpyaWlpQU7DBGRDmXFihVFzrleTe3rcIkgLS2N5cuXBzsMEZEOxcy2NbdPTUMiIiFOiUBEJMQpEYiIhLgO10cgIp1HdXU1+fn5VFRUBDuUTiMqKorU1FTCw8P9PkeJQESCJj8/n9jYWNLS0jCzYIfT4TnnKC4uJj8/n0GDBvl9npqGRCRoKioqSExMVBJoI2ZGYmLiKd9hKRGISFApCbStlvw9lQhEREKcEoGIhKTi4mKysrLIysoiOTmZfv361W9XVVX5dY0bb7yR9evXBzjSwFNnsYiEpMTERFatWgXAT3/6U2JiYrj99tuPOcY5h3OOLl2arjM/+eSTAY/zdNAdgYhIA5s2bSIjI4Obb76Z7OxsCgoKmDVrFjk5OYwePZq77767/thzzz2XVatWUVNTQ0JCArNnz2bs2LGcffbZ7N27N4i/xanRHYGItAs/+/ca8nYdbNNrpveNY85lo0/5vLy8PJ588kkeffRRAObOnUvPnj2pqalh0qRJzJgxg/T09GPOKSkpYeLEicydO5fvfve7PPHEE8yePbtNfo9A0x2BiEgjQ4YM4YwzzqjffuaZZ8jOziY7O5u1a9eSl5d33DndunVj6tSpAIwfP56tW7eernBbLWB3BGb2BPAZYK9zLqOJ/QY8CEwDyoEbnHMfBioeEWnfWlJzD5To6Oj61xs3buTBBx/kgw8+ICEhgeuuu67JcfoRERH1r8PCwqipqTktsbaFQN4RPAVMOcH+qcAw379ZwCMBjEVEpEUOHjxIbGwscXFxFBQUsHjx4mCH1OYCdkfgnHvTzNJOcMh04K/OOQe8Z2YJZpbinCsIVEwiIqcqOzub9PR0MjIyGDx4MOecc06wQ2pz5pXDAbq4lwheaaZp6BVgrnPubd/2EuAO59xxq86Y2Sy8uwYGDBgwftu2ZtdXEJEOZO3atYwaNSrYYXQ6Tf1dzWyFcy6nqeOD2Vnc1HPQTWYl59xjzrkc51xOr15NrrQmIiItFMxEkA/0b7CdCuwKUiwiIiErmIngZeB685wFlKh/QETk9Avk8NFngPOBJDPLB+YA4QDOuUeBBXhDRzfhDR+9MVCxiIhI8wI5amjmSfY74BuB+nwREfGPniwWEQlxSgQiErLOP//84x4Qe+CBB/j617/e7DkxMTEA7Nq1ixkzZjR73eXLjxsJf9znlJeX129PmzaNAwcO+Bt6m1IiEJGQNXPmTObNm3fMe/PmzWPmzBO2bAPQt29fnn/++RZ/duNEsGDBAhISElp8vdZQIhCRkDVjxgxeeeUVKisrAdi6dSu7du0iKyuLCy+8kOzsbMaMGcO//vWv487dunUrGRnes7KHDx/m6quvJjMzk6uuuorDhw/XH3fLLbfUT2E9Z84cAB566CF27drFpEmTmDRpEgBpaWkUFRUBcN9995GRkUFGRgYPPPBA/eeNGjWKr371q4wePZrJkycf8zmtoWmoRaR9WDgbdq9u22smj4Gpc5vdnZiYyIQJE1i0aBHTp09n3rx5XHXVVXTr1o0XX3yRuLg4ioqKOOuss7j88subXQ/4kUceoXv37uTm5pKbm0t2dnb9vnvuuYeePXtSW1vLhRdeSG5uLt/61re47777WLp0KUlJScdca8WKFTz55JO8//77OOc488wzmThxIj169GDjxo0888wzPP7441x55ZW88MILXHfdda3+M+mOQERCWsPmoSPNQs457rzzTjIzM7nooovYuXMne/bsafYab775Zn2BnJmZSWZmZv2+Z599luzsbMaNG8eaNWuanMK6obfffpvPfe5zREdHExMTwxVXXMFbb70FwKBBg8jKygLadqpr3RGISPtwgpp7IH32s5/lu9/9Lh9++CGHDx8mOzubp556isLCQlasWEF4eDhpaWlNTj3dUFN3C1u2bOE3v/kNy5Yto0ePHtxwww0nvc6J5n+LjIysfx0WFtZmTUO6IxCRkBYTE8P555/Pl7/85fpO4pKSEnr37k14eDhLly7lZBNdnnfeeTz99NMAfPzxx+Tm5gLeFNbR0dHEx8ezZ88eFi5cWH9ObGwshw4davJaL730EuXl5ZSVlfHiiy/y6U9/uq1+3SbpjkBEQt7MmTO54oor6puIrr32Wi677DJycnLIyspi5MiRJzz/lltu4cYbbyQzM5OsrCwmTJgAwNixYxk3bhyjR48+bgrrWbNmMXXqVFJSUli6dGn9+9nZ2dxwww311/jKV77CuHHjArriWUCnoQ6EnJwcd7LxuSLSMWga6sDoSNNQi4hIO6BEICIS4pQIRCSoOlrzdHvXkr+nEoGIBE1UVBTFxcVKBm3EOUdxcTFRUVGndJ5GDYlI0KSmppKfn09hYWGwQ+k0oqKiSE1NPaVzlAhEJGjCw8MZNGhQsMMIeWoaEhEJcUoEIiIhTolARCTEKRGIiIQ4JQIRkRCnRCAiEuKUCEREQpwSgYhIJ1FX59hXVkVVTd0pnacHykRE2rGK6lqKy6ooLq2kqLSSotIqikurKCqt9L3ne11Wxb6yKmrrHC994xyy+if4/RlKBCIip5FzjoOHayhsUJAXlzUo0EsrGxT0VRyqrGnyOt3Cw0iKjSAxOpLUHt3J6p9AYkwESTGRJMdpriERkdOqqqaOfWVVvhp7g4K8rIqiQ5UUNajR7yurorr2+En2zKBn9wgSY7zCfUxqAonREST5CvfEmEgSYyLo5fvZPaLtim8lAhGRRpxzHKqsaboJ5kgN/lAVRWWVFB2q5GBF07X2yK5dSIqJJCkmgj5xUYzuG+cV6NER9IqNJDE6sr4W36N7OF3DgtNtG9BEYGZTgAeBMOBPzrm5jfYPAP4CJPiOme2cWxDImEQkNNXUHqm1H6mte4V64ZHCvb793avBN9fhmtA93KuhR0cwKjmOpKER9bX1I4V+YnQkSbGRREeEYWZt/8s4B9XlUFEChw94PysOHN3OuAJievt9uYAlAjMLAx4GLgbygWVm9rJzLq/BYT8CnnXOPWJm6cACIC1QMYlI5+Gco7yqtkEHasOfDZtjvJ/7y6ubvE5EWJf6Qjw
xJoLhfWJJio0gqUFt/cjPntERhLdVrb2u9mgB3lRh3tz2kdd1Tf8+AKRkto9EAEwANjnnNgOY2TxgOtAwETggzvc6HtgVwHhEpJ2rrXPsL686pp39SEFev+1rdy8uq6Siuulae1xUV1/tPJJhvWM4a3DP+nb2pOgIkmK9Gn1iTCRxUV1bVms/Wa38ZNtVh058/S5dISoBouKhm+9nwoBjtxvvj0rw/nXzf8QQBDYR9AN2NNjOB85sdMxPgVfN7JtANHBRUxcys1nALIABAwa0eaAiEjiHfbX24gYFeMP29obt7vvKqqhrYrGyrl2svhM1KTaSIUnRDWrrx3ai9oyOILJrmH/B1dXC4f2BqZUDRMQ2KsgH+lGQ+7bDu3s9yKdBIBNBU79B4694JvCUc+63ZnY28Dczy3DOHZPmnXOPAY8B5OTkaE07kSCqq3McOFzdqAPVV9A37lQtraSsqrbJ68REdvXa02MiSUvqzvi0Hg1q68e2ucd3C2+61n5MrbwIDpTA7tNZK2+uIO8BkXEQ1jHG4wQyynygf4PtVI5v+rkJmALgnHvXzKKAJGBvAOMSET+UVtawZO0e3txQxN5DFfUF/JGHlhoL62L0jI7wDXmMZOCA7s12oiZGRxAV7qu1H9dWvst7vb8EdjVRC29csHeSWnkwBTIRLAOGmdkgYCdwNXBNo2O2AxcCT5nZKCAK0OKlIkFyqKKaJWv3Mn91AW9sKKSqpo6kmAj69+xOv4RuZPWPP6a2nhgT4dXiI2tJsHK6VJb4Cut9xxbWRQcgv5W18oYFdcJAPwryjlUrD6aA/YWcczVmdiuwGG9o6BPOuTVmdjew3Dn3MvA94HEz+w5es9ENzjk1/YicRgcrqlmydg/zc3fz5kav8E+Oi2JWdgyX9y5kKJ/QpaJBO/reEth+GmrlR16HSK08mKyjlbs5OTlu+fLlwQ5DpEM7WFHN63l7WLC6gDc3FNKrdi+fjtnJtF6FZIZtI75kLXao4OgJTdXK/WpeUa28vTCzFc65nKb26dsRCRElh6t5fU0BK1Yu5/C2DxnJZr4WvoPfRW2hW+0hqAF2d4FeI2HQREgZ641H7zPaK9hVK++0lAhEOquaKg7lf8y6lf+jZPMyEkrWMcW28nmrhK5Q1yUCSx6NJV/hK/SzoE86hHcLduRymikRiHQGVeWwZw3s/oiqHSsp2/YhMSUbiaWaM4ByoiiKG0HZgJl0H3YGlpJFl14jICw82JFLO6BEINLRHD4Au1dDwUewOxcKPsIVbcB8j9+UuhjW1KWxLeIzxKRlM2LcuYxMH8uALn4+ZCUhR4lApD0r3QsFuVCw6mjBv39r/e7yqD6sZxBv13yW3No09sWOJCdzDNMy+3JtanxgJjyTTkeJQKQ9cA5KdniFfUHu0UK/4cidHoOo6p3Jut7TWbyvD8/t7Mneijj69+zGtE+l8M0xKYzpp8JfTp0SgcjpVlcH+z7xFfofHS30D+/39lsXSBrhG7mTycEe6bxW3JuX1pXyzupiauscA3p254pPp3DpmBQy+sWp8JdWUSIQCaSaKihcV9+WT0Gu175fXebtD4vwhmeOutw3cmcs9E5nX3VXFq/ZzYLVBbzzSTG1dVsZmNidWecN5tIxKYzuq8Jf2o4SgUhbaTByp77Q35sHtVXe/vBob1z+uOuOjtHvNbJ+5E5xaSWL1+xhwaLVvLvZq/mnJXbna+cNZpoKfwkgJQKRljgycqe+pv8RFG2AIxPnduvhFfZn3nx0jH7PwdDl2EVNiksrWbRmGwtWF/De5n3U1jkGJUVz80Sv8E9PUeEvgadEIHIyDUfuHCn4G4zcIbavV7tPnw7JmV7BH5/a7JO4RaWVLPp4t6/wL6bOweCkaG6ZOIRpY1IYlRKrwl9OKyUCkSPqR+7kHjNGv/HIHVLGQvb1kDzW7yUBCw9VsmjNbhbkFvD+Fl/h3yuab0wayrQxKYxMVuEvwaNEIKHJ75E75x3txE0e402k5qe9hypY/PFu5q8u4IMt+6hzMKRXNLdOGsq0zBRG9FHhL+2DEoF0frXV3sidhmP093wMVaXe/rAI6J3uG7mT6bXn906HiO6n/FF7D1Ww6OPdzM8t4IOt+3AOhvaO4dYLhnHpmBSG94lR4S/tjhKBdC5V5d5InSNP4jY1cid5DGRd6yv0xx4zcqcl9h6sYKGv5r/MV/gP6x3Dty4YxqWZKQzvE9tGv5xIYCgRSMd13MidXChaf+zIneTMBiN3xvpG7rR+zp09BytYuLqABat3s2ybV/gP7xPDbRd6Nf9hKvylA1EikI7hyMid3Q3a9I8ZuZPiFfSjLjs6Rj++f5vOob+7pIKFHxewYHUBy7ftxzkY0SeWb184nEszkxnaW4W/dExKBNK++DVyJ80r7Md90WvP93PkTkvsLqlgweqjhT/AyORYvnPRcKaNSWFo75iAfK7I6aREIMHTeOTOkUL/mJE7w4+O3EnO9Nr3uyUENKyCksMsWO2N81/RoPD/3sXDmZaZwpBeKvylc1EikNNr3xb44DHY+WEzI3d8TTvJY705eFowcqcldh04XF/z/3D7AQBGpcRx+2Sv5j9Yhb90YkoEcnpUlsJbv4V3fw8Y9B0HWdcc7cRNGgFdI05rSDsPHGbh6gLmry5gpa/wT0+J4/uXjGBqRrIKfwkZSgQSWM5B7rPw+hyvnT/zarhoDsT1DUo4+fvLWbjaG+q5aodX+I/u6xX+08akMCgpOihxiQSTEoEEzs4VsHA25H8AfbPhyr9C/wmnPYz8/eUsWF3A/NW7+chX+Gf0i+MHU0YwLSOFNBX+EuKUCKTtHdoDS+6GVX+H6N4w/Q8wduZxM28G0o595fVt/h/llwAwpl88d0wZybQxyQxMVOEvcoQSgbSdmip4/xF449dQUwGf+hac932IijstH79jXznzfYV/rq/wz0yNZ/bUkUzLSGFA4unpeBbpaJQIpPWcgw2LYfGd3nDQ4VNg8j2QNDTgH729+Gjhv3qnV/iPTY3nh1NHMm1MCv17qvAXORklAmmdwg2w+Iew6XVIHAbXvgDDLgroR24rLqsv/D/eeRCAsf0TuHPaSKZmqPAXOVUBTQRmNgV4EAgD/uScm9vEMVcCPwUc8JFz7ppAxiRtpKIE3rgX3n8UwrvDJb+ECbNaNXnbiWwtOlr4r9nlFf5Z/RO4a9oopo5JJrWHCn+RlvIrEZjZC8ATwELnjszoddJzwoCHgYuBfGCZmb3snMtrcMww4IfAOc65/WYWmHkCpO3U1cLKv3udweXF3gItF/wYYnq1+UdtKSrzRvvkFpBX4BX+4wYk8KNLRzF1TAr9Erq1+WeKhCJ/7wgeAW4EHjKz54CnnHPrTnLOBGCTc24zgJnNA6YDeQ2O+SrwsHNuP4Bzbu+pBC+n2bZ3YdEd3jQQ/c+C616Avllt+hGbC0vrh3qu9RX+2Sr8RQLKr0TgnHsdeN3M4oGZwGtmtgN4HPi7c666idP6ATsabOcDZzY6ZjiAmf0Pr/nop865RY0vZGazgFkAAwYM8CdkaUsl+fDaHPj4eYjrB5
//M2R8vs1m9vyksJQFud4Tvut2HwJg/MAe/Pgz6UzNSKavCn+RgPK7j8DMEoHrgC8CK4GngXOBLwHnN3VKE++5Jj5/mO/8VOAtM8twzh045iTnHgMeA8jJyWl8DQmU6sPwzu/g7fu9Of7P+wGc+22IaP0Y/E17S+vH+R8p/HMG9uAnn0ln6phkUuJV+IucLv72EfwTGAn8DbjMOXdkTuB/mNnyZk7LB/o32E4FdjVxzHu+O4otZrYeLzEs8zN+CQTnIO9f8OqPoWQ7pE+Hi38OPQa28rKOV3ILeHjpJtbtPoSZV/jPuSydqRkpJMdHtdEvICKnwt87gt875/7T1A7nXE4z5ywDhpnZIGAncDXQeETQS3hNTU+ZWRJeU9FmP2OSQNj9MSyaDVvfgt6j4Uv/9qaBbqUV2/bzi/l5rNx+gJHJsfz0snSmjkmhT5wKf5Fg8zcRjDKzD4802ZhZD2Cmc+4PzZ3gnKsxs1uBxXjt/08459aY2d3Acufcy759k80sD6gFvu+cK27NLyQtVL4Plt4Dy5+AqHi49LeQfQOEtW6E8Y595cxdtI75uQX0jo3k3s9n8vnxqYR10QLuIu2FOXfyJnczW+Wcy2r03krn3LiARdaMnJwct3x5c61Rcspqa7zCf+k9UHkIzrgJzv8hdO/ZqsuWHK7m4aWbeOp/WwnrYsw6bzCzzhtMdKSeYRQJBjNb0VwLjr//V3YxM3O+rOF7RuD0Th4vbW/zf73ZQQvXwqCJMGUu9Elv1SWra+t4+r1tPLhkIwcOVzMjO5XvTR6h9n+RdszfRLAYeNbMHsUb+XMzcNwwT+kg9m+FxXfBulcgYSBc9TSMvLRVw0Gdc7yWt4e5C9exuaiMTw1J5K5LRzG6b3zbxS0iAeFvIrgD+BpwC96w0FeBPwUqKAmQylJ4+z545/fQJcx7IvjsWyG8dbX1j3eW8Iv5eby3eR9DekXz5y/lcMHI3lgbPWcgIoHl7wNldXhPFz8S2HAkIJyD1c/Baz/xVgkbcyVc/LNWrxJWUHKYXy9ez4srd9KjewQ/nz6aqycMIDzs9K07ICKt5+9zBMOAXwHpQH310Tk3OEBxSVvZ+SEsvMO3Stg4+MJfYEDjB7xPTVllDY++8QmPv7WZujr42nlD+PqkIcRFBWbCOREJLH+bhp4E5gD3A5Pw5h3SfX97Vr9K2NMQ3QumPwxjr2nVKmG1dY7nlu/gN69uoKi0ksvG9uUHl4zQtM8iHZy/iaCbc26Jb+TQNuCnZvYWXnKQ9qSmypsa+o17fauE3epNDdHKVcLe3FDILxesZd3uQ4wf2IPHrx/PuAE92ihoEQkmfxNBhZl1ATb6HhLbCWjK6PZmw2JY9ENvlbBhl3hrBLRylbANew5xz/y1vLGhkAE9u/OHa7OZmpGsjmCRTsTfRPBtoDvwLeDneM1DXwpUUHKKijZ6CWDTa5A4FK59HoZd3KpLFh6q5P7XNzDvg+1ER3blrmmjuP5TA4nsGtZGQYtIe3HSROB7eOxK59z3gVK8/gFpDxqvEjb5Hm+VsK4tf9avorqWP7+9hT8s3URlTR3Xn53GbRcOo0e0nh8U6axOmgicc7VmNr7hk8USZHW1XifwkruhrAiyv+hbJazlrXV1dY5/fbSTXy9az66SCian92H21JEM7hXThoGLSHvkb9PQSuBfvtXJyo686Zz7Z0CikuZtfw8W/sC3StiZcO1z3rDQVvhgyz5+MT+P3PwSMvrFcd9VWZw1OLGNAhaR9s7fRNATKAYuaPCeA5QITpeSnfD6HO/BsNi+cMWfYMyMVk0LsaWojLkL17J4zR5S4qO478qxfDarH100M6hISPH3yWL1CwRL9WFvSoi37/OahM77Ppz7nVatEnagvIoHl2zkb+9uI6JrF26fPJybzh1Mtwh1BIuEIn+fLH6S45eZxDn35TaPSDzOwdqX4dUfwYHtMOpymPxz6JHW4ktW1tTyt3e38dCSjZRW1nDVGf35zsXD6R2rmUFFQpm/TUOvNHgdBXyO45edlLayZ403LcTWt6B3Olz/Mgye2OLLOedY9PFufrVwHdv3lXPe8F7cNW0UI5Jj2zBoEemo/G0aeqHhtpk9A7wekIhCWcNVwiLjYNpvYPyNrVolbNWOA9wzP49lW/czvE8MT914BueP0LOAInJUS0uYYcCAtgwkpNXWwIonvSRQUQI5N8GkO1u1Slj+/nLuXbSelz/aRVJMBL/83BiuzEmlq2YGFZFG/O0jOMSxfQS78dYokNba/Ia3WPzePG+R+Clzoc/oFl/uYEU1f1j6CU/8bwsG3DppKDefP4QYLREpIs3wt2lIjcltbf9WryN47b8hYQBc+TcYdVmLh4PW1NbxzLIdPPDaBorLqvjcuH58/5IR9E3o1rZxi0in4+8dweeA/zjnSnzbCcD5zrmXAhlcp1RVBm/dB+/8zrdK2I98q4S1rMB2zrF0/V5+uWAdm/aWMmFQT568dBSZqQltHLiIdFb+thfMcc69eGTDOXfAzOYASgT+ql8lbA4c2gVjvgAX/Qzi+7X4knm7DnLPgjz+t6mYQUnR/PGL45mc3kczg4rIKfE3ETTVw6hGZ3/tWukNB93xPqRkwReehAFntfhyew5W8NtX1/Pcinziu4Uz57J0rj1zIBFd1REsIqfO38J8uZndBzyM12n8TWBFwKLqLEr3ehPDrfw7RCfB5b+HrGtbvEpYeVUNj725mT++sZmaujpuOmcQ37xgGPHdtUSkiLScv4ngm8CPgX/4tl8FfhSQiDqDmir44I/eFNHV5XD2N2DiDyAqvkWXq61zvPBhPr99dT17DlYybUwyd0wZycDElk8zISJyhL+jhsqA2QGOpXPY8Cos/iEUb4KhF8OUX0HSsBZf7p1NRfxi/lryCg4ytn8CD1+TTU5ay58vEBFpzN9RQ68BX3DOHfBt9wDmOecuCWRwHUrRRlh8J2x81Vsl7JrnYPjkFl9u095SfrVgLUvW7aVfQjcevDqLyzL7amZQEWlz/jYNJR1JAgDOuf1mdtJ5CsxsCvAgEAb8yTk3t5njZgDPAWc455b7GVP70HCVsK7dYPIvYMLXWrxKWHFpJQ+8vpH/98F2uoeHcceUkdx4ThpR4ZoZVEQCw99EUGdmA5xz2wHMLI0mZiNtyLfE5cPAxUA+sMzMXnbO5TU6LhZvLeT3Ty30IKur860S9jNvlbBx18KFc1q8SlhFdS1PvbOVh/+zifLqWq6ZMIBvXzSMxJjINg5cRORY/iaCu4C3zewN3/Z5wKyTnDMB2OSc2wxgZvOA6UBeo+N+DtwL3O5nLMG3/X3fKmGrIHUCXPMs9Mtu0aWcc/w7t4D/W7iOnQcOc8HI3tw5bSRDe+thbhE5PfztLF5kZjl4hf8q4F/A4ZOc1g/Y0WA7Hziz4QFmNg7o75x7xcyaTQRmNsv32QwYEMS57o5ZJSwFrnjcezCshQ9wrdi2j5+/spZVOw4wKiWOe2dkcs7QpDYOWkTkxPztLP4KcBuQipcIzgLe5dilK487rYn36puTzKwLcD9ww8k+3zn3GPAYQ
E5OzgmbpAKiugLe/Z03NURdLXz6dm+VsMiWLey+vbicuYvWsmD1bnrHRnLvjEw+n51KmDqCRSQI/G0aug04A3jPOTfJzEYCPzvJOflA/wbbqRy7mE0skAH81zclQjLwspld3m46jJ3zJoV79S7fKmGXwcU/h56DWnS5kvJqfr90I395ZxthXYzbLhzG1yYOpnuEHtIWkeDxtwSqcM5VmBlmFumcW2dmI05yzjJgmJkNAnYCVwPXHNnpm8Cuvh3EzP4L3N5uksCePFh0B2x5E3qNguv/BYPPb9Glqmvr+Pt723hwyUZKDlczIzuV700eQXK8logUkeDzNxHk+2YcfQl4zcz2c5KlKp1zNWZ2K7AYb/joE865NWZ2N7DcOfdyawIPmPJ9sPSXsPzP3iphU38NOV9u0SphzjlezdvD3IXr2FJUxjlDE7lz2ihG923ZE8YiIoFgzp1ak7uZTQTigUXOuaqARHUCOTk5bvnyANw0HLdK2Jdh0l0tXiVsdX4Jv5ifx/tb9jGkVzR3XTqKSSN6a2ZQEQkKM1vhnMtpat8pV2N7K0gAAAswSURBVHOdc2+c/KgOZsubsHA27F0DaZ/2VglLzmjRpXYdOMxvFq/nnyt30jM6gp9PH83VEwYQriUiRaSdCu1eyv3bvI7gtf+G+AFw5V9h1OUtGg5aWlnDo//9hMff2owDbp44hK9PGkJclGYGFZH2LTQTQVUZvH0//O8hsC5eE9CnvtmiVcJqaut4dnk+9722gaLSSi4b25cfXDKC/j27ByBwEZG2F1qJwDlY/Ty89hNvlbCMGXDxzyA+tUWXe2NDIb+cv5b1ew4xfmAPHr9+POMG9GjjoEVEAit0EsGulV4/wI73IGUszHgCBp7dokut332Iexas5c0NhQzo2Z0/XJvN1IxkdQSLSIfU+RNBaaE3MdzKv0P3RLjsIRh3nbdw/Cnae6iC+1/bwD+W7SAmsit3TRvF9Z8aSGRXzQwqIh1X500EbbhK2OGqWv789mYe+e8nVNbUcf3Zadx24TB6RLdsqmkRkfakcyaCja/Boh9C8UYYehFc8ivoNfyUL1NX53hp1U5+vXg9BSUVTE7vw+ypIxncq2VzDImItEedKxEUbfKtErYYeg7xpoceNrlFw0Hf21zMPfPXsnpnCRn94rj/qizOGpwYgKBFRIKrcySCioPw5r3w3qPQNcqbGO7Mm1u0StjmwlLmLlzHq3l7SImP4r4rx/LZrH5aIlJEOq2Onwi2vAXP3whlhZB1HVz4E4jtc8qX2V9WxYNLNvL397YR2bULt08ezk3nDqZbhDqCRaRz6/iJoOdg6J0OF82BfuNP+fTKmlr++s42fvefjZRW1nDVGf35zsXD6R2rmUFFJDR0/EQQ3w++dOoTmTrnWPjxbuYuXMf2feWcN7wXd00bxYhkLREpIqGl4yeCFli5fT/3zF/L8m37GdEnlr98eQITh/cKdlgiIkERUolgx75y7l28nn9/tIukmAh+dcUYvjA+la6aGVREQlhIJIKDFdU8vHQTT/5vKwbcOmkoN58/hJjIkPj1RUROqFOXhDW1dTzzwXbuf30j+8qquGJcP26/ZAR9E059llERkc6qUyYC5xz/WbeXXy5YyyeFZZw5qCc/ujSdMalaIlJEpLFOlwjW7CrhnvlreeeTYgYlRfPHL45ncnofzQwqItKMTpMIdpdU8JtX1/PCh/nEdwtnzmXpXHvmQCK6qiNYROREOnwiKK+q4Y9vbOaxNzdTU1fHV84dxK2ThhHfXUtEioj4o8Mngrc3FvHgko1MG5PMHVNGMjAxOtghiYh0KB0+EVyc3odXvnkuGf3UESwi0hIdvgHdzJQERERaocMnAhERaR0lAhGREKdEICIS4gKaCMxsipmtN7NNZja7if3fNbM8M8s1syVmNjCQ8YiIyPEClgjMLAx4GJgKpAMzzSy90WErgRznXCbwPHBvoOIREZGmBfKOYAKwyTm32TlXBcwDpjc8wDm31DlX7tt8D0gNYDwiItKEQCaCfsCOBtv5vveacxOwsKkdZjbLzJab2fLCwsI2DFFERAKZCJqa5c01eaDZdUAO8Oum9jvnHnPO5Tjncnr10kpiIiJtKZBPFucD/RtspwK7Gh9kZhcBdwETnXOVAYxHRESaEMg7gmXAMDMbZGYRwNXAMavMm9k44I/A5c65vQGMRUREmhGwROCcqwFuBRYDa4FnnXNrzOxuM7vcd9ivgRjgOTNbZWYvN3M5EREJkIBOOuecWwAsaPTeTxq8viiQny8iIienJ4tFREKcEoGISIhTIhARCXFKBCIiIU6JQEQkxCkRiIiEOCUCEZEQp0QgIhLilAhEREKcEoGISIhTIhARCXFKBCIiIU6JQEQkxCkRiIiEOCUCEZEQp0QgIhLilAhEREKcEoGISIhTIhARCXFKBCIiIU6JQEQkxCkRiIiEOCUCEZEQp0QgIhLilAhEREKcEoGISIhTIhARCXFKBCIiIS6gicDMppjZejPbZGazm9gfaWb/8O1/38zSAhmPiIgcL2CJwMzCgIeBqUA6MNPM0hsddhOw3zk3FLgf+L9AxSMiIk0L5B3BBGCTc26zc64KmAdMb3TMdOAvvtfPAxeamQUwJhERaSSQiaAfsKPBdr7vvSaPcc7VACVAYuMLmdksM1tuZssLCwsDFK6ISGgKZCJoqmbvWnAMzrnHnHM5zrmcXr16tUlwIiLiCWQiyAf6N9hOBXY1d4yZdQXigX0BjElERBoJZCJYBgwzs0FmFgFcDbzc6JiXgS/5Xs8A/uOcO+6OQEREAqdroC7snKsxs1uBxUAY8IRzbo2Z3Q0sd869DPwZ+JuZbcK7E7g6UPGIiEjTApYIAJxzC4AFjd77SYPXFcAXAhmDiIicmJ4sFhEJcUoEIiIhTolARCTEKRGIiIQ4JQIRkRCnRCAiEuKUCEREQpwSgYhIiFMiEBEJcUoEIiIhTolARCTEKRGIiIQ462izPptZIbCt0dtJQFEQwpGW0ffVsej76lia+74GOueaXNmrwyWCppjZcudcTrDjEP/o++pY9H11LC35vtQ0JCIS4pQIRERCXGdJBI8FOwA5Jfq+OhZ9Xx3LKX9fnaKPQEREWq6z3BGIiEgLKRGIiIS4Dp8IzGyKma03s01mNjvY8UjzzOwJM9trZh8HOxY5OTPrb2ZLzWytma0xs9uCHZM0z8yizOwDM/vI9339zO9zO3IfgZmFARuAi4F8YBkw0zmXF9TApElmdh5QCvzVOZcR7HjkxMwsBUhxzn1oZrHACuCz+v+rfTIzA6Kdc6VmFg68DdzmnHvvZOd29DuCCcAm59xm51wVMA+YHuSYpBnOuTeBfcGOQ/zjnCtwzn3oe30IWAv0C25U0hznKfVthvv++VXT7+iJoB+wo8F2PvoPVaTNmVkaMA54P7iRyImYWZiZrQL2Aq855/z6vjp6IrAm3uu4bV0i7ZCZxQAvAN92zh0MdjzSPOdcrXMuC0gFJpiZX02wHT0R5AP9G2ynAruCFItIp+Nra34BeNo5989gxyP+cc4dAP4L
TPHn+I6eCJYBw8xskJlFAFcDLwc5JpFOwdf5+GdgrXPuvmDHIydmZr3MLMH3uhtwEbDOn3M7dCJwztUAtwKL8TqynnXOrQluVNIcM3sGeBcYYWb5ZnZTsGOSEzoH+CJwgZmt8v2bFuygpFkpwFIzy8WrJL/mnHvFnxM79PBRERFpvQ59RyAiIq2nRCAiEuKUCEREQpwSgYhIiFMiEBEJcUoEIj5mVttgmOSqtpzN1szSNOuqtFddgx2ASDty2Pd4vkhI0R2ByEmY2VYz+z/fXO8fmNlQ3/sDzWyJmeX6fg7wvd/HzF70zQv/kZl9ynepMDN73DdX/Ku+pz8xs2+ZWZ7vOvOC9GtKCFMiEDmqW6Omoasa7DvonJsA/B54wPfe7/HWVsgEngYe8r3/EPCGc24skA0cedp9GPCwc240cAD4vO/92cA433VuDtQvJ9IcPVks4mNmpc65mCbe3wpc4Jzb7JuEbbdzLtHMivAWbqn2vV/gnEsys0Ig1TlX2eAaaXiP/A/zbd8BhDvnfmFmi/AW7HkJeKnBnPIip4XuCET845p53dwxTals8LqWo310lwIPA+OBFWamvjs5rZQIRPxzVYOf7/pev4M34y3AtXhLAwIsAW6B+oVC4pq7qJl1Afo755YCPwASgOPuSkQCSTUPkaO6+VZ3OmKRc+7IENJIM3sfr/I00/fet4AnzOz7QCFwo+/924DHfLOr1uIlhYJmPjMM+LuZxeMttHS/by55kdNGfQQiJ+HrI8hxzhUFOxaRQFDTkIhIiNMdgYhIiNMdgYhIiFMiEBEJcUoEIiIhTolARCTEKRGIiIS4/w/Dza7UtzJ6JwAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "learn.fit_one_cycle(EPOCHS_HEAD, 10 * LEARNING_RATE)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unfreeze the layers." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "learn.unfreeze()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fine-tune the network for the remaining epochs." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_losstrain_accuracyvalid_accuracytime
00.3672720.4460860.8437500.80769200:11
10.4012980.3813880.8541670.80769200:11
20.3266900.3095910.9062500.84615400:11
30.2806590.3046250.9062500.88461500:11
40.2688480.2498570.9270830.84615400:11
50.2291080.1925540.9687500.92307700:11
60.2086330.2244820.9791670.92307700:10
70.1914110.2065680.9687500.92307700:10
80.1698210.2336920.9791670.92307700:09
90.1571890.2478920.9895830.88461500:05
100.1464950.2537300.9791670.92307700:04
110.1315430.2679200.9895830.88461500:04
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYIAAAEGCAYAAABo25JHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3df3wV9Z3v8dcnv8NvSBCUgAHFSkDEGBFXq7i0FvoDrOu2cnVvobWsbl103e6WdvvYtt7uXXfrw6qPWru0BduulVpbW27XCq7LXWstSLCWSiiFiygRhPBDfoYkJ+dz/5iT5CScJIeQOSfJvJ+PRx45Z2bO5DOQfN8z3/nOjLk7IiISXTnZLkBERLJLQSAiEnEKAhGRiFMQiIhEnIJARCTi8rJdwJkqLS318vLybJchItKvbNq06YC7j041r98FQXl5OdXV1dkuQ0SkXzGzNzubp64hEZGIUxCIiEScgkBEJOL63TkCERk4mpqaqK2t5dSpU9kuZcAoKiqirKyM/Pz8tD+jIBCRrKmtrWXo0KGUl5djZtkup99zdw4ePEhtbS0TJ05M+3PqGhKRrDl16hQlJSUKgV5iZpSUlJzxEZaCQESySiHQu3ry76kgEBGJOAWBiETSwYMHmTFjBjNmzGDs2LGMGzeu9X1jY2Na61i8eDHbtm0LudLw6WSxiERSSUkJr732GgBf/vKXGTJkCJ/97GfbLePuuDs5Oan3mVeuXBl6nZmgIwIRkSQ7duxg2rRp3HHHHVRWVrJ3716WLFlCVVUVU6dO5b777mtd9pprruG1114jFosxYsQIli1bxqWXXspVV13F/v37s7gVZ0ZHBCLSJ3zl/2yhZs/RXl1nxXnD+NJHpp7x52pqali5ciXf+ta3ALj//vsZNWoUsViM66+/nptvvpmKiop2nzly5AjXXXcd999/P/feey8rVqxg2bJlvbIdYdMRgYhIBxdccAFXXHFF6/snn3ySyspKKisr2bp1KzU1Nad9pri4mHnz5gFw+eWXs2vXrkyVe9ZCOyIwsxXAh4H97j4txXwDHgY+CJwEFrn7q2HVIyJ9W0/23MMyePDg1tfbt2/n4Ycf5pVXXmHEiBHcdtttKcfpFxQUtL7Ozc0lFotlpNbeEOYRwePA3C7mzwMmJ76WAI+FWIuISI8cPXqUoUOHMmzYMPbu3cuaNWuyXVKvC+2IwN1fNLPyLhZZAHzf3R1Yb2YjzOxcd98bVk0iImeqsrKSiooKpk2bxqRJk7j66quzXVKvs6AdDmnlQRD8opOuoV8A97v7S4n3LwCfc/fTnjpjZksIjhqYMGHC5W++2enzFUSkH9m6dStTpkzJdhkDTqp/VzPb5O5VqZbP5qihVNdBp0wld18OLAeoqqoKL7lEZEAIxv9DszvN8aSvxI5vrhm5OcFXjrV879+3u4i7E09sZ2Mszq+213G0PsbRU0189LJxXX42m0FQC4xPel8G7MlSLSLSh3TWkMe9faN++jxa551pb4dh5OZATo61C4rWL7NgXsf5ielnGyTJDXlzh+1snR5v/28ST1o2nrS9+4818OknXml9f82FpV3+7GwGwWrgLjNbBVwJHNH5ARmo4nHnRGOMo6diHK1vCr5aXp9q4kRDjOKCPIYV5TGsOJ9hRfkMK85LfM9naGEeOTn9Z2/V3WmIxVu370hiz7Tjdl93TiNvHjyRaMho1/h115DntDTMrY1yDgW5QWOem9NxXlujDaTd0Maa4ikb2lSCIOk8TAxoTmxjqgY/vfW3X3d+fk7r65ykn9V8qIAf33FV6+/R6CGFXa47zOGjTwKzgVIzqwW+BOQDuPu3gGcJho7uIBg+ujisWkTOVncN+dF2DV3S+8TrY6eaiJ9Fp6YZDClsC4bOAqO3giTdhvxoh+nHkqY3Nse7/BmFeTlc+eGxnGqKtzZgBbk5bY1dJ3veya8zqdM99i7CpKkp3noU485pDXlhbuqG/Gy7rt7Jz2VK+ai0ty3MUUMLu5nvwGfC+vki6XJ3/l/dcX6z8xCv1x7h3frGHjXkQUPd1hCfO7yI94wZ2k3DHbwfVJBHfWNzotFN3ch2bHx3HzrJscT0Yw1dj1nvLEiGFubR0BzvUUNekJfD8KR1jSjOZ8KoQWmF1NCiPIryc9m6dSvvGTv0TP/LsiLHjJxcIy+3Z5939z57DkK3mJDISW741+88yIadBzlwPLjbZMngAkqHFDKsOI+xw4q4aMzQbhvxloYtL/fsLsspyMth+KD8difO0tUcd46fivUoSArzcxhWlM/w4nzGjyxO+0ijKL+HLWJE9dUQAAWBREDQ8J9g/c6Dia9DHDjeAMDYYUW8d/JoZk0axaxJJUwYNahP/8F2JjfHGD4ov8dBElWzZ8/m85//PB/4wAdapz300EP88Y9/5Jvf/GbKzwwZMoTjx4+zZ88eli5dytNPP51yvQ888ABVVSlHa7b+nCVLljBo0CAAPvjBD/LDH/6QESNGnOVWnTkFgQw43TX811xYwqxJJVx1Qf9t+KV3LFy4kFWrVrULglWrVvG1r32t28+ed955KUMgXQ899BC33XZbaxA8++yzPV7X2VIQSL/XVcM/Zlhha8M/a1IJ55eo4Zc2N998M1/84hdpaGigsLCQXbt2sWfPHmbMmMGcOXM4fPgwTU1NfPWrX2XBggXtPrtr1y4+/OEP8/rrr1NfX8/ixYupqalhypQp1NfXty535513snHjRurr67n55pv5yle+wiOPPMKePXu4/vrrKS0tZd26dZSXl1NdXU1paSkPPvggK1asAOD222/nnnvuYdeuXcybN49rrrmGl19+mXHjxvHzn/+c4uLis/53UBBIv+Pu7DxworXRX7/zIHXH1PD3e79cBu/8vnfXOfYSmHd/p7NLSkqYOXMmzz33HAsWLGDVqlV8/OMfp7i4mGeeeYZhw4Zx4MABZs2axfz58zv9XXrssccYNGgQmzdvZvPmzVRWVrbO+6d/+idGjRpFc3Mzc+bMYfPmzSxdupQHH3yQdevWUVrafoz/pk2bWLlyJRs2bMDdufLKK7nuuusYOXIk27dv58knn+Tb3/42H/vYx/jJT37Cbbfddtb/TAoC6fO6avjPGVrIn1zQ1vCXq+GXM9TSPdQSBCtWrMDd+cIXvsCLL75ITk4Ob7/9Nvv27WPs2LEp1/Hiiy+ydOlSAKZPn8706dNb5z311FMsX76cWCzG3r17qampaTe/o5deeomPfvSjrXdAvemmm/jVr37F/PnzmThxIjNmzAB691bXCgLpc+Jx542Davgjp4s99zDdeOON3Hvvvbz66qvU19dTWVnJ448/Tl1dHZs2bSI/P5/y8vKUt55Olur38I033uCBBx5g48aNjBw5kkWLFnW7nq4upCssbLswLDc3t10X1NlQEEjGxePOgeMN7D5cT+3hk9Q
erk98neTtw/XUvltPYywYw37O0EKumtR2cjfUhv/UEXjjRYj3n/vI92vDJ4Cfff/22RoyZAizZ8/mk5/8JAsXBpc/HTlyhHPOOYf8/HzWrVtHdze6vPbaa3niiSe4/vrref3119m8eTMQ3MJ68ODBDC/OZ9+bO/jlL3/J7NmzARg6dCjHjh07rWvo2muvZdGiRSxbtgx355lnnuEHP/hB7294EgWB9LozaehblAwuoGxkMVPOHcb7K8YwsXQwMyeOYmLp4PD3+I/uhQ2PQfVKaOjdRyVKN+b+GA7kQ8Hgtq+czDdLCxcu5KabbmLVqlUA3HrrrXzkIx+hqqqKGTNmcPHFF3f5+TvvvJPFixczffp0Zlx6KTOrLofjB7h02oVcdvH5TL3kEiZNKOPqy6fC0T3w7lssWXQr8+bN5dxzz2PdunWt66qsrGTRokXMnDkTCE4WX3bZZaE+8SzU21CHoaqqyqurT7tTtWSQu1N3vKFdA197uJ7dh7pv6MtGDkp8b3s9bmQxgwqysE9Stw1efgR+9yPwZqi4Ea64HQalf2m+9JDH4cAf2dowhinjS6DpZNu8vKJEKAwJvucWBJdG91WxRmg8Dk0noOEExJK6a/KK2wdc08lg2caTwe8cBNNbtrVgMOQXg53dxYn96TbUkgZ3p2bv0dY+8kw7eiqWcq++oZs9+j7R0HfmrQ3w64dh238Ejc7li+Cqz8CoidmuLFrGTIWtW2H0eyDenGgkTwQNZf27cPJgsFxOXvtg6IWGssfcoak+UWei1nhTMM9yIH8QDBmb1Ph3uPq6aFjbemKnEqGQWNepd9uvp3WbB4V+lNSH/jqlRVNznI1vHGJtzT7WbnmHPUe6PrmUCaMSDf3FY4fyviljGN+XG/pU4nHYvgZeegh2r4fikXDd52DmEhjc9S16JQNycqFwaPAFnTSUR4J5mWwo483Bz27Z2286ERzNAOQkd2kNSQRUmkcuZsHy+cUweHQwrbmxfcAc3wfsC+blFbU/aujlo6Q+/tcbHScbY7z4xwOsrXmHF7bu50h9E4V5Obx38mjuef9FXDB6SFaOjocU5jFuRDGDC/vpr0qsEX7/FPz6ETiwDYaPh7n/ApV/EfxBSdalvBlbthrKWGPQ2DeegIbjp3fzFI8KrTEmtwCKC4KdFEg6SkqEYf0hOHkgmJfT4bxK0lFST7r7++lf98Bw6EQj/7l1H2u37ONX2+toiMUZXpzPnIvP4YapY7j2otF9f0+7rzp1FDY9Duu/Ccf2wphL4KbvwNQbITc/29VJQlFREQcPHqSkpKT7QQGdNpQt3UnpNZStWrp5kvf2m4ObD57ezRN+98xpUh0lday3Q3eS5w/i4IlmigrO7IaAamUybPehk61dPht3HSLucN7wIhbOnMANFWO4YuIo8s/yLpa9ovEEvL0p+MMYVwXFmb8RVo8cewfWPwbVK4IRQBOvhQXfgAvm9O0TjhFVVlZGbW0tdXV1Z78yzw/662MN0HwCYvvahgJbThAkeYlx+LGG4He7tZsnr21+XmEQJBYD3k189UU5EM8LjmKa6yF2CJobKDqyk7JX/xVGjoPxV8KEWXDR3C7XpCAIWcvJ3rVb9rG2Zh9b9wbDEy8eO5S7rr+QG6aOZep5w7J/UdTRvUHf+Vsbgu97N7eNasDgnCltv1Tjr4SR5X2rYT2wPTECaFXwxz9lPlx9N4yr7P6zkjX5+flMnBjiSfojb7f/vX7n9aDxHzO1/e/ziAl96/e5pxqOQW0hFN0Bb62HzT+C6u/C39R0+TENHw1BrDlO9ZuHWbPlHdZu2cfb79ZjBlXnj+QDU8fy/ooxnF+Sxf7peBzqtga/KLs3BN/fTVwwk1cM4y6HCVfC+FnB3lHLMrUb28bZDxnbtsyEK2Hs9Ox0uezeCL9+CP7wH0GtM24NRgCVXJD5WqTvazgeBEHL6J2BrjkGdX+AsdO6HD6qIOgl9Y3N/Gp7HWtr9vHC1n0cPtlEQV4O772wlBumjmHOlDGUdvPc0NA0ngy6ed5aH+wV7d4IDYkRGIPPCRryCVcFjfrYSyCvIPV64s2wf2vbHtZb6+HIW8G8/EFBgLTsZZVdEV53UjwO29cGQ0DfehmKRsDMT8PMv4Qho8P5mSL9nIIgJIdPNPLCH/azdss7vLi9jlNNcYYV5TFnyhhuqAhO9mZltM2xd9rv7b+zua2vdPSU9nvyIyee3SHx0T0dftbvE11KBudUtP9ZI84/u58Va4TXnw5GANVthWFlwd5/5f+EwiE9X69IBCgIelHt4ZM8XxOM9Hll1yGa4865w4u4oWIMN0wdy8xMn+yNx4NDv9a99N8kdfMUnb6XHvZVsw3HTz/6aDwWzGvpTppwVVDT2EvS605qOAabvheMADr6dhAwV98N0/5MI4BE0qQgSKExFufYqdTPck39PsahE428ceAEABeNGcINFWO5YeoYLhk3PHMne1u6eVoa/tpX2i60GTw6cfJrVvB97PTOu3kyJd4M+2vaHzUc2R3Ma+lOaql5/BVQNLzts8f2wYZvwcbvBl1Z5e8NAuDC9w2ME3siGTQgg6AnDXny9Pqm5i5/Tm6OMTzFQ7tnjB/B+yvGMrE0Qyd7j+3rMJrnd0ndPBe3H/kwalL/aCBPG8nx+8QwPmsbzRFvCu4B1NwIUz4SBEBZ589/FZGuDaggmHZppTfP/989bsiD711ND94X5+dmfkhnPB5c/frW+raulcO7gnl5RXBeZVvXSia6eTKl4Ti8Xd223bXVQQDMWAh/slQjgER6wYC66VxerrFw1oS2hrtDIz68OIsN+Zlqqk/qT98Au19pu1JwUGmwp3/F7UG3ybmXZr+bJyyFQ2DS7OALgu6kWENwNaeIhK7/BUGO8Q8fqsh2GT1zfH/7vvK9v2u7c2Hpe6Biflv/fn/p5glDTq5CQCSD+l0Q9Bvx4H7rQV944uvwG8G83MLgiterPtPWvz9QunlEpN9REPSWpnp4+9Wkk6Abkrp5SoI9/apPBg3/uZe23fNERCTLFAQ9dbyubW9/9wbY81pSN89FwUiXlmGRJRdEt5tHRPq8UIPAzOYCDwO5wHfc/f4O8ycA3wNGJJZZ5u7PdrnSeAw2/Fs4BXcnHgtuWrV7PRzaGUzLLQhG81z1V4mx8FfC4JLs1Cci0gOhBYGZ5QKPAu8HaoGNZrba3ZNvg/dF4Cl3f8zMKoBngfIuVxyPwS//Ppyi01E8KtjTr/xEoptnBuQXZa8eEZGzFOYRwUxgh7vvBDCzVcACIDkIHGi5DeBwYE+3a80rhL9/o3crPRPFI9XNIyIDSphBMA7YnfS+FriywzJfBtaa2V8Dg4H3pVqRmS0BlgBMmDBBI2xERHpRmHdHS7Xb3PEy5oXA4+5eBnwQ+IFZx+fJgbsvd/cqd68aPVq3GRYR6U1hBkEtMD7pfRmnd/18CngKwN1/AxQBpSHWJCIiHYQZBBuByWY20cwKgFuA1R2WeQuYA2BmUwiCoBceXioiIukKLQjcPQbcBawBthKMDtpiZveZ2fzEYn8LfNrMfgc8CSzy/nYXPB
GRfi7U6wgS1wQ822HaPya9rgGuDrMGERHpWgYfpSUiIn2RgkBEJOIUBCIiEacgEBGJOAWBiEjEKQhERCJOQSAiEnEKAhGRiFMQiIhEnIJARCTiFAQiIhGnIBARiTgFgYhIxCkIREQiTkEgIhJxCgIRkYhTEIiIRJyCQEQk4hQEIiIRpyAQEYk4BYGISMQpCEREIk5BICIScQoCEZGIUxCIiEScgkBEJOIUBCIiERdqEJjZXDPbZmY7zGxZJ8t8zMxqzGyLmf0wzHpEROR0aQWBmf3EzD5kZmkHh5nlAo8C84AKYKGZVXRYZjLweeBqd58K3JN25SIi0ivSbdgfA/4HsN3M7jezi9P4zExgh7vvdPdGYBWwoMMynwYedffDAO6+P816RESkl6QVBO7+n+5+K1AJ7AKeN7OXzWyxmeV38rFxwO6k97WJackuAi4ys1+b2Xozm5tqRWa2xMyqzay6rq4unZJFRCRNZ9LVUwIsAm4Hfgs8TBAMz3f2kRTTvMP7PGAyMBtYCHzHzEac9iH35e5e5e5Vo0ePTrdkERFJQ146C5nZT4GLgR8AH3H3vYlZPzKz6k4+VguMT3pfBuxJscx6d28C3jCzbQTBsDHN+kVE5Cyle0TwDXevcPd/TgoBANy9qpPPbAQmm9lEMysAbgFWd1jmZ8D1AGZWStBVtDPt6kVE5KylGwRTkrtszGykmf1VVx9w9xhwF7AG2Ao85e5bzOw+M5ufWGwNcNDMaoB1wN+5+8Ez3goREekxc+/YbZ9iIbPX3H1Gh2m/dffLQqusE1VVVV5d3VlvlIiIpGJmmzrrwUn3iCDHzFpP/iauESjojeJERCS70jpZTNCF85SZfYtg5M8dwHOhVSUiIhmTbhB8DvhL4E6CYaFrge+EVZSIiGROWkHg7nGCq4sfC7ccERHJtHSvI5gM/DPBPYOKWqa7+6SQ6hIRkQxJ92TxSoKjgRjBuP/vE1xcJiIi/Vy6QVDs7i8QDDd9092/DPxpeGWJiEimpHuy+FTiFtTbzewu4G3gnPDKEhGRTEn3iOAeYBCwFLgcuA34RFhFiYhI5nR7RJC4eOxj7v53wHFgcehViYhIxnR7RODuzcDlyVcWi4jIwJHuOYLfAj83sx8DJ1omuvtPQ6lKREQyJt0gGAUcpP1IIQcUBCIi/Vy6VxbrvICIyACV7pXFKzn9MZO4+yd7vSIREcmodLuGfpH0ugj4KKc/dlJERPqhdLuGfpL83syeBP4zlIpERCSj0r2grKPJwITeLERERLIj3XMEx2h/juAdgmcUiIhIP5du19DQsAsREZHsSKtryMw+ambDk96PMLMbwytLREQyJd1zBF9y9yMtb9z9XeBL4ZQkIiKZlG4QpFou3aGnIiLSh6UbBNVm9qCZXWBmk8zs68CmMAsTEZHMSDcI/hpoBH4EPAXUA58JqygREcmcdEcNnQCWhVyLiIhkQbqjhp43sxFJ70ea2ZrwyhIRkUxJt2uoNDFSCAB3P0wazyw2s7lmts3MdphZp0cUZnazmbmZVaVZj4iI9JJ0gyBuZq23lDCzclLcjTRZ4hGXjwLzgApgoZlVpFhuKMGzkDekWYuIiPSidIeA/gPwkpn9d+L9tcCSbj4zE9jh7jsBzGwVsACo6bDc/wL+FfhsmrWIiEgvSuuIwN2fA6qAbQQjh/6WYORQV8YBu5Pe1yamtTKzy4Dx7p58m+vTmNkSM6s2s+q6urp0ShYRkTSle9O524G7gTLgNWAW8BvaP7rytI+lmNbanWRmOcDXgUXd/Xx3Xw4sB6iqquqyS0pERM5MuucI7gauAN509+uBy4Duds1rgfFJ78to/zCbocA04P+a2S6CcFmtE8YiIpmVbhCccvdTAGZW6O5/AN7TzWc2ApPNbKKZFQC3AKtbZrr7EXcvdfdydy8H1gPz3b36jLdCRER6LN2TxbWJ6wh+BjxvZofp5lGV7h4zs7uANUAusMLdt5jZfUC1u6/u6vMiIpIZ5n5mXe5mdh0wHHjO3RtDqaoLVVVVXl2tgwYRkTNhZpvcPWXX+xnfQdTd/7v7pUREpL/o6TOLRURkgFAQiIhEnIJARCTiFAQiIhGnIBARiTgFgYhIxCkIREQiTkEgIhJxCgIRkYhTEIiIRJyCQEQk4hQEIiIRpyAQEYk4BYGISMQpCEREIk5BICIScQoCEZGIUxCIiEScgkBEJOIUBCIiEacgEBGJOAWBiEjEKQhERCJOQSAiEnEKAhGRiFMQiIhEXKhBYGZzzWybme0ws2Up5t9rZjVmttnMXjCz88OsR0REThdaEJhZLvAoMA+oABaaWUWHxX4LVLn7dOBp4F/DqkdERFIL84hgJrDD3Xe6eyOwCliQvIC7r3P3k4m364GyEOsREZEUwgyCccDupPe1iWmd+RTwy1QzzGyJmVWbWXVdXV0vligiImEGgaWY5ikXNLsNqAK+lmq+uy939yp3rxo9enQvligiInkhrrsWGJ/0vgzY03EhM3sf8A/Ade7eEGI9IiKSQphHBBuByWY20cwKgFuA1ckLmNllwL8B8919f4i1iIhIJ0ILAnePAXcBa4CtwFPuvsXM7jOz+YnFvgYMAX5sZq+Z2epOViciIiEJs2sId38WeLbDtH9Mev2+MH++iIh0T1cWi4hEnIJARCTiFAQiIhGnIBARiTgFgYhIxCkIREQiTkEgIhJxCgIRkYhTEIiIRJyCQEQk4hQEIiIRpyAQEYk4BYGISMQpCEREIk5BICIScQoCEZGIUxCIiEScgkBEJOIUBCIiEacgEBGJOAWBiEjEKQhERCJOQSAiEnEKAhGRiFMQiIhEnIJARCTiFAQiIhGnIBARibhQg8DM5prZNjPbYWbLUswvNLMfJeZvMLPyMOsREZHThRYEZpYLPArMAyqAhWZW0WGxTwGH3f1C4OvAv4RVj4iIpBbmEcFMYIe773T3RmAVsKDDMguA7yVePw3MMTMLsSYREekgzCAYB+xOel+bmJZyGXePAUeAko4rMrMlZlZtZtV1dXUhlSsiEk1hBkGqPXvvwTK4+3J3r3L3qtGjR/dKcSIiEggzCGqB8Unvy4A9nS1jZnnAcOBQiDWJiEgHYQbBRmCymU00swLgFmB1h2VWA59IvL4Z+C93P+2IQEREwpMX1ordPWZmdwFrgFxghbtvMbP7gGp3Xw18F/iBme0gOBK4Jax6REQktdCCAMDdnwWe7TDtH5NenwL+PMwaRESka7qyWEQk4hQEIiIRpyAQEYk4BYGISMQpCEREIk5BICIScQoCEZGIUxCIiEScgkBEJOIUBCIiEacgEBGJOAWBiEjEWX+767OZ1QFvAqXAgSyXk0na3oEvatus7c2s89095ZO9+l0QtDCzanevynYdmaLtHfiits3a3r5DXUMiIhGnIBARibj+HATLs11Ahml7B76obbO2t4/ot+cIRESkd/TnIwIREekFCgIRkYjrd0FgZnPNbJuZ7TCzZdmuJ0xmNt7M1pnZVjPbYmZ3Z7umTDGzXDP7rZn9Itu1hM3MRpjZ02b2h8T/9VXZrilMZvY3i
d/n183sSTMrynZNvc3MVpjZfjN7PWnaKDN73sy2J76PzGaNyfpVEJhZLvAoMA+oABaaWUV2qwpVDPhbd58CzAI+M8C3N9ndwNZsF5EhDwPPufvFwKUM4O02s3HAUqDK3acBucAt2a0qFI8DcztMWwa84O6TgRcS7/uEfhUEwExgh7vvdPdGYBWwIMs1hcbd97r7q4nXxwgaiHHZrSp8ZlYGfAj4TrZrCZuZDQOuBb4L4O6N7v5udqsKXR5QbGZ5wCBgT5br6XXu/iJwqMPkBcD3Eq+/B9yY0aK60N+CYBywO+l9LRFoGAHMrBy4DNiQ3Uoy4iHg74F4tgvJgElAHbAy0RX2HTMbnO2iwuLubwMPAG8Be4Ej7r42u1VlzBh33wvBTh5wTpbradXfgsBSTBvw41/NbAjwE+Aedz+a7XrCZGYfBva7+6Zs15IheUAl8Ji7XwacoA91GfS2RL/4AmAicB4w2Mxuy25V0t+CoBYYn/S+jAF4WJnMzPIJQuAJd/9ptuvJgKuB+Wa2i6Dr70/N7N+zW1KoaoFad2850nuaIIKyOwwAAALVSURBVBgGqvcBb7h7nbs3AT8F/iTLNWXKPjM7FyDxfX+W62nV34JgIzDZzCaaWQHBSabVWa4pNGZmBH3HW939wWzXkwnu/nl3L3P3coL/3/9y9wG7x+ju7wC7zew9iUlzgJoslhS2t4BZZjYo8fs9hwF8cryD1cAnEq8/Afw8i7W0k5ftAs6Eu8fM7C5gDcFogxXuviXLZYXpauAvgN+b2WuJaV9w92ezWJP0vr8Gnkjs3OwEFme5ntC4+wYzexp4lWBU3G/pw7de6CkzexKYDZSaWS3wJeB+4Ckz+xRBIP559ipsT7eYEBGJuP7WNSQiIr1MQSAiEnEKAhGRiFMQiIhEnIJARCTiFAQiCWbWbGavJX312hW+ZlaefCdKkb6kX11HIBKyenefke0iRDJNRwQi3TCzXWb2L2b2SuLrwsT0883sBTPbnPg+ITF9jJk9Y2a/S3y13EIh18y+nbgX/1ozK04sv9TMahLrWZWlzZQIUxCItCnu0DX08aR5R919JvANgrujknj9fXefDjwBPJKY/gjw3+5+KcF9g1qufp8MPOruU4F3gT9LTF8GXJZYzx1hbZxIZ3RlsUiCmR139yEppu8C/tTddyZuAviOu5eY2QHgXHdvSkzf6+6lZlYHlLl7Q9I6yoHnEw8lwcw+B+S7+1fN7DngOPAz4GfufjzkTRVpR0cEIunxTl53tkwqDUmvm2k7R/chgifvXQ5sSjywRSRjFAQi6fl40vffJF6/TNtjFm8FXkq8fgG4E1qfvTyss5WaWQ4w3t3XETyMZwRw2lGJSJi05yHSpjjpLq8QPEe4ZQhpoZltINh5WpiYthRYYWZ/R/CUsZa7ht4NLE/cZbKZIBT2dvIzc4F/N7PhBA9e+noEHlUpfYzOEYh0I3GOoMrdD2S7FpEwqGtIRCTidEQgIhJxOiIQEYk4BYGISMQpCEREIk5BICIScQoCEZGI+/9BaoJ3LE0i+QAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "learn.fit_one_cycle(EPOCHS_BODY, LEARNING_RATE)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluation " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In [01_training introduction](01_training_introduction.ipynb), we demonstrated evaluating a CV model using the performance metrics for precision, recall and ROC. In this section, we will evaluate our model using the following characteristics:\n", "- accuracy (performance)\n", "- inference speed\n", "- parameter export size / memory footprint required" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Performance \n", "To keep things simple, we just look at the final evaluation metric on the validation set." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "accuracy on validation set: 0.88\n" ] } ], "source": [ "_, validation_accuracy = learn.validate(learn.data.valid_dl, metrics=[metric])\n", "print(f\"{metric.__name__} on validation set: {float(validation_accuracy):2.2f}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inference speed\n", "\n", "Time model inference speed." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "im_folder = learn.data.classes[0] if not multilabel else 'images'\n", "im = open_image(f\"{(Path(DATA_PATH)/im_folder).ls()[0]}\")" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "23.1 ms ± 1.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n" ] } ], "source": [ "%%timeit\n", "learn.predict(im)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Memory footprint\n", "\n", "Export the model to inspect the size of the model file." 
] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "learn.export(f\"{MODEL_TYPE}\")" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "'fast_inference' is 44.77MB.\n" ] } ], "source": [ "size_in_mb = os.path.getsize(Path(DATA_PATH)/MODEL_TYPE) / (1024*1024.)\n", "print(f\"'{MODEL_TYPE}' is {round(size_in_mb, 2)}MB.\")" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "application/scrapbook.scrap.json+json": { "data": [ 0.807692289352417, 0.807692289352417, 0.8461538553237915, 0.8846153616905212, 0.8461538553237915, 0.9230769276618958, 0.9230769276618958, 0.9230769276618958, 0.9230769276618958, 0.8846153616905212, 0.9230769276618958, 0.8846153616905212 ], "encoder": "json", "name": "training_accuracies", "version": 1 } }, "metadata": { "scrapbook": { "data": true, "display": false, "name": "training_accuracies" } }, "output_type": "display_data" }, { "data": { "application/scrapbook.scrap.json+json": { "data": 0.8846153616905212, "encoder": "json", "name": "validation_accuracy", "version": 1 } }, "metadata": { "scrapbook": { "data": true, "display": false, "name": "validation_accuracy" } }, "output_type": "display_data" } ], "source": [ "# Preserve some of the notebook outputs\n", "training_accuracies = [x[0].numpy().ravel()[0] for x in learn.recorder.metrics]\n", "sb.glue(\"training_accuracies\", training_accuracies)\n", "sb.glue(\"validation_accuracy\", float(validation_accuracy))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Fine-tuning parameters \n", "\n", "If you use the default parameters we have provided, you can get good results across a wide variety of datasets. However, as in most machine learning projects, getting the best possible results for a new dataset often requires tuning the parameters further. The following section provides guidelines on optimizing for accuracy, inference speed, or model size for a given dataset. We'll go through the parameters that will make the largest impact on your model as well as the parameters that may not be worth modifying.\n", "\n", "Generally speaking, models for image classification come with a trade-off between training time versus model accuracy. The four parameters that have the biggest impact on this trade-off are the DNN architecture, image resolution, learning rate, and number of epochs. DNN architecture and image resolution will additionally affect the model's inference time and memory footprint. As a rule of thumb, deeper networks with high image resolution will achieve higher accuracy at the cost of large model sizes and low training and inference speeds. Shallow networks with low image resolution will result in models with fast inference speed, fast training speeds and low model sizes at the cost of the model accuracy. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DNN architectures \n", "\n", "When choosing an architecture, we want to make sure it fits our requirements for accuracy, memory footprint, inference speed and training speeds. Some DNNs have hundreds of layers and end up with a large memory footprint and millions of parameters to tune, while others are compact and small enough to fit onto memory limited edge devices. 
\n", "\n", "Lets take a __squeezenet1_1__ model, a __resnet18__ model and __resnet50__ model and compare these using an experiment over diverse set of 6 datasets. (More about the datasets in the appendix below.)\n", "\n", "![architecture_comparisons](media/architecture_comparisons.png)\n", "\n", "As you can see from the graph, there is a clear trade-off when deciding between the models. \n", "\n", "In terms of accuracy, __resnet50__ outperforms the rest, but it also suffers from having the highest memory footprint, and the longest training and inference times. Alternatively, __squeezenet1_1__ performs the worst in terms of accuracy, but has the smallest memory footprint.\n", "\n", "Generally speaking, given enough data, the deeper DNN and the higher the image resolution, the higher the accuracy you'll be able to achieve with your model.\n", "\n", "---\n", "\n", "
See the code to generate the graphs\n", "

\n", "\n", "### Code snippet to generate graphs in this cell\n", "```python\n", "import pandas as pd\n", "from utils_cv.classification.parameter_sweeper import add_value_labels\n", "\n", "%matplotlib inline\n", "\n", "df = pd.DataFrame(\n", " {\n", " \"accuracy\": [0.9472, 0.9190, 0.8251],\n", " \"training_duration\": [385.3, 280.5, 272.5],\n", " \"inference_duration\": [34.2, 27.8, 27.6],\n", " \"memory\": [99, 45, 4.9],\n", " \"model\": [\"resnet50\", \"resnet18\", \"squeezenet1_1\"],\n", " }\n", ").set_index(\"model\")\n", "\n", "ax1, ax2, ax3, ax4 = df.plot.bar(\n", " rot=90, subplots=True, legend=False, figsize=(8, 10)\n", ")\n", "\n", "for ax in [ax1, ax2, ax3, ax4]:\n", " for i in [0, 1, 2]:\n", " if i == 0:\n", " ax.get_children()[i].set_color(\"r\")\n", " if i == 1:\n", " ax.get_children()[i].set_color(\"g\")\n", " if i == 2:\n", " ax.get_children()[i].set_color(\"b\")\n", "\n", "ax1.set_title(\"Accuracy (%)\")\n", "ax2.set_title(\"Training Duration (seconds)\")\n", "ax3.set_title(\"Inference Time (seconds)\")\n", "ax4.set_title(\"Memory Footprint (mb)\")\n", "\n", "ax1.set_ylabel(\"%\")\n", "ax2.set_ylabel(\"seconds\")\n", "ax3.set_ylabel(\"seconds\")\n", "ax4.set_ylabel(\"mb\")\n", "\n", "ax1.set_ylim(top=df[\"accuracy\"].max() * 1.3)\n", "ax2.set_ylim(top=df[\"training_duration\"].max() * 1.3)\n", "ax3.set_ylim(top=df[\"inference_duration\"].max() * 1.3)\n", "ax4.set_ylim(top=df[\"memory\"].max() * 1.3)\n", "\n", "add_value_labels(ax1, percentage=True)\n", "add_value_labels(ax2)\n", "add_value_labels(ax3)\n", "add_value_labels(ax4)\n", "```\n", "\n", "

\n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Key parameters \n", "This section examines some of the key parameters when training a deep learning model for image classification. The table below shows default parameters we recommend using.\n", "## Key Parameters \n", "This section examines some of the key parameters used in training a deep learning model for image classification. The table below shows default parameters:\n", "\n", "| Parameter | Default Value |\n", "| --- | --- |\n", "| Learning Rate | 1e-4 |\n", "| Epochs | 15 |\n", "| Batch Size | 16 |\n", "| Image Size | 300 X 300 |\n", "\n", "__Learning rate__ \n", "\n", "Learning rate or the step size is used when optimizing your model with gradient descent and tends to be one of the most important parameters to set when training your model. If your learning rate is set too low, training will progress very slowly since we're only making tiny updates to the weights in your network. However, if your learning rate is too high, it can cause undesirable divergent behavior in your loss function. Generally speaking, choosing a learning rate of 1e-4 was shown to work pretty well for most datasets. If you want to reduce training time (by training for fewer epochs), you can try setting the learning rate to 5e-3, but if you notice a spike in the training or validation loss, you may want to try reducing your learning rate.\n", "\n", "The learning rate section of [appendix below](#appendix-learning-rate) has more detail.\n", "\n", "__Epochs__\n", "\n", "An _epoch_ is a full gradient descent iteration cycle across the DNN architecture. Unless your are working with small datasets, using around 15 epochs tends to work well in most cases. When it comes to choosing the number of epochs, a common question is - _Won't too many epochs cause overfitting_? It turns out that the accuracy on the test set typically does not get worse, even if training for too many epochs. Unless your are working with small datasets, using around 15 epochs tends to work pretty well in most cases.\n", "\n", "\n", "__Batch Size__\n", "\n", "Batch size is the number of training samples you use in order to make one update to the model parameters. A batch size of 16 or 32 works well for most cases. Larger batch sizes help speed training time, but at the expense of an increased DNN memory consumption. Depending on your dataset and the GPU you have, you can start with a batch size of 32, and move down to 16 if your GPU doesn't have enough memory. After a certain batch size, improvements to training speed become marginal, hence we found 16 (or 32) to be a good trade-off between training speed and memory consumption. If you reduce the batch size, you may also have to reduce the learning rate.\n", "\n", "__Image size__ \n", "\n", "The default image size is __300 X 300__ pixels. Using higher image resolutions can help improve model accuracy but will result in longer training and inference times.\n", "\n", "The [appendix below](#appendix-imsize) discussed impact of image resolution in detail.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Additional parameters \n", "\n", "There are many hyperparameters used to tune DNNs, though in our experience the exact value of these parameters does not have a large impact on model performance, training/inference speed, or memory footprint. 
\n", "\n", "| Parameter | Good Default Value |\n", "| --- | --- |\n", "| Dropout | 0.5 or (0.5 on the final layer and 0.25 on all previous layers) |\n", "| Weight Decay | 0.01 |\n", "| Momentum | 0.9 or (min=0.85 and max=0.95 when using cyclical momentum) |\n", "\n", "__Dropout__\n", "\n", "Dropout is used to discard activations at random when training your model. It is a way to keep the model from over-fitting on the training data. In fast.ai, dropout is set to 0.5 by default on the final layer, and 0.25 on all other layer. Unless there is clear evidence of over-fitting, this dropout tends to work well.\n", "\n", "__Weight decay (L2 regularization)__\n", "\n", "Weight decay is a regularization term applied to help minimize the network loss function. We can think of it as a penalty applied to the weights after an update to prevent the weights from growing too large (the model may not converge if the weights get too large). In fast.ai, the default weight decay is 0.1, which we find to be almost always acceptable. \n", "\n", "__Momentum__\n", "\n", "Momentum is a way to accelerate convergence when training a model. Momentum uses a weighted average of the most recent updates applied to the current update. Fast.ai implements cyclical momentum when calling `fit_one_cycle()`, so the momentum will fluctuate over the course of the training cycle. We control this by setting a min and max value for the momentum. \n", "\n", "When using `fit_one_cycle()`, the default values of max=0.95 and min=0.85 are known to work well. If using `fit()`, the default value of 0.9 has been shown to work well. These defaults represent a good trade-off between training speed and the ability of the model to converge to a good solution." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Testing parameters \n", "The `ParameterSweeper` module can be used to search over the parameter space to locate the \"best\" value for that parameter. See the [exploring hyperparameters notebook](./11_exploring_hyperparameters.ipynb) for more information. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Appendix " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learning rate \n", "\n", "Setting a low learning rate requires training for many epochs to reach convergence. However, each additional epoch directly increases the model training time in a linear fashion. To efficiently build a model, it helps to set the learning rate in the correct range. To demonstrate this, we've tested various learning rates on 6 different datasets, training the full network for 3 or 15 epochs.\n", "\n", "![lr_comparisons](media/lr_comparisons.png)\n", "\n", "
Understanding the diagram\n", "

\n", " \n", "> The figure on the left shows results of different learning rates on different datasets at 15 epochs. We see that a learning rate of 1e-4 results in the the best overall accuracy for the datasets we have tested. Notice there is a pretty significant variance between the datasets and a learning rate of 1-e3 may work better for some datasets. \n", "In the figure on the right, at 15 epochs, the results of 1e-4 are only slightly better than that of 1e-3. However, at only 3 epochs, a learning rate of 1e-3 out performs the smaller learning rates. This makes sense since we're limiting the training to only 3 epochs so a model that updates weights more quickly should perform better. Effectively a larger learning rate gets closer to the model convergence. This result indicates higher learning rates (such as 1e-3) may help minimize the training time, and lower learning rates (such as 1e-5) may be better if training time is not constrained. \n", "\n", "

\n", "
\n", "\n", "In both figures, we can see that a learning rate of 1e-3 and 1e-4 tends to workin general. We observe that training with 3 epochs results in lower accuracy compared to 15 epochs. And in some cases, smaller learning rates may prevent the DNN from converging. \n", "\n", "Fast.ai has implemented [one cycle policy with cyclical momentum](https://arxiv.org/abs/1803.09820) which adaptively optimizes the learning rate. This function takes a maximum learning rate value as an argument to help the method avoid the convergence problem. Replace the `fit()` method with `fit_one_cycle()` to use this capability.\n", "\n", "---\n", "\n", "
See the code to generate the graphs\n", "

\n", "\n", "### Code snippet to generate graphs in this cell\n", "\n", "```python\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "\n", "df_dataset_comp = pd.DataFrame(\n", " {\n", " \"fashionTexture\": [0.8749, 0.8481, 0.2491, 0.670318, 0.1643],\n", " \"flickrLogos32Subset\": [0.9069, 0.9064, 0.2179, 0.7175, 0.1073],\n", " \"food101Subset\": [0.9294, 0.9127, 0.6891, 0.9090, 0.555827],\n", " \"fridgeObjects\": [0.9591, 0.9727, 0.272727, 0.6136, 0.181818],\n", " \"lettuce\": [0.8992, 0.9104, 0.632, 0.8192, 0.5120],\n", " \"recycle_v3\": [0.9527, 0.9581, 0.766, 0.8591, 0.2876],\n", " \"learning_rate\": [0.000100, 0.001000, 0.010000, 0.000010, 0.000001],\n", " }\n", ").set_index(\"learning_rate\")\n", "\n", "df_epoch_comp = pd.DataFrame(\n", " {\n", " \"3_epochs\": [0.823808, 0.846394, 0.393808, 0.455115, 0.229120],\n", " \"15_epochs\": [0.920367, 0.918067, 0.471138, 0.764786, 0.301474],\n", " \"learning_rate\": [0.000100, 0.001000, 0.010000, 0.000010, 0.000001],\n", " }\n", ").set_index(\"learning_rate\")\n", "\n", "plt.figure(1)\n", "ax1 = plt.subplot(121)\n", "ax2 = plt.subplot(122)\n", "\n", "vals = ax2.get_yticks()\n", "\n", "df_dataset_comp.sort_index().plot(kind=\"bar\", rot=0, figsize=(15, 6), ax=ax1)\n", "vals = ax1.get_yticks()\n", "ax1.set_yticklabels([\"{:,.2%}\".format(x) for x in vals])\n", "ax1.set_ylim(0, 1)\n", "ax1.set_ylabel(\"Accuracy (%)\")\n", "ax1.set_title(\"Accuracy of Learning Rates by Datasets @ 15 Epochs\")\n", "ax1.legend(loc=2)\n", "\n", "df_epoch_comp.sort_index().plot(kind=\"bar\", rot=0, figsize=(15, 6), ax=ax2)\n", "ax2.set_yticklabels([\"{:,.2%}\".format(x) for x in vals])\n", "ax2.set_ylim(0, 1)\n", "ax2.set_title(\"Accuracy of Learning Rates by Epochs\")\n", "ax2.legend(loc=2)\n", "```\n", "\n", "

\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Image resolution \n", "\n", "A model's input image resolution also impacts model accuracy. Usually, convolutional neural networks are able to take advantage of higher resolution images, especially if the object-of-interest is small in the overall image. But how does image size impact other model aspects? \n", "\n", "We find that image size doesn't significantly affect the model's memory footprint given the same network architecture, but it has a huge effect on GPU memory. Image size also impacts training and inference speeds.\n", "\n", "![imsize_comparisons](media/imsize_comparisons.png)\n", "\n", "From the results, we can see that an increase in image resolution from __300 X 300__ to __500 X 500__ will increase the performance marginally at the cost of a longer training duration and slower inference speed.\n", "\n", "---\n", "\n", "
See the code to generate the graphs\n", "

\n", "\n", "### Code snippet to generate graphs in this cell\n", "\n", "```python\n", "import pandas as pd\n", "from utils_cv.classification.parameter_sweeper import add_value_labels\n", "%matplotlib inline\n", "\n", "df = pd.DataFrame(\n", " {\n", " \"accuracy\": [0.9472, 0.9394, 0.9190, 0.9164, 0.8366, 0.8251],\n", " \"training_duration\": [385.3, 218.8, 280.5, 184.9, 272.5, 182.3],\n", " \"inference_duration\": [34.2, 23.2, 27.8, 17.8, 27.6, 17.3],\n", " \"model\": [\n", " \"resnet50 X 499\",\n", " \"resnet50 X 299\",\n", " \"resnet18 X 499\",\n", " \"resnet18 X 299\",\n", " \"squeezenet1_1 X 499\",\n", " \"squeezenet1_1 X 299\",\n", " ],\n", " }\n", ").set_index(\"model\")\n", "df\n", "\n", "ax1, ax2, ax3 = df.plot.bar(\n", " rot=90, subplots=True, legend=False, figsize=(12, 12)\n", ")\n", "\n", "for i in range(len(df)):\n", " if i < len(df) / 3:\n", " ax1.get_children()[i].set_color(\"r\")\n", " ax2.get_children()[i].set_color(\"r\")\n", " ax3.get_children()[i].set_color(\"r\")\n", " if i >= len(df) / 3 and i < 2 * len(df) / 3:\n", " ax1.get_children()[i].set_color(\"g\")\n", " ax2.get_children()[i].set_color(\"g\")\n", " ax3.get_children()[i].set_color(\"g\")\n", " if i >= 2 * len(df) / 3:\n", " ax1.get_children()[i].set_color(\"b\")\n", " ax2.get_children()[i].set_color(\"b\")\n", " ax3.get_children()[i].set_color(\"b\")\n", "\n", "ax1.set_title(\"Accuracy (%)\")\n", "ax2.set_title(\"Training Duration (seconds)\")\n", "ax3.set_title(\"Inference Speed (seconds)\")\n", "\n", "ax1.set_ylabel(\"%\")\n", "ax2.set_ylabel(\"seconds\")\n", "ax3.set_ylabel(\"seconds\")\n", "\n", "ax1.set_ylim(top=df[\"accuracy\"].max() * 1.2)\n", "ax2.set_ylim(top=df[\"training_duration\"].max() * 1.2)\n", "ax3.set_ylim(top=df[\"inference_duration\"].max() * 1.2)\n", "\n", "add_value_labels(ax1, percentage=True)\n", "add_value_labels(ax2)\n", "add_value_labels(ax3)\n", "```\n", "\n", "

\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## How we found good default parameters \n", "\n", "We conducted various experiments to explore the impact of different hyperparameters on a model's _accuracy_, _training duration_, _inference speed_, and _memory footprint_. \n", "\n", "### Datasets \n", "\n", "For our experiments, we relied on a set of six different classification datasets. When selecting these datasets, we wanted to have a variety of image types with different amounts of data and number of classes. \n", "\n", "| Dataset Name | Number of Images | Number of Classes | \n", "| --- | --- | --- |\n", "| food101Subset | 5000 | 5 | \n", "| flickrLogos32Subset | 2740 | 33 | \n", "| fashionTexture | 1716 | 11 | \n", "| recycle_v3 | 564 | 11 | \n", "| lettuce | 380 | 2 |\n", "| fridgeObjects | 134 | 4 | \n", "\n", "### Model Characteristics \n", "\n", "In our experiment, we look at these characteristics to evaluate the impact of various parameters. Here is how we calculated each of the following metrics:\n", "\n", "- __Accuracy__ metric is averaged over 5 runs for each dataset. \n", "\n", "\n", "- __Training Duration__ metric is the average duration over 5 runs for each dataset.\n", "\n", "\n", "- __Inference Speed__ is the time it takes the model to run 1000 predictions.\n", "\n", "\n", "- __Memory Footprint__ is the size of the model pickle file output from the `learn.export(...)` method. \n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python (cv)", "language": "python", "name": "cv" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }