{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Computer Vision Learner" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`vision.learner`](/vision.learner.html#vision.learner) is the module that defines the [`create_cnn`](/vision.learner.html#create_cnn) method, to easily get a model suitable for transfer learning." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.vision import *\n", "from fastai import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Transfer learning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Transfer learning is a technique where you use a model trained on a very large dataset (usually [ImageNet](http://image-net.org/) in computer vision) and then adapt it to your own dataset. The idea is that it has learned to recognize many features on all of this data, and that you will benefit from this knowledge, especially if your dataset is small, compared to starting from a randomly initiliazed model. It has been proved in [this article](https://arxiv.org/abs/1805.08974) on a wide range of tasks that transfer learning nearly always give better results.\n", "\n", "In practice, you need to change the last part of your model to be adapted to your own number of classes. Most convolutional models end with a few linear layers (a part will call head). The last convolutional layer will have analyzed features in the image that went through the model, and the job of the head is to convert those in predictions for each of our classes. In transfer learning we will keep all the convolutional layers (called the body or the backbone of the model) with their weights pretrained on ImageNet but will define a new head initiliazed randomly.\n", "\n", "Then we will train the model we obtain in two phases: first we freeze the body weights and only train the head (to convert those analyzed features into predictions for our own data), then we unfreeze the layers of the backbone (gradually if necessary) and fine-tune the whole model (possily using differential learning rates).\n", "\n", "The [`create_cnn`](/vision.learner.html#create_cnn) factory method helps you to automatically get a pretrained model from a given architecture with a custom head that is suitable for your data." ] }, { "cell_type": "markdown", "metadata": { "hide_input": false }, "source": [ "
"#### `create_cnn`\n",
"\n",
"> `create_cnn`([`data`](/vision.data.html#vision.data):[`DataBunch`](/basic_data.html#DataBunch), `arch`:`Callable`, `cut`:`Union`\\[`int`, `Callable`\\]=`None`, `pretrained`:`bool`=`True`, `lin_ftrs`:`Optional`\\[`Collection`\\[`int`\\]\\]=`None`, `ps`:`Floats`=`0.5`, `custom_head`:`Optional`\\[[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\\]=`None`, `split_on`:`Union`\\[`Callable`, `Collection`\\[`ModuleList`\\], `NoneType`\\]=`None`, `kwargs`:`Any`)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This method creates a [`Learner`](/basic_train.html#Learner) object from the [`data`](/vision.data.html#vision.data) object and model inferred from it with the backbone given in `arch`. Specifically, it will cut the model defined by `arch` (randomly initialized if `pretrained` is False) at the last convolutional layer by default (or as defined in `cut`, see below) and add:\n",
"- an [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) layer,\n",
"- a [`Flatten`](/layers.html#Flatten) layer,\n",
"- blocks of \\[[`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout), [`nn.Linear`](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU)\\] layers.\n",
"\n",
"The blocks are defined by the `lin_ftrs` and `ps` arguments. Specifically, the first block will have a number of inputs inferred from the backbone `arch` and the last one will have a number of outputs equal to `data.c` (which contains the number of classes of the data) and the intermediate blocks have a number of inputs/outputs determined by `lin_frts` (of course a block has a number of inputs equal to the number of outputs of the previous block). The default is to have an intermediate hidden size of 512 (which makes two blocks `model_activation` -> 512 -> `n_classes`). If you pass a float then the final dropout layer will have the value `ps`, and the remaining will be `ps/2`. If you pass a list then the values are used for dropout probabilities directly.\n",
"\n",
"Note that the very last block doesn't have a [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU) activation, to allow you to use any final activation you want (generally included in the loss function in pytorch). Also, the backbone will be frozen if you choose `pretrained=True` (so only the head will train if you call [`fit`](/basic_train.html#fit)) so that you can immediately start phase one of training as described above.\n",
"\n",
"Alternatively, you can define your own `custom_head` to put on top of the backbone. If you want to specify where to split `arch` you should so in the argument `cut` which can either be the index of a specific layer (the result will not include that layer) or a function that, when passed the model, will return the backbone you want.\n",
"\n",
"The final model obtained by stacking the backbone and the head (custom or defined as we saw) is then separated in groups for gradual unfreezeing or differential learning rates. You can specify of to split the backbone in groups with the optional argument `split_on` (should be a function that returns those groups when given the backbone). \n",
"\n",
"The `kwargs` will be passed on to [`Learner`](/basic_train.html#Learner), so you can put here anything that [`Learner`](/basic_train.html#Learner) will accept ([`metrics`](/metrics.html#metrics), `loss_func`, `opt_func`...)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"path = untar_data(URLs.MNIST_SAMPLE)\n",
"data = ImageDataBunch.from_folder(path)"
]
},
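{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before training, here is a minimal sketch of the head-related arguments described above (the values `[256]` and `0.25`, and the name `learn_tmp`, are arbitrary choices for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Head with one intermediate block of size 256 instead of the default 512;\n",
"# a float ps puts 0.25 dropout on the last block and 0.125 on the others.\n",
"learn_tmp = create_cnn(data, models.resnet18, lin_ftrs=[256], ps=0.25)\n",
"learn_tmp.model[1]  # the generated head"
]
},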
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total time: 00:09\n",
"epoch train_loss valid_loss accuracy\n",
"1 0.112763 0.066951 0.979882 (00:09)\n",
"\n"
]
}
],
"source": [
"learner = create_cnn(data, models.resnet18, metrics=[accuracy])\n",
"learner.fit_one_cycle(1,1e-3)"
]
},
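{
"cell_type": "markdown",
"metadata": {},
"source": [
"This first phase only trains the head, since the pretrained backbone is frozen by default. For the second phase described above, a minimal sketch (the learning rates are arbitrary choices) is to unfreeze the layer groups and fine-tune the whole model with differential learning rates:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Unfreeze the backbone, then train all layer groups, using lower\n",
"# learning rates for the earlier groups.\n",
"learner.unfreeze()\n",
"learner.fit_one_cycle(1, max_lr=slice(1e-5, 1e-3))"
]
},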
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner.save('one_epoch')"
]
},
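{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the default head doesn't suit your task, you can pass your own with the `custom_head` argument described above. A minimal sketch (this particular head is an arbitrary choice; `512` assumes a resnet18 backbone with a plain average pooling instead of the concat pooling):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Replace the default head with a simple average-pool + linear classifier.\n",
"head = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, data.c))\n",
"learn_custom = create_cnn(data, models.resnet18, custom_head=head)"
]
},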
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get predictions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you've actually trained your model, you may want to use it on a single image. This is done by using the following method."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"predict
[source]predict
(`img`:[`Image`](/vision.image.html#Image))\n",
"\n",
"Return prect class, label and probabilities for `img`. "
],
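},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, here is a minimal usage sketch with the `learner` and `data` objects from above (the index `0` is just an arbitrary validation image):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Return the predicted class, label index and class probabilities for one image.\n",
"img = data.valid_ds[0][0]\n",
"learner.predict(img)"
]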
"text/plain": [
"create_body
[source]create_body
(`model`:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), `cut`:`Optional`\\[`int`\\]=`None`, `body_fn`:`Callable`\\[[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\\]=`None`)\n",
"\n",
"Cut off the body of a typically pretrained `model` at `cut` or as specified by `body_fn`. "
],
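},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, a minimal sketch (here `cut=-2` is an arbitrary choice that drops the final pooling and classification layers of a resnet):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Keep everything in a resnet18 except its last two layers.\n",
"body = create_body(models.resnet18(pretrained=True), cut=-2)"
]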
"text/plain": [
"create_head
[source]create_head
(`nf`:`int`, `nc`:`int`, `lin_ftrs`:`Optional`\\[`Collection`\\[`int`\\]\\]=`None`, `ps`:`Floats`=`0.5`)"
],
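},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This builds a head going from `nf` features to `nc` classes, like the one [`create_cnn`](/vision.learner.html#create_cnn) adds by default. A minimal sketch (here `1024` assumes a resnet18 backbone whose 512 features are doubled by [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d)):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Head going from 1024 pooled features to data.c classes,\n",
"# with one intermediate block of size 512.\n",
"head = create_head(1024, data.c, lin_ftrs=[512])"
]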
"text/plain": [
"class
ClassificationInterpretation
[source]ClassificationInterpretation
(`data`:[`DataBunch`](/basic_data.html#DataBunch), `probs`:`Tensor`, `y_true`:`Tensor`, `losses`:`Tensor`, `sigmoid`:`bool`=`None`)\n",
"\n",
"Interpretation methods for classification models. "
],
"text/plain": [
"from_learner
[source]from_learner
(`learn`:[`Learner`](/basic_train.html#Learner), `sigmoid`:`bool`=`None`, `tta`=`False`)\n",
"\n",
"Create an instance of [`ClassificationInterpretation`](/vision.learner.html#ClassificationInterpretation). `tta` indicates if we want to use Test Time Augmentation. "
],
"text/plain": [
"plot_top_losses
[source]plot_top_losses
(`k`, `largest`=`True`, `figsize`=`(12, 12)`)\n",
"\n",
"Show images in `top_losses` along with their prediction, actual, loss, and probability of predicted class. "
],
"text/plain": [
"