{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Model Layers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This module contains many layer classes that we might be interested in using in our models. These layers complement the default [Pytorch layers](https://pytorch.org/docs/stable/nn.html) which we can also use as predefined layers." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai import *\n", "from fastai.vision import *\n", "from fastai.gen_doc.nbdoc import *" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class AdaptiveConcatPool2d[source]

\n", "\n", "> AdaptiveConcatPool2d(`sz`:`Optional`\\[`int`\\]=`None`) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(AdaptiveConcatPool2d, doc_string=False)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.layers import * " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Layer that concats `AdaptiveAvgPool2d` and `AdaptiveMaxPool2d`. Output will be `2*sz` or 2 if `sz` is None." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) object uses adaptive average pooling and adaptive max pooling and concatenates them both. We use this because it provides the model with the information of both methods and improves performance. This technique is called `adaptive` because it allows us to decide on what output dimensions we want, instead of choosing the input's dimensions to fit a desired output size.\n", "\n", "Let's try training with Adaptive Average Pooling first, then with Adaptive Max Pooling and finally with the concatenation of them both to see how they fare in performance.\n", "\n", "We will first define a [`simple_cnn`](/layers.html#simple_cnn) using [Adapative Max Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveMaxPool2d) by changing the source code a bit." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.MNIST_SAMPLE)\n", "data = ImageDataBunch.from_folder(path)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simple_cnn_max(actns:Collection[int], kernel_szs:Collection[int]=None,\n", " strides:Collection[int]=None) -> nn.Sequential:\n", " \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n", " nl = len(actns)-1\n", " kernel_szs = ifnone(kernel_szs, [3]*nl)\n", " strides = ifnone(strides , [2]*nl)\n", " layers = [conv2d_relu(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n", " for i in range(len(strides))]\n", " layers.append(nn.Sequential(nn.AdaptiveMaxPool2d(1), Flatten()))\n", " return nn.Sequential(*layers)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HBox(children=(IntProgress(value=0, max=1), HTML(value='0.00% [0/1 00:00<00:00]'))), HTML(value…" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Total time: 00:02\n", "epoch train loss valid loss accuracy\n", "0 0.082607 0.087943 0.970069 (00:02)\n", "\n" ] } ], "source": [ "model = simple_cnn_max((3,16,16,2))\n", "learner = Learner(data, model, metrics=[accuracy])\n", "learner.fit(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's try with [Adapative Average Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) now." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simple_cnn_avg(actns:Collection[int], kernel_szs:Collection[int]=None,\n", " strides:Collection[int]=None) -> nn.Sequential:\n", " \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n", " nl = len(actns)-1\n", " kernel_szs = ifnone(kernel_szs, [3]*nl)\n", " strides = ifnone(strides , [2]*nl)\n", " layers = [conv2d_relu(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n", " for i in range(len(strides))]\n", " layers.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()))\n", " return nn.Sequential(*layers)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HBox(children=(IntProgress(value=0, max=1), HTML(value='0.00% [0/1 00:00<00:00]'))), HTML(value…" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Total time: 00:02\n", "epoch train loss valid loss accuracy\n", "0 0.151425 0.126878 0.957802 (00:02)\n", "\n" ] } ], "source": [ "model = simple_cnn_avg((3,16,16,2))\n", "learner = Learner(data, model, metrics=[accuracy])\n", "learner.fit(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally we will try with the concatenation of them both [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d). We will see that, in fact, it increases our accuracy and decreases our loss considerably!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None,\n", " strides:Collection[int]=None) -> nn.Sequential:\n", " \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n", " nl = len(actns)-1\n", " kernel_szs = ifnone(kernel_szs, [3]*nl)\n", " strides = ifnone(strides , [2]*nl)\n", " layers = [conv2d_relu(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n", " for i in range(len(strides))]\n", " layers.append(nn.Sequential(AdaptiveConcatPool2d(1), Flatten()))\n", " return nn.Sequential(*layers)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HBox(children=(IntProgress(value=0, max=1), HTML(value='0.00% [0/1 00:00<00:00]'))), HTML(value…" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Total time: 00:02\n", "epoch train loss valid loss accuracy\n", "0 0.076629 0.054396 0.983808 (00:02)\n", "\n" ] } ], "source": [ "model = simple_cnn((3,16,16,2))\n", "learner = Learner(data, model, metrics=[accuracy])\n", "learner.fit(1)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Lambda[source]

\n", "\n", "> Lambda(`func`:`LambdaFunc`) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Lambda, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lambda allows us to define functions and use them as layers in our networks inside a [Sequential](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential) object. \n", "\n", "So, for example, say we want to apply a [log_softmax loss](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.log_softmax) and we need to change the shape of our output batches to be able to use this loss. We can add a layer that applies the necessary change in shape by calling:\n", "\n", "`Lambda(lambda x: x.view(x.size(0),-1))`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see an example of how the shape of our output can change when we add this layer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([64, 10, 1, 1])\n" ] } ], "source": [ "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.AdaptiveAvgPool2d(1),\n", ")\n", "\n", "model.cuda()\n", "\n", "for xb, yb in data.train_dl:\n", " out = (model(*[xb]))\n", " print(out.size())\n", " break" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([64, 10])\n" ] } ], "source": [ "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.AdaptiveAvgPool2d(1),\n", " Lambda(lambda x: x.view(x.size(0),-1))\n", ")\n", "\n", "model.cuda()\n", "\n", "for xb, yb in data.train_dl:\n", " out = (model(*[xb]))\n", " print(out.size())\n", " break" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

Flatten[source]

\n", "\n", "> Flatten() → `Tensor`\n", "\n", "Flattens `x` to a single dimension, often used at the end of a model. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Flatten)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function we build above is actually implemented in our library as [`Flatten`](/layers.html#Flatten). We can see that it returns the same size when we run it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([64, 10])\n" ] } ], "source": [ "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.AdaptiveAvgPool2d(1),\n", " Flatten(),\n", ")\n", "\n", "model.cuda()\n", "\n", "for xb, yb in data.train_dl:\n", " out = (model(*[xb]))\n", " print(out.size())\n", " break" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

PoolFlatten[source]

\n", "\n", "> PoolFlatten() → [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential)\n", "\n", "Apply [`nn.AdaptiveAvgPool2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) to `x` and then flatten the result. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(PoolFlatten)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can combine these two final layers ([AdaptiveAvgPool2d](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) and [`Flatten`](/layers.html#Flatten)) by using [`PoolFlatten`](/layers.html#PoolFlatten)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([64, 10])\n" ] } ], "source": [ "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " PoolFlatten()\n", ")\n", "\n", "model.cuda()\n", "\n", "for xb, yb in data.train_dl:\n", " out = (model(*[xb]))\n", " print(out.size())\n", " break" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

ResizeBatch[source]

\n", "\n", "> ResizeBatch(`size`:`int`) → `Tensor`\n", "\n", "Layer that resizes x to `size`, good for connecting mismatched layers. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ResizeBatch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another use we give to the Lambda function is to resize batches with [`ResizeBatch`](/layers.html#ResizeBatch) when we have a layer that expects a different input than what comes from the previous one. Let's see an example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[ 1., -1.],\n", " [ 1., -1.]])\n" ] } ], "source": [ "a = torch.tensor([[1., -1.], [1., -1.]])\n", "print(a)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[ 1., -1., 1., -1.]])\n" ] } ], "source": [ "out = ResizeBatch(4)\n", "print(out(a))" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class StdUpsample[source]

\n", "\n", "> StdUpsample(`n_in`:`int`, `n_out`:`int`) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(StdUpsample, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Increases the dimensionality of our data from `n_in` to `n_out` by applying a transposed convolution layer to the input and with batchnorm and a RELU activation." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class CrossEntropyFlat[source]

\n", "\n", "> CrossEntropyFlat(`weight`=`None`, `size_average`=`None`, `ignore_index`=`-100`, `reduce`=`None`, `reduction`=`'elementwise_mean'`) :: [`CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(CrossEntropyFlat, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": { "hide_input": true }, "source": [ "Same as [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss), but flattens input and target. Is used to calculate cross entropy on arrays (which Pytorch will not let us do with their [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss) function). An example of a use case is image segmentation models where the output in an image (or an array of pixels).\n", "\n", "The parameters are the same as [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss): `weight` to rescale each class, `size_average` whether we want to sum the losses across elements in a batch or we want to add them up, `ignore_index` what targets do we want to ignore, `reduce` on whether we want to return a loss per batch element and `reduction` specifies which type of reduction (if any) we want to apply to our input." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class MSELossFlat[source]

\n", "\n", "> MSELossFlat(`size_average`=`None`, `reduce`=`None`, `reduction`=`'elementwise_mean'`) :: [`MSELoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.MSELoss)\n", "\n", "Same as [`nn.MSELoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.MSELoss), but flattens input and target. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MSELossFlat)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Debugger[source]

\n", "\n", "> Debugger() :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "A module to debug inside a model. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Debugger)" ] }, { "cell_type": "markdown", "metadata": { "hide_input": false }, "source": [ "The debugger module allows us to peek inside a network while its training and see in detail what is going on. We can see inputs, ouputs and sizes at any point in the network.\n", "\n", "For instance, if you run the following:\n", "\n", "``` python\n", "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " Debugger(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", ")\n", "\n", "model.cuda()\n", "\n", "learner = Learner(data, model, metrics=[accuracy])\n", "learner.fit(5)\n", "```\n", "... you'll see something like this:\n", "\n", "```\n", "/home/ubuntu/fastai/fastai/layers.py(74)forward()\n", " 72 def forward(self,x:Tensor) -> Tensor:\n", " 73 set_trace()\n", "---> 74 return x\n", " 75 \n", " 76 class StdUpsample(nn.Module):\n", "\n", "ipdb>\n", "```" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

bn_drop_lin[source]

\n", "\n", "> bn_drop_lin(`n_in`:`int`, `n_out`:`int`, `bn`:`bool`=`True`, `p`:`float`=`0.0`, `actn`:`Optional`\\[[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\\]=`None`)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(bn_drop_lin, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [`bn_drop_lin`](/layers.html#bn_drop_lin) function returns a sequence of [batch normalization](https://arxiv.org/abs/1502.03167), [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) and a linear layer. This custom layer is usually used at the end of a model. \n", "\n", "`n_in` represents the number of size of the input `n_out` the size of the output, `bn` whether we want batch norm or not, `p` is how much dropout and `actn` is an optional parameter to add an activation function at the end." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

conv2d[source]

\n", "\n", "> conv2d(`ni`:`int`, `nf`:`int`, `ks`:`int`=`3`, `stride`:`int`=`1`, `padding`:`int`=`None`, `bias`=`False`) → [`Conv2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d)\n", "\n", "Create [`nn.Conv2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d) layer: `ni` inputs, `nf` outputs, `ks` kernel size. `padding` defaults to `k//2`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(conv2d)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

conv2d_relu[source]

\n", "\n", "> conv2d_relu(`ni`:`int`, `nf`:`int`, `ks`:`int`=`3`, `stride`:`int`=`1`, `padding`:`int`=`None`, `bn`:`bool`=`False`, `bias`:`bool`=`False`) → [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(conv2d_relu, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a [`conv2d`](/layers.html#conv2d) layer with [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU) activation and optional(`bn`) [`nn.BatchNorm2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm2d): `ni` input, `nf` out filters, `ks` kernel, `stride`:stride, `padding`:padding, `bn`: batch normalization." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

conv2d_trans[source]

\n", "\n", "> conv2d_trans(`ni`:`int`, `nf`:`int`, `ks`:`int`=`2`, `stride`:`int`=`2`, `padding`:`int`=`0`, `bias`=`False`) → [`ConvTranspose2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d)\n", "\n", "Create [`nn.ConvTranspose2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d) layer: `ni` inputs, `nf` outputs, `ks` kernel size, `stride`: stride. `padding` defaults to 0. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(conv2d_trans)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

conv_layer[source]

\n", "\n", "> conv_layer(`ni`:`int`, `nf`:`int`, `ks`:`int`=`3`, `stride`:`int`=`1`) → [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(conv_layer, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [`conv_layer`](/layers.html#conv_layer) function returns a sequence of [nn.Conv2D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d), [BatchNorm2d](https://arxiv.org/abs/1502.03167) and a [leaky RELU](https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf) activation function.\n", "\n", "`n_in` represents the number of size of the input `n_out` the size of the output, `ks` kernel size, `stride` the stride with which we want to apply the convolutions." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

get_embedding[source]

\n", "\n", "> get_embedding(`ni`:`int`, `nf`:`int`) → [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(get_embedding, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create an [embedding layer](https://arxiv.org/abs/1711.09160) with input size `ni` and output size `nf`." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

simple_cnn[source]

\n", "\n", "> simple_cnn(`actns`:`Collection`\\[`int`\\], `kernel_szs`:`Collection`\\[`int`\\]=`None`, `strides`:`Collection`\\[`int`\\]=`None`, `bn`=`False`) → [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential)\n", "\n", "CNN with [`conv2d_relu`](/layers.html#conv2d_relu) layers defined by `actns`, `kernel_szs` and `strides`, plus batchnorm if `bn`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(simple_cnn)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

std_upsample_head[source]

\n", "\n", "> std_upsample_head(`c`, `nfs`:`Collection`\\[`int`\\]) → [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(std_upsample_head, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a sequence of upsample layers with a RELU at the beggining and a [nn.ConvTranspose2d](https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d). \n", "\n", "`nfs` is a list with the input and output sizes of each upsample layer and `c` is the output size of the final 2D Transpose Convolutional layer." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

trunc_normal_[source]

\n", "\n", "> trunc_normal_(`x`:`Tensor`, `mean`:`float`=`0.0`, `std`:`float`=`1.0`) → `Tensor`\n", "\n", "Truncated normal initialization. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(trunc_normal_)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

forward[source]

\n", "\n", "> forward(`x`:`Tensor`) → `Tensor`\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", ".. note::\n", " Although the recipe for forward pass needs to be defined within\n", " this function, one should call the :class:`Module` instance afterwards\n", " instead of this since the former takes care of running the\n", " registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Debugger.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": 22, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

forward[source]

\n", "\n", "> forward(`x`:`Tensor`) → `Tensor`\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", ".. note::\n", " Although the recipe for forward pass needs to be defined within\n", " this function, one should call the :class:`Module` instance afterwards\n", " instead of this since the former takes care of running the\n", " registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(StdUpsample.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": 23, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

forward[source]

\n", "\n", "> forward(`input`:`Tensor`, `target`:`Tensor`) → `Rank0Tensor`\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", ".. note::\n", " Although the recipe for forward pass needs to be defined within\n", " this function, one should call the :class:`Module` instance afterwards\n", " instead of this since the former takes care of running the\n", " registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MSELossFlat.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": 24, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

forward[source]

\n", "\n", "> forward(`input`:`Tensor`, `target`:`Tensor`) → `Rank0Tensor`\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", ".. note::\n", " Although the recipe for forward pass needs to be defined within\n", " this function, one should call the :class:`Module` instance afterwards\n", " instead of this since the former takes care of running the\n", " registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(CrossEntropyFlat.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": 25, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

forward[source]

\n", "\n", "> forward(`x`)\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", ".. note::\n", " Although the recipe for forward pass needs to be defined within\n", " this function, one should call the :class:`Module` instance afterwards\n", " instead of this since the former takes care of running the\n", " registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Lambda.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": 26, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

forward[source]

\n", "\n", "> forward(`x`)\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", ".. note::\n", " Although the recipe for forward pass needs to be defined within\n", " this function, one should call the :class:`Module` instance afterwards\n", " instead of this since the former takes care of running the\n", " registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(AdaptiveConcatPool2d.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Provides essential functions to building and modifying `Model` architectures.", "title": "layers" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }