{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Layers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This module contains many layer classes that we might be interested in using in our models. These layers complement the default [Pytorch layers](https://pytorch.org/docs/stable/nn.html) which we can also use as predefined layers." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.vision import *\n", "from fastai.gen_doc.nbdoc import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Custom fastai modules" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class AdaptiveConcatPool2d[source][test]

\n", "\n", "> AdaptiveConcatPool2d(**`sz`**:`Optional`\\[`int`\\]=***`None`***) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for AdaptiveConcatPool2d. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Layer that concats `AdaptiveAvgPool2d` and `AdaptiveMaxPool2d`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(AdaptiveConcatPool2d, title_level=3)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.layers import * " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The output will be `2*sz`, or just 2 if `sz` is None." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) object uses adaptive average pooling and adaptive max pooling and concatenates them both. We use this because it provides the model with the information of both methods and improves performance. This technique is called `adaptive` because it allows us to decide on what output dimensions we want, instead of choosing the input's dimensions to fit a desired output size.\n", "\n", "Let's try training with Adaptive Average Pooling first, then with Adaptive Max Pooling and finally with the concatenation of them both to see how they fare in performance.\n", "\n", "We will first define a [`simple_cnn`](/layers.html#simple_cnn) using [Adaptive Max Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveMaxPool2d) by changing the source code a bit." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.MNIST_SAMPLE)\n", "data = ImageDataBunch.from_folder(path)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simple_cnn_max(actns:Collection[int], kernel_szs:Collection[int]=None,\n", " strides:Collection[int]=None) -> nn.Sequential:\n", " \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n", " nl = len(actns)-1\n", " kernel_szs = ifnone(kernel_szs, [3]*nl)\n", " strides = ifnone(strides , [2]*nl)\n", " layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n", " for i in range(len(strides))]\n", " layers.append(nn.Sequential(nn.AdaptiveMaxPool2d(1), Flatten()))\n", " return nn.Sequential(*layers)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "Total time: 00:02

\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_lossaccuracy
10.1027580.0646760.984298
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "model = simple_cnn_max((3,16,16,2))\n", "learner = Learner(data, model, metrics=[accuracy])\n", "learner.fit(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's try with [Adaptive Average Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) now." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simple_cnn_avg(actns:Collection[int], kernel_szs:Collection[int]=None,\n", " strides:Collection[int]=None) -> nn.Sequential:\n", " \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n", " nl = len(actns)-1\n", " kernel_szs = ifnone(kernel_szs, [3]*nl)\n", " strides = ifnone(strides , [2]*nl)\n", " layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n", " for i in range(len(strides))]\n", " layers.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()))\n", " return nn.Sequential(*layers)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "Total time: 00:02

\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_lossaccuracy
10.2414850.2011160.973994
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "model = simple_cnn_avg((3,16,16,2))\n", "learner = Learner(data, model, metrics=[accuracy])\n", "learner.fit(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally we will try with the concatenation of them both [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d). We will see that, in fact, it increases our accuracy and decreases our loss considerably!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None,\n", " strides:Collection[int]=None) -> nn.Sequential:\n", " \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n", " nl = len(actns)-1\n", " kernel_szs = ifnone(kernel_szs, [3]*nl)\n", " strides = ifnone(strides , [2]*nl)\n", " layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n", " for i in range(len(strides))]\n", " layers.append(nn.Sequential(AdaptiveConcatPool2d(1), Flatten()))\n", " return nn.Sequential(*layers)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "Total time: 00:02

\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_lossaccuracy
10.2030150.1220940.988224
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "model = simple_cnn((3,16,16,2))\n", "learner = Learner(data, model, metrics=[accuracy])\n", "learner.fit(1)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Lambda[source][test]

\n", "\n", "> Lambda(**`func`**:`LambdaFunc`) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for Lambda. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create a layer that simply calls `func` with `x` " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Lambda, title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is very useful to use functions as layers in our networks inside a [Sequential](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential) object. So, for example, say we want to apply a [log_softmax loss](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.log_softmax) and we need to change the shape of our output batches to be able to use this loss. We can add a layer that applies the necessary change in shape by calling:\n", "\n", "`Lambda(lambda x: x.view(x.size(0),-1))`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see an example of how the shape of our output can change when we add this layer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([64, 10, 1, 1])\n" ] } ], "source": [ "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.AdaptiveAvgPool2d(1),\n", ")\n", "\n", "model.cuda()\n", "\n", "for xb, yb in data.train_dl:\n", " out = (model(*[xb]))\n", " print(out.size())\n", " break" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([64, 10])\n" ] } ], "source": [ "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.AdaptiveAvgPool2d(1),\n", " Lambda(lambda x: x.view(x.size(0),-1))\n", ")\n", "\n", "model.cuda()\n", "\n", "for xb, yb in data.train_dl:\n", " out = (model(*[xb]))\n", " print(out.size())\n", " break" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Flatten[source][test]

\n", "\n", "> Flatten(**`full`**:`bool`=***`False`***) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for Flatten. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Flatten `x` to a single dimension, often used at the end of a model. `full` for rank-1 tensor " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Flatten)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function we build above is actually implemented in our library as [`Flatten`](/layers.html#Flatten). We can see that it returns the same size when we run it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([64, 10])\n" ] } ], "source": [ "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.AdaptiveAvgPool2d(1),\n", " Flatten(),\n", ")\n", "\n", "model.cuda()\n", "\n", "for xb, yb in data.train_dl:\n", " out = (model(*[xb]))\n", " print(out.size())\n", " break" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

PoolFlatten[source][test]

\n", "\n", "> PoolFlatten() → [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential)\n", "\n", "
×

No tests found for PoolFlatten. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Apply [`nn.AdaptiveAvgPool2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) to `x` and then flatten the result. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(PoolFlatten)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can combine these two final layers ([AdaptiveAvgPool2d](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) and [`Flatten`](/layers.html#Flatten)) by using [`PoolFlatten`](/layers.html#PoolFlatten)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([64, 10])\n" ] } ], "source": [ "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " PoolFlatten()\n", ")\n", "\n", "model.cuda()\n", "\n", "for xb, yb in data.train_dl:\n", " out = (model(*[xb]))\n", " print(out.size())\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another use we give to the Lambda function is to resize batches with [`ResizeBatch`](/layers.html#ResizeBatch) when we have a layer that expects a different input than what comes from the previous one." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class ResizeBatch[source][test]

\n", "\n", "> ResizeBatch(**\\*`size`**:`int`) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for ResizeBatch. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Reshape `x` to `size`, keeping batch dim the same size " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ResizeBatch)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[[ 1., -1.],\n", " [ 1., -1.]]])\n" ] } ], "source": [ "a = torch.tensor([[1., -1.], [1., -1.]])[None]\n", "print(a)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[ 1., -1., 1., -1.]])\n" ] } ], "source": [ "out = ResizeBatch(4)\n", "print(out(a))" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Debugger[source][test]

\n", "\n", "> Debugger() :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for Debugger. To contribute a test please refer to this guide and this discussion.

\n", "\n", "A module to debug inside a model. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Debugger, title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The debugger module allows us to peek inside a network while its training and see in detail what is going on. We can see inputs, outputs and sizes at any point in the network.\n", "\n", "For instance, if you run the following:\n", "\n", "``` python\n", "model = nn.Sequential(\n", " nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " Debugger(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", ")\n", "\n", "model.cuda()\n", "\n", "learner = Learner(data, model, metrics=[accuracy])\n", "learner.fit(5)\n", "```\n", "... you'll see something like this:\n", "\n", "```\n", "/home/ubuntu/fastai/fastai/layers.py(74)forward()\n", " 72 def forward(self,x:Tensor) -> Tensor:\n", " 73 set_trace()\n", "---> 74 return x\n", " 75 \n", " 76 class StdUpsample(nn.Module):\n", "\n", "ipdb>\n", "```" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class PixelShuffle_ICNR[source][test]

\n", "\n", "> PixelShuffle_ICNR(**`ni`**:`int`, **`nf`**:`int`=***`None`***, **`scale`**:`int`=***`2`***, **`blur`**:`bool`=***`False`***, **`norm_type`**=***``***, **`leaky`**:`float`=***`None`***) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for PixelShuffle_ICNR. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Upsample by `scale` from `ni` filters to `nf` (default `ni`), using [`nn.PixelShuffle`](https://pytorch.org/docs/stable/nn.html#torch.nn.PixelShuffle), [`icnr`](/layers.html#icnr) init, and [`weight_norm`](https://pytorch.org/docs/stable/nn.html#torch.nn.weight_norm). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(PixelShuffle_ICNR, title_level=3)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class MergeLayer[source][test]

\n", "\n", "> MergeLayer(**`dense`**:`bool`=***`False`***) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for MergeLayer. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Merge a shortcut with the result of the module by adding them or concatenating them if `dense=True`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MergeLayer, title_level=3)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class PartialLayer[source][test]

\n", "\n", "> PartialLayer(**`func`**, **\\*\\*`kwargs`**) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for PartialLayer. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Layer that applies `partial(func, **kwargs)`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(PartialLayer, title_level=3)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class SigmoidRange[source][test]

\n", "\n", "> SigmoidRange(**`low`**, **`high`**) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for SigmoidRange. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Sigmoid module with range `(low,x_max)` " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SigmoidRange, title_level=3)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class SequentialEx[source][test]

\n", "\n", "> SequentialEx(**\\*`layers`**) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for SequentialEx. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Like [`nn.Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential), but with ModuleList semantics, and can access module input " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SequentialEx, title_level=3)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class SelfAttention[source][test]

\n", "\n", "> SelfAttention(**`n_channels`**:`int`) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

Tests found for SelfAttention:

  • pytest -sv tests/test_torch_core.py::test_keep_parameter [source]

To run tests please refer to this guide.

\n", "\n", "Self attention layer for nd. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SelfAttention, title_level=3)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class BatchNorm1dFlat[source][test]

\n", "\n", "> BatchNorm1dFlat(**`num_features`**, **`eps`**=***`1e-05`***, **`momentum`**=***`0.1`***, **`affine`**=***`True`***, **`track_running_stats`**=***`True`***) :: [`BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d)\n", "\n", "
×

No tests found for BatchNorm1dFlat. To contribute a test please refer to this guide and this discussion.

\n", "\n", "[`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), but first flattens leading dimensions " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BatchNorm1dFlat, title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loss functions" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class FlattenedLoss[source][test]

\n", "\n", "> FlattenedLoss(**`func`**, **\\*`args`**, **`axis`**:`int`=***`-1`***, **`floatify`**:`bool`=***`False`***, **`is_2d`**:`bool`=***`True`***, **\\*\\*`kwargs`**)\n", "\n", "
×

No tests found for FlattenedLoss. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Same as `func`, but flattens input and target. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(FlattenedLoss, title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create an instance of `func` with `args` and `kwargs`. When passing an output and target, it\n", "- puts `axis` first in output and target with a transpose\n", "- casts the target to `float` if `floatify=True`\n", "- squeezes the `output` to two dimensions if `is_2d`, otherwise one dimension, squeezes the target to one dimension\n", "- applies the instance of `func`." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

BCEFlat[source][test]

\n", "\n", "> BCEFlat(**\\*`args`**, **`axis`**:`int`=***`-1`***, **`floatify`**:`bool`=***`True`***, **\\*\\*`kwargs`**)\n", "\n", "
×

No tests found for BCEFlat. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Same as [`nn.BCELoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.BCELoss), but flattens input and target. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BCEFlat)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

BCEWithLogitsFlat[source][test]

\n", "\n", "> BCEWithLogitsFlat(**\\*`args`**, **`axis`**:`int`=***`-1`***, **`floatify`**:`bool`=***`True`***, **\\*\\*`kwargs`**)\n", "\n", "
×

No tests found for BCEWithLogitsFlat. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Same as [`nn.BCEWithLogitsLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss), but flattens input and target. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BCEWithLogitsFlat)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

CrossEntropyFlat[source][test]

\n", "\n", "> CrossEntropyFlat(**\\*`args`**, **`axis`**:`int`=***`-1`***, **\\*\\*`kwargs`**)\n", "\n", "
×

No tests found for CrossEntropyFlat. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Same as [`nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss), but flattens input and target. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(CrossEntropyFlat)" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

MSELossFlat[source][test]

\n", "\n", "> MSELossFlat(**\\*`args`**, **`axis`**:`int`=***`-1`***, **`floatify`**:`bool`=***`True`***, **\\*\\*`kwargs`**)\n", "\n", "
×

No tests found for MSELossFlat. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Same as [`nn.MSELoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.MSELoss), but flattens input and target. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MSELossFlat)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class NoopLoss[source][test]

\n", "\n", "> NoopLoss() :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for NoopLoss. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Just returns the mean of the `output`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(NoopLoss)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class WassersteinLoss[source][test]

\n", "\n", "> WassersteinLoss() :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for WassersteinLoss. To contribute a test please refer to this guide and this discussion.

\n", "\n", "For WGAN. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(WassersteinLoss)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Helper functions to create modules" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

bn_drop_lin[source][test]

\n", "\n", "> bn_drop_lin(**`n_in`**:`int`, **`n_out`**:`int`, **`bn`**:`bool`=***`True`***, **`p`**:`float`=***`0.0`***, **`actn`**:`Optional`\\[[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\\]=***`None`***)\n", "\n", "
×

No tests found for bn_drop_lin. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(bn_drop_lin, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [`bn_drop_lin`](/layers.html#bn_drop_lin) function returns a sequence of [batch normalization](https://arxiv.org/abs/1502.03167), [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) and a linear layer. This custom layer is usually used at the end of a model. \n", "\n", "`n_in` represents the size of the input, `n_out` the size of the output, `bn` whether we want batch norm or not, `p` how much dropout, and `actn` (optional parameter) adds an activation function at the end." ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

conv2d[source][test]

\n", "\n", "> conv2d(**`ni`**:`int`, **`nf`**:`int`, **`ks`**:`int`=***`3`***, **`stride`**:`int`=***`1`***, **`padding`**:`int`=***`None`***, **`bias`**=***`False`***, **`init`**:`LayerFunc`=***`'kaiming_normal_'`***) → [`Conv2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d)\n", "\n", "
×

No tests found for conv2d. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create and initialize [`nn.Conv2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d) layer. `padding` defaults to `ks//2`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(conv2d)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

conv2d_trans[source][test]

\n", "\n", "> conv2d_trans(**`ni`**:`int`, **`nf`**:`int`, **`ks`**:`int`=***`2`***, **`stride`**:`int`=***`2`***, **`padding`**:`int`=***`0`***, **`bias`**=***`False`***) → [`ConvTranspose2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d)\n", "\n", "
×

No tests found for conv2d_trans. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create [`nn.ConvTranspose2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d) layer. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(conv2d_trans)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

conv_layer[source][test]

\n", "\n", "> conv_layer(**`ni`**:`int`, **`nf`**:`int`, **`ks`**:`int`=***`3`***, **`stride`**:`int`=***`1`***, **`padding`**:`int`=***`None`***, **`bias`**:`bool`=***`None`***, **`is_1d`**:`bool`=***`False`***, **`norm_type`**:`Optional`\\[[`NormType`](/layers.html#NormType)\\]=***``***, **`use_activ`**:`bool`=***`True`***, **`leaky`**:`float`=***`None`***, **`transpose`**:`bool`=***`False`***, **`init`**:`Callable`=***`'kaiming_normal_'`***, **`self_attention`**:`bool`=***`False`***)\n", "\n", "
×

No tests found for conv_layer. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(conv_layer, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [`conv_layer`](/layers.html#conv_layer) function returns a sequence of [nn.Conv2D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d), [BatchNorm](https://arxiv.org/abs/1502.03167) and a ReLU or [leaky RELU](https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf) activation function.\n", "\n", "`n_in` represents the size of the input, `n_out` the size of the output, `ks` the kernel size, `stride` the stride with which we want to apply the convolutions. `bias` will decide if they have bias or not (if None, defaults to True unless using batchnorm). `norm_type` selects the type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`, otherwise it's a `LeakyReLU` of slope `leaky`. Finally if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`." ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

embedding[source][test]

\n", "\n", "> embedding(**`ni`**:`int`, **`nf`**:`int`) → [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "
×

No tests found for embedding. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(embedding, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create an [embedding layer](https://arxiv.org/abs/1711.09160) with input size `ni` and output size `nf`." ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

relu[source][test]

\n", "\n", "> relu(**`inplace`**:`bool`=***`False`***, **`leaky`**:`float`=***`None`***)\n", "\n", "
×

No tests found for relu. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return a relu activation, maybe `leaky` and `inplace`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(relu)" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

res_block[source][test]

\n", "\n", "> res_block(**`nf`**, **`dense`**:`bool`=***`False`***, **`norm_type`**:`Optional`\\[[`NormType`](/layers.html#NormType)\\]=***``***, **`bottle`**:`bool`=***`False`***, **\\*\\*`conv_kwargs`**)\n", "\n", "
×

No tests found for res_block. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Resnet block of `nf` features. `conv_kwargs` are passed to [`conv_layer`](/layers.html#conv_layer). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(res_block)" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

sigmoid_range[source][test]

\n", "\n", "> sigmoid_range(**`x`**, **`low`**, **`high`**)\n", "\n", "
×

No tests found for sigmoid_range. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Sigmoid function with range `(low, high)` " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(sigmoid_range)" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

simple_cnn[source][test]

\n", "\n", "> simple_cnn(**`actns`**:`Collection`\\[`int`\\], **`kernel_szs`**:`Collection`\\[`int`\\]=***`None`***, **`strides`**:`Collection`\\[`int`\\]=***`None`***, **`bn`**=***`False`***) → [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential)\n", "\n", "
×

No tests found for simple_cnn. To contribute a test please refer to this guide and this discussion.

\n", "\n", "CNN with [`conv_layer`](/layers.html#conv_layer) defined by `actns`, `kernel_szs` and `strides`, plus batchnorm if `bn`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(simple_cnn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Initialization of modules" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

batchnorm_2d[source][test]

\n", "\n", "> batchnorm_2d(**`nf`**:`int`, **`norm_type`**:[`NormType`](/layers.html#NormType)=***``***)\n", "\n", "
×

No tests found for batchnorm_2d. To contribute a test please refer to this guide and this discussion.

\n", "\n", "A batchnorm2d layer with `nf` features initialized depending on `norm_type`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(batchnorm_2d)" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

icnr[source][test]

\n", "\n", "> icnr(**`x`**, **`scale`**=***`2`***, **`init`**=***`'kaiming_normal_'`***)\n", "\n", "
×

No tests found for icnr. To contribute a test please refer to this guide and this discussion.

\n", "\n", "ICNR init of `x`, with `scale` and `init` function. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(icnr)" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

trunc_normal_[source][test]

\n", "\n", "> trunc_normal_(**`x`**:`Tensor`, **`mean`**:`float`=***`0.0`***, **`std`**:`float`=***`1.0`***) → `Tensor`\n", "\n", "
×

No tests found for trunc_normal_. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Truncated normal initialization. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(trunc_normal_)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

icnr[source][test]

\n", "\n", "> icnr(**`x`**, **`scale`**=***`2`***, **`init`**=***`'kaiming_normal_'`***)\n", "\n", "
×

No tests found for icnr. To contribute a test please refer to this guide and this discussion.

\n", "\n", "ICNR init of `x`, with `scale` and `init` function. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(icnr)" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

`NormType`[test]

\n", "\n", "> Enum = [Batch, BatchZero, Weight, Spectral]\n", "\n", "
×

No tests found for NormType. To contribute a test please refer to this guide and this discussion.

\n", "\n", "An enumeration. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(NormType)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**:`Tensor`) → `Tensor`\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Debugger.forward)" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Lambda.forward)" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(AdaptiveConcatPool2d.forward)" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`output`**, **\\*`args`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(NoopLoss.forward)" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(PixelShuffle_ICNR.forward)" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`real`**, **`fake`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(WassersteinLoss.forward)" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MergeLayer.forward)" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SigmoidRange.forward)" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MergeLayer.forward)" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SelfAttention.forward)" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SequentialEx.forward)" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

append[source][test]

\n", "\n", "> append(**`l`**)\n", "\n", "
×

No tests found for append. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SequentialEx.append)" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

extend[source][test]

\n", "\n", "> extend(**`l`**)\n", "\n", "
×

No tests found for extend. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SequentialEx.extend)" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

insert[source][test]

\n", "\n", "> insert(**`i`**, **`l`**)\n", "\n", "
×

No tests found for insert. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SequentialEx.insert)" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(PartialLayer.forward)" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BatchNorm1dFlat.forward)" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Flatten.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] }, { "cell_type": "code", "execution_count": 54, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class View[source][test]

\n", "\n", "> View(**\\*`size`**:`int`) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for View. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Reshape `x` to `size` " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(View)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": 55, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ResizeBatch.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": 56, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(View.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Provides essential functions to building and modifying `Model` architectures.", "title": "layers" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 2 }