{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Torch Core" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This module contains all the basic functions we need in the other modules of the fastai library (split from [`core`](/core.html#core), which contains the functions that do not require pytorch). Its documentation can easily be skipped on a first read, unless you want to know what a given function does." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.imports import *\n", "from fastai.gen_doc.nbdoc import *\n", "from fastai.layers import *\n", "from fastai.torch_core import * " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Global constants" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`AdamW = partial(optim.Adam, betas=(0.9,0.99))`
[source]
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)`
[source]
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`defaults.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')`
[source]
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you are trying to make fastai run on the CPU, simply change the default device: `defaults.device = 'cpu'`. \n", "\n", "Alternatively, if not using wildcard imports: `fastai.torch_core.defaults.device = 'cpu'`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Functions that operate conversions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

batch_to_half[source][test]

\n", "\n", "> batch_to_half(**`b`**:`Collection`\\[`Tensor`\\]) → `Collection`\\[`Tensor`\\]\n", "\n", "

Tests found for batch_to_half:

To run tests please refer to this guide.

\n", "\n", "Set the input of batch `b` to half precision. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(batch_to_half)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
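There is no example above, so here is a minimal sketch. It assumes, as the docstring suggests, that only the input part of an `(input, target)` batch is converted while the targets are left alone:

```python
import torch

x = torch.randn(4, 3, 8, 8)          # float32 input batch
y = torch.tensor([0, 1, 0, 1])       # integer targets
xb, yb = batch_to_half([x, y])
print(xb.dtype, yb.dtype)            # expected: torch.float16 torch.int64
```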

flatten_model[source][test]

\n", "\n", "> flatten_model(**`m`**)\n", "\n", "

No tests found for <lambda>. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(flatten_model, full_name='flatten_model')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Flattens all the layers of `m` into an array. This allows for easy access to the layers of the model and allows you to manipulate the model as if it was an array." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Sequential(\n", " (0): Sequential(\n", " (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n", " (1): ReLU(inplace)\n", " )\n", " (1): Sequential(\n", " (0): Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n", " (1): ReLU(inplace)\n", " )\n", " (2): Sequential(\n", " (0): AdaptiveAvgPool2d(output_size=1)\n", " (1): Flatten()\n", " )\n", ")" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "m = simple_cnn([3,6,12])\n", "m" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),\n", " ReLU(inplace),\n", " Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),\n", " ReLU(inplace),\n", " AdaptiveAvgPool2d(output_size=1),\n", " Flatten()]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "flatten_model(m)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

model2half[source][test]

\n", "\n", "> model2half(**`model`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "

Tests found for model2half:

  • pytest -sv tests/test_callback_fp16.py::test_model2half [source]
  • pytest -sv tests/test_callback_fp16.py::test_model2half_forward [source]

To run tests please refer to this guide.

\n", "\n", "Convert `model` to half precision except the batchnorm layers. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(model2half)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Converting model parameters to half precision allows us to leverage fast `FP16` arithmetic which can speed up the computations by 2-8 times. It also reduces memory consumption allowing us to train deeper models. \n", "\n", "**Note**: Batchnorm layers are not converted to half precision as that may lead to instability in training." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "dtypes of model parameters before model2half: \n", "0.0.weight : torch.float32\n", "0.2.weight : torch.float32\n", "0.2.bias : torch.float32\n", "0.2.running_mean : torch.float32\n", "0.2.running_var : torch.float32\n", "0.2.num_batches_tracked : torch.int64\n", "1.0.weight : torch.float32\n", "1.0.bias : torch.float32\n", "\n", "dtypes of model parameters after model2half: \n", "0.0.weight : torch.float16\n", "0.2.weight : torch.float32\n", "0.2.bias : torch.float32\n", "0.2.running_mean : torch.float32\n", "0.2.running_var : torch.float32\n", "0.2.num_batches_tracked : torch.int64\n", "1.0.weight : torch.float16\n", "1.0.bias : torch.float16\n", "\n" ] } ], "source": [ "m = simple_cnn([3,6,12], bn=True)\n", "\n", "def show_params_dtype(state_dict):\n", " \"\"\"Simple function to pretty print the dtype of the model params\"\"\"\n", " for wt_name, param in state_dict.items():\n", " print(\"{:<30}: {}\".format(wt_name, str(param.dtype)))\n", " print() \n", "\n", "print(\"dtypes of model parameters before model2half: \")\n", "show_params_dtype(m.state_dict())\n", "\n", "# Converting model to half precision\n", "m_half = model2half(m)\n", "\n", "print(\"dtypes of model parameters after model2half: \")\n", "show_params_dtype(m_half.state_dict())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

np2model_tensor[source][test]

\n", "\n", "> np2model_tensor(**`a`**)\n", "\n", "

Tests found for np2model_tensor:

  • pytest -sv tests/test_torch_core.py::test_np2model_tensor [source]

To run tests please refer to this guide.

\n", "\n", "Transform numpy array `a` to a tensor of the same type. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(np2model_tensor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is a wrapper on top of Pytorch's `torch.as_tensor` which converts a numpy array to a torch tensor, and additionally attempts to map all floats to `torch.float32` and all integers to `torch.int64` for consistency in model data. Below is an example demonstrating its behaviour for floating-point numbers; the same applies to integers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Datatype of as': float16, float32, float64\n", "Datatype of bs': torch.float32, torch.float32, torch.float32\n" ] } ], "source": [ "a1 = np.ones((2, 3)).astype(np.float16)\n", "a2 = np.ones((2, 3)).astype(np.float32)\n", "a3 = np.ones((2, 3)).astype(np.float64)\n", "\n", "b1 = np2model_tensor(a1) # Maps to torch.float32\n", "b2 = np2model_tensor(a2) # Maps to torch.float32\n", "b3 = np2model_tensor(a3) # Maps to torch.float32\n", "\n", "print(f\"Datatype of as': {a1.dtype}, {a2.dtype}, {a3.dtype}\")\n", "print(f\"Datatype of bs': {b1.dtype}, {b2.dtype}, {b3.dtype}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

requires_grad[source][test]

\n", "\n", "> requires_grad(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`b`**:`Optional`\\[`bool`\\]=***`None`***) → `Optional`\\[`bool`\\]\n", "\n", "

Tests found for requires_grad:

  • pytest -sv tests/test_torch_core.py::test_requires_grad [source]
  • pytest -sv tests/test_torch_core.py::test_requires_grad_set [source]

Related tests:

  • pytest -sv tests/test_torch_core.py::test_set_bn_eval [source]

To run tests please refer to this guide.

\n", "\n", "If `b` is not set return [`requires_grad`](/torch_core.html#requires_grad) of first param, else set [`requires_grad`](/torch_core.html#requires_grad) on all params as `b` " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(requires_grad)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Performs both getting and setting of the [`requires_grad`](/torch_core.html#requires_grad) attribute of the model's parameters, which decides whether to accumulate gradients or not. \n", "\n", "* If `b` is `None`: the function **gets** the [`requires_grad`](/torch_core.html#requires_grad) of the model's parameters; to be more specific, it returns the [`requires_grad`](/torch_core.html#requires_grad) of the first parameter in the model.\n", "\n", "* Else, if `b` is passed (a boolean value), the [`requires_grad`](/torch_core.html#requires_grad) of all parameters of the model is **set** to `b`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "requires_grad of model: True\n", "requires_grad of model: False\n" ] } ], "source": [ "# Any Pytorch model\n", "m = simple_cnn([3, 6, 12], bn=True)\n", "\n", "# Get the requires_grad of model\n", "print(\"requires_grad of model: {}\".format(requires_grad(m)))\n", "\n", "# Set requires_grad of all params in model to False\n", "requires_grad(m, False)\n", "\n", "# Get the requires_grad of model\n", "print(\"requires_grad of model: {}\".format(requires_grad(m)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

tensor[source][test]

\n", "\n", "> tensor(**`x`**:`Any`, **\\*`rest`**) → `Tensor`\n", "\n", "

Tests found for tensor:

  • pytest -sv tests/test_torch_core.py::test_tensor_with_list [source]
  • pytest -sv tests/test_torch_core.py::test_tensor_with_ndarray [source]
  • pytest -sv tests/test_torch_core.py::test_tensor_with_tensor [source]

Direct tests:

  • pytest -sv tests/test_torch_core.py::test_np2model_tensor [source]
  • pytest -sv tests/test_torch_core.py::test_tensor_array_monkey_patch [source]

To run tests please refer to this guide.

\n", "\n", "Like `torch.as_tensor`, but handle lists too, and can pass multiple vector elements directly. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(tensor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Handy function when you want to convert any list-like object to a tensor, initialize your weights manually, or handle other similar cases.\n", "\n", "**NB**: When passing multiple vectors, all vectors must have the same dimensions. (Obvious, but it can be forgotten sometimes.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([1, 2, 3]) \n", "tensor([1, 2, 3]) \n", "tensor([1, 2, 3]) \n", "tensor([[1, 2],\n", " [3, 4]]) \n" ] } ], "source": [ "# Conversion from any numpy array\n", "b = tensor(np.array([1, 2, 3]))\n", "print(b, type(b))\n", "\n", "# Passing as multiple parameters\n", "b = tensor(1, 2, 3)\n", "print(b, type(b))\n", "\n", "# Passing a single list\n", "b = tensor([1, 2, 3])\n", "print(b, type(b))\n", "\n", "# Can work with multiple vectors / lists\n", "b = tensor([1, 2], [3, 4])\n", "print(b, type(b))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

to_cpu[source][test]

\n", "\n", "> to_cpu(**`b`**:`ItemsList`)\n", "\n", "

No tests found for to_cpu. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Recursively map lists of tensors in `b ` to the cpu. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(to_cpu)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A wrapper on top of Pytorch's `torch.Tensor.cpu()` function, which creates and returns a copy of a tensor, or even a **list** of tensors, on the CPU. As described in Pytorch's docs, if the tensor or list of tensors is already on the CPU, the original object is returned and no copy is made.\n", "\n", "Useful for moving all the parameters of a model to the CPU in a single call." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[tensor([[-0.5932]], device='cuda:0'), tensor([[-0.2867]], device='cuda:0'), tensor([[-1.0616]], device='cuda:0')]\n", "Id of tensors in a: \n", "139974954993416\n", "139977016149120\n", "139974955521008\n", "[tensor([[-0.5932]]), tensor([[-0.2867]]), tensor([[-1.0616]])]\n", "Id of tensors in b:\n", "139974954963016\n", "139974955458280\n", "139974955521152\n", "[tensor([[-0.5932]]), tensor([[-0.2867]]), tensor([[-1.0616]])]\n", "Id of tensors in c:\n", "139974954963016\n", "139974955458280\n", "139974955521152\n" ] } ], "source": [ "if torch.cuda.is_available():\n", " a = [torch.randn((1, 1)).cuda() for i in range(3)]\n", " print(a)\n", " print(\"Id of tensors in a: \")\n", " for i in a: print(id(i))\n", " \n", " # Getting a CPU version of the tensors on the GPU\n", " b = to_cpu(a)\n", " print(b)\n", " print(\"Id of tensors in b:\")\n", " for i in b: print(id(i))\n", " \n", " # Trying to perform to_cpu on a list of tensors already on the CPU\n", " c = to_cpu(b)\n", " print(c)\n", " # The tensors in c have the exact same ids as those in b. No copy was performed.\n", " print(\"Id of tensors in c:\")\n", " for i in c: print(id(i))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

to_data[source][test]

\n", "\n", "> to_data(**`b`**:`ItemsList`)\n", "\n", "

No tests found for to_data. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Recursively map lists of items in `b ` to their wrapped data. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(to_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Returns the data attribute from an object, or collection of objects, that inherits from the [`ItemBase`](/core.html#ItemBase) class. Useful for examining the exact values of the data; it can also be used to work with the data outside of `fastai` classes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Category display names: [Category 7, Category 3]\n", "Unique classes internally represented as: [1, 0]\n" ] } ], "source": [ "# Default example examined\n", "\n", "from fastai import *\n", "from fastai.vision import *\n", "\n", "path = untar_data(URLs.MNIST_SAMPLE)\n", "data = ImageDataBunch.from_folder(path)\n", "\n", "# Examine the labels\n", "ys = list(data.y)\n", "print(\"Category display names: \", [ys[0], ys[-1]])\n", "\n", "print(\"Unique classes internally represented as: \", to_data([ys[0], ys[-1]]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

to_detach[source][test]

\n", "\n", "> to_detach(**`b`**:`Tensors`, **`cpu`**:`bool`=***`True`***)\n", "\n", "

No tests found for to_detach. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Recursively detach lists of tensors in `b `; put them on the CPU if `cpu=True`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(to_detach)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
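A small sketch of typical usage, assuming the behaviour described in the docstring (gradients are dropped and, with the default `cpu=True`, the tensors end up on the CPU):

```python
import torch

x = torch.randn(2, 2, requires_grad=True)
batch = [x * 2, [x + 1, torch.ones(3)]]   # an arbitrary nested structure of tensors
detached = to_detach(batch)
print(detached[0].requires_grad)          # expected: False
print(detached[1][0].device)              # expected: cpu, since cpu=True by default
```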

to_device[source][test]

\n", "\n", "> to_device(**`b`**:`Tensors`, **`device`**:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device))\n", "\n", "

No tests found for to_device. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Recursively put `b` on `device`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(to_device)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
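A quick sketch, using the `defaults.device` constant described at the top of this page; nested lists and tuples are handled recursively:

```python
import torch

batch = [torch.ones(2), (torch.zeros(3), torch.arange(4))]
moved = to_device(batch, defaults.device)
print(moved[0].device)   # cuda:0 when a GPU is available, otherwise cpu
```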

to_half[source][test]

\n", "\n", "> to_half(**`b`**:`Collection`\\[`Tensor`\\]) → `Collection`\\[`Tensor`\\]\n", "\n", "

Tests found for to_half:

  • pytest -sv tests/test_callback_fp16.py::test_to_half [source]

To run tests please refer to this guide.

\n", "\n", "Recursively map lists of tensors in `b ` to FP16. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(to_half)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Converts a tensor, or a list of tensors, to `FP16`, resulting in less memory consumption and faster computation with those tensors. It does not convert `torch.int` types to half precision." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "dtype of as: \ttorch.int64\ttorch.int32\ttorch.int16\ttorch.float64\ttorch.float32\ttorch.float16\n", "dtype of bs: \ttorch.int64\ttorch.int32\ttorch.int16\ttorch.float16\ttorch.float16\ttorch.float16\n" ] } ], "source": [ "a1 = torch.tensor([1, 2], dtype=torch.int64)\n", "a2 = torch.tensor([1, 2], dtype=torch.int32)\n", "a3 = torch.tensor([1, 2], dtype=torch.int16)\n", "a4 = torch.tensor([1, 2], dtype=torch.float64)\n", "a5 = torch.tensor([1, 2], dtype=torch.float32)\n", "a6 = torch.tensor([1, 2], dtype=torch.float16)\n", "\n", "print(\"dtype of as: \", a1.dtype, a2.dtype, a3.dtype, a4.dtype, a5.dtype, a6.dtype, sep=\"\\t\")\n", "\n", "b1, b2, b3, b4, b5, b6 = to_half([a1, a2, a3, a4, a5, a6])\n", "\n", "print(\"dtype of bs: \", b1.dtype, b2.dtype, b3.dtype, b4.dtype, b5.dtype, b6.dtype, sep=\"\\t\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

to_np[source][test]

\n", "\n", "> to_np(**`x`**)\n", "\n", "

No tests found for to_np. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Convert a tensor to a numpy array. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(to_np)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Internally moves the data to the CPU and converts it to the `numpy.ndarray` equivalent of the `torch.Tensor` by calling `torch.Tensor.numpy()`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([1., 2.], dtype=torch.float64) cpu\n", "[1. 2.] \n" ] } ], "source": [ "a = torch.tensor([1, 2], dtype=torch.float64)\n", "\n", "if torch.cuda.is_available():\n", " a = a.cuda()\n", "\n", "print(a, type(a), a.device)\n", "\n", "b = to_np(a)\n", "\n", "print(b, type(b))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

try_int[source][test]

\n", "\n", "> try_int(**`o`**:`Any`) → `Any`\n", "\n", "

No tests found for try_int. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Try to convert `o` to int, default to `o` if not possible. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(try_int)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "12 \n", "[1.5] float64\n", "1 \n", "2 \n", "12.5 \n" ] } ], "source": [ "# Converts floating point numbers to integers\n", "print(try_int(12.5), type(try_int(12.5)))\n", "\n", "# This is a rank-1 ndarray, which ideally should not be converted to int \n", "print(try_int(np.array([1.5])), try_int(np.array([1.5])).dtype)\n", "\n", "# A numpy array with a single element is converted to int\n", "print(try_int(np.array(1.5)), type(try_int(np.array(1.5))))\n", "\n", "print(try_int(torch.tensor(2.5)), type(try_int(torch.tensor(2.5))))\n", "\n", "# Strings are not converted to int (of course)\n", "print(try_int(\"12.5\"), type(try_int(\"12.5\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Functions to deal with model initialization" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

apply_init[source][test]

\n", "\n", "> apply_init(**`m`**, **`init_func`**:`LayerFunc`)\n", "\n", "

Tests found for apply_init:

  • pytest -sv tests/test_torch_core.py::test_apply_init [source]

To run tests please refer to this guide.

\n", "\n", "Initialize all non-batchnorm layers of `m` with `init_func`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(apply_init)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
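For example, to re-initialize all the non-batchnorm layers of a small model with Kaiming initialization, a sketch reusing the `simple_cnn` helper from the examples above:

```python
from torch import nn

m = simple_cnn([3, 6, 12], bn=True)
apply_init(m, nn.init.kaiming_normal_)   # batchnorm layers are left untouched
```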

apply_leaf[source][test]

\n", "\n", "> apply_leaf(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`f`**:`LayerFunc`)\n", "\n", "

Tests found for apply_leaf:

  • pytest -sv tests/test_torch_core.py::test_apply_init [source]

To run tests please refer to this guide.

\n", "\n", "Apply `f` to children of `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(apply_leaf)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
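`f` is applied to `m` itself and then recursively to every submodule, so this can be used to tweak or simply inspect the whole module tree; a small sketch:

```python
m = simple_cnn([3, 6, 12])
apply_leaf(m, lambda module: print(type(module).__name__))   # prints every module in the tree
```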

cond_init[source][test]

\n", "\n", "> cond_init(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`init_func`**:`LayerFunc`)\n", "\n", "

No tests found for cond_init. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Initialize the non-batchnorm layers of `m` with `init_func`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(cond_init)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

in_channels[source][test]

\n", "\n", "> in_channels(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → `List`\\[`int`\\]\n", "\n", "

Tests found for in_channels:

  • pytest -sv tests/test_torch_core.py::test_in_channels [source]
  • pytest -sv tests/test_torch_core.py::test_in_channels_no_weights [source]

To run tests please refer to this guide.

\n", "\n", "Return the shape of the first weight layer in `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(in_channels)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
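In practice this is used to find how many input channels a model expects; a quick sketch (for the `simple_cnn` model used throughout this page, the first weight layer is a `Conv2d` taking 3-channel input):

```python
m = simple_cnn([3, 6, 12])
print(in_channels(m))   # expected: 3
```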

init_default[source][test]

\n", "\n", "> init_default(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`func`**:`LayerFunc`=***`'kaiming_normal_'`***)\n", "\n", "

No tests found for init_default. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Initialize `m` weights with `func` and set `bias` to 0. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(init_default)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Functions to get information of a model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

children[source][test]

\n", "\n", "> children(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → `ModuleList`\n", "\n", "

Tests found for children:

Direct tests:

  • pytest -sv tests/test_torch_core.py::test_range_children [source]

To run tests please refer to this guide.

\n", "\n", "Get children of `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(children)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

children_and_parameters[source][test]

\n", "\n", "> children_and_parameters(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module))\n", "\n", "

No tests found for children_and_parameters. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return the children of `m` and its direct parameters not registered in modules. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(children_and_parameters)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
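A sketch of when this matters: a module that holds a "lone" `nn.Parameter` not registered inside any submodule. The `ScaledLinear` class below is purely illustrative; the point is that the lone parameter is expected to come back wrapped in a [`ParameterModule`](/torch_core.html#ParameterModule) (documented further down), so it is not lost when a model is split into groups.

```python
import torch
from torch import nn

class ScaledLinear(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(4, 4)
        self.scale = nn.Parameter(torch.ones(1))   # direct parameter, not inside a submodule

print(children_and_parameters(ScaledLinear()))
# expected: [Linear(in_features=4, out_features=4, bias=True), ParameterModule()]
```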

first_layer[source][test]

\n", "\n", "> first_layer(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "

No tests found for first_layer. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Retrieve first layer in a module `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(first_layer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

last_layer[source][test]

\n", "\n", "> last_layer(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "

No tests found for last_layer. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Retrieve last layer in a module `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(last_layer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
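Both are small conveniences on top of [`flatten_model`](/torch_core.html#flatten_model); a sketch with the `simple_cnn` model used throughout this page:

```python
m = simple_cnn([3, 6, 12])
print(first_layer(m))   # expected: the first Conv2d
print(last_layer(m))    # expected: the final Flatten layer
```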

num_children[source][test]

\n", "\n", "> num_children(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → `int`\n", "\n", "

No tests found for num_children. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Get number of children modules in `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(num_children)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

one_param[source][test]

\n", "\n", "> one_param(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → `Tensor`\n", "\n", "

No tests found for one_param. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return the first parameter of `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(one_param)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
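Handy for checking which device or dtype a model lives on without iterating over all of its parameters; a quick sketch:

```python
m = simple_cnn([3, 6, 12])
p = one_param(m)
print(p.shape, p.device, p.dtype)   # e.g. torch.Size([6, 3, 3, 3]) cpu torch.float32
```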

range_children[source][test]

\n", "\n", "> range_children(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → `Iterator`\\[`int`\\]\n", "\n", "

Tests found for range_children:

  • pytest -sv tests/test_torch_core.py::test_range_children [source]

To run tests please refer to this guide.

\n", "\n", "Return iterator of len of children of `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(range_children)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
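[`children`](/torch_core.html#children), [`num_children`](/torch_core.html#num_children) and [`range_children`](/torch_core.html#range_children) are thin wrappers around `m.children()`; a combined sketch:

```python
m = simple_cnn([3, 6, 12])
print(children(m))              # the three Sequential blocks
print(num_children(m))          # expected: 3
print(list(range_children(m)))  # expected: [0, 1, 2]
```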

trainable_params[source][test]

\n", "\n", "> trainable_params(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → `ParamList`\n", "\n", "

No tests found for trainable_params. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return list of trainable params in `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(trainable_params)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Functions to deal with BatchNorm layers" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

bn2float[source][test]

\n", "\n", "> bn2float(**`module`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "

No tests found for bn2float. To contribute a test please refer to this guide and this discussion.

\n", "\n", "If `module` is batchnorm don't use half precision. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(bn2float)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

set_bn_eval[source][test]

\n", "\n", "> set_bn_eval(**`m`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module))\n", "\n", "

Tests found for set_bn_eval:

  • pytest -sv tests/test_torch_core.py::test_set_bn_eval [source]

To run tests please refer to this guide.

\n", "\n", "Set bn layers in eval mode for all recursive children of `m`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(set_bn_eval)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
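A sketch of typical usage, assuming (as in the source this page documents) that only batchnorm layers whose parameters are frozen are switched to eval mode:

```python
m = simple_cnn([3, 6, 12], bn=True)
requires_grad(m, False)   # freeze the model first
m.train()
set_bn_eval(m)
print(m[0][2].training)   # expected: False -- the BatchNorm2d layer stays in eval mode
```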

split_no_wd_params[source][test]

\n", "\n", "> split_no_wd_params(**`layer_groups`**:`ModuleList`) → `List`\\[`List`\\[[`Parameter`](https://pytorch.org/docs/stable/nn.html#torch.nn.Parameter)\\]\\]\n", "\n", "

Tests found for split_no_wd_params:

  • pytest -sv tests/test_torch_core.py::test_split_no_wd_params [source]

To run tests please refer to this guide.

\n", "\n", "Separate the parameters in `layer_groups` between `no_wd_types` and bias (`bias_types`) from the rest. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(split_no_wd_params)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is used by the optimizer to determine which parameters should have weight decay applied when the option `bn_wd=False` is used in a [`Learner`](/basic_train.html#Learner)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Functions to get random tensors" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

log_uniform[source][test]

\n", "\n", "> log_uniform(**`low`**, **`high`**, **`size`**:`Optional`\\[`List`\\[`int`\\]\\]=***`None`***) → `FloatOrTensor`\n", "\n", "

No tests found for log_uniform. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Draw 1 or shape=`size` random floats from uniform dist: min=log(`low`), max=log(`high`). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(log_uniform)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([0.5775, 0.7902, 0.6087, 0.5730, 0.8057, 0.8845, 0.8975, 0.5585])" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "log_uniform(0.5,2,(8,))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

rand_bool[source][test]

\n", "\n", "> rand_bool(**`p`**:`float`, **`size`**:`Optional`\\[`List`\\[`int`\\]\\]=***`None`***) → `BoolOrTensor`\n", "\n", "

No tests found for rand_bool. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Draw 1 or shape=`size` random booleans (`True` occuring with probability `p`). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(rand_bool)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([1, 1, 0, 1, 0, 0, 1, 0], dtype=torch.uint8)" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rand_bool(0.5, 8)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

uniform[source][test]

\n", "\n", "> uniform(**`low`**:`Number`, **`high`**:`Number`=***`None`***, **`size`**:`Optional`\\[`List`\\[`int`\\]\\]=***`None`***) → `FloatOrTensor`\n", "\n", "

No tests found for uniform. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Draw 1 or shape=`size` random floats from uniform dist: min=`low`, max=`high`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(uniform)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([0.6432, 0.3110, 0.7588, 0.7058, 0.7121, 0.8552, 0.3352, 0.2620])" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "uniform(0,1,(8,))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

uniform_int[source][test]

\n", "\n", "> uniform_int(**`low`**:`int`, **`high`**:`int`, **`size`**:`Optional`\\[`List`\\[`int`\\]\\]=***`None`***) → `IntOrTensor`\n", "\n", "

No tests found for uniform_int. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Generate int or tensor `size` of ints between `low` and `high` (included). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(uniform_int)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([0, 1, 1, 2, 1, 1, 1, 2])" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "uniform_int(0,2,(8,))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Other functions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class ModelOnCPU[source][test]

\n", "\n", "> ModelOnCPU(**`model`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module))\n", "\n", "

No tests found for ModelOnCPU. To contribute a test please refer to this guide and this discussion.

\n", "\n", "A context manager to evaluate `model` on the CPU inside. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ModelOnCPU, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
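A sketch of typical usage: inside the `with` block the model can be fed CPU tensors, and it is expected to be moved back to its original device on exit.

```python
import torch

m = simple_cnn([3, 6, 12])
x = torch.randn(1, 3, 8, 8)
with ModelOnCPU(m) as cpu_model:
    preds = cpu_model(x)     # evaluated on the CPU
print(preds.shape)           # expected: torch.Size([1, 12])
```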

class NoneReduceOnCPU[source][test]

\n", "\n", "> NoneReduceOnCPU(**`loss_func`**:`LossFunction`)\n", "\n", "

Tests found for NoneReduceOnCPU:

  • pytest -sv tests/test_torch_core.py::test_none_reduce_on_cpu [source]

To run tests please refer to this guide.

\n", "\n", "A context manager to evaluate `loss_func` with none reduce and weights on the CPU inside. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(NoneReduceOnCPU, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
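A sketch of how this is typically used to get per-item losses out of a loss function that would otherwise return a single reduced value:

```python
import torch
import torch.nn.functional as F

preds, targs = torch.randn(4, 10), torch.tensor([0, 1, 2, 3])
with NoneReduceOnCPU(F.cross_entropy) as lf:
    losses = lf(preds, targs)
print(losses.shape)   # expected: torch.Size([4]), one loss per item
```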

class ParameterModule[source][test]

\n", "\n", "> ParameterModule(**`p`**:[`Parameter`](https://pytorch.org/docs/stable/nn.html#torch.nn.Parameter)) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "

No tests found for ParameterModule. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Register a lone parameter `p` in a module. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ParameterModule, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

data_collate[source][test]

\n", "\n", "> data_collate(**`batch`**:`ItemsList`) → `Tensor`\n", "\n", "

No tests found for data_collate. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Convert `batch` items to tensor data. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(data_collate)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

get_model[source][test]

\n", "\n", "> get_model(**`model`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module))\n", "\n", "

No tests found for get_model. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return the model maybe wrapped inside `model`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(get_model)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

grab_idx[source][test]

\n", "\n", "> grab_idx(**`x`**, **`i`**, **`batch_first`**:`bool`=***`True`***)\n", "\n", "

No tests found for grab_idx. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Grab the `i`-th batch in `x`, `batch_first` stating the batch dimension. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(grab_idx)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

logit[source][test]

\n", "\n", "> logit(**`x`**:`Tensor`) → `Tensor`\n", "\n", "

No tests found for logit. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Logit of `x`, clamped to avoid inf. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(logit)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
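A quick sketch: a naive `log(x/(1-x))` would return ±inf at exactly 0 or 1, which is what the clamping protects against (the exact clamp value is an implementation detail):

```python
import torch

p = torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0])
print(logit(p))                   # finite values even at 0 and 1
print(torch.sigmoid(logit(p)))    # approximately recovers p
```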

logit_[source][test]

\n", "\n", "> logit_(**`x`**:`Tensor`) → `Tensor`\n", "\n", "

No tests found for logit_. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Inplace logit of `x`, clamped to avoid inf " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(logit_)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

model_type[source][test]

\n", "\n", "> model_type(**`dtype`**)\n", "\n", "

No tests found for model_type. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return the torch type corresponding to `dtype`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(model_type)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

np_address[source][test]

\n", "\n", "> np_address(**`x`**:`ndarray`) → `int`\n", "\n", "

Tests found for np_address:

Related tests:

  • pytest -sv tests/test_torch_core.py::test_tensor_with_ndarray [source]

To run tests please refer to this guide.

\n", "\n", "Address of `x` in memory. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(np_address)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

split_model[source][test]

\n", "\n", "> split_model(**`model`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`splits`**:`Collection`\\[`Union`\\[[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), `ModuleList`\\]\\], **`want_idxs`**:`bool`=***`False`***)\n", "\n", "

Tests found for split_model:

  • pytest -sv tests/test_torch_core.py::test_split_model [source]

To run tests please refer to this guide.

\n", "\n", "Split `model` according to the layers in `splits`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(split_model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If `splits` is a collection of layers, the model is split sequentially at those layers (each given layer starts a new group). If `want_idxs` is `True`, the corresponding indexes are returned as well. If `splits` is a collection of lists of layers, the model is split into groups according to those lists." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
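A sketch on the `simple_cnn` model defined earlier: splitting at `m[1]` is expected to give two groups, the first containing the first conv block and the second containing everything from `m[1]` onwards.

```python
m = simple_cnn([3, 6, 12])
groups = split_model(m, [m[1]])
print(len(groups))   # expected: 2
print(groups[0])     # expected: a Sequential with the first Conv2d and its ReLU
```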

split_model_idx[source][test]

\n", "\n", "> split_model_idx(**`model`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`idxs`**:`Collection`\\[`int`\\]) → `ModuleList`\n", "\n", "

No tests found for split_model_idx. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Split `model` according to the indexes in `idxs`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(split_model_idx)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

trange_of[source][test]

\n", "\n", "> trange_of(**`x`**)\n", "\n", "

No tests found for trange_of. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create a tensor from `range_of(x)`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(trange_of)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

tensor__array__[source][test]

\n", "\n", "> tensor__array__(**`dtype`**=***`None`***)\n", "\n", "

No tests found for tensor__array__. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(tensor__array__)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`x`**)\n", "\n", "

No tests found for forward. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", ".. note::\n", " Although the recipe for forward pass needs to be defined within\n", " this function, one should call the :class:`Module` instance afterwards\n", " instead of this since the former takes care of running the\n", " registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(ParameterModule.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

to_float[source][test]

\n", "\n", "> to_float(**`b`**:`Collection`\\[`Tensor`\\]) → `Collection`\\[`Tensor`\\]\n", "\n", "

No tests found for to_float. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Recursively map lists of tensors in `b` to full precision (FP32). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(to_float)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

flatten_check[source][test]

\n", "\n", "> flatten_check(**`out`**:`Tensor`, **`targ`**:`Tensor`) → `Tensor`\n", "\n", "

No tests found for flatten_check. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Check that `out` and `targ` have the same number of elements and flatten them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(flatten_check)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Basic functions using pytorch", "title": "torch_core" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }