{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# A basic training loop" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook builds upon the work of the [previous notebook](001a_nn_basics.ipynb) in which we created a simple training loop (including calculating the loss on a validation set) and then a 3-layer CNN using PyTorch's Sequential class.\n", "\n", "Here, we will" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## From the last notebook..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "import pickle, gzip, torch, math, numpy as np, torch.nn.functional as F\n", "from pathlib import Path\n", "from IPython.core.debugger import set_trace\n", "from dataclasses import dataclass\n", "from typing import Any, Collection, Callable, NewType, List, Union, TypeVar, Optional\n", "from functools import partial, reduce\n", "from numbers import Number\n", "\n", "from numpy import array\n", "from torch import nn, optim, tensor, Tensor\n", "from torch.utils.data import TensorDataset, Dataset, DataLoader" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data was downloaded in section 1.1 of the [previous notebook](001a_nn_basics.ipynb), so make sure you have run that code before you continue." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DATA_PATH = Path('data')\n", "PATH = DATA_PATH/'mnist'\n", "\n", "with gzip.open(PATH/'mnist.pkl.gz', 'rb') as f:\n", " ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')\n", "\n", "x_train,y_train,x_valid,y_valid = map(tensor, (x_train,y_train,x_valid,y_valid))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After creating our training and validation sets, we print out the min and max to get a sense for the range of feature values. It is always a good idea to inspect your data. In the case of the MNIST dataset, the x-values for each training example correspond to pixel values, that range from 0 to ~1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x_train.min(),x_train.max()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we try to look inside the data, we will see that it's mostly zeros:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x_train" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's find the index of the first non-zero value of the first image and look at a few values in its vicinity:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "idx = x_train[0].nonzero()[0]\n", "x_train[0][idx-3:idx+15]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we define a few training parameters:\n", "\n", "* `bs`: batch size\n", "* `epochs`: how many full training cycles to perform on the training set\n", "* `lr`: learning rate\n", "\n", "[Here is a reference](https://github.com/fastai/fastai_pytorch/blob/master/docs/abbr.md) for these and other abbreviations used as variable names. 
\n", "\n", "The fast.ai library differs from PEP 8 and instead follows conventions developed around the [APL](https://en.wikipedia.org/wiki/APL_\\(programming_language\\)) / [J](https://en.wikipedia.org/wiki/J_\\(programming_language\\)) / [K](https://en.wikipedia.org/wiki/K_\\(programming_language\\)) programming languages (all of which are centered around multi-dimensional arrays), which are more concise and closer to math notation. [Here is a more detailed explanation](https://github.com/fastai/fastai/blob/master/docs/style.md) of the fast.ai style guide." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bs=64\n", "epochs=2\n", "lr=0.2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "PyTorch's [TensorDataset](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#TensorDataset) is a Dataset wrapping tensors. It gives us a way to iterate, index, and slice along the first dimension of a tensor. This will make it easier to access both the independent and dependent variables in the same line as we train." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_ds = TensorDataset(x_train, y_train)\n", "valid_ds = TensorDataset(x_valid, y_valid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are using the same `loss_batch`, `fit`, and `Lambda` as were defined in the previous notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "Rank0Tensor = NewType('OneEltTensor', Tensor)\n", "LossFunction = Callable[[Tensor, Tensor], Rank0Tensor]\n", "Model = nn.Module\n", "\n", "def is_listy(x:Any)->bool: return isinstance(x, (tuple,list))\n", "\n", "def loss_batch(model:Model, xb:Tensor, yb:Tensor, \n", " loss_fn:LossFunction, opt:optim.Optimizer=None):\n", " \"Calculate loss for the batch `xb,yb` and backprop with `opt`\"\n", " if not is_listy(xb): xb = [xb]\n", " if not is_listy(yb): yb = [yb]\n", " loss = loss_fn(model(*xb), *yb)\n", "\n", " if opt is not None:\n", " loss.backward()\n", " opt.step()\n", " opt.zero_grad()\n", " \n", " return loss.item(), len(yb)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def fit(epochs:int, model:Model, loss_fn:LossFunction, \n", " opt:optim.Optimizer, train_dl:DataLoader, valid_dl:DataLoader):\n", " \"Train `model` on `train_dl` with `optim` then validate against `valid_dl`\"\n", " for epoch in range(epochs):\n", " model.train()\n", " for xb,yb in train_dl: loss,_ = loss_batch(model, xb, yb, loss_fn, opt)\n", "\n", " model.eval()\n", " with torch.no_grad():\n", " losses,nums = zip(*[loss_batch(model, xb, yb, loss_fn)\n", " for xb,yb in valid_dl])\n", " val_loss = np.sum(np.multiply(losses,nums)) / np.sum(nums)\n", "\n", " print(epoch, val_loss)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "LambdaFunc = Callable[[Tensor],Tensor]\n", "class Lambda(nn.Module):\n", " \"An easy way to create a pytorch layer for a simple `func`\"\n", " def __init__(self, func:LambdaFunc):\n", " \"create a layer that simply calls `func` with `x`\"\n", " super().__init__()\n", " self.func=func\n", " \n", " def forward(self, x): return self.func(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simplify nn.Sequential layers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a reminder, our 3-layer CNN from the previous notebook 
was defined:\n", "\n", "```\n", "model = nn.Sequential(\n", " Lambda(lambda x: x.view(-1,1,28,28)),\n", " nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n", " nn.AvgPool2d(4),\n", " Lambda(lambda x: x.view(x.size(0),-1))\n", ")\n", "```\n", "\n", "Let's refactor this a bit to make it more readable, and to make the components more reusable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def noop(x): return x\n", "\n", "def ResizeBatch(*size:int) -> Tensor: \n", " \"Layer that resizes x to `size`, good for connecting mismatched layers\"\n", " return Lambda(lambda x: x.view((-1,)+size))\n", "def Flatten()->Tensor: \n", " \"Flattens `x` to a single dimension, often used at the end of a model\"\n", " return Lambda(lambda x: x.view((x.size(0), -1)))\n", "def PoolFlatten()->nn.Sequential:\n", " \"Apply `nn.AdaptiveAvgPool2d` to `x` and then flatten the result\"\n", " return nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten())\n", "\n", "def conv2d(ni:int, nf:int, ks:int=3, stride:int=1, padding:int=None, bias=False) -> nn.Conv2d:\n", " \"Create `nn.Conv2d` layer: `ni` inputs, `nf` outputs, `ks` kernel size. `padding` defaults to `k//2`\"\n", " if padding is None: padding = ks//2\n", " return nn.Conv2d(ni, nf, kernel_size=ks, stride=stride, padding=padding, bias=bias)\n", "\n", "def conv2d_relu(ni:int, nf:int, ks:int=3, stride:int=1, \n", " padding:int=None, bn:bool=False) -> nn.Sequential:\n", " \"Create a `conv2d` layer with `nn.ReLU` activation and optional(`bn`) `nn.BatchNorm2d`\"\n", " layers = [conv2d(ni, nf, ks=ks, stride=stride, padding=padding), nn.ReLU()]\n", " if bn: layers.append(nn.BatchNorm2d(nf))\n", " return nn.Sequential(*layers)\n", "\n", "def conv2d_trans(ni:int, nf:int, ks:int=2, stride:int=2, padding:int=0) -> nn.ConvTranspose2d:\n", " \"Create `nn.nn.ConvTranspose2d` layer: `ni` inputs, `nf` outputs, `ks` kernel size. `padding` defaults to 0\"\n", " return nn.ConvTranspose2d(ni, nf, kernel_size=ks, stride=stride, padding=padding)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using our newly defined layers and functions, we can instead now define the same networks as:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = nn.Sequential(\n", " ResizeBatch(1,28,28),\n", " conv2d_relu(1, 16), \n", " conv2d_relu(16, 16),\n", " conv2d_relu(16, 10),\n", " PoolFlatten()\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we will nearly always use small kernels of size 3 due to the reasons presented in section 2.3 in [this paper](https://arxiv.org/pdf/1409.1556.pdf) (a few small kernels achieve a receptive field of the same dimension as one bigger kernel while at the same time achieving increased discriminative power and using fewer parameters). 
\n", "\n", "We will use the same `get_data method` as defined in the previous notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_data(train_ds, valid_ds, bs):\n", " return (DataLoader(train_ds, batch_size=bs, shuffle=True),\n", " DataLoader(valid_ds, batch_size=bs*2))\n", "\n", "train_dl,valid_dl = get_data(train_ds, valid_ds, bs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Set loss function**\n", "\n", "[Here](https://rdipietro.github.io/friendly-intro-to-cross-entropy-loss/) is tutorial explaining why cross entropy is a reasonable loss function for classifciation tasks." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "loss_fn = F.cross_entropy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Set optimizer**\n", "\n", "We stick with stochastic gradient descent without momentum as our optimizer. This is a basic optimizer and it is [easy to understand](http://ruder.io/optimizing-gradient-descent/index.html#stochasticgradientdescent). We will move into better optimizers as we go forward." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "opt = optim.SGD(model.parameters(), lr=lr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Test our loss function**\n", "\n", "We try out our loss function on one batch of X features and y targets to make sure it's working correctly." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "loss_fn(model(x_valid[0:bs]), y_valid[0:bs])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Fit**\n", "\n", "Everything looks ready, we call the fit function we developed earlier for two epochs to confirm that the model learns." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fit(epochs, model, loss_fn, opt, train_dl, valid_dl)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Transformations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are going to refactor some of the data transformations out of the network and into a pipeline that is applied to the data being fed into the Dataloders.\n", "\n", "This is more flexible, simplifies the model, and will be useful later when we want to apply additional transformations, like data augmentation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Define transformations**\n", "\n", "In this example our only transformation will be *mnist2image*. This is a utility function to reshape our features into 28x28 arrays.\n", "\n", "X is a batch of features, where the first dimension is the number of samples in the batch and the remaining dimensions define the shape of the training example. y is the target variable to be learned, in this case, it is an integer representing one of 10 image classes.\n", "\n", "With MNIST data, the X features start out as a 1x784 vector and we want to convert the features to 1x28x28 images (see line 62). This helper function does that for an entire batch of features." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def mnist2image(b): return b.view(1,28,28)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "@dataclass\n", "class DatasetTfm(Dataset):\n", " \"Applies `tfm` to `ds`\"\n", " ds: Dataset\n", " tfm: Callable = None\n", " \n", " def __len__(self): return len(self.ds)\n", " \n", " def __getitem__(self,idx:int):\n", " \"Apply `tfm` to `x` and return `(x[idx],y[idx])`\"\n", " x,y = self.ds[idx]\n", " if self.tfm is not None: x = self.tfm(x)\n", " return x,y" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DatasetTfm.__len__.__doc__" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_tds = DatasetTfm(train_ds, mnist2image)\n", "valid_tds = DatasetTfm(valid_ds, mnist2image)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_data(train_ds, valid_ds, bs):\n", " return (DataLoader(train_ds, bs, shuffle=True),\n", " DataLoader(valid_ds, bs*2, shuffle=False))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_dl,valid_dl = get_data(train_tds, valid_tds, bs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We make some checks to make sure that *mnist2image* is working correctly:\n", "1. The input and output shapes are as expected\n", "2. The input and output data (features) are the same" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x,y = next(iter(valid_dl))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "valid_ds[0][0].shape, x[0].shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "torch.allclose(valid_ds[0][0], x[0].view(-1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Refactor network" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Define layer types and loop over them**\n", "\n", "When a layer type is used more than once in a contiguous fashion (one after the other), it makes sense to define a function for that layer type and then use that function to build our model function. \n", "\n", "That is what we do here with *conv2_relu* with which we avoid the three subsequent lines of code in line 12 (this saving becomes more significant in deeper networks)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def simple_cnn(actns:Collection[int], kernel_szs:Collection[int], \n", " strides:Collection[int]) -> nn.Sequential:\n", " \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n", " layers = [conv2d_relu(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n", " for i in range(len(strides))]\n", " layers.append(PoolFlatten())\n", " return nn.Sequential(*layers)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_model():\n", " model = simple_cnn([1,16,16,10], [3,3,3], [2,2,2])\n", " return model, optim.SGD(model.parameters(), lr=lr)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model,opt = get_model()\n", "model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fit(epochs, model, loss_fn, opt, train_dl, valid_dl)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## CUDA" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Run in GPU and add progress bar**\n", "\n", "To run our Pytorch networks in the GPU we have to specify it in the code. This is done by setting *torch.device('cuda')*. We will also add a progress bar to keep track of the progress during training. This we accomplish with [fastprogress](https://github.com/fastai/fastprogress) package.\n", "\n", "We integrate both these features into a custom Dataloader which we build on top of the Pytorch Dataloader." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def ifnone(a:bool,b:Any):\n", " \"`a` if its not None, otherwise `b`\"\n", " return b if a is None else a\n", "\n", "default_device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\n", "Tensors = Union[Tensor, Collection['Tensors']]\n", "\n", "def to_device(b:Tensors, device:torch.device):\n", " \"Ensure `b` is on `device`\"\n", " device = ifnone(device, default_device)\n", " if is_listy(b): return [to_device(o, device) for o in b]\n", " return b.to(device)\n", "\n", "@dataclass\n", "class DeviceDataLoader():\n", " \"`DataLoader` that ensures batches from `dl` are on `device`\"\n", " dl: DataLoader\n", " device: torch.device\n", "\n", " def __len__(self) -> int: return len(self.dl)\n", " def proc_batch(self,b:Tensors): return to_device(b, self.device)\n", "\n", " def __iter__(self)->Tensors:\n", " \"Ensure batches from `dl` are on `device` as we iterate\"\n", " self.gen = map(self.proc_batch, self.dl)\n", " return iter(self.gen)\n", "\n", " @classmethod\n", " def create(cls, *args, device:torch.device=default_device, **kwargs): return cls(DataLoader(*args, **kwargs), device=device)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Tensors" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_data(train_ds, valid_ds, bs):\n", " return (DeviceDataLoader.create(train_ds, bs, shuffle=True, num_workers=2),\n", " DeviceDataLoader.create(valid_ds, bs*2, shuffle=False, num_workers=2))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_dl,valid_dl = get_data(train_tds, valid_tds, bs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_model():\n", " model = simple_cnn([1,16,16,10], 
[3,3,3], [2,2,2]).to(default_device)\n", " return model, optim.SGD(model.parameters(), lr=lr)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model,opt = get_model()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x,y = next(iter(valid_dl))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x.type(),y.type()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def fit(epochs:int, model:Model, loss_fn:LossFunction, \n", " opt:optim.Optimizer, train_dl:DataLoader, valid_dl:DataLoader) -> None:\n", " \"Train `model` for `epochs` with `loss_fn` and `opt`, validating on `valid_dl`\"\n", " for epoch in range(epochs):\n", " model.train()\n", " for xb,yb in train_dl: loss,_ = loss_batch(model, xb, yb, loss_fn, opt)\n", "\n", " model.eval()\n", " with torch.no_grad():\n", " losses,nums = zip(*[loss_batch(model, xb, yb, loss_fn)\n", " for xb,yb in valid_dl])\n", " val_loss = np.sum(np.multiply(losses,nums)) / np.sum(nums)\n", "\n", " print(epoch, val_loss)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fit(epochs, model, loss_fn, opt, train_dl, valid_dl)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learner" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Define learner**\n", "\n", "Finally, we define a `Learner` class to close the gap between our loaded data and our model. The `Learner` receives the loaded data (after transformations) and the model, and lets us call `fit()` on it to start the training phase." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "TItem = TypeVar('TItem')\n", "TfmCallable = Callable[[TItem],TItem]\n", "TfmList = Union[TfmCallable, Collection[TfmCallable]]\n", "Tfms = Optional[TfmList]\n", "\n", "@dataclass\n", "class DataBunch():\n", " \"Bind `train_dl`, `valid_dl` to `device`\"\n", " train_dl:DataLoader\n", " valid_dl:DataLoader\n", " device:torch.device=None\n", "\n", " @classmethod\n", " def create(cls, train_ds:Dataset, valid_ds:Dataset, bs:int=64, \n", " train_tfm:Tfms=None, valid_tfm:Tfms=None, device:torch.device=None, **kwargs):\n", " return cls(DeviceDataLoader.create(DatasetTfm(train_ds, train_tfm), bs, shuffle=True, device=device, **kwargs),\n", " DeviceDataLoader.create(DatasetTfm(valid_ds, valid_tfm), bs*2, shuffle=False, device=device, **kwargs),\n", " device=device)\n", "\n", "class Learner():\n", " \"Train `model` on `data` for `epochs` using learning rate `lr` and `opt_fn` to optimize training\"\n", " def __init__(self, data:DataBunch, model:Model):\n", " self.data,self.model = data,to_device(model, data.device)\n", "\n", " def fit(self, epochs, lr, opt_fn=optim.SGD):\n", " opt = opt_fn(self.model.parameters(), lr=lr)\n", " loss_fn = F.cross_entropy\n", " fit(epochs, self.model, loss_fn, opt, self.data.train_dl, self.data.valid_dl)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = DataBunch.create(train_ds, valid_ds, bs=bs, train_tfm=mnist2image, valid_tfm=mnist2image)\n", "model = simple_cnn([1,16,16,10], [3,3,3], [2,2,2])\n", "learner = Learner(data, model)\n", "opt_fn = partial(optim.SGD, momentum=0.9)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learner.fit(1, lr/5, opt_fn=opt_fn)" ] }, { "cell_type": 
"code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learner.fit(2, lr, opt_fn=opt_fn)\n", "learner.fit(1, lr/5, opt_fn=opt_fn)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }