{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#default_exp data.transforms" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "from local.torch_basics import *\n", "from local.test import *\n", "from local.data.core import *\n", "from local.data.load import *\n", "from local.data.external import *" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from local.notebook.showdoc import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Helper functions for processing data and basic transforms\n", "\n", "> Functions for getting, splitting, and labeling data, as well as generic transforms" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Get, split, and label" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For most data source creation we need functions to get a list of items, split them into train/valid sets, and label them. fastai provides functions to make each of these steps easy (especially when combined with `fastai.data.blocks`)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we'll look at functions that *get* a list of items (generally file names).\n", "\n", "We'll use *tiny MNIST* (a subset of MNIST with just two classes, `7`s and `3`s) for our examples/tests throughout this page."
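As a rough sketch of the whole get/split/label pipeline in plain Python (a toy illustration with made-up file names and hypothetical helpers, not the fastai API):

```python
import random

# 1. "get": collect a list of items (made-up file names here)
items = [f"train/{c}/{i}.png" for c in ("3", "7") for i in range(5)]

# 2. "split": shuffle indices and cut off a validation fraction
random.seed(42)
idxs = list(range(len(items)))
random.shuffle(idxs)
cut = int(0.2 * len(items))
valid_idx, train_idx = idxs[:cut], idxs[cut:]

# 3. "label": derive a label from each item (its parent directory name)
def label_func(item): return item.split("/")[-2]

labeled_train = [(items[i], label_func(items[i])) for i in train_idx]
```

The functions in this module implement each of these three steps over real files and DataFrames.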
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(#2) [/home/sgugger/.fastai/data/mnist_tiny/train/3,/home/sgugger/.fastai/data/mnist_tiny/train/7]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "path = untar_data(URLs.MNIST_TINY)\n", "(path/'train').ls()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "def _get_files(p, fs, extensions=None):\n", " p = Path(p)\n", " res = [p/f for f in fs if not f.startswith('.')\n", " and ((not extensions) or f'.{f.split(\".\")[-1].lower()}' in extensions)]\n", " return res" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "def get_files(path, extensions=None, recurse=True, folders=None):\n", " \"Get all the files in `path` with optional `extensions`, optionally with `recurse`, only in `folders`, if specified.\"\n", " path = Path(path)\n", " folders=L(folders)\n", " extensions = setify(extensions)\n", " extensions = {e.lower() for e in extensions}\n", " if recurse:\n", " res = []\n", " for i,(p,d,f) in enumerate(os.walk(path)): # returns (dirpath, dirnames, filenames)\n", " if len(folders) !=0 and i==0: d[:] = [o for o in d if o in folders]\n", " else: d[:] = [o for o in d if not o.startswith('.')]\n", " res += _get_files(p, f, extensions)\n", " else:\n", " f = [o.name for o in os.scandir(path) if o.is_file()]\n", " res = _get_files(path, f, extensions)\n", " return L(res)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is the most general way to grab a bunch of file names from disk. If you pass `extensions` (including the `.`) then returned file names are filtered by that list. Only those files directly in `path` are included, unless you pass `recurse`, in which case all child folders are also searched recursively. 
`folders` is an optional list of directories to limit the search to." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(#709) [/home/sgugger/.fastai/data/mnist_tiny/train/3/8055.png,/home/sgugger/.fastai/data/mnist_tiny/train/3/9466.png,/home/sgugger/.fastai/data/mnist_tiny/train/3/7778.png,/home/sgugger/.fastai/data/mnist_tiny/train/3/8824.png,/home/sgugger/.fastai/data/mnist_tiny/train/3/8228.png,/home/sgugger/.fastai/data/mnist_tiny/train/3/9620.png,/home/sgugger/.fastai/data/mnist_tiny/train/3/8790.png,/home/sgugger/.fastai/data/mnist_tiny/train/3/7497.png,/home/sgugger/.fastai/data/mnist_tiny/train/3/7383.png,/home/sgugger/.fastai/data/mnist_tiny/train/3/9324.png...]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "t3 = get_files(path/'train'/'3', extensions='.png', recurse=False)\n", "t7 = get_files(path/'train'/'7', extensions='.png', recurse=False)\n", "t = get_files(path/'train', extensions='.png', recurse=True)\n", "test_eq(len(t), len(t3)+len(t7))\n", "test_eq(len(get_files(path/'train'/'3', extensions='.jpg', recurse=False)),0)\n", "test_eq(len(t), len(get_files(path, extensions='.png', recurse=True, folders='train')))\n", "t" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#hide\n", "test_eq(len(get_files(path/'train'/'3', recurse=False)),346)\n", "test_eq(len(get_files(path, extensions='.png', recurse=True, folders=['train', 'test'])),729)\n", "test_eq(len(get_files(path, extensions='.png', recurse=True, folders='train')),709)\n", "test_eq(len(get_files(path, extensions='.png', recurse=True, folders='training')),0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's often useful to be able to create functions with customized behavior. `fastai.data` generally uses functions named as CamelCase verbs ending in `er` to create these functions. 
`FileGetter` is a simple example of such a function creator." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def FileGetter(suf='', extensions=None, recurse=True, folders=None):\n", " \"Create `get_files` partial function that searches path suffix `suf`, only in `folders`, if specified, and passes along args\"\n", " def _inner(o, extensions=extensions, recurse=recurse, folders=folders):\n", " return get_files(o/suf, extensions, recurse, folders)\n", " return _inner" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fpng = FileGetter(extensions='.png', recurse=False)\n", "test_eq(len(t7), len(fpng(path/'train'/'7')))\n", "test_eq(len(t), len(fpng(path/'train', recurse=True)))\n", "fpng_r = FileGetter(extensions='.png', recurse=True)\n", "test_eq(len(t), len(fpng_r(path/'train')))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def get_image_files(path, recurse=True, folders=None):\n", " \"Get image files in `path` recursively, only in `folders`, if specified.\"\n", " return get_files(path, extensions=image_extensions, recurse=recurse, folders=folders)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is simply `get_files` called with a list of standard image extensions." 
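The extension list itself comes straight from the standard library's `mimetypes` table; a minimal standalone sketch of the same lookup (the `is_image_file` helper is hypothetical, not part of fastai):

```python
import mimetypes

# Every extension whose registered MIME type is image/* (e.g. .png, .jpg, .gif)
image_extensions = {k for k, v in mimetypes.types_map.items() if v.startswith('image/')}

# Extensions are stored lowercase with a leading dot, so a file's suffix
# can be tested for membership directly
def is_image_file(fname):
    suf = '.' + fname.rsplit('.', 1)[-1].lower() if '.' in fname else ''
    return suf in image_extensions
```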
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_eq(len(t), len(get_image_files(path, recurse=True, folders='train')))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def ImageGetter(suf='', recurse=True, folders=None):\n", " \"Create `get_image_files` partial function that searches path suffix `suf` and passes along `kwargs`, only in `folders`, if specified.\"\n", " def _inner(o, recurse=recurse, folders=folders): return get_image_files(o/suf, recurse, folders)\n", " return _inner" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Same as `FileGetter`, but for image extensions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_eq(len(get_files(path/'train', extensions='.png', recurse=True, folders='3')),\n", " len(ImageGetter( 'train', recurse=True, folders='3')(path)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def get_text_files(path, recurse=True, folders=None):\n", " \"Get text files in `path` recursively, only in `folders`, if specified.\"\n", " return get_files(path, extensions=['.txt'], recurse=recurse, folders=folders)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Split" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next set of functions is used to *split* data into training and validation sets. The functions return two lists: one of indices or masks for the training set, and one for the validation set."
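This contract can be sketched with a predicate-based splitter in plain Python (a hypothetical `mask_splitter`, shown only to illustrate the two-lists-of-indices convention, not one of the fastai splitters):

```python
def mask_splitter(func):
    "Return a splitter sending items where `func` is True to the validation set."
    def _inner(items):
        mask = [bool(func(o)) for o in items]
        train = [i for i, m in enumerate(mask) if not m]
        valid = [i for i, m in enumerate(mask) if m]
        return train, valid
    return _inner

# Every third item goes to validation; the two index lists partition the items
trn, val = mask_splitter(lambda o: o % 3 == 0)(list(range(10)))
```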
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "def RandomSplitter(valid_pct=0.2, seed=None, **kwargs):\n", " \"Create function that splits `items` between train/val with `valid_pct` randomly.\"\n", " def _inner(o, **kwargs):\n", " if seed is not None: torch.manual_seed(seed)\n", " rand_idx = L(int(i) for i in torch.randperm(len(o)))\n", " cut = int(valid_pct * len(o))\n", " return rand_idx[cut:],rand_idx[:cut]\n", " return _inner" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "src = list(range(30))\n", "f = RandomSplitter(seed=42)\n", "trn,val = f(src)\n", "assert 00 else []\n", " \n", " def __call__(self, o, **kwargs): return detuplify(tuple(self._do_one(o, c) for c in self.cols))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`cols` can be a list of column names or a list of indices (or a mix of both). If `label_delim` is passed, the result is split using it." 
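A stripped-down version of this behavior over plain dictionaries (a hypothetical `col_reader`, not fastai's `ColReader`) shows how `pref`, `suff`, and `label_delim` interact:

```python
def col_reader(cols, pref='', suff='', label_delim=None):
    "Read `cols` from a row (a plain dict here); split on `label_delim` if given."
    cols = cols if isinstance(cols, list) else [cols]
    def _one(row, c):
        o = row[c]
        if label_delim is not None:
            return o.split(label_delim) if len(o) > 0 else []
        return f'{pref}{o}{suff}'
    def _inner(row):
        res = tuple(_one(row, c) for c in cols)
        return res[0] if len(res) == 1 else res  # "detuplify" a single column
    return _inner

f = col_reader('a', pref='0', suff='1')
```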
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame({'a': 'a b c d'.split(), 'b': ['1 2', '0', '', '1 2 3']})\n", "f = ColReader('a', pref='0', suff='1')\n", "test_eq([f(o) for o in df.itertuples()], '0a1 0b1 0c1 0d1'.split())\n", "\n", "f = ColReader('b', label_delim=' ')\n", "test_eq([f(o) for o in df.itertuples()], [['1', '2'], ['0'], [], ['1', '2', '3']])\n", "\n", "df['a1'] = df['a']\n", "f = ColReader(['a', 'a1'], pref='0', suff='1')\n", "test_eq([f(o) for o in df.itertuples()], [('0a1', '0a1'), ('0b1', '0b1'), ('0c1', '0c1'), ('0d1', '0d1')])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Categorize -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "class CategoryMap(CollBase):\n", " \"Collection of categories with the reverse mapping in `o2i`\"\n", " def __init__(self, col, sort=True, add_na=False):\n", " if is_categorical_dtype(col): items = L(col.cat.categories, use_list=True)\n", " else:\n", " if not hasattr(col,'unique'): col = L(col, use_list=True)\n", " # `o==o` is the generalized definition of non-NaN used by Pandas\n", " items = L(o for o in col.unique() if o==o)\n", " if sort: items = items.sorted()\n", " self.items = '#na#' + items if add_na else items\n", " self.o2i = defaultdict(int, self.items.val2idx()) if add_na else dict(self.items.val2idx())\n", " def __eq__(self,b): return all_equal(b,self)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t = CategoryMap([4,2,3,4])\n", "test_eq(t, [2,3,4])\n", "test_eq(t.o2i, {2:0,3:1,4:2})\n", "test_fail(lambda: t.o2i['unseen label'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t = CategoryMap([4,2,3,4], add_na=True)\n", "test_eq(t, ['#na#',2,3,4])\n", "test_eq(t.o2i, {'#na#':0,2:1,3:2,4:3})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, 
"outputs": [], "source": [ "t = CategoryMap(pd.Series([4,2,3,4]), sort=False)\n", "test_eq(t, [4,2,3])\n", "test_eq(t.o2i, {4:0,2:1,3:2})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "col = pd.Series(pd.Categorical(['M','H','L','M'], categories=['H','M','L'], ordered=True))\n", "t = CategoryMap(col)\n", "test_eq(t, ['H','M','L'])\n", "test_eq(t.o2i, {'H':0,'M':1,'L':2})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "class Categorize(Transform):\n", " \"Reversible transform of category string to `vocab` id\"\n", " loss_func,order=CrossEntropyLossFlat(),1\n", " def __init__(self, vocab=None, add_na=False):\n", " self.add_na = add_na\n", " self.vocab = None if vocab is None else CategoryMap(vocab, add_na=add_na)\n", "\n", " def setups(self, dsrc):\n", " if self.vocab is None and dsrc is not None: self.vocab = CategoryMap(dsrc, add_na=self.add_na)\n", " self.c = len(self.vocab)\n", "\n", " def encodes(self, o): return TensorCategory(self.vocab.o2i[o])\n", " def decodes(self, o): return Category (self.vocab [o])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "class Category(str, ShowTitle): _show_args = {'label': 'category'}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cat = Categorize()\n", "tds = DataSource(['cat', 'dog', 'cat'], tfms=[cat])\n", "test_eq(cat.vocab, ['cat', 'dog'])\n", "test_eq(cat('cat'), 0)\n", "test_eq(cat.decode(1), 'dog')\n", "test_stdout(lambda: show_at(tds,2), 'cat')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cat = Categorize(add_na=True)\n", "tds = DataSource(['cat', 'dog', 'cat'], tfms=[cat])\n", "test_eq(cat.vocab, ['#na#', 'cat', 'dog'])\n", "test_eq(cat('cat'), 1)\n", "test_eq(cat.decode(2), 'dog')\n", "test_stdout(lambda: show_at(tds,2), 'cat')" ] }, { 
"cell_type": "markdown", "metadata": {}, "source": [ "## Multicategorize -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "class MultiCategorize(Categorize):\n", " \"Reversible transform of multi-category strings to `vocab` id\"\n", " loss_func,order=BCEWithLogitsLossFlat(),1\n", " def __init__(self, vocab=None, add_na=False):\n", " self.add_na = add_na\n", " self.vocab = None if vocab is None else CategoryMap(vocab, add_na=add_na)\n", " \n", " def setups(self, dsrc):\n", " if not dsrc: return\n", " if self.vocab is None:\n", " vals = set()\n", " for b in dsrc: vals = vals.union(set(b))\n", " self.vocab = CategoryMap(list(vals), add_na=self.add_na)\n", "\n", " def encodes(self, o): return TensorMultiCategory([self.vocab.o2i[o_] for o_ in o])\n", " def decodes(self, o): return MultiCategory ([self.vocab [o_] for o_ in o])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "class MultiCategory(L):\n", " def show(self, ctx=None, sep=';', color='black', **kwargs):\n", " return show_title(sep.join(self.map(str)), ctx=ctx, color=color, **kwargs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cat = MultiCategorize()\n", "tds = DataSource([['b', 'c'], ['a'], ['a', 'c'], []], tfms=[cat])\n", "test_eq(tds[3][0], tensor([]))\n", "test_eq(cat.vocab, ['a', 'b', 'c'])\n", "test_eq(cat(['a', 'c']), tensor([0,2]))\n", "test_eq(cat([]), tensor([]))\n", "test_eq(cat.decode([1]), ['b'])\n", "test_eq(cat.decode([0,2]), ['a', 'c'])\n", "test_stdout(lambda: show_at(tds,2), 'a;c')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "class OneHotEncode(Transform):\n", " \"One-hot encodes targets\"\n", " order=2\n", " def __init__(self, c=None): self.c = c\n", "\n", " def setups(self, dsrc):\n", " if self.c is None: self.c = len(L(getattr(dsrc, 'vocab', 
None)))\n", " if not self.c: warn(\"Couldn't infer the number of classes, please pass a value for `c` at init\")\n", "\n", " def encodes(self, o): return TensorMultiCategory(one_hot(o, self.c).float())\n", " def decodes(self, o): return one_hot_decode(o, None)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Works in conjunction with `MultiCategorize` or on its own if you have one-hot encoded targets (pass a `vocab` for decoding and `do_encode=False` in this case)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_tfm = OneHotEncode(c=3)\n", "test_eq(_tfm([0,2]), tensor([1.,0,1]))\n", "test_eq(_tfm.decode(tensor([0,1,1])), [1,2])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tds = DataSource([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(), OneHotEncode()]])\n", "test_eq(tds[1], [tensor([1.,0,0])])\n", "test_eq(tds[3], [tensor([0.,0,0])])\n", "test_eq(tds.decode([tensor([False, True, True])]), [['b','c']])\n", "test_eq(type(tds[1][0]), TensorMultiCategory)\n", "test_stdout(lambda: show_at(tds,2), 'a;c')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#hide\n", "#test with passing the vocab\n", "tds = DataSource([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(vocab=['a', 'b', 'c']), OneHotEncode()]])\n", "test_eq(tds[1], [tensor([1.,0,0])])\n", "test_eq(tds[3], [tensor([0.,0,0])])\n", "test_eq(tds.decode([tensor([False, True, True])]), [['b','c']])\n", "test_eq(type(tds[1][0]), TensorMultiCategory)\n", "test_stdout(lambda: show_at(tds,2), 'a;c')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "class EncodedMultiCategorize(Categorize):\n", " \"Transform of one-hot encoded multi-category that decodes with `vocab`\"\n", " loss_func,order=BCEWithLogitsLossFlat(),1\n", " def __init__(self, vocab): self.vocab,self.c = 
vocab,len(vocab)\n", " def encodes(self, o): return TensorCategory(tensor(o).float())\n", " def decodes(self, o): return MultiCategory (one_hot_decode(o, self.vocab))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_tfm = EncodedMultiCategorize(vocab=['a', 'b', 'c'])\n", "test_eq(_tfm([1,0,1]), tensor([1., 0., 1.]))\n", "test_eq(type(_tfm([1,0,1])), TensorCategory)\n", "test_eq(_tfm.decode(tensor([False, True, True])), ['b','c'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def get_c(dbunch):\n", " if getattr(dbunch, 'c', False): return dbunch.c\n", " vocab = getattr(dbunch, 'vocab', [])\n", " if len(vocab) > 0 and is_listy(vocab[-1]): vocab = vocab[-1]\n", " return len(vocab)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## End-to-end dataset example with MNIST" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's show how to use those functions to grab the mnist dataset in a `DataSource`. First we grab all the images." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.MNIST_TINY)\n", "items = get_image_files(path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we split between train and validation depending on the folder." 
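The folder-based split can be sketched over plain path strings (a simplified, hypothetical `grandparent_splitter`; fastai's `GrandparentSplitter` does the same thing over `Path` objects):

```python
def grandparent_splitter(items, train_name='train', valid_name='valid'):
    "Split indices by the grandparent folder name of each path string."
    def _grandparent(p): return p.split('/')[-3]  # .../train/7/723.png -> 'train'
    train = [i for i, p in enumerate(items) if _grandparent(p) == train_name]
    valid = [i for i, p in enumerate(items) if _grandparent(p) == valid_name]
    return train, valid

items = ['data/train/7/723.png', 'data/valid/7/946.png', 'data/train/3/8055.png']
splits = grandparent_splitter(items)
```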
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "((#3) [/home/jhoward/.fastai/data/mnist_tiny/train/7/723.png,/home/jhoward/.fastai/data/mnist_tiny/train/7/7446.png,/home/jhoward/.fastai/data/mnist_tiny/train/7/8566.png],\n", " (#3) [/home/jhoward/.fastai/data/mnist_tiny/valid/7/946.png,/home/jhoward/.fastai/data/mnist_tiny/valid/7/9608.png,/home/jhoward/.fastai/data/mnist_tiny/valid/7/825.png])" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "splitter = GrandparentSplitter()\n", "splits = splitter(items)\n", "train,valid = (items[i] for i in splits)\n", "train[:3],valid[:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our inputs are images that we open and convert to tensors; our targets are categories, labeled according to the parent directory." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from PIL import Image\n", "def open_img(fn:Path): return Image.open(fn).copy()\n", "def img2tensor(im:Image.Image): return TensorImage(array(im)[None])\n", "\n", "tfms = [[open_img, img2tensor],\n", " [parent_label, Categorize()]]\n", "train_ds = DataSource(train, tfms)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x,y = train_ds[3]\n", "xd,yd = decode_at(train_ds,3)\n", "test_eq(parent_label(train[3]),yd)\n", "test_eq(array(Image.open(train[3])),xd[0].numpy())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAEQAAABUCAYAAAA7xZEpAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAADj0lEQVR4nO2aPUscURSGnyt+gBu1EkQxIqQSC0FETaGigqBtFG0SG3+DjY2lgo0pxSIpRIimUKxiISSCohYWIigYwcYvRI2Nok6KMGzmZLM7rrN7L3IeWNg5O8w5vPvOuXPurvE8DyVOju0CXEMFEaggAhVEoIIIVBCBCiKwLogx5ka8HowxH23Vk2srsY/nea/898aYGHACfLFVj3WHCN4Bp8B3WwW4JsgH4LNncZ4wrswyxpjXwE/gjed5P23V4ZJD3gM/bIoB7gnyyXYRTtwyxpi3wDegzPO8XzZrccUhH4CvtsUARxziEq44xBlUEIEKIlBBBKmGu5fccU2ioDpEoIIIVBCBCiJQQQQqiEAFEaggAhVEoIIIVBCBCiJQQQRWfsq8u7sD4OHhAYDT01Pm5uYAWFtbA2B+fh4Af4tzcHAQgN7eXgC6u7szUps6RJBqkznUfsj9/T0A19fXgfjU1BQAl5eXgfjS0hIAOzs7IcuM09XVBcDCwgIAublpm1z3Q8IQiUM2NzcBaGxsTLuQpqYmAGKxGACdnZ0AnJ+fAzAxMRE4/+DgAICqqqp0U6pDwhDJKuP3kP9hzJ8vo7+/P+HnQ0NDNDc3A5Cfn5/w2u3t7QD09PQAMDs7C8Dw8HCaVSdGHSKIxCH19fUAHB0dJT2vvLz8ydf2V5HCwsJA/PDw8MnXCoM6RBCJQ/Ly8oD0HJCK29tbAEZHRwPxhoaGyHOBOuQfrP8tMxUbGxsArKysBOIdHR0Zyee8IMvLy4HjgoIC4FmP7EnRW0bgrEP8keLi4iIQ97cBKioqMpJXHSKIZLjLBGdnZwCUlZUBUFRUBMD29jbwrKHOR4e7MDjbQ1paWgLHAwMDQCTOSIo6ROCcQ8bHxwHY29sD4r0j6jH/f6hDBM6sMqurqwC0tbUB8SdR/9G9trY26pS6yoTBmR6yuLgIwOPjIwDFxcVARpyRFHWIwAmHzMzMMD09HYitr69bqUUdIrC6yuzv7wPQ2trKyckJEJ9d/A3rnJyMfWe6yoTBikOurq4AqKmpAeD4+Ji+vj4AJicnASgtLc1E6r9Rh4Qhq6uM/7PkyMgI8McZAJWVlYyNjQFZcUZS1CGCrPaQm5sbAEpKSgLxra0t6urqokwVBu0hYchqD9nd3Q0c+7tf1dXV2SwjKeoQgTP7IRbQHhKGVD0koYovGXWIQAURqCACFUSggghUEMFvi57zogPe+9UAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "ax = show_at(train_ds, 3, cmap=\"Greys\", figsize=(1,1))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "assert ax.title.get_text() in ('3','7')\n", "test_fig_exists(ax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ToTensor -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "class ToTensor(Transform):\n", " \"Convert item to appropriate tensor class\"\n", " order = 15" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cuda -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "@docs\n", "class Cuda(Transform):\n", " \"Move batch to `device` (defaults to `default_device()`)\"\n", " def __init__(self,device=None):\n", " self.device=default_device() if device is None else device\n", " super().__init__(split_idx=None, as_item=False)\n", " def encodes(self, b): return to_device(b, self.device)\n", " def decodes(self, b): return to_cpu(b)\n", "\n", " _docs=dict(encodes=\"Move batch to `device`\", decodes=\"Return batch to CPU\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

Cuda.encodes[source]

\n", "\n", "> Cuda.encodes()\n", "\n", "Move batch to [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Cuda.encodes, name='Cuda.encodes')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that, like all `Transform`s, `encodes` is called by `tfm()` and `decodes` is called by `tfm.decode()`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tfm = Cuda()\n", "t = tfm((tensor(1),))\n", "test_eq(*t,1)\n", "test_eq(t[0].type(),'torch.cuda.LongTensor' if default_device().type=='cuda' else 'torch.LongTensor')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

Cuda.decodes[source]

\n", "\n", "> Cuda.decodes()\n", "\n", "Return batch to CPU" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Cuda.decodes, name='Cuda.decodes')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t = tfm.decode(t)\n", "test_eq(*t,1)\n", "test_eq(t[0].type(),'torch.LongTensor')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class A(Transform): \n", " def encodes(self, x): return x \n", " def decodes(self, x): return Int(x) \n", " \n", "start = torch.arange(0,50)\n", "tds = DataSource(start, [A()])\n", "tdl = TfmdDL(tds, after_batch=Cuda, bs=4)\n", "test_eq(tdl.device, default_device())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## IntToFloatTensor -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "class IntToFloatTensor(Transform):\n", " \"Transform image to float tensor, optionally dividing by 255 (e.g. for images).\"\n", " order = 20 #Need to run after CUDA if on the GPU\n", " def __init__(self, div=255., div_mask=1, split_idx=None, as_item=True):\n", " super().__init__(split_idx=split_idx,as_item=as_item)\n", " self.div,self.div_mask = div,div_mask\n", "\n", " def encodes(self, o:TensorImage): return o.float().div_(self.div)\n", " def encodes(self, o:TensorMask ): return o.div_(self.div_mask).long()\n", " def decodes(self, o:TensorImage): return o.clamp(0., 1.) 
if self.div else o" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t = (TensorImage(tensor(1)),tensor(2).long(),TensorMask(tensor(3)))\n", "tfm = IntToFloatTensor(as_item=False)\n", "ft = tfm(t)\n", "test_eq(ft, [1./255, 2, 3])\n", "test_eq(type(ft[0]), TensorImage)\n", "test_eq(type(ft[2]), TensorMask)\n", "test_eq(ft[0].type(),'torch.FloatTensor')\n", "test_eq(ft[1].type(),'torch.LongTensor')\n", "test_eq(ft[2].type(),'torch.LongTensor')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Normalization -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "def broadcast_vec(dim, ndim, *t, cuda=True):\n", " \"Make a vector broadcastable over `dim` (out of `ndim` total) by prepending and appending unit axes\"\n", " v = [1]*ndim\n", " v[dim] = -1\n", " f = to_device if cuda else noop\n", " return [f(tensor(o).view(*v)) for o in t]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "@docs\n", "class Normalize(Transform):\n", " \"Normalize/denorm batch of `TensorImage`\"\n", " order=99\n", " def __init__(self, mean, std, dim=1, ndim=4, cuda=True):\n", " self.mean,self.std = broadcast_vec(dim, ndim, mean, std, cuda=cuda)\n", "\n", " def encodes(self, x:TensorImage): return (x-self.mean) / self.std\n", " def decodes(self, x:TensorImage):\n", " f = to_cpu if x.device.type=='cpu' else noop\n", " return (x*f(self.std) + f(self.mean))\n", "\n", " _docs=dict(encodes=\"Normalize batch\", decodes=\"Denormalize batch\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mean,std = [0.5]*3,[0.5]*3\n", "mean,std = broadcast_vec(1, 4, mean, std)\n", "batch_tfms = [Cuda(), IntToFloatTensor(), Normalize(mean,std)]\n", "tdl = TfmdDL(train_ds, after_batch=batch_tfms, bs=4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], 
"source": [ "x,y = tdl.one_batch()\n", "xd,yd = tdl.after_batch.decode((x,y))\n", "\n", "test_eq(x.type(), 'torch.cuda.FloatTensor' if default_device().type=='cuda' else 'torch.FloatTensor')\n", "test_eq(xd.type(), 'torch.FloatTensor')\n", "test_eq(type(x), TensorImage)\n", "test_eq(type(y), TensorCategory)\n", "assert x.mean()<0.0\n", "assert x.std()>0.5\n", "assert 0" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAAD3CAYAAADmIkO7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAGDUlEQVR4nO3dv4tUVxjH4XOCNiYrgoVBxCqVpdoYsbBXQbSxyZbiD1Bs7RTB2hQWWqiNRSCljY26QVb8H0whiIqViT8Q5KRKYdg5C3t3cr8z+zwgyLzMvRfkwxHfnbG21gqQ57uxHwBYmTghlDghlDghlDghlDghlDghlDjnQK317//8+lpr/XXs52KYTWM/AMO11n749/e11u9LKW9KKb+N90SsByfn/DlZSnlbSlka+0EYRpzzZ7GUcq/5ucyZV/0Zzo9a6+5Syp+llJ9aa3+O/TwM4+ScL7+UUv4Q5nwQ53z5pZRyd+yHYH34a+2cqLX+XEp5WEr5sbX219jPw3BOzvmxWEr5XZjzw8kJoZycEEqcEEqcEEqcEKr7g++1Vv9aBFPWWqsrve7khFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFCbxn6AMezatas7f/ny5aDrt9a689evX0+c3b59e9C9h7p169bE2Zs3b7rv/fLly3o/zobm5IRQ4oRQ4oRQ4oRQ4oRQ4oRQ4oRQtbeTq7X2F3Yz6vjx4935/fv3u/PNmzev5+PMjdV2tFeuXOnOX716NXG22u54lrXW6kqvOzkhlDghlDghlDghlDghlDghlDgh1Ibcc67m5MmT3fnNmzendu+lpaXu/NChQ4Ouv9qOdmFhYdD1h9i5c+fE2WqfJZ1l9pwwY8QJocQJocQJocQJocQJocQJoew5N5jdu3d355cuXZo4O3PmTPe9mzYN+xpke85vOTkhlDghlDghlDghlDghlDghlDghlD0n39i7d+/E2cOHD7vv3bZt26B723N+y8kJocQJocQJocQJocQJocQJoYZ9xoeZs2XLlu78wIEDE2dDVyXv37/vzr9+/Tro+vPGyQmhxAmhxAmhxAmhxAmhxAmhxAmh7Dk3mMOHD3fnN27cmNq9T5061Z2/e/duaveeRU5OCCVOCCVOCCVOCCVOCCVOCCVOCGXPOWcWFha688XFxand+9GjR93548ePp3bveeTkhFDihFDihFDihFDihFDihFDihFD2nHPmxIkTg+ZDLC8vd+efPn2a2r3nkZMTQokTQokTQokTQokTQokTQokTQtXW2uRhrZOHjOLo0aPd+Z07d7rzIf/H5r1797rzs2fPduf2nC
trrdWVXndyQihxQihxQihxQihxQihxQiirlDB79uzpzpeWlrrzIauSUkq5e/fuxNn58+e77/348eOge29UVikwY8QJocQJocQJocQJocQJocQJoXw15gi2bt06cXbt2rXue4fuMZ8+fdqd93aZ9pj/LycnhBInhBInhBInhBInhBInhBInhLLnHMH169cnzo4dOzbo2m/fvu3OL1++3J3bZeZwckIocUIocUIocUIocUIocUIocUIoe84p6H1es5RS9u3bN7V7nzt3rjt/8uTJ1O7N+nJyQihxQihxQihxQihxQihxQiirlCk4cuRId75///41X/v58+fd+YMHD9Z8bbI4OSGUOCGUOCGUOCGUOCGUOCGUOCGUPecarPaRsAsXLqz52qt9teXFixe788+fP6/53mRxckIocUIocUIocUIocUIocUIocUIoe8412LFjR3c+5POaL1686M6Xl5fXfG1mi5MTQokTQokTQokTQokTQokTQokTQtlzhnn27NnYj0AIJyeEEieEEieEEieEEieEEieEEieEsucM8+HDh7EfgRBOTgglTgglTgglTgglTgglTghVW2uTh7VOHm5g27dv786vXr3anZ8+fXribLWvvjx48GB3zuxprdWVXndyQihxQihxQihxQihxQihxQihxQih7ThiZPSfMGHFCKHFCKHFCKHFCKHFCKHFCqO6eExiPkxNCiRNCiRNCiRNCiRNCiRNC/QOdxge8PqZH1AAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAAD3CAYAAADmIkO7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAGIklEQVR4nO3dMYtVSR6H4So1EEcNFHUSJ5BFIwURETZQDAQTsxEDYQbBoBPjhhZjFQODRVQQZMfAYOn9AA5Cyxh3ojQGDRqYuIEGu5iIno02mBlv6d7b1/O7p58HBOk/p06BvJZY7bV2XVeAPBv63gDweeKEUOKEUOKEUOKEUOKEUOKEUOIcgFrrf/7w42Ot9W9974vJbOp7A0yu67qt//t5rfW7UsqbUso/+tsRa8HJOTw/llL+VUr5re+NMBlxDs/PpZRfOt+XOfOqX8PhqLX+UEp5WUr5S9d1L/veD5Nxcg7LT6WUp8IcBnEOy0+llL/3vQnWhj/WDkSt9a+llF9LKd93XffvvvfD5Jycw/FzKeWfwhwOJyeEcnJCKHFCKHFCKHFCqOY3vtda/W0RTFnXdfVzX3dyQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQqhNfW+A31tYWGjOu677RjuZLVevXu17C2vOyQmhxAmhxAmhxAmhxAmhxAmhxAmhauverNba26XahQsXmvPLly8357t27Ro5e/36dfPZ27dvN+fbt29vzufn55vzSdb+9OnT2GsP2caNG/vewti6rquf+7qTE0KJE0KJE0KJE0KJE0KJE0KJE0JN9Z7z0qVLI2dfugvcs2dPc75hQ+7vKx8+fGjOHz58OHJ26NCh5rP3799vzvfv39+cb9u2rTmfxMmTJ5vzvXv3jr328vJyc3706NGx1+6be06YMeKEUOKEUOKEUOKEUOKEUOKEUFP93NorV66MnO3cuXOary5LS0sjZ8+ePWs+++jRo+b83r17zfnx48eb89XV1ZGzHTt2NJ999+5dc97n59rOzc0157du3Rp77cXFxbGfnVVOTgglTgglTgglTgglTgglTgglTgg11XvOGzdujJydOnVqorVv3rzZnD958mTk7P37981nN2/e3JyfPn26OW/dY37J27dvx3522r70mbrnz5+faP2PHz+OnL169WqitWeRkxNCiRNCiRNCiRNCiRNCiRNCxf4XgOS5e/duc37x4sWJ1l9ZWRk5O3jw4ERrJ/PRmDBjxAmhxAmhxAmhxAmhxAmhxAmhpvpPxpg9W7ZsGTmb9l3jnTt3prr+rHFyQihxQihxQihxQihxQihxQihxQij3nPxO62M/jx079g13gpMTQokTQokTQokTQokTQokTQokTQrnnXGc2bWr/ki8sLEzt3W/evGnOW/9t43rk5IRQ4oRQ4oRQ4oRQ4oRQ4oRQ4oRQ7jnXmTNnzjTnhw8fntq75+fnm/Pnz59P7d2zyMkJocQJocQJocQJocQJocQJoVylrDMnTpxozmutY6/94sWL5vzBgwdjr70eOTkhlDghlDghlDghlDghlDghlDghlHvOgTlw4EBzfu7cuea867qx3z3Js/yZkxNCiRNCiRNCiRNCiRNCiRNCiRNCueccmLm5ueZ89+7dU3v38vLy1NZej5ycEEqcEEqcEEqcEEqcEEqcEEqcEMo958CcPXu2t3cvLi729u4hc
nJCKHFCKHFCKHFCKHFCKHFCKFcpfLWVlZXmfGlp6dtsZJ1wckIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIo/56Tr7Zv377m/MiRI83548eP13I7g+fkhFDihFDihFDihFDihFDihFDihFDuOflqT58+bc7dY64tJyeEEieEEieEEieEEieEEieEEieEcs/JV7t+/XrfW1hXnJwQSpwQSpwQSpwQSpwQSpwQylXKwFy7dq0537p169hrr66ujv0s/z8nJ4QSJ4QSJ4QSJ4QSJ4QSJ4QSJ4SqXdeNHtY6egisia7r6ue+7uSEUOKEUOKEUOKEUOKEUOKEUOKEUM17TqA/Tk4IJU4IJU4IJU4IJU4IJU4I9V/StOLP8M/xRgAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAAD3CAYAAADmIkO7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAGbklEQVR4nO3dv2tVaR7H8efIaApFUBQXlGChWIsIuiii2CkRwkK6GVsLQbD0HxDEaivtZhEbxSIQkWw6FxQtFAMWNlOpzFjJrr+RM9UUs+Q8d7jXeD/35PUCQfLlOfeAvH3EJ+ekadu2AHnWjfsGgJWJE0KJE0KJE0KJE0KJE0KJE0KJsweapvnf//362jTNP8d9X4zmh3HfAKNr23bTH79vmmZjKeXXUsqt8d0R34Kds3/+UUr5rZRyf9w3wmjE2T8/lVL+1fq+zInX+DPsj6Zppkspv5RS9rRt+8u474fR2Dn75cdSyn+E2Q/i7JcfSyk/j/sm+Db8s7Ynmqb5eynl36WUv7Vt+99x3w+js3P2x0+llDvC7A87J4Syc0IocUIocUIocUKo6je+N03jf4tglbVt26z0dTsnhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhPph3DewFm3cuLFztmHDhuravXv3Vuezs7PV+cGDB6vz48ePd86apqmunZ+fr84XFhaq8+vXr1fna42dE0KJE0KJE0KJE0KJE0KJE0KJE0I1bdt2D5umezhmU1NT1fmuXbuGvva5c+eq882bNw997VJKOXbsWOdsz549I1072cOHD6vzEydOdM4+ffr0rW8nRtu2Kx4g2zkhlDghlDghlDghlDghlDghlDgh1MSec545c6Y6v3Pnzne6kyzLy8vV+cePH6vz+/fvd862bt1aXXv27NnqfJBDhw51zh4/fjzStZM554QJI04IJU4IJU4IJU4IJU4IJU4INbHvrV2/fv3YPrt2NlxKKYuLi6v22Tdv3qzOB53vvn//fujPHvQM7YMHD6rza9euVedzc3Odsz6fc3axc0IocUIocUIocUIocUIocUIocUKoiT3nHPSzHvfv379qnz3onHPQM5WTatC7Y0c5Qy2llOnp6ZHW942dE0KJE0KJE0KJE0KJE0KJE0JN7FHKhw8fqvNnz559pztZOzZt2lSdX7hwYaTrP3nyZKT1fWPnhFDihFDihFDihFDihFDihFDihFATe87J93f69Onq/MCBAyNdf2lpaaT1fWPnhFDihFDihFDihFDihFDihFDihFDOOfnLjhw5MtL6z58/V+dfvnwZ6fp9Y+eEUOKEUOKEUOKEUOKEUOKEUOKEUM45+ZOmaTpnW7ZsGena8/Pz1fnTp09Hun7f2DkhlDghlDghlDghlDghlDghlDghVNO2bfewabqH9NK+ffs6Z8+fP6+ufffuXXV+8uTJ6vzRo0fVeV+1bbvi4bKdE0KJE0KJE0KJE0KJE0KJE0J5ZIw/uXv37tBr7927V52v1aOSYdk5IZQ4IZQ4IZQ4IZQ4IZQ4IZQ4IZRzzjXmypUr1fnu3bs7Z4MeCbt69eowt0QHOyeEEieEEieEEieEEieEEieEEieE8mrMnpmbm6vOb9y4UZ1//fq1czYzM1Ndu7i4WJ2zMq/GhAkjTgglTgglTgglTgglTgglTgjlec6eGfRj9tatq/99/Pbt286Zc8zvy
84JocQJocQJocQJocQJocQJoRylTJiLFy9W57OzsyNd/9SpUyOt59uxc0IocUIocUIocUIocUIocUIocUIor8YMc/To0er89u3b1fm2bduq8zdv3lTnO3fu7JzVXpvJ8LwaEyaMOCGUOCGUOCGUOCGUOCGUOCGU5znHYHp6unN269at6tpB55hLS0vV+fnz56tzZ5k57JwQSpwQSpwQSpwQSpwQSpwQSpwQyjnnKpiamqrOL1++3Dnbvn17de3r16+r80uXLlXnL168qM7JYeeEUOKEUOKEUOKEUOKEUOKEUOKEUN5buwp27NhRnb969Wroa8/MzFTnCwsLQ1+b8fDeWpgw4oRQ4oRQ4oRQ4oRQ4oRQHhlbBYcPHx567cuXL6vz5eXloa/NZLFzQihxQihxQihxQihxQihxQihxQiiPjMGYeWQMJow4IZQ4IZQ4IZQ4IZQ4IZQ4IVT1nBMYHzsnhBInhBInhBInhBInhBInhPodZU8V7x+hHlwAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "tdl.show_batch((x,y))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAAD3CAYAAADmIkO7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAG/klEQVR4nO3dPWhW7R3H8XPkkQgtoot00A5iRHB2KS5Rg1F8GaygII+D4qYZXRTBQbcM1iAOvtQsSkEHcZCKElpUUBAcHBR9fCGiBXFoF8XmdHJpc195mju39y93Ph8QJH9OroP69RKvnJO6aZoKyLOg2zcATE2cEEqcEEqcEEqcEEqcEEqcEEqcPaCu63/9149/13X9p27fF+35qds3QPuapvnt95/Xdf2bqqo+VlX1l+7dEbPBztl7/lhV1T+qqvpbt2+E9oiz9+yvqupK4+sy57za72HvqOv691VV/VJV1aqmaX7p9v3QHjtnb/m5qqq/C7M3iLO3/FxV1Z+7fRPMDv+s7RF1Xf+hqqq/VlX1u6Zp/tnt+6F9ds7esb+qquvC7B12Tghl54RQ4oRQ4oRQ4oRQxS98r+va/xZBhzVNU0/1cTsnhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhPqp2zfQi/r6+orzoaGhlrMbN24Ur/306VNxfuHCheJ81apVxfmuXbtazh49elS89u7du8V5Jz1//rw4v3z5cnE+OTk5i3czO+ycEEqcEEqcEEqcEEqcEEqcEEqcEKpumqb1sK5bD+exFStWFOfnz58vzjdv3jybt8Ov0N/fX5y/evXqB93J/2qapp7q43ZOCCVOCCVOCCVOCCVOCCVOCCVOCOV5ziksXry4OL9z505xPt0zk+04e/ZscT4+Pt7W5z9w4EDL2dq1a4vXTnf+20mfP38uzr98+fKD7mT22DkhlDghlDghlDghlDghlDghlDgh1Lw855zuHHNsbKw4b/ccs/QM7XTvrT116lRx/vHjxxnd03fXr19vOVu+fHnx2p07dxbnZ86cmdE9/RrTvTN3YmKiY2t3ip0TQokTQokTQokTQokTQokTQokTQs3Lc841a9YU59u2bevo+k+fPm052717d0fXbsfAwEBxfuzYsY6tPd357cjISMfW7hY7J4QSJ4QSJ4QSJ4QSJ4QSJ4Sal0cpHz58KM5v3rxZnG/fvr04v3LlSnE+PDxcnKcaHBwszpctW9axtY8cOVKcP3z4sGNrd4udE0KJE0KJE0KJE0KJE0KJE0KJE0LVpdc01nXdetjDFiwo/521aNGi4vzbt2/F+devX//ve5otdV0X56Ojoy1nhw4dautzT+fcuXMtZ9Odc05OTra1djc1TTPlL5ydE0KJE0KJE0KJE0KJE0KJE0KJE0I55+wxCxcuLM737t1bnF+6dGnGa5f+LFVVVT158qQ4X7du3YzXnsucc8IcI04IJU4IJU4IJU4IJU4IJU4I5Zyzx6xfv744Hx8f79jab968Kc5XrlzZsbXnMuecMMeIE0KJE0KJE0KJE0KJE0KJE0LNy+/POZf19/cX5wcPHuzY2u/fvy/Ot
2zZ0rG15yM7J4QSJ4QSJ4QSJ4QSJ4QSJ4TyyFiYvr6+4vzWrVvF+cDAQFvrv3v3ruVs69atxWufPXvW1trzlUfGYI4RJ4QSJ4QSJ4QSJ4QSJ4QSJ4TyyFgXLFjQ+u/EixcvFq9t9xxzOvv27Ws5c475Y9k5IZQ4IZQ4IZQ4IZQ4IZQ4IZQ4IZRzzi44cuRIy9mePXva+tzTvb7y9OnTxfmDBw/aWp/ZY+eEUOKEUOKEUOKEUOKEUOKEUOKEUN5b2wHDw8PF+cjISMfWvnfvXnG+adOmjq3NzHhvLcwx4oRQ4oRQ4oRQ4oRQ4oRQHhmbgQ0bNhTnJ0+e7NjaY2NjxfmJEyc6tjY/lp0TQokTQokTQokTQokTQokTQokTQnlkbAqDg4PF+dWrV4vzJUuWzHjtFy9eFOdDQ0PF+evXr2e8Nt3hkTGYY8QJocQJocQJocQJocQJocQJoeblOefq1auL8/v37xfnS5cubWv9V69etZxt3LixeO3bt2/bWps8zjlhjhEnhBInhBInhBInhBInhBInhJqX7609fPhwcd7uOebLly+L8+PHj7ecOcfkOzsnhBInhBInhBInhBInhBInhBInhOrZc84dO3a0nO3fv7+ja9++fbs4v3btWkfXpzfYOSGUOCGUOCGUOCGUOCGUOCFUzx6lPH78uOVsYmKieO10r848evRocT46Olqcw69h54RQ4oRQ4oRQ4oRQ4oRQ4oRQ4oRQ8/JbAEIS3wIQ5hhxQihxQihxQihxQihxQihxQqjiOSfQPXZOCCVOCCVOCCVOCCVOCCVOCPUfVvlCSeFR5iUAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAAD3CAYAAADmIkO7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAGDUlEQVR4nO3dv4tUVxjH4XOCNiYrgoVBxCqVpdoYsbBXQbSxyZbiD1Bs7RTB2hQWWqiNRSCljY26QVb8H0whiIqViT8Q5KRKYdg5C3t3cr8z+zwgyLzMvRfkwxHfnbG21gqQ57uxHwBYmTghlDghlDghlDghlDghlDghlDjnQK317//8+lpr/XXs52KYTWM/AMO11n749/e11u9LKW9KKb+N90SsByfn/DlZSnlbSlka+0EYRpzzZ7GUcq/5ucyZV/0Zzo9a6+5Syp+llJ9aa3+O/TwM4+ScL7+UUv4Q5nwQ53z5pZRyd+yHYH34a+2cqLX+XEp5WEr5sbX219jPw3BOzvmxWEr5XZjzw8kJoZycEEqcEEqcEEqcEKr7g++1Vv9aBFPWWqsrve7khFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFDihFCbxn6AMezatas7f/ny5aDrt9a689evX0+c3b59e9C9h7p169bE2Zs3b7rv/fLly3o/zobm5IRQ4oRQ4oRQ4oRQ4oRQ4oRQ4oRQtbeTq7X2F3Yz6vjx4935/fv3u/PNmzev5+PMjdV2tFeuXOnOX716NXG22u54lrXW6kqvOzkhlDghlDghlDghlDghlDghlDgh1Ibcc67m5MmT3fnNmzendu+lpaXu/NChQ4Ouv9qOdmFhYdD1h9i5c+fE2WqfJZ1l9pwwY8QJocQJocQJocQJocQJocQJoew5N5jdu3d355cuXZo4O3PmTPe9mzYN+xpke85vOTkhlDghlDghlDghlDghlDghlDghlD0n39i7d+/E2cOHD7vv3bZt26B723N+y8kJocQJocQJocQJocQJocQJoYZ9xoeZs2XLlu78wIEDE2dDVyXv37/vzr9+/Tro+vPGyQmhxAmhxAmhxAmhxAmhxAmhxAmh7Dk3mMOHD3fnN27cmNq9T5061Z2/e/duaveeRU5OCCVOCCVOCCVOCCVOCCVOCCVOCGXPOWcWFha688XFxand+9GjR93548ePp3bveeTkhFDihFDihFDihFDihFDihFDihFD2nHPmxIkTg+ZDLC8vd+efPn2a2r3nkZMTQokTQokTQokTQokTQokTQokTQtXW2uRhrZOHjOLo0aPd+Z07d7rzIf/H5r1797rzs2fPduf2nCtrrdWVXndyQihxQihxQihxQihxQihxQiirlDB79uzpzpeWlrrzIauSUkq5e/fuxNn58+e77/348eOge29UVikwY8QJocQJocQJocQJocQJocQJoXw15gi2bt06cXbt2rXue4fuMZ8+fdqd93aZ9pj/LycnhBInhBInhBInhBInhBInhBInhLLnHMH169cnzo4dOzbo2m/fvu3OL1++3J3bZeZwckIocUIocUIocUIocUIocUIocUIoe84p6H1es5RS9u3bN7V7nzt3rjt/8uTJ1O7N+nJyQihxQihxQihxQihxQihxQiirlCk4cuRId75///41X/v58+fd+YMHD9Z8bbI4OSGUOCGUOCGUOCGUOCGUOCGUO
CGUPecarPaRsAsXLqz52qt9teXFixe788+fP6/53mRxckIocUIocUIocUIocUIocUIocUIoe8412LFjR3c+5POaL1686M6Xl5fXfG1mi5MTQokTQokTQokTQokTQokTQokTQtlzhnn27NnYj0AIJyeEEieEEieEEieEEieEEieEEieEsucM8+HDh7EfgRBOTgglTgglTgglTgglTgglTghVW2uTh7VOHm5g27dv786vXr3anZ8+fXribLWvvjx48GB3zuxprdWVXndyQihxQihxQihxQihxQihxQihxQih7ThiZPSfMGHFCKHFCKHFCKHFCKHFCKHFCqO6eExiPkxNCiRNCiRNCiRNCiRNCiRNC/QOdxge8PqZH1AAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAAD3CAYAAADmIkO7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAGIklEQVR4nO3dMYtVSR6H4So1EEcNFHUSJ5BFIwURETZQDAQTsxEDYQbBoBPjhhZjFQODRVQQZMfAYOn9AA5Cyxh3ojQGDRqYuIEGu5iIno02mBlv6d7b1/O7p58HBOk/p06BvJZY7bV2XVeAPBv63gDweeKEUOKEUOKEUOKEUOKEUOKEUOIcgFrrf/7w42Ot9W9974vJbOp7A0yu67qt//t5rfW7UsqbUso/+tsRa8HJOTw/llL+VUr5re+NMBlxDs/PpZRfOt+XOfOqX8PhqLX+UEp5WUr5S9d1L/veD5Nxcg7LT6WUp8IcBnEOy0+llL/3vQnWhj/WDkSt9a+llF9LKd93XffvvvfD5Jycw/FzKeWfwhwOJyeEcnJCKHFCKHFCKHFCqOY3vtda/W0RTFnXdfVzX3dyQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQihxQqhNfW+A31tYWGjOu677RjuZLVevXu17C2vOyQmhxAmhxAmhxAmhxAmhxAmhxAmhauverNba26XahQsXmvPLly8357t27Ro5e/36dfPZ27dvN+fbt29vzufn55vzSdb+9OnT2GsP2caNG/vewti6rquf+7qTE0KJE0KJE0KJE0KJE0KJE0KJE0JN9Z7z0qVLI2dfugvcs2dPc75hQ+7vKx8+fGjOHz58OHJ26NCh5rP3799vzvfv39+cb9u2rTmfxMmTJ5vzvXv3jr328vJyc3706NGx1+6be06YMeKEUOKEUOKEUOKEUOKEUOKEUFP93NorV66MnO3cuXOary5LS0sjZ8+ePWs+++jRo+b83r17zfnx48eb89XV1ZGzHTt2NJ999+5dc97n59rOzc0157du3Rp77cXFxbGfnVVOTgglTgglTgglTgglTgglTgglTgg11XvOGzdujJydOnVqorVv3rzZnD958mTk7P37981nN2/e3JyfPn26OW/dY37J27dvx3522r70mbrnz5+faP2PHz+OnL169WqitWeRkxNCiRNCiRNCiRNCiRNCiRNCxf4XgOS5e/duc37x4sWJ1l9ZWRk5O3jw4ERrJ/PRmDBjxAmhxAmhxAmhxAmhxAmhxAmhpvpPxpg9W7ZsGTmb9l3jnTt3prr+rHFyQihxQihxQihxQihxQihxQihxQij3nPxO62M/jx079g13gpMTQokTQokTQokTQokTQokTQokTQrnnXGc2bWr/ki8sLEzt3W/evGnOW/9t43rk5IRQ4oRQ4oRQ4oRQ4oRQ4oRQ4oRQ7jnXmTNnzjTnhw8fntq75+fnm/Pnz59P7d2zyMkJocQJocQJocQJocQJocQJoVylrDMnTpxozmutY6/94sWL5vzBgwdjr70eOTkhlDghlDghlDghlDghlDghlDghlHvOgTlw4EBzfu7cuea867qx3z3Js/yZkxNCiRNCiRNCiRNCiRNCiRNCiRNCueccmLm5ueZ89+7dU3v38vLy1NZej5ycEEqcEEqcEEqcEEqcEEqcEEqcEMo958CcPXu2t3cvLi729u4hc
nJCKHFCKHFCKHFCKHFCKHFCKFcpfLWVlZXmfGlp6dtsZJ1wckIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIocUIo/56Tr7Zv377m/MiRI83548eP13I7g+fkhFDihFDihFDihFDihFDihFDihFDuOflqT58+bc7dY64tJyeEEieEEieEEieEEieEEieEEieEcs/JV7t+/XrfW1hXnJwQSpwQSpwQSpwQSpwQSpwQylXKwFy7dq0537p169hrr66ujv0s/z8nJ4QSJ4QSJ4QSJ4QSJ4QSJ4QSJ4SqXdeNHtY6egisia7r6ue+7uSEUOKEUOKEUOKEUOKEUOKEUOKEUM17TqA/Tk4IJU4IJU4IJU4IJU4IJU4I9V/StOLP8M/xRgAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAAD3CAYAAADmIkO7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAGbklEQVR4nO3dv2tVaR7H8efIaApFUBQXlGChWIsIuiii2CkRwkK6GVsLQbD0HxDEaivtZhEbxSIQkWw6FxQtFAMWNlOpzFjJrr+RM9UUs+Q8d7jXeD/35PUCQfLlOfeAvH3EJ+ekadu2AHnWjfsGgJWJE0KJE0KJE0KJE0KJE0KJE0KJsweapvnf//362jTNP8d9X4zmh3HfAKNr23bTH79vmmZjKeXXUsqt8d0R34Kds3/+UUr5rZRyf9w3wmjE2T8/lVL+1fq+zInX+DPsj6Zppkspv5RS9rRt+8u474fR2Dn75cdSyn+E2Q/i7JcfSyk/j/sm+Db8s7Ynmqb5eynl36WUv7Vt+99x3w+js3P2x0+llDvC7A87J4Syc0IocUIocUIocUKo6je+N03jf4tglbVt26z0dTsnhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhBInhPph3DewFm3cuLFztmHDhuravXv3Vuezs7PV+cGDB6vz48ePd86apqmunZ+fr84XFhaq8+vXr1fna42dE0KJE0KJE0KJE0KJE0KJE0KJE0I1bdt2D5umezhmU1NT1fmuXbuGvva5c+eq882bNw997VJKOXbsWOdsz549I1072cOHD6vzEydOdM4+ffr0rW8nRtu2Kx4g2zkhlDghlDghlDghlDghlDghlDgh1MSec545c6Y6v3Pnzne6kyzLy8vV+cePH6vz+/fvd862bt1aXXv27NnqfJBDhw51zh4/fjzStZM554QJI04IJU4IJU4IJU4IJU4IJU4INbHvrV2/fv3YPrt2NlxKKYuLi6v22Tdv3qzOB53vvn//fujPHvQM7YMHD6rza9euVedzc3Odsz6fc3axc0IocUIocUIocUIocUIocUIocUKoiT3nHPSzHvfv379qnz3onHPQM5WTatC7Y0c5Qy2llOnp6ZHW942dE0KJE0KJE0KJE0KJE0KJE0JN7FHKhw8fqvNnz559pztZOzZt2lSdX7hwYaTrP3nyZKT1fWPnhFDihFDihFDihFDihFDihFDihFATe87J93f69Onq/MCBAyNdf2lpaaT1fWPnhFDihFDihFDihFDihFDihFDihFDOOfnLjhw5MtL6z58/V+dfvnwZ6fp9Y+eEUOKEUOKEUOKEUOKEUOKEUOKEUM45+ZOmaTpnW7ZsGena8/Pz1fnTp09Hun7f2DkhlDghlDghlDghlDghlDghlDghVNO2bfewabqH9NK+ffs6Z8+fP6+ufffuXXV+8uTJ6vzRo0fVeV+1bbvi4bKdE0KJE0KJE0KJE0KJE0KJE0J5ZIw/uXv37tBr7927V52v1aOSYdk5IZQ4IZQ4IZQ4IZQ4IZQ4IZQ4IZRzzjXmypUr1fnu3bs7Z4MeCbt69eowt0QHOyeEEieEEieEEieEEieEEieEEieE8mrMnpmbm6vOb9y4UZ1//fq1czYzM1Ndu7i4WJ2zMq/GhAkjTgglTgglTgglTgglTgglTgjlec6eGfRj9tatq/99/Pbt286Zc8zvy
84JocQJocQJocQJocQJocQJoRylTJiLFy9W57OzsyNd/9SpUyOt59uxc0IocUIocUIocUIocUIocUIocUIor8YMc/To0er89u3b1fm2bduq8zdv3lTnO3fu7JzVXpvJ8LwaEyaMOCGUOCGUOCGUOCGUOCGUOCGU5znHYHp6unN269at6tpB55hLS0vV+fnz56tzZ5k57JwQSpwQSpwQSpwQSpwQSpwQSpwQyjnnKpiamqrOL1++3Dnbvn17de3r16+r80uXLlXnL168qM7JYeeEUOKEUOKEUOKEUOKEUOKEUOKEUN5buwp27NhRnb969Wroa8/MzFTnCwsLQ1+b8fDeWpgw4oRQ4oRQ4oRQ4oRQ4oRQHhlbBYcPHx567cuXL6vz5eXloa/NZLFzQihxQihxQihxQihxQihxQihxQiiPjMGYeWQMJow4IZQ4IZQ4IZQ4IZQ4IZQ4IVT1nBMYHzsnhBInhBInhBInhBInhBInhPodZU8V7x+hHlwAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "x,y = torch.add(x,0),torch.add(y,0) #Lose type of tensors (to emulate predictions)\n", "test_ne(type(x), TensorImage)\n", "tdl.show_batch((x,y), figsize=(4,4)) #Check that types are put back by dl." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#TODO: make the above check a proper test" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Export -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Converted 00_test.ipynb.\n", "Converted 01_core_foundation.ipynb.\n", "Converted 01a_core_utils.ipynb.\n", "Converted 01b_core_dispatch.ipynb.\n", "Converted 01c_core_transform.ipynb.\n", "Converted 02_core_script.ipynb.\n", "Converted 03_torchcore.ipynb.\n", "Converted 03a_layers.ipynb.\n", "Converted 04_data_load.ipynb.\n", "Converted 05_data_core.ipynb.\n", "Converted 06_data_transforms.ipynb.\n", "Converted 07_data_block.ipynb.\n", "Converted 08_vision_core.ipynb.\n", "Converted 09_vision_augment.ipynb.\n", "Converted 09a_vision_data.ipynb.\n", "Converted 09b_vision_utils.ipynb.\n", "Converted 10_pets_tutorial.ipynb.\n", "Converted 11_vision_models_xresnet.ipynb.\n", "Converted 12_optimizer.ipynb.\n", "Converted 13_learner.ipynb.\n", "Converted 13a_metrics.ipynb.\n", "Converted 14_callback_schedule.ipynb.\n", "Converted 14a_callback_data.ipynb.\n", "Converted 15_callback_hook.ipynb.\n", "Converted 15a_vision_models_unet.ipynb.\n", "Converted 16_callback_progress.ipynb.\n", "Converted 17_callback_tracker.ipynb.\n", "Converted 18_callback_fp16.ipynb.\n", "Converted 19_callback_mixup.ipynb.\n", "Converted 20_interpret.ipynb.\n", "Converted 20a_distributed.ipynb.\n", "Converted 21_vision_learner.ipynb.\n", "Converted 22_tutorial_imagenette.ipynb.\n", "Converted 23_tutorial_transfer_learning.ipynb.\n", "Converted 
30_text_core.ipynb.\n", "Converted 31_text_data.ipynb.\n", "Converted 32_text_models_awdlstm.ipynb.\n", "Converted 33_text_models_core.ipynb.\n", "Converted 34_callback_rnn.ipynb.\n", "Converted 35_tutorial_wikitext.ipynb.\n", "Converted 36_text_models_qrnn.ipynb.\n", "Converted 37_text_learner.ipynb.\n", "Converted 38_tutorial_ulmfit.ipynb.\n", "Converted 40_tabular_core.ipynb.\n", "Converted 41_tabular_model.ipynb.\n", "Converted 42_tabular_rapids.ipynb.\n", "Converted 50_data_block_examples.ipynb.\n", "Converted 60_medical_imaging.ipynb.\n", "Converted 65_medical_text.ipynb.\n", "Converted 70_callback_wandb.ipynb.\n", "Converted 71_callback_tensorboard.ipynb.\n", "Converted 90_notebook_core.ipynb.\n", "Converted 91_notebook_export.ipynb.\n", "Converted 92_notebook_showdoc.ipynb.\n", "Converted 93_notebook_export2html.ipynb.\n", "Converted 94_notebook_test.ipynb.\n", "Converted 95_index.ipynb.\n", "Converted 96_data_external.ipynb.\n", "Converted 97_utils_test.ipynb.\n", "Converted notebook2jekyll.ipynb.\n", "Converted xse_resnext.ipynb.\n" ] } ], "source": [ "#hide\n", "from local.notebook.export import notebook2script\n", "notebook2script(all_fs=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }