{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#hide\n", "#skip\n", "! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# default_exp losses\n", "# default_cls_lvl 3" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "from fastai.imports import *\n", "from fastai.torch_imports import *\n", "from fastai.torch_core import *\n", "from fastai.layers import *" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#hide\n", "from nbdev.showdoc import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Loss Functions\n", "> Custom fastai loss functions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "class BaseLoss():\n", " \"Same as `loss_cls`, but flattens input and target.\"\n", " activation=decodes=noops\n", " def __init__(self, loss_cls, *args, axis=-1, flatten=True, floatify=False, is_2d=True, **kwargs):\n", " store_attr(\"axis,flatten,floatify,is_2d\")\n", " self.func = loss_cls(*args,**kwargs)\n", " functools.update_wrapper(self, self.func)\n", "\n", " def __repr__(self): return f\"FlattenedLoss of {self.func}\"\n", " @property\n", " def reduction(self): return self.func.reduction\n", " @reduction.setter\n", " def reduction(self, v): self.func.reduction = v\n", "\n", " def _contiguous(self,x):\n", " return TensorBase(x.transpose(self.axis,-1).contiguous()) if isinstance(x,torch.Tensor) else x\n", "\n", " def __call__(self, inp, targ, **kwargs):\n", " inp,targ = map(self._contiguous, (inp,targ))\n", " if self.floatify and targ.dtype!=torch.float16: targ = targ.float()\n", " if targ.dtype in [torch.int8, torch.int16, torch.int32]: targ = targ.long()\n", " if self.flatten: inp = inp.view(-1,inp.shape[-1]) if self.is_2d else inp.view(-1)\n", " return self.func.__call__(inp, targ.view(-1) if self.flatten else targ, **kwargs)\n", " \n", " def to(self, device):\n", " if isinstance(self.func, nn.Module): self.func.to(device)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wrapping a general loss function inside of `BaseLoss` provides extra functionalities to your loss functions:\n", "- flattens the tensors before trying to take the losses since it's more convenient (with a potential tranpose to put `axis` at the end)\n", "- a potential `activation` method that tells the library if there is an activation fused in the loss (useful for inference and methods such as `Learner.get_preds` or `Learner.predict`)\n", "- a potential decodes method that is used on predictions in inference (for instance, an argmax in classification)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `args` and `kwargs` will be passed to `loss_cls` during the initialization to instantiate a loss function. `axis` is put at the end for losses like softmax that are often performed on the last axis. If `floatify=True`, the `targs` will be converted to floats (useful for losses that only accept float targets like `BCEWithLogitsLoss`), and `is_2d` determines if we flatten while keeping the first dimension (batch size) or completely flatten the input. We want the first for losses like Cross Entropy, and the second for pretty much anything else." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "@delegates()\n", "class CrossEntropyLossFlat(BaseLoss):\n", " \"Same as `nn.CrossEntropyLoss`, but flattens input and target.\"\n", " y_int = True\n", " @use_kwargs_dict(keep=True, weight=None, ignore_index=-100, reduction='mean')\n", " def __init__(self, *args, axis=-1, **kwargs): super().__init__(nn.CrossEntropyLoss, *args, axis=axis, **kwargs)\n", " def decodes(self, x): return x.argmax(dim=self.axis)\n", " def activation(self, x): return F.softmax(x, dim=self.axis)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tst = CrossEntropyLossFlat()\n", "output = torch.randn(32, 5, 10)\n", "target = torch.randint(0, 10, (32,5))\n", "#nn.CrossEntropy would fail with those two tensors, but not our flattened version.\n", "_ = tst(output, target)\n", "test_fail(lambda x: nn.CrossEntropyLoss()(output,target))\n", "\n", "#Associated activation is softmax\n", "test_eq(tst.activation(output), F.softmax(output, dim=-1))\n", "#This loss function has a decodes which is argmax\n", "test_eq(tst.decodes(output), output.argmax(dim=-1))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#In a segmentation task, we want to take the softmax over the channel dimension\n", "tst = CrossEntropyLossFlat(axis=1)\n", "output = torch.randn(32, 5, 128, 128)\n", "target = torch.randint(0, 5, (32, 128, 128))\n", "_ = tst(output, target)\n", "\n", "test_eq(tst.activation(output), F.softmax(output, dim=1))\n", "test_eq(tst.decodes(output), output.argmax(dim=1))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#hide\n", "#cuda\n", "tst = CrossEntropyLossFlat(weight=torch.ones(10))\n", "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n", "tst.to(device)\n", "output = torch.randn(32, 10, device=device)\n", "target = torch.randint(0, 10, (32,), device=device)\n", "_ = tst(output, target)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Focal Loss](https://arxiv.org/pdf/1708.02002.pdf) is the same as cross entropy except easy-to-classify observations are down-weighted in the loss calculation. The strength of down-weighting is proportional to the size of the `gamma` parameter. Put another way, the larger `gamma` the less the easy-to-classify observations contribute to the loss." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "class FocalLossFlat(CrossEntropyLossFlat):\n", " \"\"\"\n", " Same as CrossEntropyLossFlat but with focal paramter, `gamma`. Focal loss is introduced by Lin et al. \n", " https://arxiv.org/pdf/1708.02002.pdf. 
Note the class weighting factor in the paper, alpha, can be \n", " implemented through pytorch `weight` argument in nn.CrossEntropyLoss.\n", " \"\"\"\n", " y_int = True\n", " @use_kwargs_dict(keep=True, weight=None, ignore_index=-100, reduction='mean')\n", " def __init__(self, *args, gamma=2, axis=-1, **kwargs):\n", " self.gamma = gamma\n", " self.reduce = kwargs.pop('reduction') if 'reduction' in kwargs else 'mean'\n", " super().__init__(*args, reduction='none', axis=axis, **kwargs)\n", " def __call__(self, inp, targ, **kwargs):\n", " ce_loss = super().__call__(inp, targ, **kwargs)\n", " pt = torch.exp(-ce_loss)\n", " fl_loss = (1-pt)**self.gamma * ce_loss\n", " return fl_loss.mean() if self.reduce == 'mean' else fl_loss.sum() if self.reduce == 'sum' else fl_loss" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Compare focal loss with gamma = 0 to cross entropy\n", "fl = FocalLossFlat(gamma=0)\n", "ce = CrossEntropyLossFlat()\n", "output = torch.randn(32, 5, 10)\n", "target = torch.randint(0, 10, (32,5))\n", "test_close(fl(output, target), ce(output, target))\n", "#Test focal loss with gamma > 0 is different than cross entropy\n", "fl = FocalLossFlat(gamma=2)\n", "test_ne(fl(output, target), ce(output, target))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#In a segmentation task, we want to take the softmax over the channel dimension\n", "fl = FocalLossFlat(gamma=0, axis=1)\n", "ce = CrossEntropyLossFlat(axis=1)\n", "output = torch.randn(32, 5, 128, 128)\n", "target = torch.randint(0, 5, (32, 128, 128))\n", "test_close(fl(output, target), ce(output, target), eps=1e-4)\n", "test_eq(fl.activation(output), F.softmax(output, dim=1))\n", "test_eq(fl.decodes(output), output.argmax(dim=1))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "@delegates()\n", "class BCEWithLogitsLossFlat(BaseLoss):\n", " \"Same as `nn.BCEWithLogitsLoss`, but flattens input and target.\"\n", " @use_kwargs_dict(keep=True, weight=None, reduction='mean', pos_weight=None)\n", " def __init__(self, *args, axis=-1, floatify=True, thresh=0.5, **kwargs):\n", " if kwargs.get('pos_weight', None) is not None and kwargs.get('flatten', None) is True:\n", " raise ValueError(\"`flatten` must be False when using `pos_weight` to avoid a RuntimeError due to shape mismatch\")\n", " if kwargs.get('pos_weight', None) is not None: kwargs['flatten'] = False\n", " super().__init__(nn.BCEWithLogitsLoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)\n", " self.thresh = thresh\n", "\n", " def decodes(self, x): return x>self.thresh\n", " def activation(self, x): return torch.sigmoid(x)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tst = BCEWithLogitsLossFlat()\n", "output = torch.randn(32, 5, 10)\n", "target = torch.randn(32, 5, 10)\n", "#nn.BCEWithLogitsLoss would fail with those two tensors, but not our flattened version.\n", "_ = tst(output, target)\n", "test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))\n", "output = torch.randn(32, 5)\n", "target = torch.randint(0,2,(32, 5))\n", "#nn.BCEWithLogitsLoss would fail with int targets but not our flattened version.\n", "_ = tst(output, target)\n", "test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))\n", "\n", "tst = BCEWithLogitsLossFlat(pos_weight=torch.ones(10))\n", "output = torch.randn(32, 5, 10)\n", "target = torch.randn(32, 5, 10)\n", "_ = 
tst(output, target)\n", "test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))\n", "\n", "#Associated activation is sigmoid\n", "test_eq(tst.activation(output), torch.sigmoid(output))" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "@use_kwargs_dict(weight=None, reduction='mean')\n", "def BCELossFlat(*args, axis=-1, floatify=True, **kwargs):\n", "    \"Same as `nn.BCELoss`, but flattens input and target.\"\n", "    return BaseLoss(nn.BCELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tst = BCELossFlat()\n", "output = torch.sigmoid(torch.randn(32, 5, 10))\n", "target = torch.randint(0,2,(32, 5, 10))\n", "_ = tst(output, target)\n", "test_fail(lambda x: nn.BCELoss()(output,target))" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "@use_kwargs_dict(reduction='mean')\n", "def MSELossFlat(*args, axis=-1, floatify=True, **kwargs):\n", "    \"Same as `nn.MSELoss`, but flattens input and target.\"\n", "    return BaseLoss(nn.MSELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tst = MSELossFlat()\n", "output = torch.sigmoid(torch.randn(32, 5, 10))\n", "target = torch.randint(0,2,(32, 5, 10))\n", "_ = tst(output, target)\n", "test_fail(lambda x: nn.MSELoss()(output,target))" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#hide\n", "#cuda\n", "#Test losses work in half precision\n", "if torch.cuda.is_available():\n", "    output = torch.sigmoid(torch.randn(32, 5, 10)).half().cuda()\n", "    target = torch.randint(0,2,(32, 5, 10)).half().cuda()\n", "    for tst in [BCELossFlat(), MSELossFlat()]: _ = tst(output, target)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "@use_kwargs_dict(reduction='mean')\n", "def L1LossFlat(*args, axis=-1, floatify=True, **kwargs):\n", "    \"Same as `nn.L1Loss`, but flattens input and target.\"\n", "    return BaseLoss(nn.L1Loss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "class LabelSmoothingCrossEntropy(Module):\n", "    y_int = True\n", "    def __init__(self, eps:float=0.1, weight=None, reduction='mean'):\n", "        store_attr()\n", "\n", "    def forward(self, output, target):\n", "        c = output.size()[1]\n", "        log_preds = F.log_softmax(output, dim=1)\n", "        if self.reduction=='sum': loss = -log_preds.sum()\n", "        else:\n", "            loss = -log_preds.sum(dim=1) #We divide by the number of classes c at the return line, so sum (not mean) over the class dimension here\n", "            if self.reduction=='mean': loss = loss.mean()\n", "        return loss*self.eps/c + (1-self.eps) * F.nll_loss(log_preds, target.long(), weight=self.weight, reduction=self.reduction)\n", "\n", "    def activation(self, out): return F.softmax(out, dim=-1)\n", "    def decodes(self, out):    return out.argmax(dim=-1)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lmce = LabelSmoothingCrossEntropy()\n", "output = torch.randn(32, 5, 10)\n", "target = torch.randint(0, 10, (32,5))\n", "test_close(lmce(output.flatten(0,1), target.flatten()), lmce(output.transpose(-1,-2), target))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "On top of the formula we 
define:\n", "- a `reduction` attribute, that will be used when we call `Learner.get_preds`\n", "- `weight` attribute to pass to BCE.\n", "- an `activation` function that represents the activation fused in the loss (since we use cross entropy behind the scenes). It will be applied to the output of the model when calling `Learner.get_preds` or `Learner.predict`\n", "- a decodes function that converts the output of the model to a format similar to the target (here indices). This is used in `Learner.predict` and `Learner.show_results` to decode the predictions " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "@delegates()\n", "class LabelSmoothingCrossEntropyFlat(BaseLoss):\n", " \"Same as `LabelSmoothingCrossEntropy`, but flattens input and target.\"\n", " y_int = True\n", " @use_kwargs_dict(keep=True, eps=0.1, reduction='mean')\n", " def __init__(self, *args, axis=-1, **kwargs): super().__init__(LabelSmoothingCrossEntropy, *args, axis=axis, **kwargs)\n", " def activation(self, out): return F.softmax(out, dim=-1)\n", " def decodes(self, out): return out.argmax(dim=-1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We present a general `Dice` loss for segmentation tasks. It is commonly used together with `CrossEntropyLoss` or `FocalLoss` in kaggle competitions. This is very similar to the `DiceMulti` metric, but to be able to derivate through, we replace the `argmax` activation by a `softmax` and compare this with a one-hot encoded target mask. This function also adds a `smooth` parameter to help numerical stabilities in the intersection over union division." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "class DiceLoss:\n", " \"Dice loss for segmentation\"\n", " def __init__(self, axis=1, smooth=1): \n", " store_attr()\n", " def __call__(self, pred, targ):\n", " targ = self._one_hot(targ, pred.shape[self.axis])\n", " pred, targ = flatten_check(self.activation(pred), targ)\n", " inter = (pred*targ).sum()\n", " union = (pred+targ).sum()\n", " return 1 - (2. 
* inter + self.smooth)/(union + self.smooth)\n", " @staticmethod\n", " def _one_hot(x, classes, axis=1):\n", " \"Creates one binay mask per class\"\n", " return torch.stack([torch.where(x==c, 1, 0) for c in range(classes)], axis=axis)\n", " def activation(self, x): return F.softmax(x, dim=self.axis)\n", " def decodes(self, x): return x.argmax(dim=self.axis)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dl = DiceLoss()\n", "_x = tensor( [[[1, 0, 2],\n", " [2, 2, 1]]])\n", "_one_hot_x = tensor([[[[0, 1, 0],\n", " [0, 0, 0]],\n", " [[1, 0, 0],\n", " [0, 0, 1]],\n", " [[0, 0, 1],\n", " [1, 1, 0]]]])\n", "test_eq(dl._one_hot(_x, 3), _one_hot_x)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dl = DiceLoss()\n", "model_output = tensor([[[[2., 1.],\n", " [1., 5.]],\n", " [[1, 2.],\n", " [3., 1.]],\n", " [[3., 0],\n", " [4., 3.]]]])\n", "target = tensor([[[2, 1],\n", " [2, 0]]])\n", "dl_out = dl(model_output, target)\n", "test_eq(dl.decodes(model_output), target)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dl = DiceLoss(smooth=0.)\n", "#identical masks\n", "model_output = tensor([[[.1], [.1], [100.]]])\n", "target = tensor([[2]])\n", "test_close(dl(model_output, target), 0)\n", "\n", "#50% intersection\n", "model_output = tensor([[[.1, 100.], [.1, .1], [100., .1]]])\n", "target = tensor([[2, 1]])\n", "test_close(dl(model_output, target), .5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#hide\n", "#cuda\n", "#Test DicceLoss work in half precision\n", "if torch.cuda.is_available():\n", " output = torch.randn(32, 4, 5, 10).half().cuda()\n", " target = torch.randint(0,2,(32, 5, 10)).half().cuda()\n", " _ = dl(output, target)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You could easily combine this loss with `FocalLoss` defining a `CombinedLoss`, to balance between global (Dice) and local (Focal) features on the target mask." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class CombinedLoss:\n", " \"Dice and Focal combined\"\n", " def __init__(self, axis=1, smooth=1., alpha=1.):\n", " store_attr()\n", " self.focal_loss = FocalLossFlat(axis=axis)\n", " self.dice_loss = DiceLoss(axis, smooth)\n", " \n", " def __call__(self, pred, targ):\n", " return self.focal_loss(pred, targ) + self.alpha * self.dice_loss(pred, targ)\n", " \n", " def decodes(self, x): return x.argmax(dim=self.axis)\n", " def activation(self, x): return F.softmax(x, dim=self.axis)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cl = CombinedLoss()\n", "output = torch.randn(32, 4, 5, 10)\n", "target = torch.randint(0,2,(32, 5, 10))\n", "_ = cl(output, target)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Export -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Converted 00_torch_core.ipynb.\n", "Converted 01_layers.ipynb.\n", "Converted 01a_losses.ipynb.\n", "Converted 02_data.load.ipynb.\n", "Converted 03_data.core.ipynb.\n", "Converted 04_data.external.ipynb.\n", "Converted 05_data.transforms.ipynb.\n", "Converted 06_data.block.ipynb.\n", "Converted 07_vision.core.ipynb.\n", "Converted 08_vision.data.ipynb.\n", "Converted 09_vision.augment.ipynb.\n", "Converted 09b_vision.utils.ipynb.\n", "Converted 09c_vision.widgets.ipynb.\n", "Converted 10_tutorial.pets.ipynb.\n", "Converted 10b_tutorial.albumentations.ipynb.\n", "Converted 11_vision.models.xresnet.ipynb.\n", "Converted 12_optimizer.ipynb.\n", "Converted 13_callback.core.ipynb.\n", "Converted 13a_learner.ipynb.\n", "Converted 13b_metrics.ipynb.\n", "Converted 14_callback.schedule.ipynb.\n", "Converted 14a_callback.data.ipynb.\n", "Converted 15_callback.hook.ipynb.\n", "Converted 15a_vision.models.unet.ipynb.\n", "Converted 16_callback.progress.ipynb.\n", "Converted 17_callback.tracker.ipynb.\n", "Converted 18_callback.fp16.ipynb.\n", "Converted 18a_callback.training.ipynb.\n", "Converted 18b_callback.preds.ipynb.\n", "Converted 19_callback.mixup.ipynb.\n", "Converted 20_interpret.ipynb.\n", "Converted 20a_distributed.ipynb.\n", "Converted 21_vision.learner.ipynb.\n", "Converted 22_tutorial.imagenette.ipynb.\n", "Converted 23_tutorial.vision.ipynb.\n", "Converted 24_tutorial.image_sequence.ipynb.\n", "Converted 24_tutorial.siamese.ipynb.\n", "Converted 24_vision.gan.ipynb.\n", "Converted 30_text.core.ipynb.\n", "Converted 31_text.data.ipynb.\n", "Converted 32_text.models.awdlstm.ipynb.\n", "Converted 33_text.models.core.ipynb.\n", "Converted 34_callback.rnn.ipynb.\n", "Converted 35_tutorial.wikitext.ipynb.\n", "Converted 37_text.learner.ipynb.\n", "Converted 38_tutorial.text.ipynb.\n", "Converted 39_tutorial.transformers.ipynb.\n", "Converted 40_tabular.core.ipynb.\n", "Converted 41_tabular.data.ipynb.\n", "Converted 42_tabular.model.ipynb.\n", "Converted 43_tabular.learner.ipynb.\n", "Converted 44_tutorial.tabular.ipynb.\n", "Converted 45_collab.ipynb.\n", "Converted 46_tutorial.collab.ipynb.\n", "Converted 50_tutorial.datablock.ipynb.\n", "Converted 60_medical.imaging.ipynb.\n", "Converted 61_tutorial.medical_imaging.ipynb.\n", "Converted 65_medical.text.ipynb.\n", "Converted 70_callback.wandb.ipynb.\n", "Converted 71_callback.tensorboard.ipynb.\n", "Converted 72_callback.neptune.ipynb.\n", "Converted 73_callback.captum.ipynb.\n", "Converted 74_callback.azureml.ipynb.\n", "Converted 
97_test_utils.ipynb.\n", "Converted 99_pytorch_doc.ipynb.\n", "Converted dev-setup.ipynb.\n", "Converted app_examples.ipynb.\n", "Converted camvid.ipynb.\n", "Converted migrating_catalyst.ipynb.\n", "Converted migrating_ignite.ipynb.\n", "Converted migrating_lightning.ipynb.\n", "Converted migrating_pytorch.ipynb.\n", "Converted ulmfit.ipynb.\n", "Converted index.ipynb.\n", "Converted quick_start.ipynb.\n", "Converted tutorial.ipynb.\n" ] } ], "source": [ "#hide\n", "from nbdev.export import *\n", "notebook2script()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "jupytext": { "split_at_heading": true }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 4 }