{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Installing packages:\n", "\t.package(path: \"/home/ubuntu/fastai_docs/dev_swift/FastaiNotebook_06_cuda\")\n", "\t\tFastaiNotebook_06_cuda\n", "With SwiftPM flags: []\n", "Working in: /tmp/tmpgocy_x9t/swift-install\n", "Fetching https://github.com/mxcl/Path.swift\n", "Fetching https://github.com/JustHTTP/Just\n", "Completed resolution in 3.43s\n", "Cloning https://github.com/JustHTTP/Just\n", "Resolving https://github.com/JustHTTP/Just at 0.7.1\n", "Cloning https://github.com/mxcl/Path.swift\n", "Resolving https://github.com/mxcl/Path.swift at 0.16.2\n", "Compile Swift Module 'Path' (9 sources)\n", "Compile Swift Module 'Just' (1 sources)\n", "Compile Swift Module 'FastaiNotebook_06_cuda' (11 sources)\n", "Compile Swift Module 'jupyterInstalledPackages' (1 sources)\n", "Linking ./.build/x86_64-unknown-linux/debug/libjupyterInstalledPackages.so\n", "Initializing Swift...\n", "Installation complete!\n" ] } ], "source": [ "%install '.package(path: \"$cwd/FastaiNotebook_06_cuda\")' FastaiNotebook_06_cuda" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load data" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "('inline', 'module://ipykernel.pylab.backend_inline')\n" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import FastaiNotebook_06_cuda\n", "%include \"EnableIPythonDisplay.swift\"\n", "IPythonDisplay.shell.enable_matplotlib(\"inline\")" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "// export\n", "import Path\n", "import TensorFlow\n", "import Python" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "let plt = Python.import(\"matplotlib.pyplot\")" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "let data = mnistDataBunch(flat: false, bs: 512)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "let opt = SimpleSGD(learningRate: 0.4)\n", "func modelInit() -> CnnModel { return CnnModel(channelIn: 1, nOut: 10, filters: [8, 16, 32, 32]) }\n", "let learner = Learner(data: data, lossFunction: softmaxCrossEntropy, optimizer: opt, initializingWith: modelInit)\n", "let recorder = learner.makeDefaultDelegates(metrics: [accuracy])\n", "learner.addDelegates([learner.makeNormalize(mean: mnistStats.mean, std: mnistStats.std),\n", " learner.makeAddChannel()])" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 0: [0.6109115, 0.7906] \n", "7393.339826 ms \n" ] } ], "source": [ "time { try! learner.fit(1) }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Batchnorm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Custom" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start by building our own `BatchNorm` layer from scratch. 
Eventually we intend for this code to do the trick:" ] },
{ "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [
  "struct AlmostBatchNorm<Scalar: TensorFlowFloatingPoint>: Differentiable {\n",
  "    // Configuration hyperparameters\n",
  "    @noDerivative let momentum: Scalar\n",
  "    @noDerivative let epsilon: Scalar\n",
  "    // Running statistics\n",
  "    @noDerivative var runningMean: Tensor<Scalar>\n",
  "    @noDerivative var runningVariance: Tensor<Scalar>\n",
  "    // Trainable parameters\n",
  "    var scale: Tensor<Scalar>\n",
  "    var offset: Tensor<Scalar>\n",
  "    \n",
  "    init(featureCount: Int, momentum: Scalar = 0.9, epsilon: Scalar = 1e-5) {\n",
  "        self.momentum = momentum\n",
  "        self.epsilon = epsilon\n",
  "        self.scale = Tensor(ones: [featureCount])\n",
  "        self.offset = Tensor(zeros: [featureCount])\n",
  "        self.runningMean = Tensor(0)\n",
  "        self.runningVariance = Tensor(1)\n",
  "    }\n",
  "\n",
  "    mutating func applied(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
  "        let mean: Tensor<Scalar>\n",
  "        let variance: Tensor<Scalar>\n",
  "        switch Context.local.learningPhase {\n",
  "        case .training:\n",
  "            mean = input.mean(alongAxes: [0, 1, 2])\n",
  "            variance = input.variance(alongAxes: [0, 1, 2])\n",
  "            runningMean += (mean - runningMean) * (1 - momentum)\n",
  "            runningVariance += (variance - runningVariance) * (1 - momentum)\n",
  "        case .inference:\n",
  "            mean = runningMean\n",
  "            variance = runningVariance\n",
  "        }\n",
  "        let normalizer = rsqrt(variance + epsilon) * scale\n",
  "        return (input - mean) * normalizer + offset\n",
  "    }\n",
  "}" ] },
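{ "cell_type": "markdown", "metadata": {}, "source": [ "The `switch` above branches on the global learning-phase context. As a quick sanity check (a sketch; we assume `Context.local.learningPhase` can be set by hand, which is what the `Learner` does for us during `fit`), we can flip the phase ourselves:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "// Sketch: flip the global phase the layer branches on\n",
  "// (the Learner normally sets this at the start of training/validation).\n",
  "Context.local.learningPhase = .training\n",
  "print(Context.local.learningPhase)\n",
  "Context.local.learningPhase = .inference\n",
  "print(Context.local.learningPhase)" ] },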
{ "cell_type": "markdown", "metadata": {}, "source": [ "But there are some automatic differentiation limitations (no control flow support yet) and `Layer` protocol constraints (`applied` can't be `mutating`) that make this impossible for now (note the lack of `@differentiable` or a `Layer` conformance), so we'll need a few workarounds. A `Reference` will let us update running statistics without declaring the `applied` method `mutating`:" ] },
{ "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [
  "//export\n",
  "class Reference<T> {\n",
  "    var value: T\n",
  "    init(_ value: T) { self.value = value }\n",
  "}" ] },
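{ "cell_type": "markdown", "metadata": {}, "source": [ "To see why this helps: `Reference` is a class, so a struct that stores one can update `value` from a non-`mutating` method (a minimal sketch; `Counter` is purely illustrative):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "// Illustrative sketch: `increment` isn't `mutating`, yet it updates state,\n",
  "// because the mutation happens inside the class instance, not the struct.\n",
  "struct Counter {\n",
  "    let count = Reference(0)\n",
  "    func increment() { count.value += 1 }\n",
  "}\n",
  "let counter = Counter()\n",
  "counter.increment()\n",
  "print(counter.count.value) // 1" ] },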
{ "cell_type": "markdown", "metadata": {}, "source": [ "The following snippet will let us differentiate a layer's `applied` method if it's composed of training and inference implementations that are each differentiable:" ] },
{ "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [
  "//export\n",
  "protocol LearningPhaseDependent: Layer {\n",
  "    var delegate: LayerDelegate<Output> { get set }\n",
  "    @differentiable func forwardTraining(to input: Input) -> Output\n",
  "    @differentiable func forwardInference(to input: Input) -> Output\n",
  "}\n",
  "\n",
  "extension LearningPhaseDependent {\n",
  "    func forward(_ input: Input) -> Output {\n",
  "        switch Context.local.learningPhase {\n",
  "        case .training: return forwardTraining(to: input)\n",
  "        case .inference: return forwardInference(to: input)\n",
  "        }\n",
  "    }\n",
  "\n",
  "    @differentiating(applied)\n",
  "    func gradForward(_ input: Input) ->\n",
  "        (value: Output, pullback: (Output.CotangentVector) ->\n",
  "            (Self.CotangentVector, Input.CotangentVector)) {\n",
  "        switch Context.local.learningPhase {\n",
  "        case .training:\n",
  "            return valueWithPullback(at: input) { $0.forwardTraining(to: $1) }\n",
  "        case .inference:\n",
  "            return valueWithPullback(at: input) { $0.forwardInference(to: $1) }\n",
  "        }\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    public func applied(to input: Input) -> Output {\n",
  "        let activation = forward(input)\n",
  "        delegate.didProduceActivation(activation)\n",
  "        return activation\n",
  "    }\n",
  "}" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we can implement a `BatchNorm` that we can use in our models:" ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [
  "//export\n",
  "protocol Norm: Layer where Input == Tensor<Scalar>, Output == Tensor<Scalar> {\n",
  "    associatedtype Scalar: TensorFlowFloatingPoint\n",
  "    init(featureCount: Int, epsilon: Scalar)\n",
  "}\n",
  "\n",
  "public struct FABatchNorm<Scalar: TensorFlowFloatingPoint>: LearningPhaseDependent, Norm {\n",
  "    // Configuration hyperparameters\n",
  "    @noDerivative var momentum: Scalar\n",
  "    @noDerivative var epsilon: Scalar\n",
  "    // Running statistics\n",
  "    @noDerivative let runningMean: Reference<Tensor<Scalar>>\n",
  "    @noDerivative let runningVariance: Reference<Tensor<Scalar>>\n",
  "    @noDerivative public var delegate: LayerDelegate<Output> = LayerDelegate()\n",
  "    // Trainable parameters\n",
  "    public var scale: Tensor<Scalar>\n",
  "    public var offset: Tensor<Scalar>\n",
  "    // TODO: check why these aren't being synthesized\n",
  "    public typealias Input = Tensor<Scalar>\n",
  "    public typealias Output = Tensor<Scalar>\n",
  "    \n",
  "    public init(featureCount: Int, momentum: Scalar, epsilon: Scalar = 1e-5) {\n",
  "        self.momentum = momentum\n",
  "        self.epsilon = epsilon\n",
  "        self.scale = Tensor(ones: [featureCount])\n",
  "        self.offset = Tensor(zeros: [featureCount])\n",
  "        self.runningMean = Reference(Tensor(0))\n",
  "        self.runningVariance = Reference(Tensor(1))\n",
  "    }\n",
  "    \n",
  "    public init(featureCount: Int, epsilon: Scalar = 1e-5) {\n",
  "        self.init(featureCount: featureCount, momentum: 0.9, epsilon: epsilon)\n",
  "    }\n",
  "\n",
  "    @differentiable\n",
  "    public func forwardTraining(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
  "        let mean = input.mean(alongAxes: [0, 1, 2])\n",
  "        let variance = input.variance(alongAxes: [0, 1, 2])\n",
  "        runningMean.value += (mean - runningMean.value) * (1 - momentum)\n",
  "        runningVariance.value += (variance - runningVariance.value) * (1 - momentum)\n",
  "        let normalizer = rsqrt(variance + epsilon) * scale\n",
  "        return (input - mean) * normalizer + offset\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    public func forwardInference(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
  "        let mean = runningMean.value\n",
  "        let variance = runningVariance.value\n",
  "        let normalizer = rsqrt(variance + epsilon) * scale\n",
  "        return (input - mean) * normalizer + offset\n",
  "    }\n",
  "}" ] },
{ "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [
  "//export\n",
  "public struct ConvBN: FALayer {\n",
  "    public var conv: FANoBiasConv2D<Float>\n",
  "    public var norm: FABatchNorm<Float>\n",
  "    @noDerivative public var delegate: LayerDelegate<Tensor<Float>> = LayerDelegate()\n",
  "    \n",
  "    public init(_ cIn: Int, _ cOut: Int, ks: Int = 3, stride: Int = 2) {\n",
  "        // TODO (when control flow AD works): use Conv2D without bias\n",
  "        self.conv = FANoBiasConv2D(filterShape: (ks, ks, cIn, cOut), \n",
  "                                   strides: (stride, stride), \n",
  "                                   padding: .same, \n",
  "                                   activation: relu)\n",
  "        self.norm = FABatchNorm(featureCount: cOut, epsilon: 1e-5)\n",
  "    }\n",
  "\n",
  "    @differentiable\n",
  "    public func forward(_ input: Tensor<Float>) -> Tensor<Float> {\n",
  "        return norm.applied(to: conv.applied(to: input))\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    public func applied(to input: Tensor<Float>) -> Tensor<Float> {\n",
  "        let activation = forward(input)\n",
  "        delegate.didProduceActivation(activation)\n",
  "        return activation\n",
  "    }\n",
  "}" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [
  "// Would be great if this generic could work\n",
  "// struct ConvNorm<NormType: Norm, Scalar: TensorFlowFloatingPoint>: Layer\n",
  "//     where NormType.Scalar == Scalar {\n",
  "//     var conv: Conv2D<Scalar>\n",
  "//     var norm: NormType\n",
  "//     init(\n",
  "//         filterShape: (Int, Int, Int, Int),\n",
  "//         strides: (Int, Int) = (1, 1),\n",
  "//         padding: Padding = .valid,\n",
  "//         activation: @escaping Conv2D<Scalar>.Activation = identity\n",
  "//     ) {\n",
  "//         // TODO (when control flow AD works): use Conv2D without bias\n",
  "//         self.conv = Conv2D(\n",
  "//             filterShape: filterShape,\n",
  "//             strides: strides,\n",
  "//             padding: padding,\n",
  "//             activation: activation)\n",
  "//         self.norm = NormType.init(featureCount: filterShape.3, epsilon: 1e-5)\n",
  "//     }\n",
  "\n",
  "//     @differentiable\n",
  "//     func applied(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
  "//         return norm.applied(to: conv.applied(to: input))\n",
  "//     }\n",
  "// }\n",
  "// typealias ConvBN = ConvNorm<FABatchNorm<Float>, Float>" ] },
{ "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [
  "//export\n",
  "public struct CnnModelBN: Layer {\n",
  "    public var convs: [ConvBN]\n",
  "    public var pool = FAAdaptiveAvgPool2D<Float>()\n",
  "    public var flatten = Flatten<Float>()\n",
  "    public var linear: FADense<Float>\n",
  "    \n",
  "    public init(channelIn: Int, nOut: Int, filters: [Int]) {\n",
  "        convs = []\n",
  "        let allFilters = [channelIn] + filters\n",
  "        for i in 0..<filters.count { convs.append(ConvBN(allFilters[i], allFilters[i+1])) }\n",
  "        linear = FADense<Float>(inputSize: filters.last!, outputSize: nOut)\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    public func applied(to input: TF) -> TF {\n",
  "        return input.sequenced(through: convs, pool, flatten, linear)\n",
  "    }\n",
  "}" ] },
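{ "cell_type": "markdown", "metadata": {}, "source": [ "Before training a full model, we can sanity-check `FABatchNorm` by hand (a sketch with arbitrary shapes and values): in training mode, the running statistics should move from their initial values toward the batch statistics, and the output should be roughly normalized per channel." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "// Sketch: arbitrary input, just to watch the running statistics update.\n",
  "let bn = FABatchNorm<Float>(featureCount: 4, momentum: 0.9)\n",
  "let bnInput = Tensor<Float>(randomNormal: [2, 3, 3, 4]) * 5 + 10\n",
  "let bnOutput = bn.forwardTraining(to: bnInput)\n",
  "print(bn.runningMean.value)                // moved from 0 toward ~10\n",
  "print(bnOutput.mean(alongAxes: [0, 1, 2])) // ~0 per channel" ] },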
{ "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [
  "let opt = SimpleSGD(learningRate: 0.4)\n",
  "func modelInit() -> CnnModelBN { return CnnModelBN(channelIn: 1, nOut: 10, filters: [8, 16, 32, 32]) }\n",
  "let learner = Learner(data: data, lossFunction: softmaxCrossEntropy, optimizer: opt, initializingWith: modelInit)\n",
  "let recorder = learner.makeDefaultDelegates(metrics: [accuracy])\n",
  "learner.addDelegates([learner.makeNormalize(mean: mnistStats.mean, std: mnistStats.std),\n",
  "                      learner.makeAddChannel()])" ] },
{ "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 0: [0.14512135, 0.9556] \n", "12103.279594 ms \n" ] } ], "source": [ "time { try! learner.fit(1) }" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "TODO: hooks/LayerDelegates" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## More norms" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Layer norm" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "From [the paper](https://arxiv.org/abs/1607.06450): \"*batch normalization cannot be applied to online learning tasks or to extremely large distributed models where the minibatches have to be small*\"." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "General equation for a norm layer with learnable affine:\n", "\n", "$$y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}} * \\gamma + \\beta$$\n", "\n", "The differences from BatchNorm are:\n", "1. we don't keep a moving average,\n", "2. we don't average over the batch dimension but over the hidden dimensions, so the statistics are independent of the batch size." ] },
{ "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [
  "struct LayerNorm2D<Scalar: TensorFlowFloatingPoint>: Norm {\n",
  "    // Configuration hyperparameters\n",
  "    @noDerivative let epsilon: Scalar\n",
  "    // Trainable parameters\n",
  "    var scale: Tensor<Scalar>\n",
  "    var offset: Tensor<Scalar>\n",
  "    \n",
  "    init(featureCount: Int, epsilon: Scalar = 1e-5) {\n",
  "        self.epsilon = epsilon\n",
  "        self.scale = Tensor(ones: [featureCount])\n",
  "        self.offset = Tensor(zeros: [featureCount])\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    func applied(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
  "        // Normalize each sample over all of its activations (H, W, and C in NHWC)\n",
  "        let mean = input.mean(alongAxes: [1, 2, 3])\n",
  "        let variance = input.variance(alongAxes: [1, 2, 3])\n",
  "        let normalizer = rsqrt(variance + epsilon) * scale\n",
  "        return (input - mean) * normalizer + offset\n",
  "    }\n",
  "}\n",
  "\n",
  "struct ConvLN: FALayer {\n",
  "    var conv: FANoBiasConv2D<Float>\n",
  "    var norm: LayerNorm2D<Float>\n",
  "    @noDerivative public var delegate: LayerDelegate<Tensor<Float>> = LayerDelegate()\n",
  "    \n",
  "    init(_ cIn: Int, _ cOut: Int, ks: Int = 3, stride: Int = 2) {\n",
  "        // TODO (when control flow AD works): use Conv2D without bias\n",
  "        self.conv = FANoBiasConv2D(filterShape: (ks, ks, cIn, cOut), \n",
  "                                   strides: (stride, stride), \n",
  "                                   padding: .same, \n",
  "                                   activation: relu)\n",
  "        self.norm = LayerNorm2D(featureCount: cOut, epsilon: 1e-5)\n",
  "    }\n",
  "\n",
  "    @differentiable\n",
  "    func forward(_ input: Tensor<Float>) -> Tensor<Float> {\n",
  "        return norm.applied(to: conv.applied(to: input))\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    public func applied(to input: Tensor<Float>) -> Tensor<Float> {\n",
  "        let activation = forward(input)\n",
  "        delegate.didProduceActivation(activation)\n",
  "        return activation\n",
  "    }\n",
  "}" ] },
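{ "cell_type": "markdown", "metadata": {}, "source": [ "A quick check of what `LayerNorm2D` computes (a sketch with arbitrary values): since `scale` starts at 1 and `offset` at 0, each sample's output should have mean ~0 and variance ~1 over its own activations." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "// Sketch: arbitrary input; per-sample statistics should come out ~0 and ~1.\n",
  "let ln = LayerNorm2D<Float>(featureCount: 4)\n",
  "let lnInput = Tensor<Float>(randomNormal: [2, 3, 3, 4]) * 3 + 7\n",
  "let lnOutput = ln.applied(to: lnInput)\n",
  "print(lnOutput.mean(alongAxes: [1, 2, 3]))     // ~0 for each sample\n",
  "print(lnOutput.variance(alongAxes: [1, 2, 3])) // ~1 for each sample" ] },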
{ "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [
  "public struct CnnModelLN: Layer {\n",
  "    public var convs: [ConvLN]\n",
  "    public var pool = FAAdaptiveAvgPool2D<Float>()\n",
  "    public var flatten = Flatten<Float>()\n",
  "    public var linear: FADense<Float>\n",
  "    \n",
  "    public init(channelIn: Int, nOut: Int, filters: [Int]) {\n",
  "        convs = []\n",
  "        let allFilters = [channelIn] + filters\n",
  "        for i in 0..<filters.count { convs.append(ConvLN(allFilters[i], allFilters[i+1])) }\n",
  "        linear = FADense<Float>(inputSize: filters.last!, outputSize: nOut)\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    public func applied(to input: TF) -> TF {\n",
  "        return input.sequenced(through: convs, pool, flatten, linear)\n",
  "    }\n",
  "}" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Instance norm" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Instance norm normalizes each sample *and* each channel independently, averaging over the spatial dimensions only:" ] },
{ "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [
  "struct InstanceNorm<Scalar: TensorFlowFloatingPoint>: Norm {\n",
  "    // Configuration hyperparameters\n",
  "    @noDerivative let epsilon: Scalar\n",
  "    // Trainable parameters\n",
  "    var scale: Tensor<Scalar>\n",
  "    var offset: Tensor<Scalar>\n",
  "    \n",
  "    init(featureCount: Int, epsilon: Scalar = 1e-5) {\n",
  "        self.epsilon = epsilon\n",
  "        self.scale = Tensor(ones: [featureCount])\n",
  "        self.offset = Tensor(zeros: [featureCount])\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    func applied(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
  "        // Reduce over the spatial axes only: in NHWC layout these are 1 and 2\n",
  "        // (not 2 and 3, which would be the NCHW convention).\n",
  "        let mean = input.mean(alongAxes: [1, 2])\n",
  "        let variance = input.variance(alongAxes: [1, 2])\n",
  "        let normalizer = rsqrt(variance + epsilon) * scale\n",
  "        return (input - mean) * normalizer + offset\n",
  "    }\n",
  "}\n",
  "\n",
  "struct ConvIN: FALayer {\n",
  "    var conv: FANoBiasConv2D<Float>\n",
  "    var norm: InstanceNorm<Float>\n",
  "    @noDerivative public var delegate: LayerDelegate<Tensor<Float>> = LayerDelegate()\n",
  "    \n",
  "    init(_ cIn: Int, _ cOut: Int, ks: Int = 3, stride: Int = 2) {\n",
  "        // TODO (when control flow AD works): use Conv2D without bias\n",
  "        self.conv = FANoBiasConv2D(filterShape: (ks, ks, cIn, cOut), \n",
  "                                   strides: (stride, stride), \n",
  "                                   padding: .same, \n",
  "                                   activation: relu)\n",
  "        self.norm = InstanceNorm(featureCount: cOut, epsilon: 1e-5)\n",
  "    }\n",
  "\n",
  "    @differentiable\n",
  "    func forward(_ input: Tensor<Float>) -> Tensor<Float> {\n",
  "        return norm.applied(to: conv.applied(to: input))\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    public func applied(to input: Tensor<Float>) -> Tensor<Float> {\n",
  "        let activation = forward(input)\n",
  "        delegate.didProduceActivation(activation)\n",
  "        return activation\n",
  "    }\n",
  "}" ] },
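{ "cell_type": "markdown", "metadata": {}, "source": [ "The three layers differ only in which axes they average over (in NHWC layout). A small sketch that makes the shapes of the resulting statistics visible:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "// Sketch: input is [batch=2, height=3, width=3, channels=4].\n",
  "let t = Tensor<Float>(randomNormal: [2, 3, 3, 4])\n",
  "print(t.mean(alongAxes: [0, 1, 2]).shape) // BatchNorm:    one stat per channel\n",
  "print(t.mean(alongAxes: [1, 2, 3]).shape) // LayerNorm:    one stat per sample\n",
  "print(t.mean(alongAxes: [1, 2]).shape)    // InstanceNorm: one per sample and channel" ] },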
{ "cell_type": "markdown", "metadata": {}, "source": [ "Lost in all those norms? The authors of the [group norm paper](https://arxiv.org/pdf/1803.08494.pdf) have you covered:\n", "\n", "![Various norms](../dev_course/dl2/images/norms.png)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "TODO/skipping GroupNorm" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Running Batch Norm" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Instead of switching between batch statistics (training) and running statistics (inference), Running BatchNorm normalizes with the running statistics in *both* phases. It accumulates running sums, sums of squares, and counts, and adjusts the momentum by the batch size (large batches move the running statistics faster than small ones), so the statistics stay usable even with very small batches." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "struct RunningBatchNorm<Scalar: TensorFlowFloatingPoint>: LearningPhaseDependent, Norm {\n",
  "    // Configuration hyperparameters\n",
  "    @noDerivative let momentum: Scalar\n",
  "    @noDerivative let epsilon: Scalar\n",
  "    // Running statistics\n",
  "    @noDerivative let runningSum: Reference<Tensor<Scalar>>\n",
  "    @noDerivative let runningSumOfSquares: Reference<Tensor<Scalar>>\n",
  "    @noDerivative let runningCount: Reference<Scalar>\n",
  "    @noDerivative let samplesSeen: Reference<Int>\n",
  "    // Needed for the LearningPhaseDependent conformance\n",
  "    @noDerivative var delegate: LayerDelegate<Output> = LayerDelegate()\n",
  "    // Trainable parameters\n",
  "    var scale: Tensor<Scalar>\n",
  "    var offset: Tensor<Scalar>\n",
  "    // TODO: check why these aren't being synthesized\n",
  "    typealias Input = Tensor<Scalar>\n",
  "    typealias Output = Tensor<Scalar>\n",
  "    \n",
  "    init(featureCount: Int, momentum: Scalar, epsilon: Scalar = 1e-5) {\n",
  "        self.momentum = momentum\n",
  "        self.epsilon = epsilon\n",
  "        self.scale = Tensor(ones: [featureCount])\n",
  "        self.offset = Tensor(zeros: [featureCount])\n",
  "        self.runningSum = Reference(Tensor(0))\n",
  "        self.runningSumOfSquares = Reference(Tensor(0))\n",
  "        self.runningCount = Reference(Scalar(0))\n",
  "        self.samplesSeen = Reference(0)\n",
  "    }\n",
  "    \n",
  "    init(featureCount: Int, epsilon: Scalar = 1e-5) {\n",
  "        self.init(featureCount: featureCount, momentum: 0.9, epsilon: epsilon)\n",
  "    }\n",
  "\n",
  "    @differentiable\n",
  "    func forwardTraining(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
  "        let (batch, channels) = (input.shape[0], Scalar(input.shape[3]))\n",
  "        let sum = input.sum(alongAxes: [0, 1, 2])\n",
  "        let sumOfSquares = (input * input).sum(alongAxes: [0, 1, 2])\n",
  "        // Number of values contributing to each per-channel statistic\n",
  "        let count = Scalar(input.scalarCount).withoutDerivative() / channels\n",
  "        // Batch-size-adjusted momentum: the bigger the batch, the more we\n",
  "        // trust its statistics, so the faster the running values move\n",
  "        let mom = momentum / sqrt(Scalar(batch) - 1)\n",
  "        let runningSum = mom * self.runningSum.value + (1 - mom) * sum\n",
  "        let runningSumOfSquares = mom * self.runningSumOfSquares.value + (1 - mom) * sumOfSquares\n",
  "        let runningCount = mom * self.runningCount.value + (1 - mom) * count\n",
  "        \n",
  "        self.runningSum.value = runningSum\n",
  "        self.runningSumOfSquares.value = runningSumOfSquares\n",
  "        self.runningCount.value = runningCount\n",
  "        self.samplesSeen.value += batch\n",
  "        \n",
  "        // Normalize with the running statistics, even during training\n",
  "        let mean = runningSum / runningCount\n",
  "        let variance = runningSumOfSquares / runningCount - mean * mean\n",
  "        \n",
  "        let normalizer = rsqrt(variance + epsilon) * scale\n",
  "        return (input - mean) * normalizer + offset\n",
  "    }\n",
  "    \n",
  "    @differentiable\n",
  "    func forwardInference(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
  "        let mean = runningSum.value / runningCount.value\n",
  "        let variance = runningSumOfSquares.value / runningCount.value - mean * mean\n",
  "        let normalizer = rsqrt(variance + epsilon) * scale\n",
  "        return (input - mean) * normalizer + offset\n",
  "    }\n",
  "}" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "TODO: XLA compilation + test RBN" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Export" ] },
{ "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "notebookToScript(fname: (Path.cwd / \"07_batchnorm.ipynb\").string)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": []
} ], "metadata": { "kernelspec": { "display_name": "Swift", "language": "swift", "name": "swift" }, "language_info": { "file_extension": ".swift", "mimetype": "text/x-swift", "name": "swift", "version": "" } }, "nbformat": 4, "nbformat_minor": 2 }