{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Installing packages:\n", "\t.package(path: \"/home/ubuntu/fastai_docs/dev_swift/FastaiNotebook_06_cuda\")\n", "\t\tFastaiNotebook_06_cuda\n", "With SwiftPM flags: []\n", "Working in: /tmp/tmplakka2c8\n", "Fetching https://github.com/mxcl/Path.swift\n", "Fetching https://github.com/JustHTTP/Just\n", "Completed resolution in 3.11s\n", "Cloning https://github.com/JustHTTP/Just\n", "Resolving https://github.com/JustHTTP/Just at 0.7.1\n", "Cloning https://github.com/mxcl/Path.swift\n", "Resolving https://github.com/mxcl/Path.swift at 0.16.2\n", "Compile Swift Module 'Just' (1 sources)\n", "Compile Swift Module 'Path' (9 sources)\n", "Compile Swift Module 'FastaiNotebook_06_cuda' (10 sources)\n", "Compile Swift Module 'jupyterInstalledPackages' (1 sources)\n", "Linking ./.build/x86_64-unknown-linux/debug/libjupyterInstalledPackages.so\n", "Initializing Swift...\n", "Loading library...\n", "Installation complete!\n" ] } ], "source": [ "%install '.package(path: \"$cwd/FastaiNotebook_06_cuda\")' FastaiNotebook_06_cuda" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "('inline', 'module://ipykernel.pylab.backend_inline')\n" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import FastaiNotebook_06_cuda\n", "%include \"EnableIPythonDisplay.swift\"\n", "IPythonDisplay.shell.enable_matplotlib(\"inline\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "// export\n", "import Path\n", "import TensorFlow\n", "import Python" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "let plt = Python.import(\"matplotlib.pyplot\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "let data = mnistDataBunch(flat: false, bs: 512)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "let opt = SGD(learningRate: 0.4)\n", "func modelInit() -> CnnModel { return CnnModel(sizeIn:28, channelIn: 1, channelOut: 10, nFilters: [8, 16, 32, 32]) }\n", "let learner = Learner(data: data, lossFunction: softmaxCrossEntropy, optimizer: opt, initializingWith: modelInit)\n", "let recorder = learner.makeDefaultDelegates(metrics: [accuracy])\n", "learner.delegates.append(learner.makeNormalize(mean: mnistStats.mean, std: mnistStats.std))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 0: [0.3132431, 0.9025] \n", "25091.188661 ms \n" ] } ], "source": [ "time { try! learner.fit(1) }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Batchnorm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Custom" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start by building our own `BatchNorm` layer from scratch. 
Eventually we intend for this code to do the trick:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "struct AlmostBatchNorm: Differentiable {\n", " // Configuration hyperparameters\n", " @noDerivative let momentum: Scalar\n", " @noDerivative let epsilon: Scalar\n", " // Running statistics\n", " @noDerivative var runningMean: Tensor\n", " @noDerivative var runningVariance: Tensor\n", " // Trainable parameters\n", " var scale: Tensor\n", " var offset: Tensor\n", " \n", " init(featureCount: Int, momentum: Scalar = 0.9, epsilon: Scalar = 1e-5) {\n", " self.momentum = momentum\n", " self.epsilon = epsilon\n", " self.scale = Tensor(ones: [Int32(featureCount)])\n", " self.offset = Tensor(zeros: [Int32(featureCount)])\n", " self.runningMean = Tensor(0)\n", " self.runningVariance = Tensor(1)\n", " }\n", "\n", " mutating func applied(to input: Tensor, in context: Context) -> Tensor {\n", " let mean: Tensor\n", " let variance: Tensor\n", " switch context.learningPhase {\n", " case .training:\n", " mean = input.mean(alongAxes: [0, 1, 2])\n", " variance = input.variance(alongAxes: [0, 1, 2])\n", " runningMean += (mean - runningMean) * (1 - momentum)\n", " runningVariance += (variance - runningVariance) * (1 - momentum)\n", " case .inference:\n", " mean = runningMean\n", " variance = runningVariance\n", " }\n", " let normalizer = rsqrt(variance + epsilon) * scale\n", " return (input - mean) * normalizer + offset\n", " }\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But there are some automatic differentiation limitations (control flow support) and `Layer` protocol constraints (mutating `applied`) that make this impossible for now (note the lack of `@differentiable` or a `Layer` conformance), so we'll need a few workarounds. 
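  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Quick aside: the update `runningMean += (mean - runningMean) * (1 - momentum)` above is just an exponentially weighted moving average written incrementally. A tiny, hypothetical sanity check (plain Swift, not part of the lesson code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "// Hypothetical sketch: the incremental update is equivalent to\n",
    "// running = momentum * running + (1 - momentum) * batchMean\n",
    "var running: Float = 0\n",
    "let momentum: Float = 0.9\n",
    "for batchMean: Float in [10, 10, 10] {\n",
    "    running += (batchMean - running) * (1 - momentum)\n",
    "}\n",
    "print(running) // 2.71: drifts toward 10, retaining `momentum` worth of history"
   ]
  },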
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "But there are some automatic differentiation limitations (control flow support) and `Layer` protocol constraints (mutating `applied`) that make this impossible for now (note the lack of `@differentiable` or a `Layer` conformance), so we'll need a few workarounds. \n",
    "A `Reference` will let us update running statistics without declaring the `applied` method `mutating`; because `Reference` is a class, even a `let` property can have its `value` updated:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//export\n",
    "class Reference<T> {\n",
    "    var value: T\n",
    "    init(_ value: T) {\n",
    "        self.value = value\n",
    "    }\n",
    "}"
   ]
  },
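  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the reference semantics at work, here is a minimal, hypothetical sketch (not part of the exported code): copies of a struct share the same `Reference` instance, so a non-`mutating` method can still update the wrapped value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "// Hypothetical example: `increment` isn't `mutating`, yet it updates state,\n",
    "// because only the class instance behind `count` changes, not the struct.\n",
    "struct Counter {\n",
    "    let count = Reference<Int>(0)\n",
    "    func increment() { count.value += 1 }\n",
    "}\n",
    "let a = Counter()\n",
    "let b = a // copies the struct, but `a` and `b` share one Reference\n",
    "b.increment()\n",
    "print(a.count.value) // 1"
   ]
  },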
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following snippet will let us differentiate a layer's `applied` method if it's composed of training and inference implementations that are each differentiable:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//export\n",
    "protocol LearningPhaseDependent: Layer {\n",
    "    @differentiable func applyingTraining(to input: Input) -> Output\n",
    "    @differentiable func applyingInference(to input: Input) -> Output\n",
    "}\n",
    "\n",
    "extension LearningPhaseDependent {\n",
    "    func applied(to input: Input, in context: Context) -> Output {\n",
    "        switch context.learningPhase {\n",
    "        case .training: return applyingTraining(to: input)\n",
    "        case .inference: return applyingInference(to: input)\n",
    "        }\n",
    "    }\n",
    "\n",
    "    @differentiating(applied)\n",
    "    func gradApplied(to input: Input, in context: Context) ->\n",
    "        (value: Output, pullback: (Output.CotangentVector) ->\n",
    "            (Self.CotangentVector, Input.CotangentVector)) {\n",
    "        switch context.learningPhase {\n",
    "        case .training:\n",
    "            return valueWithPullback(at: input) {\n",
    "                $0.applyingTraining(to: $1)\n",
    "            }\n",
    "        case .inference:\n",
    "            return valueWithPullback(at: input) {\n",
    "                $0.applyingInference(to: $1)\n",
    "            }\n",
    "        }\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can implement a `BatchNorm` that we can use in our models:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//export\n",
    "protocol Norm: Layer {\n",
    "    associatedtype Scalar: TensorFlowFloatingPoint\n",
    "    typealias Input = Tensor<Scalar>\n",
    "    typealias Output = Tensor<Scalar>\n",
    "    init(featureCount: Int, epsilon: Scalar)\n",
    "}\n",
    "\n",
    "struct FABatchNorm<Scalar: TensorFlowFloatingPoint>: LearningPhaseDependent, Norm {\n",
    "    // Configuration hyperparameters\n",
    "    @noDerivative let momentum: Scalar\n",
    "    @noDerivative let epsilon: Scalar\n",
    "    // Running statistics\n",
    "    @noDerivative let runningMean: Reference<Tensor<Scalar>>\n",
    "    @noDerivative let runningVariance: Reference<Tensor<Scalar>>\n",
    "    // Trainable parameters\n",
    "    var scale: Tensor<Scalar>\n",
    "    var offset: Tensor<Scalar>\n",
    "    // TODO: check why these aren't being synthesized\n",
    "    typealias Input = Tensor<Scalar>\n",
    "    typealias Output = Tensor<Scalar>\n",
    "    \n",
    "    init(featureCount: Int, momentum: Scalar, epsilon: Scalar = 1e-5) {\n",
    "        self.momentum = momentum\n",
    "        self.epsilon = epsilon\n",
    "        self.scale = Tensor(ones: [Int32(featureCount)])\n",
    "        self.offset = Tensor(zeros: [Int32(featureCount)])\n",
    "        self.runningMean = Reference(Tensor(0))\n",
    "        self.runningVariance = Reference(Tensor(1))\n",
    "    }\n",
    "    \n",
    "    init(featureCount: Int, epsilon: Scalar = 1e-5) {\n",
    "        self.init(featureCount: featureCount, momentum: 0.9, epsilon: epsilon)\n",
    "    }\n",
    "\n",
    "    @differentiable\n",
    "    func applyingTraining(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
    "        let mean = input.mean(alongAxes: [0, 1, 2])\n",
    "        let variance = input.variance(alongAxes: [0, 1, 2])\n",
    "        runningMean.value += (mean - runningMean.value) * (1 - momentum)\n",
    "        runningVariance.value += (variance - runningVariance.value) * (1 - momentum)\n",
    "        let normalizer = rsqrt(variance + epsilon) * scale\n",
    "        return (input - mean) * normalizer + offset\n",
    "    }\n",
    "    \n",
    "    @differentiable\n",
    "    func applyingInference(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
    "        let mean = runningMean.value\n",
    "        let variance = runningVariance.value\n",
    "        let normalizer = rsqrt(variance + epsilon) * scale\n",
    "        return (input - mean) * normalizer + offset\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//export\n",
    "struct ConvBN<Scalar: TensorFlowFloatingPoint>: Layer {\n",
    "    var conv: Conv2D<Scalar>\n",
    "    var norm: FABatchNorm<Scalar>\n",
    "    init(\n",
    "        filterShape: (Int, Int, Int, Int),\n",
    "        strides: (Int, Int) = (1, 1),\n",
    "        padding: Padding = .valid,\n",
    "        activation: @escaping Conv2D<Scalar>.Activation = identity\n",
    "    ) {\n",
    "        // TODO (when control flow AD works): use Conv2D without bias\n",
    "        self.conv = Conv2D(\n",
    "            filterShape: filterShape,\n",
    "            strides: strides,\n",
    "            padding: padding,\n",
    "            activation: activation)\n",
    "        self.norm = FABatchNorm(featureCount: filterShape.3, epsilon: 1e-5)\n",
    "    }\n",
    "\n",
    "    @differentiable\n",
    "    func applied(to input: Tensor<Scalar>, in context: Context) -> Tensor<Scalar> {\n",
    "        return norm.applied(to: conv.applied(to: input, in: context), in: context)\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "// Would be great if this generic could work\n",
    "// struct ConvNorm<NormType: Norm, Scalar: TensorFlowFloatingPoint>: Layer\n",
    "//     where NormType.Scalar == Scalar {\n",
    "//     var conv: Conv2D<Scalar>\n",
    "//     var norm: NormType\n",
    "//     init(\n",
    "//         filterShape: (Int, Int, Int, Int),\n",
    "//         strides: (Int, Int) = (1, 1),\n",
    "//         padding: Padding = .valid,\n",
    "//         activation: @escaping Conv2D<Scalar>.Activation = identity\n",
    "//     ) {\n",
    "//         // TODO (when control flow AD works): use Conv2D without bias\n",
    "//         self.conv = Conv2D(\n",
    "//             filterShape: filterShape,\n",
    "//             strides: strides,\n",
    "//             padding: padding,\n",
    "//             activation: activation)\n",
    "//         self.norm = NormType.init(featureCount: filterShape.3, epsilon: 1e-5)\n",
    "//     }\n",
    "\n",
    "//     @differentiable\n",
    "//     func applied(to input: Tensor<Scalar>, in context: Context) -> Tensor<Scalar> {\n",
    "//         return norm.applied(to: conv.applied(to: input, in: context), in: context)\n",
    "//     }\n",
    "// }\n",
    "// typealias ConvBN = ConvNorm<FABatchNorm<Float>, Float>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//export\n",
    "struct CnnModelBN: Layer {\n",
    "    var reshapeToSquare = Reshape<Float>([-1, 28, 28, 1])\n",
    "    var conv1 = ConvBN<Float>(\n",
    "        filterShape: (5, 5, 1, 8),\n",
    "        strides: (2, 2),\n",
    "        padding: .same,\n",
    "        activation: relu)\n",
    "    var conv2 = ConvBN<Float>(\n",
    "        filterShape: (3, 3, 8, 16),\n",
    "        strides: (2, 2),\n",
    "        padding: .same,\n",
    "        activation: relu)\n",
    "    var conv3 = ConvBN<Float>(\n",
    "        filterShape: (3, 3, 16, 32),\n",
    "        strides: (2, 2),\n",
    "        padding: .same,\n",
    "        activation: relu)\n",
    "    var conv4 = ConvBN<Float>(\n",
    "        filterShape: (3, 3, 32, 32),\n",
    "        strides: (2, 2),\n",
    "        padding: .same,\n",
    "        activation: relu)\n",
    "    \n",
    "    var pool = AvgPool2D<Float>(poolSize: (2, 2), strides: (1, 1))\n",
    "    \n",
    "    var flatten = Flatten<Float>()\n",
    "    var linear = Dense<Float>(inputSize: 32, outputSize: 10)\n",
    "    \n",
    "    @differentiable\n",
    "    func applied(to input: Tensor<Float>, in context: Context) -> Tensor<Float> {\n",
    "        // There isn't a \"sequenced\" defined with enough layers.\n",
    "        let intermediate = input.sequenced(\n",
    "            in: context,\n",
    "            through: reshapeToSquare, conv1, conv2, conv3, conv4)\n",
    "        return intermediate.sequenced(in: context, through: pool, flatten, linear)\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "let opt = SGD<CnnModelBN, Float>(learningRate: 0.4)\n",
    "func modelInit() -> CnnModelBN { return CnnModelBN() }\n",
    "let learner = Learner(data: data, lossFunction: softmaxCrossEntropy, optimizer: opt, initializingWith: modelInit)\n",
    "let recorder = learner.makeDefaultDelegates(metrics: [accuracy])\n",
    "learner.delegates.append(learner.makeNormalize(mean: mnistStats.mean, std: mnistStats.std))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 0: [0.103150696, 0.9708] \n",
      "28583.478799 ms \n"
     ]
    }
   ],
   "source": [
    "time { try! learner.fit(1) }"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TODO: hooks/LayerDelegates"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## More norms"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Layer norm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "From [the paper](https://arxiv.org/abs/1607.06450): \"*batch normalization cannot be applied to online learning tasks or to extremely large distributed models where the minibatches have to be small*\"."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "General equation for a norm layer with a learnable affine:\n",
    "\n",
    "$$y = \\frac{x - \\mathrm{E}[x]}{\\sqrt{\\mathrm{Var}[x] + \\epsilon}} * \\gamma + \\beta$$\n",
    "\n",
    "The differences from BatchNorm are:\n",
    "1. we don't keep a moving average\n",
    "2. we don't average over the batch dimension but over the hidden dimensions, so the statistics are independent of the batch size"
   ]
  },
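  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the axes concrete, here is a hypothetical mini-example (not part of the exported code) that layer-normalizes a tiny batch by hand; each sample is standardized on its own, so the result is unchanged if other samples are added or removed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "// Hypothetical sketch: layer norm by hand on two 4-element activations.\n",
    "let x = Tensor<Float>(shape: [2, 4], scalars: [1, 2, 3, 4, 2, 4, 6, 8])\n",
    "let mean = x.mean(alongAxes: [1])         // per-sample mean over the hidden dim\n",
    "let variance = x.variance(alongAxes: [1]) // per-sample variance\n",
    "let eps: Float = 1e-5\n",
    "print((x - mean) * rsqrt(variance + eps)) // each row now has mean 0 and std 1"
   ]
  },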
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "struct LayerNorm2D<Scalar: TensorFlowFloatingPoint>: Norm {\n",
    "    // Configuration hyperparameters\n",
    "    @noDerivative let epsilon: Scalar\n",
    "    // Trainable parameters\n",
    "    var scale: Tensor<Scalar>\n",
    "    var offset: Tensor<Scalar>\n",
    "    \n",
    "    init(featureCount: Int, epsilon: Scalar = 1e-5) {\n",
    "        self.epsilon = epsilon\n",
    "        self.scale = Tensor(ones: [Int32(featureCount)])\n",
    "        self.offset = Tensor(zeros: [Int32(featureCount)])\n",
    "    }\n",
    "    \n",
    "    @differentiable\n",
    "    func applied(to input: Tensor<Scalar>, in context: Context) -> Tensor<Scalar> {\n",
    "        let mean = input.mean(alongAxes: [1, 2, 3])\n",
    "        let variance = input.variance(alongAxes: [1, 2, 3])\n",
    "        let normalizer = rsqrt(variance + epsilon) * scale\n",
    "        return (input - mean) * normalizer + offset\n",
    "    }\n",
    "}\n",
    "\n",
    "struct ConvLN<Scalar: TensorFlowFloatingPoint>: Layer {\n",
    "    var conv: Conv2D<Scalar>\n",
    "    var norm: LayerNorm2D<Scalar>\n",
    "    init(\n",
    "        filterShape: (Int, Int, Int, Int),\n",
    "        strides: (Int, Int) = (1, 1),\n",
    "        padding: Padding = .valid,\n",
    "        activation: @escaping Conv2D<Scalar>.Activation = identity\n",
    "    ) {\n",
    "        // TODO (when control flow AD works): use Conv2D without bias\n",
    "        self.conv = Conv2D(\n",
    "            filterShape: filterShape,\n",
    "            strides: strides,\n",
    "            padding: padding,\n",
    "            activation: activation)\n",
    "        self.norm = LayerNorm2D(featureCount: filterShape.3, epsilon: 1e-5)\n",
    "    }\n",
    "\n",
    "    @differentiable\n",
    "    func applied(to input: Tensor<Scalar>, in context: Context) -> Tensor<Scalar> {\n",
    "        return norm.applied(to: conv.applied(to: input, in: context), in: context)\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "struct CnnModelLN: Layer {\n",
    "    var reshapeToSquare = Reshape<Float>([-1, 28, 28, 1])\n",
    "    var conv1 = ConvLN<Float>(\n",
    "        filterShape: (5, 5, 1, 8),\n",
    "        strides: (2, 2),\n",
    "        padding: .same,\n",
    "        activation: relu)\n",
    "    var conv2 = ConvLN<Float>(\n",
    "        filterShape: (3, 3, 8, 16),\n",
    "        strides: (2, 2),\n",
    "        padding: .same,\n",
    "        activation: relu)\n",
    "    var conv3 = ConvLN<Float>(\n",
    "        filterShape: (3, 3, 16, 32),\n",
    "        strides: (2, 2),\n",
    "        padding: .same,\n",
    "        activation: relu)\n",
    "    var conv4 = ConvLN<Float>(\n",
    "        filterShape: (3, 3, 32, 32),\n",
    "        strides: (2, 2),\n",
    "        padding: .same,\n",
    "        activation: relu)\n",
    "    \n",
    "    var pool = AvgPool2D<Float>(poolSize: (2, 2), strides: (1, 1))\n",
    "    \n",
    "    var flatten = Flatten<Float>()\n",
    "    var linear = Dense<Float>(inputSize: 32, outputSize: 10)\n",
    "    \n",
    "    @differentiable\n",
    "    func applied(to input: Tensor<Float>, in context: Context) -> Tensor<Float> {\n",
    "        // There isn't a \"sequenced\" defined with enough layers.\n",
    "        let intermediate = input.sequenced(\n",
    "            in: context,\n",
    "            through: reshapeToSquare, conv1, conv2, conv3, conv4)\n",
    "        return intermediate.sequenced(in: context, through: pool, flatten, linear)\n",
    "    }\n",
    "}\n",
    "\n",
    "let opt = SGD<CnnModelLN, Float>(learningRate: 0.4)\n",
    "func modelInit() -> CnnModelLN { return CnnModelLN() }\n",
    "let learner = Learner(data: data, lossFunction: softmaxCrossEntropy, optimizer: opt, initializingWith: modelInit)\n",
    "let recorder = learner.makeDefaultDelegates(metrics: [accuracy])\n",
    "\n",
    "time { try! learner.fit(1) }"
   ]
  },
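  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Instance norm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[Instance norm](https://arxiv.org/abs/1607.08022), from the style-transfer literature, goes one step further: every sample *and* every channel gets its own statistics, computed over the spatial dimensions only."
   ]
  },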
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "struct InstanceNorm<Scalar: TensorFlowFloatingPoint>: Norm {\n",
    "    // Configuration hyperparameters\n",
    "    @noDerivative let epsilon: Scalar\n",
    "    // Trainable parameters\n",
    "    var scale: Tensor<Scalar>\n",
    "    var offset: Tensor<Scalar>\n",
    "    \n",
    "    init(featureCount: Int, epsilon: Scalar = 1e-5) {\n",
    "        self.epsilon = epsilon\n",
    "        self.scale = Tensor(ones: [Int32(featureCount)])\n",
    "        self.offset = Tensor(zeros: [Int32(featureCount)])\n",
    "    }\n",
    "    \n",
    "    @differentiable\n",
    "    func applied(to input: Tensor<Scalar>, in context: Context) -> Tensor<Scalar> {\n",
    "        // Our data layout is NHWC, so the spatial axes are 1 and 2.\n",
    "        let mean = input.mean(alongAxes: [1, 2])\n",
    "        let variance = input.variance(alongAxes: [1, 2])\n",
    "        let normalizer = rsqrt(variance + epsilon) * scale\n",
    "        return (input - mean) * normalizer + offset\n",
    "    }\n",
    "}\n",
    "\n",
    "struct ConvIN<Scalar: TensorFlowFloatingPoint>: Layer {\n",
    "    var conv: Conv2D<Scalar>\n",
    "    var norm: InstanceNorm<Scalar>\n",
    "    init(\n",
    "        filterShape: (Int, Int, Int, Int),\n",
    "        strides: (Int, Int) = (1, 1),\n",
    "        padding: Padding = .valid,\n",
    "        activation: @escaping Conv2D<Scalar>.Activation = identity\n",
    "    ) {\n",
    "        // TODO (when control flow AD works): use Conv2D without bias\n",
    "        self.conv = Conv2D(\n",
    "            filterShape: filterShape,\n",
    "            strides: strides,\n",
    "            padding: padding,\n",
    "            activation: activation)\n",
    "        self.norm = InstanceNorm(featureCount: filterShape.3, epsilon: 1e-5)\n",
    "    }\n",
    "\n",
    "    @differentiable\n",
    "    func applied(to input: Tensor<Scalar>, in context: Context) -> Tensor<Scalar> {\n",
    "        return norm.applied(to: conv.applied(to: input, in: context), in: context)\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Lost in all those norms? The authors of the [group norm paper](https://arxiv.org/pdf/1803.08494.pdf) have you covered:\n",
    "\n",
    "![Various norms](../dev_course/dl2/images/norms.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TODO/skipping GroupNorm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Running Batch Norm"
   ]
  },
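  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Instead of smoothing the per-batch *statistics*, the layer below smooths running *sums*, *sums of squares*, and *counts*, then derives the mean and variance from those accumulators. A quick, hypothetical arithmetic check of the identity it relies on (not part of the exported code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "// Hypothetical sketch: mean and variance recovered from running sums,\n",
    "// using Var[x] = E[x^2] - E[x]^2 as in `applyingTraining` below.\n",
    "let xs: [Float] = [1, 2, 3, 4]\n",
    "let sum = xs.reduce(0, +)                          // 10\n",
    "let sumOfSquares = xs.map { $0 * $0 }.reduce(0, +) // 30\n",
    "let count = Float(xs.count)\n",
    "let mean = sum / count                             // 2.5\n",
    "let variance = sumOfSquares / count - mean * mean  // 30/4 - 2.5^2 = 1.25\n",
    "print(mean, variance)"
   ]
  },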
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "struct RunningBatchNorm<Scalar: TensorFlowFloatingPoint>: LearningPhaseDependent, Norm {\n",
    "    // Configuration hyperparameters\n",
    "    @noDerivative let momentum: Scalar\n",
    "    @noDerivative let epsilon: Scalar\n",
    "    // Running statistics\n",
    "    @noDerivative let runningSum: Reference<Tensor<Scalar>>\n",
    "    @noDerivative let runningSumOfSquares: Reference<Tensor<Scalar>>\n",
    "    @noDerivative let runningCount: Reference<Scalar>\n",
    "    @noDerivative let samplesSeen: Reference<Int32>\n",
    "    // Trainable parameters\n",
    "    var scale: Tensor<Scalar>\n",
    "    var offset: Tensor<Scalar>\n",
    "    // TODO: check why these aren't being synthesized\n",
    "    typealias Input = Tensor<Scalar>\n",
    "    typealias Output = Tensor<Scalar>\n",
    "    \n",
    "    init(featureCount: Int, momentum: Scalar, epsilon: Scalar = 1e-5) {\n",
    "        self.momentum = momentum\n",
    "        self.epsilon = epsilon\n",
    "        self.scale = Tensor(ones: [Int32(featureCount)])\n",
    "        self.offset = Tensor(zeros: [Int32(featureCount)])\n",
    "        self.runningSum = Reference(Tensor(0))\n",
    "        self.runningSumOfSquares = Reference(Tensor(0))\n",
    "        self.runningCount = Reference(Scalar(0))\n",
    "        self.samplesSeen = Reference(0)\n",
    "    }\n",
    "    \n",
    "    init(featureCount: Int, epsilon: Scalar = 1e-5) {\n",
    "        self.init(featureCount: featureCount, momentum: 0.9, epsilon: epsilon)\n",
    "    }\n",
    "\n",
    "    @differentiable\n",
    "    func applyingTraining(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
    "        let (batch, channels) = (input.shape[0], Scalar(input.shape[3]))\n",
    "        let sum = input.sum(alongAxes: [0, 1, 2])\n",
    "        let sumOfSquares = (input * input).sum(alongAxes: [0, 1, 2])\n",
    "        let count = Scalar(input.scalarCount).withoutDerivative() / channels\n",
    "        let mom = momentum / sqrt(Scalar(batch) - 1)\n",
    "        let runningSum = mom * self.runningSum.value + (1 - mom) * sum\n",
    "        let runningSumOfSquares = mom * self.runningSumOfSquares.value + (\n",
    "            1 - mom) * sumOfSquares\n",
    "        let runningCount = mom * self.runningCount.value + (1 - mom) * count\n",
    "        \n",
    "        self.runningSum.value = runningSum\n",
    "        self.runningSumOfSquares.value = runningSumOfSquares\n",
    "        self.runningCount.value = runningCount\n",
    "        self.samplesSeen.value += batch\n",
    "        \n",
    "        let mean = runningSum / runningCount\n",
    "        let variance = runningSumOfSquares / runningCount - mean * mean\n",
    "        \n",
    "        let normalizer = rsqrt(variance + epsilon) * scale\n",
    "        return (input - mean) * normalizer + offset\n",
    "    }\n",
    "    \n",
    "    @differentiable\n",
    "    func applyingInference(to input: Tensor<Scalar>) -> Tensor<Scalar> {\n",
    "        let mean = runningSum.value / runningCount.value\n",
    "        let variance = runningSumOfSquares.value / runningCount.value - mean * mean\n",
    "        let normalizer = rsqrt(variance + epsilon) * scale\n",
    "        return (input - mean) * normalizer + offset\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TODO: XLA compilation + test RBN"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Export"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "notebookToScript(fname: (Path.cwd / \"07_batchnorm.ipynb\").string)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Swift",
   "language": "swift",
   "name": "swift"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}