{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Quickstart\n", "\n", "This notebook demonstrates how you can find adversarial examples for a pre-trained example network on the MNIST dataset.\n", "\n", "We suggest having the `Gurobi` solver installed, since its performance is significantly faster. If this is not possible, the `Cbc` solver is another option.\n", "\n", "The `Images` package is only necessary for visualizing the sample images." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "┌ Info: Precompiling MIPVerify [e5e5f8be-2a6a-5994-adbb-5afbd0e30425]\n", "└ @ Base loading.jl:1260\n", "┌ Info: Precompiling Images [916415d5-f1e6-5110-898d-aaa5f9f070e0]\n", "└ @ Base loading.jl:1260\n" ] } ], "source": [ "using MIPVerify\n", "using Gurobi\n", "using Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup\n", "\n", "### MNIST dataset\n", "\n", "We begin by loading the MNIST dataset. The data is provided as a Julia `struct` for easy access. The training images and test images are provided as a 4-dimensional array of size `(num_samples, height, width, num_channels)`." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "mnist:\n", " `train`: {LabelledImageDataset}\n", " `images`: 60000 images of size (28, 28, 1), with pixels in [0.0, 1.0].\n", " `labels`: 60000 corresponding labels, with 10 unique labels in [0, 9].\n", " `test`: {LabelledImageDataset}\n", " `images`: 10000 images of size (28, 28, 1), with pixels in [0.0, 1.0].\n", " `labels`: 10000 corresponding labels, with 10 unique labels in [0, 9]." ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mnist = MIPVerify.read_datasets(\"MNIST\")" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{LabelledImageDataset}\n", " `images`: 60000 images of size (28, 28, 1), with pixels in [0.0, 1.0].\n", " `labels`: 60000 corresponding labels, with 10 unique labels in [0, 9]." ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mnist.train" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(60000, 28, 28, 1)" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "size(mnist.train.images)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "60000-element Array{UInt8,1}:\n", " 0x05\n", " 0x00\n", " 0x04\n", " 0x01\n", " 0x09\n", " 0x02\n", " 0x01\n", " 0x03\n", " 0x01\n", " 0x04\n", " 0x03\n", " 0x05\n", " 0x03\n", " ⋮\n", " 0x07\n", " 0x08\n", " 0x09\n", " 0x02\n", " 0x09\n", " 0x05\n", " 0x01\n", " 0x08\n", " 0x03\n", " 0x05\n", " 0x06\n", " 0x08" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mnist.train.labels" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sample Neural Network\n", "\n", "We import a sample pre-trained neural network. 
" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "sequential net MNIST.n1\n", " (1) Flatten(): flattens 4 dimensional input, with dimensions permuted according to the order [4, 3, 2, 1]\n", " (2) Linear(784 -> 40)\n", " (3) ReLU()\n", " (4) Linear(40 -> 20)\n", " (5) ReLU()\n", " (6) Linear(20 -> 10)\n" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "n1 = MIPVerify.get_example_network_params(\"MNIST.n1\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`MIPVerify.frac_correct` allows us to verify that the network has a reasonable accuracy on the test set of 96.95%. (This step is crucial when working with your own neural net parameters; since the training is done outside of Julia, a common mistake is to transfer the parameters incorrectly.)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32mComputing fraction correct...100%|██████████████████████| Time: 0:00:02\u001b[39m\n" ] }, { "data": { "text/plain": [ "0.9695" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "MIPVerify.frac_correct(n1, mnist.test, 10000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We feed the first image into the neural net, obtaining the activations of the final softmax layer. \n", "\n", "Note that the image must be specified as a 4-dimensional array with size `(1, height, width, num_channels)`. We provide a helper function `MIPVerify.get_image` that extracts the image from the dataset while preserving all four dimensions." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1×28×28×1 Array{Float64,4}:\n", "[:, :, 1, 1] =\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n", "\n", "[:, :, 2, 1] =\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n", "\n", "[:, :, 3, 1] =\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n", "\n", "...\n", "\n", "[:, :, 26, 1] =\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n", "\n", "[:, :, 27, 1] =\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n", "\n", "[:, :, 28, 1] =\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sample_image = MIPVerify.get_image(mnist.test.images, 1)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "10-element Array{Float64,1}:\n", " -0.02074390040759505\n", " -0.017499541361042703\n", " 0.16707187742051954\n", " -0.05323712887827292\n", " -0.019291011852467455\n", " -0.07951546424946399\n", " 0.06191130931372918\n", " 4.833970937815984\n", " 0.46706000134294867\n", " 0.40145201599055125" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "output_activations = sample_image |> n1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The category that has the largest activation is category 8, corresponding to a label of 7." 
] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "7" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "(output_activations |> MIPVerify.get_max_index) - 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This matches the true label." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "7" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "MIPVerify.get_label(mnist.test.labels, 1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Finding an Adversarial Example\n", "\n", "We now try to find the closest $L_infty$ norm adversarial example to the first image, setting the target category as index `10` (corresponding to a true label of 9). Note that we restrict the search space to a distance of `0.05` around the original image via the specified `pp`." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Academic license - for non-commercial use only\n", "\u001b[36m[notice | MIPVerify]: Attempting to find adversarial example. Neural net predicted label is 8, target labels are [10]\u001b[39m\n", "\u001b[36m[notice | MIPVerify]: Determining upper and lower bounds for the input to each non-linear unit.\u001b[39m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32m Calculating upper bounds: 100%|███████████████████████| Time: 0:00:00\u001b[39m\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Academic license - for non-commercial use only\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32m Calculating lower bounds: 100%|███████████████████████| Time: 0:00:00\u001b[39m\n", "\u001b[32m Imposing relu constraint: 100%|███████████████████████| Time: 0:00:00\u001b[39m\n", "\u001b[32m Calculating upper bounds: 10%|██▎ | ETA: 0:02:41\u001b[39m" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Academic license - for non-commercial use only\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32m Calculating upper bounds: 100%|███████████████████████| Time: 0:00:26\u001b[39m\n", "\u001b[32m Calculating lower bounds: 100%|███████████████████████| Time: 0:00:08\u001b[39m\n", "\u001b[32m Imposing relu constraint: 100%|███████████████████████| Time: 0:00:00\u001b[39m\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Academic license - for non-commercial use only\n", "Academic license - for non-commercial use only\n", "Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\n", "Optimize a model with 3249 rows, 2405 columns and 54806 nonzeros\n", "Model fingerprint: 0x64c1ce1f\n", "Variable types: 2379 continuous, 26 integer (26 binary)\n", "Coefficient statistics:\n", " Matrix range [1e-05, 3e+01]\n", " Objective range [1e+00, 1e+00]\n", " Bounds range [5e-03, 1e+01]\n", " RHS range [4e-03, 3e+01]\n", "Presolve removed 2263 rows and 1564 columns\n", "Presolve time: 0.16s\n", "Presolved: 986 rows, 841 columns, 45508 nonzeros\n", "Variable types: 815 continuous, 26 integer (26 binary)\n", "\n", "Root relaxation: objective 6.198262e-04, 1141 iterations, 0.07 seconds\n", "\n", " Nodes | Current Node | Objective Bounds | Work\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", "\n", " 0 0 0.00062 0 16 - 0.00062 - - 0s\n", " 0 0 0.00524 0 18 - 0.00524 - - 0s\n", " 0 0 0.00545 0 18 - 
{ "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Academic license - for non-commercial use only\n", "\u001b[36m[notice | MIPVerify]: Attempting to find adversarial example. Neural net predicted label is 8, target labels are [10]\u001b[39m\n", "\u001b[36m[notice | MIPVerify]: Determining upper and lower bounds for the input to each non-linear unit.\u001b[39m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32m Calculating upper bounds: 100%|███████████████████████| Time: 0:00:00\u001b[39m\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Academic license - for non-commercial use only\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32m Calculating lower bounds: 100%|███████████████████████| Time: 0:00:00\u001b[39m\n", "\u001b[32m Imposing relu constraint: 100%|███████████████████████| Time: 0:00:00\u001b[39m\n", "\u001b[32m Calculating upper bounds: 10%|██▎ | ETA: 0:02:41\u001b[39m" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Academic license - for non-commercial use only\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32m Calculating upper bounds: 100%|███████████████████████| Time: 0:00:26\u001b[39m\n", "\u001b[32m Calculating lower bounds: 100%|███████████████████████| Time: 0:00:08\u001b[39m\n", "\u001b[32m Imposing relu constraint: 100%|███████████████████████| Time: 0:00:00\u001b[39m\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Academic license - for non-commercial use only\n", "Academic license - for non-commercial use only\n", "Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\n", "Optimize a model with 3249 rows, 2405 columns and 54806 nonzeros\n", "Model fingerprint: 0x64c1ce1f\n", "Variable types: 2379 continuous, 26 integer (26 binary)\n", "Coefficient statistics:\n", " Matrix range [1e-05, 3e+01]\n", " Objective range [1e+00, 1e+00]\n", " Bounds range [5e-03, 1e+01]\n", " RHS range [4e-03, 3e+01]\n", "Presolve removed 2263 rows and 1564 columns\n", "Presolve time: 0.16s\n", "Presolved: 986 rows, 841 columns, 45508 nonzeros\n", "Variable types: 815 continuous, 26 integer (26 binary)\n", "\n", "Root relaxation: objective 6.198262e-04, 1141 iterations, 0.07 seconds\n", "\n", " Nodes | Current Node | Objective Bounds | Work\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", "\n", " 0 0 0.00062 0 16 - 0.00062 - - 0s\n", " 0 0 0.00524 0 18 - 0.00524 - - 0s\n", " 0 0 0.00545 0 18 - 0.00545 - - 0s\n", " 0 0 0.00568 0 18 - 0.00568 - - 0s\n", " 0 0 0.00569 0 18 - 0.00569 - - 0s\n", " 0 0 0.00570 0 18 - 0.00570 - - 0s\n", " 0 0 0.00570 0 18 - 0.00570 - - 0s\n", " 0 0 0.00572 0 18 - 0.00572 - - 0s\n", " 0 0 0.00572 0 18 - 0.00572 - - 0s\n", " 0 0 0.00572 0 18 - 0.00572 - - 0s\n", "H 0 0 0.0462748 0.00572 87.6% - 1s\n", " 0 2 0.00573 0 17 0.04627 0.00573 87.6% - 1s\n", "H 59 2 0.0460847 0.04270 7.35% 71.8 2s\n", "\n", "Cutting planes:\n", " MIR: 10\n", " RLT: 3\n", "\n", "Explored 70 nodes (5782 simplex iterations) in 2.53 seconds\n", "Thread count was 4 (of 4 available processors)\n", "\n", "Solution count 2: 0.0460847 0.0462748 \n", "\n", "Optimal solution found (tolerance 1.00e-04)\n", "Best objective 4.608468158892e-02, best bound 4.608468158892e-02, gap 0.0000%\n" ] }, { "data": { "text/plain": [ "Dict{Any,Any} with 11 entries:\n", " :TargetIndexes => [10]\n", " :SolveTime => 2.52689\n", " :TotalTime => 55.0765\n", " :Perturbation => JuMP.VariableRef[noname noname … noname noname]…\n", " :PerturbedInput => JuMP.VariableRef[noname noname … noname noname]…\n", " :TighteningApproach => \"mip\"\n", " :PerturbationFamily => linf-norm-bounded-0.05\n", " :SolveStatus => OPTIMAL\n", " :Model => A JuMP Model…\n", " :Output => JuMP.GenericAffExpr{Float64,JuMP.VariableRef}[-0.01206…\n", " :PredictedIndex => 8" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "target_label_index = 10\n", "d = MIPVerify.find_adversarial_example(\n", " n1, \n", " sample_image, \n", " target_label_index, \n", " Gurobi.Optimizer, \n", " Dict(),\n", " norm_order = Inf,\n", " pp = MIPVerify.LInfNormBoundedPerturbationFamily(0.05)\n", ")" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1×28×28×1 Array{Float64,4}:\n", "[:, :, 1, 1] =\n", " 0.0460847 0.0 0.0 0.0460847 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0460847\n", "\n", "[:, :, 2, 1] =\n", " 0.0 0.0 0.0 0.0460847 0.0460847 0.0 … 0.0 0.0 0.0 0.0 0.0460847\n", "\n", "[:, :, 3, 1] =\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0460847 … 0.0 0.0 0.0460847 0.0 0.0\n", "\n", "...\n", "\n", "[:, :, 26, 1] =\n", " 0.0 0.0 0.0460847 0.0 0.0 0.0 0.0 … 0.0460847 0.0460847 0.0 0.0\n", "\n", "[:, :, 27, 1] =\n", " 0.0460847 0.0460847 0.0 0.0 0.0 0.0 … 0.0460847 0.0 0.0460847\n", "\n", "[:, :, 28, 1] =\n", " 0.0460847 0.0 0.0460847 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0460847 0.0" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using JuMP\n", "\n", "perturbed_sample_image = JuMP.value.(d[:PerturbedInput])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a sanity check, we feed the perturbed image into the neural net and inspect the activations in the final layer. We verify that the perturbed image does maximize the activation of the target label index (10): its activation is now tied with that of the originally predicted label, up to solver tolerance, since the minimal perturbation places the image exactly on the decision boundary." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "10-element Array{Float64,1}:\n", " 0.6749450628745557\n", " 0.6179790360668576\n", " 0.3930321598089386\n", " 0.29656185967035986\n", " 0.2410105349548306\n", " 0.16060021203574193\n", " 0.5428526100447275\n", " 4.288351484573889\n", " -0.22643018233076273\n", " 4.288351484573882" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "perturbed_sample_image |> n1" ] },
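{ "cell_type": "markdown", "metadata": {}, "source": [ "We can also check that the magnitude of the perturbation matches the best objective value reported by the solver. This is a quick sketch of such a check, assuming `sample_image` and `perturbed_sample_image` are as computed above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# L∞ distance between the original and perturbed images;\n", "# this should match the best objective reported by Gurobi (≈0.0460847).\n", "maximum(abs.(perturbed_sample_image - sample_image))" ] },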
{ "cell_type": "markdown", "metadata": {}, "source": [ "We visualize the perturbed image and compare it to the original image. Since we are minimizing the $L_\\infty$-norm, changes are made to many pixels, but the change to each pixel is not very noticeable." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAHAAAABwCAAAAADji6uXAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAAmJLR0QA/4ePzL8AAALHSURBVGje7dq/r0xBFAfwz/LiVx4FiQh5OhIaEREFhY5QKJQaGhWN0EgU/gydSCQanYhQiB9BJRHxM2hQ0L4tJCQU915vdnbm7ja792WyJ9nszNxzZnK/35wzM+fc3jz6BmW+/u8H7Vw/tusn7MOxFaYs5S84F/IQ8hnzE4/PR/9abMPn5UM6fQ7DTuMvfwz7Upv/xf7ZzDEX9Bud8iGd+oK9kKtG2mJiTlIxN6VfPqTdxNIwLuZ8rD9iolxM7vwNy1+wF+I9yidH8SiaR8KmfEinzyGDmJM+T4Y89xM2Ir1YGrvyIZ1+LM1xcghX8BK/cLMe/5SZKPblcEzwrHxIu/FDhrH/iTV1fz0W69+blsm24Bsu4L20z5YPaXfn0lD6OIA9eIvd2IvD2IavWAj0/2AlNtT9a7gonRMoH9LlwSFp/BdUXN7BfpUT/1XF2o94h404hxu1Xbyflg9p97G07bzSPI9t4CSu4wt2YZX0ebZ8SLvjsJFx+It1NuM11uE0brfYlw9pt7k28n6Y46+Pqyr+fuNDxq6zNyx/wd44+bS22sVB3Kvbx/FYPt528oblLzhQtxjnHh+PHav/X+B5i97sTDMxGdgPc9zl+FyLp9iJI3iW0JndDycucwzXjVJxNcXjJRV/Tyzxl8vTzWrAE5OBu8U4Oe2Gi+O4he84gwfStarZfjhxGfLDlMT1/dW4XPfvqvhj2G9F/Vm+dCKSvOOPkofYh884ih8j9Gf74USll6sz5GLrVlU+G06o+Ixt2s5G5UPabR0/lBR/m3C/bl+yxF/DWdv3VM03OeVD2s1+mLqTx3dFOIvtdftRYDMqxxrOWT6kyyPX1kgYE3fgvKVaoowe7ZyWD2l3HOa+q2nGD9XtRdVZJnV3zOVUO33D8hf8z2EuTxr71CucUtXrY722dtMvH9LllfNu4yMl49iXD+nUF/wHg+y5HmmDeFIAAAAASUVORK5CYII=", "text/plain": [ "28×28 reinterpret(Gray{Float64}, ::Array{Float64,2}):\n", " Gray{Float64}(0.0460847) … Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0460847) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0460847) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " ⋮ ⋱ \n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) … Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0)" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "colorview(Gray, perturbed_sample_image[1, :, :, 1])" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAHAAAABwCAAAAADji6uXAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAAmJLR0QA/4ePzL8AAAGrSURBVGje7dk/a9VQHIfxT711EFysKDhYOzl0KSKCoILiYtuhg30L10U7dnZ3dPAddBEEQRERKuigDl1E7T+8HVREEOqghaKFOiRDKVy9aUp78uN8l5z8IQ8PX06SQ8jJycnJycnJycnJyamfvm4nJtHGV6xjBt/wsSbwwF4bxgd27XAFQ9uO/cSH/9zwC+5gLhXD+MD+bifaGME8hnEGl3Een3Fyy7Ub+I4T5f4nucM9TF+Vi48oupzDuS3H17GMBQzgFu6lYhgfWKnDf+U67uM9rmA1FcP4wF3p8DjeldtJPEjJMD6wv/4tuIlj+IGl1AzjA2vPwwt4joOKb56XqRnGB9aeh2OK/mbxOkXD+MBaHR7CNfzGbfxJ0TA+sFaH04q1xlO8StUwPnDH78NxPMQaRvX2HN0Xw/jAHc3Do7iLFp7ovb99MYwPrDwPW3iDs+go3oedlA3jAyt3eBqL5XgCj1I3jA+s9Cw9hWfleBqPm2AYH1ipwxsYLMcvsNkEw/jAnju8hKkmGsYH9tzhRRwuxx38aophfGDltcVbXNX931JyhvGBOc3PX/q9Oc17OzXKAAAAAElFTkSuQmCC", "text/plain": [ "28×28 reinterpret(Gray{Float64}, ::Array{Float64,2}):\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " ⋮ ⋱ \n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0) Gray{Float64}(0.0)" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "colorview(Gray, sample_image[1, :, :, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That concludes this quickstart! The next tutorial will introduce you to each of the layers, and show how you can import your own neural network parameters." ] } ], "metadata": { "kernelspec": { "display_name": "Julia 1.4.2", "language": "julia", "name": "julia-1.4" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.4.2" } }, "nbformat": 4, "nbformat_minor": 2 }