{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# Interpreting the output of `find_adversarial_example`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using the same example from the quickstart, we explore what information is available from the result of `find_adversarial_example`." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "using MIPVerify\n", "using HiGHS\n", "using JuMP\n", "using Images\n", "\n", "mnist = MIPVerify.read_datasets(\"MNIST\")\n", "n1 = MIPVerify.get_example_network_params(\"MNIST.n1\")\n", "sample_image = MIPVerify.get_image(mnist.test.images, 1);\n", "\n", "function view_diff(diff::Array{<:Real, 2})\n", " n = 1001\n", " colormap(\"RdBu\", n)[ceil.(Int, (diff .+ 1) ./ 2 .* n)]\n", "end;" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a look at the output dictionary that results from a solve." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[36m[notice | MIPVerify]: Attempting to find adversarial example. Neural net predicted label is 8, target labels are [10]\u001b[39m\n", "\u001b[36m[notice | MIPVerify]: Determining upper and lower bounds for the input to each non-linear unit.\u001b[39m\n", "Running HiGHS 1.4.2 [date: 1970-01-01, git hash: f797c1ab6]\n", "Copyright (c) 2022 ERGO-Code under MIT licence terms\n", "Presolving model\n", "1779 rows, 1627 cols, 56716 nonzeros\n", "1779 rows, 1627 cols, 56716 nonzeros\n", "\n", "Solving MIP model with:\n", " 1779 rows\n", " 1627 cols (29 binary, 0 integer, 0 implied int., 1598 continuous)\n", " 56716 nonzeros\n", "\n", " Nodes | B&B Tree | Objective Bounds | Dynamic Constraints | Work \n", " Proc. InQueue | Leaves Expl. | BestBound BestSol Gap | Cuts InLp Confl. | LpIters Time\n", "\n", " 0 0 0 0.00% 0 inf inf 0 0 0 0 0.0s\n", " 0 0 0 0.00% 0 inf inf 0 0 13 1616 0.1s\n", " 0 0 0 0.00% 0 inf inf 3942 12 99 30330 14.5s\n", " T 15 0 3 1.61% 0 0.0498354344 100.00% 3951 12 109 39558 16.4s\n", " T 27 1 10 3.42% 0 0.0474230913 100.00% 3974 12 131 45590 17.4s\n", " T 45 4 20 77.27% 0.0381825316 0.0460846816 17.15% 4563 15 162 58011 20.3s\n", "\n", "Solving report\n", " Status Optimal\n", " Primal bound 0.0460846815889\n", " Dual bound 0.0460846815889\n", " Gap 0% (tolerance: 0.01%)\n", " Solution status feasible\n", " 0.0460846815889 (objective)\n", " 0 (bound viol.)\n", " 0 (int. 
viol.)\n", "                    0 (row viol.)\n", "  Timing            21.61 (total)\n", "                    0.02 (presolve)\n", "                    0.00 (postsolve)\n", "  Nodes             53\n", "  LP iterations     60163 (total)\n", "                    19015 (strong br.)\n", "                    707 (separation)\n", "                    28049 (heuristics)\n" ] }, { "data": { "text/plain": [ "Dict{Any, Any} with 11 entries:\n", "  :TargetIndexes      => [10]\n", "  :SolveTime          => 21.6141\n", "  :TotalTime          => 38.8481\n", "  :Perturbation       => [_[1] _[2] … _[27] _[28];;; _[29] _[30] … _[55] _[56];…\n", "  :PerturbedInput     => [_[785] _[786] … _[811] _[812];;; _[813] _[814] … _[83…\n", "  :TighteningApproach => \"lp\"\n", "  :PerturbationFamily => linf-norm-bounded-0.05\n", "  :SolveStatus        => OPTIMAL\n", "  :Model              => A JuMP Model…\n", "  :Output             => AffExpr[-0.012063867412507534 _[1601] + 0.660652577877…\n", "  :PredictedIndex     => 8" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "d1 = MIPVerify.find_adversarial_example(\n", "    n1, \n", "    sample_image, \n", "    10, \n", "    HiGHS.Optimizer,\n", "    Dict(),\n", "    pp = MIPVerify.LInfNormBoundedPerturbationFamily(0.05),\n", "    norm_order = Inf,\n", "    tightening_algorithm = lp,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `:Model`\n", "\n", "The model stores a lot of information. (Remember not to try to print large models!)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "m = d1[:Model];" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are some `JuMP` methods you might find useful." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2411" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "num_variables(m)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2381" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "num_constraints(m, VariableRef, MOI.GreaterThan{Float64})" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "21.61407446861267" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "solve_time(m)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case, the lower bound on the objective and the best objective value we found are the same (but they can differ if we set a time limit or other user limits)." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.046084681588922406" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "objective_bound(m)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.046084681588922406" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "objective_value(m)" ] }
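, { "cell_type": "markdown", "metadata": {}, "source": [ "The gap between the two is also available directly. A minimal sketch using `JuMP`'s `relative_gap` (note that support for this attribute depends on the solver):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Relative MIP gap between the best bound and the best solution found.\n", "# For an optimal solve this should be (near) zero; when the solver\n", "# stops at a user limit it can be large or even infinite.\n", "relative_gap(m)" ] }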
, { "cell_type": "markdown", "metadata": {}, "source": [ "### `:PerturbationFamily`\n", "\n", "Information on the family of perturbations we are searching over is stored in `:PerturbationFamily`." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "linf-norm-bounded-0.05" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "d1[:PerturbationFamily]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `:TargetIndexes`\n", "\n", "The perturbed image is guaranteed to be classified in one of the target indexes. (Strictly speaking, we guarantee that the highest activation in the output layer among the target indexes is at least the highest activation in the output layer among the non-target indexes, within a small numeric tolerance.)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1-element Vector{Int64}:\n", " 10" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "d1[:TargetIndexes]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Multiple target labels and inverted target selection are handled appropriately." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[36m[notice | MIPVerify]: Attempting to find adversarial example. Neural net predicted label is 8, target labels are [1, 2, 3, 4, 5, 6, 7, 9, 10]\u001b[39m\n", "\u001b[36m[notice | MIPVerify]: Determining upper and lower bounds for the input to each non-linear unit.\u001b[39m\n", "Running HiGHS 1.4.2 [date: 1970-01-01, git hash: f797c1ab6]\n", "Copyright (c) 2022 ERGO-Code under MIT licence terms\n", "Presolving model\n", "1790 rows, 1637 cols, 64883 nonzeros\n", "1790 rows, 1637 cols, 64883 nonzeros\n", "\n", "Solving MIP model with:\n", "   1790 rows\n", "   1637 cols (38 binary, 0 integer, 0 implied int., 1599 continuous)\n", "   64883 nonzeros\n", "\n", "        Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work      \n", "     Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time\n", "\n", "         0       0         0   0.00%   0               inf                  inf        0      0      0         0     0.0s\n", "         0       0         0   0.00%   0               inf                  inf        0      0     17      1272     0.1s\n", "\n", "Solving report\n", "  Status            Time limit reached\n", "  Primal bound      inf\n", "  Dual bound        0\n", "  Gap               inf\n", "  Solution status   -\n", "  Timing            5.00 (total)\n", "                    0.03 (presolve)\n", "                    0.00 (postsolve)\n", "  Nodes             0\n", "  LP iterations     7786 (total)\n", "                    0 (strong br.)\n", "                    2296 (separation)\n", "                    4218 (heuristics)\n" ] }, { "data": { "text/plain": [ "9-element Vector{Int64}:\n", "  1\n", "  2\n", "  3\n", "  4\n", "  5\n", "  6\n", "  7\n", "  9\n", " 10" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "d2 = MIPVerify.find_adversarial_example(\n", "    n1, \n", "    sample_image, \n", "    8,\n", "    HiGHS.Optimizer,\n", "    Dict(\"time_limit\" => 5.0),\n", "    pp = MIPVerify.LInfNormBoundedPerturbationFamily(0.05),\n", "    norm_order = Inf,\n", "    tightening_algorithm = lp,\n", "    invert_target_selection = true,\n", ")\n", "d2[:TargetIndexes]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `:SolveStatus`\n", "\n", "This is the result of the solve. [More information on solve statuses](https://jump.dev/MathOptInterface.jl/stable/apireference/#MathOptInterface.TerminationStatusCode)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We typically find an optimal solution:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "OPTIMAL::TerminationStatusCode = 1" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "d1[:SolveStatus]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But we can encounter other solve statuses if (for instance) we set a time limit."
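 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When a solve may have stopped early, it is worth checking whether a feasible point is available at all before querying variable values. A minimal sketch using `JuMP`'s `termination_status` and `has_values`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Querying the model directly mirrors the stored :SolveStatus.\n", "# has_values reports whether an incumbent (feasible point) exists,\n", "# i.e. whether calling JuMP.value on the variables is safe.\n", "m2 = d2[:Model]\n", "termination_status(m2), has_values(m2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The status stored for `d2` reflects the time limit we set:"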
] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "TIME_LIMIT::TerminationStatusCode = 12" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "d2[:SolveStatus]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `:Perturbation`\n", "\n", "This is the (pixel-wise) difference between the original image and the perturbed image." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1×28×28×1 Array{Float64, 4}:\n", "[:, :, 1, 1] =\n", " 0.0460847 0.0 0.0 0.0460847 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0460847\n", "\n", "[:, :, 2, 1] =\n", " 0.0 0.0 0.0 0.0460847 0.0460847 0.0 … 0.0 0.0 0.0 0.0 0.0460847\n", "\n", "[:, :, 3, 1] =\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0460847 … 0.0 0.0 0.0460847 0.0 0.0\n", "\n", ";;; … \n", "\n", "[:, :, 26, 1] =\n", " 0.0 0.0 0.0460847 0.0 0.0 0.0 0.0 … 0.0460847 0.0460847 0.0 0.0\n", "\n", "[:, :, 27, 1] =\n", " 0.0460847 0.0460847 0.0 0.0 0.0 0.0 … 0.0460847 0.0 0.0460847\n", "\n", "[:, :, 28, 1] =\n", " 0.0460847 0.0 0.0460847 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0460847 0.0" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "perturbation = JuMP.value.(d1[:Perturbation])" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAHAAAABwCAIAAABJgmMcAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAgY0hSTQAAeiYAAICEAAD6AAAAgOgAAHUwAADqYAAAOpgAABdwnLpRPAAABOpJREFUeAHtwcFqXVkOhtHPyk/sELgIB/z+r9cGGxFwl50oco2UymCHU+e2qnqy19J//vuD5pacESVW3JIWJY64JS1KHHFLzogSzS1pUWLFLWlRorklLUqsiG2U2EaJbZTckiNRorklK27JGW7JiluyEiVWosQRt2TFLZngljSxjRLbKLGNEn+DW9KiRLMbfooSK25JixItShxxS85wS1qUOBIl2ocbfvrxzlKUaG5JixJNbKPENkpsoxQlVtySFbekRYnmlrQo0aJEc0talGhuyRG3ZMUtORIlVtySFiWaW3KGW9LENkpso8Q2Sm5JixIrUaK5JWe4JS1KNLfkn+SWHIkSzS2ZILZRYhsltlGKEs0tORIlmluy4pa0KNHckhYlmlvy/xYlVtySM8Q2SmyjxDZKbkmLEteKEs0tWYkSzS1ZiRLNLWlRYoJb0tySa0WJ5pY0sY0S2yixjVKUaG5JixJnPLw90x5v72kPb88ceby9Z5pbshIljrglZ0SJJrZRYhsltlFySyY83t7THt6eWfn6+Qvt8vJEe3h75sgD13u8vWfl4e2Z9u3ThRYlmlvSosQRsY0S2yixjVKUmPDw9syRy8sT/5bH23vanW5YeeSen4qf3JIWJY64JU1so8Q2SmyjxP8gSjT/dKFFiSNuSYsSKw9vzyzd8Jd3lh7envnpjaULf/n26cIZbsmK2EaJbZTYRsktWYkSK25JixItSpwRJVbckvbt04WVKHHG3Ycb2uuPd9rD2zMtSjS3ZMUtOSK2UWIbJbZR4jfckpUoMc0tWYkSE+7ev9NeEe3x9p6VKLESJVbckia2UWIbJbZR4iS3pEWJa7klK1HiDLekRYl2pxtapLiWW3KG2EaJbZTYRilKrLglLUo0t6S5JWdEiRYlVtySFiWORIkm4xS35EiUOENso8Q2Smyj5JYccUtWokRzS1qUuFaUuJbshvaa75wRJc5wS1bENkpso8Q2SvwNUaK5JStR4ohb0qLEtO8/3jkjShxxS1qUaFFiRWyjxDZKbKPEL6LEkSjR3JIWJZpbcsQtaVHiWne6ob3mO0fckhYlmlvSosSKW9KixIrYRoltlNhGiV+4JS1KNLfkiFtyLbekRYkjHz/c0C4vT7TX23uaW9KiRIsSzS1pUaK5JS1KnCG2UWIbJbZR4hdR4p8UJZpbsuKWtI9/fGXl6+cvrLglK27JEbfkDLekRYkmtlFiGyW2UeIXbkmLEi1KNLfkWm7JkY/5ypHLyxP/FrekRYkWJVbENkpso8Q2SlHiiFtyJEo0t+Rq379xxrdPF45EieaWtCjR3JIWJc5wS5rYRoltlNhGyS1pUeJabsm1Pv7xlVNu71iJEs0taW7JiltyhlvSokSLEk1so8Q2SmyjxC/ckiNRYsUtWYkSzS2Z8M0+suKWtCgxLUqsuCVNbKPENkpso8RJbsmRKNHckhYl2sPbMytfP3+hXV6eOCNKNLekRYkVt6RFiQliGyW2UWIbJX4jSjS3ZCVKrLglLUo0t+TI5eWJa7klLUo0t2QlSkwT2yixjRLbKPEbbsm1okRzS1qUaA8ce7y9p7klR6JEc0tWokRzS1aiRHNLjkSJJrZRYhsltlGKEituSYsSzS1pbslKlFhxS9rj7T3NLVlxkjPckpUo0dySCVGiuSVNbKPENkpso/4Eb+X0k62YSkIAAAAASUVORK5CYII=", "text/html": [ "" ], "text/plain": [ "28×28 Array{RGB{Float64},2} with eltype RGB{Float64}:\n", " RGB{Float64}(0.911923,0.96363,0.991867) … RGB{Float64}(0.911923,0.96363,0.991867)\n", " RGB{Float64}(0.974944,0.972996,0.976035) RGB{Float64}(0.974944,0.972996,0.976035)\n", " RGB{Float64}(0.974944,0.972996,0.976035) 
RGB{Float64}(0.911923,0.96363,0.991867)\n", " RGB{Float64}(0.911923,0.96363,0.991867) RGB{Float64}(0.974944,0.972996,0.976035)\n", " RGB{Float64}(0.974944,0.972996,0.976035) RGB{Float64}(0.974944,0.972996,0.976035)\n", " RGB{Float64}(0.911923,0.96363,0.991867) … RGB{Float64}(0.974944,0.972996,0.976035)\n", " RGB{Float64}(0.974944,0.972996,0.976035) RGB{Float64}(0.974944,0.972996,0.976035)\n", " RGB{Float64}(0.911923,0.96363,0.991867) RGB{Float64}(0.911923,0.96363,0.991867)\n", " RGB{Float64}(0.974944,0.972996,0.976035) RGB{Float64}(0.911923,0.96363,0.991867)\n", " RGB{Float64}(0.911923,0.96363,0.991867) RGB{Float64}(0.974944,0.972996,0.976035)\n", " ⋮ ⋱ \n", " RGB{Float64}(0.974944,0.972996,0.976035) RGB{Float64}(0.911923,0.96363,0.991867)\n", " RGB{Float64}(0.974944,0.972996,0.976035) … RGB{Float64}(0.911923,0.96363,0.991867)\n", " RGB{Float64}(0.911923,0.96363,0.991867) RGB{Float64}(0.974944,0.972996,0.976035)\n", " RGB{Float64}(0.974944,0.972996,0.976035) RGB{Float64}(0.911923,0.96363,0.991867)\n", " RGB{Float64}(0.974944,0.972996,0.976035) RGB{Float64}(0.974944,0.972996,0.976035)\n", " RGB{Float64}(0.974944,0.972996,0.976035) RGB{Float64}(0.974944,0.972996,0.976035)\n", " RGB{Float64}(0.974944,0.972996,0.976035) … RGB{Float64}(0.974944,0.972996,0.976035)\n", " RGB{Float64}(0.974944,0.972996,0.976035) RGB{Float64}(0.911923,0.96363,0.991867)\n", " RGB{Float64}(0.911923,0.96363,0.991867) RGB{Float64}(0.974944,0.972996,0.976035)" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "view_diff(perturbation[1, :, :, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `:PerturbedInput`\n", "\n", "This is the perturbed image." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1×28×28×1 Array{Float64, 4}:\n", "[:, :, 1, 1] =\n", " 0.0460847 0.0 0.0 0.0460847 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0460847\n", "\n", "[:, :, 2, 1] =\n", " 0.0 0.0 0.0 0.0460847 0.0460847 0.0 … 0.0 0.0 0.0 0.0 0.0460847\n", "\n", "[:, :, 3, 1] =\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0460847 … 0.0 0.0 0.0460847 0.0 0.0\n", "\n", ";;; … \n", "\n", "[:, :, 26, 1] =\n", " 0.0 0.0 0.0460847 0.0 0.0 0.0 0.0 … 0.0460847 0.0460847 0.0 0.0\n", "\n", "[:, :, 27, 1] =\n", " 0.0460847 0.0460847 0.0 0.0 0.0 0.0 … 0.0460847 0.0 0.0460847\n", "\n", "[:, :, 28, 1] =\n", " 0.0460847 0.0 0.0460847 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0460847 0.0" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "perturbed_input = JuMP.value.(d1[:PerturbedInput])" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAHAAAABwCAAAAADji6uXAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAgY0hSTQAAeiYAAICEAAD6AAAAgOgAAHUwAADqYAAAOpgAABdwnLpRPAAAAwtJREFUaAW9wb/rrgUZB+AruqmU24aCCMGkoYZAQiJocGhTjKCppd2mlkNnCSSqPyFoaAkCwVkkooaQwB9LGGRZVEPZ0NRwPoPggfrCfeDx8X3Poem+rmrEWRtBOwTtEGeNoI2gEbRRlpVlZVnFoREjzmLEaKMdYsR1QVlWlpVl5X2Cxl20ETTaIc6CdmjcRRmNoFGWlWVlWbURI2gEjUbQiEOMOLQRNOIsKMvKsrKsgjYacRY04sHi0IjryrKyrCyrRhwaMYJG0Ij/XxtxVpaVZWVZBe1SI0YjRiNo1wVxqRGUZWVZWVaNoJ09hefxO7yLF4y/ui5GO8RZoywry8qyiut+jo/hCTyCb+MO3nJ/n8Y7uIW30QjaoSwry8qyatd9HV/EH/EFPImv4iv4Jx5zuIsP4+N4Av/AdxG0Q1CWlWVlWbmi8QbeMt7ET/EYnsTL+DI+hP/iXfwFf8In8CriLEZZVpaVZRWjjaDRzhr/wUvGG2iHb+Jx/B0vohEjDmVZWVaWlXtitPuL0c4+hZ8YP8RHjDbaCMqysqwsKx8QtEsx2lnwAzyM9/BnZ3FWlpVlZVm1ETSCOGu0EbTD07hlfAN/QBzaWVlWlpVlFYcYjaBdamfPGq/jNYd2FqMsK8vKsvI+jRjtELRLD+EZ4/t4z2iHoNFGWVaWlWXlRjs0gkYcgnZ2G5/Hb/GqETSCNoI2yrKyrCwrN4JGHOJS0MbXcBv/wo8QNBpBI2hnZVlZVpaVG+3BGjGCj+J7xi/wayMOcRY0yrKyrCwrN4JGIy7F2S/xWbyJ5z1Y0GijLCvLyrJqhzg04tKj+JJxC/92aASNoJ0FZVlZVpZV0C7FpU/iV8Zt/MZoBDHiLGg0yrKyrCwrN+LQRoxGjOfwGeMVh6DdXxtBWVaWlWXlA+LQCBqfw3fwCO44awRtBO26sqwsK8vKPY0YbQSN4Ck07uBviEPQiNHurywry8qyck9cagTt8Ht8C+84NGI0YjTi0CjLyrKyrNoIGkGMNoKf4ceui9GI6xpBWVaWlWX/A06qydY5JLkuAAAAAElFTkSuQmCC", "text/html": [ "" ], "text/plain": [ "28×28 reinterpret(reshape, Gray{Float64}, ::Matrix{Float64}) with eltype Gray{Float64}:\n", " Gray{Float64}(0.0460847) … Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0460847) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0)\n", " ⋮ ⋱ \n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) … Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) … Gray{Float64}(0.0)\n", " Gray{Float64}(0.0) Gray{Float64}(0.0460847)\n", " Gray{Float64}(0.0460847) Gray{Float64}(0.0)" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "colorview(Gray, perturbed_input[1, :, :, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can verify that the perturbed input is in fact the sample image added to the perturbation." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "false" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "all(perturbed_input == sample_image + perturbation)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `:Output`\n", "\n", "This is the calculated value of the activations of the final layer of the neural net with the perturbed input. Note that `output[10]` is (tied for) the largest element of the array." 
] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "10-element Vector{Float64}:\n", "  0.6749450628745499\n", "  0.6179790360668559\n", "  0.39303215980893874\n", "  0.29656185967035825\n", "  0.24101053495482927\n", "  0.1606002120357409\n", "  0.5428526100447263\n", "  4.288351484573891\n", " -0.22643018233076517\n", "  4.288351484573884" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "output = JuMP.value.(d1[:Output])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can verify that these activations agree with a direct evaluation of the network on the perturbed input (up to floating-point error):" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "10-element Vector{Float64}:\n", "  0.6749450628745521\n", "  0.6179790360668558\n", "  0.3930321598089389\n", "  0.29656185967035864\n", "  0.24101053495482938\n", "  0.16060021203574099\n", "  0.5428526100447264\n", "  4.288351484573887\n", " -0.22643018233076218\n", "  4.2883514845738775" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "perturbed_input |> n1" ] } ], "metadata": { "kernelspec": { "display_name": "Julia 1.7.1", "language": "julia", "name": "julia-1.7" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }