{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Block Coordinate Descent code\n", "\n", "The initial code was checked in to the [github repository](https://github.com/Stat990-033/BlockCoordinateDescent.jl).\n", "\n", "There are several update steps that are expressed as functions. One is to take a matrix of linear predictor values and convert them to probabilities." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "multProbPrecompute (generic function with 1 method)" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#computes predicted probabilities for multinomial regression, given X*beta\n", "function multProbPrecompute(Xbeta)\n", " numerator = exp.(Xbeta)\n", " numerator ./ sum(numerator, dims = 2) # normalize rows to unit length\n", "end" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This function copies the matrix `Xbeta` in the call to `exp(Xbeta)`, and creates another copy in the evaluation of `prob`. This is how one would approach the task in R. In Julia you can do this in place.\n", "\n", "It is a good practice to write tests as you go along so that you can verify the results even for very simple functions." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "100×200 Array{Float64,2}:\n", " 1.0 0.942218 0.24867 0.508796 … 0.833132 0.243615 0.79745 \n", " 1.0 0.042852 0.798958 0.946023 0.271972 0.676288 0.473405 \n", " 1.0 0.658443 0.910798 0.99185 0.309513 0.738769 0.466282 \n", " 1.0 0.933942 0.215572 0.711079 0.0693223 0.966414 0.996814 \n", " 1.0 0.493509 0.0107833 0.322528 0.470281 0.389887 0.853022 \n", " 1.0 0.216062 0.731008 0.543219 … 0.555514 0.842862 0.820857 \n", " 1.0 0.55655 0.626748 0.847555 0.0846821 0.104867 0.393553 \n", " 1.0 0.698472 0.550688 0.539199 0.124176 0.120487 0.820305 \n", " 1.0 0.477957 0.746092 0.675665 0.5284 0.913444 0.876063 \n", " 1.0 0.288074 0.0495433 0.89312 0.324252 0.305764 0.961244 \n", " 1.0 0.762155 0.00279019 0.30776 … 0.612043 0.000933508 0.0783043\n", " 1.0 0.231349 0.578132 0.088052 0.417872 0.0834675 0.950657 \n", " 1.0 0.358739 0.995562 0.0488842 0.904 0.648354 0.640511 \n", " ⋮ ⋱ \n", " 1.0 0.871992 0.544112 0.180798 0.907896 0.909636 0.701654 \n", " 1.0 0.826786 0.955972 0.0744761 0.804368 0.0112623 0.920605 \n", " 1.0 0.703566 0.137559 0.882054 … 0.969421 0.299211 0.301055 \n", " 1.0 0.50145 0.303211 0.30445 0.243777 0.411264 0.949961 \n", " 1.0 0.810933 0.899724 0.512623 0.23256 0.972064 0.891446 \n", " 1.0 0.620525 0.55652 0.782564 0.956294 0.598255 0.220518 \n", " 1.0 0.440561 0.874214 0.267746 0.312244 0.820928 0.370913 \n", " 1.0 0.715693 0.372606 0.752476 … 0.349817 0.966559 0.659257 \n", " 1.0 0.00561552 0.597088 0.413342 0.862901 0.760024 0.115013 \n", " 1.0 0.502888 0.352371 0.286198 0.159112 0.30687 0.970337 \n", " 1.0 0.0869479 0.918245 0.353192 0.912009 0.104156 0.430432 \n", " 1.0 0.0272969 0.971082 0.198065 0.113312 0.909728 0.082192 " ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "const n = 100\n", "const num = 200\n", "using Random\n", "Random.seed!(1234321) # set the random number seed for reproducibility\n", "\n", "const X = rand(n, num)\n", "X[:, 1] .= 1\n", "X" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "11×4 Array{Float64,2}:\n", " -0.842878 -1.48142 
-0.569145 -0.882919\n", " 0.229552 0.809904 -0.432924 -0.388652\n", " 1.11555 0.495039 -0.0618396 -1.08376 \n", " -0.978813 -0.12077 -1.38466 -0.753775\n", " 0.845695 1.30246 0.0835514 -1.35018 \n", " 0.734366 -0.987098 0.831515 -1.12607 \n", " 1.48704 -0.466602 0.535014 0.454538\n", " 0.790596 0.981425 -0.117606 0.585541\n", " 0.421596 1.39621 -0.176295 -0.378822\n", " 1.1885 -0.775636 -0.718342 1.12486 \n", " 0.0 0.0 0.0 0.0 " ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using Distributions\n", "const k = 4\n", "\n", "const β = zeros(num, k)\n", "rand!(Uniform(-1.5, 1.5), view(β, 1:10, :))\n", "β[1:11, :]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the `const` declaration doesn't mean that the contents of the array must be constant. It just means that the type, size and location of the array cannot be changed." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "(100, 4)" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "const Xβ = X * β # linear predictor\n", "probs = multProbPrecompute(Xβ)\n", "size(probs)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "10×4 Array{Float64,2}:\n", " 0.694888 0.231298 0.0409205 0.0328935 \n", " 0.972676 0.00503632 0.0155342 0.00675307\n", " 0.876706 0.0848717 0.0360854 0.00233681\n", " 0.687598 0.199428 0.0697715 0.0432018 \n", " 0.805965 0.038707 0.130989 0.0243393 \n", " 0.806296 0.118814 0.0720161 0.00287435\n", " 0.838491 0.126393 0.0323506 0.0027648 \n", " 0.730423 0.235906 0.0192987 0.0143724 \n", " 0.957658 0.0283855 0.00825927 0.005697 \n", " 0.771638 0.194524 0.016277 0.0175611 " ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "probs[1:10, :]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The array of probabilities is oriented so that each row adds to 1." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "(0.9999999999999998, 1.0000000000000002)" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "extrema(sum(probs, dims=2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In languages like R and Julia where matrices are stored in column-major order it is an advantage to work with the transpose of this matrix.\n", "\n", "## Tuning the elementary operation\n", "\n", "Exponentiating and normalizing a vector can be combined into two loops. We exponentiate and accumulate the sum in one loop, and in the second loop normalize the probabilities. In Julia it is okay to use loops if that makes sense for the operation." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "expnorm! 
(generic function with 1 method)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "function expnorm!(v)\n", " ss = sum(map!(exp, v, v))\n", " v ./= ss\n", "end" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "4-element Array{Float64,1}:\n", " 1.4125396324688992 \n", " 0.31249547628056473\n", " -1.419581047798883 \n", " -1.6379353964967804 " ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vv = vec(Xβ[1, :])" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "4-element Array{Float64,1}:\n", " 0.6948880834566447 \n", " 0.2312979359283825 \n", " 0.0409204574983493 \n", " 0.03289352311662352" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pr = expnorm!(vv)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.9999999999999999" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sum(pr)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "true" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pr ≈ vec(probs[1,:])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating a structure or type\n", "\n", "The arrays `X`, `beta` and `probs` are associated with each other and must have compatible dimensions. We define a single structure to hold them and the response `z`." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": true }, "outputs": [], "source": [ "struct BCD{T<:AbstractFloat}\n", " X::Matrix{T}\n", " β::Matrix{T}\n", " probs::Matrix{T}\n", " z::Matrix{T}\n", "end" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The operation of updating the probabilities is done in-place." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "updateprobs! (generic function with 1 method)" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using LinearAlgebra\n", "function updateprobs!(bcd::BCD)\n", " probs = bcd.probs\n", " LinearAlgebra.mul!(probs, bcd.β', bcd.X')\n", " for j in 1:size(probs, 2)\n", " expnorm!(view(probs, :, j))\n", " end\n", " bcd\n", "end" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The constructor for the type only requires `X` and `z`. The sizes of `probs` and β can be inferred" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "BCD" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "function BCD(X::Matrix{T}, z::Matrix{T}) where {T<:AbstractFloat} # constructor\n", " n, num = size(X)\n", " k, r = size(z) # z is also transposed\n", " if r ≠ n\n", " throw(DimensionMismatch())\n", " end\n", " res = BCD(X, zeros(T, (num, k)), similar(z), z)\n", " updateprobs!(res)\n", "end " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To create such an object we need to simulate the data. Recall that the array `probs` should be transposed to fit our new schema." 
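, "\n", "\n", "Before simulating the response it is worth a quick check, in the spirit of the tests above, that the constructor and `updateprobs!` leave each *column* of `probs` summing to one under the new orientation. A minimal sketch (the names `ztest` and `bcdtest` are only illustrative and are not used later):\n", "\n", "```julia\n", "ztest = zeros(k, n)                  # placeholder response: k categories by n observations\n", "bcdtest = BCD(X, ztest)              # the constructor allocates β and probs, then calls updateprobs!\n", "extrema(sum(bcdtest.probs, dims=1))  # every column should sum to 1 (here each entry is 1/k)\n", "```"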
] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "4×100 Adjoint{Float64,Array{Float64,2}}:\n", " 0.694888 0.972676 0.876706 … 0.822341 0.80093 0.947382 \n", " 0.231298 0.00503632 0.0848717 0.0750398 0.106154 0.016788 \n", " 0.0409205 0.0155342 0.0360854 0.0610365 0.088035 0.0303635 \n", " 0.0328935 0.00675307 0.00233681 0.0415823 0.00488122 0.00546646" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pr = probs'" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "4×100 Array{Float64,2}:\n", " 6.90739e-310 6.90739e-310 6.90739e-310 … 9.73907e-317 9.73907e-317\n", " 6.90739e-310 6.9074e-310 6.90739e-310 0.0 0.0 \n", " 6.90739e-310 6.90739e-310 6.90739e-310 9.73907e-317 9.73907e-317\n", " 6.90739e-310 6.90739e-310 6.90739e-310 0.0 0.0 " ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "z = similar(pr)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "ename": "MethodError", "evalue": "MethodError: no method matching Multinomial(::Int64, ::SubArray{Float64,1,Adjoint{Float64,Array{Float64,2}},Tuple{Base.Slice{Base.OneTo{Int64}},Int64},false})\nClosest candidates are:\n Multinomial(::Integer, !Matched::Array{T<:Real,1}) where T<:Real at /home/bates/.julia/packages/Distributions/fMt8c/src/multivariate/multinomial.jl:37\n Multinomial(::Integer, !Matched::Integer) at /home/bates/.julia/packages/Distributions/fMt8c/src/multivariate/multinomial.jl:38", "output_type": "error", "traceback": [ "MethodError: no method matching Multinomial(::Int64, ::SubArray{Float64,1,Adjoint{Float64,Array{Float64,2}},Tuple{Base.Slice{Base.OneTo{Int64}},Int64},false})\nClosest candidates are:\n Multinomial(::Integer, !Matched::Array{T<:Real,1}) where T<:Real at /home/bates/.julia/packages/Distributions/fMt8c/src/multivariate/multinomial.jl:37\n Multinomial(::Integer, !Matched::Integer) at /home/bates/.julia/packages/Distributions/fMt8c/src/multivariate/multinomial.jl:38", "", "Stacktrace:", " [1] top-level scope at ./In[17]:2" ] } ], "source": [ "for j in 1:size(z, 2)\n", " rand!(Multinomial(1, view(pr, :, j)), view(z, :, j))\n", "end" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unfortunately, this doesn't work because the vector of probabilities for the Multinomial must be an `Vector`. A `SubArray` won't do." 
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for j in 1:size(z, 2)\n", " rand!(Multinomial(1, vec(pr[:, j])), view(z, :, j))\n", "end" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "4×4 Array{Float64,2}:\n", " 0.0 1.0 0.0 1.0\n", " 1.0 0.0 1.0 0.0\n", " 0.0 0.0 0.0 0.0\n", " 0.0 0.0 0.0 0.0" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "z[:,1:4] # the probabilities for the first category are large" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "4×1 Array{Float64,2}:\n", " 82.0\n", " 13.0\n", " 4.0\n", " 1.0" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sum(z, dims=2)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true }, "outputs": [], "source": [ "bcd = BCD(X, z);" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "4×100 Array{Float64,2}:\n", " 0.25 0.25 0.25 0.25 0.25 0.25 … 0.25 0.25 0.25 0.25 0.25 0.25\n", " 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25\n", " 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25\n", " 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bcd.probs" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "4×100 Array{Float64,2}:\n", " 0.0 1.0 0.0 1.0 0.0 1.0 1.0 1.0 … 1.0 1.0 1.0 1.0 1.0 1.0 1.0\n", " 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n", " 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n", " 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bcd.z" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluating the log-likelihood\n", "\n", "Currently the function to evaluate the log-likelihood is\n", "```julia\n", "#computes multinomial log likelihood given X, beta, z\n", "function loglikelihood(X,beta,z)\n", " p=size(X)[2]\n", " k=size(z)[2]\n", " beta=reshape(beta,p,k)\n", " probs=multProb(X,beta,k)\n", " val=0\n", " for i = 1:(size(X)[1])\n", " val=val+log(probs[i,find(z[i,:].==1)])\n", " end\n", " beta=vec(beta)\n", " return(val)\n", "end\n", "```\n", "\n", "The `find(z[i, :] .== 1)` call checks which element of each row is non-zero every time the log-likelihood is evaluated. But that never changes. Thus we can evaluate it once only and be done. The best option here is to change the BCD type and its constructor but, for illustration I will simply create a vector in the global workspace. 
(To change the type I would need to restart the session.)" ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "collapsed": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "WARNING: redefining constant y\n" ] }, { "ename": "TypeError", "evalue": "TypeError: non-boolean (Float64) used in boolean context", "output_type": "error", "traceback": [ "TypeError: non-boolean (Float64) used in boolean context", "", "Stacktrace:", " [1] findnext at ./array.jl:1609 [inlined]", " [2] findfirst(::SubArray{Float64,1,Array{Float64,2},Tuple{Base.Slice{Base.OneTo{Int64}},Int64},true}) at ./array.jl:1660", " [3] top-level scope at ./In[42]:3" ] } ], "source": [ "const y = Int[]\n", "for j in 1:size(z, 2)\n", " append!(y, findfirst(view(z, :, j)))\n", "end" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "length(y)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": true }, "outputs": [], "source": [ "using StatsBase" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "4-element Array{Int64,1}:\n", " 0\n", " 0\n", " 0\n", " 0" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "counts(y, 1:4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is already a `loglikelihood` function in the `StatsBase` package so we will add a method to it rather than overwriting the name." ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false }, "outputs": [], "source": [ "function StatsBase.loglikelihood(bcd::BCD{T}) where {T}\n", " ss = zero(T)\n", " probs = bcd.probs # in a productiuon version we would store y as bcd.y\n", " for j in 1:length(y)\n", " ss += log(probs[y[j], j])\n", " end\n", " ss\n", "end " ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.0" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "loglikelihood(bcd)" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "-138.62943611198907" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "100 * log(0.25)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that, because `updateprobs!` returns its argument, you can compose updating the probabilities and evaluating the loglikelihood, as is done in the existing function." 
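, "\n", "\n", "The composed calls in the next cells still return 0.0, however, because the loop that was meant to fill `y` failed: under Julia 1.x `findfirst` no longer accepts a numeric array directly, which is the `TypeError` shown above, so `y` was left empty. Passing a predicate repairs it; a sketch:\n", "\n", "```julia\n", "empty!(y)                                      # y is still empty after the failed loop; clear it to be safe\n", "for j in 1:size(z, 2)\n", "    push!(y, findfirst(isone, view(z, :, j)))  # index of the single 1.0 in column j\n", "end\n", "```\n", "\n", "With `y` rebuilt this way, `length(y)` is 100, `counts(y, 1:4)` reproduces `sum(z, dims=2)`, and `loglikelihood(bcd)` agrees with `100 * log(0.25)`, since every entry of `bcd.probs` is currently 0.25."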
] }, { "cell_type": "code", "execution_count": 31, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.0" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "updateprobs!(bcd) |> loglikelihood" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "which is equivalent to" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.0" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "loglikelihood(updateprobs!(bcd))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparisons" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 0.000044 seconds (105 allocations: 4.859 KiB)\n" ] } ], "source": [ "@time loglikelihood(updateprobs!(bcd));" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "ll (generic function with 1 method)" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "function multProb(X,beta,k)\n", " p=size(X)[2]\n", " n=size(X)[1]\n", " beta=reshape(beta,p,k)\n", " denominator=zeros(n)\n", " numerator=exp(X*beta)\n", " denominator=sum(numerator,2)\n", " prob=numerator./denominator\n", " beta=vec(beta)\n", " return(prob)\n", "end\n", "\n", "function ll(X,beta,z)\n", " p=size(X)[2]\n", " k=size(z)[2]\n", " beta=reshape(beta,p,k)\n", " probs=multProb(X,beta,k)\n", " val=0\n", " for i = 1:(size(X)[1])\n", " val=val+log(probs[i,find(z[i,:].==1)])\n", " end\n", " beta=vec(beta)\n", " return(val)\n", "end" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "((100, 200), (200, 4))" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "size(X), size(β)" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "100×4 Adjoint{Float64,Array{Float64,2}}:\n", " 0.0 1.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 0.0 1.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 0.0 0.0 1.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " ⋮ \n", " 0.0 1.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 0.0 0.0 1.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0\n", " 1.0 0.0 0.0 0.0" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "const zz = z'" ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "collapsed": true }, "outputs": [], "source": [ "fill!(β, 0.25);" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "collapsed": false }, "outputs": [ { "ename": "DimensionMismatch", "evalue": "DimensionMismatch(\"matrix is not square: dimensions are (100, 4)\")", "output_type": "error", "traceback": [ "DimensionMismatch(\"matrix is not square: dimensions are (100, 4)\")", "", "Stacktrace:", " [1] checksquare at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/LinearAlgebra.jl:214 [inlined]", " [2] exp!(::Array{Float64,2}) at 
/buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/dense.jl:513", " [3] exp at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/dense.jl:481 [inlined]", " [4] multProb(::Array{Float64,2}, ::Array{Float64,2}, ::Int64) at ./In[34]:6", " [5] ll(::Array{Float64,2}, ::Array{Float64,2}, ::Adjoint{Float64,Array{Float64,2}}) at ./In[34]:17", " [6] top-level scope at In[38]:1" ] } ], "source": [ "ll(X, β, zz)" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "collapsed": false }, "outputs": [ { "ename": "DimensionMismatch", "evalue": "DimensionMismatch(\"matrix is not square: dimensions are (100, 4)\")", "output_type": "error", "traceback": [ "DimensionMismatch(\"matrix is not square: dimensions are (100, 4)\")", "", "Stacktrace:", " [1] checksquare at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/LinearAlgebra.jl:214 [inlined]", " [2] exp!(::Array{Float64,2}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/dense.jl:513", " [3] exp at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/dense.jl:481 [inlined]", " [4] multProb(::Array{Float64,2}, ::Array{Float64,2}, ::Int64) at ./In[34]:6", " [5] ll(::Array{Float64,2}, ::Array{Float64,2}, ::Adjoint{Float64,Array{Float64,2}}) at ./In[34]:17", " [6] top-level scope at util.jl:156", " [7] top-level scope at In[39]:1" ] } ], "source": [ "@time ll(X, β, zz)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Both calls fail because `multProb` and `ll` were written for Julia 0.6. Under Julia 1.1, `exp` of a matrix is the matrix exponential, which is why `checksquare` complains about the 100×4 matrix `X*beta`; element-wise exponentiation is written `exp.(X*beta)`, `sum(numerator,2)` must be `sum(numerator, dims=2)`, and `find` has been replaced by `findall`. The original code needs those updates before its timing can be compared with that of `loglikelihood(updateprobs!(bcd))`." ] } ], "metadata": { "kernelspec": { "display_name": "Julia 1.1.0", "language": "julia", "name": "julia-1.1" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.1.0" } }, "nbformat": 4, "nbformat_minor": 2 }