{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Biostat M280 Homework 2\n", "\n", "**Due Friday, May 3 @ 11:59PM**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Q1. Nonnegative Matrix Factorization\n", "\n", "Nonnegative matrix factorization (NNMF) was introduced by [Lee and Seung (1999)](https://www.nature.com/articles/44565) as an analog of principal components and vector quantization with applications in data compression and clustering. In this homework we consider algorithms for fitting NNMF and high performance computing using graphical processing units (GPUs).\n", "\n", "\n", "\n", "In mathematical terms, one approximates a data matrix $\\mathbf{X} \\in \\mathbb{R}^{m \\times n}$ with nonnegative entries $x_{ij}$ by a product of two low-rank matrices $\\mathbf{V} \\in \\mathbb{R}^{m \\times r}$ and $\\mathbf{W} \\in \\mathbb{R}^{r \\times n}$ with nonnegative entries $v_{ik}$ and $w_{kj}$. Consider minimization of the squared Frobenius norm\n", "$$\n", "\tL(\\mathbf{V}, \\mathbf{W}) = \\|\\mathbf{X} - \\mathbf{V} \\mathbf{W}\\|_{\\text{F}}^2 = \\sum_i \\sum_j \\left(x_{ij} - \\sum_k v_{ik} w_{kj} \\right)^2, \\quad v_{ik} \\ge 0, w_{kj} \\ge 0,\n", "$$\n", "which should lead to a good factorization. Later in the course we will learn how to derive a majorization-minimization (MM) algorithm with iterative updates\n", "$$\n", "\tv_{ik}^{(t+1)} = v_{ik}^{(t)} \\frac{\\sum_j x_{ij} w_{kj}^{(t)}}{\\sum_j b_{ij}^{(t)} w_{kj}^{(t)}}, \\quad \\text{where } b_{ij}^{(t)} = \\sum_k v_{ik}^{(t)} w_{kj}^{(t)},\n", "$$\n", "$$\n", "\tw_{kj}^{(t+1)} = w_{kj}^{(t)} \\frac{\\sum_i x_{ij} v_{ik}^{(t+1)}}{\\sum_i b_{ij}^{(t+1/2)} v_{ik}^{(t+1)}}, \\quad \\text{where } b_{ij}^{(t+1/2)} = \\sum_k v_{ik}^{(t+1)} w_{kj}^{(t)}\n", "$$\n", "that drive the objective $L^{(t)} = L(\\mathbf{V}^{(t)}, \\mathbf{W}^{(t)})$ downhill. Superscript $t$ indicates iteration number. Efficiency (both speed and memory) will be the most important criterion when grading this problem.\n", "\n", "\n", "1. Implement the algorithm with arguments: $\\mathbf{X}$ (data, each row is a vectorized image), rank $r$, convergence tolerance, and optional starting point.\n", "\n", "```julia\n", "function nnmf(\n", " X::Matrix{T},\n", " r::Integer;\n", " maxiter::Integer=1000, \n", " tol::Number=1e-4,\n", " V::Matrix{T}=rand(T, size(X, 1), r),\n", " W::Matrix{T}=rand(T, r, size(X, 2))\n", " ) where T <: AbstractFloat\n", " # implementation\n", " # Output\n", " return V, W\n", "end\n", "```\n", "\n", "2. Database 1 from the [MIT Center for Biological and Computational Learning (CBCL)](http://cbcl.mit.edu) reduces to a matrix $\\mathbf{X}$ containing $m = 2,429$ gray-scale face images with $n = 19 \\times 19 = 361$ pixels per face. Each image (row) is scaled to have mean and standard deviation 0.25. \n", "Read in the [`nnmf-2429-by-361-face.txt`](http://hua-zhou.github.io/teaching/biostatm280-2019spring/hw/hw2/nnmf-2429-by-361-face.txt) file, e.g., using \n", "```julia\n", "using DelimitedFiles\n", "X = readdlm(\"nnmf-2429-by-361-face.txt\", ' ', Float64)\n", "```\n", "Display a couple sample images, e.g., using `imshow` function in [ImageView](https://github.com/JuliaImages/ImageView.jl) or [PyPlot](https://github.com/JuliaPy/PyPlot.jl) package.\n", "\n", "3. Report the run times, using `@time`, of your function for fitting NNMF on the MIT CBCL face data set at ranks $r=10, 20, 30, 40, 50$. For ease of comparison (and grading), please start your algorithm with the provided $\\mathbf{V}^{(0)}$ (first $r$ columns of [`V0.txt`](http://hua-zhou.github.io/teaching/biostatm280-2019spring/hw/hw2/V0.txt)) and $\\mathbf{W}^{(0)}$ (first $r$ rows of [`W0.txt`](http://hua-zhou.github.io/teaching/biostatm280-2019spring/hw/hw2/W0.txt)) and stopping criterion\n", "$$\n", "\t\\frac{|L^{(t+1)} - L^{(t)}|}{|L^{(t)}| + 1} \\le 10^{-4}.\n", "$$\n", "\n", "4. Choose an $r \\in \\{10, 20, 30, 40, 50\\}$ and start your algorithm from a different $\\mathbf{V}^{(0)}$ and $\\mathbf{W}^{(0)}$. Do you obtain the same objective value and $(\\mathbf{V}, \\mathbf{W})$? Explain what you find.\n", "\n", "5. For the same $r$, start your algorithm from $v_{ik}^{(0)} = w_{kj}^{(0)} = 1$ for all $i,j,k$. Do you obtain the same objective value and $(\\mathbf{V}, \\mathbf{W})$? Explain what you find.\n", "\n", "6. Plot the basis images (rows of $\\mathbf{W}$) at rank $r=50$. What do you find?\n", "\n", "7. Investigate the GPU capabilities of Julia. Report the speed gain of your GPU code over CPU code at ranks $r=10, 20, 30, 40, 50$. Make sure to use the same starting point as in part 2." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Q2. Evaluate Multivariate Normal Density\n", "\n", "Consider a linear mixed effects model\n", "$$\n", "\ty_i = \\mathbf{x}_i^T \\beta + \\mathbf{z}_i^T \\gamma + \\epsilon_i, \\quad i=1,\\ldots,n,\n", "$$\n", "where $\\epsilon_i$ are independent normal errors $N(0,\\sigma_0^2)$, $\\beta \\in \\mathbb{R}^p$ are fixed effects, and $\\gamma \\in \\mathbb{R}^q$ are random effects assumed to be $N(\\mathbf{0}_q, \\sigma_1^2 \\mathbf{I}_q$) independent of $\\epsilon_i$. \n", "\n", "1. Show that \n", "$$\n", " \\mathbf{y} \\sim N \\left( \\mathbf{X} \\beta, \\sigma_0^2 \\mathbf{I}_n + \\sigma_1^2 \\mathbf{Z} \\mathbf{Z}^T \\right),\n", "$$\n", "where $\\mathbf{y} = (y_1, \\ldots, y_n)^T \\in \\mathbb{R}^n$, $\\mathbf{X} = (\\mathbf{x}_1, \\ldots, \\mathbf{x}_n)^T \\in \\mathbb{R}^{n \\times p}$, and $\\mathbf{Z} = (\\mathbf{z}_1, \\ldots, \\mathbf{z}_n)^T \\in \\mathbb{R}^{n \\times q}$. \n", "\n", "2. Write a function, with interface \n", " ```julia\n", " logpdf_mvn(y::Vector, Z::Matrix, σ0::Number, σ1::Number),\n", " ```\n", "that evaluates the log-density of a multivariate normal with mean $\\mathbf{0}$ and covariance $\\sigma_0^2 \\mathbf{I} + \\sigma_1^2 \\mathbf{Z} \\mathbf{Z}^T$ at $\\mathbf{y}$. Make your code efficient in the $n \\gg q$ case. \n", "\n", "3. Compare your result (both accuracy and timing) to the [Distributions.jl](https://juliastats.github.io/Distributions.jl/stable/multivariate.html#Distributions.MvNormal) package using following data.\n", "\n", " ```julia\n", " using BenchmarkTools, Distributions, Random\n", "\n", " Random.seed!(280)\n", " n, q = 2000, 10\n", " Z = randn(n, q)\n", " σ0, σ1 = 0.5, 2.0\n", " Σ = σ1^2 * Z * transpose(Z) + σ0^2 * I \n", " # MVN(0, Σ)\n", " mvn = MvNormal(Σ) \n", " # generate one instance from MNV(0, Σ)\n", " y = rand(mvn) \n", "\n", " # check you answer matches that from Distributions.jl\n", " @show logpdf_mvn(y, Z, σ0, σ1)\n", " @show logpdf(mvn, y)\n", "\n", " # benchmark\n", " @benchmark logpdf_mvn(y, Z, σ0, σ1)\n", " @benchmark logpdf(mvn, y)\n", " ```" ] } ], "metadata": { "@webio": { "lastCommId": null, "lastKernelId": null }, "kernelspec": { "display_name": "Julia 1.1.0", "language": "julia", "name": "julia-1.1" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.1.0" }, "toc": { "colors": { "hover_highlight": "#DAA520", "running_highlight": "#FF0000", "selected_highlight": "#FFD700" }, "moveMenuLeft": true, "nav_menu": { "height": "87px", "width": "252px" }, "navigate_menu": true, "number_sections": true, "sideBar": true, "skip_h1_title": true, "threshold": 4, "toc_cell": false, "toc_section_display": "block", "toc_window_display": false, "widenNotebook": false } }, "nbformat": 4, "nbformat_minor": 2 }