{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "![ML Logo](http://spark-mooc.github.io/web-assets/images/CS190.1x_Banner_300.png)\n", "# **Principal Component Analysis Lab**\n", "#### This lab delves into exploratory analysis of neuroscience data, specifically using principal component analysis (PCA) and feature-based aggregation. We will use a dataset of light-sheet imaging recorded by the [Ahrens Lab](http://www.janelia.org/lab/ahrens-lab) at Janelia Research Campus, and hosted on the CodeNeuro [data repository](http://datasets.codeneuro.org).\n", "#### Our dataset is generated by studying the movement of a larval [zebrafish](http://en.wikipedia.org/wiki/Zebrafish), an animal that is especially useful in neuroscience because it is transparent, making it possible to record activity over its entire brain using a technique called [light-sheet microscopy](http://en.wikipedia.org/wiki/Light_sheet_fluorescence_microscopy). Specifically, we'll work with time-varying images containing patterns of the zebrafish's neural activity as it is presented with a moving visual pattern. Different stimuli induce different patterns across the brain, and we can use exploratory analyses to identify these patterns. Read [\"Mapping brain activity at scale with cluster computing\"](http://thefreemanlab.com/work/papers/freeman-2014-nature-methods.pdf) for more information about these kinds of analyses.\n", "#### During this lab you will learn about PCA, and then compare and contrast different exploratory analyses of the same data set to identify which neural patterns they best highlight.\n", "#### ** This lab will cover: **\n", "+ ####*Part 1:* Work through the steps of PCA on a sample dataset\n", " + ####*Visualization 1:* Two-dimensional Gaussians\n", "+ ####*Part 2:* Write a PCA function and evaluate PCA on sample datasets\n", " + ####*Visualization 2:* PCA projection\n", " + ####*Visualization 3:* Three-dimensional data\n", " + ####*Visualization 4:* 2D representation of 3D data\n", "+ ####*Part 3:* Parse, inspect, and preprocess neuroscience data then perform PCA\n", " + ####*Visualization 5:* Pixel intensity\n", " + ####*Visualization 6:* Normalized data\n", " + ####*Visualization 7:* Top two components as images\n", " + ####*Visualization 8:* Top two components as one image\n", "+ ####*Part 4:* Perform feature-based aggregation followed by PCA\n", " + ####*Visualization 9:* Top two components by time\n", " + ####*Visualization 10:* Top two components by direction\n", " \n", "#### Note that, for reference, you can look up the details of the relevant Spark methods in [Spark's Python API](https://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD) and the relevant NumPy methods in the [NumPy Reference](http://docs.scipy.org/doc/numpy/reference/index.html)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "labVersion = 'cs190_week5_v_1_2'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " \n", "### **Part 1: Work through the steps of PCA on a sample dataset**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **Visualization 1: Two-dimensional Gaussians**\n", "#### Principal Component Analysis, or PCA, is a strategy for dimensionality reduction. To better understand PCA, we'll work with synthetic data generated by sampling from the [two-dimensional Gaussian distribution](http://en.wikipedia.org/wiki/Multivariate_normal_distribution). 
This distribution takes as input the mean and variance of each dimension, as well as the covariance between the two dimensions.\n", " \n", "#### In our visualizations below, we will specify the mean of each dimension to be 50 and the variance along each dimension to be 1. We will explore two different values for the covariance: 0 and 0.9. When the covariance is zero, the two dimensions are uncorrelated, and hence the data looks spherical. In contrast, when the covariance is 0.9, the two dimensions are strongly (positively) correlated and thus the data is non-spherical. As we'll see in Parts 1 and 2, the non-spherical data is amenable to dimensionality reduction via PCA, while the spherical data is not." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',\n", " gridWidth=1.0):\n", " \"\"\"Template for generating the plot layout.\"\"\"\n", " plt.close()\n", " fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')\n", " ax.axes.tick_params(labelcolor='#999999', labelsize='10')\n", " for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:\n", " axis.set_ticks_position('none')\n", " axis.set_ticks(ticks)\n", " axis.label.set_color('#999999')\n", " if hideLabels: axis.set_ticklabels([])\n", " plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')\n", " map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])\n", " return fig, ax\n", "\n", "def create2DGaussian(mn, sigma, cov, n):\n", " \"\"\"Randomly sample points from a two-dimensional Gaussian distribution\"\"\"\n", " np.random.seed(142)\n", " return np.random.multivariate_normal(np.array([mn, mn]), np.array([[sigma, cov], [cov, sigma]]), n)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "dataRandom = create2DGaussian(mn=50, sigma=1, cov=0, n=100)\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2))\n", "ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')\n", "ax.set_xlim(45, 54.5), ax.set_ylim(45, 54.5)\n", "plt.scatter(dataRandom[:,0], dataRandom[:,1], s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)\n", "pass" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "dataCorrelated = create2DGaussian(mn=50, sigma=1, cov=.9, n=100)\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2))\n", "ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')\n", "ax.set_xlim(45.5, 54.5), ax.set_ylim(45.5, 54.5)\n", "plt.scatter(dataCorrelated[:,0], dataCorrelated[:,1], s=14**2, c='#d6ebf2',\n", " edgecolors='#8cbfd0', alpha=0.75)\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(1a) Interpreting PCA**\n", "#### PCA can be interpreted as identifying the \"directions\" along which the data vary the most. In the first step of PCA, we must first center our data. Working with our correlated dataset, first compute the mean of each feature (column) in the dataset. 
Then for each observation, modify the features by subtracting their corresponding mean, to create a zero mean dataset.\n", "#### Note that `correlatedData` is an RDD of NumPy arrays. This allows us to perform certain operations more succinctly. For example, we can sum the columns of our dataset using `correlatedData.sum()`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "correlatedData = sc.parallelize(dataCorrelated)\n", "\n", "meanCorrelated = \n", "correlatedDataZeroMean = correlatedData.\n", "\n", "print meanCorrelated\n", "print correlatedData.take(1)\n", "print correlatedDataZeroMean.take(1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Interpreting PCA (1a)\n", "from test_helper import Test\n", "Test.assertTrue(np.allclose(meanCorrelated, [49.95739037, 49.97180477]),\n", " 'incorrect value for meanCorrelated')\n", "Test.assertTrue(np.allclose(correlatedDataZeroMean.take(1)[0], [-0.28561917, 0.10351492]),\n", " 'incorrect value for correlatedDataZeroMean')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(1b) Sample covariance matrix**\n", "#### We are now ready to compute the sample covariance matrix. If we define $\\scriptsize \\mathbf{X} \\in \\mathbb{R}^{n \\times d}$ as the zero mean data matrix, then the sample covariance matrix is defined as: $$ \\mathbf{C}_{\\mathbf X} = \\frac{1}{n} \\mathbf{X}^\\top \\mathbf{X} \\,.$$ To compute this matrix, compute the outer product of each data point, add together these outer products, and divide by the number of data points. The data are two dimensional, so the resulting covariance matrix should be a 2x2 matrix.\n", " \n", "#### Note that [np.outer()](http://docs.scipy.org/doc/numpy/reference/generated/numpy.outer.html) can be used to calculate the outer product of two NumPy arrays." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "# Compute the covariance matrix using outer products and correlatedDataZeroMean\n", "correlatedCov = \n", "print correlatedCov" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Sample covariance matrix (1b)\n", "covResult = [[ 0.99558386, 0.90148989], [0.90148989, 1.08607497]]\n", "Test.assertTrue(np.allclose(covResult, correlatedCov), 'incorrect value for correlatedCov')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(1c) Covariance Function**\n", "#### Next, use the expressions above to write a function to compute the sample covariance matrix for an arbitrary `data` RDD." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "def estimateCovariance(data):\n", " \"\"\"Compute the covariance matrix for a given rdd.\n", "\n", " Note:\n", " The multi-dimensional covariance array should be calculated using outer products. 
Don't\n", "        forget to normalize the data by first subtracting the mean.\n", "\n", "    Args:\n", "        data (RDD of np.ndarray): An `RDD` consisting of NumPy arrays.\n", "\n", "    Returns:\n", "        np.ndarray: A multi-dimensional array where the number of rows and columns both equal the\n", "            length of the arrays in the input `RDD`.\n", "    \"\"\"\n", "    \n", "\n", "correlatedCovAuto = estimateCovariance(correlatedData)\n", "print correlatedCovAuto" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Covariance function (1c)\n", "correctCov = [[ 0.99558386, 0.90148989], [0.90148989, 1.08607497]]\n", "Test.assertTrue(np.allclose(correctCov, correlatedCovAuto),\n", "                'incorrect value for correlatedCovAuto')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(1d) Eigendecomposition**\n", "#### Now that we've computed the sample covariance matrix, we can use it to find directions of maximal variance in the data. Specifically, we can perform an eigendecomposition of this matrix to find its eigenvalues and eigenvectors. The $\\scriptsize d $ eigenvectors of the covariance matrix give us the directions of maximal variance, and are often called the \"principal components.\" The associated eigenvalues are the variances in these directions. In particular, the eigenvector corresponding to the largest eigenvalue is the direction of maximal variance (this is sometimes called the \"top\" eigenvector). Eigendecomposition of a $\\scriptsize d \\times d $ covariance matrix has a (roughly) cubic runtime complexity with respect to $\\scriptsize d $. Whenever $\\scriptsize d $ is relatively small (e.g., less than a few thousand) we can quickly perform this eigendecomposition locally.\n", " \n", "#### Use a function from `numpy.linalg` called [eigh](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) to perform the eigendecomposition. Next, sort the eigenvectors based on their corresponding eigenvalues (from high to low), yielding a matrix where the columns are the eigenvectors (and the first column is the top eigenvector). Note that [np.argsort](http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html#numpy-argsort) can be used to obtain the indices that would sort the eigenvalues in ascending order. Finally, set the `topComponent` variable equal to the top eigenvector or principal component, which is a $\\scriptsize 2 $-dimensional vector (array with two values).\n", "#### Note that the eigenvectors returned by `eigh` appear in the columns and not the rows. For example, the first eigenvector of `eigVecs` would be found in the first column and could be accessed using `eigVecs[:,0]`."
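] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### As an illustrative aside (and not the solution to the exercise below), the next cell applies `eigh` and `np.argsort` to a small, made-up symmetric matrix so you can see that `eigh` returns its eigenvalues in ascending order and its eigenvectors as columns, and how those columns can be reordered from largest to smallest eigenvalue. The matrix `toyCov` and all names prefixed with `toy` are invented for illustration." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Illustrative aside on a made-up 2x2 symmetric matrix (not the solution to the exercise below)\n", "import numpy as np\n", "from numpy.linalg import eigh\n", "\n", "toyCov = np.array([[2.0, 1.0], [1.0, 2.0]])\n", "toyVals, toyVecs = eigh(toyCov)        # eigh returns eigenvalues in ascending order\n", "toyOrder = np.argsort(toyVals)[::-1]   # indices that sort the eigenvalues from largest to smallest\n", "toyVecsSorted = toyVecs[:, toyOrder]   # reorder the columns; one eigenvector per column\n", "print 'sorted eigenvalues: {0}'.format(toyVals[toyOrder])\n", "print 'top eigenvector (first column): {0}'.format(toyVecsSorted[:, 0])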
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "from numpy.linalg import eigh\n", "\n", "# Calculate the eigenvalues and eigenvectors from correlatedCovAuto\n", "eigVals, eigVecs = \n", "print 'eigenvalues: {0}'.format(eigVals)\n", "print '\\neigenvectors: \\n{0}'.format(eigVecs)\n", "\n", "# Use np.argsort to find the top eigenvector based on the largest eigenvalue\n", "inds = np.argsort()\n", "topComponent = \n", "print '\\ntop principal component: {0}'.format(topComponent)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Eigendecomposition (1d)\n", "def checkBasis(vectors, correct):\n", " return np.allclose(vectors, correct) or np.allclose(np.negative(vectors), correct)\n", "Test.assertTrue(checkBasis(topComponent, [0.68915649, 0.72461254]),\n", " 'incorrect value for topComponent')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(1e) PCA scores**\n", "#### We just computed the top principal component for a 2-dimensional non-spherical dataset. Now let's use this principal component to derive a one-dimensional representation for the original data. To compute these compact representations, which are sometimes called PCA \"scores\", calculate the dot product between each data point in the raw data and the top principal component." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "# Use the topComponent and the data from correlatedData to generate PCA scores\n", "correlatedDataScores = \n", "print 'one-dimensional data (first three):\\n{0}'.format(np.asarray(correlatedDataScores.take(3)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST PCA Scores (1e)\n", "firstThree = [70.51682806, 69.30622356, 71.13588168]\n", "Test.assertTrue(checkBasis(correlatedDataScores.take(3), firstThree),\n", " 'incorrect value for correlatedDataScores')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **Part 2: Write a PCA function and evaluate PCA on sample datasets**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(2a) PCA function**\n", "#### We now have all the ingredients to write a general PCA function. Instead of working with just the top principal component, our function will compute the top $\\scriptsize k$ principal components and principal scores for a given dataset. Write this general function `pca`, and run it with `correlatedData` and $\\scriptsize k = 2$. Hint: Use results from Part (1c), Part (1d), and Part (1e).\n", " \n", "####Note: As discussed in lecture, our implementation is a reasonable strategy when $\\scriptsize d $ is small, though more efficient distributed algorithms exist when $\\scriptsize d $ is large." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "def pca(data, k=2):\n", " \"\"\"Computes the top `k` principal components, corresponding scores, and all eigenvalues.\n", "\n", " Note:\n", " All eigenvalues should be returned in sorted order (largest to smallest). `eigh` returns\n", " each eigenvectors as a column. 
This function should also return eigenvectors as columns.\n", "\n", " Args:\n", " data (RDD of np.ndarray): An `RDD` consisting of NumPy arrays.\n", " k (int): The number of principal components to return.\n", "\n", " Returns:\n", " tuple of (np.ndarray, RDD of np.ndarray, np.ndarray): A tuple of (eigenvectors, `RDD` of\n", " scores, eigenvalues). Eigenvectors is a multi-dimensional array where the number of\n", " rows equals the length of the arrays in the input `RDD` and the number of columns equals\n", " `k`. The `RDD` of scores has the same number of rows as `data` and consists of arrays\n", " of length `k`. Eigenvalues is an array of length d (the number of features).\n", " \"\"\"\n", " \n", " # Return the `k` principal components, `k` scores, and all eigenvalues\n", " \n", "\n", "# Run pca on correlatedData with k = 2\n", "topComponentsCorrelated, correlatedDataScoresAuto, eigenvaluesCorrelated = \n", "\n", "# Note that the 1st principal component is in the first column\n", "print 'topComponentsCorrelated: \\n{0}'.format(topComponentsCorrelated)\n", "print ('\\ncorrelatedDataScoresAuto (first three): \\n{0}'\n", " .format('\\n'.join(map(str, correlatedDataScoresAuto.take(3)))))\n", "print '\\neigenvaluesCorrelated: \\n{0}'.format(eigenvaluesCorrelated)\n", "\n", "# Create a higher dimensional test set\n", "pcaTestData = sc.parallelize([np.arange(x, x + 4) for x in np.arange(0, 20, 4)])\n", "componentsTest, testScores, eigenvaluesTest = pca(pcaTestData, 3)\n", "\n", "print '\\npcaTestData: \\n{0}'.format(np.array(pcaTestData.collect()))\n", "print '\\ncomponentsTest: \\n{0}'.format(componentsTest)\n", "print ('\\ntestScores (first three): \\n{0}'\n", " .format('\\n'.join(map(str, testScores.take(3)))))\n", "print '\\neigenvaluesTest: \\n{0}'.format(eigenvaluesTest)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST PCA Function (2a)\n", "Test.assertTrue(checkBasis(topComponentsCorrelated.T,\n", " [[0.68915649, 0.72461254], [-0.72461254, 0.68915649]]),\n", " 'incorrect value for topComponentsCorrelated')\n", "firstThreeCorrelated = [[70.51682806, 69.30622356, 71.13588168], [1.48305648, 1.5888655, 1.86710679]]\n", "Test.assertTrue(np.allclose(firstThreeCorrelated,\n", " np.vstack(np.abs(correlatedDataScoresAuto.take(3))).T),\n", " 'incorrect value for firstThreeCorrelated')\n", "Test.assertTrue(np.allclose(eigenvaluesCorrelated, [1.94345403, 0.13820481]),\n", " 'incorrect values for eigenvaluesCorrelated')\n", "topComponentsCorrelatedK1, correlatedDataScoresK1, eigenvaluesCorrelatedK1 = pca(correlatedData, 1)\n", "Test.assertTrue(checkBasis(topComponentsCorrelatedK1.T, [0.68915649, 0.72461254]),\n", " 'incorrect value for components when k=1')\n", "Test.assertTrue(np.allclose([70.51682806, 69.30622356, 71.13588168],\n", " np.vstack(np.abs(correlatedDataScoresK1.take(3))).T),\n", " 'incorrect value for scores when k=1')\n", "Test.assertTrue(np.allclose(eigenvaluesCorrelatedK1, [1.94345403, 0.13820481]),\n", " 'incorrect values for eigenvalues when k=1')\n", "Test.assertTrue(checkBasis(componentsTest.T[0], [ .5, .5, .5, .5]),\n", " 'incorrect value for componentsTest')\n", "Test.assertTrue(np.allclose(np.abs(testScores.first()[0]), 3.),\n", " 'incorrect value for testScores')\n", "Test.assertTrue(np.allclose(eigenvaluesTest, [ 128, 0, 0, 0 ]), 'incorrect value for eigenvaluesTest')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(2b) PCA on `dataRandom`**\n", "#### Next, use the PCA function we 
just developed to find the top two principal components of the spherical `dataRandom` we created in Visualization 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "randomData = sc.parallelize(dataRandom)\n", "\n", "# Use pca on randomData\n", "topComponentsRandom, randomDataScoresAuto, eigenvaluesRandom = \n", "\n", "print 'topComponentsRandom: \\n{0}'.format(topComponentsRandom)\n", "print ('\\nrandomDataScoresAuto (first three): \\n{0}'\n", " .format('\\n'.join(map(str, randomDataScoresAuto.take(3)))))\n", "print '\\neigenvaluesRandom: \\n{0}'.format(eigenvaluesRandom)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST PCA on `dataRandom` (2b)\n", "Test.assertTrue(checkBasis(topComponentsRandom.T,\n", " [[-0.2522559 , 0.96766056], [-0.96766056, -0.2522559]]),\n", " 'incorrect value for topComponentsRandom')\n", "firstThreeRandom = [[36.61068572, 35.97314295, 35.59836628],\n", " [61.3489929 , 62.08813671, 60.61390415]]\n", "Test.assertTrue(np.allclose(firstThreeRandom, np.vstack(np.abs(randomDataScoresAuto.take(3))).T),\n", " 'incorrect value for randomDataScoresAuto')\n", "Test.assertTrue(np.allclose(eigenvaluesRandom, [1.4204546, 0.99521397]),\n", " 'incorrect value for eigenvaluesRandom')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **Visualization 2: PCA projection**\n", "#### Plot the original data and the 1-dimensional reconstruction using the top principal component to see how the PCA solution looks. The original data is plotted as before; however, the 1-dimensional reconstruction (projection) is plotted in green on top of the original data and the vectors (lines) representing the two principal components are shown as dotted lines." 
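] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Before the plotting helper below, here is a minimal sketch (with made-up numbers) of the arithmetic it automates: projecting a single centered point onto a unit-length direction yields a one-dimensional score, and multiplying that score by the direction gives the point's reconstruction. All `toy` names are invented for illustration." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Toy projection of one made-up, already-centered 2D point onto a made-up unit-length direction\n", "import numpy as np\n", "\n", "toyPoint = np.array([1.0, 2.0])\n", "toyDirection = np.array([0.6, 0.8])            # unit length, since 0.6**2 + 0.8**2 == 1\n", "toyScore = toyPoint.dot(toyDirection)          # one-dimensional representation (the PCA score)\n", "toyReconstruction = toyScore * toyDirection    # the projection back in the original 2D space\n", "print 'score: {0}'.format(toyScore)\n", "print 'reconstruction: {0}'.format(toyReconstruction)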
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def projectPointsAndGetLines(data, components, xRange):\n", " \"\"\"Project original data onto first component and get line details for top two components.\"\"\"\n", " topComponent= components[:, 0]\n", " slope1, slope2 = components[1, :2] / components[0, :2]\n", "\n", " means = data.mean()[:2]\n", " demeaned = data.map(lambda v: v - means)\n", " projected = demeaned.map(lambda v: (v.dot(topComponent) /\n", " topComponent.dot(topComponent)) * topComponent)\n", " remeaned = projected.map(lambda v: v + means)\n", " x1,x2 = zip(*remeaned.collect())\n", "\n", " lineStartP1X1, lineStartP1X2 = means - np.asarray([xRange, xRange * slope1])\n", " lineEndP1X1, lineEndP1X2 = means + np.asarray([xRange, xRange * slope1])\n", " lineStartP2X1, lineStartP2X2 = means - np.asarray([xRange, xRange * slope2])\n", " lineEndP2X1, lineEndP2X2 = means + np.asarray([xRange, xRange * slope2])\n", "\n", " return ((x1, x2), ([lineStartP1X1, lineEndP1X1], [lineStartP1X2, lineEndP1X2]),\n", " ([lineStartP2X1, lineEndP2X1], [lineStartP2X2, lineEndP2X2]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "((x1, x2), (line1X1, line1X2), (line2X1, line2X2)) = \\\n", " projectPointsAndGetLines(correlatedData, topComponentsCorrelated, 5)\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2), figsize=(7, 7))\n", "ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')\n", "ax.set_xlim(45.5, 54.5), ax.set_ylim(45.5, 54.5)\n", "plt.plot(line1X1, line1X2, linewidth=3.0, c='#8cbfd0', linestyle='--')\n", "plt.plot(line2X1, line2X2, linewidth=3.0, c='#d6ebf2', linestyle='--')\n", "plt.scatter(dataCorrelated[:,0], dataCorrelated[:,1], s=14**2, c='#d6ebf2',\n", " edgecolors='#8cbfd0', alpha=0.75)\n", "plt.scatter(x1, x2, s=14**2, c='#62c162', alpha=.75)\n", "pass" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "((x1, x2), (line1X1, line1X2), (line2X1, line2X2)) = \\\n", " projectPointsAndGetLines(randomData, topComponentsRandom, 5)\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2), figsize=(7, 7))\n", "ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')\n", "ax.set_xlim(45.5, 54.5), ax.set_ylim(45.5, 54.5)\n", "plt.plot(line1X1, line1X2, linewidth=3.0, c='#8cbfd0', linestyle='--')\n", "plt.plot(line2X1, line2X2, linewidth=3.0, c='#d6ebf2', linestyle='--')\n", "plt.scatter(dataRandom[:,0], dataRandom[:,1], s=14**2, c='#d6ebf2',\n", " edgecolors='#8cbfd0', alpha=0.75)\n", "plt.scatter(x1, x2, s=14**2, c='#62c162', alpha=.75)\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **Visualization 3: Three-dimensional data**\n", "#### So far we have worked with two-dimensional data. Now let's generate three-dimensional data with highly correlated features. As in Visualization 1, we'll create samples from a multivariate Gaussian distribution, which in three dimensions requires us to specify three means, three variances, and three covariances.\n", " \n", "#### In the 3D graphs below, we have included the 2D plane that corresponds to the top two principal components, i.e. the plane with the smallest euclidean distance between the points and itself. 
Notice that the data points, despite living in three-dimensions, are found near a two-dimensional plane: the left graph shows how most points are close to the plane when it is viewed from its side, while the right graph shows that the plane covers most of the variance in the data. Note that darker blues correspond to points with higher values for the third dimension." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from mpl_toolkits.mplot3d import Axes3D\n", "\n", "m = 100\n", "mu = np.array([50, 50, 50])\n", "r1_2 = 0.9\n", "r1_3 = 0.7\n", "r2_3 = 0.1\n", "sigma1 = 5\n", "sigma2 = 20\n", "sigma3 = 20\n", "c = np.array([[sigma1 ** 2, r1_2 * sigma1 * sigma2, r1_3 * sigma1 * sigma3],\n", " [r1_2 * sigma1 * sigma2, sigma2 ** 2, r2_3 * sigma2 * sigma3],\n", " [r1_3 * sigma1 * sigma3, r2_3 * sigma2 * sigma3, sigma3 ** 2]])\n", "np.random.seed(142)\n", "dataThreeD = np.random.multivariate_normal(mu, c, m)\n", "\n", "from matplotlib.colors import ListedColormap, Normalize\n", "from matplotlib.cm import get_cmap\n", "norm = Normalize()\n", "cmap = get_cmap(\"Blues\")\n", "clrs = cmap(np.array(norm(dataThreeD[:,2])))[:,0:3]\n", "\n", "fig = plt.figure(figsize=(11, 6))\n", "ax = fig.add_subplot(121, projection='3d')\n", "ax.azim=-100\n", "ax.scatter(dataThreeD[:,0], dataThreeD[:,1], dataThreeD[:,2], c=clrs, s=14**2)\n", "\n", "xx, yy = np.meshgrid(np.arange(-15, 10, 1), np.arange(-50, 30, 1))\n", "normal = np.array([0.96981815, -0.188338, -0.15485978])\n", "z = (-normal[0] * xx - normal[1] * yy) * 1. / normal[2]\n", "xx = xx + 50\n", "yy = yy + 50\n", "z = z + 50\n", "\n", "ax.set_zlim((-20, 120)), ax.set_ylim((-20, 100)), ax.set_xlim((30, 75))\n", "ax.plot_surface(xx, yy, z, alpha=.10)\n", "\n", "ax = fig.add_subplot(122, projection='3d')\n", "ax.azim=10\n", "ax.elev=20\n", "#ax.dist=8\n", "ax.scatter(dataThreeD[:,0], dataThreeD[:,1], dataThreeD[:,2], c=clrs, s=14**2)\n", "\n", "ax.set_zlim((-20, 120)), ax.set_ylim((-20, 100)), ax.set_xlim((30, 75))\n", "ax.plot_surface(xx, yy, z, alpha=.1)\n", "plt.tight_layout()\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(2c) 3D to 2D**\n", "#### We will now use PCA to see if we can recover the 2-dimensional plane on which the data live. Parallelize the data, and use our PCA function from above, with $ \\scriptsize k=2 $ components." 
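] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### As a short aside before the exercise (this is not the exercise solution), the next cell builds a small, made-up 3D dataset whose third coordinate is a linear combination of the other two, so the points lie exactly on a plane. The smallest eigenvalue of its covariance matrix is numerically zero, which is precisely the structure PCA exploits when reducing 3D data to 2D. The `planar` and `coord` names are invented for illustration." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Made-up planar data: the third coordinate is a linear combination of the first two,\n", "# so the smallest eigenvalue of the covariance matrix is (numerically) zero\n", "import numpy as np\n", "from numpy.linalg import eigh\n", "\n", "np.random.seed(0)\n", "coordA, coordB = np.random.randn(2, 50)\n", "planarData = np.vstack([coordA, coordB, coordA + coordB]).T   # shape (50, 3); points lie on a plane\n", "planarCov = np.cov(planarData, rowvar=0, bias=1)              # rows are observations; divide by n\n", "planarVals, planarVecs = eigh(planarCov)\n", "print 'eigenvalues (ascending): {0}'.format(planarVals)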
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "threeDData = sc.parallelize(dataThreeD)\n", "componentsThreeD, threeDScores, eigenvaluesThreeD = \n", "\n", "print 'componentsThreeD: \\n{0}'.format(componentsThreeD)\n", "print ('\\nthreeDScores (first three): \\n{0}'\n", " .format('\\n'.join(map(str, threeDScores.take(3)))))\n", "print '\\neigenvaluesThreeD: \\n{0}'.format(eigenvaluesThreeD)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST 3D to 2D (2c)\n", "Test.assertEquals(componentsThreeD.shape, (3, 2), 'incorrect shape for componentsThreeD')\n", "Test.assertTrue(np.allclose(np.sum(eigenvaluesThreeD), 969.796443367),\n", " 'incorrect value for eigenvaluesThreeD')\n", "Test.assertTrue(np.allclose(np.abs(np.sum(componentsThreeD)), 1.77238943258),\n", " 'incorrect value for componentsThreeD')\n", "Test.assertTrue(np.allclose(np.abs(np.sum(threeDScores.take(3))), 237.782834092),\n", " 'incorrect value for threeDScores')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **Visualization 4: 2D representation of 3D data**\n", "#### See the 2D version of the data that captures most of its original structure. Note that darker blues correspond to points with higher values for the original data's third dimension." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scoresThreeD = np.asarray(threeDScores.collect())\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(20, 150, 20), np.arange(-40, 110, 20))\n", "ax.set_xlabel(r'New $x_1$ values'), ax.set_ylabel(r'New $x_2$ values')\n", "ax.set_xlim(5, 150), ax.set_ylim(-45, 50)\n", "plt.scatter(scoresThreeD[:,0], scoresThreeD[:,1], s=14**2, c=clrs, edgecolors='#8cbfd0', alpha=0.75)\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(2d) Variance explained**\n", "#### Finally, let's quantify how much of the variance is being captured by PCA in each of the three synthetic datasets we've analyzed. To do this, we'll compute the fraction of retained variance by the top principal components. Recall that the eigenvalue corresponding to each principal component captures the variance along this direction. If our initial data is $\\scriptsize d$-dimensional, then the total variance in our data equals: $ \\scriptsize \\sum_{i=1}^d \\lambda_i $, where $\\scriptsize \\lambda_i$ is the eigenvalue corresponding to the $\\scriptsize i$th principal component. Moreover, if we use PCA with some $\\scriptsize k < d$, then we can compute the variance retained by these principal components by adding the top $\\scriptsize k$ eigenvalues. The fraction of retained variance equals the sum of the top $\\scriptsize k$ eigenvalues divided by the sum of all of the eigenvalues." 
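] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Here is a toy numeric check of the formula above, using made-up eigenvalues (the function you write below should compute the same ratio from the eigenvalues returned by `pca`): with eigenvalues 3, 1, and 0.5, the top two components retain (3 + 1) / 4.5, or roughly 89%, of the variance." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Worked numeric check of the retained-variance formula with made-up, already-sorted eigenvalues\n", "import numpy as np\n", "\n", "toyEigenvalues = np.array([3.0, 1.0, 0.5])\n", "kTop = 2\n", "toyFraction = toyEigenvalues[:kTop].sum() / toyEigenvalues.sum()\n", "print 'fraction of variance retained by the top {0} components: {1:.3f}'.format(kTop, toyFraction)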
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "def varianceExplained(data, k=1):\n", " \"\"\"Calculate the fraction of variance explained by the top `k` eigenvectors.\n", "\n", " Args:\n", " data (RDD of np.ndarray): An RDD that contains NumPy arrays which store the\n", " features for an observation.\n", " k: The number of principal components to consider.\n", "\n", " Returns:\n", " float: A number between 0 and 1 representing the percentage of variance explained\n", " by the top `k` eigenvectors.\n", " \"\"\"\n", " components, scores, eigenvalues = \n", " \n", "\n", "varianceRandom1 = varianceExplained(randomData, 1)\n", "varianceCorrelated1 = varianceExplained(correlatedData, 1)\n", "varianceRandom2 = varianceExplained(randomData, 2)\n", "varianceCorrelated2 = varianceExplained(correlatedData, 2)\n", "varianceThreeD2 = varianceExplained(threeDData, 2)\n", "print ('Percentage of variance explained by the first component of randomData: {0:.1f}%'\n", " .format(varianceRandom1 * 100))\n", "print ('Percentage of variance explained by both components of randomData: {0:.1f}%'\n", " .format(varianceRandom2 * 100))\n", "print ('\\nPercentage of variance explained by the first component of correlatedData: {0:.1f}%'.\n", " format(varianceCorrelated1 * 100))\n", "print ('Percentage of variance explained by both components of correlatedData: {0:.1f}%'\n", " .format(varianceCorrelated2 * 100))\n", "print ('\\nPercentage of variance explained by the first two components of threeDData: {0:.1f}%'\n", " .format(varianceThreeD2 * 100))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Variance explained (2d)\n", "Test.assertTrue(np.allclose(varianceRandom1, 0.588017172066), 'incorrect value for varianceRandom1')\n", "Test.assertTrue(np.allclose(varianceCorrelated1, 0.933608329586),\n", " 'incorrect value for varianceCorrelated1')\n", "Test.assertTrue(np.allclose(varianceRandom2, 1.0), 'incorrect value for varianceRandom2')\n", "Test.assertTrue(np.allclose(varianceCorrelated2, 1.0), 'incorrect value for varianceCorrelated2')\n", "Test.assertTrue(np.allclose(varianceThreeD2, 0.993967356912), 'incorrect value for varianceThreeD2')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " \n", "### **Part 3: Parse, inspect, and preprocess neuroscience data then perform PCA **" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " \n", "#### **Data introduction**\n", "#### A central challenge in neuroscience is understanding the organization and function of neurons, the cells responsible for processing and representing information in the brain. New technologies make it possible to monitor the responses of large populations of neurons in awake animals. In general, neurons communicate through electrical impulses that must be recorded with electrodes, which is a challenging process. As an alternative, we can genetically engineer animals so that their neurons express special proteins that flouresce or light up when active, and then use microscopy to record neural activity as images. A recently developed method called light-sheet microscopy lets us do this in a special, transparent animal, the larval zebrafish, over nearly its entire brain. The resulting data are time-varying images containing the activity of hundreds of thousands of neurons. 
Given the raw data, which is enormous, we want to find compact spatial and temporal patterns: Which groups of neurons are active together? What is the time course of their activity? Are those patterns specific to particular events happening during the experiment (e.g. a stimulus that we might present). PCA is a powerful technique for finding spatial and temporal patterns in these kinds of data, and that's what we'll explore here!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(3a) Load neuroscience data**\n", "#### In the next sections we will use PCA to capture structure in neural datasets. Before doing the analysis, we will load and do some basic inspection of the data. The raw data are currently stored as a text file. Every line in the file contains the time series of image intensity for a single pixel in a time-varying image (i.e. a movie). The first two numbers in each line are the spatial coordinates of the pixel, and the remaining numbers are the time series. We'll use first() to inspect a single row, and print just the first 100 characters." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import os\n", "baseDir = os.path.join('data')\n", "inputPath = os.path.join('cs190', 'neuro.txt')\n", "\n", "inputFile = os.path.join(baseDir, inputPath)\n", "\n", "lines = sc.textFile(inputFile)\n", "print lines.first()[0:100]\n", "\n", "# Check that everything loaded properly\n", "assert len(lines.first()) == 1397\n", "assert lines.count() == 46460" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(3b) Parse the data**\n", "#### Parse the data into a key-value representation. We want each key to be a tuple of two-dimensional spatial coordinates and each value to be a NumPy array storing the associated time series. Write a function that converts a line of text into a (`tuple`, `np.ndarray`) pair. Then apply this function to each record in the RDD, and inspect the first entry of the new parsed data set. Now would be a good time to cache the data, and force a computation by calling count, to ensure the data are cached." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "def parse(line):\n", " \"\"\"Parse the raw data into a (`tuple`, `np.ndarray`) pair.\n", "\n", " Note:\n", " You should store the pixel coordinates as a tuple of two ints and the elements of the pixel intensity\n", " time series as an np.ndarray of floats.\n", "\n", " Args:\n", " line (str): A string representing an observation. Elements are separated by spaces. 
The\n", " first two elements represent the coordinates of the pixel, and the rest of the elements\n", " represent the pixel intensity over time.\n", "\n", " Returns:\n", " tuple of tuple, np.ndarray: A (coordinate, pixel intensity array) `tuple` where coordinate is\n", " a `tuple` containing two values and the pixel intensity is stored in an NumPy array\n", " which contains 240 values.\n", " \"\"\"\n", " \n", "\n", "rawData = lines.map(parse)\n", "rawData.cache()\n", "entry = rawData.first()\n", "print 'Length of movie is {0} seconds'.format(len(entry[1]))\n", "print 'Number of pixels in movie is {0:,}'.format(rawData.count())\n", "print ('\\nFirst entry of rawData (with only the first five values of the NumPy array):\\n({0}, {1})'\n", " .format(entry[0], entry[1][:5]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Parse the data (3b)\n", "Test.assertTrue(isinstance(entry[0], tuple), \"entry's key should be a tuple\")\n", "Test.assertEquals(len(entry), 2, 'entry should have a key and a value')\n", "Test.assertTrue(isinstance(entry[0][1], int), 'coordinate tuple should contain ints')\n", "Test.assertEquals(len(entry[0]), 2, \"entry's key should have two values\")\n", "Test.assertTrue(isinstance(entry[1], np.ndarray), \"entry's value should be an np.ndarray\")\n", "Test.assertTrue(isinstance(entry[1][0], np.float), 'the np.ndarray should consist of np.float values')\n", "Test.assertEquals(entry[0], (0, 0), 'incorrect key for entry')\n", "Test.assertEquals(entry[1].size, 240, 'incorrect length of entry array')\n", "Test.assertTrue(np.allclose(np.sum(entry[1]), 24683.5), 'incorrect values in entry array')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(3c) Min and max flouresence**\n", "#### Next we'll do some basic preprocessing on the data. The raw time-series data are in units of image flouresence, and baseline flouresence varies somewhat arbitrarily from pixel to pixel. First, compute the minimum and maximum values across all pixels." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "mn = \n", "mx = \n", "\n", "print mn, mx" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Min and max flouresence (3c)\n", "Test.assertTrue(np.allclose(mn, 100.6), 'incorrect value for mn')\n", "Test.assertTrue(np.allclose(mx, 940.8), 'incorrect value for mx')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **Visualization 5: Pixel intensity**\n", "#### Let's now see how a random pixel varies in value over the course of the time series. We'll visualize a pixel that exhibits a standard deviation of over 100." 
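] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### As a quick aside with made-up numbers, the next cell shows why the standard deviation is a reasonable way to pick out a responsive pixel: a nearly constant trace has a small standard deviation, while a strongly varying trace has a large one, which is what the filter in the following cell relies on. The `flatTrace` and `activeTrace` values are invented for illustration." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Made-up traces illustrating why filtering on np.std picks out responsive pixels\n", "import numpy as np\n", "\n", "flatTrace = np.array([310., 312., 309., 311., 310.])\n", "activeTrace = np.array([300., 650., 420., 700., 350.])\n", "print 'std of flat trace:   {0:.1f}'.format(np.std(flatTrace))\n", "print 'std of active trace: {0:.1f}'.format(np.std(activeTrace))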
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "example = rawData.filter(lambda (k, v): np.std(v) > 100).values().first()\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(0, 300, 50), np.arange(300, 800, 100))\n", "ax.set_xlabel(r'time'), ax.set_ylabel(r'flouresence')\n", "ax.set_xlim(-20, 270), ax.set_ylim(270, 730)\n", "plt.plot(range(len(example)), example, c='#8cbfd0', linewidth='3.0')\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(3d) Fractional signal change**\n", "####To convert from these raw flouresence units to more intuitive units of fractional signal change, write a function that takes a time series for a particular pixel and subtracts and divides by the mean. Then apply this function to all the pixels. Confirm that this changes the maximum and minimum values." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "def rescale(ts):\n", " \"\"\"Take a np.ndarray and return the standardized array by subtracting and dividing by the mean.\n", "\n", " Note:\n", " You should first subtract the mean and then divide by the mean.\n", "\n", " Args:\n", " ts (np.ndarray): Time series data (`np.float`) representing pixel intensity.\n", "\n", " Returns:\n", " np.ndarray: The times series adjusted by subtracting the mean and dividing by the mean.\n", " \"\"\"\n", " \n", "\n", "scaledData = rawData.mapValues(lambda v: rescale(v))\n", "mnScaled = scaledData.map(lambda (k, v): v).map(lambda v: min(v)).min()\n", "mxScaled = scaledData.map(lambda (k, v): v).map(lambda v: max(v)).max()\n", "print mnScaled, mxScaled" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Fractional signal change (3d)\n", "Test.assertTrue(isinstance(scaledData.first()[1], np.ndarray), 'incorrect type returned by rescale')\n", "Test.assertTrue(np.allclose(mnScaled, -0.27151288), 'incorrect value for mnScaled')\n", "Test.assertTrue(np.allclose(mxScaled, 0.90544876), 'incorrect value for mxScaled')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **Visualization 6: Normalized data**\n", "#### Now that we've normalized our data, let's once again see how a random pixel varies in value over the course of the time series. We'll visualize a pixel that exhibits a standard deviation of over 0.1. Note the change in scale on the y-axis compared to the previous visualization." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "example = scaledData.filter(lambda (k, v): np.std(v) > 0.1).values().first()\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(0, 300, 50), np.arange(-.1, .6, .1))\n", "ax.set_xlabel(r'time'), ax.set_ylabel(r'flouresence')\n", "ax.set_xlim(-20, 260), ax.set_ylim(-.12, .52)\n", "plt.plot(range(len(example)), example, c='#8cbfd0', linewidth='3.0')\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(3e) PCA on the scaled data**\n", "#### We now have a preprocessed dataset with $\\scriptsize n = 46460$ pixels and $\\scriptsize d = 240$ seconds of time series data for each pixel. We can interpret the pixels as our observations and each pixel value in the time series as a feature. 
We would like to find patterns in brain activity during this time series, and we expect to find correlations over time. We can thus use PCA to find a more compact representation of our data, one that is also easier to visualize.\n", " \n", "#### Use the `pca` function from Part (2a) to perform PCA on the preprocessed neuroscience data with $\\scriptsize k = 3$, resulting in a new low-dimensional 46460 by 3 dataset. The `pca` function takes an RDD of arrays, but `data` is an RDD of key-value pairs, so you'll need to extract the values." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "# Run pca using scaledData\n", "componentsScaled, scaledScores, eigenvaluesScaled = " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST PCA on the scaled data (3e)\n", "Test.assertEquals(componentsScaled.shape, (240, 3), 'incorrect shape for componentsScaled')\n", "Test.assertTrue(np.allclose(np.abs(np.sum(componentsScaled[:5, :])), 0.283150995232),\n", "                'incorrect value for componentsScaled')\n", "Test.assertTrue(np.allclose(np.abs(np.sum(scaledScores.take(3))), 0.0285507449251),\n", "                'incorrect value for scaledScores')\n", "Test.assertTrue(np.allclose(np.sum(eigenvaluesScaled[:5]), 0.206987501564),\n", "                'incorrect value for eigenvaluesScaled')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **Visualization 7: Top two components as images**\n", "#### Now, we'll view the scores for the top two components as images. Note that we reshape the vectors by the dimensions of the original image, 230 x 202.\n", "#### Each of these graphs maps the values of a single component to a grayscale image. This provides us with a visual representation that we can use to see the overall structure of the zebrafish brain and to identify where high and low values occur. However, using this representation, there is a substantial amount of useful information that is difficult to interpret. In the next visualization, we'll see how we can improve interpretability by combining the two principal components into a single image using a color mapping."
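] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The reshape used below can be easier to follow on a tiny, made-up example: a flat vector of per-pixel scores is rearranged into an image grid and then transposed (the real images below are 230 x 202, not 3 x 2)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Toy reshape of a made-up flat score vector into a tiny 3 x 2 pixel image\n", "import numpy as np\n", "\n", "tinyScores = np.arange(6.0)\n", "tinyImage = tinyScores.reshape(3, 2).T\n", "print 'flat scores: {0}'.format(tinyScores)\n", "print 'reshaped and transposed: \\n{0}'.format(tinyImage)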
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.cm as cm\n", "\n", "scoresScaled = np.vstack(scaledScores.collect())\n", "imageOneScaled = scoresScaled[:,0].reshape(230, 202).T\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)\n", "ax.grid(False)\n", "ax.set_title('Top Principal Component', color='#888888')\n", "image = plt.imshow(imageOneScaled,interpolation='nearest', aspect='auto', cmap=cm.gray)\n", "pass" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "imageTwoScaled = scoresScaled[:,1].reshape(230, 202).T\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)\n", "ax.grid(False)\n", "ax.set_title('Second Principal Component', color='#888888')\n", "image = plt.imshow(imageTwoScaled,interpolation='nearest', aspect='auto', cmap=cm.gray)\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **Visualization 8: Top two components as one image**\n", "#### When we perform PCA and color neurons based on their location in the low-dimensional space, we can interpret areas with similar colors as exhibiting similar responses (at least in terms of the simple representation we recover with PCA). Below, the first graph shows how low-dimensional representations, which correspond to the first two principal components, are mapped to colors. The second graph shows the result of this color mapping using the zebrafish neural data.\n", " \n", "####The second graph clearly exhibits patterns of neural similarity throughout different regions of the brain. However, when performing PCA on the full dataset, there are multiple reasons why neurons might have similar responses. The neurons might respond similarly to different stimulus directions, their responses might have similar temporal dynamics, or their response similarity could be influenced by both temporal and stimulus-specific factors. However, with our initial PCA analysis, we cannot pin down the underlying factors, and hence it is hard to interpret what \"similarity\" really means." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Optional Details: Note that we use [polar coordinates](https://en.wikipedia.org/wiki/Polar_coordinate_system) to map our low-dimensional points to colors. Using polar coordinates provides us with an angle $ (\\phi) $ and magnitude $ (\\rho) $. We then use the well-known polar color space, [hue-saturation-value](https://en.wikipedia.org/wiki/HSL_and_HSV) (HSV), and map the angle to hue and the magnitude to value (brightness). This maps low magnitude points to black while allowing larger magnitude points to be differentiated by their angle. Additionally, the function `polarTransform` that maps low-dimensional representations to colors has an input parameter called `scale`, which we set to 2.0, and you can try lower values for the two graphs to see more nuanced mappings -- values near 1.0 are particularly interesting." 
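] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### To make the color mapping more concrete, the next cell converts a single made-up low-dimensional point to an angle and a magnitude and then to an RGB color with `matplotlib.colors.hsv_to_rgb`. The `polarTransform` function below applies the same idea to entire images at once; its exact angle convention and scaling differ slightly, so treat this as a sketch rather than the function's definition." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Sketch of the cartesian-to-polar-to-color idea for one made-up point; the 50x scaling is arbitrary\n", "import numpy as np\n", "from matplotlib.colors import hsv_to_rgb\n", "\n", "toyX1, toyX2 = 0.02, -0.01\n", "toyPhi = (np.arctan2(toyX2, toyX1) % (2 * np.pi)) / (2 * np.pi)   # angle mapped into [0, 1) for hue\n", "toyRho = np.sqrt(toyX1 ** 2 + toyX2 ** 2)                         # magnitude, used for brightness\n", "toyHsv = np.array([[[toyPhi, 1.0, min(1.0, 50 * toyRho)]]])       # shape (1, 1, 3), as hsv_to_rgb expects\n", "toyRgb = hsv_to_rgb(toyHsv)\n", "print 'hue: {0:.3f}, magnitude: {1:.3f}, rgb: {2}'.format(toyPhi, toyRho, toyRgb[0, 0])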
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Adapted from python-thunder's Colorize.transform where cmap='polar'.\n", "# Checkout the library at: https://github.com/thunder-project/thunder and\n", "# http://thunder-project.org/\n", "\n", "def polarTransform(scale, img):\n", " \"\"\"Convert points from cartesian to polar coordinates and map to colors.\"\"\"\n", " from matplotlib.colors import hsv_to_rgb\n", "\n", " img = np.asarray(img)\n", " dims = img.shape\n", "\n", " phi = ((np.arctan2(-img[0], -img[1]) + np.pi/2) % (np.pi*2)) / (2 * np.pi)\n", " rho = np.sqrt(img[0]**2 + img[1]**2)\n", " saturation = np.ones((dims[1], dims[2]))\n", "\n", " out = hsv_to_rgb(np.dstack((phi, saturation, scale * rho)))\n", "\n", " return np.clip(out * scale, 0, 1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Show the polar mapping from principal component coordinates to colors.\n", "x1AbsMax = np.max(np.abs(imageOneScaled))\n", "x2AbsMax = np.max(np.abs(imageTwoScaled))\n", "\n", "numOfPixels = 300\n", "x1Vals = np.arange(-x1AbsMax, x1AbsMax, (2 * x1AbsMax) / numOfPixels)\n", "x2Vals = np.arange(x2AbsMax, -x2AbsMax, -(2 * x2AbsMax) / numOfPixels)\n", "x2Vals.shape = (numOfPixels, 1)\n", "\n", "x1Data = np.tile(x1Vals, (numOfPixels, 1))\n", "x2Data = np.tile(x2Vals, (1, numOfPixels))\n", "\n", "# Try changing the first parameter to lower values\n", "polarMap = polarTransform(2.0, [x1Data, x2Data])\n", "\n", "gridRange = np.arange(0, numOfPixels + 25, 25)\n", "fig, ax = preparePlot(gridRange, gridRange, figsize=(9.0, 7.2), hideLabels=True)\n", "image = plt.imshow(polarMap, interpolation='nearest', aspect='auto')\n", "ax.set_xlabel('Principal component one'), ax.set_ylabel('Principal component two')\n", "gridMarks = (2 * gridRange / float(numOfPixels) - 1.0)\n", "x1Marks = x1AbsMax * gridMarks\n", "x2Marks = -x2AbsMax * gridMarks\n", "ax.get_xaxis().set_ticklabels(map(lambda x: '{0:.1f}'.format(x), x1Marks))\n", "ax.get_yaxis().set_ticklabels(map(lambda x: '{0:.1f}'.format(x), x2Marks))\n", "pass" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Use the same transformation on the image data\n", "# Try changing the first parameter to lower values\n", "brainmap = polarTransform(2.0, [imageOneScaled, imageTwoScaled])\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)\n", "ax.grid(False)\n", "image = plt.imshow(brainmap,interpolation='nearest', aspect='auto')\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **Part 4: Feature-based aggregation and PCA**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(4a) Aggregation using arrays**\n", " \n", "#### In the analysis in Part 3, we performed PCA on the full time series data, trying to find global patterns across all 240 seconds of the time series. However, our analysis doesn't use the fact that different events happened during those 240 seconds. Specifically, during those 240 seconds, the zebrafish was presented with 12 different direction-specific visual patterns, with each one lasting for 20 seconds, for a total of 12 x 20 = 240 features. Stronger patterns are likely to emerge if we incorporate knowledge of our experimental setup into our analysis. 
As we'll see, we can isolate the impact of the temporal response or of the stimulus direction by appropriately aggregating our features.\n", " \n", "#### In order to aggregate the features we will use basic ideas from matrix multiplication. First, note that if we use `np.dot` with a two-dimensional array, then NumPy performs the equivalent matrix-multiply calculation. For example, `np.array([[1, 2, 3], [4, 5, 6]]).dot(np.array([2, 0, 1]))` produces `np.array([5, 14])`.\n", "#### $$\\begin{bmatrix} 1 & 2 & 3 \\\\\\ 4 & 5 & 6 \\end{bmatrix} \\begin{bmatrix} 2 \\\\\\ 0 \\\\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 5 \\\\\\ 14 \\end{bmatrix} $$\n", "#### By setting up our multi-dimensional array properly we can multiply it by a vector to perform certain aggregation operations. For example, imagine we had a 3-dimensional vector, $ \\scriptsize \\begin{bmatrix} 1 & 2 & 3 \\end{bmatrix}^\\top $ and we wanted to create a 2-dimensional vector containing the sum of its first and last elements as one value and three times its second value as another value, i.e., $ \\scriptsize \\begin{bmatrix} 4 & 6 \\end{bmatrix}^\\top $. We can generate this result via matrix multiplication as follows: `np.array([[1, 0, 1], [0, 3, 0]]).dot(np.array([1, 2, 3]))` which produces `np.array([4, 6])`.\n", "#### $$\\begin{bmatrix} 1 & 0 & 1 \\\\\\ 0 & 3 & 0 \\end{bmatrix} \\begin{bmatrix} 1 \\\\\\ 2 \\\\\\ 3 \\end{bmatrix} = \\begin{bmatrix} 4 \\\\\\ 6 \\end{bmatrix} $$\n", "#### For this exercise, you'll create several arrays that perform different types of aggregation. The aggregation is specified in the comments before each array. You should fill in the array values by hand. We'll automate array creation in the next two exercises." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "vector = np.array([0., 1., 2., 3., 4., 5.])\n", "\n", "# Create a multi-dimensional array that when multiplied (using .dot) against vector, results in\n", "# a two element array where the first element is the sum of the 0, 2, and 4 indexed elements of\n", "# vector and the second element is the sum of the 1, 3, and 5 indexed elements of vector.\n", "# This should be a 2 row by 6 column array\n", "sumEveryOther = np.array()\n", "\n", "# Create a multi-dimensional array that when multiplied (using .dot) against vector, results in a\n", "# three element array where the first element is the sum of the 0 and 3 indexed elements of vector,\n", "# the second element is the sum of the 1 and 4 indexed elements of vector, and the third element is\n", "# the sum of the 2 and 5 indexed elements of vector.\n", "# This should be a 3 row by 6 column array\n", "sumEveryThird = np.array()\n", "\n", "# Create a multi-dimensional array that can be used to sum the first three elements of vector and\n", "# the last three elements of vector, which returns a two element array with those values when dotted\n", "# with vector.\n", "# This should be a 2 row by 6 column array\n", "sumByThree = np.array()\n", "\n", "# Create a multi-dimensional array that sums the first two elements, second two elements, and\n", "# last two elements of vector, which returns a three element array with those values when dotted\n", "# with vector.\n", "# This should be a 3 row by 6 column array\n", "sumByTwo = np.array()\n", "\n", "print 'sumEveryOther.dot(vector):\\t{0}'.format(sumEveryOther.dot(vector))\n", "print 'sumEveryThird.dot(vector):\\t{0}'.format(sumEveryThird.dot(vector))\n",
"\n", "print '\\nsumByThree.dot(vector):\\t{0}'.format(sumByThree.dot(vector))\n", "print 'sumByTwo.dot(vector): \\t{0}'.format(sumByTwo.dot(vector))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Aggregation using arrays (4a)\n", "Test.assertEquals(sumEveryOther.shape, (2, 6), 'incorrect shape for sumEveryOther')\n", "Test.assertEquals(sumEveryThird.shape, (3, 6), 'incorrect shape for sumEveryThird')\n", "Test.assertTrue(np.allclose(sumEveryOther.dot(vector), [6, 9]), 'incorrect value for sumEveryOther')\n", "Test.assertTrue(np.allclose(sumEveryThird.dot(vector), [3, 5, 7]),\n", " 'incorrect value for sumEveryThird')\n", "Test.assertEquals(sumByThree.shape, (2, 6), 'incorrect shape for sumByThree')\n", "Test.assertEquals(sumByTwo.shape, (3, 6), 'incorrect shape for sumByTwo')\n", "Test.assertTrue(np.allclose(sumByThree.dot(vector), [3, 12]), 'incorrect value for sumByThree')\n", "Test.assertTrue(np.allclose(sumByTwo.dot(vector), [1, 5, 9]), 'incorrect value for sumByTwo')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(4b) Recreate with `np.tile` and `np.eye`**\n", "#### [np.tile](http://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html) is useful for repeating arrays in one or more dimensions. For example, `np.tile(np.array([[1, 2], [3, 4]]), 2)` produces `np.array([[1, 2, 1, 2], [3, 4, 3, 4]]))`.\n", "#### $$ np.tile( \\begin{bmatrix} 1 & 2 \\\\\\ 3 & 4 \\end{bmatrix} , 2) \\to \\begin{bmatrix} 1 & 2 & 1& 2 \\\\\\ 3 & 4 & 3 & 4 \\end{bmatrix} $$\n", "#### Recall that [np.eye](http://docs.scipy.org/doc/numpy/reference/generated/numpy.eye.html) can be used to create an identity array $ (\\mathbf{I_n}) $. For example, `np.eye(3)` produces `np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])`.\n", "#### $$ np.eye( 3 ) \\to \\begin{bmatrix} 1 & 0 & 0 \\\\\\ 0 & 1 & 0 \\\\\\ 0 & 0 & 1 \\end{bmatrix} $$\n", "#### In this exercise, recreate `sumEveryOther` and `sumEveryThird` using `np.tile` and `np.eye`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Reference for what to recreate\n", "print 'sumEveryOther: \\n{0}'.format(sumEveryOther)\n", "print '\\nsumEveryThird: \\n{0}'.format(sumEveryThird)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "# Use np.tile and np.eye to recreate the arrays\n", "sumEveryOtherTile = \n", "sumEveryThirdTile = \n", "\n", "print sumEveryOtherTile\n", "print 'sumEveryOtherTile.dot(vector): {0}'.format(sumEveryOtherTile.dot(vector))\n", "print '\\n', sumEveryThirdTile\n", "print 'sumEveryThirdTile.dot(vector): {0}'.format(sumEveryThirdTile.dot(vector))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Recreate with `np.tile` and `np.eye` (4b)\n", "Test.assertEquals(sumEveryOtherTile.shape, (2, 6), 'incorrect shape for sumEveryOtherTile')\n", "Test.assertEquals(sumEveryThirdTile.shape, (3, 6), 'incorrect shape for sumEveryThirdTile')\n", "Test.assertTrue(np.allclose(sumEveryOtherTile.dot(vector), [6, 9]),\n", " 'incorrect value for sumEveryOtherTile')\n", "Test.assertTrue(np.allclose(sumEveryThirdTile.dot(vector), [3, 5, 7]),\n", " 'incorrect value for sumEveryThirdTile')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(4c) Recreate with `np.kron` **\n", "#### The Kronecker product is the generalization of outer products involving matrices, and we've included some examples below to illustrate the idea. Please refer to the [Wikipedia page](https://en.wikipedia.org/wiki/Kronecker_product) for a detailed definition. We can use [np.kron](http://docs.scipy.org/doc/numpy/reference/generated/numpy.kron.html) to compute Kronecker products and recreate the `sumBy` arrays. Note that $ \\otimes $ indicates a Kronecker product.\n", "#### $$ \\begin{bmatrix} 1 & 2 \\\\\\ 3 & 4 \\end{bmatrix} \\otimes \\begin{bmatrix} 1 & 2 \\end{bmatrix} = \\begin{bmatrix} 1 \\cdot 1 & 1 \\cdot 2 & 2 \\cdot 1 & 2 \\cdot 2 \\\\\\ 3 \\cdot 1 & 3 \\cdot 2 & 4 \\cdot 1 & 4 \\cdot 2 \\end{bmatrix} = \\begin{bmatrix} 1 & 2 & 2 & 4 \\\\\\ 3 & 6 & 4 & 8 \\end{bmatrix} $$\n", "#### We can see how the Kronecker product continues to expand if we add another row to the second array.\n", "#### $$ \\begin{bmatrix} 1 & 2 \\\\\\ 3 & 4 \\end{bmatrix} \\otimes \\begin{bmatrix} 1 & 2 \\\\\\ 3 & 4 \\end{bmatrix} = \\begin{bmatrix} 1 \\cdot 1 & 1 \\cdot 2 & 2 \\cdot 1 & 2 \\cdot 2 \\\\\\ 1 \\cdot 3 & 1 \\cdot 4 & 2 \\cdot 3 & 2 \\cdot 4 \\\\\\ 3 \\cdot 1 & 3 \\cdot 2 & 4 \\cdot 1 & 4 \\cdot 2 \\\\\\ 3 \\cdot 3 & 3 \\cdot 4 & 4 \\cdot 3 & 4 \\cdot 4 \\end{bmatrix} = \\begin{bmatrix} 1 & 2 & 2 & 4 \\\\\\ 3 & 4 & 6 & 8 \\\\\\ 3 & 6 & 4 & 8 \\\\\\ 9 & 12 & 12 & 16 \\end{bmatrix} $$\n", "#### For this exercise, you'll recreate the `sumByThree` and `sumByTwo` arrays using `np.kron`, `np.eye`, and `np.ones`. Note that `np.ones` creates an array of all ones." 
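, "\n", "#### As a small, purely illustrative sketch of how `np.kron`, `np.eye`, and `np.ones` fit together (again, not the exercise answer; the 2 by 4 example below and the variable name `blockSum` are made up for illustration), the Kronecker product of an identity array with a row of ones yields a block-wise summing array:\n", "\n", "```python\n", "import numpy as np\n", "\n", "# np.kron(np.eye(2), np.ones(2)) places a row of ones in each diagonal block, giving\n", "# [[1., 1., 0., 0.], [0., 0., 1., 1.]]; dotted with a length-4 vector it sums the\n", "# first pair of elements and then the last pair of elements.\n", "blockSum = np.kron(np.eye(2), np.ones(2))\n", "print blockSum.dot(np.array([0., 1., 2., 3.]))   # prints [ 1.  5.]\n", "```"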
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Reference for what to recreate\n", "print 'sumByThree: \\n{0}'.format(sumByThree)\n", "print '\\nsumByTwo: \\n{0}'.format(sumByTwo)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "# Use np.kron, np.eye, and np.ones to recreate the arrays\n", "sumByThreeKron = \n", "sumByTwoKron = \n", "\n", "print sumByThreeKron\n", "print 'sumByThreeKron.dot(vector): {0}'.format(sumByThreeKron.dot(vector))\n", "print '\\n', sumByTwoKron\n", "print 'sumByTwoKron.dot(vector): {0}'.format(sumByTwoKron.dot(vector))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Recreate with `np.kron` (4c)\n", "Test.assertEquals(sumByThreeKron.shape, (2, 6), 'incorrect shape for sumByThreeKron')\n", "Test.assertEquals(sumByTwoKron.shape, (3, 6), 'incorrect shape for sumByTwoKron')\n", "Test.assertTrue(np.allclose(sumByThreeKron.dot(vector), [3, 12]),\n", " 'incorrect value for sumByThreeKron')\n", "Test.assertTrue(np.allclose(sumByTwoKron.dot(vector), [1, 5, 9]),\n", " 'incorrect value for sumByTwoKron')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(4d) Aggregate by time**\n", "#### As we discussed in Part (4a), we would like to incorporate knowledge of our experimental setup into our analysis. To do this, we'll first study the temporal aspects of neural response by aggregating our features by time. In other words, we want to see how different pixels (and the underlying neurons captured in these pixels) react in each of the 20 seconds after a new visual pattern is displayed, regardless of what the pattern is. Hence, instead of working with the 240 features individually, we'll aggregate the original features into 20 new features, where the first new feature captures the pixel response one second after a visual pattern appears, the second new feature is the response after two seconds, and so on.\n", " \n", "#### We can perform this aggregation using a map operation. First, build a multi-dimensional array $ \\scriptsize \\mathbf{T} $ that, when dotted with a 240-dimensional vector, sums every 20th component of this vector and returns a 20-dimensional vector. Note that this exercise is similar to (4b). Once you have created your multi-dimensional array $ \\scriptsize \\mathbf{T} $, use a `map` operation with that array and each time series to generate a transformed dataset. We'll cache and count the output, as we'll be using it again." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "# Create a multi-dimensional array to perform the aggregation\n", "T = \n", "\n", "# Transform scaledData using T. 
Make sure to retain the keys.\n", "timeData = scaledData.\n", "\n", "timeData.cache()\n", "print timeData.count()\n", "print timeData.first()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Aggregate by time (4d)\n", "Test.assertEquals(T.shape, (20, 240), 'incorrect shape for T')\n", "timeDataFirst = timeData.values().first()\n", "timeDataFifth = timeData.values().take(5)[4]\n", "Test.assertEquals(timeData.count(), 46460, 'incorrect length of timeData')\n", "Test.assertEquals(timeDataFirst.size, 20, 'incorrect value length of timeData')\n", "Test.assertEquals(timeData.keys().first(), (0, 0), 'incorrect keys in timeData')\n", "Test.assertTrue(np.allclose(timeDataFirst[:2], [0.00802155, 0.00607693]),\n", " 'incorrect values in timeData')\n", "Test.assertTrue(np.allclose(timeDataFifth[-2:], [-0.00636676, -0.0179427]),\n", " 'incorrect values in timeData')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(4e) Obtain a compact representation**\n", "#### We now have a time-aggregated dataset with $\\scriptsize n = 46460$ pixels and $\\scriptsize d = 20$ aggregated time features, and we want to use PCA to find a more compact representation. Use the `pca` function from Part (2a) to perform PCA on this data with $\\scriptsize k = 3$, resulting in a new low-dimensional 46,460 by 3 dataset. As before, you'll need to extract the values from `timeData` since it is an RDD of key-value pairs." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "componentsTime, timeScores, eigenvaluesTime = \n", "\n", "print 'componentsTime: (first five) \\n{0}'.format(componentsTime[:5,:])\n", "print ('\\ntimeScores (first three): \\n{0}'\n", " .format('\\n'.join(map(str, timeScores.take(3)))))\n", "print '\\neigenvaluesTime: (first five) \\n{0}'.format(eigenvaluesTime[:5])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Obtain a compact representation (4e)\n", "Test.assertEquals(componentsTime.shape, (20, 3), 'incorrect shape for componentsTime')\n", "Test.assertTrue(np.allclose(np.abs(np.sum(componentsTime[:5, :])), 2.37299020),\n", " 'incorrect value for componentsTime')\n", "Test.assertTrue(np.allclose(np.abs(np.sum(timeScores.take(3))), 0.0213119114),\n", " 'incorrect value for timeScores')\n", "Test.assertTrue(np.allclose(np.sum(eigenvaluesTime[:5]), 0.844764792),\n", " 'incorrect value for eigenvaluesTime')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ** Visualization 9: Top two components by time **\n", "#### Let's view the scores from the first two PCs as a composite image. When we preprocess by aggregating by time and then perform PCA, we are only looking at variability related to temporal dynamics. As a result, if neurons appear similar -- have similar colors -- in the resulting image, it means that their responses vary similarly over time, regardless of how they might be encoding direction. In the image below, we can define the midline as the horizontal line across the middle of the brain. We see clear patterns of neural activity in different parts of the brain, and crucially note that the regions on either side of the midline are similar, which suggests that temporal dynamics do not differ across the two sides of the brain."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scoresTime = np.vstack(timeScores.collect())\n", "imageOneTime = scoresTime[:,0].reshape(230, 202).T\n", "imageTwoTime = scoresTime[:,1].reshape(230, 202).T\n", "brainmap = polarTransform(3, [imageOneTime, imageTwoTime])\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)\n", "ax.grid(False)\n", "image = plt.imshow(brainmap, interpolation='nearest', aspect='auto')\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(4f) Aggregate by direction**\n", "#### Next, let's perform a second type of feature aggregation so that we can study the direction-specific aspects of neural response by aggregating our features by direction. In other words, we want to see how different pixels (and the underlying neurons captured in these pixels) react when the zebrafish is presented with 12 direction-specific patterns, ignoring the temporal aspect of the reaction. Hence, instead of working with the 240 features individually, we'll aggregate the original features into 12 new features, where the first new feature captures the average pixel response to the first direction-specific visual pattern, the second new feature is the response to the second direction-specific visual pattern, and so on.\n", " \n", "#### As in Part (4c), we'll design a multi-dimensional array $ \\scriptsize \\mathbf{D} $ that, when multiplied by a 240-dimensional vector, sums the first 20 components, then the second 20 components, and so on. First create $ \\scriptsize \\mathbf{D} $, then use a `map` operation with that array and each time series to generate a transformed dataset. We'll cache and count the output, as we'll be using it again." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "# Create a multi-dimensional array to perform the aggregation\n", "D = \n", "\n", "# Transform scaledData using D. Make sure to retain the keys.\n", "directionData = scaledData.\n", "\n", "directionData.cache()\n", "print directionData.count()\n", "print directionData.first()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Aggregate by direction (4f)\n", "Test.assertEquals(D.shape, (12, 240), 'incorrect shape for D')\n", "directionDataFirst = directionData.values().first()\n", "directionDataFifth = directionData.values().take(5)[4]\n", "Test.assertEquals(directionData.count(), 46460, 'incorrect length of directionData')\n", "Test.assertEquals(directionDataFirst.size, 12, 'incorrect value length of directionData')\n", "Test.assertEquals(directionData.keys().first(), (0, 0), 'incorrect keys in directionData')\n", "Test.assertTrue(np.allclose(directionDataFirst[:2], [ 0.03346365, 0.03638058]),\n", " 'incorrect values in directionData')\n", "Test.assertTrue(np.allclose(directionDataFifth[:2], [ 0.01479147, -0.02090099]),\n", " 'incorrect values in directionData')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(4g) Compact representation of direction data**\n", "#### We now have a direction-aggregated dataset with $\\scriptsize n = 46460$ pixels and $\\scriptsize d = 12$ aggregated direction features, and we want to use PCA to find a more compact representation. 
Use the `pca` function from Part (2a) to perform PCA on this data with $\\scriptsize k = 3$, resulting in a new low-dimensional 46,460 by 3 dataset. As before, you'll need to extract the values from `directionData` since it is an RDD of key-value pairs." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TODO: Replace with appropriate code\n", "componentsDirection, directionScores, eigenvaluesDirection = \n", "\n", "print 'componentsDirection: (first five) \\n{0}'.format(componentsDirection[:5,:])\n", "print ('\\ndirectionScores (first three): \\n{0}'\n", " .format('\\n'.join(map(str, directionScores.take(3)))))\n", "print '\\neigenvaluesDirection: (first five) \\n{0}'.format(eigenvaluesDirection[:5])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# TEST Compact representation of direction data (4g)\n", "Test.assertEquals(componentsDirection.shape, (12, 3), 'incorrect shape for componentsDirection')\n", "Test.assertTrue(np.allclose(np.abs(np.sum(componentsDirection[:5, :])), 1.080232069),\n", " 'incorrect value for componentsDirection')\n", "Test.assertTrue(np.allclose(np.abs(np.sum(directionScores.take(3))), 0.10993162084),\n", " 'incorrect value for directionScores')\n", "Test.assertTrue(np.allclose(np.sum(eigenvaluesDirection[:5]), 2.0089720377),\n", " 'incorrect value for eigenvaluesDirection')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " \n", "#### **Visualization 10: Top two components by direction**\n", "#### Again, let's view the scores from the first two PCs as a composite image. When we preprocess by averaging across time (grouping by direction) and then perform PCA, we are only looking at variability related to stimulus direction. As a result, if neurons appear similar -- have similar colors -- in the image, it means that their responses vary similarly across directions, regardless of how they evolve over time. In the image below, we see a different pattern of similarity across regions of the brain. Moreover, regions on either side of the midline are colored differently, which suggests that we are looking at a property, direction selectivity, that has a different representation across the two sides of the brain." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scoresDirection = np.vstack(directionScores.collect())\n", "imageOneDirection = scoresDirection[:,0].reshape(230, 202).T\n", "imageTwoDirection = scoresDirection[:,1].reshape(230, 202).T\n", "brainmap = polarTransform(2, [imageOneDirection, imageTwoDirection])\n", "# with thunder: Colorize(cmap='polar', scale=2).transform([imageOneDirection, imageTwoDirection])\n", "\n", "# generate layout and plot data\n", "fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)\n", "ax.grid(False)\n", "image = plt.imshow(brainmap, interpolation='nearest', aspect='auto')\n", "pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### **(4h) Next steps**\n", "#### In the analyses above, we have successfully identified regions of the brain that encode particular properties, e.g., a particular temporal pattern or selectivity to a stimulus. However, this is only the first step! These exploratory analyses are typically followed by more targeted investigation, both through analysis and experiment. For example, we might find all neurons that prefer one stimulus direction, and then do an experiment in which we stimulate or inactivate only those neurons and look at the effect on the animal's behavior. Alternatively, we might subdivide neurons into groups based on simple forms of stimulus selectivity like the ones analyzed here, and then estimate coupling across different neuronal populations, i.e., whether we can predict one population's response as a function of another's. This can be framed as a massive pairwise regression problem that is related to techniques you learned earlier in the course and that demands large-scale implementations." ] } ], "metadata": {}, "nbformat": 4, "nbformat_minor": 0 }