{ "metadata": { "name": "", "signature": "sha256:c92c39605f5ecf2bc2c69097aac25abfbe46156a992e31fecac5e21a57863e82" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "Lab 2 Exploratory Data Analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this lab, we'll explore some data from a very useful source, the UC Irvine machine learning data repository.\n", "\n", "First, create a directory ~/labs/lab2 on your VM. \n", "\n", "Open a browser on your VM and point it at this ipynb file. Click on the download link (top right hand corner of the page) and download this file into your lab2 directory.\n", "\n", "Now from inside (or outside) your VM open a browser and go to this URL (this may be a good time to enable clipboard sharing on your VM. Its under the \"Devices\" menu):\n", "https://archive.ics.uci.edu/ml/datasets/Heart+Disease\n", "and read the dataset description. \n", "\n", "Click on the \"Data Folder\" link near the top of the page. If you're not inside your VM, copy the URL for this page and then paste it into a browser on your VM. \n", "\n", "Now click on \"processed.cleveland.data\" and save it into ~/labs/lab2. \n", "\n", "Now read the data into a python and create a variable \"cleveland_raw_data\" which is a list of rows from this dataset. Each row should be a list of string values returned by the csv file reader." ] }, { "cell_type": "code", "collapsed": false, "input": [ "labdir = \"/home/datascience/labs/lab2/\"\n", "import csv\n", "\n", "with open(labdir+\"processed.cleveland.data\") as csvfile:\n", " cleveland_raw_data = list(csv.reader(csvfile)) \n", " " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> TODO: How many rows are there in the dataset?\n" ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Data Cleaning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we have to clean and sanitize the data. This data is pretty clean and is mostly numeric but contains some '?' in some fields. To make it easier to handle, we convert those fields to 'None'. For convenience, you should define a function \"safefloat\" that takes a string argument, and returns None if the argument is '?', otherwise the float value of the string. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "import string \n", "\n", "def safefloat(x): # TODO: Implement safefloat()\n", " \n", "cleveland_data = [[safefloat(x) for x in y] for y in cleveland_raw_data]" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As discussed in the dataset summary, the following are the column names. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "headers = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal', 'num']" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we construct a dictionary mapping these header names to the column numbers 0...13:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "headernum = dict(zip(headers, range(len(headers)))) " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Define a function \"getcol\" that takes a column name and returns the data in that column as a list of numbers." 
] }, { "cell_type": "code", "collapsed": false, "input": [ "def getcol(name): # TODO write getcol\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Basic Statistics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What is the minimum, maximum, mean and standard deviation of the age of this set of subjects? Use the numpy package with contains the mean() and std() functions. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "import numpy as np\n", "age = getcol('age')\n", "[min(age), max(age), np.mean(age), np.std(age)]" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we define a function select which given a column name and a predicate, returns the values of that column at rows for which the predicate is true." ] }, { "cell_type": "code", "collapsed": false, "input": [ "def select(colname, predicate):\n", " icol = headernum[colname]\n", " return [i[icol] for i in cleveland_data if predicate(i)]" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now run these expressions to get the mean age of male and female subjects." ] }, { "cell_type": "code", "collapsed": false, "input": [ "def fieldis(colname, cval):\n", " icol = headernum[colname]\n", " return lambda(x): x[icol] == cval\n", "\n", "[np.mean(select('age', fieldis('sex',1))), np.mean(select('age', fieldis('sex',0)))]" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> TODO: What were the mean ages for females and males?" ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Histograms of Data Fields" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot histograms of age and resting blood pressure" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import pylab\n", "%matplotlib inline\n", "\n", "h1 = pylab.hist(age, 30)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "bp = getcol('trestbps')\n", "h2 = pylab.hist(bp, 30)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> TODO\n", "Describe the rough shape of the distribution of bps.\n", "Is it skewed? " ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Scatter Plots" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make scatter plots of:\n", "* age vs bp (resting blood pressure) \n", "* age vs thalach (max heart rate)\n", " " ] }, { "cell_type": "code", "collapsed": false, "input": [ "pylab.scatter(age, bp)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "maxhr = getcol('thalach')\n", "pylab.scatter(age, maxhr)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can augment the basic scatter plots with other information that might be relevant. In the plot below, we used the 'num' field to color the dots. num is an integer indicating the degree of heart disease from 0...4. We also make the dots larger with the s= argument to make the colors easier to see. 
" ] }, { "cell_type": "code", "collapsed": false, "input": [ "pylab.scatter(age, bp, c=getcol('num'), s=50)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To figure out what color encodes what value, we can do a simple plot of the values 0...4" ] }, { "cell_type": "code", "collapsed": false, "input": [ "pylab.scatter(range(5), range(5), c=range(5), s=50)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> TODO: What do you notice about the distribution of num = 2 diagnoses?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These scatter plots seem to show trends. To make those clearer we can overlay regression lines. The regression line minimizes the total squared vertical distance from the line to the data points, and shows the general trend for the data. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "# for numpy we need arrays instead of lists of values\n", "age = np.array(getcol('age'))\n", "bp = np.array(getcol('trestbps'))\n", "\n", "pylab.scatter(age, bp)\n", "m, b = np.polyfit(age, bp, 1)\n", "pylab.plot(age, m*age + b, '-', color='red')" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "maxhr = np.array(getcol('thalach'))\n", "\n", "pylab.scatter(age, maxhr)\n", "m, b = np.polyfit(age, maxhr, 1)\n", "pylab.plot(age, m*age + b, '-', color='red')" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Critical Thinking with Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following scatter plot and regression line shows the relationship between blood pressure (X-axis) and heart disease (Y-axis). " ] }, { "cell_type": "code", "collapsed": false, "input": [ "num = np.array(getcol('num'))\n", "factor = bp\n", "\n", "pylab.scatter(factor, num)\n", "m, b = np.polyfit(factor, num, 1)\n", "pylab.plot(factor, m*factor + b, '-', color='red')" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> TODO: Based on this plot, do you think blood pressure influences heart disease?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now consider this plot of age versus num:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "num = np.array(getcol('num'))\n", "factor = age\n", "\n", "pylab.scatter(factor, num)\n", "m, b = np.polyfit(factor, num, 1)\n", "pylab.plot(factor, m*factor + b, '-', color='red')" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> TODO: Based on this plot of Age vs Num and the previous plot of Age vs BPS, what would you say now about the relation between BPS and Num?" ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Dimension Reduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall that dimension reduction allows you to look at the dominant factors in high-dimensional data. Matplotlib includes the PCA function for this purpose. 
You use it like this:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "from matplotlib.mlab import PCA\n", "cleveland_matrix = np.array(cleveland_data, dtype=np.float64) # First put the data in a 2D array of double-precision floats\n", "results = PCA(cleveland_matrix[:,0:8]) # leave out columns with None in them\n", "yy = results.Y # the projections of the data onto the principal component directions" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "pylab.scatter(yy[:,0],yy[:,1])" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> TODO: Do you see a relationship between the two main variables (X and Y axes of this plot)?" ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Text Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Download the NIPS bag-of-words dataset from here\n", "https://archive.ics.uci.edu/ml/machine-learning-databases/bag-of-words/docword.nips.txt.gz\n", "and save it to your lab2 directory. Unzip the file, producing docword.nips.txt.\n", "\n", "This file has 3 header lines: the number of docs, the number of distinct words, and the number of document-word pairs that follow (called nnz in the code below). Each following line represents one document-word pair with three fields:\n", "\n", "docid wordid wordcount\n", "\n", "We can read the file with a csv reader:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "with open(labdir+\"docword.nips.txt\") as csvfile:\n", "    ndocs = int(csvfile.readline())\n", "    nwords = int(csvfile.readline())\n", "    nnz = int(csvfile.readline())\n", "    nips_raw_data = list(csv.reader(csvfile, delimiter=' '))\n", "\n", "nips_data = [[int(x) for x in y] for y in nips_raw_data] # convert from string to numeric data" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "[ndocs, nwords, nnz]" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we're going to create an array 'counts' containing the counts for each word over all documents. Note that we use 'row[1]-1' as the index: the docword files use 1-based indexing, but Python uses zero-based indexing. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "counts = [0] * nwords\n", "for row in nips_data:\n", "    counts[row[1]-1] += row[2] # increment the count for this word by the value in the third column" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we zip the word indices with the counts (so the word index is the first column), and sort this table by word count in descending order." ] }, { "cell_type": "code", "collapsed": false, "input": [ "wordtab = list(zip(range(nwords), counts)) # list() so we can sort it in place\n", "wordtab.sort(key=lambda x: x[1], reverse=True)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The top (first) values in this list are the most frequent word ids (first column), and their counts (second column):" ] }, { "cell_type": "code", "collapsed": false, "input": [ "wordtab[0:8]" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now grab the vocabulary file for nips:\n", "https://archive.ics.uci.edu/ml/machine-learning-databases/bag-of-words/vocab.nips.txt\n", "and save it to lab2.\n", "Run the following to load it and create a dictionary (word -> wordid) and an inverse dictionary (wordid -> word) from it. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "mydict = {} # word dictionary\n", "words = [''] * nwords # inverse dictionary - just an array of strings\n", "i = 0\n", "with open(labdir+\"vocab.nips.txt\") as txtfile:\n", "    for line in txtfile:\n", "        word = line.rstrip('\\n')\n", "        mydict[word] = i\n", "        words[i] = word\n", "        i += 1" ], "language": "python", "metadata": {}, "outputs": [] }
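, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (optional, and only a sketch): the vocabulary file should define exactly nwords words, and the two structures should be inverses of each other, so a word -> id -> word lookup should give back the original word." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Sanity check: the vocabulary size should match the nwords header,\n", "# and word -> id -> word should round-trip for any word, e.g. the first one.\n", "[len(mydict) == nwords, words[mydict[words[0]]] == words[0]]" ], "language": "python", "metadata": {}, "outputs": [] }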
" ] }, { "cell_type": "code", "collapsed": false, "input": [ "mydict = {} # word dictionary\n", "words = [''] * nwords # invese dictionary - just an array of strings\n", "i = 0\n", "with open(labdir+\"vocab.nips.txt\") as txtfile:\n", " for line in txtfile:\n", " word = line.rstrip('\\n')\n", " mydict[word] = i\n", " words[i] = word\n", " i += 1" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can find the top words using the inverse dictionary:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "topwords = [words[x] for x,y in wordtab[0:10]]\n", "topwords" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> TODO: What do you think is the topic of the NIPS dataset?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can plot the counts words in rank order (decreasing order of frequency)." ] }, { "cell_type": "code", "collapsed": false, "input": [ "scounts = [y for x,y in wordtab]\n", "pylab.plot(scounts)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What form does this curve have?\n", "To make it clearer, lets to a log-log plot." ] }, { "cell_type": "code", "collapsed": false, "input": [ "pylab.loglog(scounts)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> TODO: What is the approximate slope (in log-log space) of this curve over the frequency range 10^1 to 10^3 ?" ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Lab 2 Responses" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The lab 2 responses should be entered here:\n", "https://bcourses.berkeley.edu/courses/1377158/quizzes/2045090" ] } ], "metadata": {} } ] }