{ "metadata": { "name": "", "signature": "sha256:8d4371387e572dccb374d007f65ea7846d2fafc94b650e10f4e22e2a105c858f" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Homework 3: Prediction and Classification\n", "\n", "Due: Thursday, October 16, 2014 11:59 PM\n", "\n", " Download this assignment\n", "\n", "#### Submission Instructions\n", "To submit your homework, create a folder named lastname_firstinitial_hw# and place your IPython notebooks, data files, and any other files in this folder. Your IPython Notebooks should be completely executed with the results visible in the notebook. We should not have to run any code. Compress the folder (please use .zip compression) and submit to the CS109 dropbox in the appropriate folder. If we cannot access your work because these directions are not followed correctly, we will not grade your work.\n", "\n", "---\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction\n", "\n", "In this assignment you will be using regression and classification to explore different data sets. \n", "\n", "**First**: You will use data from before 2002 in the [Sean Lahman's Baseball Database](http://seanlahman.com/baseball-archive/statistics) to create a metric for picking baseball players using linear regression. This is same database we used in Homework 1. This database contains the \"complete batting and pitching statistics from 1871 to 2013, plus fielding statistics, standings, team stats, managerial records, post-season data, and more\". [Documentation provided here](http://seanlahman.com/files/database/readme2012.txt).\n", "\n", "![\"Sabermetrics Science\"](http://saberseminar.com/wp-content/uploads/2012/01/saber-web.jpg)\n", "http://saberseminar.com/wp-content/uploads/2012/01/saber-web.jpg\n", "\n", "**Second**: You will use the famous [iris](http://en.wikipedia.org/wiki/Iris_flower_data_set) data set to perform a $k$-neareast neighbor classification using cross validation. While it was introduced in 1936, it is still [one of the most popular](http://archive.ics.uci.edu/ml/) example data sets in the machine learning community. Wikipedia describes the data set as follows: \"The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimetres.\" Here is an illustration what the four features measure:\n", "\n", "![\"iris data features\"](http://sebastianraschka.com/Images/2014_python_lda/iris_petal_sepal.png)\n", "http://sebastianraschka.com/Images/2014_python_lda/iris_petal_sepal.png\n", "\n", "**Third**: You will investigate the influence of higher dimensional spaces on the classification using another standard data set in machine learning called the The [digits data set](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html). This data set is similar to the MNIST data set discussed in the lecture. The main difference is, that each digit is represented by an 8x8 pixel image patch, which is considerably smaller than the 28x28 pixels from MNIST. In addition, the gray values are restricted to 16 different values (4 bit), instead of 256 (8 bit) for MNIST. \n", "\n", "**Finally**: In preparation for Homework 4, we want you to read through the following articles related to predicting the 2014 Senate Midterm Elections. 
\n", "\n", "* [Nate Silver's Methodology at while at NYT](http://fivethirtyeight.blogs.nytimes.com/methodology/)\n", "* [How The FiveThirtyEight Senate Forecast Model Works](http://fivethirtyeight.com/features/how-the-fivethirtyeight-senate-forecast-model-works/)\n", "* [Pollster Ratings v4.0: Methodology](http://fivethirtyeight.com/features/pollster-ratings-v40-methodology/)\n", "* [Pollster Ratings v4.0: Results](http://fivethirtyeight.com/features/pollster-ratings-v40-results/)\n", "* [Nate Silver versus Sam Wang](http://www.washingtonpost.com/blogs/plum-line/wp/2014/09/17/nate-silver-versus-sam-wang/)\n", "* [More Nate Silver versus Sam Wang](http://www.dailykos.com/story/2014/09/09/1328288/-Get-Ready-To-Rumbllllle-Battle-Of-The-Nerds-Nate-Silver-VS-Sam-Wang)\n", "* [Nate Silver explains critisims of Sam Wang](http://politicalwire.com/archives/2014/10/02/nate_silver_rebuts_sam_wang.html)\n", "* [Background on the feud between Nate Silver and Sam Wang](http://talkingpointsmemo.com/dc/nate-silver-sam-wang-feud)\n", "* [Are there swing voters?]( http://www.stat.columbia.edu/~gelman/research/unpublished/swing_voters.pdf)\n", "\n", "\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Python modules" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# special IPython command to prepare the notebook for matplotlib\n", "%matplotlib inline \n", "\n", "import requests \n", "import StringIO\n", "import zipfile\n", "import numpy as np\n", "import pandas as pd # pandas\n", "import matplotlib.pyplot as plt # module for plotting \n", "\n", "# If this module is not already installed, you may need to install it. \n", "# You can do this by typing 'pip install seaborn' in the command line\n", "import seaborn as sns \n", "\n", "import sklearn\n", "import sklearn.datasets\n", "import sklearn.cross_validation\n", "import sklearn.decomposition\n", "import sklearn.grid_search\n", "import sklearn.neighbors\n", "import sklearn.metrics" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 1 }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem 1: Sabermetrics\n", "\n", "Using data preceding the 2002 season pick 10 offensive players keeping the payroll under $20 million (assign each player the median salary). Predict how many games this team would win in a 162 game season. \n", "\n", "In this problem we will be returning to the [Sean Lahman's Baseball Database](http://seanlahman.com/baseball-archive/statistics) that we used in Homework 1. From this database, we will be extract five data sets containing information such as yearly stats and standing, batting statistics, fielding statistics, player names, player salaries and biographical information. You will explore the data in this database from before 2002 and create a metric for picking players. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(a) \n", "\n", "Load in [these CSV files](http://seanlahman.com/files/database/lahman-csv_2014-02-14.zip) from the [Sean Lahman's Baseball Database](http://seanlahman.com/baseball-archive/statistics). For this assignment, we will use the 'Teams.csv', 'Batting.csv', 'Salaries.csv', 'Fielding.csv', 'Master.csv' tables. Read these tables into separate pandas DataFrames with the following names. 
\n", "\n", "CSV file name | Name of pandas DataFrame\n", ":---: | :---: \n", "Teams.csv | teams\n", "Batting.csv | players\n", "Salaries.csv | salaries\n", "Fielding.csv | fielding\n", "Master.csv | master" ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 2 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(b)\n", "\n", "Calculate the median salary for each player and create a pandas DataFrame called `medianSalaries` with four columns: (1) the player ID, (2) the first name of the player, (3) the last name of the player and (4) the median salary of the player. Show the head of the `medianSalaries` DataFrame. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 3 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(c)\n", "\n", "Now, consider only team/season combinations in which the teams played 162 Games. Exclude all data from before 1947. Compute the per plate appearance rates for singles, doubles, triples, HR, and BB. Create a new pandas DataFrame called `stats` that has the teamID, yearID, wins and these rates.\n", "\n", "**Hint**: Singles are hits that are not doubles, triples, nor HR. Plate appearances are base on balls plus at bats." ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 4 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(d)\n", "\n", "Is there a noticeable time trend in the rates computed computed in Problem 1(c)? " ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 5 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(e) \n", "\n", "Using the `stats` DataFrame from Problem 1(c), adjust the singles per PA rates so that the average across teams for each year is 0. Do the same for the doubles, triples, HR, and BB rates. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 6 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(f)\n", "\n", "Build a simple linear regression model to predict the number of wins from the average adjusted singles, double, triples, HR, and BB rates. To decide which of these terms to include fit the model to data from 2002 and compute the average squared residuals from predictions to years past 2002. Use the fitted model to define a new sabermetric summary: offensive predicted wins (OPW). Hint: the new summary should be a linear combination of one to five of the five rates.\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 7 }, { "cell_type": "markdown", "metadata": {}, "source": [ "** Your answer here: **" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(g)\n", "\n", "Now we will create a similar database for individual players. Consider only player/year combinations in which the player had at least 500 plate appearances. Consider only the years we considered for the calculations above (after 1947 and seasons with 162 games). 
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 4 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(d)\n", "\n", "Is there a noticeable time trend in the rates computed in Problem 1(c)? " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 5 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(e) \n", "\n", "Using the `stats` DataFrame from Problem 1(c), adjust the singles per PA rates so that the average across teams for each year is 0. Do the same for the doubles, triples, HR, and BB rates. " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 6 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(f)\n", "\n", "Build a simple linear regression model to predict the number of wins from the average adjusted singles, doubles, triples, HR, and BB rates. To decide which of these terms to include, fit the model to data from before 2002 and compute the average squared residuals of the predictions for years past 2002. Use the fitted model to define a new sabermetric summary: offensive predicted wins (OPW). Hint: the new summary should be a linear combination of one to five of the five rates.\n" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 7 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "** Your answer here: **" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(g)\n", "\n", "Now we will create a similar database for individual players. Consider only player/year combinations in which the player had at least 500 plate appearances. Consider only the years we considered for the calculations above (1947 onward, seasons with 162 games). For each player/year, compute singles, doubles, triples, HR, and BB per plate appearance rates. Create a new pandas DataFrame called `playerstats` that has the playerID, yearID and the rates of these stats. Remove the yearly average from each of these rates, as done in Problem 1(e). " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 8 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Show the head of the `playerstats` DataFrame. " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 9 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(h)\n", "\n", "Using the `playerstats` DataFrame created in Problem 1(g), create a new DataFrame called `playerLS` containing the player's lifetime stats. This DataFrame should contain the playerID, the year the player's career started, the year the player's career ended and the player's lifetime average for each of the quantities (singles, doubles, triples, HR, BB). For simplicity we will simply compute the average of the yearly rates (a more correct way would be to go back to the totals). " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 10 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Show the head of the `playerLS` DataFrame. " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 11 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(i)\n", "\n", "Compute the OPW for each player based on the average rates in the `playerLS` DataFrame. You can interpret this summary statistic as the predicted wins for a team with 9 batters exactly like the player in question. Add this column to the `playerLS` DataFrame. Call this column OPW." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 12 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(j)\n", "\n", "Add four columns to the `playerLS` DataFrame that contain the player's position (C, 1B, 2B, 3B, SS, LF, CF, RF, or OF), first name, last name and median salary. " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 13 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Show the head of the `playerLS` DataFrame. " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 14 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(k)\n", "\n", "Subset the `playerLS` DataFrame to players who were active in 2002 and 2003 and played at least three years. Plot and describe the relationship between the median salary (in millions) and the predicted number of wins." ] },
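{ "cell_type": "markdown", "metadata": {}, "source": [ "One possible sketch for this plot is below. The column names `startYear`, `endYear`, `medianSalary`, and `OPW` are illustrative assumptions; substitute whatever names you used when building `playerLS`." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "# A sketch under the assumed column names: players active in both 2002 and 2003\n", "# with careers of at least three years.\n", "active = playerLS[(playerLS['startYear'] <= 2002) & (playerLS['endYear'] >= 2003) &\n", "                  (playerLS['endYear'] - playerLS['startYear'] >= 3)]\n", "\n", "plt.scatter(active['medianSalary'] / 1e6, active['OPW'], alpha=0.5)\n", "plt.xlabel('Median salary (millions of dollars)')\n", "plt.ylabel('Predicted wins (OPW)')\n", "plt.title('Salary versus predicted wins')" ], "language": "python", "metadata": {}, "outputs": [] },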
" ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 15 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(l)\n", "Pick one players from one of each of these 10 position C, 1B, 2B, 3B, SS, LF, CF, RF, DH, or OF keeping the total median salary of all 10 players below 20 million. Report their averaged predicted wins and total salary." ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 16 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1(m)\n", "What do these players outperform in? Singles, doubles, triples HR or BB?" ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 17 }, { "cell_type": "markdown", "metadata": {}, "source": [ "** Your answer here: **" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Discussion for Problem 1\n", "\n", "*Write a brief discussion of your conclusions to the questions and tasks above in 100 words or less.*\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem 2: $k$-Nearest Neighbors and Cross Validation \n", "\n", "What is the optimal $k$ for predicting species using $k$-nearest neighbor classification \n", "on the four features provided by the iris dataset.\n", "\n", "In this problem you will get to know the famous iris data set, and use cross validation to select the optimal $k$ for a $k$-nearest neighbor classification. This problem set makes heavy use of the [sklearn](http://scikit-learn.org/stable/) library. In addition to Pandas, it is one of the most useful libraries for data scientists! After completing this homework assignment you will know all the basics to get started with your own machine learning projects in sklearn. \n", "\n", "Future lectures will give further background information on different classifiers and their specific strengths and weaknesses, but when you have the basics for sklearn down, changing the classifier will boil down to exchanging one to two lines of code.\n", "\n", "The data set is so popular, that sklearn provides an extra function to load it:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#load the iris data set\n", "iris = sklearn.datasets.load_iris()\n", "\n", "X = iris.data \n", "Y = iris.target\n", "\n", "print X.shape, Y.shape" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "(150, 4) (150,)\n" ] } ], "prompt_number": 18 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 2(a) \n", "Split the data into a train and a test set. Use a random selection of 33% of the samples as test data. Sklearn provides the [`train_test_split`](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html) function for this purpose. Print the dimensions of all the train and test data sets you have created. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 19 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 2(b)\n", "\n", "Examine the data further by looking at the projections to the first two principal components of the data. 
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 20 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 2(c) \n", "\n", "In the lecture we discussed how to use cross validation to estimate the optimal value for $k$ (the number of nearest neighbors to base the classification on). Use ***ten-fold cross validation*** to estimate the optimal value for $k$ for the iris data set. \n", "\n", "**Note**: For your convenience, sklearn includes not only the [KNN classifier](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html), but also a [grid search function](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html#sklearn.grid_search.GridSearchCV). The function is called grid search because, if you have to optimize more than one parameter, it is common practice to define a range of possible values for each parameter. An exhaustive search then runs over the complete grid defined by all the possible parameter combinations. This can get very computationally heavy, but luckily our KNN classifier only requires tuning of a single parameter for this problem set. " ] },
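{ "cell_type": "markdown", "metadata": {}, "source": [ "Here is a minimal sketch of such a grid search, assuming the training split from Problem 2(a) is stored in variables named `x_train` and `y_train` (illustrative names; use your own)." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "# A sketch: ten-fold cross validation over a range of k on the training set.\n", "# x_train / y_train are assumed names from Problem 2(a).\n", "parameters = {'n_neighbors': range(1, 51)}\n", "knn = sklearn.neighbors.KNeighborsClassifier()\n", "search = sklearn.grid_search.GridSearchCV(knn, parameters, cv=10)\n", "search.fit(x_train, y_train)\n", "\n", "print search.best_params_, search.best_score_" ], "language": "python", "metadata": {}, "outputs": [] },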
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 21 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 2(d)\n", "\n", "Visualize the result by plotting the cross-validation scores versus the values of $k$. " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 22 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Verify that the grid search has indeed chosen the right parameter value for $k$." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 23 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 2(e)\n", "\n", "Test the performance of our tuned KNN classifier on the test set." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 24 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Discussion for Problem 2\n", "\n", "*Write a brief discussion of your conclusions to the questions and tasks above in 100 words or less.*\n", "\n", "---" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Problem 3: The Curse and Blessing of Higher Dimensions\n", "\n", "In this problem we will investigate the influence of higher dimensional spaces on classification. The data set is again one of the standard data sets from sklearn. The [digits data set](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) is similar to the MNIST data set discussed in the lecture. The main difference is that each digit is represented by an 8x8 pixel image patch, which is considerably smaller than the 28x28 pixels from MNIST. In addition, the gray values are restricted to 16 different values (4 bit), instead of 256 (8 bit) for MNIST. \n", "\n", "First we again load our data set." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "digits = sklearn.datasets.load_digits()\n", "\n", "X = digits.data \n", "Y = digits.target\n", "\n", "print X.shape, Y.shape" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "(1797, 64) (1797,)\n" ] } ], "prompt_number": 25 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 3(a) \n", "\n", "Start with the same steps as in Problem 2. Split the data into train and test sets. Use 33% of the samples as test data. Print the dimensions of all the train and test data sets you created. " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 26 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 3(b) \n", "\n", "Similar to Problem 2(b), create a scatter plot of the projections onto the first two PCs. Use the colors on the scatter plot to represent the different classes in the target data. How well can we separate the classes?\n", "\n", "**Hint**: Use a `Colormap` in matplotlib to represent the different classes in the target data. " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 27 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Create individual scatter plots using only two classes at a time to explore which classes are most difficult to distinguish in terms of class separability. You do not need to create scatter plots for all pairwise comparisons, but at least show one. " ] },
{ "cell_type": "code", "collapsed": false, "input": [ "### Your code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 28 },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Give a brief interpretation of the scatter plot. Which classes look hard to distinguish? Do both feature dimensions contribute to the class separability? " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "** Your answer here: **" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 3(c) \n", "\n", "Use **ten-fold cross validation** to estimate the optimal value for $k$ for the digits data set. *However*, this time we are also interested in the influence of the number of dimensions we project the data down to. \n", "\n", "Extend the cross validation used for the iris data set to optimize $k$ for different dimensional projections of the data. Create a boxplot showing test scores for the optimal $k$ for each $d$-dimensional subspace, with $d$ ranging from one to ten. The plot should have the scores on the y-axis and the different dimensions $d$ on the x-axis. You can use your favorite plot function for the boxplots. [Seaborn](http://web.stanford.edu/~mwaskom/software/seaborn/index.html) is worth having a look at though. It is a great library for statistical visualization and of course also comes with a [`boxplot`](http://web.stanford.edu/~mwaskom/software/seaborn/generated/seaborn.boxplot.html) function that has simple means for changing the labels on the x-axis." ] },
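{ "cell_type": "markdown", "metadata": {}, "source": [ "A sketch of one way to organize the outer loop is below: project the training data down to $d$ dimensions, grid-search $k$ within each projection, and collect the per-fold scores for the boxplot. The variable names are illustrative and follow the assumptions from Problem 3(a)." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "# A sketch: for each d = 1..10, project, tune k, and collect fold scores.\n", "# x_train / y_train are assumed names from Problem 3(a).\n", "all_scores = []\n", "best_ks = []\n", "for d in range(1, 11):\n", "    xd = sklearn.decomposition.TruncatedSVD(n_components=d).fit_transform(x_train)\n", "    search = sklearn.grid_search.GridSearchCV(\n", "        sklearn.neighbors.KNeighborsClassifier(),\n", "        {'n_neighbors': range(1, 51)}, cv=10)\n", "    search.fit(xd, y_train)\n", "    best_k = search.best_params_['n_neighbors']\n", "    best_ks.append(best_k)\n", "    # re-run ten-fold CV at the best k to get one score per fold\n", "    knn = sklearn.neighbors.KNeighborsClassifier(n_neighbors=best_k)\n", "    all_scores.append(sklearn.cross_validation.cross_val_score(knn, xd, y_train, cv=10))" ], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "With `all_scores` as a list of ten arrays, a call such as `plt.boxplot(all_scores)` (or the seaborn `boxplot` function) draws one box per dimension; `best_ks` keeps the chosen $k$ for each $d$, which is also useful for Problem 3(d). This is only one way to organize the loop."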
] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your cross validation and evaluation code here ###" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 29 }, { "cell_type": "code", "collapsed": false, "input": [ "### Your boxplot code here ### " ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 30 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Write a short interpretation of the generated plot, answering the following questions:\n", "\n", "* What trend do you see in the plot for increasing dimensions?\n", "\n", "* Why do you think this is happening?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "** Your answer here: **" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 3(d) \n", "\n", "**For AC209 Students**: Change the boxplot we generated above to also show the optimal value for $k$ chosen by the cross validation grid search. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "### Your code here ### " ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 31 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Write a short interpretation answering the following questions:\n", "\n", "* Which trend do you observe for the optimal value of $k$?\n", "\n", "* Why do you think this is happening?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "** Your answer here: **" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Discussion for Problem 3\n", "\n", "*Write a brief discussion of your conclusions to the questions and tasks above in 100 words or less.*\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Submission Instructions\n", "\n", "To submit your homework, create a folder named **lastname_firstinitial_hw#** and place your IPython notebooks, data files, and any other files in this folder. Your IPython Notebooks should be completely executed with the results visible in the notebook. We should not have to run any code. Compress the folder (please use .zip compression) and submit to the CS109 dropbox in the appropriate folder. *If we cannot access your work because these directions are not followed correctly, we will not grade your work.*\n" ] }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 31 } ], "metadata": {} } ] }