{ "metadata": { "name": "" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "CS194-16: Introduction to Data Science\n", "\n", "__Name:__ *Please put your name*\n", "\n", "__Student ID:__ *Please put your student id*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Homework 3\n", "\n", "# Part 1: Predicting Movie Ratings\n", "\n", "One of the most common uses of data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. In this assignment, you'll explore how to recommend movies to a user. We'll start with some basic methods, and then use machine learning to make more sophisticated predictions.\n", "\n", "We'll use Spark for this assignment. In part 1 of the assignment, you'll run Spark on your local machine and on a smaller dataset. The purpose of this part of the assignment is to get everything working before adding the complexities of running on many machines. The interface for running local Spark jobs is exactly the same as the interface for running jobs on a cluster, so you'll be using the same functions we used in lab, and all of the code you write locally can be executed on a cluster. In part 2, which will be released after the midterm, you'll run Spark on a cluster that we have running for you (like in the lab). You'll use the cluster to run your code on a larger dataset, and to predict which movies to recommend to yourself!\n", "\n", "As mentioned during the lab, think carefully before calling `collect()` on any datasets. When you're using a small, local dataset, calling `collect()` and then using Python to analyze the data locally will work fine, but this will not work when you're using a large dataset that doesn't fit in memory on one machine. Solutions that call `collect()` and do local analysis that could have been done with Spark will not receive full credit.\n", "\n", "We have created a [FAQ](#FAQ) at the bottom of this page (which is an expanded version of the FAQ from the lab) to help with common problems you may run into. If you run into a problem, please check the FAQ before posting on Piazza!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 0: Setup\n", "\n", "a) As mentioned above, for this part of the assignment, you'll run Spark locally rather than on a cluster, for easier debugging. Begin by downloading Spark from [this link](http://people.apache.org/~tdas/spark-0.9.1-rc3/spark-0.9.1-bin-cdh4.tgz). Unzip and untar the file so you have a `spark-0.9.1-bin-cdh4` folder; this folder contains all of the code needed to run Spark. We need to do a little bit of setup to tell iPython how to find Spark (we set this up for you on the cluster machines, but you need to do it yourself when running in your own VM). We also need to start your own `SparkContext` (which we also did for you in the lab; the `SparkContext` was saved as `sc` in the lab). The `SparkContext` is like a master for just your application. It requests some resources from the cluster master, and it also breaks down jobs that you submit into stages of tasks. 
For example, when you call `map()` on a resilient distributed dataset (RDD; Spark's name for datasets stored in memory), the `SparkContext` decides how many `map` tasks to run, and launches the `map` tasks on the executors allocated by the cluster master.\n", "\n", "Fill in the path to the `spark` folder you just downloaded in the code below, and then execute it to create a `SparkContext` to use to run jobs." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Configure the necessary Spark environment. pyspark needs SPARK_HOME set up\n", "# so it knows how to start the Spark master and some local workers for you to use.\n", "import os\n", "# Fill this in with the path to the spark-0.9.1-bin-cdh4 folder you just downloaded\n", "# (e.g., /home/saasbook/spark-0.9.1-bin-cdh4)\n", "path_to_spark = # YOUR CODE HERE\n", "os.environ['SPARK_HOME'] = path_to_spark\n", "\n", "# Set the python path so that we know where to find the pyspark files.\n", "import sys\n", "path_to_pyspark = os.path.join(path_to_spark, \"python\")\n", "sys.path.insert(0, path_to_pyspark)\n", "\n", "from pyspark import SparkContext\n", "# You can set the app name to whatever you want; this just affects what\n", "# will show up in the UI.\n", "app_name = \"i<3datascience\"\n", "sc = SparkContext(\"local\", app_name)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Having trouble? Check out the [section in the FAQ](#faq_create_context) that covers issues creating a SparkContext.**\n", "\n", "Even though you're running Spark locally, Spark still starts the application web UI where you can see your application and what tasks it's running. In a browser in the VM, go to [http://localhost:4040](http://localhost:4040/) to see the UI for your application. There's no Master UI running here (the UI we saw at port 8080 during the lab) because Spark doesn't use a master when you run in local mode.\n", "\n", "b) Next, download the datafiles that you'll need for the assignment from [https://github.com/amplab/datascience-sp14/raw/master/hw3/part1files.tar.gz](https://github.com/amplab/datascience-sp14/raw/master/hw3/part1files.tar.gz). You'll do all of your analysis on the `ratings.dat` and `movies.dat` datasets located in the `part1files` folder that you just downloaded. These are smaller versions of the datasets we used in lab 8. As in the lab, each entry in the ratings dataset is formatted as `UserID::MovieID::Rating::Timestamp` and each entry in the movies dataset is formatted as `MovieID::Title::Genres`. Read these two datasets into memory. You can count the number of entries in each dataset to ensure that you've loaded them correctly; the ratings dataset should have 100K entries and the movies dataset should have 1682 entries.\n", "\n", "Note that when you create a new dataset using `sc.textFile`, you can give an absolute path to the dataset on your filesystem, e.g. `/Users/kay/part1files/ratings.dat`." ] }, { "cell_type": "code", "collapsed": false, "input": [ "### YOUR CODE HERE\n", "\n", "ratings_count = # YOUR CODE HERE\n", "movies_count = # YOUR CODE HERE\n", "print \"%s ratings and %s movies in the datasets\" % (ratings_count, movies_count)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 1: Basic Recommendations\n", "\n", "a) One way to recommend movies is to always recommend the movies with the highest average rating.
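If you are not sure how to compute a per-key average in Spark, the sketch below shows the general pattern on a small made-up RDD. It is only an illustration (the toy data and variable names are invented), not the exercise solution: you still need to parse the ratings file and decide how to pick the top movies." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# A generic per-key average on a toy RDD (made-up data, not the ratings dataset).\n", "toy_pairs = sc.parallelize([('a', 4.0), ('a', 2.0), ('b', 5.0)])\n", "\n", "# Pair each value with a count of 1, then sum the values and the counts per key.\n", "sums_and_counts = toy_pairs.map(lambda kv: (kv[0], (kv[1], 1))).reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1]))\n", "\n", "# Divide each key's sum by its count to get the per-key average.\n", "averages = sums_and_counts.map(lambda kv: (kv[0], kv[1][0] / kv[1][1]))\n", "print averages.collect()  # e.g. [('a', 3.0), ('b', 5.0)] (order may vary)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "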
Use Spark to find the name and the average rating of the 5 movies with the highest average rating." ] }, { "cell_type": "code", "collapsed": false, "input": [ "### YOUR CODE HERE\n", "\n", "highest_rated_movies = # YOUR CODE HERE\n", "print \"5 highest rated movies (and associated average ratings): \", highest_rated_movies" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "b) The movies you found may seem a bit suspicious. How many ratings does each of those movies have?" ] }, { "cell_type": "code", "collapsed": false, "input": [ "### YOUR CODE HERE\n", "\n", "highest_rated_movies_rating_counts = # YOUR CODE HERE\n", "print \"Number of ratings for each of the 5 highest rated movies: \", highest_rated_movies_rating_counts" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "c) How can you improve your recommendations? Improve upon your recommendations in part (a) to recommend 5 movies that you expect to be well-liked. You are not expected to use any sophisticated machine learning techniques here; using just the Spark operations we learned in lab is sufficient." ] }, { "cell_type": "code", "collapsed": false, "input": [ "### YOUR CODE HERE\n", "\n", "recommended_movies = # YOUR CODE HERE\n", "print \"5 recommended movies are: \", recommended_movies" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ ">__Describe how you improved on the recommendations in part (a) in at most 4 sentences.__ *Your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 2: Collaborative Filtering\n", "\n", "You've learned about many of the basic transformations and actions that Spark allows us to do on distributed datasets. Spark also exposes some higher-level functionality; in particular, machine learning using a component of Spark called MLlib. In this assignment, you'll use MLlib to make personalized movie recommendations for you using the movie data we've been analyzing.\n", "\n", "We're going to use a technique called collaborative filtering. The basic idea of collaborative filtering is that we start with a matrix whose entries are movie ratings. Each row represents a user and each column represents a particular movie (shown in red in the diagram below).\n", "\n", "We don't know all of the entries in this matrix, which is precisely why we need collaborative filtering. For each user, we have ratings for only a subset of the movies. With collaborative filtering, the idea is to approximate the ratings matrix by factorizing it as the product of two matrices: one that describes properties of each user (shown in green), and one that describes properties of each movie (shown in blue).\n", "\n", "![factorization](http://ampcamp.berkeley.edu/big-data-mini-course/img/matrix_factorization.png)\n", "\n", "We want to select these two matrices such that the error for the user/movie pairs where we know the correct ratings is minimized. The ALS (alternating least squares) algorithm does this by first randomly filling the users matrix with values and then optimizing the values in the movies matrix so that the error is minimized. Then, it holds the movies matrix constant and optimizes the values in the users matrix. This alternation between which matrix to optimize is the reason for the \"alternating\" in the name.\n", "\n", "This optimization is what's being shown on the right in the image above.
Given a fixed set of user factors (i.e., values in the users matrix), we use the known ratings to find the best values for the movie factors using the optimization written at the bottom of the figure (in words: pick the movie factors that minimize the sum of `(known rating - predicted rating)^2` over all of the known ratings). Then we \"alternate\" and pick the best user factors given fixed movie factors.\n", "\n", "For a simple example of what the users and movies matrices might look like, check out the [slides from lecture 9](http://goo.gl/l4Stce).\n", "\n", "a) Before jumping into the machine learning, you need to break up the dataset into a training set (which we'll use to train models), a validation set (which we'll use to choose the best model), and a test set. One way that people often partition data is by timestamp: the least significant digit of the timestamp is essentially random, so it gives a simple way to split the dataset into multiple groups. Use the least significant digit of the rating timestamp to separate 60% of the data into a training set, 20% into a validation set, and the remaining 20% into a test set." ] }, { "cell_type": "code", "collapsed": false, "input": [ "### YOUR CODE HERE\n", "training = # YOUR CODE HERE\n", "validation = # YOUR CODE HERE\n", "test = # YOUR CODE HERE\n", "\n", "print \"Training: %s, validation: %s, test: %s\" % (training.count(), validation.count(), test.count())" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After splitting the dataset, your training set should have about 60K entries and the validation and test sets should have about 20K entries (the exact number of entries in each dataset will vary slightly depending on the method you used to split the data into the 3 sets).\n", "\n", "b) In the next part, you'll generate a few different models, and will need a way to decide which model is best. We'll use the root mean squared error (RMSE) to compute the error of each model. The root mean squared error is the square root of the average value of `(actual rating - predicted rating)^2` for all users and movies for which we have the actual rating." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ">__If your model perfectly predicts the user ratings, what will the root mean squared error be?__ *Your answer here*\n", "\n", ">__If all of the predicted ratings are off by one (they're 1 higher or lower than the actual ratings), what will the RMSE be?__ *Your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "c) Write a function to compute the root mean squared error (RMSE) given a `predicted` and `actual` RDD."
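, "\n", "\n", "Before writing the Spark version, it may help to see the arithmetic from part (b) on plain Python lists. The sketch below is only an illustration (the `differences` values correspond to the four (user, movie) pairs that `test_predicted` and `test_actual`, defined in the `compute_error` cell below, have in common); your `compute_error` must work on RDDs, matching up the (user, movie) pairs that appear in both." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# A sketch of the RMSE formula on ordinary Python lists (not the required Spark implementation).\n", "import math\n", "\n", "# (actual rating - predicted rating) for the four (user, movie) pairs that the\n", "# test RDDs below have in common: (1, 2), (1, 3), (2, 1), and (2, 2).\n", "differences = [0, 1, 2, -1]\n", "squared_errors = [d ** 2 for d in differences]\n", "rmse = math.sqrt(sum(squared_errors) / float(len(squared_errors)))\n", "print rmse  # about 1.2247" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Your Spark `compute_error` should report the same value for the `test_predicted` and `test_actual` RDDs below."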
] }, { "cell_type": "code", "collapsed": false, "input": [ "# Hint: you may want to use the math module to compute the square root!\n", "import math\n", "\n", "def compute_error(predicted, actual):\n", "    \"\"\" Compute the root mean squared error between predicted and actual.\n", "\n", "    Params:\n", "      predicted: An RDD of predicted ratings for each movie and each user where each entry is in the form (user, movie, rating).\n", "      actual: An RDD of actual ratings where each entry is in the form (user, movie, rating).\n", "    \"\"\"\n", "    ### YOUR CODE HERE\n", "\n", "# sc.parallelize turns a Python list into a Spark RDD.\n", "test_predicted = sc.parallelize([\n", "    (1, 1, 5),\n", "    (1, 2, 3),\n", "    (1, 3, 4),\n", "    (2, 1, 3),\n", "    (2, 2, 2),\n", "    (2, 3, 4)])\n", "test_actual = sc.parallelize([\n", "    (1, 2, 3),\n", "    (1, 3, 5),\n", "    (2, 1, 5),\n", "    (2, 2, 1)])\n", "# The error for the test datasets should be 1.2247\n", "print \"Error for test datasets: %s\" % compute_error(test_predicted, test_actual)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "d) In this part, we'll use `ALS.train` to train a bunch of models, and we'll select the best model. The most important parameter to ALS is the rank, which is the number of columns in the Users matrix (green in the diagram above) or the number of rows in the Movies matrix. In general, a lower rank will mean higher error on the training dataset, but a high rank may lead to overfitting. Train models with ranks of 4, 8, 12, and 16 using the `training` dataset, predict the ratings for the `validation` dataset, and use the `compute_error` function you wrote in part (c) to compute the error. Which rank produces the best model, based on the error on the `validation` dataset? Note that the values here will be more meaningful when we run on a cluster with a larger dataset.\n", "\n", "To create the model, use `ALS.train(training_rdd, rank)`, which takes two parameters: an RDD in the format (user, movie, rating) to use to train the model, and an integer rank. To predict rating values, call `model.predictAll` with the `validation` dataset, where `model` is the model generated with `ALS.train`. `predictAll` accepts an RDD with each entry in the format (user, movie) and outputs an RDD with each entry in the format (user, movie, rating)." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from pyspark.mllib.recommendation import ALS\n", "\n", "best_rank = # YOUR CODE HERE\n", "\n", "print \"The best model was trained with rank %s\" % best_rank" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "e) So far, we used the `training` and `validation` datasets to select the best model. Since we used these two datasets to determine which model is best, we can't use them to test how good the model is (otherwise we'd be vulnerable to overfitting). To decide how good our model is, we need to use the `test` dataset. Use the best model you found in part (d) to predict the ratings for the test dataset and compute the RMSE." ] }, { "cell_type": "code", "collapsed": false, "input": [ "test_rmse = # YOUR CODE HERE\n", "\n", "print \"The model had an RMSE on the test set of %s\" % test_rmse" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You now have code to predict how users will rate movies!
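To recap, the end-to-end pattern you just implemented looks roughly like the sketch below (it assumes the variable names from the cells above, and that `training` and `test` hold (user, movie, rating) tuples)." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Rough recap of the workflow above; the variable names come from earlier cells.\n", "from pyspark.mllib.recommendation import ALS\n", "\n", "model = ALS.train(training, best_rank)  # train on (user, movie, rating) triples\n", "to_predict = test.map(lambda x: (x[0], x[1]))  # keep just the (user, movie) pairs\n", "predictions = model.predictAll(to_predict)  # an RDD of (user, movie, predicted rating)\n", "print \"Test RMSE: %s\" % compute_error(predictions, test)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "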
In the next part of the assignment, you'll run this code on a larger dataset using a cluster of machines, like we did in the lab. You'll use the larger dataset to generate a better model, and then use that model to predict what movies to recommend to yourself. Until then, good luck on the midterm!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Frequently Asked Questions\n", "\n", "### I'm not sure where to start. Some parts of this homework seem impossible to do with the functions we learned about in lab!\n", "\n", "With the exception of the ML functions that we introduce in this assignment, you should be able to complete all parts of this homework using only the functions we learned in lab (although you're welcome to use more features of Spark if you like!). You may need to use the functions introduced at the end of the lab (which we didn't get to in class); if you're having trouble, try walking through the last sections of the lab to see if they help.\n", "\n", "### How do I get to the UI?\n", "\n", "The UI for your application is on port 4040 on your local machine (so go to [http://localhost:4040](http://localhost:4040)). There is no master web UI when you run locally because there is no Spark master -- just a single worker process to run your tasks.\n", "\n", "### What are all of the operations I can do on a Spark dataset (RDD)?\n", "\n", "If you click the \"RDD\" link on [this page](http://spark.apache.org/docs/0.9.0/api/pyspark/index.html), it lists all of the operations you can do on a Spark RDD. Spark also has a Scala API (Scala is a programming language similar to Java); the [documentation for the Scala functions](http://spark.apache.org/docs/0.9.0/scala-programming-guide.html) is sometimes more helpful, and the Python functions work in the same way.\n", "\n", "### How do I use matplotlib?\n", "\n", "There are lots of good examples on the [matplotlib website](http://matplotlib.org/index.html). For example, [this page](http://matplotlib.org/examples/pylab_examples/simple_plot.html) shows how to plot a single line.\n", "\n", "### Why am I getting an OutOfMemoryError?\n", "\n", "If you get an error that looks like: `org.apache.spark.SparkException: Job aborted: Exception while deserializing and fetching task: java.lang.OutOfMemoryError: Java heap space`, it probably means that you've tried to collect too much data on the machine where Python is running. This is likely to happen if you do `collect()` on a large dataset. The best way to remedy this problem is to restart your iPython notebook (go to the main server, at port 8888 of the machine you were assigned, click \"Shutdown\" on your notebook, and then open it again) and don't do `collect()` on a large dataset.\n", "\n", "Curious why you're getting a Java error when your program is written in Python? Spark is mostly written in Scala and Java (Scala is a language that runs on the Java Virtual Machine). We're using `pyspark` here, which uses a translation layer to translate between Python and Java. Your Python `SparkContext` object is backed by a Java `SparkContext` object; all operations you run on Spark datasets are passed through this Java object.
So, if you try to collect a result that's too large, the Java Virtual Machine that's running the Java `SparkContext` runs out of memory.\n", "\n", "### Python / Spark is giving me a crazy weird error!\n", "\n", "Spark is mostly written in Scala and Java, and the Python version of the code (\"pyspark\") hooks into the Java implementation in a way that can make error messages very difficult to understand. If you get a hard-to-understand error when you run a Spark operation, we recommend first narrowing down the error so that you know exactly which operation caused the error. For example, if `rdd.groupByKey().map(lambda x: x[1])` fails with an error, separate the `groupByKey()` and `map()` calls onto separate lines so you know which one is causing the error. Next, double check the function signature to make sure you're passing the right arguments. Pyspark can fail with a weird error if an RDD operation is given the wrong number or type of arguments. If you're still stumped, try using `take(10)` to print out the first 10 entries in the dataset you're calling the RDD operation on. Make sure the function you're calling and the arguments you're passing in make sense given the format of the input dataset.\n", "\n", "### I ran some code and nothing happened!\n", "\n", "Are you sure? Some of the Spark operations will take a minute or so to run; look at the top of the iPython notebook to see if it says \"Kernel busy\". If so, it's busy running your code. Go check out the Spark UI to see more about what's going on.\n", "\n", "### My code is taking forever to run. Did I do something wrong?\n", "\n", "Probably. In our solution code, none of the Spark jobs take more than a minute to run. If you ran something and it's taking forever, double check that you passed in the datasets you intended to. If all else fails, create a small sample dataset and try your code on that to make sure things are working.\n", "\n", "If you end up with a job that's taking forever, you can kill the job by shutting down this notebook (which will destroy your `SparkContext` and the associated worker process that's doing work) and then re-launching it. Be sure to hit save first!\n", "\n", "\n", "### I'm having trouble creating `SparkContext`...\n", "\n", "#### I'm getting an error that says \"`Exception AttributeError: \"'SparkContext' object has no attribute '_jsc'\"`\".\n", "\n", "When you try to create a `SparkContext`, you may get an error that ends with a red box with text that looks like: `Exception AttributeError: \"'SparkContext' object has no attribute '_jsc'\" in > ignored`. This is a benign error that happens when the SparkContext tries to shut down, but it signals that there was an error when creating the SparkContext. Look at the error messages above this one to see what the real problem is.\n", "\n", "#### I'm getting an error that says \"`ImportError: No module named pyspark`\"\n", "\n", "This means that you didn't give the correct path to Spark when setting the `path_to_spark` variable. Ensure that the path listed here matches the path to the Spark folder you downloaded. When you change to the correct path, you may need to shut down and restart your notebook for all of the path setup to work correctly again.\n", "\n", "#### I'm getting an error that says \"`ValueError: Cannot run multiple SparkContexts at once`\"\n", "\n", "You've created multiple `SparkContext`s, likely by executing the code to create a new `SparkContext` multiple times.
Either (a) use the `SparkContext` that you created earlier or (b) shut down your notebook, restart it, and then re-run the relevant code only a single time.\n", "\n", "#### I'm getting an error that says \"`ValueError: invalid literal for int() with base 10: ''`\"\n", "\n", "This error typically occurs because something went wrong when trying to launch the Java backend for your `SparkContext`. One way this can occur is if `JAVA_HOME` isn't set up properly (we set this up earlier in this class, but this may be an issue if you're using a new VM). If you look in the terminal where you launched `ipython notebook`, there is typically more log output that will tell you what went wrong.\n", "\n", "\n", "### Can you explain more about how `flatMap()` works?\n", "\n", "Let's look at an example: suppose you have an RDD where each entry is a line of text in a book, and you want to make a new RDD where each entry is a single word. You could use `flatMap()` to do this as follows:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "lines_in_book = [\n", "    \"I am Sam\",\n", "    \"I am Sam\",\n", "    \"Sam I am\",\n", "    \"Do you like\",\n", "    \"green eggs and ham?\"]\n", "# sc.parallelize turns a Python list into an RDD.\n", "lines_in_book_rdd = sc.parallelize(lines_in_book)\n", "\n", "# Notice that here, the function passed to flatMap returns a list.\n", "words_rdd = lines_in_book_rdd.flatMap(lambda x: x.split(\" \"))\n", "\n", "print words_rdd.collect()" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The resulting RDD contains individual words: the function we passed into `flatMap` returned a list of words for each entry in the original RDD, and `flatMap` combines all of these lists of words into a single list. Let's do the same thing with `map` to see what's different." ] }, { "cell_type": "code", "collapsed": false, "input": [ "list_of_words_rdd = lines_in_book_rdd.map(lambda x: x.split(\" \"))\n", "print list_of_words_rdd.collect()" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that now each entry in the resulting RDD is a list of words, so `collect()` returns a list of lists.\n", "\n", "Another way to think about this is that `map()` always returns a new RDD with the same number of entries as the original RDD: each entry in the original RDD is mapped to one entry in the new RDD. With `flatMap()`, each entry in the original RDD maps to a list of 0 or more entries, so the new RDD isn't necessarily the same size as the old RDD (it might be larger or smaller)." ] } ], "metadata": {} } ] }