{ "metadata": { "name": "", "signature": "sha256:529afb8a95a960818c26b662d1724f5a60cb8236370d942a1724bca1fd5869fb" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "Data Science with Hadoop - Predicting airline delays - part 2: Spark and ML-Lib" ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this 2nd part of the supplement to the second blog on data science, we continue to demonstrate how to build a predictive model with Hadoop, this time we'll use [Apache Spark](https://spark.apache.org/) and [ML-Lib](http://spark.apache.org/docs/1.1.0/mllib-guide.html). \n", "\n", "In the context of our demo, we will show how to use Apache Spark via its Scala API to generate our feature matrix and also use ML-Lib (Spark's machine learning library) to build and evaluate our classification models.\n", "\n", "Recall from part 1 that we are constructing a predictive model for flight delays. Our source dataset resides [here](http://stat-computing.org/dataexpo/2009/the-data.html), and includes details about flights in the US from the years 1987-2008. We have also enriched the data with [weather information](http://www.ncdc.noaa.gov/cdo-web/datasets/), where we find daily temperatures (min/max), wind speed, snow conditions and precipitation. \n", "\n", "We will build a supervised learning model to predict flight delays for flights leaving O'Hare International airport (ORD). We will use the year 2007 data to build the model, and test its validity using data from 2008." ] }, { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "Pre-processing with Hadoop and Spark" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Apache Spark](https://spark.apache.org/)'s basic data abstraction is that of an RDD (resilient distributed dataset), which is a fault-tolerant collection of elements that can be operated on in parallel across your Hadoop cluster. \n", "\n", "Spark's API (available in Scala, Python or Java) supports a variety of transformations such as map() and flatMap(), filter(), join(), and others to create and manipulate RDDs. For a full description of the API please check the [Spark API programming guide]( http://spark.apache.org/docs/1.1.0/programming-guide.html). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall from [part 1](http://hortonworks.com/blog/data-science-apacheh-hadoop-predicting-airline-delays/) that in our first iteration we generated the following features for each flight:\n", "* **month**: winter months should have more delays than summer months\n", "* **day of month**: this is likely not a very predictive variable, but let's keep it in anyway\n", "* **day of week**: weekend vs. 
weekday\n", "* **hour of the day**: later hours tend to have more delays\n", "* **Carrier**: we might expect some carriers to be more prone to delays than others\n", "* **Destination airport**: we expect some airports to be more prone to delays than others; and\n", "* **Distance**: interesting to see if this variable is a good predictor of delay\n", "\n", "We will use Spark RDDs to perform the same pre-processing, transforming the raw flight delay dataset into the two feature matrices: data_2007 (our training set) and data_2008 (our test set).\n", "\n", "The case class *DelayRec* that encapsulates a flight delay record represents the feature vector, and its methods do most of the heavy lifting: \n", "1. to_date() is a helper method to convert year/month/day to a string\n", "1. gen_features(row) takes a row of inputs and generates a key/value tuple where the key is the date string (output of *to_date*) and the value is the feature value. We don't use the key in this iteraion, but we will use it in the second iteration to join with the weather data.\n", "1. the get_hour() method extracts the 2-digit hour portion of the departure time\n", "1. The days_from_nearest_holiday() method computes the minimum distance (in days) of the provided year/month/date from any holiday in the list *holidays*.\n", "\n", "With *DelayRec* in place, our processing takes on the following steps (in the function *prepFlightDelays*):\n", "1. We read the raw input file with Spark's *SparkContext.textFile* method, resulting in an RDD\n", "1. Each row is parsed with *CSVReader* into fields, and populated into a *DelayRec* object\n", "1. We then perform a sequence of RDD transformations on the input RDD to make sure we only have rows that correspond to flights that did not get cancelled and originated from ORD.\n", "\n", "Finally, we use the *gen_features* method to generate the final feature vector per row, as a set of doubles.\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import org.apache.spark.rdd._\n", "import scala.collection.JavaConverters._\n", "import au.com.bytecode.opencsv.CSVReader\n", "\n", "import java.io._\n", "import org.joda.time._\n", "import org.joda.time.format._\n", "\n", "case class DelayRec(year: String,\n", " month: String,\n", " dayOfMonth: String,\n", " dayOfWeek: String,\n", " crsDepTime: String,\n", " depDelay: String,\n", " origin: String,\n", " distance: String,\n", " cancelled: String) {\n", "\n", " val holidays = List(\"01/01/2007\", \"01/15/2007\", \"02/19/2007\", \"05/28/2007\", \"06/07/2007\", \"07/04/2007\",\n", " \"09/03/2007\", \"10/08/2007\" ,\"11/11/2007\", \"11/22/2007\", \"12/25/2007\",\n", " \"01/01/2008\", \"01/21/2008\", \"02/18/2008\", \"05/22/2008\", \"05/26/2008\", \"07/04/2008\",\n", " \"09/01/2008\", \"10/13/2008\" ,\"11/11/2008\", \"11/27/2008\", \"12/25/2008\")\n", "\n", " def gen_features: (String, Array[Double]) = {\n", " val values = Array(\n", " depDelay.toDouble,\n", " month.toDouble,\n", " dayOfMonth.toDouble,\n", " dayOfWeek.toDouble,\n", " get_hour(crsDepTime).toDouble,\n", " distance.toDouble,\n", " days_from_nearest_holiday(year.toInt, month.toInt, dayOfMonth.toInt)\n", " )\n", " new Tuple2(to_date(year.toInt, month.toInt, dayOfMonth.toInt), values)\n", " }\n", "\n", " def get_hour(depTime: String) : String = \"%04d\".format(depTime.toInt).take(2)\n", " def to_date(year: Int, month: Int, day: Int) = \"%04d%02d%02d\".format(year, month, day)\n", "\n", " def days_from_nearest_holiday(year:Int, month:Int, day:Int): Int = {\n", " val sampleDate = 
new DateTime(year, month, day, 0, 0)\n", "\n", " holidays.foldLeft(3000) { (r, c) =>\n", " val holiday = DateTimeFormat.forPattern(\"MM/dd/yyyy\").parseDateTime(c)\n", " val distance = Math.abs(Days.daysBetween(holiday, sampleDate).getDays)\n", " math.min(r, distance)\n", " }\n", " }\n", " }\n", "\n", "// function to do a preprocessing step for a given file\n", "def prepFlightDelays(infile: String): RDD[DelayRec] = {\n", " val data = sc.textFile(infile)\n", "\n", " data.map { line =>\n", " val reader = new CSVReader(new StringReader(line))\n", " reader.readAll().asScala.toList.map(rec => DelayRec(rec(0),rec(1),rec(2),rec(3),rec(5),rec(15),rec(16),rec(18),rec(21)))\n", " }.map(list => list(0))\n", " .filter(rec => rec.year != \"Year\")\n", " .filter(rec => rec.cancelled == \"0\")\n", " .filter(rec => rec.origin == \"ORD\")\n", "}\n", "\n", "val data_2007 = prepFlightDelays(\"airline/delay/2007.csv\").map(rec => rec.gen_features._2)\n", "val data_2008 = prepFlightDelays(\"airline/delay/2008.csv\").map(rec => rec.gen_features._2)\n", "data_2007.take(5).map(x => x mkString \",\").foreach(println)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stderr", "text": [] }, { "output_type": "stream", "stream": "stdout", "text": [ "-8.0,1.0,25.0,4.0,11.0,719.0,10.0\n", "41.0,1.0,28.0,7.0,15.0,925.0,13.0\n", "45.0,1.0,29.0,1.0,20.0,316.0,14.0\n", "-9.0,1.0,17.0,3.0,19.0,719.0,2.0\n", "180.0,1.0,12.0,5.0,17.0,316.0,3.0\n" ] } ], "prompt_number": 1 }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Modeling with Spark and ML-Lib" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the data_2007 dataset (which we'll use for training) and the data_2008 dataset (which we'll use for validation) as RDDs, we now build a predictive model using Spark's [ML-Lib](http://spark.apache.org/docs/1.1.0/mllib-guide.html) machine learning library.\n", "\n", "ML-Lib is Spark\u2019s scalable machine learning library, which includes various learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, and others. \n", "\n", "If you compare ML-Lib to Scikit-learn, at the moment ML-Lib lacks a few important algorithms like Random Forest or Gradient Boosted Trees. Having said that, we see a strong pace of innovation from the ML-Lib community and expect more algorithms and other features to be added soon (for example, Random Forest is being actively [worked on](https://github.com/apache/spark/pull/2435), and will likely be available in the next release).\n", "\n", "To use ML-Lib's machine learning algorithms, first we parse our feature matrices into RDDs of *LabeledPoint* objects (for both the training and test datasets). *LabeledPoint* is ML-Lib's abstraction for a feature vector accompanied by a label. We consider flight delays of 15 minutes or more as \"delays\" and mark it with a label of 1.0, and under 15 minutes as \"non-delay\" and mark it with a label of 0.0. \n", "\n", "We also use ML-Lib's *StandardScaler* class to normalize our feature values for both training and validation sets. This is important because of ML-Lib's use of Stochastic Gradient Descent, which is known to perform best if feature vectors are normalized." 
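 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before training any models, it is worth checking how common delays actually are in the training data. The next cell is a small illustrative sketch (not required for the rest of the notebook): it uses plain RDD operations (*filter* and *count*) to estimate the fraction of 2007 flights with a departure delay of 15 minutes or more, i.e. the fraction of positive labels." ] }, { "cell_type": "code", "collapsed": false, "input": [ "// Illustrative sanity check (not used downstream): how common are delays of 15+ minutes?\n", "// Element 0 of each feature array is the departure delay (depDelay) in minutes.\n", "val total_2007 = data_2007.count\n", "val delayed_2007 = data_2007.filter(vals => vals(0) >= 15).count\n", "println(\"Fraction of delayed flights in 2007: %.3f\".format(delayed_2007.toDouble / total_2007))" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now parse the feature matrices into RDDs of *LabeledPoint* objects and apply the scaler:"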
] }, { "cell_type": "code", "collapsed": false, "input": [ "import org.apache.spark.mllib.regression.LabeledPoint\n", "import org.apache.spark.mllib.linalg.Vectors\n", "import org.apache.spark.mllib.feature.StandardScaler\n", "\n", "def parseData(vals: Array[Double]): LabeledPoint = {\n", " LabeledPoint(if (vals(0)>=15) 1.0 else 0.0, Vectors.dense(vals.drop(1)))\n", "}\n", "\n", "// Prepare training set\n", "val parsedTrainData = data_2007.map(parseData)\n", "parsedTrainData.cache\n", "val scaler = new StandardScaler(withMean = true, withStd = true).fit(parsedTrainData.map(x => x.features))\n", "val scaledTrainData = parsedTrainData.map(x => LabeledPoint(x.label, scaler.transform(Vectors.dense(x.features.toArray))))\n", "scaledTrainData.cache\n", "\n", "// Prepare test/validation set\n", "val parsedTestData = data_2008.map(parseData)\n", "parsedTestData.cache\n", "val scaledTestData = parsedTestData.map(x => LabeledPoint(x.label, scaler.transform(Vectors.dense(x.features.toArray))))\n", "scaledTestData.cache\n", "\n", "scaledTrainData.take(3).map(x => (x.label, x.features)).foreach(println)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "(0.0,[-1.6160463330366548,1.054927299466599,0.03217026353736381,-0.5189244175441321,0.034083933424313526,-0.2801683099466359])\n", "(1.0,[-1.6160463330366548,1.3961052168540333,1.5354307758475527,0.3624320984120952,0.43165511884343954,-0.023273887437334728])\n", "(1.0,[-1.6160463330366548,1.5098311893165113,-1.4710902487728252,1.4641277433573794,-0.7436888225169864,0.06235758673243232])\n" ] }, { "output_type": "stream", "stream": "stderr", "text": [] } ], "prompt_number": 2 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we use the RDD *cache* method to ensure that these computed RDDs (parsedTrainData, scaledTrainData, parsedTestData and scaledTestData) are cached in memory by Spark and not re-computed with each iteration of stochastic gradient descent.\n", "\n", "We also define a helper function to evaluate our metrics: precision, recall, accuracy and the F1-measure" ] }, { "cell_type": "code", "collapsed": false, "input": [ "// Function to compute evaluation metrics\n", "def eval_metrics(labelsAndPreds: RDD[(Double, Double)]) : Tuple2[Array[Double], Array[Double]] = {\n", " val tp = labelsAndPreds.filter(r => r._1==1 && r._2==1).count.toDouble\n", " val tn = labelsAndPreds.filter(r => r._1==0 && r._2==0).count.toDouble\n", " val fp = labelsAndPreds.filter(r => r._1==1 && r._2==0).count.toDouble\n", " val fn = labelsAndPreds.filter(r => r._1==0 && r._2==1).count.toDouble\n", "\n", " val precision = tp / (tp+fp)\n", " val recall = tp / (tp+fn)\n", " val F_measure = 2*precision*recall / (precision+recall)\n", " val accuracy = (tp+tn) / (tp+tn+fp+fn)\n", " new Tuple2(Array(tp, tn, fp, fn), Array(precision, recall, F_measure, accuracy))\n", "}" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [] }, { "output_type": "stream", "stream": "stderr", "text": [] } ], "prompt_number": 3 }, { "cell_type": "markdown", "metadata": {}, "source": [ "ML-Lib supports a few algorithms for supervised learning, among those are Linear Regression, Logistic Regression, Naive Bayes, Decision Tree and Linear SVM. 
We will use Logistic Regression and SVM, both of which are implemented using [Stochastic Gradient descent](http://en.wikipedia.org/wiki/Stochastic_gradient_descent) (SGD).\n", "\n", "Let's see how to build these models with ML-Lib:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import org.apache.spark.mllib.classification.LogisticRegressionWithSGD\n", "\n", "// Build the Logistic Regression model\n", "val model_lr = LogisticRegressionWithSGD.train(scaledTrainData, numIterations=100)\n", "\n", "// Predict\n", "val labelsAndPreds_lr = scaledTestData.map { point =>\n", " val pred = model_lr.predict(point.features)\n", " (pred, point.label)\n", "}\n", "val m_lr = eval_metrics(labelsAndPreds_lr)._2\n", "println(\"precision = %.2f, recall = %.2f, F1 = %.2f, accuracy = %.2f\".format(m_lr(0), m_lr(1), m_lr(2), m_lr(3)))" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "precision = 0.37, recall = 0.64, F1 = 0.47, accuracy = 0.59\n" ] }, { "output_type": "stream", "stream": "stderr", "text": [] } ], "prompt_number": 4 }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have built a model using Logistic Regression with SGD using 100 iterations, and then used it to predict flight delays over the validation set to measure performance: precision, recall, F1 and accuracy. \n", "\n", "Next, let's try a Support Vector Machine model:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import org.apache.spark.mllib.classification.SVMWithSGD\n", "\n", "// Build the SVM model\n", "val svmAlg = new SVMWithSGD()\n", "svmAlg.optimizer.setNumIterations(100)\n", " .setRegParam(1.0)\n", " .setStepSize(1.0)\n", "val model_svm = svmAlg.run(scaledTrainData)\n", "\n", "// Predict\n", "val labelsAndPreds_svm = scaledTestData.map { point =>\n", " val pred = model_svm.predict(point.features)\n", " (pred, point.label)\n", "}\n", "val m_svm = eval_metrics(labelsAndPreds_svm)._2\n", "println(\"precision = %.2f, recall = %.2f, F1 = %.2f, accuracy = %.2f\".format(m_svm(0), m_svm(1), m_svm(2), m_svm(3)))" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "precision = 0.37, recall = 0.64, F1 = 0.47, accuracy = 0.59\n" ] }, { "output_type": "stream", "stream": "stderr", "text": [] } ], "prompt_number": 5 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since ML-Lib also has a strong Decision Tree implementation, let's use it here:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import org.apache.spark.mllib.tree.DecisionTree\n", "\n", "// Build the Decision Tree model\n", "val numClasses = 2\n", "val categoricalFeaturesInfo = Map[Int, Int]()\n", "val impurity = \"gini\"\n", "val maxDepth = 10\n", "val maxBins = 100\n", "val model_dt = DecisionTree.trainClassifier(parsedTrainData, numClasses, categoricalFeaturesInfo, impurity, maxDepth, maxBins)\n", "\n", "// Predict\n", "val labelsAndPreds_dt = parsedTestData.map { point =>\n", " val pred = model_dt.predict(point.features)\n", " (pred, point.label)\n", "}\n", "val m_dt = eval_metrics(labelsAndPreds_dt)._2\n", "println(\"precision = %.2f, recall = %.2f, F1 = %.2f, accuracy = %.2f\".format(m_dt(0), m_dt(1), m_dt(2), m_dt(3)))" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "precision = 0.41, recall = 0.24, F1 = 0.31, accuracy = 0.69\n" ] }, { "output_type": "stream", "stream": "stderr", "text": [] } ], "prompt_number": 6 }, { "cell_type": 
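"markdown", "metadata": {}, "source": [ "Before commenting on these results, it can help to see the three classifiers side by side. The next cell is a small illustrative addition (not part of the original walkthrough): it simply reprints the metrics already computed in *m_lr*, *m_svm* and *m_dt*." ] }, { "cell_type": "code", "collapsed": false, "input": [ "// Illustrative summary: print the metrics of all three models side by side.\n", "// m_lr, m_svm and m_dt each hold Array(precision, recall, F1, accuracy) from the cells above.\n", "val results = Seq((\"Logistic Regression\", m_lr), (\"SVM\", m_svm), (\"Decision Tree\", m_dt))\n", "results.foreach { case (name, m) =>\n", " println(\"%-20s precision = %.2f, recall = %.2f, F1 = %.2f, accuracy = %.2f\".format(name, m(0), m(1), m(2), m(3)))\n", "}" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": 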
"markdown", "metadata": {}, "source": [ "Note that overall accuracy is higher with the Decision Tree, but the precision and recall trade-off is different with higher precision and much lower recall. " ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Building a richer model with flight delays, weather data using Apache Spark and ML-Lib" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similar to our approach in the first part of this blog post, we now enrich the dataset by integrating weather data into our feature matrix, thus achieving better predictive performance overall for our model. \n", "\n", "To accomplish this with Apache Spark, we rewrite our previous *preprocess_spark* function to extract the same base features from the flight delay dataset, and also join those with five variables from the weather datasets: minimum and maximum temperature for the day, precipitation, snow and wind speed. Let's see how this is accomplished." ] }, { "cell_type": "code", "collapsed": false, "input": [ "import org.apache.spark.SparkContext._\n", "import scala.collection.JavaConverters._\n", "import au.com.bytecode.opencsv.CSVReader\n", "import java.io._\n", "\n", "// function to do a preprocessing step for a given file\n", "\n", "def preprocess_spark(delay_file: String, weather_file: String): RDD[Array[Double]] = { \n", " // Read wether data\n", " val delayRecs = prepFlightDelays(delay_file).map{ rec => \n", " val features = rec.gen_features\n", " (features._1, features._2)\n", " }\n", "\n", " // Read weather data into RDDs\n", " val station_inx = 0\n", " val date_inx = 1\n", " val metric_inx = 2\n", " val value_inx = 3\n", "\n", " def filterMap(wdata:RDD[Array[String]], metric:String):RDD[(String,Double)] = {\n", " wdata.filter(vals => vals(metric_inx) == metric).map(vals => (vals(date_inx), vals(value_inx).toDouble))\n", " }\n", "\n", " val wdata = sc.textFile(weather_file).map(line => line.split(\",\"))\n", " .filter(vals => vals(station_inx) == \"USW00094846\")\n", " val w_tmin = filterMap(wdata,\"TMIN\")\n", " val w_tmax = filterMap(wdata,\"TMAX\")\n", " val w_prcp = filterMap(wdata,\"PRCP\")\n", " val w_snow = filterMap(wdata,\"SNOW\")\n", " val w_awnd = filterMap(wdata,\"AWND\")\n", "\n", " delayRecs.join(w_tmin).map(vals => (vals._1, vals._2._1 ++ Array(vals._2._2)))\n", " .join(w_tmax).map(vals => (vals._1, vals._2._1 ++ Array(vals._2._2)))\n", " .join(w_prcp).map(vals => (vals._1, vals._2._1 ++ Array(vals._2._2)))\n", " .join(w_snow).map(vals => (vals._1, vals._2._1 ++ Array(vals._2._2)))\n", " .join(w_awnd).map(vals => vals._2._1 ++ Array(vals._2._2))\n", "}\n", "\n", "val data_2007 = preprocess_spark(\"airline/delay/2007.csv\", \"airline/weather/2007.csv\")\n", "val data_2008 = preprocess_spark(\"airline/delay/2008.csv\", \"airline/weather/2008.csv\")\n", "\n", "data_2007.take(5).map(x => x mkString \",\").foreach(println)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "63.0,2.0,14.0,3.0,15.0,316.0,5.0,-139.0,-61.0,8.0,36.0,53.0\n", "0.0,2.0,14.0,3.0,12.0,925.0,5.0,-139.0,-61.0,8.0,36.0,53.0\n", "105.0,2.0,14.0,3.0,17.0,316.0,5.0,-139.0,-61.0,8.0,36.0,53.0\n", "36.0,2.0,14.0,3.0,19.0,719.0,5.0,-139.0,-61.0,8.0,36.0,53.0\n", "35.0,2.0,14.0,3.0,18.0,719.0,5.0,-139.0,-61.0,8.0,36.0,53.0\n" ] }, { "output_type": "stream", "stream": "stderr", "text": [] } ], "prompt_number": 6 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the minimum and maximum temparature variables from 
the weather dataset are measured here in Celsius and multiplied by 10. So for example -139.0 would translate into -13.9 Celsius." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Modeling with weather data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are going to repeat the previous models of SVM and Decision Tree with our enriched feature set. As before, we create an RDD of *LabeledPoint* objects, and normalize our dataset with ML-Lib's *StandardScaler*:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import org.apache.spark.mllib.regression.LabeledPoint\n", "import org.apache.spark.mllib.linalg.Vectors\n", "import org.apache.spark.mllib.feature.StandardScaler\n", "\n", "def parseData(vals: Array[Double]): LabeledPoint = {\n", " LabeledPoint(if (vals(0)>=15) 1.0 else 0.0, Vectors.dense(vals.drop(1)))\n", "}\n", "\n", "// Prepare training set\n", "val parsedTrainData = data_2007.map(parseData)\n", "val scaler = new StandardScaler(withMean = true, withStd = true).fit(parsedTrainData.map(x => x.features))\n", "val scaledTrainData = parsedTrainData.map(x => LabeledPoint(x.label, scaler.transform(Vectors.dense(x.features.toArray))))\n", "parsedTrainData.cache\n", "scaledTrainData.cache\n", "\n", "// Prepare test/validation set\n", "val parsedTestData = data_2008.map(parseData)\n", "val scaledTestData = parsedTestData.map(x => LabeledPoint(x.label, scaler.transform(Vectors.dense(x.features.toArray))))\n", "parsedTestData.cache\n", "scaledTestData.cache\n", "\n", "scaledTrainData.take(3).map(x => (x.label, x.features)).foreach(println)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "(1.0,[-1.3223160050229035,-0.19605839762065824,-0.4689165738993619,0.362432098412094,-0.7436888225169838,-0.7083256807954716,-1.8498937310721373,-1.7543558972509141,-0.24059125907894596,2.6196648266835743,0.5456137506493843])\n", "(0.0,[-1.3223160050229035,-0.19605839762065824,-0.4689165738993619,-0.2985852885550763,0.4316551188434456,-0.7083256807954716,-1.8498937310721373,-1.7543558972509141,-0.24059125907894596,2.6196648266835743,0.5456137506493843])\n", "(1.0,[-1.3223160050229035,-0.19605839762065824,-0.4689165738993619,0.8031103563902076,-0.7436888225169838,-0.7083256807954716,-1.8498937310721373,-1.7543558972509141,-0.24059125907894596,2.6196648266835743,0.5456137506493843])\n" ] }, { "output_type": "stream", "stream": "stderr", "text": [] } ], "prompt_number": 8 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's build a Support Vector Machine model using this enriched feature matrix:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import org.apache.spark.mllib.classification.SVMWithSGD\n", "import org.apache.spark.mllib.optimization.L1Updater\n", "\n", "// Build the SVM model\n", "val svmAlg = new SVMWithSGD()\n", "svmAlg.optimizer.setNumIterations(100)\n", " .setRegParam(1.0)\n", " .setStepSize(1.0)\n", "val model_svm = svmAlg.run(scaledTrainData)\n", "\n", "// Predict\n", "val labelsAndPreds_svm = scaledTestData.map { point =>\n", " val pred = model_svm.predict(point.features)\n", " (pred, point.label)\n", "}\n", "val m_svm = eval_metrics(labelsAndPreds_svm)._2\n", "println(\"precision = %.2f, recall = %.2f, F1 = %.2f, accuracy = %.2f\".format(m_svm(0), m_svm(1), m_svm(2), m_svm(3)))" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "precision = 0.39, recall = 0.68, F1 = 0.49, accuracy = 0.61\n" 
] }, { "output_type": "stream", "stream": "stderr", "text": [] } ], "prompt_number": 9 }, { "cell_type": "markdown", "metadata": {}, "source": [ "And finally, let's implement a Decision Tree model:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import org.apache.spark.mllib.tree.DecisionTree\n", "\n", "// Build the Decision Tree model\n", "val numClasses = 2\n", "val categoricalFeaturesInfo = Map[Int, Int]()\n", "val impurity = \"gini\"\n", "val maxDepth = 10\n", "val maxBins = 100\n", "val model_dt = DecisionTree.trainClassifier(scaledTrainData, numClasses, categoricalFeaturesInfo, impurity, maxDepth, maxBins)\n", "\n", "// Predict\n", "val labelsAndPreds_dt = scaledTestData.map { point =>\n", " val pred = model_dt.predict(point.features)\n", " (pred, point.label)\n", "}\n", "val m_dt = eval_metrics(labelsAndPreds_dt)._2\n", "println(\"precision = %.2f, recall = %.2f, F1 = %.2f, accuracy = %.2f\".format(m_dt(0), m_dt(1), m_dt(2), m_dt(3)))" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "precision = 0.52, recall = 0.35, F1 = 0.42, accuracy = 0.72\n" ] }, { "output_type": "stream", "stream": "stderr", "text": [] } ], "prompt_number": 10 }, { "cell_type": "markdown", "metadata": {}, "source": [ "As expected, the improved feature set increased the accuracy of our model for both SVM and Decision Tree models." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Summary" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this IPython notebook we have demonstrated how to build a predictive model in Scala with Apache Hadoop, Apache Spark and its machine learning library: ML-Lib. \n", "\n", "We have used Apache Spark on our HDP cluster to perform various types of data pre-processing and feature engineering tasks. We then applied a few ML-Lib machine learning algorithms such as Support Vector Machines and Decision Tree to the resulting datasets and showed how through iterations we continuously add new and improved features resulting in better model performance.\n", "\n", "In the next part of this demo series we will show how to perform the same learning task with R." ] } ], "metadata": {} } ] }