{ "metadata": { "name": "", "signature": "sha256:7ce5292fff087bd3fe623675ed06dd472f9e0de945d9b383f83f9f151eb1eaad" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "RDD basics" ] }, { "cell_type": "heading", "level": 4, "metadata": {}, "source": [ "[Introduction to Spark with Python, by Jose A. Dianes](https://github.com/jadianes/spark-py-notebooks)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook will introduce three basic but essential Spark operations. Two of them are the *transformations* `map` and `filter`. The other is the *action* `collect`. At the same time we will introduce the concept of *persistence* in Spark. " ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Getting the data and creating the RDD" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we did in our first notebook, we will use the reduced dataset (10 percent) provided for the KDD Cup 1999, containing nearly half million network interactions. The file is provided as a Gzip file that we will download locally." ] }, { "cell_type": "code", "collapsed": false, "input": [ "import urllib\n", "f = urllib.urlretrieve (\"http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz\", \"kddcup.data_10_percent.gz\")" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can use this file to create our RDD." ] }, { "cell_type": "code", "collapsed": false, "input": [ "data_file = \"./kddcup.data_10_percent.gz\"\n", "raw_data = sc.textFile(data_file)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 1 }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "The `filter` transformation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This transformation can be applied to RDDs in order to keep just elements that satisfy a certain condition. More concretely, a function is evaluated on every element in the original RDD. The new resulting RDD will contain just those elements that make the function return `True`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For example, imagine we want to count how many `normal.` interactions we have in our dataset. We can filter our `raw_data` RDD as follows. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "normal_raw_data = raw_data.filter(lambda x: 'normal.' in x)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 2 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can count how many elements we have in the new RDD." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from time import time\n", "t0 = time()\n", "normal_count = normal_raw_data.count()\n", "tt = time() - t0\n", "print \"There are {} 'normal' interactions\".format(normal_count)\n", "print \"Count completed in {} seconds\".format(round(tt,3))" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "There are 97278 'normal' interactions\n", "Count completed in 5.951 seconds\n" ] } ], "prompt_number": 3 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remember from notebook 1 that we have a total of 494021 in our 10 percent dataset. Here we can see that 97278 contain the `normal.` tag word. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that we have measured the elapsed time for counting the elements in the RDD. We have done this because we wanted to point out that actual (distributed) computations in Spark take place when we execute *actions* and not *transformations*. In this case `count` is the action we execute on the RDD. We can apply as many transformations as we want on a our RDD and no computation will take place until we call the first action that, in this case takes a few seconds to complete." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "The `map` transformation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By using the `map` transformation in Spark, we can apply a function to every element in our RDD. Python's lambdas are specially expressive for this particular." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case we want to read our data file as a CSV formatted one. We can do this by applying a lambda function to each element in the RDD as follows." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from pprint import pprint\n", "csv_data = raw_data.map(lambda x: x.split(\",\"))\n", "t0 = time()\n", "head_rows = csv_data.take(5)\n", "tt = time() - t0\n", "print \"Parse completed in {} seconds\".format(round(tt,3))\n", "pprint(head_rows[0])" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Parse completed in 1.715 seconds\n", "[u'0',\n", " u'tcp',\n", " u'http',\n", " u'SF',\n", " u'181',\n", " u'5450',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'1',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'8',\n", " u'8',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'1.00',\n", " u'0.00',\n", " u'0.00',\n", " u'9',\n", " u'9',\n", " u'1.00',\n", " u'0.00',\n", " u'0.11',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'normal.']\n" ] } ], "prompt_number": 4 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, all action happens once we call the first Spark *action* (i.e. *take* in this case). What if we take a lot of elements instead of just the first few? " ] }, { "cell_type": "code", "collapsed": false, "input": [ "t0 = time()\n", "head_rows = csv_data.take(100000)\n", "tt = time() - t0\n", "print \"Parse completed in {} seconds\".format(round(tt,3))" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Parse completed in 8.629 seconds\n" ] } ], "prompt_number": 5 }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that it takes longer. The `map` function is applied now in a distributed way to a lot of elements on the RDD, hence the longer execution time." ] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Using `map` and predefined functions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course we can use predefined functions with `map`. Imagine we want to have each element in the RDD as a key-value pair where the key is the tag (e.g. *normal*) and the value is the whole list of elements that represents the row in the CSV formatted file. We could proceed as follows. 
" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def parse_interaction(line):\n", " elems = line.split(\",\")\n", " tag = elems[41]\n", " return (tag, elems)\n", "\n", "key_csv_data = raw_data.map(parse_interaction)\n", "head_rows = key_csv_data.take(5)\n", "pprint(head_rows[0])" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "(u'normal.',\n", " [u'0',\n", " u'tcp',\n", " u'http',\n", " u'SF',\n", " u'181',\n", " u'5450',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'1',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'0',\n", " u'8',\n", " u'8',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'1.00',\n", " u'0.00',\n", " u'0.00',\n", " u'9',\n", " u'9',\n", " u'1.00',\n", " u'0.00',\n", " u'0.11',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'0.00',\n", " u'normal.'])\n" ] } ], "prompt_number": 6 }, { "cell_type": "markdown", "metadata": {}, "source": [ "That was easy, wasn't it?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In our notebook about working with key-value pairs we will use this type of RDDs to do data aggregations (e.g. count by key)." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "The `collect` action" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So far we have used the actions `count` and `take`. Another basic action we need to learn is `collect`. Basically it will get all the elements in the RDD into memory for us to work with them. For this reason it has to be used with care, specially when working with large RDDs. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An example using our raw data. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "t0 = time()\n", "all_raw_data = raw_data.collect()\n", "tt = time() - t0\n", "print \"Data collected in {} seconds\".format(round(tt,3))" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Data collected in 17.927 seconds\n" ] } ], "prompt_number": 9 }, { "cell_type": "markdown", "metadata": {}, "source": [ "That took longer as any other action we used before, of course. Every Spark worker node that has a fragment of the RDD has to be coordinated in order to retrieve its part, and then *reduce* everything together. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a last example combining all the previous, we want to collect all the `normal` interactions as key-value pairs. 
" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# get data from file\n", "data_file = \"./kddcup.data_10_percent.gz\"\n", "raw_data = sc.textFile(data_file)\n", "\n", "# parse into key-value pairs\n", "key_csv_data = raw_data.map(parse_interaction)\n", "\n", "# filter normal key interactions\n", "normal_key_interactions = key_csv_data.filter(lambda x: x[0] == \"normal.\")\n", "\n", "# collect all\n", "t0 = time()\n", "all_normal = normal_key_interactions.collect()\n", "tt = time() - t0\n", "normal_count = len(all_normal)\n", "print \"Data collected in {} seconds\".format(round(tt,3))\n", "print \"There are {} 'normal' interactions\".format(normal_count)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Data collected in 12.485 seconds\n", "There are 97278 normal interactions\n" ] } ], "prompt_number": 13 }, { "cell_type": "markdown", "metadata": {}, "source": [ "This count matches with the previous count for `normal` interactions. The new procedure is more time consuming. This is because we retrieve all the data with `collect` and then use Python's `len` on the resulting list. Before we were just counting the total number of elements in the RDD by using `count`. " ] } ], "metadata": {} } ] }