{ "metadata": { "name": "", "signature": "sha256:a2c1f8578bcf768c504abca0cb1c7d5e6369914c12c14b66c2f3561eaaf2f6fd" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "##mrjob##\n", "\n", "__mrjob__ is a software package developed by the restaurant recommendation company _Yelp_. \n", "It's goal is to simplify the deployment of map-reduce jobs based on streaming and python onto different \n", "frameworks such as Hadoop on a private cluster or hadoop on AWS (called EMR).\n", "\n", "* You can read more about mrjob here: https://pythonhosted.org/mrjob/index.html \n", "* and you can clone it from github here: https://github.com/yelp/mrjob\n", "\n", "In this notebook we run a simple word-count example, add to it some logging commands, and look at two modes of running the job." ] }, { "cell_type": "code", "collapsed": false, "input": [ "import os\n", "home_dir=os.environ['HOME']\n", "root_dir = '/Users/yoavfreund/BigData/mrjob'\n", "examples_dir=root_dir+'/examples/'\n", "!ls -l $examples_dir" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "total 152\r\n", "-rw-r--r-- 1 yoavfreund staff 0 Apr 30 17:08 __init__.py\r\n", "drwxr-xr-x 5 yoavfreund staff 170 Apr 30 17:08 \u001b[34mbash_wrap\u001b[m\u001b[m\r\n", "drwxr-xr-x 3 yoavfreund staff 102 Apr 30 17:08 \u001b[34mcontrib\u001b[m\u001b[m\r\n", "-rw-r--r-- 1 yoavfreund staff 3176 Apr 30 17:08 mr_cmd.py\r\n", "-rw-r--r-- 1 yoavfreund staff 1198 Apr 30 17:08 mr_grep.py\r\n", "-rw-r--r-- 1 yoavfreund staff 2125 Apr 30 17:08 mr_jar_step_example.py\r\n", "-rw-r--r-- 1 yoavfreund staff 4108 Apr 30 17:08 mr_log_sampler.py\r\n", "-rwxr-xr-x 1 yoavfreund staff 1972 Apr 30 17:08 \u001b[31mmr_most_used_word.py\u001b[m\u001b[m\r\n", "-rw-r--r-- 1 yoavfreund staff 3400 Apr 30 17:08 mr_next_word_stats.py\r\n", "-rw-r--r-- 1 yoavfreund staff 3501 Apr 30 17:08 mr_page_rank.py\r\n", "drwxr-xr-x 6 yoavfreund staff 204 Apr 30 17:08 \u001b[34mmr_postfix_bounce\u001b[m\u001b[m\r\n", "-rw-r--r-- 1 yoavfreund staff 21954 Apr 30 17:08 mr_text_classifier.py\r\n", "drwxr-xr-x 6 yoavfreund staff 204 Apr 30 17:08 \u001b[34mmr_travelling_salesman\u001b[m\u001b[m\r\n", "-rw-r--r-- 1 yoavfreund staff 1552 Apr 30 17:08 mr_wc.py\r\n", "-rwxr-xr-x 1 yoavfreund staff 1977 Apr 30 17:08 \u001b[31mmr_wc.rb\u001b[m\u001b[m\r\n", "-rwxr-xr-x 1 yoavfreund staff 1065 Apr 30 17:08 \u001b[31mmr_word_freq_count.py\u001b[m\u001b[m\r\n", "-rw-r--r-- 1 yoavfreund staff 4887 Apr 30 17:08 py3k_word_freq_count.py\r\n" ] } ], "prompt_number": 3 }, { "cell_type": "code", "collapsed": false, "input": [ "filename=examples_dir+'mr_word_freq_count.py'\n", "print filename\n", "!ls $filaname\n", "# load example code from mr jobs as a starting point\n", "%load $filename" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "/Users/yoavfreund/BigData/mrjob/examples/mr_word_freq_count.py\n", "Simple use of mrjob.ipynb Weather Analysis.ipynb counts\r\n" ] } ], "prompt_number": 4 }, { "cell_type": "code", "collapsed": false, "input": [ "#!/usr/bin/python\n", "# Copyright 2009-2010 Yelp\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# http://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", 
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License.\n", "\"\"\"The classic MapReduce job: count the frequency of words.\n", "\"\"\"\n", "from mrjob.job import MRJob\n", "import re\n", "\n", "WORD_RE = re.compile(r\"[\\w']+\")\n", "\n", "\n", "class MRWordFreqCount(MRJob):\n", "\n", " def mapper(self, _, line):\n", " for word in WORD_RE.findall(line):\n", " yield (word.lower(), 1)\n", "\n", " def combiner(self, word, counts):\n", " yield (word, sum(counts))\n", "\n", " def reducer(self, word, counts):\n", " yield (word, sum(counts))\n", "\n", "\n", "if __name__ == '__main__':\n", " MRWordFreqCount.run()\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "%%writefile mr_word_freq_count.py\n", "#!/usr/bin/python\n", "# Copyright 2009-2010 Yelp\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# http://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License.\n", "\"\"\"The classic MapReduce job: count the frequency of words.\n", "\"\"\"\n", "from mrjob.job import MRJob\n", "import re\n", "from sys import stderr\n", "\n", "WORD_RE = re.compile(r\"[\\w']+\")\n", "\n", "#logfile=open('log','w')\n", "logfile=stderr\n", "\n", "class MRWordFreqCount(MRJob):\n", "\n", " def mapper(self, _, line):\n", " for word in WORD_RE.findall(line):\n", " logfile.write('mapper '+word.lower()+'\\n')\n", " yield (word.lower(), 1)\n", "\n", " def combiner(self, word, counts):\n", " #yield (word, sum(counts))\n", " l_counts=[c for c in counts] # extract list from iterator\n", " S=sum(l_counts)\n", " logfile.write('combiner '+word+' ['+','.join([str(c) for c in l_counts])+']='+str(S)+'\\n')\n", " yield (word, S)\n", "\n", " def reducer(self, word, counts):\n", " #yield (word, sum(counts))\n", " l_counts=[c for c in counts] # extract list from iterator\n", " S=sum(l_counts)\n", " logfile.write('reducer '+word+' ['+','.join([str(c) for c in l_counts])+']='+str(S)+'\\n')\n", " yield (word, S)\n", "\n", "if __name__ == '__main__':\n", " MRWordFreqCount.run()\n" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Overwriting mr_word_freq_count.py\n" ] } ], "prompt_number": 6 }, { "cell_type": "code", "collapsed": false, "input": [ "!python mr_word_freq_count.py $root_dir/README.rst > counts\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "!cat log" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "cat: log: No such file or directory\r\n" ] } ], "prompt_number": 8 }, { "cell_type": "code", "collapsed": false, "input": [ "!cat counts" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "\"'__main__'\"\t1\r\n", "\"04\"\t1\r\n", 
"\"05\"\t1\r\n", "\"08\"\t1\r\n", "\"1\"\t1\r\n", "\"2\"\t2\r\n", "\"2009\"\t1\r\n", "\"2010\"\t1\r\n", "\"2011\"\t4\r\n", "\"2012\"\t1\r\n", "\"4\"\t1\r\n", "\"4898987\"\t1\r\n", "\"5\"\t1\r\n", "\"_\"\t18\r\n", "\"__name__\"\t1\r\n", "\"a\"\t3\r\n", "\"access\"\t1\r\n", "\"accordingly\"\t1\r\n", "\"account\"\t3\r\n", "\"advanced\"\t2\r\n", "\"aimotion\"\t1\r\n", "\"allows\"\t1\r\n", "\"also\"\t1\r\n", "\"amazon\"\t6\r\n", "\"amazon's\"\t1\r\n", "\"an\"\t2\r\n", "\"analysis\"\t2\r\n", "\"and\"\t12\r\n", "\"apache\"\t1\r\n", "\"automatically\"\t1\r\n", "\"aws\"\t5\r\n", "\"aws_access_key_id\"\t2\r\n", "\"aws_secret_access_key\"\t2\r\n", "\"basics\"\t1\r\n", "\"basis\"\t1\r\n", "\"blind\"\t3\r\n", "\"blip\"\t1\r\n", "\"blogspot\"\t1\r\n", "\"buy\"\t1\r\n", "\"by\"\t1\r\n", "\"ci\"\t2\r\n", "\"class\"\t1\r\n", "\"classic\"\t1\r\n", "\"click\"\t1\r\n", "\"cluster\"\t5\r\n", "\"code\"\t4\r\n", "\"com\"\t10\r\n", "\"combiner\"\t1\r\n", "\"compile\"\t1\r\n", "\"computing\"\t1\r\n", "\"conf\"\t6\r\n", "\"config\"\t1\r\n", "\"configs\"\t1\r\n", "\"configuration\"\t1\r\n", "\"contents\"\t1\r\n", "\"count\"\t1\r\n", "\"counts\"\t7\r\n", "\"create\"\t1\r\n", "\"credentials\"\t1\r\n", "\"def\"\t3\r\n", "\"development\"\t1\r\n", "\"discussion\"\t1\r\n", "\"distributed\"\t1\r\n", "\"docs\"\t2\r\n", "\"documentation\"\t5\r\n", "\"duplicate\"\t1\r\n", "\"e\"\t1\r\n", "\"easily\"\t1\r\n", "\"elastic\"\t5\r\n", "\"elasticmapreduce\"\t2\r\n", "\"emr\"\t9\r\n", "\"en\"\t1\r\n", "\"environment\"\t3\r\n", "\"error\"\t1\r\n", "\"etc\"\t1\r\n", "\"everyone\"\t1\r\n", "\"example\"\t1\r\n", "\"examples\"\t4\r\n", "\"features\"\t2\r\n", "\"feeds\"\t1\r\n", "\"file\"\t2\r\n", "\"findall\"\t1\r\n", "\"for\"\t8\r\n", "\"frequency\"\t1\r\n", "\"from\"\t5\r\n", "\"fully\"\t1\r\n", "\"g\"\t1\r\n", "\"get\"\t1\r\n", "\"github\"\t3\r\n", "\"google\"\t1\r\n", "\"graph\"\t2\r\n", "\"greg\"\t2\r\n", "\"group\"\t2\r\n", "\"groups\"\t1\r\n", "\"guides\"\t1\r\n", "\"hadoop\"\t11\r\n", "\"hadoop_home\"\t1\r\n", "\"handled\"\t1\r\n", "\"helps\"\t1\r\n", "\"hourly\"\t1\r\n", "\"html\"\t3\r\n", "\"http\"\t16\r\n", "\"https\"\t3\r\n", "\"if\"\t1\r\n", "\"image\"\t2\r\n", "\"import\"\t2\r\n", "\"important\"\t1\r\n", "\"in\"\t5\r\n", "\"information\"\t2\r\n", "\"inside\"\t1\r\n", "\"install\"\t4\r\n", "\"installation\"\t1\r\n", "\"interpret\"\t1\r\n", "\"into\"\t1\r\n", "\"introduction\"\t2\r\n", "\"is\"\t2\r\n", "\"it\"\t3\r\n", "\"its\"\t1\r\n", "\"job\"\t4\r\n", "\"job's\"\t1\r\n", "\"jobs\"\t3\r\n", "\"keys\"\t1\r\n", "\"killion\"\t1\r\n", "\"latest\"\t1\r\n", "\"line\"\t2\r\n", "\"links\"\t1\r\n", "\"live\"\t1\r\n", "\"locally\"\t2\r\n", "\"logo\"\t1\r\n", "\"logo_medium\"\t1\r\n", "\"logos\"\t1\r\n", "\"logs\"\t1\r\n", "\"looks\"\t1\r\n", "\"lower\"\t1\r\n", "\"mailto\"\t1\r\n", "\"make\"\t3\r\n", "\"map\"\t2\r\n", "\"mapper\"\t1\r\n", "\"mapreduce\"\t8\r\n", "\"marcelcaraciolo\"\t1\r\n", "\"master\"\t1\r\n", "\"minimal\"\t1\r\n", "\"more\"\t3\r\n", "\"mr_word_freq_count\"\t3\r\n", "\"mrjob\"\t31\r\n", "\"mrjob_conf\"\t1\r\n", "\"mrwordfreqcount\"\t2\r\n", "\"multi\"\t1\r\n", "\"need\"\t1\r\n", "\"net\"\t3\r\n", "\"next\"\t1\r\n", "\"of\"\t2\r\n", "\"on\"\t10\r\n", "\"one\"\t1\r\n", "\"only\"\t1\r\n", "\"or\"\t1\r\n", "\"org\"\t7\r\n", "\"other\"\t3\r\n", "\"out\"\t1\r\n", "\"overview\"\t1\r\n", "\"own\"\t2\r\n", "\"package\"\t1\r\n", "\"packages\"\t4\r\n", "\"page\"\t1\r\n", "\"pip\"\t1\r\n", "\"png\"\t2\r\n", "\"postneo\"\t1\r\n", "\"production\"\t1\r\n", "\"project\"\t1\r\n", "\"put\"\t1\r\n", "\"py\"\t4\r\n", 
"\"pycon\"\t3\r\n", "\"pypi\"\t1\r\n", "\"pypy\"\t2\r\n", "\"python\"\t10\r\n", "\"pythonpath\"\t1\r\n", "\"r\"\t3\r\n", "\"raw\"\t1\r\n", "\"re\"\t2\r\n", "\"readme\"\t3\r\n", "\"readthedocs\"\t1\r\n", "\"recommendations\"\t2\r\n", "\"recsys\"\t1\r\n", "\"reduce\"\t2\r\n", "\"reducer\"\t1\r\n", "\"reference\"\t1\r\n", "\"regions\"\t1\r\n", "\"rst\"\t3\r\n", "\"run\"\t8\r\n", "\"scripts\"\t1\r\n", "\"secret\"\t1\r\n", "\"security\"\t1\r\n", "\"see\"\t1\r\n", "\"self\"\t3\r\n", "\"service\"\t1\r\n", "\"services\"\t1\r\n", "\"set\"\t5\r\n", "\"setting\"\t1\r\n", "\"setup\"\t4\r\n", "\"sign\"\t1\r\n", "\"simple\"\t1\r\n", "\"simplejson\"\t1\r\n", "\"social\"\t2\r\n", "\"some\"\t1\r\n", "\"source\"\t5\r\n", "\"ssh\"\t1\r\n", "\"stable\"\t1\r\n", "\"stable1\"\t1\r\n", "\"step\"\t2\r\n", "\"streaming\"\t3\r\n", "\"sum\"\t2\r\n", "\"supports\"\t1\r\n", "\"sure\"\t1\r\n", "\"tarballs\"\t1\r\n", "\"target\"\t1\r\n", "\"testing\"\t1\r\n", "\"thanks\"\t1\r\n", "\"that\"\t1\r\n", "\"the\"\t7\r\n", "\"this\"\t1\r\n", "\"time\"\t1\r\n", "\"to\"\t9\r\n", "\"tracker\"\t1\r\n", "\"transparently\"\t1\r\n", "\"travis\"\t2\r\n", "\"tree\"\t2\r\n", "\"try\"\t1\r\n", "\"tunnel\"\t1\r\n", "\"tv\"\t1\r\n", "\"tz\"\t1\r\n", "\"up\"\t3\r\n", "\"upload\"\t2\r\n", "\"us\"\t1\r\n", "\"use\"\t1\r\n", "\"using\"\t2\r\n", "\"v0\"\t1\r\n", "\"variables\"\t2\r\n", "\"version\"\t2\r\n", "\"videos\"\t1\r\n", "\"w'\"\t1\r\n", "\"web\"\t1\r\n", "\"which\"\t1\r\n", "\"with\"\t3\r\n", "\"word\"\t6\r\n", "\"word_re\"\t2\r\n", "\"words\"\t1\r\n", "\"works\"\t4\r\n", "\"write\"\t2\r\n", "\"www\"\t1\r\n", "\"yelp\"\t4\r\n", "\"yield\"\t3\r\n", "\"you\"\t2\r\n", "\"you'll\"\t1\r\n", "\"your\"\t10\r\n" ] } ], "prompt_number": 9 }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "What is the meaning of \"yield\" ?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The keyword __yield__ is somewhat similar to __return__ however, while __return__ terminates the function and returns the result, \n", "__yield__, the first time it is encountered, return an object called a __generator__, without executing the function even once. 
When the generator is asked for its next value, the function body runs until a __yield__ statement is reached; the yielded value is returned and the function is suspended (but not terminated) until the next value is requested.\n", "\n", "Here is a simple example:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def myrange(start,stop,step):\n", "    value=start\n", "    while value<=stop:\n", "        yield value\n", "        value += step\n", "print [x for x in myrange(1.0,3.0,0.3)]" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "[1.0, 1.3, 1.6, 1.9000000000000001, 2.2, 2.5, 2.8]\n" ] } ], "prompt_number": 14 }, { "cell_type": "code", "collapsed": false, "input": [ "print myrange(1.0,3.0,0.3)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "\n" ] } ], "prompt_number": 15 }, { "cell_type": "code", "collapsed": false, "input": [ "gen1=myrange(1.0,3.0,0.3)\n", "gen2=myrange(2.0,5.0,0.7)\n", "print 'gen1:',[x for x in gen1]\n", "print 'gen1:',[x for x in gen1] # after the generator has terminated, it does not yield any more values.\n", "print 'gen2:',[x for x in gen2]" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "gen1: [1.0, 1.3, 1.6, 1.9000000000000001, 2.2, 2.5, 2.8]\n", "gen1: []\n", "gen2: [2.0, 2.7, 3.4000000000000004, 4.1000000000000005, 4.800000000000001]\n" ] } ], "prompt_number": 16 }, { "cell_type": "markdown", "metadata": {}, "source": [ "A generator is similar to an array or a list: all of these are __iterable__ objects. However, while a list stores all of its values in memory and can be read in any order, a generator creates its values on the fly and can only be traversed __once__, __in order__.\n", "\n", "It is the fact that values are generated on the fly and then discarded that makes generators attractive when processing large amounts of data: only a small amount of intermediate results (the outputs of the mapper, which are the inputs to the reducer) need to be stored in memory. How much depends on the communication speed between the mappers and the reducers.\n", "\n", "It is instructive to see how generators can be cascaded by passing a generator as a parameter to another generator." ] }, { "cell_type": "code", "collapsed": false, "input": [ "def mycumul(values): # values can be a list or a generator.\n", "    s=0\n", "    for value in values:\n", "        s+=value\n", "        yield s" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 17 }, { "cell_type": "code", "collapsed": false, "input": [ "# Here we pass a generator as an input to another generator.\n", "gen3=mycumul(myrange(1.0,3.0,0.3)) " ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 18 }, { "cell_type": "code", "collapsed": false, "input": [ "print 'gen3:',[x for x in gen3]" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "gen3: [1.0, 2.3, 3.9, 5.8, 8.0, 10.5, 13.3]\n" ] } ], "prompt_number": 19 }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Different modes of running a mrjob map-reduce job ##\n", "\n", "Once the mapper, combiner and reducer have been written and tested, you can run the job on different types of infrastructure:\n", "\n", "1. __inline__ run the job as a single process on the local machine.\n", "1. __local__ run the job on the local machine, but using multiple processes to simulate parallel processing.\n", "1. 
__hadoop__ run the job on a hadoop cluster (such as the one we have in SDSC)\n", "1. __EMR__ (Elastic Map Reduce) run the job on a hadoop cluster running on the amazon cloud.\n", "\n", "Below we run the same process we ran at the top using __local__ instead of the default __inline__. Observe that in this case the reducers have some non-trivial work to do even when combiners are used." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Running in local mode" ] }, { "cell_type": "code", "collapsed": false, "input": [ "!python mr_word_freq_count.py --runner=local $root_dir/README.rst > counts" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "using configs in /Users/yoavfreund/.mrjob.conf\r\n", "creating tmp directory /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "writing to /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-mapper_part-00000\r\n", "> //anaconda/bin/python mr_word_freq_count.py --step-num=0 --mapper /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/input_part-00000 | sort | //anaconda/bin/python mr_word_freq_count.py --step-num=0 --combiner > /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-mapper_part-00000\r\n", "writing to /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-mapper_part-00001\r\n", "> //anaconda/bin/python mr_word_freq_count.py --step-num=0 --mapper /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/input_part-00001 | sort | //anaconda/bin/python mr_word_freq_count.py --step-num=0 --combiner > /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-mapper_part-00001\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "STDERR: mapper mrjob\r\n", "STDERR: mapper image\r\n", "STDERR: mapper http\r\n", "STDERR: mapper github\r\n", "STDERR: mapper com\r\n", "STDERR: mapper yelp\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper raw\r\n", "STDERR: mapper master\r\n", "STDERR: mapper docs\r\n", "STDERR: mapper logos\r\n", "STDERR: mapper logo_medium\r\n", "STDERR: mapper png\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper is\r\n", "STDERR: mapper a\r\n", "STDERR: mapper python\r\n", "STDERR: mapper 2\r\n", "STDERR: mapper 5\r\n", "STDERR: mapper package\r\n", "STDERR: mapper that\r\n", "STDERR: mapper helps\r\n", "STDERR: mapper you\r\n", "STDERR: mapper write\r\n", "STDERR: mapper and\r\n", "STDERR: mapper run\r\n", "STDERR: mapper hadoop\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "STDERR: mapper streaming\r\n", "STDERR: mapper jobs\r\n", "STDERR: mapper stable\r\n", "STDERR: mapper version\r\n", "STDERR: mapper v0\r\n", "STDERR: mapper 4\r\n", "STDERR: mapper 2\r\n", "STDERR: mapper documentation\r\n", "STDERR: mapper http\r\n", "STDERR: mapper packages\r\n", "STDERR: mapper python\r\n", "STDERR: mapper org\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper _\r\n", "STDERR: mapper development\r\n", "STDERR: mapper version\r\n", "STDERR: mapper documentation\r\n", "STDERR: mapper http\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper readthedocs\r\n", "STDERR: mapper org\r\n", "STDERR: mapper 
en\r\n", "STDERR: mapper latest\r\n", "STDERR: mapper _\r\n", "STDERR: mapper image\r\n", "STDERR: mapper https\r\n", "STDERR: mapper travis\r\n", "STDERR: mapper ci\r\n", "STDERR: mapper org\r\n", "STDERR: mapper yelp\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper png\r\n", "STDERR: mapper target\r\n", "STDERR: mapper https\r\n", "STDERR: mapper travis\r\n", "STDERR: mapper ci\r\n", "STDERR: mapper org\r\n", "STDERR: mapper yelp\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper fully\r\n", "STDERR: mapper supports\r\n", "STDERR: mapper amazon's\r\n", "STDERR: mapper elastic\r\n", "STDERR: mapper mapreduce\r\n", "STDERR: mapper emr\r\n", "STDERR: mapper service\r\n", "STDERR: mapper which\r\n", "STDERR: mapper allows\r\n", "STDERR: mapper you\r\n", "STDERR: mapper to\r\n", "STDERR: mapper buy\r\n", "STDERR: mapper time\r\n", "STDERR: mapper on\r\n", "STDERR: mapper a\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper cluster\r\n", "STDERR: mapper on\r\n", "STDERR: mapper an\r\n", "STDERR: mapper hourly\r\n", "STDERR: mapper basis\r\n", "STDERR: mapper it\r\n", "STDERR: mapper also\r\n", "STDERR: mapper works\r\n", "STDERR: mapper with\r\n", "STDERR: mapper your\r\n", "STDERR: mapper own\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper cluster\r\n", "STDERR: mapper some\r\n", "STDERR: mapper important\r\n", "STDERR: mapper features\r\n", "STDERR: mapper run\r\n", "STDERR: mapper jobs\r\n", "STDERR: mapper on\r\n", "STDERR: mapper emr\r\n", "STDERR: mapper your\r\n", "STDERR: mapper own\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper cluster\r\n", "STDERR: mapper or\r\n", "STDERR: mapper locally\r\n", "STDERR: mapper for\r\n", "STDERR: mapper testing\r\n", "STDERR: mapper write\r\n", "STDERR: mapper multi\r\n", "STDERR: mapper step\r\n", "STDERR: mapper jobs\r\n", "STDERR: mapper one\r\n", "STDERR: mapper map\r\n", "STDERR: mapper reduce\r\n", "STDERR: mapper step\r\n", "STDERR: mapper feeds\r\n", "STDERR: mapper into\r\n", "STDERR: mapper the\r\n", "STDERR: mapper next\r\n", "STDERR: mapper duplicate\r\n", "STDERR: mapper your\r\n", "STDERR: mapper production\r\n", "STDERR: mapper environment\r\n", "STDERR: mapper inside\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper upload\r\n", "STDERR: mapper your\r\n", "STDERR: mapper source\r\n", "STDERR: mapper tree\r\n", "STDERR: mapper and\r\n", "STDERR: mapper put\r\n", "STDERR: mapper it\r\n", "STDERR: mapper in\r\n", "STDERR: mapper your\r\n", "STDERR: mapper job's\r\n", "STDERR: mapper pythonpath\r\n", "STDERR: mapper run\r\n", "STDERR: mapper make\r\n", "STDERR: mapper and\r\n", "STDERR: mapper other\r\n", "STDERR: mapper setup\r\n", "STDERR: mapper scripts\r\n", "STDERR: mapper set\r\n", "STDERR: mapper environment\r\n", "STDERR: mapper variables\r\n", "STDERR: mapper e\r\n", "STDERR: mapper g\r\n", "STDERR: mapper tz\r\n", "STDERR: mapper easily\r\n", "STDERR: mapper install\r\n", "STDERR: mapper python\r\n", "STDERR: mapper packages\r\n", "STDERR: mapper from\r\n", "STDERR: mapper tarballs\r\n", "STDERR: mapper emr\r\n", "STDERR: mapper only\r\n", "STDERR: mapper setup\r\n", "STDERR: mapper handled\r\n", "STDERR: mapper transparently\r\n", "STDERR: mapper by\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper conf\r\n", "STDERR: mapper config\r\n", "STDERR: mapper file\r\n", "STDERR: mapper automatically\r\n", "STDERR: mapper interpret\r\n", "STDERR: mapper error\r\n", "STDERR: mapper logs\r\n", "STDERR: mapper from\r\n", "STDERR: mapper emr\r\n", "STDERR: mapper ssh\r\n", "STDERR: mapper tunnel\r\n", 
"STDERR: mapper to\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper job\r\n", "STDERR: mapper tracker\r\n", "STDERR: mapper on\r\n", "STDERR: mapper emr\r\n", "STDERR: mapper minimal\r\n", "STDERR: mapper setup\r\n", "STDERR: mapper to\r\n", "STDERR: mapper run\r\n", "STDERR: mapper on\r\n", "STDERR: mapper emr\r\n", "STDERR: mapper set\r\n", "STDERR: mapper aws_access_key_id\r\n", "STDERR: mapper and\r\n", "STDERR: mapper aws_secret_access_key\r\n", "STDERR: mapper to\r\n", "STDERR: mapper run\r\n", "STDERR: mapper on\r\n", "STDERR: mapper your\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper cluster\r\n", "STDERR: mapper install\r\n", "STDERR: mapper simplejson\r\n", "STDERR: mapper and\r\n", "STDERR: mapper make\r\n", "STDERR: mapper sure\r\n", "STDERR: mapper hadoop_home\r\n", "STDERR: mapper is\r\n", "STDERR: mapper set\r\n", "STDERR: mapper installation\r\n", "STDERR: mapper from\r\n", "STDERR: mapper pypi\r\n", "STDERR: mapper pip\r\n", "STDERR: mapper install\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper from\r\n", "STDERR: mapper source\r\n", "STDERR: mapper python\r\n", "STDERR: mapper setup\r\n", "STDERR: mapper py\r\n", "STDERR: mapper install\r\n", "STDERR: mapper a\r\n", "STDERR: mapper simple\r\n", "STDERR: mapper map\r\n", "STDERR: mapper reduce\r\n", "STDERR: mapper job\r\n", "STDERR: mapper code\r\n", "STDERR: mapper for\r\n", "STDERR: mapper this\r\n", "STDERR: mapper example\r\n", "STDERR: mapper and\r\n", "STDERR: mapper more\r\n", "STDERR: mapper live\r\n", "STDERR: mapper in\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper examples\r\n", "STDERR: mapper code\r\n", "STDERR: mapper python\r\n", "STDERR: mapper the\r\n", "STDERR: mapper classic\r\n", "STDERR: mapper mapreduce\r\n", "STDERR: mapper job\r\n", "STDERR: mapper count\r\n", "STDERR: mapper the\r\n", "STDERR: mapper frequency\r\n", "STDERR: mapper of\r\n", "STDERR: mapper words\r\n", "STDERR: mapper from\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper job\r\n", "STDERR: mapper import\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper import\r\n", "STDERR: mapper re\r\n", "STDERR: mapper word_re\r\n", "STDERR: mapper re\r\n", "STDERR: mapper compile\r\n", "STDERR: mapper r\r\n", "STDERR: mapper w'\r\n", "STDERR: mapper class\r\n", "STDERR: mapper mrwordfreqcount\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper def\r\n", "STDERR: mapper mapper\r\n", "STDERR: mapper self\r\n", "STDERR: mapper _\r\n", "STDERR: mapper line\r\n", "STDERR: mapper for\r\n", "STDERR: mapper word\r\n", "STDERR: mapper in\r\n", "STDERR: mapper word_re\r\n", "STDERR: mapper findall\r\n", "STDERR: mapper line\r\n", "STDERR: mapper yield\r\n", "STDERR: mapper word\r\n", "STDERR: mapper lower\r\n", "STDERR: mapper 1\r\n", "STDERR: mapper def\r\n", "STDERR: mapper combiner\r\n", "STDERR: mapper self\r\n", "STDERR: mapper word\r\n", "STDERR: mapper counts\r\n", "STDERR: mapper yield\r\n", "STDERR: mapper word\r\n", "STDERR: mapper sum\r\n", "STDERR: mapper counts\r\n", "STDERR: mapper def\r\n", "STDERR: mapper reducer\r\n", "STDERR: mapper self\r\n", "STDERR: mapper word\r\n", "STDERR: mapper counts\r\n", "STDERR: mapper yield\r\n", "STDERR: mapper word\r\n", "STDERR: mapper sum\r\n", "STDERR: mapper counts\r\n", "STDERR: mapper if\r\n", "STDERR: mapper __name__\r\n", "STDERR: mapper '__main__'\r\n", "STDERR: mapper mrwordfreqcount\r\n", "STDERR: mapper run\r\n", "STDERR: mapper try\r\n", "STDERR: mapper it\r\n", "STDERR: mapper out\r\n", "STDERR: combiner '__main__' [1]=1\r\n", "STDERR: combiner 1 [1]=1\r\n", "STDERR: combiner 2 
[1,1]=2\r\n", "STDERR: combiner 4 [1]=1\r\n", "STDERR: combiner 5 [1]=1\r\n", "STDERR: combiner _ [1,1,1]=3\r\n", "STDERR: combiner __name__ [1]=1\r\n", "STDERR: combiner a [1,1,1]=3\r\n", "STDERR: combiner allows [1]=1\r\n", "STDERR: combiner also [1]=1\r\n", "STDERR: combiner amazon's [1]=1\r\n", "STDERR: combiner an [1]=1\r\n", "STDERR: combiner and [1,1,1,1,1,1]=6\r\n", "STDERR: combiner automatically [1]=1\r\n", "STDERR: combiner aws_access_key_id [1]=1\r\n", "STDERR: combiner aws_secret_access_key [1]=1\r\n", "STDERR: combiner basis [1]=1\r\n", "STDERR: combiner buy [1]=1\r\n", "STDERR: combiner by [1]=1\r\n", "STDERR: combiner ci [1,1]=2\r\n", "STDERR: combiner class [1]=1\r\n", "STDERR: combiner classic [1]=1\r\n", "STDERR: combiner cluster [1,1,1,1]=4\r\n", "STDERR: combiner code [1,1]=2\r\n", "STDERR: combiner com [1]=1\r\n", "STDERR: combiner combiner [1]=1\r\n", "STDERR: combiner compile [1]=1\r\n", "STDERR: combiner conf [1]=1\r\n", "STDERR: combiner config [1]=1\r\n", "STDERR: combiner count [1]=1\r\n", "STDERR: combiner counts [1,1,1,1]=4\r\n", "STDERR: combiner def [1,1,1]=3\r\n", "STDERR: combiner development [1]=1\r\n", "STDERR: combiner docs [1]=1\r\n", "STDERR: combiner documentation [1,1]=2\r\n", "STDERR: combiner duplicate [1]=1\r\n", "STDERR: combiner e [1]=1\r\n", "STDERR: combiner easily [1]=1\r\n", "STDERR: combiner elastic [1]=1\r\n", "STDERR: combiner emr [1,1,1,1,1,1]=6\r\n", "STDERR: combiner en [1]=1\r\n", "STDERR: combiner environment [1,1]=2\r\n", "STDERR: combiner error [1]=1\r\n", "STDERR: combiner example [1]=1\r\n", "STDERR: combiner examples [1]=1\r\n", "STDERR: combiner features [1]=1\r\n", "STDERR: combiner feeds [1]=1\r\n", "STDERR: combiner file [1]=1\r\n", "STDERR: combiner findall [1]=1\r\n", "STDERR: combiner for [1,1,1]=3\r\n", "STDERR: combiner frequency [1]=1\r\n", "STDERR: combiner from [1,1,1,1,1]=5\r\n", "STDERR: combiner fully [1]=1\r\n", "STDERR: combiner g [1]=1\r\n", "STDERR: combiner github [1]=1\r\n", "STDERR: combiner hadoop [1,1,1,1,1,1,1]=7\r\n", "STDERR: combiner hadoop_home [1]=1\r\n", "STDERR: combiner handled [1]=1\r\n", "STDERR: combiner helps [1]=1\r\n", "STDERR: combiner hourly [1]=1\r\n", "STDERR: combiner http [1,1,1]=3\r\n", "STDERR: combiner https [1,1]=2\r\n", "STDERR: combiner if [1]=1\r\n", "STDERR: combiner image [1,1]=2\r\n", "STDERR: combiner import [1,1]=2\r\n", "STDERR: combiner important [1]=1\r\n", "STDERR: combiner in [1,1,1]=3\r\n", "STDERR: combiner inside [1]=1\r\n", "STDERR: combiner install [1,1,1,1]=4\r\n", "STDERR: combiner installation [1]=1\r\n", "STDERR: combiner interpret [1]=1\r\n", "STDERR: combiner into [1]=1\r\n", "STDERR: combiner is [1,1]=2\r\n", "STDERR: combiner it [1,1,1]=3\r\n", "STDERR: combiner job [1,1,1,1]=4\r\n", "STDERR: combiner job's [1]=1\r\n", "STDERR: combiner jobs [1,1,1]=3\r\n", "STDERR: combiner latest [1]=1\r\n", "STDERR: combiner line [1,1]=2\r\n", "STDERR: combiner live [1]=1\r\n", "STDERR: combiner locally [1]=1\r\n", "STDERR: combiner logo_medium [1]=1\r\n", "STDERR: combiner logos [1]=1\r\n", "STDERR: combiner logs [1]=1\r\n", "STDERR: combiner lower [1]=1\r\n", "STDERR: combiner make [1,1]=2\r\n", "STDERR: combiner map [1,1]=2\r\n", "STDERR: combiner mapper [1]=1\r\n", "STDERR: combiner mapreduce [1,1]=2\r\n", "STDERR: combiner master [1]=1\r\n", "STDERR: combiner minimal [1]=1\r\n", "STDERR: combiner more [1]=1\r\n", "STDERR: combiner mrjob [1,1,1,1,1,1,1,1,1,1,1,1,1,1]=14\r\n", "STDERR: combiner mrwordfreqcount [1,1]=2\r\n", "STDERR: combiner multi [1]=1\r\n", 
"STDERR: combiner next [1]=1\r\n", "STDERR: combiner of [1]=1\r\n", "STDERR: combiner on [1,1,1,1,1,1]=6\r\n", "STDERR: combiner one [1]=1\r\n", "STDERR: combiner only [1]=1\r\n", "STDERR: combiner or [1]=1\r\n", "STDERR: combiner org [1,1,1,1]=4\r\n", "STDERR: combiner other [1]=1\r\n", "STDERR: combiner out [1]=1\r\n", "STDERR: combiner own [1,1]=2\r\n", "STDERR: combiner package [1]=1\r\n", "STDERR: combiner packages [1,1]=2\r\n", "STDERR: combiner pip [1]=1\r\n", "STDERR: combiner png [1,1]=2\r\n", "STDERR: combiner production [1]=1\r\n", "STDERR: combiner put [1]=1\r\n", "STDERR: combiner py [1]=1\r\n", "STDERR: combiner pypi [1]=1\r\n", "STDERR: combiner python [1,1,1,1,1]=5\r\n", "STDERR: combiner pythonpath [1]=1\r\n", "STDERR: combiner r [1]=1\r\n", "STDERR: combiner raw [1]=1\r\n", "STDERR: combiner re [1,1]=2\r\n", "STDERR: combiner readthedocs [1]=1\r\n", "STDERR: combiner reduce [1,1]=2\r\n", "STDERR: combiner reducer [1]=1\r\n", "STDERR: combiner run [1,1,1,1,1,1]=6\r\n", "STDERR: combiner scripts [1]=1\r\n", "STDERR: combiner self [1,1,1]=3\r\n", "STDERR: combiner service [1]=1\r\n", "STDERR: combiner set [1,1,1]=3\r\n", "STDERR: combiner setup [1,1,1,1]=4\r\n", "STDERR: combiner simple [1]=1\r\n", "STDERR: combiner simplejson [1]=1\r\n", "STDERR: combiner some [1]=1\r\n", "STDERR: combiner source [1,1]=2\r\n", "STDERR: combiner ssh [1]=1\r\n", "STDERR: combiner stable [1]=1\r\n", "STDERR: combiner step [1,1]=2\r\n", "STDERR: combiner streaming [1]=1\r\n", "STDERR: combiner sum [1,1]=2\r\n", "STDERR: combiner supports [1]=1\r\n", "STDERR: combiner sure [1]=1\r\n", "STDERR: combiner tarballs [1]=1\r\n", "STDERR: combiner target [1]=1\r\n", "STDERR: combiner testing [1]=1\r\n", "STDERR: combiner that [1]=1\r\n", "STDERR: combiner the [1,1,1]=3\r\n", "STDERR: combiner this [1]=1\r\n", "STDERR: combiner time [1]=1\r\n", "STDERR: combiner to [1,1,1,1]=4\r\n", "STDERR: combiner tracker [1]=1\r\n", "STDERR: combiner transparently [1]=1\r\n", "STDERR: combiner travis [1,1]=2\r\n", "STDERR: combiner tree [1]=1\r\n", "STDERR: combiner try [1]=1\r\n", "STDERR: combiner tunnel [1]=1\r\n", "STDERR: combiner tz [1]=1\r\n", "STDERR: combiner upload [1]=1\r\n", "STDERR: combiner v0 [1]=1\r\n", "STDERR: combiner variables [1]=1\r\n", "STDERR: combiner version [1,1]=2\r\n", "STDERR: combiner w' [1]=1\r\n", "STDERR: combiner which [1]=1\r\n", "STDERR: combiner with [1]=1\r\n", "STDERR: combiner word [1,1,1,1,1,1]=6\r\n", "STDERR: combiner word_re [1,1]=2\r\n", "STDERR: combiner words [1]=1\r\n", "STDERR: combiner works [1]=1\r\n", "STDERR: combiner write [1,1]=2\r\n", "STDERR: combiner yelp [1,1,1]=3\r\n", "STDERR: combiner yield [1,1,1]=3\r\n", "STDERR: combiner you [1,1]=2\r\n", "STDERR: combiner your [1,1,1,1,1,1]=6\r\n", "STDERR: mapper locally\r\n", "STDERR: mapper python\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper examples\r\n", "STDERR: mapper mr_word_freq_count\r\n", "STDERR: mapper py\r\n", "STDERR: mapper readme\r\n", "STDERR: mapper rst\r\n", "STDERR: mapper counts\r\n", "STDERR: mapper on\r\n", "STDERR: mapper emr\r\n", "STDERR: mapper python\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper examples\r\n", "STDERR: mapper mr_word_freq_count\r\n", "STDERR: mapper py\r\n", "STDERR: mapper readme\r\n", "STDERR: mapper rst\r\n", "STDERR: mapper r\r\n", "STDERR: mapper emr\r\n", "STDERR: mapper counts\r\n", "STDERR: mapper on\r\n", "STDERR: mapper your\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper cluster\r\n", "STDERR: mapper python\r\n", "STDERR: mapper mrjob\r\n", "STDERR: 
mapper examples\r\n", "STDERR: mapper mr_word_freq_count\r\n", "STDERR: mapper py\r\n", "STDERR: mapper readme\r\n", "STDERR: mapper rst\r\n", "STDERR: mapper r\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper counts\r\n", "STDERR: mapper setting\r\n", "STDERR: mapper up\r\n", "STDERR: mapper emr\r\n", "STDERR: mapper on\r\n", "STDERR: mapper amazon\r\n", "STDERR: mapper create\r\n", "STDERR: mapper an\r\n", "STDERR: mapper amazon\r\n", "STDERR: mapper web\r\n", "STDERR: mapper services\r\n", "STDERR: mapper account\r\n", "STDERR: mapper http\r\n", "STDERR: mapper aws\r\n", "STDERR: mapper amazon\r\n", "STDERR: mapper com\r\n", "STDERR: mapper _\r\n", "STDERR: mapper sign\r\n", "STDERR: mapper up\r\n", "STDERR: mapper for\r\n", "STDERR: mapper elastic\r\n", "STDERR: mapper mapreduce\r\n", "STDERR: mapper http\r\n", "STDERR: mapper aws\r\n", "STDERR: mapper amazon\r\n", "STDERR: mapper com\r\n", "STDERR: mapper elasticmapreduce\r\n", "STDERR: mapper _\r\n", "STDERR: mapper get\r\n", "STDERR: mapper your\r\n", "STDERR: mapper access\r\n", "STDERR: mapper and\r\n", "STDERR: mapper secret\r\n", "STDERR: mapper keys\r\n", "STDERR: mapper click\r\n", "STDERR: mapper security\r\n", "STDERR: mapper credentials\r\n", "STDERR: mapper on\r\n", "STDERR: mapper your\r\n", "STDERR: mapper account\r\n", "STDERR: mapper page\r\n", "STDERR: mapper http\r\n", "STDERR: mapper aws\r\n", "STDERR: mapper amazon\r\n", "STDERR: mapper com\r\n", "STDERR: mapper account\r\n", "STDERR: mapper _\r\n", "STDERR: mapper set\r\n", "STDERR: mapper the\r\n", "STDERR: mapper environment\r\n", "STDERR: mapper variables\r\n", "STDERR: mapper aws_access_key_id\r\n", "STDERR: mapper and\r\n", "STDERR: mapper aws_secret_access_key\r\n", "STDERR: mapper accordingly\r\n", "STDERR: mapper advanced\r\n", "STDERR: mapper configuration\r\n", "STDERR: mapper to\r\n", "STDERR: mapper run\r\n", "STDERR: mapper in\r\n", "STDERR: mapper other\r\n", "STDERR: mapper aws\r\n", "STDERR: mapper regions\r\n", "STDERR: mapper upload\r\n", "STDERR: mapper your\r\n", "STDERR: mapper source\r\n", "STDERR: mapper tree\r\n", "STDERR: mapper run\r\n", "STDERR: mapper make\r\n", "STDERR: mapper and\r\n", "STDERR: mapper use\r\n", "STDERR: mapper other\r\n", "STDERR: mapper advanced\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper features\r\n", "STDERR: mapper you'll\r\n", "STDERR: mapper need\r\n", "STDERR: mapper to\r\n", "STDERR: mapper set\r\n", "STDERR: mapper up\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper conf\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper looks\r\n", "STDERR: mapper for\r\n", "STDERR: mapper its\r\n", "STDERR: mapper conf\r\n", "STDERR: mapper file\r\n", "STDERR: mapper in\r\n", "STDERR: mapper the\r\n", "STDERR: mapper contents\r\n", "STDERR: mapper of\r\n", "STDERR: mapper mrjob_conf\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper conf\r\n", "STDERR: mapper etc\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper conf\r\n", "STDERR: mapper see\r\n", "STDERR: mapper the\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper conf\r\n", "STDERR: mapper documentation\r\n", "STDERR: mapper http\r\n", "STDERR: mapper packages\r\n", "STDERR: mapper python\r\n", "STDERR: mapper org\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper guides\r\n", "STDERR: mapper configs\r\n", "STDERR: mapper basics\r\n", "STDERR: mapper html\r\n", "STDERR: mapper _\r\n", "STDERR: mapper for\r\n", "STDERR: mapper more\r\n", "STDERR: mapper information\r\n", "STDERR: mapper project\r\n", "STDERR: mapper links\r\n", "STDERR: mapper source\r\n", 
"STDERR: mapper code\r\n", "STDERR: mapper http\r\n", "STDERR: mapper github\r\n", "STDERR: mapper com\r\n", "STDERR: mapper yelp\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper _\r\n", "STDERR: mapper documentation\r\n", "STDERR: mapper http\r\n", "STDERR: mapper packages\r\n", "STDERR: mapper python\r\n", "STDERR: mapper org\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper _\r\n", "STDERR: mapper discussion\r\n", "STDERR: mapper group\r\n", "STDERR: mapper http\r\n", "STDERR: mapper groups\r\n", "STDERR: mapper google\r\n", "STDERR: mapper com\r\n", "STDERR: mapper group\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper _\r\n", "STDERR: mapper reference\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper streaming\r\n", "STDERR: mapper http\r\n", "STDERR: mapper hadoop\r\n", "STDERR: mapper apache\r\n", "STDERR: mapper org\r\n", "STDERR: mapper docs\r\n", "STDERR: mapper stable1\r\n", "STDERR: mapper streaming\r\n", "STDERR: mapper html\r\n", "STDERR: mapper _\r\n", "STDERR: mapper elastic\r\n", "STDERR: mapper mapreduce\r\n", "STDERR: mapper http\r\n", "STDERR: mapper aws\r\n", "STDERR: mapper amazon\r\n", "STDERR: mapper com\r\n", "STDERR: mapper documentation\r\n", "STDERR: mapper elasticmapreduce\r\n", "STDERR: mapper _\r\n", "STDERR: mapper more\r\n", "STDERR: mapper information\r\n", "STDERR: mapper pycon\r\n", "STDERR: mapper 2011\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper overview\r\n", "STDERR: mapper http\r\n", "STDERR: mapper blip\r\n", "STDERR: mapper tv\r\n", "STDERR: mapper pycon\r\n", "STDERR: mapper us\r\n", "STDERR: mapper videos\r\n", "STDERR: mapper 2009\r\n", "STDERR: mapper 2010\r\n", "STDERR: mapper 2011\r\n", "STDERR: mapper pycon\r\n", "STDERR: mapper 2011\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper distributed\r\n", "STDERR: mapper computing\r\n", "STDERR: mapper for\r\n", "STDERR: mapper everyone\r\n", "STDERR: mapper 4898987\r\n", "STDERR: mapper _\r\n", "STDERR: mapper introduction\r\n", "STDERR: mapper to\r\n", "STDERR: mapper recommendations\r\n", "STDERR: mapper and\r\n", "STDERR: mapper mapreduce\r\n", "STDERR: mapper with\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper http\r\n", "STDERR: mapper aimotion\r\n", "STDERR: mapper blogspot\r\n", "STDERR: mapper com\r\n", "STDERR: mapper 2012\r\n", "STDERR: mapper 08\r\n", "STDERR: mapper introduction\r\n", "STDERR: mapper to\r\n", "STDERR: mapper recommendations\r\n", "STDERR: mapper with\r\n", "STDERR: mapper html\r\n", "STDERR: mapper _\r\n", "STDERR: mapper source\r\n", "STDERR: mapper code\r\n", "STDERR: mapper https\r\n", "STDERR: mapper github\r\n", "STDERR: mapper com\r\n", "STDERR: mapper marcelcaraciolo\r\n", "STDERR: mapper recsys\r\n", "STDERR: mapper mapreduce\r\n", "STDERR: mapper mrjob\r\n", "STDERR: mapper _\r\n", "STDERR: mapper social\r\n", "STDERR: mapper graph\r\n", "STDERR: mapper analysis\r\n", "STDERR: mapper using\r\n", "STDERR: mapper elastic\r\n", "STDERR: mapper mapreduce\r\n", "STDERR: mapper and\r\n", "STDERR: mapper pypy\r\n", "STDERR: mapper http\r\n", "STDERR: mapper postneo\r\n", "STDERR: mapper com\r\n", "STDERR: mapper 2011\r\n", "STDERR: mapper 05\r\n", "STDERR: mapper 04\r\n", "STDERR: mapper social\r\n", "STDERR: mapper graph\r\n", "STDERR: mapper analysis\r\n", "STDERR: mapper using\r\n", "STDERR: mapper elastic\r\n", "STDERR: mapper mapreduce\r\n", "STDERR: mapper and\r\n", "STDERR: mapper pypy\r\n", "STDERR: mapper _\r\n", "STDERR: mapper thanks\r\n", "STDERR: mapper to\r\n", "STDERR: mapper greg\r\n", "STDERR: mapper killion\r\n", "STDERR: mapper 
mailto\r\n", "STDERR: mapper greg\r\n", "STDERR: mapper blind\r\n", "STDERR: mapper works\r\n", "STDERR: mapper net\r\n", "STDERR: mapper _\r\n", "STDERR: mapper blind\r\n", "STDERR: mapper works\r\n", "STDERR: mapper net\r\n", "STDERR: mapper http\r\n", "STDERR: mapper www\r\n", "STDERR: mapper blind\r\n", "STDERR: mapper works\r\n", "STDERR: mapper net\r\n", "STDERR: mapper _\r\n", "STDERR: mapper for\r\n", "STDERR: mapper the\r\n", "STDERR: mapper logo\r\n", "STDERR: combiner 04 [1]=1\r\n", "STDERR: combiner 05 [1]=1\r\n", "STDERR: combiner 08 [1]=1\r\n", "STDERR: combiner 2009 [1]=1\r\n", "STDERR: combiner 2010 [1]=1\r\n", "STDERR: combiner 2011 [1,1,1,1]=4\r\n", "STDERR: combiner 2012 [1]=1\r\n", "STDERR: combiner 4898987 [1]=1\r\n", "STDERR: combiner _ [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]=15\r\n", "STDERR: combiner access [1]=1\r\n", "STDERR: combiner accordingly [1]=1\r\n", "STDERR: combiner account [1,1,1]=3\r\n", "STDERR: combiner advanced [1,1]=2\r\n", "STDERR: combiner aimotion [1]=1\r\n", "STDERR: combiner amazon [1,1,1,1,1,1]=6\r\n", "STDERR: combiner an [1]=1\r\n", "STDERR: combiner analysis [1,1]=2\r\n", "STDERR: combiner and [1,1,1,1,1,1]=6\r\n", "STDERR: combiner apache [1]=1\r\n", "STDERR: combiner aws [1,1,1,1,1]=5\r\n", "STDERR: combiner aws_access_key_id [1]=1\r\n", "STDERR: combiner aws_secret_access_key [1]=1\r\n", "STDERR: combiner basics [1]=1\r\n", "STDERR: combiner blind [1,1,1]=3\r\n", "STDERR: combiner blip [1]=1\r\n", "STDERR: combiner blogspot [1]=1\r\n", "STDERR: combiner click [1]=1\r\n", "STDERR: combiner cluster [1]=1\r\n", "STDERR: combiner code [1,1]=2\r\n", "STDERR: combiner com [1,1,1,1,1,1,1,1,1]=9\r\n", "STDERR: combiner computing [1]=1\r\n", "STDERR: combiner conf [1,1,1,1,1]=5\r\n", "STDERR: combiner configs [1]=1\r\n", "STDERR: combiner configuration [1]=1\r\n", "STDERR: combiner contents [1]=1\r\n", "STDERR: combiner counts [1,1,1]=3\r\n", "STDERR: combiner create [1]=1\r\n", "STDERR: combiner credentials [1]=1\r\n", "STDERR: combiner discussion [1]=1\r\n", "STDERR: combiner distributed [1]=1\r\n", "STDERR: combiner docs [1]=1\r\n", "STDERR: combiner documentation [1,1,1]=3\r\n", "STDERR: combiner elastic [1,1,1,1]=4\r\n", "STDERR: combiner elasticmapreduce [1,1]=2\r\n", "STDERR: combiner emr [1,1,1]=3\r\n", "STDERR: combiner environment [1]=1\r\n", "STDERR: combiner etc [1]=1\r\n", "STDERR: combiner everyone [1]=1\r\n", "STDERR: combiner examples [1,1,1]=3\r\n", "STDERR: combiner features [1]=1\r\n", "STDERR: combiner file [1]=1\r\n", "STDERR: combiner for [1,1,1,1,1]=5\r\n", "STDERR: combiner get [1]=1\r\n", "STDERR: combiner github [1,1]=2\r\n", "STDERR: combiner google [1]=1\r\n", "STDERR: combiner graph [1,1]=2\r\n", "STDERR: combiner greg [1,1]=2\r\n", "STDERR: combiner group [1,1]=2\r\n", "STDERR: combiner groups [1]=1\r\n", "STDERR: combiner guides [1]=1\r\n", "STDERR: combiner hadoop [1,1,1,1]=4\r\n", "STDERR: combiner html [1,1,1]=3\r\n", "STDERR: combiner http [1,1,1,1,1,1,1,1,1,1,1,1,1]=13\r\n", "STDERR: combiner https [1]=1\r\n", "STDERR: combiner in [1,1]=2\r\n", "STDERR: combiner information [1,1]=2\r\n", "STDERR: combiner introduction [1,1]=2\r\n", "STDERR: combiner its [1]=1\r\n", "STDERR: combiner keys [1]=1\r\n", "STDERR: combiner killion [1]=1\r\n", "STDERR: combiner links [1]=1\r\n", "STDERR: combiner locally [1]=1\r\n", "STDERR: combiner logo [1]=1\r\n", "STDERR: combiner looks [1]=1\r\n", "STDERR: combiner mailto [1]=1\r\n", "STDERR: combiner make [1]=1\r\n", "STDERR: combiner mapreduce [1,1,1,1,1,1]=6\r\n", "STDERR: combiner 
marcelcaraciolo [1]=1\r\n", "STDERR: combiner more [1,1]=2\r\n", "STDERR: combiner mr_word_freq_count [1,1,1]=3\r\n", "STDERR: combiner mrjob [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]=17\r\n", "STDERR: combiner mrjob_conf [1]=1\r\n", "STDERR: combiner need [1]=1\r\n", "STDERR: combiner net [1,1,1]=3\r\n", "STDERR: combiner of [1]=1\r\n", "STDERR: combiner on [1,1,1,1]=4\r\n", "STDERR: combiner org [1,1,1]=3\r\n", "STDERR: combiner other [1,1]=2\r\n", "STDERR: combiner overview [1]=1\r\n", "STDERR: combiner packages [1,1]=2\r\n", "STDERR: combiner page [1]=1\r\n", "STDERR: combiner postneo [1]=1\r\n", "STDERR: combiner project [1]=1\r\n", "STDERR: combiner py [1,1,1]=3\r\n", "STDERR: combiner pycon [1,1,1]=3\r\n", "STDERR: combiner pypy [1,1]=2\r\n", "STDERR: combiner python [1,1,1,1,1]=5\r\n", "STDERR: combiner r [1,1]=2\r\n", "STDERR: combiner readme [1,1,1]=3\r\n", "STDERR: combiner recommendations [1,1]=2\r\n", "STDERR: combiner recsys [1]=1\r\n", "STDERR: combiner reference [1]=1\r\n", "STDERR: combiner regions [1]=1\r\n", "STDERR: combiner rst [1,1,1]=3\r\n", "STDERR: combiner run [1,1]=2\r\n", "STDERR: combiner secret [1]=1\r\n", "STDERR: combiner security [1]=1\r\n", "STDERR: combiner see [1]=1\r\n", "STDERR: combiner services [1]=1\r\n", "STDERR: combiner set [1,1]=2\r\n", "STDERR: combiner setting [1]=1\r\n", "STDERR: combiner sign [1]=1\r\n", "STDERR: combiner social [1,1]=2\r\n", "STDERR: combiner source [1,1,1]=3\r\n", "STDERR: combiner stable1 [1]=1\r\n", "STDERR: combiner streaming [1,1]=2\r\n", "STDERR: combiner thanks [1]=1\r\n", "STDERR: combiner the [1,1,1,1]=4\r\n", "STDERR: combiner to [1,1,1,1,1]=5\r\n", "STDERR: combiner tree [1]=1\r\n", "STDERR: combiner tv [1]=1\r\n", "STDERR: combiner up [1,1,1]=3\r\n", "STDERR: combiner upload [1]=1\r\n", "STDERR: combiner us [1]=1\r\n", "STDERR: combiner use [1]=1\r\n", "STDERR: combiner using [1,1]=2\r\n", "STDERR: combiner variables [1]=1\r\n", "STDERR: combiner videos [1]=1\r\n", "STDERR: combiner web [1]=1\r\n", "STDERR: combiner with [1,1]=2\r\n", "STDERR: combiner works [1,1,1]=3\r\n", "STDERR: combiner www [1]=1\r\n", "STDERR: combiner yelp [1]=1\r\n", "STDERR: combiner you'll [1]=1\r\n", "STDERR: combiner your [1,1,1,1]=4\r\n", "Counters from step 1:\r\n", " (no counters found)\r\n", "writing to /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-mapper-sorted\r\n", "> sort /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-mapper_part-00000 /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-mapper_part-00001\r\n", "writing to /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-reducer_part-00000\r\n", "> //anaconda/bin/python mr_word_freq_count.py --step-num=0 --reducer /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/input_part-00000 > /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-reducer_part-00000\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "writing to /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-reducer_part-00001\r\n", "> //anaconda/bin/python mr_word_freq_count.py --step-num=0 --reducer 
/var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/input_part-00001 > /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-reducer_part-00001\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "STDERR: reducer '__main__' [1]=1\r\n", "STDERR: reducer 04 [1]=1\r\n", "STDERR: reducer 05 [1]=1\r\n", "STDERR: reducer 08 [1]=1\r\n", "STDERR: reducer 1 [1]=1\r\n", "STDERR: reducer 2 [2]=2\r\n", "STDERR: reducer 2009 [1]=1\r\n", "STDERR: reducer 2010 [1]=1\r\n", "STDERR: reducer 2011 [4]=4\r\n", "STDERR: reducer 2012 [1]=1\r\n", "STDERR: reducer 4 [1]=1\r\n", "STDERR: reducer 4898987 [1]=1\r\n", "STDERR: reducer 5 [1]=1\r\n", "STDERR: reducer _ [15,3]=18\r\n", "STDERR: reducer __name__ [1]=1\r\n", "STDERR: reducer a [3]=3\r\n", "STDERR: reducer access [1]=1\r\n", "STDERR: reducer accordingly [1]=1\r\n", "STDERR: reducer account [3]=3\r\n", "STDERR: reducer advanced [2]=2\r\n", "STDERR: reducer aimotion [1]=1\r\n", "STDERR: reducer allows [1]=1\r\n", "STDERR: reducer also [1]=1\r\n", "STDERR: reducer amazon [6]=6\r\n", "STDERR: reducer amazon's [1]=1\r\n", "STDERR: reducer an [1,1]=2\r\n", "STDERR: reducer analysis [2]=2\r\n", "STDERR: reducer and [6,6]=12\r\n", "STDERR: reducer apache [1]=1\r\n", "STDERR: reducer automatically [1]=1\r\n", "STDERR: reducer aws [5]=5\r\n", "STDERR: reducer aws_access_key_id [1,1]=2\r\n", "STDERR: reducer aws_secret_access_key [1,1]=2\r\n", "STDERR: reducer basics [1]=1\r\n", "STDERR: reducer basis [1]=1\r\n", "STDERR: reducer blind [3]=3\r\n", "STDERR: reducer blip [1]=1\r\n", "STDERR: reducer blogspot [1]=1\r\n", "STDERR: reducer buy [1]=1\r\n", "STDERR: reducer by [1]=1\r\n", "STDERR: reducer ci [2]=2\r\n", "STDERR: reducer class [1]=1\r\n", "STDERR: reducer classic [1]=1\r\n", "STDERR: reducer click [1]=1\r\n", "STDERR: reducer cluster [1,4]=5\r\n", "STDERR: reducer code [2,2]=4\r\n", "STDERR: reducer com [1,9]=10\r\n", "STDERR: reducer combiner [1]=1\r\n", "STDERR: reducer compile [1]=1\r\n", "STDERR: reducer computing [1]=1\r\n", "STDERR: reducer conf [1,5]=6\r\n", "STDERR: reducer config [1]=1\r\n", "STDERR: reducer configs [1]=1\r\n", "STDERR: reducer configuration [1]=1\r\n", "STDERR: reducer contents [1]=1\r\n", "STDERR: reducer count [1]=1\r\n", "STDERR: reducer counts [3,4]=7\r\n", "STDERR: reducer create [1]=1\r\n", "STDERR: reducer credentials [1]=1\r\n", "STDERR: reducer def [3]=3\r\n", "STDERR: reducer development [1]=1\r\n", "STDERR: reducer discussion [1]=1\r\n", "STDERR: reducer distributed [1]=1\r\n", "STDERR: reducer docs [1,1]=2\r\n", "STDERR: reducer documentation [2,3]=5\r\n", "STDERR: reducer duplicate [1]=1\r\n", "STDERR: reducer e [1]=1\r\n", "STDERR: reducer easily [1]=1\r\n", "STDERR: reducer elastic [1,4]=5\r\n", "STDERR: reducer elasticmapreduce [2]=2\r\n", "STDERR: reducer emr [3,6]=9\r\n", "STDERR: reducer en [1]=1\r\n", "STDERR: reducer environment [1,2]=3\r\n", "STDERR: reducer error [1]=1\r\n", "STDERR: reducer etc [1]=1\r\n", "STDERR: reducer everyone [1]=1\r\n", "STDERR: reducer example [1]=1\r\n", "STDERR: reducer examples [1,3]=4\r\n", "STDERR: reducer features [1,1]=2\r\n", "STDERR: reducer feeds [1]=1\r\n", "STDERR: reducer file [1,1]=2\r\n", "STDERR: reducer findall [1]=1\r\n", "STDERR: reducer for [3,5]=8\r\n", "STDERR: reducer frequency [1]=1\r\n", "STDERR: reducer from [5]=5\r\n", "STDERR: reducer fully [1]=1\r\n", "STDERR: reducer g [1]=1\r\n", "STDERR: reducer get [1]=1\r\n", "STDERR: reducer github 
[1,2]=3\r\n", "STDERR: reducer google [1]=1\r\n", "STDERR: reducer graph [2]=2\r\n", "STDERR: reducer greg [2]=2\r\n", "STDERR: reducer group [2]=2\r\n", "STDERR: reducer groups [1]=1\r\n", "STDERR: reducer guides [1]=1\r\n", "STDERR: reducer hadoop [4,7]=11\r\n", "STDERR: reducer hadoop_home [1]=1\r\n", "STDERR: reducer handled [1]=1\r\n", "STDERR: reducer helps [1]=1\r\n", "STDERR: reducer hourly [1]=1\r\n", "STDERR: reducer html [3]=3\r\n", "STDERR: reducer http [13,3]=16\r\n", "STDERR: reducer https [1,2]=3\r\n", "STDERR: reducer if [1]=1\r\n", "STDERR: reducer image [2]=2\r\n", "STDERR: reducer import [2]=2\r\n", "STDERR: reducer important [1]=1\r\n", "STDERR: reducer in [2,3]=5\r\n", "STDERR: reducer information [2]=2\r\n", "STDERR: reducer inside [1]=1\r\n", "STDERR: reducer install [4]=4\r\n", "STDERR: reducer installation [1]=1\r\n", "STDERR: reducer interpret [1]=1\r\n", "STDERR: reducer into [1]=1\r\n", "STDERR: reducer introduction [2]=2\r\n", "STDERR: reducer is [2]=2\r\n", "STDERR: reducer it [3]=3\r\n", "STDERR: reducer its [1]=1\r\n", "STDERR: reducer job [4]=4\r\n", "STDERR: reducer job's [1]=1\r\n", "STDERR: reducer jobs [3]=3\r\n", "STDERR: reducer keys [1]=1\r\n", "STDERR: reducer killion [1]=1\r\n", "STDERR: reducer latest [1]=1\r\n", "STDERR: reducer line [2]=2\r\n", "STDERR: reducer links [1]=1\r\n", "STDERR: reducer live [1]=1\r\n", "STDERR: reducer locally [1,1]=2\r\n", "STDERR: reducer logo [1]=1\r\n", "STDERR: reducer logo_medium [1]=1\r\n", "STDERR: reducer logos [1]=1\r\n", "STDERR: reducer logs [1]=1\r\n", "STDERR: reducer looks [1]=1\r\n", "STDERR: reducer lower [1]=1\r\n", "STDERR: reducer mailto [1]=1\r\n", "STDERR: reducer make [1,2]=3\r\n", "STDERR: reducer map [2]=2\r\n", "STDERR: reducer mapper [1]=1\r\n", "STDERR: reducer mapreduce [2,6]=8\r\n", "STDERR: reducer marcelcaraciolo [1]=1\r\n", "STDERR: reducer master [1]=1\r\n", "STDERR: reducer minimal [1]=1\r\n", "STDERR: reducer more [1,2]=3\r\n", "STDERR: reducer mr_word_freq_count [3]=3\r\n", "STDERR: reducer mrjob [14,17]=31\r\n", "STDERR: reducer mrjob_conf [1]=1\r\n", "STDERR: reducer mrwordfreqcount [2]=2\r\n", "STDERR: reducer multi [1]=1\r\n", "STDERR: reducer need [1]=1\r\n", "STDERR: reducer net [3]=3\r\n", "STDERR: reducer next [1]=1\r\n", "STDERR: reducer of [1,1]=2\r\n", "STDERR: reducer on [4,6]=10\r\n", "STDERR: reducer one [1]=1\r\n", "STDERR: reducer only [1]=1\r\n", "STDERR: reducer or [1]=1\r\n", "STDERR: reducer org [3,4]=7\r\n", "STDERR: reducer other [1,2]=3\r\n", "STDERR: reducer out [1]=1\r\n", "STDERR: reducer overview [1]=1\r\n", "STDERR: reducer own [2]=2\r\n", "STDERR: reducer package [1]=1\r\n", "STDERR: reducer packages [2,2]=4\r\n", "STDERR: reducer page [1]=1\r\n", "STDERR: reducer pip [1]=1\r\n", "STDERR: reducer png [2]=2\r\n", "STDERR: reducer postneo [1]=1\r\n", "STDERR: reducer production [1]=1\r\n", "STDERR: reducer project [1]=1\r\n", "STDERR: reducer put [1]=1\r\n", "STDERR: reducer py [1,3]=4\r\n", "STDERR: reducer pycon [3]=3\r\n", "STDERR: reducer pypi [1]=1\r\n", "STDERR: reducer pypy [2]=2\r\n", "STDERR: reducer python [5,5]=10\r\n", "STDERR: reducer pythonpath [1]=1\r\n", "STDERR: reducer r [1,2]=3\r\n", "STDERR: reducer raw [1]=1\r\n", "STDERR: reducer re [2]=2\r\n", "STDERR: reducer readme [3]=3\r\n", "STDERR: reducer readthedocs [1]=1\r\n", "STDERR: reducer recommendations [2]=2\r\n", "STDERR: reducer recsys [1]=1\r\n", "STDERR: reducer reduce [2]=2\r\n", "STDERR: reducer reducer [1]=1\r\n", "STDERR: reducer reference [1]=1\r\n", "STDERR: reducer regions 
[1]=1\r\n", "STDERR: reducer rst [3]=3\r\n", "STDERR: reducer run [2,6]=8\r\n", "STDERR: reducer scripts [1]=1\r\n", "STDERR: reducer secret [1]=1\r\n", "STDERR: reducer security [1]=1\r\n", "STDERR: reducer see [1]=1\r\n", "STDERR: reducer self [3]=3\r\n", "STDERR: reducer service [1]=1\r\n", "STDERR: reducer services [1]=1\r\n", "STDERR: reducer set [2,3]=5\r\n", "STDERR: reducer setting [1]=1\r\n", "STDERR: reducer setup [4]=4\r\n", "STDERR: reducer sign [1]=1\r\n", "STDERR: reducer simple [1]=1\r\n", "STDERR: reducer simplejson [1]=1\r\n", "STDERR: reducer social [2]=2\r\n", "STDERR: reducer some [1]=1\r\n", "STDERR: reducer source [2,3]=5\r\n", "STDERR: reducer ssh [1]=1\r\n", "STDERR: reducer stable [1]=1\r\n", "STDERR: reducer stable1 [1]=1\r\n", "STDERR: reducer step [2]=2\r\n", "STDERR: reducer streaming [1,2]=3\r\n", "STDERR: reducer sum [2]=2\r\n", "STDERR: reducer supports [1]=1\r\n", "STDERR: reducer sure [1]=1\r\n", "STDERR: reducer tarballs [1]=1\r\n", "STDERR: reducer target [1]=1\r\n", "STDERR: reducer testing [1]=1\r\n", "STDERR: reducer thanks [1]=1\r\n", "STDERR: reducer that [1]=1\r\n", "STDERR: reducer the [3,4]=7\r\n", "STDERR: reducer this [1]=1\r\n", "STDERR: reducer time [1]=1\r\n", "STDERR: reducer to [4,5]=9\r\n", "STDERR: reducer tracker [1]=1\r\n", "STDERR: reducer transparently [1]=1\r\n", "STDERR: reducer travis [2]=2\r\n", "STDERR: reducer tree [1,1]=2\r\n", "STDERR: reducer try [1]=1\r\n", "STDERR: reducer tunnel [1]=1\r\n", "STDERR: reducer tv [1]=1\r\n", "STDERR: reducer tz [1]=1\r\n", "STDERR: reducer up [3]=3\r\n", "STDERR: reducer upload [1,1]=2\r\n", "STDERR: reducer us [1]=1\r\n", "STDERR: reducer use [1]=1\r\n", "STDERR: reducer using [2]=2\r\n", "STDERR: reducer v0 [1]=1\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "STDERR: reducer variables [1,1]=2\r\n", "STDERR: reducer version [2]=2\r\n", "STDERR: reducer videos [1]=1\r\n", "STDERR: reducer w' [1]=1\r\n", "STDERR: reducer web [1]=1\r\n", "STDERR: reducer which [1]=1\r\n", "STDERR: reducer with [1,2]=3\r\n", "STDERR: reducer word [6]=6\r\n", "STDERR: reducer word_re [2]=2\r\n", "STDERR: reducer words [1]=1\r\n", "STDERR: reducer works [1,3]=4\r\n", "STDERR: reducer write [2]=2\r\n", "STDERR: reducer www [1]=1\r\n", "STDERR: reducer yelp [1,3]=4\r\n", "STDERR: reducer yield [3]=3\r\n", "STDERR: reducer you [2]=2\r\n", "STDERR: reducer you'll [1]=1\r\n", "STDERR: reducer your [4,6]=10\r\n", "Counters from step 1:\r\n", " (no counters found)\r\n", "Moving /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-reducer_part-00000 -> /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/output/part-00000\r\n", "Moving /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/step-0-reducer_part-00001 -> /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/output/part-00001\r\n", "Streaming final output from /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860/output\r\n", "removing tmp directory /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195136.371860\r\n" ] } ], "prompt_number": 10 }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Running in EMR mode on a dedicated job flow" ] }, { "cell_type": "code", "collapsed": false, "input": [ "!python mr_word_freq_count.py -r emr 
$root_dir/README.rst > counts" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "python: can't open file 'mr_word_freq_count.py': [Errno 2] No such file or directory\r\n" ] } ], "prompt_number": 2 }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Running in EMR mode on existing job flow (hadoop cluster)" ] }, { "cell_type": "code", "collapsed": false, "input": [ "job_flow_id='j-35O01QLMRUFED'\n", "!python mr_word_freq_count.py -r emr --emr-job-flow-id=j-35O01QLMRUFED $root_dir/README.rst > counts" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "using configs in /Users/yoavfreund/.mrjob.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "using existing scratch bucket mrjob-71c4e33417a2cde8\r\n", "using s3://mrjob-71c4e33417a2cde8/tmp/ as our scratch dir on S3\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "creating tmp directory /var/folders/80/c2kfvdvx5cx570r4vlzqgb840000gq/T/mr_word_freq_count.yoavfreund.20140507.195205.656489\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Copying non-input files into s3://mrjob-71c4e33417a2cde8/tmp/mr_word_freq_count.yoavfreund.20140507.195205.656489/files/\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Adding our job to existing job flow j-3MMGSXIO3FQR3\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Job launched 30.4s ago, status RUNNING: Running step (mr_word_freq_count.yoavfreund.20140507.195205.656489: Step 1 of 1)\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Job launched 60.9s ago, status RUNNING: Running step (mr_word_freq_count.yoavfreund.20140507.195205.656489: Step 1 of 1)\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Job launched 91.4s ago, status RUNNING: Running step (mr_word_freq_count.yoavfreund.20140507.195205.656489: Step 1 of 1)\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Job on job flow j-3MMGSXIO3FQR3 failed with status WAITING: Waiting after step failed\r\n", "Logs are in s3://yoav.hadoop/j-3MMGSXIO3FQR3/\r\n", "ec2_key_pair_file not specified, going to S3\r\n", "Scanning S3 logs for probable cause of failure\r\n", "Waiting 5.0s for S3 eventual consistency\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Attempting to terminate job...\r\n", "Traceback (most recent call last):\r\n", " File \"mr_word_freq_count.py\", line 48, in \r\n", " MRWordFreqCount.run()\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/job.py\", line 494, in run\r\n", " mr_job.execute()\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/job.py\", line 512, in execute\r\n", " super(MRJob, self).execute()\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/launch.py\", line 147, in execute\r\n", " self.run_job()\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/launch.py\", line 213, in run_job\r\n", " self.stdout.flush()\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/runner.py\", line 614, in __exit__\r\n", " self.cleanup()\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/emr.py\", line 1010, in cleanup\r\n", " super(EMRJobRunner, self).cleanup(mode=mode)\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/runner.py\", line 560, in cleanup\r\n", " self._cleanup_job()\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/emr.py\", line 1084, in 
_cleanup_job\r\n", " self._opts['ec2_key_pair_file'])\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/ssh.py\", line 200, in ssh_terminate_single_job\r\n", " ssh_bin, address, ec2_key_pair_file, ['hadoop', 'job', '-list']))\r\n", " File \"//anaconda/lib/python2.7/site-packages/mrjob/ssh.py\", line 82, in ssh_run\r\n", " p = Popen(args, stdout=PIPE, stderr=PIPE, stdin=PIPE)\r\n", " File \"//anaconda/lib/python2.7/subprocess.py\", line 709, in __init__\r\n", " errread, errwrite)\r\n", " File \"//anaconda/lib/python2.7/subprocess.py\", line 1326, in _execute_child\r\n", " raise child_exception\r\n", "TypeError: execv() arg 2 must contain only strings\r\n" ] } ], "prompt_number": 11 }, { "cell_type": "code", "collapsed": false, "input": [ "!ls mrjob/examples/" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "__init__.py mr_jar_step_example.py \u001b[34mmr_postfix_bounce\u001b[m\u001b[m \u001b[31mmr_word_freq_count.py\u001b[m\u001b[m\r\n", "\u001b[34mbash_wrap\u001b[m\u001b[m mr_log_sampler.py mr_text_classifier.py py3k_word_freq_count.py\r\n", "\u001b[34mcontrib\u001b[m\u001b[m \u001b[31mmr_most_used_word.py\u001b[m\u001b[m \u001b[34mmr_travelling_salesman\u001b[m\u001b[m\r\n", "mr_cmd.py mr_next_word_stats.py mr_wc.py\r\n", "mr_grep.py mr_page_rank.py \u001b[31mmr_wc.rb\u001b[m\u001b[m\r\n" ] } ], "prompt_number": 67 }, { "cell_type": "code", "collapsed": false, "input": [ "%load $root_dir/examples/mr_travelling_salesman/README.rst" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 24 }, { "cell_type": "markdown", "metadata": {}, "source": [ "### HW ###\n", "\n", "1. Look around in the examples directory.\n", "2. Write a map-reduce job that computes the PCA of a large set of vectors (use as input the max_temp profiles in \n", "   /home/ubuntu/data/weather/SAMPLE_TMAX.csv).\n", "\n", "**Hint:** One map-reduce job is enough. You might think that you first need to compute the means $\\mu_i=E(X_i)$ and then, in a second pass, compute\n", "$$cov(X_i,X_j) = E((X_i-\\mu_i)(X_j-\\mu_j))$$\n", "However, recall the formula \n", "$$ var(X) \\doteq E((X-\\mu)^2) = E(X^2) - E(X)^2 $$\n", "This identity generalizes entrywise to the covariance matrix: $cov(X_i,X_j) = E(X_i X_j) - \\mu_i \\mu_j$. A single job can therefore accumulate the count, the sums $\\sum_x x_i$, and the sums of products $\\sum_x x_i x_j$, and form the covariance matrix (and its eigenvectors) at the end. A minimal sketch of this one-pass approach is given at the end of the notebook." ] }, { "cell_type": "code", "collapsed": false, "input": [ "!wc /home/ubuntu/data/weather/SAMPLE_TMAX.csv" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ " 20000 20000 26114979 /home/ubuntu/data/weather/SAMPLE_TMAX.csv\r\n" ] } ], "prompt_number": 28 }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [] }
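, { "cell_type": "markdown", "metadata": {}, "source": [ "#### A sketch for the PCA homework ####\n", "\n", "The cell below is **not** one of the mrjob examples; it is a minimal, untested sketch of the one-pass approach described in the hint above. It assumes that each line of SAMPLE_TMAX.csv is a comma-separated record whose first field is a station identifier and whose remaining fields are the daily max temperatures (adjust the parsing if the real file differs), that missing values have already been dealt with, and that `numpy` is available wherever the job runs (on EMR it would have to be bootstrapped). Each mapper accumulates the count, the vector sum $\\sum_x x$ and the matrix sum $\\sum_x x x^T$; a single reducer combines the partial sums, forms $cov = E(xx^T) - \\mu\\mu^T$, and reports the leading eigenvalues." ] }, { "cell_type": "code", "collapsed": false, "input": [ "%%writefile mr_pca.py\n", "# Hypothetical sketch -- the file name, input format and output are assumptions,\n", "# not part of the original notebook or of mrjob's examples.\n", "from mrjob.job import MRJob\n", "import numpy as np\n", "\n", "class MRPCA(MRJob):\n", "\n", "    def mapper_init(self):\n", "        # per-mapper accumulators: count, sum of x, sum of outer products x x^T\n", "        self.n = 0\n", "        self.s = None\n", "        self.ss = None\n", "\n", "    def mapper(self, _, line):\n", "        # assumed line format: station-id, v1, v2, ..., vd\n", "        x = np.array([float(v) for v in line.strip().split(',')[1:]])\n", "        if self.s is None:\n", "            self.s = np.zeros(len(x))\n", "            self.ss = np.zeros((len(x), len(x)))\n", "        self.n += 1\n", "        self.s += x\n", "        self.ss += np.outer(x, x)\n", "\n", "    def mapper_final(self):\n", "        # one record per mapper; lists rather than arrays so they JSON-encode\n", "        if self.n > 0:\n", "            yield None, (self.n, self.s.tolist(), self.ss.tolist())\n", "\n", "    def reducer(self, _, partials):\n", "        n, s, ss = 0, 0.0, 0.0\n", "        for pn, ps, pss in partials:\n", "            n += pn\n", "            s = s + np.array(ps)\n", "            ss = ss + np.array(pss)\n", "        mean = s / n\n", "        cov = ss / n - np.outer(mean, mean)  # E(X X^T) - E(X) E(X)^T\n", "        eigvals, eigvecs = np.linalg.eigh(cov)\n", "        # eigh returns eigenvalues in ascending order; report the largest few\n", "        yield 'top eigenvalues', sorted(eigvals.tolist(), reverse=True)[:5]\n", "\n", "if __name__ == '__main__':\n", "    MRPCA.run()" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "# run the sketch locally on the sample file (path taken from the homework statement)\n", "!python mr_pca.py /home/ubuntu/data/weather/SAMPLE_TMAX.csv" ], "language": "python", "metadata": {}, "outputs": [] } ], "metadata": {} } ] }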