{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# GraphLab Create v Common Crawl 2012 WebGraph\n", "## or, How I Learned to Stop Worrying about 128B edges and Love the PageRank" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here at Dato we highly evaluate openness and transparency. Look at the [Dato Gallery](https://dato.com/learn/gallery/ \"Dato Gallery\"), filled with notebooks you can download and execute on your own machine, to see that that GraphLab Create can really do what we say it does. So when it came to benchmarks, we said, why not upload a notebook about it?\n", "\n", "This notebook will describe our **PageRank benchmark**. The dataset for this benchmark is the web itself - compiled by good people from [commoncrawl.org](http://commoncrawl.org/) in 2012. You can [download the dataset from here](http://webdatacommons.org/hyperlinkgraph/2012-08/download.html#toc0). We will run the **PageRank** algorithm over a network of **3.5 billion nodes** and **128 billion links**. Each node represents a web page, and each link - a hyperlink between two pages. GraphLab should do it in **about 5 hours**.\n", "\n", "Running this benchmark will prove you how powerful and robust GraphLab is - we are not aware of any general-purpose graph analytics system that can cope with this task, either on a single machine or distributed. However, unlike other notebooks in the gallery, we **don't recommend running this notebook on your laptop!**. Instead, this notebook will describe how to run this benchmark on an EC2 instance in the Amazon Web Services (AWS) cloud.\n", "\n", "We'll be using an **r3.8xlarge** EC2 instance. That's a strong machine,\n", "with 32 cores, 244 Gigabytes of RAM, and 2 SSD drives, each sized 320 GBs.\n", "If you can access a physical machine of this calibre, expect similar results.\n", "\n", "Here are the steps for running this benchmark (over EC2, of course):\n", "\n", "1. Launch an EC2 instance,\n", "2. Install GraphLab Create and Jupyter (formerly *IPython Notebook*) on it,\n", "3. Connect to your Jupyter instance running on the EC2 machine and run the benchmark." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1: Launch an EC2 instance\n", "\n", "We created a detailed guide for launching an EC2 instance, [which is available here](https://github.com/guy4261/glc_pagerank_benchmark/blob/master/commoncrawl_benchmark_ec2_instructions/guide.pdf).\n", "\n", "The specifications for this instance are as following:\n", "* The region should be US West (Oregon).\n", "* The AMI should be **Ubuntu Server 14.04 LTS (HVM), SSD Volume Type - ami-9abea4fb**.\n", "* The instance type should be **r3.8xlarge** (32 cores, 244 GBs of RAM, two 320 GB SSDs)\n", "* The storage should include a **Root volume of 16 GBs**.\n", "* The Security Group should allow everyone to access **ports 22 (SSH) and 8888 (Jupyter)**.\n", "\n", "Again - if you are not sure how to launch an AWS instance, don't worry! [Use our guide](https://github.com/guy4261/glc_pagerank_benchmark/blob/master/commoncrawl_benchmark_ec2_instructions/guide.pdf), which includes the screenshots of all the stages you will go through to launch your instance.\n", "\n", "Once you've set up your machine, connect to it via ssh (OS X, Linux) or a client such as PuTTY (Windows), and proceed to the next step." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2: Install GraphLab Create and Jupyter\n", "\n", "When connected to the your machine, download [the installation script from here](https://raw.githubusercontent.com/guy4261/glc_pagerank_benchmark/master/install.sh) and run it.\n", "\n", "```bash\n", "$ wget https://raw.githubusercontent.com/guy4261/glc_pagerank_benchmark/master/install.sh\n", "$ chmod u+x install.sh\n", "$ ./install.sh\n", "```\n", "\n", "When the script will finish running, you will be able to access Jupyter via your browser, and run code that will execute on your EC2 instance.\n", "\n", "The script will install Python, GraphLab Create and Jupyter, and will start Jupyter on port 8888. You can see the entire script here:\n", "\n", "https://raw.githubusercontent.com/guy4261/glc_pagerank_benchmark/master/install.sh" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tip: Password protecting your Jupyter server\n", "Since we assume you are only creating this instance for the purpose of running this benchmark, we allow the running Jupyter server to be open to the outside world. However, if you want to password-protect your instance, run the following lines in your shell (after connecting via SSH to your instance).\n", "```bash\n", "# Kill the previous jupyter instance\n", "$ kill -9 `cat pid`\n", "\n", "# Generate config files for jupyter\n", "$ jupyter notebook --generate-config\n", "\n", "# Set a password for the Jupyter instane\n", "$ python -c \"from notebook.auth import passwd; password = passwd(); open('/home/ubuntu/.jupyter/jupyter_notebook_config.py', 'a').write('c.NotebookApp.password = u\\'%s\\'' % (password))\"\n", "\n", "# Restart Jupyter\n", "$ nohup jupyter notebook --no-browser --ip=\"*\" & > pid\n", "```\n", "\n", "You will be prompted for a password the next time you browse to the Jupyter address on your instance." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3: Connect to Jupyter and run the benchmark\n", "\n", "If you used the install script, [the benchmark notebook](https://github.com/guy4261/glc_pagerank_benchmark/blob/master/commoncrawl_benchmark.ipynb) has been downloaded to your EC2 instance and should be visible when you browse into Jupyter.\n", "\n", "You can also download the notebook from this address: https://raw.githubusercontent.com/guy4261/glc_pagerank_benchmark/master/commoncrawl_benchmark.ipynb\n", "\n", "The notebook does two main things: first, it prepares and mounts the two SSD volumes available on your instance. Then, it runs the benchmark. The benchmark essentially consists of the following four lines:\n", "\n", "```python\n", "import graphlab as gl\n", "s3_sgraph_path = \"s3://dato-datasets-oregon/webgraphs/sgraph/common_crawl_2012_sgraph\"\n", "g = gl.load_sgraph(s3_sgraph_path)\n", "pr = gl.pagerank.create(g)\n", "```\n", "\n", "But since this is supposedly the first time you run GraphLab Create, and the first time you pull anything off AWS, you will need to enter 3 keys - your GraphLab Create Product Key, and your AWS Access Key ID and Secret Access Key. You can see [the rendered notebook on github](https://github.com/guy4261/glc_pagerank_benchmark/blob/master/commoncrawl_benchmark.ipynb).\n", "\n", "When the benchmarking is running, you can close your browser window. If you do that, then when you come back, create a new cell and run some Python code to see if the benchmark is still running. 
If you'll get a (\\*) mark next to your cell - that's your signal that there's some ongoing calculation taking place - the benchmark. Otherwise, if you immediately see output then the benchmark is done. You can examine the resulting benchmark object (`pr`)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Overview\n", "\n", "The [dataset's webpage says](http://webdatacommons.org/hyperlinkgraph/2012-08/download.html#toc0):\n", " \n", " Downloading the page graph: The page graph (arc and indes files) are, due to their size split into in small files of around 500 MB. These files can be downloaded using ```wget -i http://webdatacommons.org/hyperlinkgraph/2012-08/data/index.list.txt``` for the index files and respectively ```wget -i http://webdatacommons.org/hyperlinkgraph/2012-08/data/arc.list.txt``` for the arc files.\n", "\n", "Examining http://webdatacommons.org/hyperlinkgraph/2012-08/data/arc.list.txt , we will find a list of URLs pointing at .gz files.\n", "\n", "```\n", "http://data.dws.informatik.uni-mannheim.de/hyperlinkgraph/2012-08/network/part-r-00000.gz\n", "http://data.dws.informatik.uni-mannheim.de/hyperlinkgraph/2012-08/network/part-r-00001.gz\n", "...\n", "http://data.dws.informatik.uni-mannheim.de/hyperlinkgraph/2012-08/network/part-r-00696.gz\n", "```\n", "\n", "As CommonCrawl's own documentation says, each of these gzip files weights ~500 MBs. The entire edges dataset weight around **330 GBs**. Here is the `head` of the first file (`part-r-00000.gz`):\n", "\n", "```\n", "0\t739935047\n", "1\t741742773\n", "2\t741745070\n", "```\n", "\n", "This is a very common format for storing graph edges - id1-TAB-id2.\n", "While GraphLab could handle loading such a graph, we saved you the the trouble of downloading the files to your EC2 instance and uploaded it to an open S3 bucket. To access it you only need AWS credentials.\n", "\n", "The SGraph created by loading the data is stored in binary form in this bucket; that way, it weights around **218 GBs** - which is less than the **330 GBs of gzipped edges' files**.\n", "Also, since this data is stored on Amazon's S3, accessing it from your EC2 instance should be faster than downloading the raw data from CommonCrawl's servers.\n", "\n", "The bucket containing the SGraph is located at [s3://dato-datasets-oregon/webgraphs/sgraph/common_crawl_2012_sgraph/](s3://dato-datasets-oregon/webgraphs/sgraph/common_crawl_2012_sgraph/) .\n", "\n", "For those who would like to download it via the AWS command-line tool, use the following command: \n", "\n", "`aws s3 cp s3://dato-datasets/webgraphs/sgraph/common_crawl_2012_sgraph ccg2012 --recursive`\n", "\n", "This will create a new folder `ccg2012` in the directory where the command is executed. You can, of course, choose a different path." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "This notebook covers it all: where are the fully-detailed instructions for launching an EC2 instance, how to get the script that installs everything from Python to Jupyter on your instance, and how to access your instance to run the benchmark. Our goal was to help anyone willing to run one of the most heavy-duty benchmarks of graph algorithms, even if you don't have the equipment to support it (or previous knowledege of EC2). If you still come across any trouble, drop a line to [Guy Rapaport <guy@dato.com>](mailto:guy@dato.com) and ask for help.\n", "\n", "Good luck and Happy Benchmarking!" 
] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.10" } }, "nbformat": 4, "nbformat_minor": 0 }