{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "provenance": [] }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "language_info": { "name": "python" } }, "cells": [ { "cell_type": "markdown", "source": [ "## Setup" ], "metadata": { "id": "TJau23pA_vxN" } }, { "cell_type": "code", "source": [ "! pip install \"ray==2.2.0\"" ], "metadata": { "id": "oEQwycQn_xMZ" }, "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "## Your First Ray API Example" ], "metadata": { "id": "XtyPW2KpAnru" } }, { "cell_type": "markdown", "source": [ "To give you an example, take the following function which retrieves and processes data from a database. Our dummy database is a plain Python list containing the words of the title of this book. To simulate the idea that accessing and processing data from the database is costly, we have the function sleep (pause for a certain amount of time) in Python." ], "metadata": { "id": "u5UR33OMAmgU" } }, { "cell_type": "code", "source": [ "import time\n", "\n", "database = [\n", " \"Learning\", \"Ray\",\n", " \"Flexible\", \"Distributed\", \"Python\", \"for\", \"Machine\", \"Learning\"\n", "]\n", "\n", "\n", "def retrieve(item):\n", " time.sleep(item / 10.)\n", " return item, database[item]" ], "metadata": { "id": "h_2DVHrA_xv4" }, "execution_count": 18, "outputs": [] }, { "cell_type": "markdown", "source": [ "Our database has eight items in total. If we were to retrieve all items sequentially, how long should that take? For the item with index 5 we wait for half a second (5 / 10.) and so on. In total, we can expect a runtime of around `(0+1+2+3+4+5+6+7)/10. = 2.8` seconds. Let’s see if that’s what we actually get:" ], "metadata": { "id": "nCld8WFYAwR_" } }, { "cell_type": "code", "source": [ "def print_runtime(input_data, start_time):\n", " print(f'Runtime: {time.time() - start_time:.2f} seconds, data:')\n", " print(*input_data, sep=\"\\n\")\n", "\n", "\n", "start = time.time()\n", "data = [retrieve(item) for item in range(8)]\n", "print_runtime(data, start)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "0-GvF5WcAqZf", "outputId": "d75e2602-ba55-4108-eb93-ffb6514f8a64" }, "execution_count": 19, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Runtime: 2.81 seconds, data:\n", "(0, 'Learning')\n", "(1, 'Ray')\n", "(2, 'Flexible')\n", "(3, 'Distributed')\n", "(4, 'Python')\n", "(5, 'for')\n", "(6, 'Machine')\n", "(7, 'Learning')\n" ] } ] }, { "cell_type": "markdown", "source": [ "The total time it takes to run the function is 2.81 seconds, but this may vary on your individual computer. It's important to note that our basic Python version is not capable of running this function simultaneously.\n", "\n", "You may not have been surprised to hear this, but it's likely that you at least suspected that Python list comprehensions are more efficient in terms of performance. The runtime we measured, which was 2.8 seconds, is actually the worst case scenario. It may be frustrating to see that a program that mostly just \"sleeps\" during its runtime could still be so slow, but the reason for this is due to the Global Interpreter Lock (GIL), which gets enough criticism as it is." ], "metadata": { "id": "NVdvzJySBB3u" } }, { "cell_type": "markdown", "source": [ "### Ray Tasks" ], "metadata": { "id": "_An1n5kmBTYR" } }, { "cell_type": "markdown", "source": [ "It’s reasonable to assume that such a task can benefit from parallelization. Perfectly distributed, the runtime should not take much longer than the longest subtask, namely 7/10. = 0.7 seconds. So, let’s see how you can extend this example to run on Ray. To do so, you start by using the @ray.remote decorator" ], "metadata": { "id": "qxjHaaeGBUhR" } }, { "cell_type": "markdown", "source": [ "One aspect that may have been noticed is that the retrieve definition involves directly accessing items from the database. While this works well on a local Ray cluster, it is important to consider how this would function on an actual cluster with multiple computers. In a Ray cluster, there is a head node with a driver process and multiple worker nodes with worker processes executing tasks. By default, Ray creates as many worker processes as there are CPU cores on the machine. However, in this scenario the database is defined on the driver only, but the worker processes need access to it to run the retrieve task. Fortunately, Ray has a simple solution for sharing objects between the driver and workers or between workers - using the put function to place the data into Ray's distributed object store. In the retrieve_task definition, we explicitly include a db argument, which will later be passed the db_object_ref object." ], "metadata": { "id": "3xoOIINYDPM0" } }, { "cell_type": "code", "source": [ "import ray \n", "\n", "\n", "#By utilizing the object store in this manner, you can allow Ray to manage data \n", "#access throughout the entire cluster. We will discuss the specifics of how values \n", "#are transmitted between nodes and within workers when discussing Ray's infrastructure. \n", "db_object_ref = ray.put(database)\n", "\n", "\n", "@ray.remote\n", "def retrieve_task(item, db):\n", " time.sleep(item / 10.)\n", " return item, db[item]" ], "metadata": { "id": "aVF2ox7YBWXX" }, "execution_count": 23, "outputs": [] }, { "cell_type": "markdown", "source": [ "In this way, the function retrieve_task becomes a so-called Ray task. In essence, a Ray task is a function that gets executed on a different process that it was called from, potentially on a different machine.\n", "\n", "It is very convenient to use Ray because you can continue to write your Python code as usual, without having to significantly change your approach or programming style. Using the @ray.remote decorator on your retrieve function is the intended use of decorators, but for the purpose of clarity, we did not modify the original code in this example.\n", "\n", "To retrieve database entries and measure performance, what changes do you need to make to the code? It's actually not a lot. Here's an overview of the process:" ], "metadata": { "id": "gJ4LPyihBZpI" } }, { "cell_type": "markdown", "source": [ "In this case, it would be more efficient to allow the driver process to perform other tasks while waiting for results, and to process results as they are completed rather than waiting for all items to be finished. Additionally, if one of the database items cannot be retrieved due to an issue like a deadlock in the database connection, the driver will hang indefinitely. To prevent this, it is a good idea to set reasonable timeouts using the wait function. For example, if we do not want to wait longer than ten times the longest data retrieval task, we can use the wait function to stop the task after that time has passed." ], "metadata": { "id": "qYRTjbu2D3Bx" } }, { "cell_type": "code", "source": [ "start = time.time()\n", "object_references = [\n", " retrieve_task.remote(item, db_object_ref) for item in range(8)\n", "]\n", "all_data = []\n", "\n", "while len(object_references) > 0:\n", " finished, object_references = ray.wait(\n", " object_references, num_returns=2, timeout=7.0\n", " )\n", " data = ray.get(finished)\n", " print_runtime(data, start)\n", " all_data.extend(data)\n", "\n", "print_runtime(all_data, start)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "mBB05zOvA3hN", "outputId": "b35d41d9-8fc9-4321-8c66-812a67e06772" }, "execution_count": 26, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Runtime: 1.10 seconds, data:\n", "(0, 'Learning')\n", "(1, 'Ray')\n", "Runtime: 1.61 seconds, data:\n", "(2, 'Flexible')\n", "(3, 'Distributed')\n", "Runtime: 2.52 seconds, data:\n", "(4, 'Python')\n", "(5, 'for')\n", "Runtime: 3.83 seconds, data:\n", "(6, 'Machine')\n", "(7, 'Learning')\n", "Runtime: 3.83 seconds, data:\n", "(0, 'Learning')\n", "(1, 'Ray')\n", "(2, 'Flexible')\n", "(3, 'Distributed')\n", "(4, 'Python')\n", "(5, 'for')\n", "(6, 'Machine')\n", "(7, 'Learning')\n" ] } ] }, { "cell_type": "markdown", "source": [ "## Running A MapReduce Example" ], "metadata": { "id": "_k6TFrFSFt8z" } }, { "cell_type": "markdown", "source": [ "We will be using Python to implement the MapReduce algorithm for our word-count purpose and utilizing Ray to parallelize the computation. To better understand what we are working with, we will begin by loading some example data." ], "metadata": { "id": "a-lRsOuzFsLG" } }, { "cell_type": "code", "source": [ "import subprocess\n", "zen_of_python = subprocess.check_output([\"python\", \"-c\", \"import this\"])\n", "corpus = zen_of_python.split()\n", "\n", "num_partitions = 3\n", "chunk = len(corpus) // num_partitions\n", "partitions = [\n", " corpus[i * chunk: (i + 1) * chunk] for i in range(num_partitions)\n", "]" ], "metadata": { "id": "blnDNWntBcAF" }, "execution_count": 27, "outputs": [] }, { "cell_type": "markdown", "source": [ "We will be using the Zen of Python, a collection of guidelines from the Python community, as our data for this exercise. The Zen of Python can be accessed by typing \"import this\" in a Python session and is traditionally hidden as an \"Easter egg.\" While it is beneficial for Python programmers to read these guidelines, for the purposes of this exercise, we will only be counting the number of words contained within them. To do this, we will divide the Zen of Python into three separate \"documents\" by treating each line as a separate entity and then splitting it into these partitions.\n", "\n", "To determine the map phase, we require a map function that we will utilize on each document. In this particular scenario, we want to output the pair (word, 1) for every word found in a document. For basic text documents that are loaded as Python strings, this process appears as follows." ], "metadata": { "id": "NeUoFnBfF5i1" } }, { "cell_type": "code", "source": [ "def map_function(document):\n", " for word in document.lower().split():\n", " yield word, 1" ], "metadata": { "id": "ld9ZdN_kGBeR" }, "execution_count": 28, "outputs": [] }, { "cell_type": "markdown", "source": [ "We will use the apply_map function on a large collection of documents by marking it as a task in Ray using the @ray.remote decorator. When we call apply_map, it will be applied to three sets of document data (num_partitions=3). The apply_map function will return three lists, one for each partition. We do this so that Ray can rearrange the results of the map phase and distribute them to the appropriate nodes for us." ], "metadata": { "id": "4O2ADdh6GCrQ" } }, { "cell_type": "code", "source": [ "import ray\n", "\n", "@ray.remote\n", "def apply_map(corpus, num_partitions=3):\n", " map_results = [list() for _ in range(num_partitions)]\n", " for document in corpus:\n", " for result in map_function(document):\n", " first_letter = result[0].decode(\"utf-8\")[0]\n", " word_index = ord(first_letter) % num_partitions\n", " map_results[word_index].append(result)\n", " return map_results" ], "metadata": { "id": "_TxLBZKUGGm5" }, "execution_count": 29, "outputs": [] }, { "cell_type": "markdown", "source": [ "For text corpora that can be stored on a single machine, it is unnecessary to use the map phase. However, when the data needs to be divided across multiple nodes, the map phase becomes useful. In order to apply the map phase to our corpus in parallel, we use a remote call on apply_map, just like we have done in previous examples. The main difference now is that we also specify that we want three results returned (one for each partition) using the num_returns argument." ], "metadata": { "id": "2NaS3jkpGP_1" } }, { "cell_type": "code", "source": [ "map_results = [\n", " apply_map.options(num_returns=num_partitions)\n", " .remote(data, num_partitions)\n", " for data in partitions\n", "]\n", "\n", "for i in range(num_partitions):\n", " mapper_results = ray.get(map_results[i])\n", " for j, result in enumerate(mapper_results):\n", " print(f\"Mapper {i}, return value {j}: {result[:2]}\")" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "mu-7-HO9Fw1f", "outputId": "5494e133-fc31-4d36-cabb-414500cb8ae5" }, "execution_count": 30, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Mapper 0, return value 0: [(b'of', 1), (b'is', 1)]\n", "Mapper 0, return value 1: [(b'python,', 1), (b'peters', 1)]\n", "Mapper 0, return value 2: [(b'the', 1), (b'zen', 1)]\n", "Mapper 1, return value 0: [(b'unless', 1), (b'in', 1)]\n", "Mapper 1, return value 1: [(b'although', 1), (b'practicality', 1)]\n", "Mapper 1, return value 2: [(b'beats', 1), (b'errors', 1)]\n", "Mapper 2, return value 0: [(b'is', 1), (b'is', 1)]\n", "Mapper 2, return value 1: [(b'although', 1), (b'a', 1)]\n", "Mapper 2, return value 2: [(b'better', 1), (b'than', 1)]\n" ] } ] }, { "cell_type": "markdown", "source": [ "We can make it so that all pairs from the j-th return value end up on the same node for the reduce phase. Let’s discuss this phase next.\n", "\n", "In the reduce phase we can create a dictionary that sums up all word occurrences on each partition:\n" ], "metadata": { "id": "mh_HGkE3Gwe1" } }, { "cell_type": "code", "source": [ "@ray.remote\n", "def apply_reduce(*results):\n", " reduce_results = dict()\n", " for res in results:\n", " for key, value in res:\n", " if key not in reduce_results:\n", " reduce_results[key] = 0\n", " reduce_results[key] += value\n", "\n", " return reduce_results" ], "metadata": { "id": "HwpfHX6nGxws" }, "execution_count": 31, "outputs": [] }, { "cell_type": "markdown", "source": [ "We can take the j-th return value from each mapper and send it to the j-th reducer using the following method. It's important to note that this code works for larger datasets that don't fit on one machine because we are passing references to the data using Ray objects rather than the actual data itself. Both the map and reduce phases can be run on any Ray cluster and the data shuffling is also handled by Ray." ], "metadata": { "id": "A3Ye8OyXGz6x" } }, { "cell_type": "code", "source": [ "outputs = []\n", "for i in range(num_partitions):\n", " outputs.append(\n", " apply_reduce.remote(*[partition[i] for partition in map_results])\n", " )\n", "\n", "counts = {k: v for output in ray.get(outputs) for k, v in output.items()}\n", "\n", "sorted_counts = sorted(counts.items(), key=lambda item: item[1], reverse=True)\n", "for count in sorted_counts:\n", " print(f\"{count[0].decode('utf-8')}: {count[1]}\")" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "a_GmxzXvGT1F", "outputId": "a6eddede-9996-41f9-88f5-85e51dd7c90f" }, "execution_count": 32, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "is: 10\n", "better: 8\n", "than: 8\n", "the: 6\n", "to: 5\n", "of: 3\n", "although: 3\n", "be: 3\n", "unless: 2\n", "one: 2\n", "if: 2\n", "implementation: 2\n", "idea.: 2\n", "special: 2\n", "should: 2\n", "do: 2\n", "may: 2\n", "a: 2\n", "never: 2\n", "way: 2\n", "explain,: 2\n", "ugly.: 1\n", "implicit.: 1\n", "complex.: 1\n", "complex: 1\n", "complicated.: 1\n", "flat: 1\n", "readability: 1\n", "counts.: 1\n", "cases: 1\n", "rules.: 1\n", "in: 1\n", "face: 1\n", "refuse: 1\n", "one--: 1\n", "only: 1\n", "--obvious: 1\n", "it.: 1\n", "obvious: 1\n", "first: 1\n", "often: 1\n", "*right*: 1\n", "it's: 1\n", "it: 1\n", "idea: 1\n", "--: 1\n", "let's: 1\n", "python,: 1\n", "peters: 1\n", "simple: 1\n", "sparse: 1\n", "dense.: 1\n", "aren't: 1\n", "practicality: 1\n", "purity.: 1\n", "pass: 1\n", "silently.: 1\n", "silenced.: 1\n", "ambiguity,: 1\n", "guess.: 1\n", "and: 1\n", "preferably: 1\n", "at: 1\n", "you're: 1\n", "dutch.: 1\n", "good: 1\n", "are: 1\n", "great: 1\n", "more: 1\n", "zen: 1\n", "by: 1\n", "tim: 1\n", "beautiful: 1\n", "explicit: 1\n", "nested.: 1\n", "enough: 1\n", "break: 1\n", "beats: 1\n", "errors: 1\n", "explicitly: 1\n", "temptation: 1\n", "there: 1\n", "that: 1\n", "not: 1\n", "now: 1\n", "never.: 1\n", "now.: 1\n", "hard: 1\n", "bad: 1\n", "easy: 1\n", "namespaces: 1\n", "honking: 1\n", "those!: 1\n" ] } ] }, { "cell_type": "markdown", "source": [ "To gain a thorough understanding of how to scale MapReduce tasks across multiple nodes using Ray, including memory management, we suggest reading this [insightful blog post on the topic](https://medium.com/distributed-computing-with-ray/executing-adistributed-shuffle-without-a-mapreduce-system-d5856379426c).\n", "\n", "The important part about this MapReduce example is to realize how flexible Ray’s programming model really is. Surely, a production-grade MapReduce implementation takes a bit more effort. But being able to reproduce common algorithms like this one quickly goes a long way. Keep in mind that in the earlier phases of MapReduce, say around 2010, this paradigm was often the only thing you had to express your workloads. With Ray, a whole range of interesting distributed computing patterns become accessible to any intermediate Python programmer." ], "metadata": { "id": "yVuJd9LmGiNP" } } ] }