{ "cells": [ { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "

Table of Contents

\n", "
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# code for loading the format for the notebook\n", "import os\n", "\n", "# path : store the current path to convert back to it later\n", "path = os.getcwd()\n", "os.chdir(os.path.join('..', 'notebook_format'))\n", "from formats import load_style\n", "load_style(plot_style=False)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ethen 2018-07-16 14:45:19 \n", "\n", "CPython 3.6.4\n", "IPython 6.4.0\n", "\n", "pyspark 2.2.1\n" ] } ], "source": [ "os.chdir(path)\n", "\n", "# 1. magic to print version\n", "# 2. magic so that the notebook will reload external python modules\n", "%load_ext watermark\n", "%load_ext autoreload\n", "%autoreload 2\n", "\n", "from pyspark.conf import SparkConf\n", "from pyspark.sql import SparkSession, Row\n", "\n", "%watermark -a 'Ethen' -d -t -v -p pyspark" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "
\n", "

SparkSession - in-memory

\n", " \n", "
\n", "

SparkContext

\n", "\n", "

Spark UI

\n", "\n", "
\n", "
Version
\n", "
v2.2.1
\n", "
Master
\n", "
local[*]
\n", "
AppName
\n", "
implicit
\n", "
\n", "
\n", " \n", "
\n", " " ], "text/plain": [ "" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "spark = (SparkSession.\n", " builder.\n", " master('local[*]').\n", " appName('implicit').\n", " config(conf = SparkConf()).\n", " getOrCreate())\n", "spark" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Tuning Spark Partitions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The main reason why we should care about partitions is performance. By having all relevant data in one place (node) we reduce the overhead of shuffling (need for serialization and network traffic). Understanding how Spark deals with partitions allow us to control the application parallelism (which leads to better cluster utilization - fewer costs). But keep in mind that partitioning will not be helpful in all applications. For example, if a given RDD is scanned only once, there is no point in partitioning it in advance. It's useful only when a dataset is reused multiple times and performing operations that involves a shuffle, e.g. `reduceByKey()`.\n", "\n", "We will use the following list of numbers to investigate the behavior of spark's partitioning." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of partitions: 2\n", "Partitioner: None\n", "Partitions structure: [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]\n" ] } ], "source": [ "num_partitions = 2\n", "rdd = spark.sparkContext.parallelize(range(10), num_partitions)\n", "\n", "print('Number of partitions: {}'.format(rdd.getNumPartitions()))\n", "print('Partitioner: {}'.format(rdd.partitioner))\n", "print('Partitions structure: {}'.format(rdd.glom().collect()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at what is happening under the hood. Spark uses different partitioning schemes for various types of RDD, in our case, our partitioner is None, If there is no partitioner, then the partitioning is not based upon characteristic of data but uniformly distributed across nodes.\n", "\n", "Looking at the partition structure, we can see that our RDD is in fact split into two partitions, and if we were to apply transformations on this RDD, then each partition's work will be executed in a separate thread.\n", "\n", "If you're confused about the `glom` method, it returns a RDD created by coalescing all elements within each partition into a list/array. An example usage of this method might be, say we wish to get the maximum value of a RDD, we could do:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "9" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# or in Scala: rdd.reduce(_ max _)\n", "rdd.reduce(max)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As reduce introduces lot of shuffles between partitions for comparison, we could instead:\n", "\n", "- Find the maximum in each partition\n", "- Compare maximum value between partitions to get the final max value." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "9" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rdd.glom().map(lambda partition: max(partition)).reduce(max)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next question is: What will happen when the number of partitions exceeds the number of data records?" 
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of partitions: 15\n", "Partitioner: None\n", "Partitions structure: [[], [0], [1], [], [2], [3], [], [4], [5], [], [6], [7], [], [8], [9]]\n" ] } ], "source": [ "num_partitions = 15\n", "rdd = spark.sparkContext.parallelize(range(10), num_partitions)\n", "\n", "print('Number of partitions: {}'.format(rdd.getNumPartitions()))\n", "print('Partitioner: {}'.format(rdd.partitioner))\n", "print('Partitions structure: {}'.format(rdd.glom().collect()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the output above, we can see that Spark created the requested number of partitions, but some of them are empty. This is bad because we would need to spend time preparing these idle threads." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Custom Partitions - PartitionBy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`partitionBy()` transformation gives the end-user the flexibility to apply custom partitioning logic over the RDD. To use `partitionBy()`, our RDD must be comprised of tuple (pair) objects. And again, it's highly advised to persist it for more optimal later usage.\n", "\n", "Let's get into a more realistic example. Imagine that our data consist of various dummy transactions made across different countries." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "transactions = [\n", " {'name': 'Bob', 'amount': 100, 'country': 'United Kingdom'},\n", " {'name': 'James', 'amount': 15, 'country': 'United Kingdom'},\n", " {'name': 'Marek', 'amount': 51, 'country': 'Poland'},\n", " {'name': 'Paul', 'amount': 75, 'country': 'Poland'}\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Say we know that our downstream analysis required analyzing records within the same country. To optimize network traffic it seems to be a good idea to put records from one country onto the same node/partition." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of partitions: 2\n", "Partitioner: \n", "Partitions structure: [[('Poland', {'name': 'Marek', 'amount': 51, 'country': 'Poland'}), ('Poland', {'name': 'Paul', 'amount': 75, 'country': 'Poland'})], [('United Kingdom', {'name': 'Bob', 'amount': 100, 'country': 'United Kingdom'}), ('United Kingdom', {'name': 'James', 'amount': 15, 'country': 'United Kingdom'})]]\n" ] } ], "source": [ "def country_partitioner(country):\n", " return hash(country)\n", "\n", "# note that we technically don't need to pass in the custom\n", "# partitioner when using partitionBy, if we don't then spark\n", "# will use its own hash partitioner to carry out the partitioning\n", "rdd = (spark.sparkContext.\n", " parallelize(transactions).\n", " map(lambda record: (record['country'], record)).\n", " partitionBy(2, country_partitioner))\n", "\n", "print(\"Number of partitions: {}\".format(rdd.getNumPartitions()))\n", "print(\"Partitioner: {}\".format(rdd.partitioner))\n", "print(\"Partitions structure: {}\".format(rdd.glom().collect()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It worked as expected, all records from the same country is within the same partition. We can perform some downstream work on them without worrying about large network shuffling. One caveat about this approach is that we should pay attention for potential data skews. 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Working With DataFrames" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Nowadays we are all advised to use the structured DataFrames from the Spark SQL module as opposed to raw RDDs whenever possible. When we call a DataFrame transformation, it is translated into a set of RDD transformations under the hood. The main advantage is that, with the DataFrame API, Spark understands the inner structure of our records much better and is capable of performing internal optimizations to increase the processing speed." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of partitions: 2\n", "Partitioner: None\n", "Partitions structure: [[Row(amount=51, country='Poland', name='Marek'), Row(amount=75, country='Poland', name='Paul')], [Row(amount=100, country='United Kingdom', name='Bob'), Row(amount=15, country='United Kingdom', name='James')]]\n" ] } ], "source": [ "spark = (SparkSession.\n", " builder.\n", " master('local[*]').\n", " config('spark.sql.shuffle.partitions', 2).\n", " getOrCreate())\n", "\n", "# create a spark DataFrame from the dictionary\n", "rdd = (spark.sparkContext.\n", " parallelize(transactions, 2).\n", " map(lambda x: Row(**x)))\n", "\n", "# here, we are essentially creating a custom partitioner,\n", "# by specifying we are going to repartition using the 'country' column.\n", "# We can think of this operation as performing an indexing on the 'country'\n", "# column from a relational database standpoint.\n", "# When not specifying the number of partitions, spark will use the value from\n", "# the config parameter 'spark.sql.shuffle.partitions'; in this example, we\n", "# explicitly set it to 2, if we didn't specify this value, the default would\n", "# be 200.\n", "df = spark.createDataFrame(rdd).repartition(2, 'country')\n", "\n", "print('Number of partitions: {}'.format(df.rdd.getNumPartitions()))\n", "print('Partitioner: {}'.format(rdd.partitioner))\n", "print('Partitions structure: {}'.format(df.rdd.glom().collect()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### coalesce vs repartition" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `coalesce()` and `repartition()` transformations are both used for changing the number of partitions of an RDD or DataFrame. The main difference is that:\n", "\n", "- To increase the number of partitions, use `repartition()`, which performs a full shuffle.\n", "- To decrease the number of partitions, use `coalesce()`, which minimizes data movement by merging existing partitions rather than shuffling everything."
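, "\n", "As a quick sanity check (a hypothetical snippet, not part of the original notebook), asking `coalesce()` for more partitions than a DataFrame currently has does not increase the count; the DataFrame simply keeps its existing partitions, whereas `repartition()` will happily shuffle the data into the requested number:\n", "\n", "```python\n", "# df currently has 2 partitions; coalesce cannot grow that number\n", "df.coalesce(8).rdd.getNumPartitions()   # still 2\n", "\n", "# repartition performs a full shuffle into 8 partitions\n", "df.repartition(8).rdd.getNumPartitions()  # 8\n", "```"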
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of partitions: 1\n", "Partitions structure: [[Row(amount=51, country='Poland', name='Marek'), Row(amount=75, country='Poland', name='Paul'), Row(amount=100, country='United Kingdom', name='Bob'), Row(amount=15, country='United Kingdom', name='James')]]\n", "Number of partitions: 4\n", "Partitions structure: [[Row(amount=15, country='United Kingdom', name='James'), Row(amount=75, country='Poland', name='Paul')], [], [], [Row(amount=100, country='United Kingdom', name='Bob'), Row(amount=51, country='Poland', name='Marek')]]\n" ] } ], "source": [ "# the coalesce algorithm merged the data from 1 partition to another\n", "# Partition to Partition A, thus it can't be used to increase the partition\n", "df_coalesce = df.coalesce(1)\n", "print('Number of partitions: {}'.format(df_coalesce.rdd.getNumPartitions()))\n", "print('Partitions structure: {}'.format(df_coalesce.rdd.glom().collect()))\n", "\n", "df_repartition = df.repartition(4)\n", "print('Number of partitions: {}'.format(df_repartition.rdd.getNumPartitions()))\n", "print('Partitions structure: {}'.format(df_repartition.rdd.glom().collect()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Best Practices" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In real world scenarios, times when we might want to use coalesce is after doing some aggregation or filtering on the giant raw data. e.g. Suppose we have a data that contains 2 billion rows of data (1 TB) split into 13,000 partitions. Suppose after the aggregation, we are only down to 2 million rows of data. Now if we were to save it as is, a lot of the output partitions will be empty. And as we can imagine, it's not efficient to read or write thousands of empty files, to improve this, we should call `coalesce`.\n", "\n", "```python\n", "# the .sample method mimics that our data is much smaller after the aggregation/filtering\n", "smaller_data = huge_data.sample(withReplacement = False, fraction = 0.001)\n", "smaller_data.coalesce(4)\n", "```\n", "\n", "The next million dollar question is: What is the optimal partition number?\n", "\n", "By now, we can probably guessed if we have too few partitions, we would potentially be faced with:\n", "\n", "- Less concurrency - We are not leveraging the advantages of parallelism. There could be worker nodes which are sitting ideal.\n", "- Data skewing and improper resource utilization - Our data might be skewed on one partition and hence we have one worker might be doing more than other workers and hence resource issues might come at that worker.\n", "- Memory issues: An error message that we might across upon is `java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE`. When this happens, increasing the number of partitions (therefore, reducing the average partition size) usually resolves the issue. Related link: [Stackoverflow: Why does Spark RDD partition has 2GB limit?\n", "](https://stackoverflow.com/questions/29689719/why-does-spark-rdd-partition-has-2gb-limit-for-hdfs)\n", "\n", "On the other hand, the disadvantages of too many partitions is that our time might be all spent on task scheduling as oppose to performing the actual computation. Hence, it is recommended to partition judiciously depending upon our cluster configuration and requirements. 
, "\n", "If interested, the following link also contains recommendations for tuning Spark applications, specifically `--num-executors`, `--executor-memory` and `--executor-cores`: [Blog: Distribution of Executors, Cores and Memory for a Spark Application running in Yarn](https://spoddutur.github.io/spark-notes/distribution_of_executors_cores_and_memory_for_spark_application.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Reference\n", "\n", "- [Blog: Glom in Spark RDD](http://alvincjin.blogspot.com/2015/11/glom-in-spark-rdd.html)\n", "- [Blog: Partitioning in Apache Spark](https://medium.com/parrot-prediction/partitioning-in-apache-spark-8134ad840b0)\n", "- [Blog: Understanding Spark Partitioning](https://techmagie.wordpress.com/2015/12/19/understanding-spark-partitioning/)\n", "- [Blog: Managing Spark Partitions with Coalesce and Repartition](https://hackernoon.com/managing-spark-partitions-with-coalesce-and-repartition-4050c57ad5c4)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" }, "toc": { "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "251px" }, "toc_section_display": true, "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }