{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Multithreading with Numba\n", "\n", "**WARNING**: *Due to the CPU restrictions on notebook execution on Binder, the benefits of multithreading are going to be erratic. The code in this notebook will run on Binder, but for reasonable benchmarks, you should download and run this notebook on your own system.*\n", "\n", "Numba supports several approaches to multithreading:\n", "\n", "* Automatic multithreading of array expressions and reductions\n", "* Explicit multithreading of loops with `prange()`\n", "* External multithreading with tools like concurrent.futures or Dask.\n", "\n", "The first two options make use of the *ParallelAccelerator* optimization pass (contributed by Intel) in Numba. ParallelAccelerator is only supported on 64-bit platforms, and is not available for Python 2.7 on Windows. It is also only effective when compiling in nopython mode." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import numba\n", "from numba import jit" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Automatic Multithreading\n", "\n", "NumPy array expressions have a significant amount of implied parallelism, as operations are broadcast independently over the input elements. ParallelAccelerator can identify this parallelism and automatically distribute it over several threads. All we need to do is enable the parallelization pass with `parallel=True`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SQRT_2PI = np.sqrt(2 * np.pi)\n", "\n", "@jit(nopython=True, parallel=True)\n", "def gaussians(x, means, widths):\n", " '''Return the value of gaussian kernels.\n", " \n", " x - location of evaluation\n", " means - array of kernel means\n", " widths - array of kernel widths\n", " '''\n", " n = means.shape[0]\n", " result = np.exp( -0.5 * ((x - means) / widths)**2 ) / widths\n", " return result / SQRT_2PI / n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "means = np.random.uniform(-1, 1, size=1000000)\n", "widths = np.random.uniform(0.1, 0.3, size=1000000)\n", "\n", "gaussians(0.4, means, widths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see the effect of multiple CPUs, we can compare to the case where ParallelAccelerator disabled. 
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gaussians_nothread = jit(nopython=True)(gaussians.py_func)\n", "\n", "%timeit gaussians_nothread(0.4, means, widths)\n", "%timeit gaussians(0.4, means, widths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also compare the performance to uncompiled NumPy array evaluation by using the `.py_func` attribute to get the original Python function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit gaussians.py_func(0.4, means, widths) # compare to pure NumPy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The performance ratio depends on the number of CPUs in your system, but the multithreaded version should be clearly faster than the single-threaded version.\n", "\n", "ParallelAccelerator can also handle reductions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@jit(nopython=True, parallel=True)\n", "def kde(x, means, widths):\n", "    '''Return the kernel density estimate evaluated at x.\n", "\n", "    x - location of evaluation\n", "    means - array of kernel means\n", "    widths - array of kernel widths\n", "    '''\n", "    n = means.shape[0]\n", "    result = np.exp( -0.5 * ((x - means) / widths)**2 ) / widths\n", "    return result.mean() / SQRT_2PI\n", "\n", "kde_nothread = jit(nopython=True)(kde.py_func)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit kde_nothread(0.4, means, widths)\n", "%timeit kde(0.4, means, widths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multithreading with `prange()`\n", "\n", "There are other situations where you would like multithreading, but do not have a straightforward array expression. In those cases, using `prange()` in a for loop indicates to ParallelAccelerator that this is a loop where each iteration is independent of the others and can be executed in parallel.\n", "\n", "For example, we might want to run many Monte Carlo trials in a row:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import random\n", "\n", "# Serial version\n", "@jit(nopython=True)\n", "def monte_carlo_pi_serial(nsamples):\n", "    acc = 0\n", "    for i in range(nsamples):\n", "        x = random.random()\n", "        y = random.random()\n", "        if (x**2 + y**2) < 1.0:\n", "            acc += 1\n", "    return 4.0 * acc / nsamples\n", "\n", "# Parallel version\n", "@jit(nopython=True, parallel=True)\n", "def monte_carlo_pi_parallel(nsamples):\n", "    acc = 0\n", "    # Only change is here\n", "    for i in numba.prange(nsamples):\n", "        x = random.random()\n", "        y = random.random()\n", "        if (x**2 + y**2) < 1.0:\n", "            acc += 1\n", "    return 4.0 * acc / nsamples" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that `prange()` automatically handles the reduction variable `acc` in a thread-safe way for you. We are also relying on Numba to automatically initialize the random number generator in each thread independently.\n", "\n", "You can also have each thread in a `prange()` modify a separate element in an output array, but more general race conditions are not automatically resolved by ParallelAccelerator. Be careful!" ] },
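 { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, the minimal sketch below (the function `fill_normalized` is illustrative and not part of the original example) has each `prange()` iteration write only to its own element of a preallocated output array, so no two threads ever touch the same location:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hypothetical example: each iteration owns out[i], so parallel writes never collide.\n", "@jit(nopython=True, parallel=True)\n", "def fill_normalized(x):\n", "    out = np.empty_like(x)\n", "    total = x.sum()\n", "    for i in numba.prange(x.shape[0]):\n", "        out[i] = x[i] / total\n", "    return out\n", "\n", "fill_normalized(np.random.uniform(0.1, 0.3, size=10))" ] },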
 { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how fast these two implementations are:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%time monte_carlo_pi_serial(int(4e8))\n", "%time monte_carlo_pi_parallel(int(4e8))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The parallel version should saturate all the CPUs once the initial compilation finishes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### External Multithreading\n", "\n", "Sometimes your threading system is external to Numba entirely. You might be using `concurrent.futures` to run functions in multiple threads, or a parallel framework like [Dask](http://dask.pydata.org/). For these situations, you do not want to use ParallelAccelerator, but do want to allow the Numba-compiled function to run concurrently in different threads.\n", "\n", "To do this, you want the Numba function to release the Global Interpreter Lock (GIL) during execution. This can be done using the `nogil=True` option to `@jit`.\n", "\n", "Let's do our Monte Carlo example again, but with Dask. Note that Numba will still handle initializing separate random number generator seeds on each thread, as it did with ParallelAccelerator." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import dask\n", "import dask.delayed\n", "\n", "@jit(nopython=True, nogil=True)\n", "def monte_carlo_pi(nsamples):\n", "    acc = 0\n", "    for i in range(nsamples):\n", "        x = random.random()\n", "        y = random.random()\n", "        if (x**2 + y**2) < 1.0:\n", "            acc += 1\n", "    return 4.0 * acc / nsamples\n", "\n", "print(monte_carlo_pi(int(1e6)))\n", "\n", "delayed_monte_carlo_pi = dask.delayed(monte_carlo_pi)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Parallel execution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "futures = [delayed_monte_carlo_pi(int(4e8)) for i in range(4)]\n", "results = dask.compute(futures)[0]\n", "print(sum(results)/4) # average results" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Serial execution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "futures = [delayed_monte_carlo_pi(int(4e8)) for i in range(4)]\n", "results = dask.compute(futures, num_workers=1)[0]\n", "print(sum(results)/4) # average results" ] },
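 { "cell_type": "markdown", "metadata": {}, "source": [ "As noted above, `concurrent.futures` is another option for external multithreading. The cell below is a minimal sketch (the worker count and per-thread sample size are illustrative, not from the original notebook): because `monte_carlo_pi` was compiled with `nogil=True`, ordinary Python threads can execute it concurrently." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from concurrent.futures import ThreadPoolExecutor\n", "\n", "# Hypothetical sketch: four threads each run the nogil-compiled function,\n", "# so the work can proceed in parallel despite the GIL.\n", "with ThreadPoolExecutor(max_workers=4) as executor:\n", "    results = list(executor.map(monte_carlo_pi, [int(1e7)] * 4))\n", "print(sum(results) / 4) # average results" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }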