{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# SIMD Autovectorization in Numba\n", "\n", "**WARNING**: *Due to CPU limitations on Binder, not all the benefits of SIMD instructions will be visible. You will see better SIMD performance if you download this notebook and run on Intel Haswell / AMD Zen or later.*\n", "\n", "Most modern CPUs support instructions that apply the same operation to multiple data elements simultaneously. These are called \"Single Instruction, Multiple Data\" (SIMD) operations, and the LLVM backend used by Numba can generate them in some cases to execute loops more quickly. (This process is called \"autovectorization.\")\n", "\n", "For example, Intel processors support SIMD instruction sets like:\n", "\n", "* SSE (128-bit inputs)\n", "* AVX (256-bit inputs)\n", "* AVX-512 (512-bit inputs, Skylake-X and later or Xeon Phi)\n", "\n", "These wide instructions typically operate on as many values as will fit into an input register. For AVX instructions, this means that either 8 float32 values or 4 float64 values can be processed by a single instruction. As a result, the NumPy dtype that you use can impact performance to a greater degree than when SIMD is not in use." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from numba import jit" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It can be somewhat tricky to determine when LLVM has successfully autovectorized a loop. The Numba team is working on exporting diagnostic information to show where the autovectorizer has generated SIMD code. For now, we can use a fairly crude approach of searching the assembly language generated by LLVM for SIMD instructions.\n", "\n", "It is also interesting to note what kind of SIMD is used on your system. On x86_64, the names of the registers used indicate which level of SIMD is in use:\n", "\n", "* SSE: `xmmX`\n", "* AVX/AVX2: `ymmX`\n", "* AVX-512: `zmmX`\n", "\n", "where X is an integer.\n", "\n", "**Note**: The method we use below to find SIMD instructions will only work on Intel/AMD CPUs. Other platforms have entirely different assembly language syntax for SIMD instructions." ] },
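{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick check of what your machine supports, we can ask LLVM (via `llvmlite`, which Numba depends on) which CPU features it detects. This is purely illustrative, and the exact feature names reported may vary with your llvmlite/LLVM version:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import llvmlite.binding as llvm\n", "\n", "# get_host_cpu_features() returns a dict-like mapping of feature name -> supported\n", "features = llvm.get_host_cpu_features()\n", "print(sorted(name for name, enabled in features.items()\n", "             if enabled and ('sse' in name or 'avx' in name)))" ] },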
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def find_instr(func, keyword, sig=0, limit=5):\n", "    '''Print up to `limit` lines of the assembly generated for signature\n", "    number `sig` of `func` that contain `keyword`.'''\n", "    count = 0\n", "    for line in func.inspect_asm(func.signatures[sig]).split('\\n'):\n", "        if keyword in line:\n", "            count += 1\n", "            print(line)\n", "            if count >= limit:\n", "                break\n", "    if count == 0:\n", "        print('No instructions found')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Basic SIMD\n", "\n", "Let's start with a simple function that returns the squared difference between two arrays, as you might write for a least-squares optimization:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@jit(nopython=True)\n", "def sqdiff(x, y):\n", "    out = np.empty_like(x)\n", "    for i in range(x.shape[0]):\n", "        out[i] = (x[i] - y[i])**2\n", "    return out" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x32 = np.linspace(1, 2, 10000, dtype=np.float32)\n", "y32 = np.linspace(2, 3, 10000, dtype=np.float32)\n", "sqdiff(x32, y32)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x64 = x32.astype(np.float64)\n", "y64 = y32.astype(np.float64)\n", "sqdiff(x64, y64)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Numba has created two different implementations of the function, one for `float32` 1-D arrays, and one for `float64` 1-D arrays:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sqdiff.signatures" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This allows Numba (and LLVM) to specialize the use of the SIMD instructions for each situation. In particular, using lower precision floating point allows twice as many values to fit into a SIMD register. We will see that for the same number of elements, the `float32` calculation goes twice as fast:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit sqdiff(x32, y32)\n", "%timeit sqdiff(x64, y64)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can check for SIMD instructions in both cases. (Due to the order of compilation above, signature 0 is the `float32` implementation and signature 1 is the `float64` implementation.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('float32:')\n", "find_instr(sqdiff, keyword='subp', sig=0)\n", "print('---\\nfloat64:')\n", "find_instr(sqdiff, keyword='subp', sig=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In x86_64 assembly, the SSE instruction `subps` stands for \"subtract packed single precision\" (the AVX form is `vsubps`) and represents vector float32 operations. The `subpd` instruction (AVX: `vsubpd`) stands for \"subtract packed double precision\" and represents vector float64 operations." ] },
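{ "cell_type": "markdown", "metadata": {}, "source": [ "We can also grep the assembly for the register names from the table above to see which level of SIMD was used. Depending on your CPU, you may need to try `xmm`, `ymm`, or `zmm` before one of them matches (if a keyword is absent, `find_instr` prints `No instructions found`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# xmm = SSE (128-bit), ymm = AVX/AVX2 (256-bit), zmm = AVX-512 (512-bit)\n", "find_instr(sqdiff, keyword='ymm', sig=0)" ] },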
{ "cell_type": "markdown", "metadata": {}, "source": [ "## SIMD and Division\n", "\n", "In general, the autovectorizer cannot deal with branches inside loops, although this is an area where LLVM is likely to improve in the future. Your best bet for SIMD acceleration is to have only pure math operations in the loop.\n", "\n", "As a result, you would naturally assume a function like this would be OK:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@jit(nopython=True)\n", "def frac_diff1(x, y):\n", "    out = np.empty_like(x)\n", "    for i in range(x.shape[0]):\n", "        out[i] = 2 * (x[i] - y[i]) / (x[i] + y[i])\n", "    return out" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "frac_diff1(x32, y32)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "find_instr(frac_diff1, keyword='subp', sig=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`No instructions found`?!\n", "\n", "The problem is that division by zero can behave in two different ways:\n", "\n", "* In Python, division by zero raises an exception.\n", "* In NumPy, division by zero produces an `inf` or `NaN` result (following IEEE 754 semantics), as in C.\n", "\n", "By default, Numba `@jit` follows the Python convention, and `@vectorize`/`@guvectorize` follow the NumPy convention. When following the Python convention, a simple division operation `r = x / y` expands out into something like:\n", "\n", "``` python\n", "if y == 0:\n", "    raise ZeroDivisionError()\n", "else:\n", "    r = x / y\n", "```\n", "\n", "This branch causes the autovectorizer to give up, so no SIMD is generated for our example above.\n", "\n", "Fortunately, Numba allows you to override the \"error model\" of the function if you don't want a `ZeroDivisionError` to be raised:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@jit(nopython=True, error_model='numpy')\n", "def frac_diff2(x, y):\n", "    out = np.empty_like(x)\n", "    for i in range(x.shape[0]):\n", "        out[i] = 2 * (x[i] - y[i]) / (x[i] + y[i])\n", "    return out" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "frac_diff2(x32, y32)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "find_instr(frac_diff2, keyword='subp', sig=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have SIMD instructions again, but when we check the speed:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit frac_diff2(x32, y32)\n", "%timeit frac_diff2(x64, y64)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is faster than the no-SIMD case, but there doesn't seem to be a speed benefit with `float32` inputs. What's going on?\n", "\n", "The remaining issue is very subtle. We can see it if we look at a type-annotated version of the function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "frac_diff2.inspect_types(pretty=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you expand out line 5 in the float32 version of the function, you will see the following bit of Numba IR:\n", "\n", "```\n", "$const28.2 = const(float, 2.0) :: float64\n", "$28.5 = getitem(value=x, index=i) :: float32\n", "$28.8 = getitem(value=y, index=i) :: float32\n", "$28.9 = $28.5 - $28.8 :: float32\n", "del $28.8\n", "del $28.5\n", "$28.10 = $const28.2 * $28.9 :: float64\n", "```\n", "\n", "Notice that the constant `2` has been typed as a float64 value." ] },
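{ "cell_type": "markdown", "metadata": {}, "source": [ "This matters because of the usual NumPy promotion rules: combining a float64 scalar with a float32 value yields a float64 result, as a quick (purely illustrative) check shows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A float64 constant \"wins\" over float32 values under NumPy's promotion rules\n", "(np.float64(2) * np.float32(1.5)).dtype" ] },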
{ "cell_type": "markdown", "metadata": {}, "source": [ "Later, this causes the multiplication `2 * (x[i] - y[i])` to promote up to float64, and then the rest of the calculation becomes float64. This is a situation where Numba is being overly conservative (and should be fixed at some point), but we can tweak this behavior by casting the constant to the type we want:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@jit(nopython=True, error_model='numpy')\n", "def frac_diff3(x, y):\n", "    out = np.empty_like(x)\n", "    dt = x.dtype  # Cast the constant using the dtype of the input\n", "    for i in range(x.shape[0]):\n", "        # Could also use np.float32(2) to always use the same type, regardless of input\n", "        out[i] = dt.type(2) * (x[i] - y[i]) / (x[i] + y[i])\n", "    return out" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "frac_diff3(x32, y32)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit frac_diff3(x32, y32)\n", "%timeit frac_diff3(x64, y64)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now our float32 version is nice and speedy (and 6x faster than what we started with, if we only care about float32)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SIMD and Reductions\n", "\n", "The autovectorizer can also optimize reduction loops, but only with permission. Normally, compilers are very careful not to reorder floating point instructions: because floating point arithmetic is approximate, transformations that are mathematically valid do not always give the same result. For example, it is not generally true for floating point numbers that:\n", "\n", "```\n", "(a + (b + c)) == ((a + b) + c)\n", "```\n", "\n", "In many situations, the round-off error that causes the difference between the left and right sides is not important, so changing the order of additions is an acceptable trade for a performance increase.\n", "\n", "To allow reordering of operations, we need to tell Numba to enable `fastmath` optimizations:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@jit(nopython=True)\n", "def do_sum(A):\n", "    acc = 0.\n", "    # without fastmath, this loop must accumulate in strict order\n", "    for x in A:\n", "        acc += x**2\n", "    return acc\n", "\n", "@jit(nopython=True, fastmath=True)\n", "def do_sum_fast(A):\n", "    acc = 0.\n", "    # with fastmath, the reduction can be vectorized, as floating point\n", "    # reassociation is permitted\n", "    for x in A:\n", "        acc += x**2\n", "    return acc" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "do_sum(x32)\n", "find_instr(do_sum, keyword='mulp')  # look for vector multiplication" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "do_sum_fast(x32)\n", "find_instr(do_sum_fast, keyword='mulp')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The fast version is 4x faster:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit do_sum(x32)\n", "%timeit do_sum_fast(x32)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SIMD and Special Functions\n", "\n", "If you follow the above guidelines, SIMD autovectorization will work for all basic math operations (`+`, `-`, `*`, `/`), but it generally will not work for function calls in the loop, unless LLVM can inline the function and there is only basic math in the function body." ] },
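{ "cell_type": "markdown", "metadata": {}, "source": [ "For instance, calls to other `@jit` functions are inlined at the LLVM level, so a loop built from such calls can still vectorize. The helper below is a hypothetical sketch (not part of the original example), and whether the loop actually vectorizes may depend on your CPU and LLVM version:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@jit(nopython=True, error_model='numpy')\n", "def scaled_diff(a, b):\n", "    # pure math only, so LLVM can inline this into the caller's loop\n", "    return (a - b) * (a + b)\n", "\n", "@jit(nopython=True, error_model='numpy')\n", "def apply_scaled_diff(x, y):\n", "    out = np.empty_like(x)\n", "    for i in range(x.shape[0]):\n", "        out[i] = scaled_diff(x[i], y[i])\n", "    return out\n", "\n", "apply_scaled_diff(x32, y32)\n", "find_instr(apply_scaled_diff, keyword='subp')" ] },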
{ "cell_type": "markdown", "metadata": {}, "source": [ "However, we build Numba (if you get conda packages from Anaconda or wheels from PyPI) using a patched version of LLVM that supports vectorization of special math functions when Intel SVML (the \"Short Vector Math Library\") is present. This library comes with the Intel compiler and is also freely redistributable. We've installed it in the current conda environment using `conda install -c numba icc_rt`, as we can verify here:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! conda list icc_rt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thanks to this library, we can still get SIMD vectorization in a function like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SQRT_2PI = np.sqrt(2 * np.pi)\n", "\n", "@jit(nopython=True, error_model='numpy', fastmath=True)\n", "def kde(x, means, widths):\n", "    '''Compute the value of a Gaussian kernel density estimate.\n", "\n", "    x - location of evaluation\n", "    means - array of kernel means\n", "    widths - array of kernel widths\n", "    '''\n", "    n = means.shape[0]\n", "    acc = 0.\n", "    for i in range(n):\n", "        acc += np.exp(-0.5 * ((x - means[i]) / widths[i])**2) / widths[i]\n", "    return acc / SQRT_2PI / n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The distribution we are approximating is flat between -1 and 1, so we expect a KDE value of ~0.5 everywhere\n", "means = np.random.uniform(-1, 1, size=10000)\n", "# These widths are not selected in any reasonable way. Consult your local statistician before approximating a PDF.\n", "widths = np.random.uniform(0.1, 0.3, size=10000)\n", "\n", "kde(0.4, means, widths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that SIMD instructions were generated:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "find_instr(kde, keyword='subp')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also see that calls to the special Intel SVML functions for `exp` were generated:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "find_instr(kde, keyword='svml')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit kde(0.4, means, widths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we recompile the function (which is possible because the `.py_func` attribute holds the original Python function) *without* the extra flags that allow division and reductions to vectorize, all autovectorization of the loop stops:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slow_kde = jit(nopython=True)(kde.py_func)\n", "\n", "slow_kde(0.4, means, widths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we get a slightly different answer, both because of the different order of operations and because of the small differences between SVML's `exp` and the default `exp`." ] },
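{ "cell_type": "markdown", "metadata": {}, "source": [ "A quick check of the size of that discrepancy (the exact value will vary from run to run and machine to machine):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The two compiled versions agree to within floating point round-off\n", "abs(kde(0.4, means, widths) - slow_kde(0.4, means, widths))" ] },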
{ "cell_type": "markdown", "metadata": {}, "source": [ "We also see that there are no SIMD instructions and no calls to SVML:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "find_instr(slow_kde, keyword='subp')\n", "print('---')\n", "find_instr(slow_kde, keyword='svml')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the function is much slower than the original:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit kde(0.4, means, widths)\n", "%timeit slow_kde(0.4, means, widths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And only the SIMD vectorized version is faster than doing this in pure NumPy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def numpy_kde(x, means, widths):\n", "    acc = (np.exp(-0.5 * ((x - means) / widths)**2) / widths).mean()\n", "    # .mean() already divides by n\n", "    return acc / SQRT_2PI" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "numpy_kde(0.4, means, widths)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit numpy_kde(0.4, means, widths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Why is NumPy as fast as it is? In this case, it is because the Anaconda build of NumPy uses MKL to accelerate (with SIMD and threads) many of the individual ufuncs, so the speed boost only emerges when Numba can fuse all of the operations into a single loop.\n", "\n", "Incidentally, although we wrote out the iteration in `kde` as a for loop to highlight what was going on, you still get the benefit of SIMD in Numba when compiling array expressions. We could have compiled `numpy_kde` directly:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "numba_numpy_kde = jit(nopython=True, error_model='numpy', fastmath=True)(numpy_kde)\n", "\n", "numba_numpy_kde(0.4, means, widths)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "find_instr(numba_numpy_kde, keyword='subp')\n", "print('---')\n", "find_instr(numba_numpy_kde, keyword='svml')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And it is nearly as fast as our manual looping version, and 2x faster than NumPy alone:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit kde(0.4, means, widths)\n", "%timeit numba_numpy_kde(0.4, means, widths)\n", "%timeit numpy_kde(0.4, means, widths)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }