{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# NumPy Support in Numba\n", "\n", "Numba is designed to be used with NumPy and complement its capabilities. Numba supports:\n", "\n", "* Passing NumPy arrays as arguments, including structured dtypes\n", "* Creating compiled ufuncs and generalized ufuncs\n", "* Using a large subset of NumPy functions in nopython mode" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import numba\n", "from numba import jit" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Numba Specialization by Dtype\n", "\n", "Numba automatically uses multiple dispatch on compiled functions to allow different specialized implementations of the same function to be used. Suppose we have a function that clamps values to zero if they are below a particular magnitude:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@jit(nopython=True)\n", "def zero_clamp(x, threshold):\n", " # assume 1D array. See later in this notebook for more general function\n", " out = np.empty_like(x)\n", " for i in range(out.shape[0]):\n", " if np.abs(x[i]) > threshold:\n", " out[i] = x[i]\n", " else:\n", " out[i] = 0\n", " return out " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a_small = np.linspace(0, 1, 50)\n", "zero_clamp(a_small, 0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's benchmark some different kinds of array inputs. We'll try:\n", "\n", "* int64\n", "* float32\n", "* float32 with a stride (elements not contiguous in memory)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n = 10000\n", "a_int16 = np.arange(n).astype(np.int16)\n", "a_float32 = np.linspace(0, 1, n, dtype=np.float32)\n", "a_float32_strided = np.linspace(0, 1, 2*n, dtype=np.float32)[::2] # view of every other element" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit zero_clamp(a_int16, 1600)\n", "%timeit zero_clamp(a_float32, 0.3)\n", "%timeit zero_clamp(a_float32_strided, 0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see different performance characteristics for each of these cases, even though they have the same number of input elements. Numba generated different machine code for each situation, which we can see if we look at the `.signatures` attribute of the compiled function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "zero_clamp.signatures" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When printed as strings, Numba array types have the form: `array(dtype, dimensions, layout)`. The first signature therefore corresponds to a 1D array of float64 with C style layout (row-major order, no gaps between elements). The next two signatures are similar, but for `int16` and `float32` arrays. The final signature indicates an \"any\" layout array, which usually happens when you slice an array, and it no longer has a C or FORTRAN memory layout." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can compare to a pure NumPy implementation and see the speed improvement that Numba has achieved through a combination of specialization and elimination of temporary arrays:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def np_zero_clamp(x, threshold):\n", " return np.where(np.abs(x) > threshold, x, 0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit np_zero_clamp(a_int16, 1600)\n", "%timeit np_zero_clamp(a_float32, 0.3)\n", "%timeit np_zero_clamp(a_float32_strided, 0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating Ufuncs\n", "\n", "Universal functions, typically called \"ufuncs\" for short, are functions that broadcast an elementwise operation across input arrays of varying numbers of dimensions. Most NumPy functions are ufuncs, and Numba makes it easy to compile custom ufuncs using the `@vectorize` decorator." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from numba import vectorize" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@vectorize(nopython=True)\n", "def ufunc_zero_clamp(x, threshold):\n", " if np.abs(x) > threshold:\n", " return x\n", " else:\n", " return 0" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit ufunc_zero_clamp(a_int16, 1600)\n", "%timeit ufunc_zero_clamp(a_float32, 0.3)\n", "%timeit ufunc_zero_clamp(a_float32_strided, 0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that for this simple ufunc, Numba is not as fast as the function with the manual looping, and in some cases, is the same speed as the example that called NumPy directly. This is not surprising as this function is very simple, and NumPy *also uses compiled ufuncs*. Numba `@vectorize` is generally most effective when creating ufuncs that are not a simple combination of existing NumPy operations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Calling NumPy functions\n", "\n", "Numba supports many, but not all, NumPy functions. Some functions also have limitations that prevent the use of some of the optional arguments in nopython mode. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Note that when using NumPy functions on arrays, Numba will also compile and optimize array expressions:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def numpy_mpe(x, true):\n", "    return (((x - true)/true)**2).mean()\n", "\n", "numba_mpe = jit(nopython=True)(numpy_mpe)  # using jit as a function rather than a decorator" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can confirm that both versions give the same answer:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "true_x = 0.1\n", "x = np.random.normal(true_x, 1, size=100000)\n", "numpy_mpe(x, true=true_x), numba_mpe(x, true=true_x)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "And see that the Numba version is faster:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit numpy_mpe(x, true=0.1)\n", "%timeit numba_mpe(x, true=0.1)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "If the `scipy` package is installed, Numba will also automatically make use of the optimized BLAS/LAPACK implementation that SciPy was compiled with. In the case of Anaconda, this is Intel MKL, but OpenBLAS is also common for builds of `scipy`. (Note that Numba is not itself compiled and linked against any BLAS implementation.) Most functions in `numpy.linalg` will be accelerated this way, as will `numpy.dot`.\n", "\n", "Numba will not run any faster than NumPy for individual linear algebra routines (since both translate to calls to the same underlying library), but you can use linear algebra calls inside your Numba-compiled functions without any loss of performance." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }