{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# GTC 2019 Numba Tutorial Notebook 2: CuPy and Numba on the GPU\n", "\n", "\n", "\n", "In the previous notebook, we saw how NumPy and Numba could be used for array math on the CPU. Array operations are very amenable to execution on a massively parallel GPU. We will not go into the CUDA programming model too much in this tutorial, but the most important thing to remember is that the GPU hardware is designed for *data parallelism*. Maximum throughput is achieved when you are computing the same operations on many different elements at once. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What is CuPy?\n", "\n", "Simply put: [CuPy](https://cupy.chainer.org/) is NumPy, but for the GPU. [Preferred Networks](https://www.preferred-networks.jp/en/) created CuPy as the GPU backend for their deep learning library, [Chainer](https://chainer.org/), but it also works great as a standalone NumPy-like GPU array library. If you know NumPy, CuPy is a very easy way to get started on the GPU.\n", "\n", "Just like NumPy, CuPy offers 3 basic things:\n", "\n", "1. A multidimensional array object, but stored in GPU memory.\n", "2. A ufunc system that follows broadcast rules, but executes in parallel on the GPU.\n", "3. A large library of array functions already implemented with CUDA." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import cupy as cp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "CuPy arrays look just like NumPy arrays:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ary = cp.arange(10).reshape((2,5))\n", "print(repr(ary))\n", "print(ary.dtype)\n", "print(ary.shape)\n", "print(ary.strides)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This array is in the GPU memory of the default GPU (device 0). 
We can see which device holds an array by inspecting the special `device` attribute:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ary.device" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can move data from the CPU to the GPU using the `cp.asarray()` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ary_cpu = np.arange(10)\n", "ary_gpu = cp.asarray(ary_cpu)\n", "print('cpu:', ary_cpu)\n", "print('gpu:', ary_gpu)\n", "print(ary_gpu.device)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that when we print the contents of a GPU array, CuPy copies the data from the GPU back to the CPU so it can print the results.\n", "\n", "If we are done with the data on the GPU, we can convert it back to a NumPy array on the CPU with the `cp.asnumpy()` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ary_cpu_returned = cp.asnumpy(ary_gpu)\n", "print(repr(ary_cpu_returned))\n", "print(type(ary_cpu_returned))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### GPU Array Math\n", "\n", "Most of the NumPy API is supported in CuPy with identical function names and arguments:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(ary_gpu * 2)\n", "print(cp.exp(-0.5 * ary_gpu**2))\n", "print(cp.linalg.norm(ary_gpu))\n", "print(cp.random.normal(loc=5, scale=2.0, size=10))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You may notice a slight pause when you run these functions the first time. This is because CuPy has to compile the CUDA functions on the fly, and then cache them to disk for reuse in the future.\n", "\n", "That's pretty much it! CuPy is very easy to use and has excellent [documentation](https://docs-cupy.chainer.org/en/stable/overview.html), which you should become familiar with.\n", "\n", "Before we get into GPU performance measurement, let's switch gears back to Numba." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## When would I want Numba on the GPU?\n", "\n", "Just as with NumPy, Numba is useful with CuPy when you want to:\n", "\n", "* Combine several operations together for greater efficiency.\n", "* Implement custom algorithms that are not easily described by combining CuPy functions.\n", "\n", "Numba's compiler pipeline, which transforms Python functions into machine code, can also generate CUDA functions that can be used standalone or together with CuPy. There are two basic approaches supported by Numba:\n", "\n", " 1. ufuncs/gufuncs (subject of the rest of this notebook)\n", " 2. CUDA Python kernels (subject of the next notebook)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Making new ufuncs for the GPU\n", "\n", "Numba has the ability to create compiled ufuncs: you implement a scalar function of all the inputs, and Numba figures out the broadcast rules for you. 
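A ufunc compiled for the CPU needs no type signature at all; Numba can infer the types lazily on the first call. A minimal sketch for contrast (the name `add_ufunc_cpu` is our own):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from numba import vectorize\n", "\n", "# With no signature, Numba compiles a CPU ufunc lazily for whatever\n", "# argument types it first encounters.\n", "@vectorize\n", "def add_ufunc_cpu(x, y):\n", "    return x + y\n", "\n", "add_ufunc_cpu(np.arange(4), np.arange(4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "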
Generating a ufunc that uses CUDA, by contrast, requires giving an explicit type signature and passing the `target` keyword argument:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from numba import vectorize\n", "\n", "@vectorize(['int64(int64, int64)'], target='cuda')\n", "def add_ufunc(x, y):\n", "    return x + y" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a = np.array([1, 2, 3, 4])\n", "b = np.array([10, 20, 30, 40])\n", "b_col = b[:, np.newaxis] # b as column array\n", "c = np.arange(4*4).reshape((4,4))\n", "\n", "print('a+b:\\n', add_ufunc(a, b))\n", "print()\n", "print('b_col + c:\\n', add_ufunc(b_col, c))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A lot of things just happened! Numba automatically:\n", "\n", " * Compiled a CUDA kernel to execute the ufunc operation in parallel over all the input elements.\n", " * Allocated GPU memory for the inputs and the output.\n", " * Copied the input data to the GPU.\n", " * Executed the CUDA kernel with the correct kernel dimensions given the input sizes.\n", " * Copied the result back from the GPU to the CPU.\n", " * Returned the result as a NumPy array on the host.\n", "\n", "This is very convenient for testing, but copying data back and forth between the CPU and GPU can be slow and hurt performance. In the next tutorial notebook, you'll learn about device management, memory allocation, and using CuPy arrays with Numba.\n", "\n", "You might be wondering how fast our simple example runs on the GPU. Let's see:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit np.add(b_col, c) # NumPy on CPU" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit add_ufunc(b_col, c) # Numba on GPU" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wow, the GPU is *a lot slower* than the CPU. What happened??\n", "\n", "This is to be expected because we have (deliberately) misused the GPU in several ways in this example:\n", "\n", " * **Our inputs are too small**: the GPU achieves performance through parallelism, operating on thousands of values at once. Our test inputs have only 4 and 16 integers, respectively. We need a much larger array to even keep the GPU busy.\n", " * **Our calculation is too simple**: sending a calculation to the GPU involves quite a bit of overhead compared to calling a function on the CPU. If our calculation does not involve enough math operations (often called \"arithmetic intensity\"), then the GPU will spend most of its time waiting for data to move around.\n", " * **We copy the data to and from the GPU**: while including the copy time can be realistic for a single function, often we want to run several GPU operations in sequence. In those cases, it makes sense to send data to the GPU and keep it there until all of our processing is complete. (See the sketch after this list.)\n", " * **Our data types are larger than necessary**: our example uses `int64` when we probably don't need it. Scalar code using 32-bit and 64-bit data types runs at basically the same speed on the CPU, but 64-bit data types have a significant performance cost on the GPU. Basic arithmetic on 64-bit floats can be anywhere from 2x (Pascal-architecture Tesla) to 24x (Maxwell-architecture GeForce) slower than 32-bit floats. NumPy defaults to 64-bit data types when creating arrays, so it is important to pass the `dtype` argument or use the `ndarray.astype()` method to pick 32-bit types when you need them." ] }, 
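{ "cell_type": "markdown", "metadata": {}, "source": [ "As promised in the list above, here is a minimal sketch of keeping data on the GPU between calls using Numba device arrays. (Device memory management is covered properly in the next notebook; the variable names here are our own.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from numba import cuda\n", "\n", "# Copy the inputs to the GPU once, up front.\n", "a_device = cuda.to_device(a)\n", "b_device = cuda.to_device(b)\n", "\n", "# Allocate an output buffer on the device so the result stays on the GPU.\n", "out_device = cuda.device_array(shape=(4,), dtype=np.int64)\n", "\n", "# The ufunc now reads and writes GPU memory directly; only the final\n", "# copy_to_host() call moves data back to the CPU.\n", "add_ufunc(a_device, b_device, out=out_device)\n", "out_device.copy_to_host()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "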
Given the above, let's try an example that is faster on the GPU:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math # Note that for the CUDA target, we need to use the scalar functions from the math module, not NumPy\n", "\n", "SQRT_2PI = np.float32((2*math.pi)**0.5) # Precompute this constant as a float32. Numba will inline it at compile time.\n", "\n", "@vectorize(['float32(float32, float32, float32)'], target='cuda')\n", "def gaussian_pdf(x, mean, sigma):\n", "    '''Compute the value of a Gaussian probability density function at x with given mean and sigma.'''\n", "    return math.exp(-0.5 * ((x - mean) / sigma)**2) / (sigma * SQRT_2PI)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Evaluate the Gaussian distribution PDF a million times!\n", "x = np.random.uniform(-3, 3, size=1000000).astype(np.float32)\n", "mean = np.float32(0.0)\n", "sigma = np.float32(1.0)\n", "\n", "# Quick test\n", "gaussian_pdf(x[0], 0.0, 1.0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import scipy.stats # for the definition of the Gaussian distribution\n", "norm_pdf = scipy.stats.norm\n", "%timeit norm_pdf.pdf(x, loc=mean, scale=sigma)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit gaussian_pdf(x, mean, sigma)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's a pretty large improvement, even including the overhead of copying all the data to and from the GPU. Ufuncs that use special functions (`exp`, `sin`, `cos`, etc.) on large `float32` data sets run especially well on the GPU." ] }, 
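{ "cell_type": "markdown", "metadata": {}, "source": [ "How much of that improvement comes from using `float32`? As a quick, informal check, we can compile a `float64` variant of the same ufunc and time it; the size of the gap depends on your GPU, as noted in the list above. (The name `gaussian_pdf_64` is our own.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Same math as gaussian_pdf, but compiled for 64-bit floats.\n", "@vectorize(['float64(float64, float64, float64)'], target='cuda')\n", "def gaussian_pdf_64(x, mean, sigma):\n", "    return math.exp(-0.5 * ((x - mean) / sigma)**2) / (sigma * SQRT_2PI)\n", "\n", "x64 = x.astype(np.float64)\n", "%timeit gaussian_pdf_64(x64, np.float64(mean), np.float64(sigma))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "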
## CUDA Device Functions\n", "\n", "Ufuncs are great, but you should not have to cram all of your logic into a single function body. You can also create normal functions that are only called from other functions running on the GPU. (These are similar to CUDA C functions defined with `__device__`.)\n", "\n", "Device functions are created with the `numba.cuda.jit` decorator:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from numba import cuda\n", "\n", "@cuda.jit(device=True)\n", "def polar_to_cartesian(rho, theta):\n", "    x = rho * math.cos(theta)\n", "    y = rho * math.sin(theta)\n", "    return x, y # This is Python, so let's return a tuple\n", "\n", "@vectorize(['float32(float32, float32, float32, float32)'], target='cuda')\n", "def polar_distance(rho1, theta1, rho2, theta2):\n", "    x1, y1 = polar_to_cartesian(rho1, theta1)\n", "    x2, y2 = polar_to_cartesian(rho2, theta2)\n", "\n", "    return ((x1 - x2)**2 + (y1 - y2)**2)**0.5" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n = 1000000\n", "rho1 = np.random.uniform(0.5, 1.5, size=n).astype(np.float32)\n", "theta1 = np.random.uniform(-np.pi, np.pi, size=n).astype(np.float32)\n", "rho2 = np.random.uniform(0.5, 1.5, size=n).astype(np.float32)\n", "theta2 = np.random.uniform(-np.pi, np.pi, size=n).astype(np.float32)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "polar_distance(rho1, theta1, rho2, theta2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the CUDA compiler aggressively inlines device functions, so there is generally no overhead for function calls. Similarly, the \"tuple\" returned by `polar_to_cartesian` is not actually created as a Python object, but represented temporarily as a struct, which is then optimized away by the compiler.\n", "\n", "We can compare this to doing the same thing on the CPU, still using Numba:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numba\n", "\n", "@numba.jit\n", "def polar_to_cartesian_cpu(rho, theta):\n", "    x = rho * math.cos(theta)\n", "    y = rho * math.sin(theta)\n", "    return x, y # This is Python, so let's return a tuple\n", "\n", "@vectorize(['float32(float32, float32, float32, float32)']) # default target is CPU\n", "def polar_distance_cpu(rho1, theta1, rho2, theta2):\n", "    x1, y1 = polar_to_cartesian_cpu(rho1, theta1)\n", "    x2, y2 = polar_to_cartesian_cpu(rho2, theta2)\n", "\n", "    return ((x1 - x2)**2 + (y1 - y2)**2)**0.5\n", "\n", "np.testing.assert_allclose(polar_distance(rho1, theta1, rho2, theta2),\n", "                           polar_distance_cpu(rho1, theta1, rho2, theta2),\n", "                           rtol=1e-7, atol=5e-7)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%timeit polar_distance_cpu(rho1, theta1, rho2, theta2)\n", "%timeit polar_distance(rho1, theta1, rho2, theta2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Not a bad speedup, especially considering that we are still doing quite a few GPU memory allocations and data copies on every call." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Allowed Python on the GPU\n", "\n", "Numba on the GPU supports an even smaller subset of Python than Numba on the CPU (which is already limited). Supported Python includes:\n", "\n", "* `if`/`elif`/`else`\n", "* `while` and `for` loops\n", "* Basic math operators\n", "* Selected functions from the `math` and `cmath` modules\n", "* Tuples\n", "\n", "See [the Numba manual](http://numba.pydata.org/numba-doc/latest/cuda/cudapysupported.html) for more details, and the quick check below." ] }, 
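{ "cell_type": "markdown", "metadata": {}, "source": [ "As an informal check that loops, conditionals, and basic math really do compile for the CUDA target, here is a small ufunc of our own (`smooth_step` is not part of the exercise that follows):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@vectorize(['float32(float32, int16)'], target='cuda')\n", "def smooth_step(x, n):\n", "    # for loops and if/elif/else branches are supported on the CUDA target\n", "    y = x\n", "    for i in range(n):\n", "        if y < 0.0:\n", "            y = 0.0\n", "        elif y > 1.0:\n", "            y = 1.0\n", "        y = 3*y**2 - 2*y**3\n", "    return y\n", "\n", "smooth_step(np.linspace(-0.5, 1.5, 8).astype(np.float32), np.int16(2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "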
## Exercise\n", "\n", "Let's build a \"zero suppression\" function. A common operation when working with waveforms is to force all sample values below a certain absolute magnitude to be zero, as a way to eliminate low-amplitude noise. Let's make some sample data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hacking up a noisy pulse train\n", "%matplotlib inline\n", "from matplotlib import pyplot as plt\n", "\n", "n = 100000\n", "noise = np.random.normal(size=n) * 3\n", "pulses = np.maximum(np.sin(np.arange(n) / (n / 23)) - 0.3, 0.0)\n", "waveform = ((pulses * 300) + noise).astype(np.int16)\n", "plt.plot(waveform)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now try filling in the body of this ufunc:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@vectorize(['int16(int16, int16)'], target='cuda')\n", "def zero_suppress(waveform_value, threshold):\n", "    ### Replace this implementation with yours\n", "    result = waveform_value\n", "    ###\n", "    return result" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The noise on the baseline should disappear when zero_suppress is implemented properly\n", "plt.plot(zero_suppress(waveform, 15))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }