{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Numba 0.53 CUDA Release Demo\n", "\n", "Key changes to the CUDA target for Release 0.53 demonstrated in this notebook:\n", "\n", "* More atomic operations: AND, OR, XOR, INC, DEC, Exchange, and Compare-and-Swap (Michael Collison).\n", "* Support for `math.log2` and `math.remainder` (Guilherme Leobas).\n", "* Grid synchronization with Cooperative Grid Groups (Nick White and Graham Markall).\n", "* Improved kernel launch performance (Graham Markall).\n", "* Support for passing tuples and namedtuples to kernels (@heyer2, Alex Tait, and Graham Markall).\n", "* A new cube root intrinsic (Michael Collison).\n", "\n", "Other key changes, not demonstrated in this notebook, include:\n", "\n", "* Support for CUDA 11.2, which uses a new version of [NVVM IR](https://docs.nvidia.com/cuda/nvvm-ir-spec/index.html) (1.6) that is incompatible with previous Numba releases (Graham Markall).\n", "* The CUDA Array Interface is now at Version 3, which adds stream and synchronization semantics.\n", " * The V3 specification was agreed with the input of many contributors: Stuart Archibald, Frédéric Bastien, Lisandro Dalcin, Sanjoy Das, Peter Entschev, Leo Fang, Mark Harris, Peter Hawkins, Jake Hemstad, Dave Hirschfeld, John Kirkham, Keith Kraus, Siu Kwan Lam, Graham Markall, Stan Seibert, Peter Würtz, Edward Z. Yang.\n", "\n", "## Some useful imports" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from collections import namedtuple\n", "from numba import cuda, void, int32\n", "import math\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## CUDA Atomics\n", "\n", "### Logical Operations\n", "\n", "The following operations are now implemented, with the signatures `cuda.atomic.(ary, idx, val)`. These execute `ary[idx] = ary[idx] val` atomically, and return the old value at `ary[idx]` for `int32`, `uint32`, `int64`, and `uint64` operands:\n", "\n", "* AND: `cuda.atomic.and_(ary, idx, val)`.\n", "* OR: `cuda.atomic.or_(ary, idx, val)`.\n", "* XOR: `cuda.atomic.xor(ary, idx, val)`.\n", "\n", "Note the underscore suffix for the AND and OR operations, which is needed to prevent a collision with the `and` and `or` keywords.\n", "\n", "A quick demo of AND:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def and_demo(x, old, val):\n", " old[0] = cuda.atomic.and_(x, 0, val)\n", "\n", "\n", "x = np.array([4], dtype=np.uint32)\n", "old = np.zeros_like(x)\n", "\n", "and_demo[1, 1](x, old, 3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We expect that `4 & 3 = 0`:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\n" ] } ], "source": [ "print(x[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The old value should be `4`:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4\n" ] } ], "source": [ "print(old[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can demonstrate OR similarly:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def or_demo(x, old, val):\n", " old[0] = cuda.atomic.or_(x, 0, val)\n", "\n", "\n", "x = np.array([4], dtype=np.uint32)\n", "old = np.zeros_like(x)\n", "\n", "or_demo[1, 1](x, old, 3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Hopefully `4 | 3 = 7`:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "7\n" ] } ], "source": [ "print(x[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the old value should still be 4:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4\n" ] } ], "source": [ "print(old[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "XOR is left as an exercise for the reader.\n", "\n", "### Increment and decrement\n", "\n", "Increment and decrement are also supported for `uint32` and `uint64` operands:\n", "\n", "* INC: `cuda.atomic.inc(ary, idx, val)`, increments `ary[idx]` by `1` up to `val`, then resets to `0`.\n", "* DEC: `cuda.atomic.dec(ary, idx, val)`, decrements `ary[idx]` by `1` down to `0`, resetting to `val` beyond `0` or if `ary[idx]` is greater than `val` initially.\n", "\n", "A simple example using INC:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def inc_demo(x, old, val):\n", " old[0] = cuda.atomic.inc(x, 0, val)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we'll try incrementing 10, with a maximum of 20:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "x = np.array([10], dtype=np.uint32)\n", "old = np.zeros_like(x)\n", "\n", "inc_demo[1, 1](x, old, 20)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This should yield 11, with an old value of 10:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "11\n", "10\n" ] } ], "source": [ "print(x[0])\n", "print(old[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now if we increment 10 this time, but with a maximum of 10:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "x = np.array([10], dtype=np.uint32)\n", "old = np.zeros_like(x)\n", "\n", "inc_demo[1, 1](x, old, 10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We should see that the counter has reset to 0:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\n", "10\n" ] } ], "source": [ "print(x[0])\n", "print(old[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exchange, Compare-and-swap\n", "\n", "Atomic compare-and-swap is now extended to support 64-bit operands (`int64`, `uint64`) operands in addition to 32-bit (`int32`, `uint32`). Atomic exchange, which swaps values atomically with no comparison, is added:\n", "\n", "* `cuda.atomic.compare_and_swap(ary, old, val)` performs `ary[0] = val if ary[0] == old`, returning the original `ary[0]` value.\n", "* `cuda.atomic.exch(ary, idx, val)` performs `ary[idx] = val`, returning the old value of `ary[idx]`.\n", "\n", "A short example exchanging values:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def exch_demo(x, old, val):\n", " old[0] = cuda.atomic.exch(x, 0, val)\n", "\n", "x = np.array([10], dtype=np.uint32)\n", "old = np.zeros_like(x)\n", "exch_demo[1, 1](x, old, 15)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We should see that `x[0]` now contains 15:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "15\n" ] } ], "source": [ "print(x[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the old value, `10`, was returned by the `exch` function:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "10\n" ] } ], "source": [ "print(old[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Log2 and remainder\n", "\n", "The `math.log2` and `math.remainder` functions can now be used in kernels." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def log2_remainder_demo(y, x):\n", " y[0] = math.log2(x[0])\n", " y[1] = math.remainder(x[0], 3)\n", "\n", "x = np.array([4], dtype=np.float32)\n", "y = np.zeros(2, dtype=np.float32)\n", "\n", "log2_remainder_demo[1, 1](y, x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`log2(4)` is `2`:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2.0\n" ] } ], "source": [ "print(y[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "4 remainder 3 is 1:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1.0\n" ] } ], "source": [ "print(y[1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Grid Synchronization / Cooperative Grid Groups\n", "\n", "[Grid Synchronization](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#grid-synchronization-cg) provides a way for different blocks to synchronize across the entire grids. It is implemented by instantiating a grid group with [`this_grid()`](https://numba.readthedocs.io/en/latest/cuda-reference/kernel.html#numba.cuda.cg.this_grid) and calling its [`sync()`](https://numba.readthedocs.io/en/latest/cuda-reference/kernel.html#numba.cuda.cg.GridGroup.sync) method.\n", "\n", "The following example kernel uses the whole grid to write to rows of a matrix in sequence - each row of the matrix must be completed before any thread can move on to the next row:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "@cuda.jit(void(int32[:,::1]))\n", "def sequential_rows(M):\n", " col = cuda.grid(1)\n", " g = cuda.cg.this_grid()\n", "\n", " rows = M.shape[0]\n", " cols = M.shape[1]\n", "\n", " for row in range(1, rows):\n", " opposite = cols - col - 1\n", " # Each row's elements are one greater than the previous row\n", " M[row, col] = M[row - 1, opposite] + 1\n", " # Wait until all threads have written their column element,\n", " # and that the write is visible to all other threads\n", " g.sync()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll create an empty matrix for the kernel to work on, and determine an appropriate block size:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "# Empty input data\n", "A = np.zeros((1024, 1024), dtype=np.int32)\n", "# A somewhat arbitrary choice (one warp), but generally smaller block sizes\n", "# allow more blocks to be launched (noting that other limitations on\n", "# occupancy apply such as shared memory size)\n", "blockdim = 32\n", "griddim = A.shape[1] // blockdim" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can launch the kernel and print the result:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 0 0 0 ... 0 0 0]\n", " [ 1 1 1 ... 1 1 1]\n", " [ 2 2 2 ... 2 2 2]\n", " ...\n", " [1021 1021 1021 ... 1021 1021 1021]\n", " [1022 1022 1022 ... 1022 1022 1022]\n", " [1023 1023 1023 ... 1023 1023 1023]]\n" ] } ], "source": [ "# Kernel launch - this is implicitly a cooperative launch\n", "sequential_rows[griddim, blockdim](A)\n", "\n", "# What do the results look like?\n", "print(A)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is a more strict limit on the grid size for launching a kernel using a Grid Group, as all blocks must be able to be resident on the GPU concurrently - unlike a regular launch, it is not possible to wait for one block to fully execute before another block launches, as this would create the conditions for a deadlock.\n", "\n", "The maximum grid size for a cooperative kernel can be enquired with `max_cooperative_grid_blocks()`. This varies between GPU models, depending on the number of SMs and their available resources. It also varyies for different overloads, because each overload is compiled for a different set of arguments that affects the generated code.\n", "\n", "Here we determine the maximum grid size for our example:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1152\n" ] } ], "source": [ "# Grab the first overload - there's only one because we've compiled with one signature\n", "overload = next(iter(sequential_rows.overloads.values()))\n", "max_blocks = overload.max_cooperative_grid_blocks(blockdim)\n", "print(max_blocks)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Kernel launch performance\n", "\n", "Numba 0.53 updates the CUDA kernel dispatcher mechanism to bring it more into line with the way the CPU dispatcher works\n", "\n", "* Launching lazily-compiled kernels (those jitted without signatures) is now much faster, in line with the time taken to launch eagerly-compiled kernels.\n", "* There is a very slight increase in dispatch time for eagerly-compiled kernels. This is being worked on and is expected to improve again for 0.54.\n", "\n", "For bencharking, we can use a small test launching empty kernels with varying numbers of arguments with and without signatures:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Eagerly compiled with signatures:\n", "20 µs ± 204 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "34.2 µs ± 32.7 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "44.6 µs ± 92.6 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "53.6 µs ± 795 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "62.4 µs ± 31.5 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "Without signatures:\n", "19.8 µs ± 50.6 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "35.1 µs ± 147 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "44.9 µs ± 141 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "54.1 µs ± 70.2 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "62.4 µs ± 43.9 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n" ] } ], "source": [ "print(\"Eagerly compiled with signatures:\")\n", "\n", "@cuda.jit('void()')\n", "def sig_kernel_1():\n", " return\n", "\n", "@cuda.jit('void(float32[:])')\n", "def sig_kernel_2(arr1):\n", " return\n", "\n", "@cuda.jit('void(float32[:],float32[:])')\n", "def sig_kernel_3(arr1,arr2):\n", " return\n", "\n", "@cuda.jit('void(float32[:],float32[:],float32[:])')\n", "def sig_kernel_4(arr1,arr2,arr3):\n", " return\n", "\n", "@cuda.jit('void(float32[:],float32[:],float32[:],float32[:])')\n", "def sig_kernel_5(arr1,arr2,arr3,arr4):\n", " return\n", "\n", "arr = cuda.device_array(10000, dtype=np.float32)\n", "\n", "%timeit sig_kernel_1[1, 1]()\n", "%timeit sig_kernel_2[1, 1](arr)\n", "%timeit sig_kernel_3[1, 1](arr,arr)\n", "%timeit sig_kernel_4[1, 1](arr,arr,arr)\n", "%timeit sig_kernel_5[1, 1](arr,arr,arr,arr)\n", "\n", "print(\"Without signatures:\")\n", "\n", "@cuda.jit\n", "def nosig_kernel_1():\n", " return\n", "\n", "@cuda.jit\n", "def nosig_kernel_2(arr1):\n", " return\n", "\n", "@cuda.jit\n", "def nosig_kernel_3(arr1,arr2):\n", " return\n", "\n", "@cuda.jit\n", "def nosig_kernel_4(arr1,arr2,arr3):\n", " return\n", "\n", "@cuda.jit\n", "def nosig_kernel_5(arr1,arr2,arr3,arr4):\n", " return\n", "\n", "arr = cuda.device_array(10000, dtype=np.float32)\n", "\n", "%timeit nosig_kernel_1[1, 1]()\n", "%timeit nosig_kernel_2[1, 1](arr)\n", "%timeit nosig_kernel_3[1, 1](arr,arr)\n", "%timeit nosig_kernel_4[1, 1](arr,arr,arr)\n", "%timeit nosig_kernel_5[1, 1](arr,arr,arr,arr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example results\n", "\n", "Comparing some results from the above benchmark between Numba 0.52 and Numba 0.53 yields the following on a Linux system with a Quadro RTX 8000 and CUDA 11.1 (11.2 was not used for the comparison as it is only supported by Numba 0.53):\n", "\n", "**Without signatures:**\n", "\n", "| # Args | 0.52 | 0.53 | % Delta |\n", "|--------|-------|------|---------|\n", "| 0 | 26.8 | 18.4 | -31.3% |\n", "| 1 | 60.0 | 33.8 | -43.7% |\n", "| 2 | 83.0 | 43.4 | -47.7% |\n", "| 3 | 104.0 | 52.3 | -49.7% |\n", "| 4 | 124.0 | 61.0 | -50.8% |\n", "\n", "**With signatures:**\n", "\n", "| # Args | 0.52 | 0.53 | % Delta |\n", "|--------|------|------|---------|\n", "| 0 | 18.6 | 18.7 | +0.5% |\n", "| 1 | 31.1 | 33.4 | +6.7% |\n", "| 2 | 39.8 | 42.7 | +7.3% |\n", "| 3 | 47.8 | 51.8 | +8.4% |\n", "| 4 | 55.2 | 60.2 | +9.1% |\n", "\n", "## Support for passing tuples and namedtuples to kernels\n", "\n", "Tuples and namedtuples can now be passed to kernels.\n", "\n", "### Tuples\n", "\n", "Let's create a kernel and pass a tuple to it.\n", "\n", "First we'll define a kernel that extracts values from the heterogeneously-typed tuple `x` and stores them in `r1` and `r2`:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def extract_tuple(r1, r2, x):\n", " r1[0] = x[0]\n", " r1[1] = x[1]\n", " r1[2] = x[2]\n", " r2[0] = x[3]\n", " r2[1] = x[4]\n", " r2[2] = x[5]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A tuple of integers and floating point values:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "x = (1, 2, 3, 4.5, 5.5, 6.5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Some space for our results:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "r1 = np.zeros(len(x) // 2, dtype=np.int64)\n", "r2 = np.zeros(len(x) // 2, dtype=np.float64)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now launch the kernel:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "extract_tuple[1, 1](r1, r2, x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we can verify that the values have been extracted from the tuple as expected:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1 2 3]\n", "[4.5 5.5 6.5]\n" ] } ], "source": [ "print(r1)\n", "print(r2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Named tuples\n", "\n", "We'll create a named tuple to represent a point:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "Point = namedtuple('Point', ('x', 'y'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll create a kernel that extracts values from a `Point`, in a similar fashion to the tuple example above:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def extract_point(r, p):\n", " r[0] = p.x\n", " r[1] = p.y\n", " \n", "x = Point(1, 2)\n", "r = np.zeros(len(x), dtype=np.int64)\n", "extract_point[1, 1](r, x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our extracted data:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1 2]\n" ] } ], "source": [ "print(r)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Nesting\n", "\n", "We can nest tuples and named tuples arbitrarily. Using a tuple of a tuple and a scalar:" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def extract_nested(r, x):\n", " r[0] = len(x)\n", " r[1] = len(x[0])\n", " r[2] = x[0][0]\n", " r[3] = x[0][1]\n", " r[4] = x[0][2]\n", " r[5] = x[1]\n", "\n", "\n", "x = ((6, 5, 4), 7)\n", "r = np.ones(6, dtype=np.int64)\n", "extract_nested[1, 1](r, x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our output contains `(len(x), len(x[0]), *x[0], x[1])`:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[2 3 6 5 4 7]\n" ] } ], "source": [ "print(r)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tuples of Arrays\n", "\n", "Arrays can also be tuple members, allowing us to group together arrays into a single parameter. For example, we might write a kernel that accepts data bundled up in a *struct-of-arrays*-type (SoA) form:" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def vector_magnitudes_soa(r, v):\n", " i = cuda.grid(1)\n", " if i >= len(r):\n", " return\n", " \n", " r[i] = math.sqrt(v[0][i] ** 2 + v[1][i] ** 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Number of elements, and space for the results:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "N = 32\n", "r = np.zeros(N)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An SoA vector structure:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "np.random.seed(1) # For reproducibility between notebook runs\n", "vx = np.random.rand(N)\n", "vy = np.random.rand(N)\n", "v = (vx, vy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can pass `v` to the kernel rather than needing to pass `vx` and `vy` individually:" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [], "source": [ "vector_magnitudes_soa[1, N](r, v)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our result:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1.04472949 0.89617665 0.69187712 0.43698409 0.70201198 0.83971806\n", " 0.18715589 0.82591084 1.06549082 0.92199529 0.50435392 1.04522133\n", " 0.22903347 0.98574786 0.90900818 0.73193985 0.50691019 0.57362161\n", " 0.14171652 0.70715054 0.82823808 1.00401469 0.58299133 0.6943761\n", " 1.04769698 0.90655963 0.59541039 0.70084737 0.19827937 0.97086385\n", " 0.70132994 0.59065734]\n" ] } ], "source": [ "print(r)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The cube root intrinsic\n", "\n", "A new instrinsic, `cuda.cbrt` provides an efficient way to compute a cube root for `float32` and `float64` values. Optimized functions from NVIDIA's [libdevice library](https://docs.nvidia.com/cuda/libdevice-users-guide) ([`__nv_cbrt`](https://docs.nvidia.com/cuda/libdevice-users-guide/__nv_cbrt.html) and [`__nv_cbrtf`](https://docs.nvidia.com/cuda/libdevice-users-guide/__nv_cbrtf.html)) underlie its implementation.\n", "\n", "For comparison, we'll define a kernel that implements cube root using a standard mathematical form, and another one using the intrinsic:" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [], "source": [ "@cuda.jit\n", "def cube_root(r, x):\n", " for i in range(cuda.grid(1), cuda.gridsize(1), len(r)):\n", " r[i] = x[i] ** (1.0 / 3.0)\n", " \n", "@cuda.jit\n", "def intrinsic_cube_root(r, x):\n", " for i in range(cuda.grid(1), cuda.gridsize(1), len(r)):\n", " r[i] = cuda.cbrt(x[i])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we'll set up some data to test with:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "# Array size and data type\n", "N = 2 ** 16\n", "dtype = np.float32\n", "\n", "np.random.seed(1)\n", "x = np.random.rand(N).astype(dtype) * 100\n", "r = np.zeros_like(x)\n", "\n", "n_threads = 256\n", "n_blocks = N // n_threads\n", "\n", "# Copy data to device so we can time the kernel execution only\n", "d_r_normal = cuda.to_device(r)\n", "d_r_fast = cuda.to_device(r)\n", "d_x = cuda.to_device(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we'll define functions to time for benchmarking. We need to synchronize with the device after the kernel launch to ensure we capture the execution time of the kernel and not just the time spent by the CPU launching the kernel:" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "def run_normal(): \n", " cube_root[n_blocks, n_threads](d_r_normal, d_x)\n", " cuda.synchronize()\n", "\n", "def run_fast():\n", " intrinsic_cube_root[n_blocks, n_threads](d_r_fast, d_x)\n", " cuda.synchronize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's time both versions:" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "76.2 µs ± 322 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n", "55.5 µs ± 134 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n" ] } ], "source": [ "%timeit run_normal()\n", "%timeit run_fast()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The results agree to six decimal places with `float32` data (this is likely to be a reasonable expectation for most use cases):" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Results are in agreement to a relative tolerance of 1e-06.\n" ] } ], "source": [ "rtol=1.0e-6\n", "h_r_normal = d_r_normal.copy_to_host()\n", "h_r_fast = d_r_fast.copy_to_host()\n", "np.testing.assert_allclose(h_r_normal, h_r_fast, rtol=rtol)\n", "print(f\"Results are in agreement to a relative tolerance of {rtol}.\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 5 }