{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Multiprocessing\n", "\n", "Threads and processes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Parallelism in Python is split into two groups: multithreading and multiprocessing.\n", "\n", "Multithreading means you have one Python process. Due to the way that Python is implemented, with a Global Interpreter Lock (GIL), you can only run one Python instruction at a time, even from multiple threads. This is very limiting, but not the end of the world for multithreading. One loophole is that this only applies to *Python* instructions; as long as it doesn't touch Python's internal memory model (like changing refcounts), *compiled* code is allowed to escape the GIL. This includes JIT-compiled code like Numba!\n", "\n", "The other method is multiprocessing. This involves creating two or more Python *processes*, each with its own memory space, then either transferring data (via pickle) or sharing selected portions of memory (less common before Python 3.8, but possible). This is much heavier-weight than threading, but can be used effectively." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Built-in libraries\n", "\n", "* **Thread** (`_thread` in Python 3): The core low-level threading module.\n", "* **Threading**: A basic interface built on top of it, still rather low-level by modern standards.\n", "* **Multiprocessing**: Similar to threading, but with processes. Shared memory tools added in Python 3.8.\n", "* **Concurrent.futures:** Higher-level interface to both threading and multiprocessing. Introduced in Python 3.2 and backported on PyPI.\n", "* **Asyncio:** Explicit control over switching points." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import asyncio\n", "import concurrent.futures\n", "import sys  # noqa: F401\n", "import threading\n", "import time\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def prepare(height, width):\n", "    c = np.sum(\n", "        np.broadcast_arrays(*np.ogrid[-1j : 0j : height * 1j, -1.5 : 0 : width * 1j]),\n", "        axis=0,\n", "    )\n", "    fractal = np.zeros_like(c, dtype=np.int32)\n", "    return c, fractal\n", "\n", "\n", "def run(c, fractal, maxiterations=20):\n", "    z = c\n", "\n", "    for i in range(1, maxiterations + 1):\n", "        z = z**2 + c  # Compute z\n", "        diverge = abs(z) > 2  # Divergence criteria\n", "\n", "        z[diverge] = 2  # Keep number size small\n", "        fractal[~diverge] = i  # Fill in non-diverged iteration number\n", "\n", "    return fractal" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "size = 4000, 3000" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "c, fractal = prepare(*size)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "fractal = run(c, fractal)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(12, 8))\n", "ax.imshow(fractal);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Raw threading" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import threading\n", "\n", "c, fractal = prepare(*size)\n", "\n", "\n", "def piece(i):\n", "    ci = c[10 * i : 10 * (i + 1), :]\n", "    fi = fractal[10 * i : 10 * (i + 1), :]\n", "    run(ci, fi)\n",
"\n", "workers = []\n", "for i in range(size[0] // 10):\n", "    workers.append(threading.Thread(target=piece, args=(i,)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: You can also use the OO interface, which I sometimes prefer.\n", "\n", "```python\n", "class Worker(threading.Thread):\n", "    def __init__(self, c, fractal, i):\n", "        super().__init__()\n", "        self.c = c\n", "        self.fractal = fractal\n", "        self.i = i\n", "\n", "    def run(self):\n", "        run(self.c[10*self.i : 10*(self.i + 1), :], self.fractal[10*self.i : 10*(self.i + 1), :])\n", "\n", "workers = []\n", "for i in range(size[0]//10):\n", "    workers.append(Worker(c, fractal, i))\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "for worker in workers:\n", "    worker.start()\n", "for worker in workers:\n", "    worker.join()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "len(workers)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(12, 8))\n", "ax.imshow(fractal);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## High level: Executors" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Python 3 introduced an \"executor\" interface that manages workers for you. Instead of creating threads or processes with a `run` method, you create an executor and send work to it. It has the same interface for threads and processes." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "-" } }, "outputs": [], "source": [ "executor = concurrent.futures.ThreadPoolExecutor(max_workers=8)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def piece(i):\n", "    ci = c[10 * i : 10 * (i + 1), :]\n", "    fi = fractal[10 * i : 10 * (i + 1), :]\n", "    run(ci, fi)\n", "\n", "\n", "c, fractal = prepare(*size)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "futures = executor.map(piece, range(size[0] // 10))\n", "for _ in futures:  # iterating over them waits for the results\n", "    pass" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(12, 8))\n", "ax.imshow(fractal);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Async: no threads at all\n", "\n", "Python has native support for async functions, which don't use threads at all, and were originally even implemented on top of generators. This is a control scheme that makes all suspension points explicit, using the \"await\" keyword. It makes it much easier to write concurrent code that shares data, since you can see the suspension points; you are running in a single thread, except that control can temporarily pass elsewhere wherever \"await\" appears.\n", "\n", "This _can_, however, be integrated with threading, and can take advantage of functions that release the GIL."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "async def compute_async():\n", "    await asyncio.gather(*(asyncio.to_thread(piece, x) for x in range(size[0] // 10)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "c, fractal = prepare(*size)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "IPython supports top-level await, but it does not support it in some magics, including the time-related magics. So we'll replace the time magic ourselves." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "start = time.time()\n", "await compute_async()\n", "print(time.time() - start)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(12, 8))\n", "ax.imshow(fractal);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Shared Memory and Multiprocessing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's often difficult to get \"perfect scaling,\" N times more work from N threads, in real situations. Even though this problem is \"embarrassingly parallel\" (none of the workers need to know other workers' results), there can be scheduling overhead, contention for memory, or slow-downs due to Python's [Global Interpreter Lock](https://realpython.com/python-gil/)." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "One way to avoid the global interpreter lock is to send work to separate processes. Python interpreters in separate processes do not share memory and therefore do not need to coordinate." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "However, that means that we can't send data by simply sharing variables. We have to send it through a `multiprocessing.Queue` (which serializes, i.e. pickles, the data so that it can go through a pipe). Since Python 3.8, we also have the `multiprocessing.shared_memory` module!" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "You can share arrays among processes if you declare them as shared memory before launching the subprocesses. 
Python has a special type for this, `multiprocessing.shared_memory.SharedMemory`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "%%writefile multiproc.py\n", "\n", "import concurrent.futures\n", "import multiprocessing.shared_memory\n", "import time\n", "\n", "import numpy as np\n", "\n", "size = 4000, 3000\n", "\n", "\n", "def prepare(height, width):\n", "    c = np.sum(\n", "        np.broadcast_arrays(*np.ogrid[-1j : 0j : height * 1j, -1.5 : 0 : width * 1j]),\n", "        axis=0,\n", "    )\n", "    fractal = np.zeros_like(c, dtype=np.int32)\n", "    return c, fractal\n", "\n", "\n", "def run(c, fractal, maxiterations=20):\n", "    z = c\n", "\n", "    for i in range(1, maxiterations + 1):\n", "        z = z**2 + c  # Compute z\n", "        diverge = abs(z) > 2  # Divergence criteria\n", "\n", "        z[diverge] = 2  # Keep number size small\n", "        fractal[~diverge] = i  # Fill in non-diverged iteration number\n", "\n", "    return fractal\n", "\n", "\n", "c, orig_fractal = prepare(*size)\n", "\n", "\n", "def piece(i):\n", "    mem = multiprocessing.shared_memory.SharedMemory(name=\"perfdistnumpy\")\n", "    fractal = np.ndarray(shape=c.shape, dtype=np.int32, buffer=mem.buf)\n", "\n", "    ci = c[10 * i : 10 * (i + 1), :]\n", "    fi = fractal[10 * i : 10 * (i + 1), :]\n", "    run(ci, fi)\n", "    mem.close()\n", "\n", "\n", "if __name__ == \"__main__\":\n", "    d_size = np.int32().itemsize * np.prod(orig_fractal.size)\n", "\n", "    mem = multiprocessing.shared_memory.SharedMemory(\n", "        name=\"perfdistnumpy\", create=True, size=d_size\n", "    )\n", "    try:\n", "        fractal = np.ndarray(shape=c.shape, dtype=np.int32, buffer=mem.buf)\n", "        fractal[...] = orig_fractal\n", "\n", "        executor = concurrent.futures.ProcessPoolExecutor(max_workers=8)\n", "\n", "        start = time.time()\n", "        futures = executor.map(piece, range(size[0] // 10))\n", "        for _ in futures:  # iterating over them waits for the results\n", "            pass\n", "        print(time.time() - start, \"seconds\")\n", "        print(fractal)\n", "    finally:\n", "        mem.close()\n", "        mem.unlink()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is shared across processes and can even outlive the owning process, so make _sure_ you close (per process) and unlink (once) the memory you take! Having a fixed name (like above) can be safer." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "!{sys.executable} multiproc.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dask (external packages)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's try an external package: Dask." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Still, there needs to be a better way. Our array slices in `piece` are fragile: an indexing error can ruin the result. Can't the problem of scattering work be generalized?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "slide" } }, "outputs": [], "source": [ "import dask.array" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "c, fractal = prepare(*size)\n", "\n", "c = dask.array.from_array(c, chunks=(10, size[1]))\n", "fractal = dask.array.from_array(fractal, chunks=(10, size[1]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "fractal = run(c, fractal)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's AWESOME! So fast for so little. Let's take a look!" 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fractal" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What the heck? This is not an array: it is a description of how to make an array. Dask has stepped through our procedure and built an execution graph, encoding all the dependencies so that it can correctly apply it to individual chunks. When we execute this graph, Dask will send a chunk to each processor in the computer and combine results." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "fractal = fractal.compute()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(12, 8))\n", "ax.imshow(fractal);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Much better and less exciting. This time, Dask did the chunking for us. It's not any faster than single-threaded, though; we need to point it to a scheduler/worker. This could be local, or it could be many computers anywhere in the world!" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "We seem to have paid for this simplicity: it took twice as long as the carefully sliced `piece` calls in the executor." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "The reason is that our code is not as simple as it looks. It has masking and piecemeal assignments, which in principle could introduce complex dependencies. _We_ know that everything will be fine if we just chop the array into independent sections, and thus we implemented our thread and executor-based solutions that way." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Let me show you what Dask has to do for a 1×1 chunking of our problem." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "c, fractal = prepare(1, 1)  # try 2, 2\n", "c = dask.array.from_array(c, chunks=(1, 1))\n", "fractal = dask.array.from_array(fractal, chunks=(1, 1))\n", "fractal = run(c, fractal, maxiterations=1)  # try more iterations\n", "fractal.visualize()" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "If that were all, I'd probably stick to chopping up the grid by hand (when possible). However, _exactly the same interface_ that distributes work across cores in my laptop can distribute work around the world, just by pointing it to a remote scheduler.\n", "\n", "This is truly the ~~lazy~~ busy researcher approach!" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "> Note to self: launch\n", "> \n", "> ```bash\n", "> dask-scheduler &\n", "> dask-worker --nthreads 8 127.0.0.1:8786 &\n", "> ```\n", "> \n", "> in a terminal now." 
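, "\n", "> \n", "> Alternatively (just a sketch; the next cell still assumes the scheduler address above), a local cluster can be started from Python:\n", "> \n", "> ```python\n", "> from dask.distributed import Client, LocalCluster\n", "> \n", "> cluster = LocalCluster(n_workers=1, threads_per_worker=8)\n", "> client = Client(cluster)\n", "> ```"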
] }, { "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "slide" } }, "outputs": [], "source": [ "import dask.distributed\n", "\n", "client = dask.distributed.Client(\"127.0.0.1:8786\")\n", "client" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "c, fractal = prepare(*size)\n", "\n", "c = dask.array.from_array(c, chunks=(100, size[1]))\n", "fractal = dask.array.from_array(fractal, chunks=(100, size[1]))\n", "fractal = run(c, fractal)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "fractal = client.compute(fractal, sync=True)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Well, that was exciting!\n", "\n", "In the end, this example took longer than the single-core version, but it illustrates how array operations _can be_ distributed in a simple way." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "I haven't shown very much of what Dask can do. It's a general toolkit for delayed and distributed evaluation. As such, it provides a nice way to work on Pandas-like DataFrames that are too large for memory:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import dask.dataframe\n", "\n", "df = dask.dataframe.read_csv(\"data/nasa-exoplanets.csv\")\n", "df" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "We don't see the data because they haven't been loaded. But we can get them if we need them." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[[\"pl_hostname\", \"pl_pnum\"]].compute()" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Additionally, Dask isn't the only project filling this need. There's also:\n", "\n", " * **Joblib:** annotate functions with decorators to execute them remotely.\n", " * **Parsl:** same, but works with conventional schedulers (Condor, Slurm, GRID); an academic project.\n", " * **PySpark:** Spark is a big, scalable project, though its Python interface has performance issues.\n", "\n", "and many smaller projects.\n", "\n", "(Distributed computing hasn't been fully figured out yet.)" ] } ], "metadata": { "kernelspec": { "display_name": "Python [conda env:performance-minicourse] *", "language": "python", "name": "conda-env-performance-minicourse-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 4 }