{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[![](https://bytebucket.org/davis68/resources/raw/f7c98d2b95e961fae257707e22a58fa1a2c36bec/logos/baseline_cse_wdmk.png?token=be4cc41d4b2afe594f5b1570a3c5aad96a65f0d6)](http://cse.illinois.edu/)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Distributed Programming for Engineers (MPI)\n", "\n", "\n", "## Contents\n", "- [Distributed Programs](#intro)\n", "- [Background](#bkgd)\n", " - [SISD](#sisd)\n", " - [SIMD](#simd)\n", " - [MISD](#misd)\n", " - [MIMD](#mimd)\n", " - [Shared-Memory Systems](#shared)\n", " - [Distributed-Memory Systems](#distributed)\n", " - [SPMD](#spmd)\n", "- [MPI Basics](#basics)\n", " - [Hello World](#hello)\n", "- [Basic Functions](#six funcs)\n", " - [`MPI_Init` and `MPI_Finalize`](#Init Finalize)\n", " - [Groups and Communicators](#groups)\n", " - [`MPI_Comm_rank` and `MPI_Comm_size`](#Rank Size)\n", " - [Message Passing, `MPI_Send` and `MPI_Recv`](#msgpass)\n", " - [Collective Operations](#collect)\n", "- [A Finite Difference Example](#fd)\n", " - [Paradigms for parallelization](#Domain Decomp)\n", "- [Advanced Message Passing](#msgpassi)\n", "- [Collective Operations 2](#collect2)\n", " - [`MPI_Barrier`](#Barrier)\n", " - [`MPI_Gather`](#Gather)\n", " - [`MPI_Bcast`](#Bcast)\n", "- [Scaling](#scaling)\n", "- [Memory and C99-Style Variable-Length Arrays](#mem)\n", "- [Passing Vectors](#vector)\n", "- [Resources](#res)\n", " - [Where to Go Next](#wherenext)\n", "- [Credits](#credits)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## Distributed Programs\n", "[MPI](http://www.mcs.anl.gov/research/projects/mpi/) is a library specification describing a parallel programming paradigm suitable for use on distributed-memory machines, such as modern supercomputers and distributed clusters.\n", "\n", "In this lesson, we will review parallel programming concepts and introduce several useful MPI functions and concepts. We will work within this [IPython](ipython.org)/[Jupyter](jupyter.org) notebook, which allows us to lay out our code, results, and commentary in the same interface. Feel free to open up a `bash` shell in the background if you prefer to look at and execute your code that way.\n", "\n", "Don't forget to load mpich or openmpi on the UIUC EWS workstations first:\n", "\n", "```bash\n", " module load mpich2 # or openmpi if you prefer\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## Background: Parallel Programming Paradigms\n", "\n", "### Computer Architecture\n", "\n", "A classical computer processor receives instructions as sets of binary data from memory and uses these instructions (in assembly language) to manipulate data from memory in predictable ways. A processor works on a single piece of data at a time (although other data units may be waiting in the registers or cache) and executes a single thread, or sequential list, of commands on the data.\n", "\n", "### Flynn's Taxonomy\n", "To better contextualize this, let us briefly review the basic parallel computing models. These models can be classified using Flynn's Taxonomy, proposed by Michael Flynn back in 1966. According to this classification, all computer systems can be placed in one of four categories. 
The four types are:\n", "\n", "* **SISD** - **S**ingle **I**nstructution stream **S**ingle **D**ata stream\n", "* **SIMD** - **S**ingle **I**nstructution stream **M**ultiple **D**ata stream\n", "* **MISD** - **M**ultiple **I**nstructution stream **S**ingle **D**ata stream\n", "* **MIMD** - **M**ultiple **I**nstructution stream **M**ultiple **D**ata stream\n", "\n", "Examples and a more detailed description of each model follows below. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### SISD—Single Instruction, Single Data\n", "\n", "A conventional single-core desktop computer is a good example of **SISD**. A single set of instructions operates on a single data element sequentially, then exchanges it back into memory and retrieves a new datum. We will represent this scenario in the following graphic, which shows a single processor interacting via a _bus_ (the wavy black band) with a collection of memory chips (which can be RAM, ROM, hard drives, etc.).\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/SISD-base.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### SIMD—Single Instruction, Multiple Data\n", "\n", "Springboarding off of SISD, perhaps the simplest way to intoduce parallelism in computations is using the **SIMD** model. In this model, multiple processing units operate simultaneously on multiple pieces of data _in the same way_. The fact that each processor must perform exactly the same set of instructions is important to note, as it precludes any advantages of _concurrency_. This limits the usefulness of SIMD, but it is still a very powerful tool, particularly in linear algebra.\n", "\n", "Consider, for instance, the addition of two eight-element vectors. If we have eight processors available, then each processor can add two corresponding elements from the two vectors directly; the program then yields an efficiently added single eight-element vector afterwards (if we assume little to no overhead costs for the _vectorization_ of the program). Since all the necessary tasks (the 8 additions to be performed) are identical, we have no need of concurrency.\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/SIMD-vector-add.png)\n", "\n", "(This, incidentally, was the major advantage of the first Cray supercomputers in the late 1970s: vectorization let them operate on many data elements simultaneously, thus achieving stupendous speedups.)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### MISD—Multiple Instruction, Single Data\n", "\n", "While this model is part of Flynn's Taxonomy, it is easily the least useful. Outside of _pipelining_ (each processor performs a different task as a piece of data is passed through), there really aren't many practical uses for **MISD**, and many would argue that this isn't a true example of **MISD**. The key to this lack of practical applications is the Single Data specification, it is very difficult to extract useful parallelism if the same information must be used in multiple instruction sets simultaneously." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### MIMD—Multiple Instruction, Multiple Data\n", "\n", "It's not a major leap from **SIMD** to think about using each of those processors to do something _different_ to their assigned data element. 
It may not make much sense with vector addition, but when performing more complex operations like finite element differential equation solution or Monte Carlo random number integration, we often want a lot of plates spinning.\n", "\n", "For MIMD to be effective, we need two different kinds of parallelism: data-level parallelism (can I segment my problem domain into complementary parts, whether in real space, time, or phase space?); and concurrency (do I need different parts of my program to execute differently based on the data they have allotted to them?).\n", "\n", "Imagine trying to calculate $\\pi$ by throwing darts at a circle. We can count the number of darts that hit within the radius of the circle, $n_{\\text{circle}}$, and compare that value with the ratio of all darts within a bounding square, $n_{\\text{square}}$. (Physical simulation of this process is left as an exercise to the reader.) As we know the equations defining area, we can obtain $\\pi$ trivially:\n", "$$\\begin{array}{l} A_\\text{circle} = \\pi r^2 \\\\ A_\\text{square} = 4 \\pi r^2 \\end{array} \\implies r^2 = \\frac{A_\\text{square}}{4} = \\frac{A_\\text{circle}}{\\pi} \\implies \\pi \\approx 4 \\frac{n_\\text{circle}}{n_\\text{square} + n_\\text{circle}} \\text{.}$$\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/darts.png)\n", "\n", "It is apparent how this algorithm can benefit from parallelization: since any one dart is thrown independently of the others, we are not restricted from throwing a large number simultaneously. One possible algorithm could look like this:\n", "\n", " initialize_memory\n", " parallel {\n", " throw_dart\n", " total_darts = total_darts + 1\n", " count_my_darts_in_circle\n", " }\n", " add_all_darts_in_circle\n", " add_all_darts\n", " pi = 4 * darts_in_circle / total_darts\n", "\n", "As you can see, in the parallel portion it doesn't matter to one processor what any other processor is doing. The only time we need all of the processors in sync is at the end of the parallel section where we add up all the darts thrown and that hit the circle. (We will return to this point later.)\n", "\n", "The great degree of flexibility in being able to perform multiple sets of instructions on multiple sets of data simultaneously means that **MIMD** is the usual paradigm for parallel computers. Let's take a look at the architecture of MIMD machines in a bit more detail now. The major division in MIMD programming is shared-memory _vs._ distributed-memory systems.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "**Shared-Memory Systems**\n", "\n", "If the system architecture allows any processor to directly access any location in global memory space, then we refer to the machine as having a _shared memory_. This is convenient in practice for the programmer but can lead to inefficiencies in hardware and software library design. 
For instance, scaling beyond 32 or 64 processors has been a persistent problem for this architecture choice (contrast tens of thousands of processors for well-designed distributed-memory systems).\n", "\n", "Examples of shared-memory parallelization specifications and libraries include [OpenMP](http://www.openmp.org/) and [OpenACC](http://www.openacc.org/).\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/MIMD-SM-base.png)\n", "\n", "Addressing the dart-throwing problem above in the shared-memory paradigm is trivial, since each process can contribute its value from its unique memory location to the final result (a process known as _reduction_)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "**Distributed-Memory Systems**\n", "\n", "For distributed-memory MIMD architectures, every processor is alloted its own physical memory and has no knowledge of other processors' memory. This is the classic problem which MPI was designed to address: we now have to coördinate and communicate data across an internal network (the lighter gray band in the image below) to allow these processors to work effectively together.\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/MIMD-DM-base.png)\n", "\n", "The dart-throwing example from before is now nontrivial, but completely scalable (subject to communication overhead). Every processor is independent and can calculate for one or many darts thrown. However, some sort of reduction must be performed to obtain a final coherent result for the number of darts within the circle (the total number should be known _a priori_ from the number of processes, presumably). The pseudocode in this case looks more like this:\n", "\n", " initialize_memory\n", " parallel {\n", " throw_dart\n", " total_darts = total_darts + 1\n", " count_my_darts_in_circle\n", " }\n", " darts_in_circle = reduction_over_all_processors_of_darts_in_their_circles\n", " add_all_darts\n", " pi = 4 * darts_in_circle / total_darts\n", "\n", "We will shortly examine this case as implemented in MPI." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### One final Note: SPMD—Single Program Multiple Data\n", "\n", "You may have noticed in the previous example that we only provided one pseudocode to complete the darts in a circle example. However, we previously mentioned that this was an example of how to use the **MIMD** model. The issue is that explicitly specifying a different set of instructions for each processor is generally tedious and cumbersome, especially if you want to try solving the same problem on a different number of processors. That's why most programming is done using **SPMD**, where a single program is written, but the program may (or may not) assign different instructions to each processor. This allows us to write only one program, but have each processor execute that program differently." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## MPI Basics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Hello World\n", "\n", "The classical example for any new programming language or library is to construct a nontrivial \"Hello World!\" program. In the following code (`./src/mpi-mwe/c/hello_world_mpi.c`), we will see the basic elements of any MPI program, including preliminary setup, branching into several processes, and cleanup when program execution is about to cease. 
(No message passing occurs in this example.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%file hello_world_mpi.c\n", "#include \n", "#include \n", "\n", "int main(int argc, char *argv[]) {\n", " int rank_id, ierr, num_ranks;\n", " double start_time, wall_time;\n", " \n", " printf(\"C MPI minimal working example\\n\");\n", " \n", " ierr = MPI_Init(&argc, &argv);\n", " start_time = MPI_Wtime();\n", " ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);\n", " ierr = MPI_Comm_rank(MPI_COMM_WORLD, &rank_id);\n", " \n", " if (rank_id == 0) {\n", " printf(\"Number of available processors = %d.\\n\", num_ranks);\n", " }\n", " printf(\"\\tProcess number %d branching off.\\n\", rank_id);\n", " \n", " wall_time = MPI_Wtime() - start_time;\n", " \n", " if (rank_id == 0) {\n", " printf(\"Elapsed wallclock time = %8.6fs.\\n\", wall_time);\n", " }\n", " \n", " MPI_Finalize();\n", " return 0;\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s compile and execute this program before commenting on its contents. In order to link the program properly, there are a number of MPI libraries and include files which need to be specified. Fortunately, a convenient wrapper around your preferred C/C++/Fortran compiler has been provided: in this case, `mpicc` (similar wrappers exist for Fortran and C++). To see what the wrapper is doing behind the scenes, use the `-show` option:\n", "\n", " $ mpicc -show\n", " clang -I/usr/local/Cellar/open-mpi/1.8.1/include\n", " -L/usr/local/opt/libevent/lib\n", " -L/usr/local/Cellar/open-mpi/1.8.1/lib -lmpi\n", "\n", "We can thus compile and execute the above example by simply entering,\n", "\n", " $ mpicc -o hello_world_mpi hello_world_mpi.c\n", "$ ./hello_world_mpi\n", " C MPI minimal working example\n", " Number of available processors = 1.\n", " Process number 0 branching off.\n", " Elapsed wallclock time = 0.000270s.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "!mpicc -o hello_world_mpi hello_world_mpi.c\n", "!./hello_world_mpi" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, that's not right, is it? I have a number of processors on my modern machine, so why didn't this use them? In the case of MPI, it is necessary to use the script `mpiexec` which sets up the copies in parallel and coördinates message passing." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "!mpiexec ./hello_world_mpi" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " $ mpiexec ./hello_world_mpi\n", " C MPI minimal working example\n", " C MPI minimal working example\n", " C MPI minimal working example\n", " C MPI minimal working example\n", " Process number 1 branching off.\n", " Number of available processors = 4.\n", " Process number 0 branching off.\n", " Elapsed wallclock time = 0.000021s.\n", " Process number 2 branching off.\n", " Process number 3 branching off.\n", "\n", "(Note the _race conditions_ apparent here: with input and output, unless you explicitly control access and make all of the threads take turns, the output order is unpredictable.)\n", "\n", "Okay, now that we were able to properly execute our MPI program, try running it with a different number of processes using the _-n_ or _-np_ options. 
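For example:

```bash
    mpiexec -n 2 ./hello_world_mpi
    mpiexec -np 6 ./hello_world_mpi   # -np is an accepted synonym for -n
```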
Keep in mind that you can specify as many processes as you want, but in practical applications you shouldn't expect to see any improvement in performance if there are a very large number of processes or if there are more processes than physical processors. After trying this out, let's step back and analyze the source code." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "!mpiexec -np 8 ./hello_world_mpi" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## The six MPI functions you _need_ to know\n", "\n", "All MPI functions share a few important characteristics, such as the fact that they all return an integer error value and pass parameters by reference. While the MPI standard contains hundreds of functions, there are only six functions that are necessary to create simple parallel programs. They are:\n", "- `MPI_Init` **or** `MPI_Init_thread`\n", "- `MPI_Finalize`\n", "- `MPI_Comm_rank`\n", "- `MPI_Comm_size`\n", "- `MPI_Send`\n", "- `MPI_Recv`\n", "\n", "We'll discuss the four found in our example after explaining groups and communicators, but we'll hold of on discussing the remaining two, `MPI_Send` and `MPI_Recv` until we see them in another example." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### `MPI_Init` and `MPI_Finalize`\n", "\n", "Any program using MPI must call `MPI_Init` or `MPI_Init_thread` to initialize the message passing environment and `MPI_Finalize` to terminate it. Also, all MPI functions must be called between the initialization and termination, although other functions may be called before or after. Keep in mind that one of the initializing functions must be called once and only once, subsequent calls are erroneous. The prototype for both of these functions is very simple as neither _require_ any input arguments. However, in most cases it is necessary in C/C++ programs to provide pointers to the arguments in main as arguments to MPI_Init (as it is done in the example)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Groups and Communicators\n", "\n", "We mentioned that the `MPI_Init` function initializes the message passing environment. Before you can fully understand how MPI works, you need to understand the nature of this environment. When you run a program using the mpiexec command, _n_ copies of the program (referred to as processes) start running. Initially, all of these processes belong to the default communicator `MPI_COMM_WORLD`.\n", "\n", "In many cases, this is the only communicator needed. However, there may be cases where you want to have communicators that only include a few of the processes (this is particularly useful with the collective operations we will discuss later). The MPI library includes functions to create subcommunicators, but we won't go into detail about them today.\n", "\n", "Groups are a related but distinct concept to communicators. Groups describe a set of processes, while communicators provide a context for processes to share information (through message-passing). Communicators come in two forms, intracommuncators which are used for communication between processes in the same group, and intercommunicators which allow for communication between disjoint groups. Since only one group and one communicator exist at the beginning of a program, `MPI_COMM_WORLD` is an example of an intracommunicator. 
Intercommunicators are only necessary when subcommunicators are created and communication between two subcommunicators is desired. We'll illustrate these comments with the following image.\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/Groups%20and%20Communicators.png)\n", "\n", "In this graphic, 16 separate processes (P0-P15) are represented with diamonds. Each of these processes belongs to one of four groups, each group being associated with an intracommunicator (Comm3-Comm6). However, each process is also associated with 2 other communicators, one of either Comm1, which contains all the processes in groups 1 and 2, or Comm2, which contain all the processes in groups 2 and 3, and the global communicator, `MPI_COMM_WORLD`. Keep in mind that communicators aren't tracked with a rank or number but a variable that you define, the numbers are just here for reference.\n", "\n", "There are no intercommunicators shown in the image, but one could be set up between any of the non-overlapping communicators shown in the above image (for example, an intercommunicator could be set up between Comm1 and Comm6, but not between Comm0 and Comm2). In many cases, only the `MPI_COMM_WORLD` communicator is needed, but it may be useful to set up more communicators when multiple processes work on related tasks while other processes work on unrelated tasks." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### `MPI_Comm_rank` and `MPI_Comm_size`\n", "\n", "We mentioned earlier that we want to write our programs in **SPMD** style. This means that we have to give a different set of instructions to each process running the same program. To do this correctly, we need to be able to distinguish which process we are providing instructions to and how many total processes need instructions. We can do this by taking advantage of the environment we just discussed. Every process is automatically assigned a rank in each of the communicators it belongs to. We can use `MPI_Comm_rank` to determine that rank, and `MPI_Comm_size` to tell us the total number of processes in that communicator. The prototype for each function is below:\n", " \n", " int MPI_Comm_rank(\n", " MPI_Comm comm, // The communicator, often\n", " // MPI_COMM_WORLD\n", " int* rank // Pointer to location in memory where\n", " // the rank is to be stored\n", " )\n", " \n", " int MPI_Comm_size(\n", " MPI_Comm comm, // The communicator, often\n", " // MPI_COMM_WORLD\n", " int* rank // Pointer to location in memory where\n", " // the size of the group is to be stored\n", " )\n", "We'll see an example of how to use these in the following example when we introduce the send and receive functions.\n", "\n", "\n", "Also note that not all processors will have the same rank in every commmunicator. Referring back to our previous picture, process 0 will have rank 0 in every process, but process 14 will have rank 14 in Comm0, rank 6 in Comm2, and rank 2 in Comm6. 
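For the curious, here is a minimal sketch (our own illustration, not tied to the figure above) of how `MPI_Comm_split` creates subcommunicators, and of how the same process then reports a different rank in each one:

    MPI_Comm sub_comm;
    int world_rank, sub_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    // Processes passing the same color (here, the parity of the world rank) land in the same subcommunicator.
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, 0, &sub_comm);
    MPI_Comm_rank(sub_comm, &sub_rank);
    printf(\"World rank %d has rank %d in its subcommunicator.\\n\", world_rank, sub_rank);
    MPI_Comm_free(&sub_comm);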
_What is the rank of process 7 in Comm4?_" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%file darts_pi.c\n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "\n", "#define N 100000\n", "#define R 1\n", "\n", "double uniform_rng() { return (double)rand() / (double)RAND_MAX; } // Not great but not the point of this exercise.\n", "\n", "int main(int argc, char *argv[]) {\n", " int rank_id, ierr, num_ranks;\n", " MPI_Status status;\n", " \n", " ierr = MPI_Init(&argc, &argv);\n", " ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);\n", " ierr = MPI_Comm_rank(MPI_COMM_WORLD, &rank_id);\n", " srand(time(NULL) + rank_id);\n", " \n", " // Calculate some number of darts thrown into the circle.\n", " int n_circle = 0;\n", " double x, y;\n", " for (int t = 0; t < N; t++) {\n", " x = uniform_rng();\n", " y = uniform_rng();\n", " if (x*x + y*y < (double)R*R) n_circle++;\n", " }\n", " \n", " // If this is the first process, then gather everyone's data. Otherwise, send it to the first process.\n", " int total_circle[num_ranks]; // C99 Variable Length Arrays; compile with `-std=c99` or else use `malloc`.\n", " if (rank_id == 0) {\n", " total_circle[0] = n_circle;\n", " for (int i = 1; i < num_ranks; i++) {\n", " ierr = MPI_Recv(&total_circle[i], 1, MPI_INT, i, MPI_ANY_TAG, MPI_COMM_WORLD, &status);\n", " //printf (\"\\t%d: recv %d from %d\\n\", rank_id, total_circle[i], i);\n", " }\n", " } else {\n", " ierr = MPI_Send(&n_circle, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);\n", " //printf (\"\\t%d: send %d to %d\\n\", rank_id, n_circle, 0);\n", " }\n", " \n", " // Now sum over the data and calculate the approximation of pi.\n", " int total = 0;\n", " if (rank_id == 0) {\n", " for (int i = 0; i < num_ranks; i++) {\n", " total += total_circle[i];\n", " }\n", " printf(\"With %d trials, the resulting approximation to pi = %f.\\n\", num_ranks*N, 4.0*(double)total/((double)N*(double)num_ranks));\n", " }\n", " \n", " MPI_Finalize();\n", " return 0;\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# To run this code from within the notebook.\n", "!mpicc -std=c99 -o darts_pi darts_pi.c\n", "!mpiexec -n 4 ./darts_pi" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Running the above program should produce an output like this:\n", "\n", " $ mpicc -std=c99 -o darts_pi darts_pi.c\n", "$ mpiexec -n 4 ./darts_pi \n", " With 400000 trials, the resulting approximation of pi = 3.137710.\n", "\n", "A few remarks:\n", "- **We know this algorithm is inefficient**: 400,000 trials for 2 digits of accuracy is absurd! Compare a series solution, which should give you that in a handful of terms.\n", "- I don't do a great job of **calculating the random values as floating-point values**, just dividing by the maximum possible integer value from `stdlib.h`. (This example is doubly dangerous in that it assumes that the default C random number library is thread safe! **It is not!**—more on that in the OpenMP lesson.)\n", "- Note how we are **accessing the respective elements of the array** `total_circle` in the `MPI_Recv` clause.\n", "- We **limit the efficiency** by receiving the messages in order." 
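, "\n", "On that last point: the receive loop could accept the results in whatever order they arrive by combining `MPI_ANY_SOURCE` with the `MPI_SOURCE` field of the returned status. A minimal sketch of the alternative loop (not a drop-in replacement for the listing above):\n", "\n", "    for (int i = 1; i < num_ranks; i++) {\n", "        int value;\n", "        ierr = MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);\n", "        total_circle[status.MPI_SOURCE] = value; // status.MPI_SOURCE records which rank actually sent it\n", "    }\n"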
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Exchanging messages: `MPI_Send` and `MPI_Recv`\n", "\n", "At the highest level, an MPI implementation supports the parallel execution of a number of copies of a program which can intercommunicate with each other across an equal number of processors. Internode and intranode communications are implicitly handled by the operating system, interconnect, and MPI libraries.\n", "\n", "The most basic action by any set of processors using MPI is the exchange of messages. As you might imagine, this exchange requires the _sending_ and _receiving_ of messages between a pair of processors (This is also known as _point-to-point communication_). A number of more complex actions, such as the partitioning of data or collection of calculated results, are built from these point-to-point communications. Even, more advanced features exist to customize the processor topology, operations, and process management.\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/MPI-Send-Recv.png)\n", "\n", "Remember that send and receive actions are always paired with each other. If for any reason the program specifies a send without a matching receive or vice versa, the program may hang or simply crash. There are a number of flavors (which we'll discuss later), but let's start with the basic two: `MPI_Send` and `MPI_Recv`. The function prototypes are:\n", "\n", " int MPI_Send(\n", " void* message, // Pointer to data to send\n", " int count, // Number of data values to send\n", " MPI_Datatype datatype, // Type of data (e.g. MPI_INT)\n", " int destination_rank, // Rank of process to receive message\n", " int tag, // Identifies message type\n", " MPI_Comm comm // The communicator, often\n", " // MPI_COMM_WORLD\n", " )\n", " \n", " int MPI_Recv(\n", " void* message, // Points to location in memory where\n", " // received message is to be stored.\n", " int count, // MAX number of data values to accept \n", " MPI_Datatype datatype // Type of data (e.g. MPI_INT) \n", " int source_rank, // Rank of process to receive from\n", " // (Use MPI_ANY_SOURCE to accept\n", " // from any sender)\n", " int tag, // Type of message to receive\n", " // (Use MPI_ANY_TAG to accept any type)\n", " MPI_Comm comm, // The communicator, often MPI_COMM_WORLD\n", " MPI_Status* status // To receive info about the message\n", " )\n", "\n", "There's a lot of information there, and none of it optional (if you want to hide it, use a language and package such as Python and [MPI4Py](mpi4py.scipy.org)).\n", "\n", "Now let's refer back to the example we just completed. Notice that we call `MPI_Recv` three times on process 0 and we call `MPI_Send` once on each of the remaining processes. Each time `MPI_Recv` is called a different source process is specified, so all three messages sent by the other processes (each with destination process 0) will be received. We also could have specified `MPI_ANY_SOURCE` as the source process, in which case `MPI_Recv` will receive the first three messages to arrive at process 0.\n", "\n", "You may also notice that the `MPI_Recv` arguments specify `MPI_ANY_TAG`. In this example we know exactly what information any incoming messages will contain. If instead we were for example collecting statistics on a set of data and needed to communicate both the mean and standard deviation, we could specify a tag 0 for sending the mean and tag 1 for sending the standard deviation. 
Then on the receiving end we could receive messages with tag 0 in one location and messages with tag 1 in a separate location." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "Write a small program to add the vectors [1 2 3 4 5 6 7 8 9 10] and [10 9 8 7 6 5 4 3 2 1] in parallel." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%file vadd.c" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# To run this code from within the notebook.\n", "!mpicc -o vadd vadd.c\n", "!mpiexec -n 2 ./vadd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Collective Operations: `MPI_Reduce`\n", "\n", "You probably noticed that in our previous example, we had to repeatedly call `MPI_Recv` from the 0-rank process and add that received number to the already stored number. As it turns out, this is a very common operation in distributed memory computing, and there is a specific MPI function to accomplish exactly this. We can render this program much more readable and less error-prone by using the convenience function `MPI_Reduce`, which achieves the same effective outcome as the prior code without the explicit message management. Reduction takes a series of values from each processor, applies some operation to them, and then places the result on the root process (which doesn't have to be `0`, although it often is). This reduction can be done on any available communicator, for example using the `MPI_COMM_WORLD` communicator will perform the reduction across every process running the program.\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/MPI-Reduce.png)\n", "\n", " int MPI_Reduce(\n", " void* value, // Input value from this process\n", " void* answer, // Result -- on root process only\n", " int count, // Number of values -- usually 1\n", " MPI_Datatype datatype, // Type of data (e.g. MPI_INT) \n", " MPI_Op operation, // What to do (e.g. 
MPI_SUM) \n", " int root, // Process that receives the answer \n", " MPI_Comm comm // Use MPI_COMM_WORLD \n", " )\n", "\n", "In the context of our prior code, we can convert the code segment as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%file darts_pi.c\n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "\n", "#define N 100000\n", "#define R 1\n", "\n", "double uniform_rng() { return (double)rand() / (double)RAND_MAX; } // Not good but not the point of this exercise.\n", "\n", "int main(int argc, char *argv[]) {\n", " int rank_id, ierr, num_ranks;\n", " MPI_Status status;\n", " \n", " ierr = MPI_Init(&argc, &argv);\n", " ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);\n", " ierr = MPI_Comm_rank(MPI_COMM_WORLD, &rank_id);\n", " srand(time(NULL) + rank_id);\n", " \n", " // Calculate some number of darts thrown into the circle.\n", " int n_circle = 0;\n", " double x, y;\n", " for (int t = 0; t < N; t++) {\n", " x = uniform_rng();\n", " y = uniform_rng();\n", " if (x*x + y*y < (double)R*R) n_circle++;\n", " }\n", " \n", " // Reduce by summation over all data and output the result.\n", " int total;\n", " ierr = MPI_Reduce(&n_circle, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);\n", " if (rank_id == 0) {\n", " printf(\"With %d trials, the resulting approximation to pi = %f.\\n\", num_ranks*N, 4.0*(double)total/((double)N*(double)num_ranks));\n", " }\n", " \n", " MPI_Finalize();\n", " return 0;\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# To run this code from within the notebook.\n", "!mpicc -std=c99 -o darts_pi darts_pi.c\n", "!mpiexec -n 4 ./darts_pi" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That reduces ten significant lines of code, two `for` loops, $N_\\text{processor}$ MPI function calls, and an array allocation to a single MPI function call. Not bad. This is typical, I find, of numerical codes: well-written MPI code in a few key locations covers the bulk of your communication needs, and you rarely need to explicitly pass messages or manage processes dynamically." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## A Finite Difference Example\n", "\n", "Let's look through an integrated example which will introduce a few new functions and give you a feel for how MPI operates in a larger code base. We will utilize the finite difference method to solve Poisson's equation for an electrostatic potential:\n", "$${\\nabla}^2 \\varphi = -\\frac{\\rho_f}{\\varepsilon}$$\n", "where $\\rho_f$ are the (known) positions of charges; $\\varepsilon$ is the permittivity of the material; and $\\varphi$ is the resultant scalar electric potential field.\n", "\n", "We use a second-order central difference scheme in the discretization of the differential equation in space and take an initial guess which will require iteration to obtain a steady-state solution. 
The resulting equation for an arbitrary location in the $(x, y)$ grid (assuming uniform discretization) subject to a set of discrete point charges $\\sum \\rho_f$ is:\n", "$$\\frac{\\varphi_{i+1,j}-2\\varphi_{i,j}+\\varphi_{i-1,j}}{\\delta^2} + \\frac{\\varphi_{i,j+1}-2\\varphi_{i,j}+\\varphi_{i,j-1}}{\\delta^2} = \\frac{\\sum \\rho_f}{\\varepsilon} \\implies$$\n", "$$\\varphi_{i,j} = \\frac{1}{4} \\left(\\varphi_{i+1,j}+\\varphi_{i-1,j}+\\varphi_{i,j+1}+\\varphi_{i,j-1}\\right) - \\frac{\\delta^2}{4} \\left(\\frac{\\sum \\rho_f}{\\varepsilon}\\right) \\text{.}$$\n", "(As the discretization does not subject grid locations to nonlocal influences, it is necessary for the grid locations next to the point charges to propagate that information outward as the iteration proceeds towards a solution.)\n", "\n", "For some known distribution of charges, we can construct a matrix equation and solve it appropriately either by hand-coding an algorithm or by using a library such as [GNU Scientific Library](https://www.gnu.org/software/gsl/). In this case, to keep things fairly explicit, we won't write a matrix explicitly and will instead solve each equation in a `for` loop.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Serial Code**\n", "\n", "One of the odd things we need to consider is that because we are using point charges our grid has to overlap with them directly. (This isn't as much of a problem with distributions.) So we'll just start with a handful of point charges for now, with the boundaries from $(-1,-1)$ to $(1,1)$ set to zero." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%file fd_charge.c\n", "// This is a serial version of the code for reference.\n", "#include \n", "#include \n", "#include \n", "#include //for memcpy()\n", "#include \n", "\n", "#define BOUNDS 1.0\n", "\n", "double uniform_rng() { return (double)rand() / (double)RAND_MAX; } // Not good but not the point of this exercise.\n", "struct point_t { int x, y; double mag; };\n", "\n", "int main(int argc, char *argv[]) {\n", " int N = 10;\n", " if (argc > 1) N = atoi(argv[1]);\n", " int size = N % 2 == 0 ? 
N : N+1; // ensure odd N\n", " int iter = 10;\n", " if (argc > 2) iter = atoi(argv[2]);\n", " int out_step = 1000;\n", " if (argc > 3) out_step = atoi(argv[3]);\n", " double eps = 8.8541878176e-12; // permittivity of the material\n", " srand(time(NULL));\n", " \n", " // Initialize the grid with an initial guess of zeros as well as the coordinates.\n", " double phi[size][size]; // C99 Variable Length Arrays; compile with `-std=c99` or else use `malloc`.\n", " double oldphi[size][size];\n", " double x[size],\n", " y[size];\n", " double dx = (2*BOUNDS)/size,\n", " dy = dx;\n", " for (int i = 0; i < size; i++) {\n", " x[i] = (double)i*dx - BOUNDS;\n", " for (int j = 0; j < size; j++) {\n", " if (i == 0) y[j] = (double)j*dy - BOUNDS;\n", " phi[i][j] = 0.0;\n", " }\n", " }\n", " // Set up a few random points and magnitudes for the electrostatics problem.\n", " int K = 20;\n", " struct point_t pt_srcs[K];\n", " for (int k = 0; k < K; k++) {\n", " pt_srcs[k].x = (int)(uniform_rng() * N);\n", " pt_srcs[k].y = (int)(uniform_rng() * N);\n", " pt_srcs[k].mag = uniform_rng() * 2.0 - 1.0;\n", " printf(\"(%f, %f) @ %f\\n\", x[pt_srcs[k].x], y[pt_srcs[k].y], pt_srcs[k].mag);\n", " }\n", " \n", " // Iterate forward.\n", " int n_steps = 0; // total number of steps iterated\n", " double inveps = 1.0 / eps; // saves a division every iteration over the square loop\n", " double pt_src = 0.0; // accumulator for whether a point source is located at a specific (i,j) index site\n", " while (n_steps < iter) {\n", " memcpy(oldphi, phi, size*size);\n", " for (int i = 0; i < size; i++) {\n", " for (int j = 0; j < size; j++) {\n", " // Calculate point source contributions.\n", " pt_src = 0;\n", " for (int k = 0; k < K; k++) {\n", " pt_src = pt_src + ((pt_srcs[k].x == i && pt_srcs[k].y == j) ? pt_srcs[k].mag : 0.0);\n", " }\n", " phi[i][j] = 0.25*dx*dx * pt_src * inveps\n", " + 0.25*(i == 0 ? 0.0 : phi[i-1][j])\n", " + 0.25*(i == size-1 ? 0.0 : phi[i+1][j])\n", " + 0.25*(j == 0 ? 0.0 : phi[i][j-1])\n", " + 0.25*(j == size-1 ? 
0.0 : phi[i][j+1]);\n", " }\n", " }\n", " if (n_steps % out_step == 0) {\n", " printf(\"Iteration #%d:\\n\", n_steps);\n", " printf(\"\\tphi(%f, %f) = %24.20f\\n\", x[(int)(0.5*N-1)], y[(int)(0.25*N+1)], phi[(int)(0.5*N-1)][(int)(0.25*N+1)]);\n", " }\n", " n_steps++;\n", " }\n", " \n", " // Write the final condition out to disk and terminate.\n", " printf(\"Terminated after %d steps.\\n\", n_steps);\n", " \n", " FILE* f;\n", " f = fopen(\"./data.txt\", \"w\"); // wb -write binary\n", " if (f != NULL) {\n", " for (int i = 0; i < size; i++) {\n", " for (int j = 0; j < size; j++) {\n", " fprintf(f, \"%f\\t\", phi[i][j]);\n", " }\n", " fprintf(f, \"\\n\");\n", " }\n", " fclose(f);\n", " } else {\n", " //failed to create the file\n", " }\n", "\n", " return 0;\n", "}\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "!gcc -std=c99 -o fd_charge fd_charge.c\n", "!./fd_charge 200 30000 1000" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Use Python to visualize the result quickly.\n", "import numpy as np\n", "import matplotlib as mpl\n", "import matplotlib.pyplot as plt\n", "from matplotlib import cm\n", "%matplotlib inline\n", "#mpl.rcParams['figure.figsize']=[20,20]\n", "\n", "data = np.loadtxt('./data.txt')\n", "grdt = np.gradient(data);\n", "mx = 10**np.floor(np.log10(data.max()))\n", "\n", "fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20,20))\n", "axes[0].imshow(data, cmap=cm.seismic, vmin=-mx, vmax=mx, extent=[-1,1,-1,1])\n", "axes[1].imshow(data, cmap=cm.seismic, vmin=-mx, vmax=mx)\n", "axes[1].contour(data, cmap=cm.bwr, vmin=-mx, vmax=mx, levels=np.arange(-mx,mx+1,(2*mx)/1e2))\n", "fig.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So for a quick and dirty job (and no validation or verification of the numerics), that serial code will do. But you can already notice significant slowdown towards convergence even at 400×400 resolution (0.025 length units). To handle bigger problems, it will be necessary to parallelize.\n", "\n", "\n", "**Parallel Code**\n", "\n", "We will restrict the domain decomposition to two adjoining processes for the sake of pedagogy, although you likely wouldn't be this restrictive in a real code. Each processor will be responsible for calculating the electrostatics on its segment of the domain, and will not actually have access to the values calculated by the other processes unless they are explicitly communicated.\n", "\n", "\n", "\n", "\n", "\n", "\n", "

One possible decomposition of a domain by area and processor (this is what the code below uses, with two processes); an alternative, more local decomposition is appropriate for nearest-neighbor calculations.
\n", "\n", "Why would we need to communicate any values? Well, in this case, parallelization is nontrivial: the finite difference algorithm is nearest-neighbor based, so with a spatial decomposition of the domain we will still need to communicate boundary cells to neighboring processes. Thus we will require _ghost cells_, which refer to data which are not located on this processor natively but are retrieved and used in calculations on the boundary of the local domain.\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/ghost-cells.png)\n", "\n", "We can set these up either by having separate arrays for the ghost cells (which leads to clean messaging but messy numerical code) or having integrated arrays in the main `phi` array (which leads to nasty messaging code but nicer numerics). We will use clean messaging to clarify the message passing aspect at the cost of obfuscating the numerics a bit with nested ternary cases." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%file fd_charge.c\n", "// This is a parallel version of the code. The domain consists of a column of square domains abutting each other.\n", "#include \n", "#include \n", "#include \n", "#include //for memcpy()\n", "#include \n", "#include \n", "\n", "#define BOUNDS 1.0\n", "\n", "double uniform_rng() { return (double)rand() / (double)RAND_MAX; } // Not good but not the point of this exercise.\n", "struct point_t { int x, y; double mag; };\n", "\n", "int main(int argc, char *argv[]) {\n", " // Get number of discrete steps along domain.\n", " int size = 50;\n", " if (argc > 1) size = atoi(argv[1]);\n", " \n", " // Get number of iterations to use in solver.\n", " int iter = 10000;\n", " if (argc > 2) iter = atoi(argv[2]);\n", " \n", " // Get interval for output of test value in convergence.\n", " int out_step = iter/10;\n", " if (argc > 3) out_step = atoi(argv[3]);\n", " \n", " double eps = 8.8541878176e-12; // permittivity of the material\n", " \n", " int rank_id, ierr, num_ranks;\n", " MPI_Status status;\n", " MPI_Request request;\n", " \n", " ierr = MPI_Init(&argc, &argv);\n", " ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);\n", " ierr = MPI_Comm_rank(MPI_COMM_WORLD, &rank_id);\n", " srand(time(NULL) + rank_id);\n", " \n", " // We will restrict the domain (and thus processes) to only two neighboring processes for the sake of pedagogy.\n", " if (num_ranks != 2) {\n", " printf(\"Expected 2 processes; actual number of processes %d.\\n\", num_ranks);\n", " MPI_Abort(MPI_COMM_WORLD, MPI_ERR_OTHER);\n", " return 0;\n", " }\n", " \n", " // Calculate the processor grid preparatory to the domain partition. We'll make the assignment into a\n", " // 2 x 1 array, making determination of neighboring processes trivial.\n", " int nbr_t, nbr_b;\n", " if (rank_id == 0) {\n", " nbr_t = -1;\n", " nbr_b = 1;\n", " } else {\n", " nbr_t = 0;\n", " nbr_b = -1;\n", " }\n", " \n", " // Initialize the _local_ grid with an initial guess of zeros as well as the coordinates. 
We will keep each\n", " // local domain the same size as the original domain, 2x2 in coordinates, and just translate them to make a\n", " // larger domain---in this case, adding processes increases the domain size rather than the mesh resolution.\n", " double phi[size][size]; // potential grid---compile with `-std=c99` or else use `malloc`.\n", " double x[size], // x-domain vector\n", " y[size]; // y-domain vector\n", " double dx = (2*BOUNDS)/size,// increment in x-direction\n", " dy = dx; // increment in y-direction\n", " double base_x = -BOUNDS, // [-1,1) for both processes\n", " base_y = 2*BOUNDS*(rank_id % 2) - BOUNDS*2; // [-2,0) for process 0; [0,2) for process 1\n", " for (int i = 0; i < size; i++) {\n", " x[i] = (double)i*dx + base_x;\n", " for (int j = 0; j < size; j++) {\n", " if (i == 0) y[j] = (double)j*dy + base_y;\n", " phi[i][j] = 0.0;\n", " }\n", " }\n", " double phi_t_in[size], // ghost cell messaging arrays\n", " phi_b_in[size],\n", " phi_t_out[size],\n", " phi_b_out[size];\n", " for (int i = 0; i < size; i++) {\n", " phi_t_in[i] = 0.0;\n", " phi_b_in[i] = 0.0;\n", " }\n", " \n", " // Set up a few random points and magnitudes for the electrostatics problem.\n", " int K = 10;\n", " struct point_t pt_srcs[K];\n", " for (int k = 0; k < K; k++) {\n", " (uniform_rng()); // demonstration that the RNG is bad: without this the first number is the same on every process\n", " int x_rn = (int)(uniform_rng() * size);\n", " int y_rn = (int)(uniform_rng() * size);\n", " pt_srcs[k].x = x_rn;\n", " pt_srcs[k].y = y_rn;\n", " pt_srcs[k].mag = uniform_rng() * 2.0 - 1.0;\n", " }\n", " \n", " // Iterate forward.\n", " int n_steps = 0; // total number of steps iterated\n", " double inveps = 1.0 / eps; // saves a division every iteration over the square loop\n", " double pt_src = 0.0; // accumulator for whether a point source is located at a specific (i,j) index site\n", " while (n_steps < iter) {\n", " // Propagate the matrix equation forward towards a solution.\n", " for (int i = 0; i < size; i++) {\n", " for (int j = 0; j < size; j++) {\n", " // Calculate point source contributions.\n", " pt_src = 0;\n", " for (int k = 0; k < K; k++) {\n", " pt_src = pt_src + ((pt_srcs[k].x == i && pt_srcs[k].y == j) ? pt_srcs[k].mag : 0.0);\n", " }\n", " phi[i][j] = 0.25*dx*dx * pt_src * inveps\n", " + 0.25*(i == 0 ? 0.0 : phi[i-1][j])\n", " + 0.25*(i == size-1 ? 0.0 : phi[i+1][j])\n", " + 0.25*(j == 0 ? (nbr_b < 0 ? 0.0 : phi_b_in[i]) : phi[i][j-1])\n", " + 0.25*(j == size-1 ? (nbr_t < 0 ? 0.0 : phi_t_in[i]) : phi[i][j+1]);\n", " }\n", " }\n", " \n", " // Communicate the border cell information to neighboring processes. We will alternate odd and even processes,\n", " // although there are more sophisticated ways to do this with nonblocking message passing. 
Why do we alternate?\n", " for (int i = 0; i < size; i++) {\n", " phi_t_out[i] = phi[i][size-1];\n", " phi_b_out[i] = phi[i][0];\n", " }\n", " // Pass data up.\n", " if (nbr_t >= 0) MPI_Isend(phi_t_out, size, MPI_DOUBLE, nbr_t, 0, MPI_COMM_WORLD, &request);\n", " if (nbr_b >= 0) MPI_Irecv(phi_b_in, size, MPI_DOUBLE, nbr_b, MPI_ANY_TAG, MPI_COMM_WORLD, &request);\n", " \n", " // Pass data down.\n", " if (nbr_b >= 0) MPI_Isend(phi_b_out, size, MPI_DOUBLE, nbr_b, 0, MPI_COMM_WORLD, &request);\n", " if (nbr_t >= 0) MPI_Irecv(phi_t_in, size, MPI_DOUBLE, nbr_t, MPI_ANY_TAG, MPI_COMM_WORLD, &request);\n", " \n", " MPI_Barrier(MPI_COMM_WORLD);\n", " \n", " // Output information periodically.\n", " if (rank_id == 0 && n_steps % out_step == 0) {\n", " printf(\"Iteration #%d:\\n\", n_steps);\n", " printf(\"\\tphi(%6.3f, %6.3f) = %24.20f\\n\", x[(int)(0.5*size-1)], y[(int)(0.25*size+1)], phi[(int)(0.5*size-1)][(int)(0.25*size+1)]);\n", " }\n", " n_steps++;\n", " }\n", " if (rank_id == 0) {\n", " printf(\"Terminated after %d steps.\\n\", n_steps);\n", " printf(\"\\tphi(%6.3f, %6.3f) = %24.20f\\n\", x[(int)(0.5*size-1)], y[(int)(0.25*size+1)], phi[(int)(0.5*size-1)][(int)(0.25*size+1)]);\n", " }\n", " \n", " // Write the final condition out to disk and terminate.\n", " // Parallel I/O, while supported by MPI, is a whole other ball game and we won't go into that here.\n", " // Thus we will gather all of the data to process rank 0 and output it from there.\n", " double phi_vector[size*2*size];\n", " // We actually have to transpose each process's data into a column-major format to align properly.\n", " double phi_trans[size][size];\n", " for (int i = 0; i < size; i++) {\n", " for (int j = 0; j < size; j++) {\n", " phi_trans[size-j-1][i] = phi[i][j];\n", " }\n", " }\n", " MPI_Gather(&phi_trans[0][0], size*size, MPI_DOUBLE, &phi_vector[0], size*size, MPI_DOUBLE, 0, MPI_COMM_WORLD);\n", " // At this point, the data are in one array on process rank 0 but not in a two-dimensional array, so fix that.\n", " double phi_new[size][size*num_ranks];\n", " int offset;\n", " for (int p = 0; p < num_ranks; p++) { // source process for data\n", " offset = p*size;\n", " for (int i = 0; i < size; i++) { // x-index of process data\n", " for (int j = 0; j < size; j++) { // y-index of process data\n", " phi_new[offset+i][j] = phi_vector[p*size*size+i*size+j];\n", " }\n", " }\n", " }\n", " \n", " if (rank_id == 0) {\n", " FILE* f;\n", " char filename[16];\n", " f = fopen(\"data.txt\", \"w\"); // wb -write binary\n", " if (f != NULL) {\n", " for (int i = 0; i < 2*size; i++) {\n", " for (int j = 0; j < size; j++) {\n", " fprintf(f, \"%f\\t\", phi_new[i][j]);\n", " }\n", " fprintf(f, \"\\n\");\n", " }\n", " fclose(f);\n", " } else {\n", " //failed to create the file\n", " }\n", " }\n", " \n", " MPI_Finalize();\n", " return 0;\n", "}\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "!mpicc -std=c99 -o fd_charge fd_charge.c\n", "!mpiexec -np 2 ./fd_charge 100 10001 1000" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Use Python to visualize the result quickly.\n", "import numpy as np\n", "import matplotlib as mpl\n", "import matplotlib.pyplot as plt\n", "from matplotlib import cm\n", "%matplotlib inline\n", "\n", "data = np.loadtxt('./data.txt')\n", "#mx = 10**np.floor(np.log10(data.max()))\n", "mx = 0.5*data.max()\n", "\n", "fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8,8))\n", 
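"# Left panel: filled map of the potential; right panel: contour view of the same data.\n",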
"axes[0].imshow( data, cmap=cm.seismic, vmin=-mx, vmax=mx)\n", "axes[1].contour(data[::-1,:], cmap=cm.bwr, vmin=-mx, vmax=mx, levels=np.arange(-8*mx,8*mx,mx/10))\n", "fig.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- What happens if you neglect to include the following conditional around the file output?\n", "\n", "\n", " if (rank_id == 0) { ... }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Domains naturally do not overlap in this case:\n", "\n", " 0: (-2.00,-2.00)-(-0.01,-0.01)\n", " 1: ( 0.00,-2.00)-( 1.99,-0.01)\n", " 2: (-2.00, 0.00)-(-0.01, 1.99)\n", " 3: ( 0.00, 0.00)-( 1.99, 1.99)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### The many flavors of `MPI_Send`, `MPI_Recv`\n", "\n", "The vanilla MPI send and receive operations are _blocking_, meaning that when you send a message the process waits at that point until an acknowledgment of receipt is sent by the receiving process. As you can imagine, this both slows down your code and makes even simple messaging schemes cumbersome (what happens if you tell every process to send at the same time and receive at the next function call?). In order to prevent this kind of deadlock, we use _nonblocking_ communications, which do not block execution—they just fire their message into the ether and proceed ahead.\n", "\n", "As before, there are a lot of variants, but the basic two are `MPI_Isend` and `MPI_Irecv`.\n", "\n", " int MPI_Isend(\n", " void* message, // Pointer to data to send\n", " int count, // Number of data values to send\n", " MPI_Datatype datatype, // Type of data (e.g. MPI_INT)\n", " int destination_rank, // Rank of process to receive message\n", " int tag, // Identifies message type\n", " MPI_Comm comm, // Use MPI_COMM_WORLD\n", " MPI_Request* request // To query about the status of the message\n", " )\n", " \n", " int MPI_Irecv(\n", " void* message, // Points to location in memory where\n", " // received message is to be stored.\n", " int count, // MAX number of data values to accept \n", " MPI_Datatype datatype // Type of data (e.g. MPI_INT) \n", " int source_rank, // Rank of process to receive from\n", " // (Use MPI_ANY_SOURCE to accept\n", " // from any sender)\n", " int tag, // Type of message to receive\n", " // (Use MPI_ANY_TAG to accept any type)\n", " MPI_Comm comm, // Use MPI_COMM_WORLD\n", " MPI_Request* request // To query about the status of the message\n", " )\n", "\n", "There's a lot of information there, and none of it optional. Let's unpack the arguments a little more with a trivial example that implements the previous dart-throwing example.\n", "\n", "But wait, nonblocking messages sound so much easier, what is the purpose of having blocking messages? To explain this point, it might be beneficial to briefly explain the four modes for sending messages and how the send is actually implemented:\n", "* Buffered - send can initiate regardless of whether matching receive is ready. Information to send is just placed in a new buffer to wait until the receive is ready.\n", "* Synchronous - send can initiate regardless of whether matching receive is ready. However, the send won't \"complete\" until the receive is ready.\n", "* Ready - send can only initiate if matching send is ready. If the receive is not ready, an error occurs.\n", "* Standard - will behave as either buffered or synchronous, depending on implementation of MPI and available memory.\n", "\n", "Let's illustrate the importance of blocking with the following example. 
Here we use a nonblocking synchronous send to pass a number between two processes, but we know that process 1 isn't ready for the message yet when process 0 wants to send it." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%file favorite_number.c\n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "\n", "\n", "int main(int argc, char *argv[]) {\n", " int rank_id, ierr, num_ranks, favorite_number;\n", " MPI_Status status; MPI_Request request;\n", " \n", " ierr = MPI_Init(&argc, &argv);\n", " ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);\n", " ierr = MPI_Comm_rank(MPI_COMM_WORLD, &rank_id);\n", " srand(time(NULL) + rank_id);\n", "\n", " if (rank_id == 0)\n", " {\n", " // State what my favorite number is\n", " favorite_number = 7;\n", " // Let me mull it over first\n", " sleep(3);\n", " // Yep, time to tell process 1 what my favorite number is\n", " ierr = MPI_Send(&favorite_number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);\n", "\n", " }\n", " if (rank_id == 1)\n", " {\n", " // Let's guess what the favorite number is\n", " favorite_number = 9;\n", " // Now see what process 0 said our favorite number is\n", " ierr = MPI_Irecv(&favorite_number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);\n", " }\n", " \n", " printf (\"Process 0's favorite number is %i according to process %i\\n\",favorite_number,rank_id);\n", " \n", " MPI_Finalize();\n", " return 0;\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# To run this code from within the notebook.\n", "!mpicc -std=c99 -o favorite_number favorite_number.c\n", "!mpiexec -n 2 ./favorite_number" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I specifically spent a message from process 0 to process 1 indicating that the favorite number is 7, but process 1 still says the favorite number is 1. The problem is that process 1 didn't wait to see if the message was actually received before it got to the print statement. Since we used a nonblocking receive, MPI returned control to the program without ensuring validity of the receive buffer. So process 1 had finished running through the program before the message was ever sent.\n", "\n", "What about the case where we have some unrelated work to do right after the receive? We'd like to be able to perform this work while we wait for the message to arrive (overlapping computation and communication, an important concept in parallel programming), and since we're not using the same variable, we don't have to worry about accessing the buffer prematurely. In fact, this is the true purpose of the nonblocking communications. We can check whether the receive has completed after performing these unrelated calculations by using the MPI_Test or MPI_Wait functions. Both of these functions use the `MPI_Request` argument to check the status of the nonblocking communication. MPI_Test will provide a flag indicating whether the nonblocking command has finished or not, and MPI_Wait will prevent the process from continuing until the communication has completed. Try it in this next example." 
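, "\n", "For reference, the prototypes of these two functions are:\n", "\n", "    int MPI_Wait(\n", "        MPI_Request* request,  // The request returned by the nonblocking call\n", "        MPI_Status* status     // To receive info about the completed message\n", "    )\n", "\n", "    int MPI_Test(\n", "        MPI_Request* request,  // The request returned by the nonblocking call\n", "        int* flag,             // Set to a nonzero value if the operation has completed\n", "        MPI_Status* status     // To receive info about the message, if complete\n", "    )\n"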
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%file favorite_number.c\n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "\n", "\n", "int main(int argc, char *argv[]) {\n", " int rank_id, ierr, num_ranks, favorite_number;\n", " MPI_Status status; MPI_Request request;\n", " \n", " ierr = MPI_Init(&argc, &argv);\n", " ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);\n", " ierr = MPI_Comm_rank(MPI_COMM_WORLD, &rank_id);\n", " srand(time(NULL) + rank_id);\n", "\n", " if (rank_id == 0)\n", " {\n", " // State what my favorite number is\n", " favorite_number = 7;\n", " // Let me mull it over first\n", " sleep(3);\n", " // Yep, time to tell process 1 what my favorite number is\n", " ierr = MPI_Send(&favorite_number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);\n", "\n", " }\n", " if (rank_id == 1)\n", " {\n", " // Let's guess what the favorite number is\n", " favorite_number = 9;\n", " // Now see what process 0 said our favorite number is\n", " ierr = MPI_Irecv(&favorite_number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);\n", " printf(\"This is process 1 just waiting for the message from process 0\\n\");\n", " printf(\"As a side note, the value of 5^5 is: %i\\n\",5*5*5*5*5);\n", " ierr = MPI_Wait(&request, &status);\n", " }\n", " \n", " printf (\"Process 0's favorite number is %i according to process %i\\n\",favorite_number,rank_id);\n", " \n", " MPI_Finalize();\n", " return 0;\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# To run this code from within the notebook.\n", "!mpicc -std=c99 -o favorite_number favorite_number.c\n", "!mpiexec -n 2 ./favorite_number" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## More Collective Operations: `MPI_Barrier`, `MPI_Gather`, `MPI_Bcast`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### `MPI_Barrier`\n", "Now, with nonblocking operations we can easily run into a situation where we need to know that the processes aren't getting to far ahead of each other. For example, we may want to require that no process starts the next iteration of a loop before all processes have finished the loop. In that case, `MPI_Barrier` defines a point at which all processes will wait until they are synchronized with each other again. 
Barriers slow code down, however, and so they should be used sparingly.\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/MPI-Barrier.gif)\n", "\n", " int MPI_Barrier(\n", " MPI_Comm comm // Use MPI_COMM_WORLD \n", " )\n", "\n", "### Exercise\n", "\n", "- What is the difference between the following two code snippets?\n", "\n", "\n", " // Pass data left.\n", " if (nbr_l >= 0) MPI_Isend(phi_l_out, size, MPI_DOUBLE, nbr_l, 0, MPI_COMM_WORLD, &request);\n", " if (nbr_r >= 0) MPI_Irecv(phi_r_in, size, MPI_DOUBLE, nbr_r, MPI_ANY_TAG, MPI_COMM_WORLD, &request);\n", " if (nbr_l >= 0 && n_steps == 1000) printf(\"%d: %d sending left %f to %d\\n\", n_steps, rank_id, phi_l_out[5], nbr_l);\n", " if (nbr_r >= 0 && n_steps == 1000) printf(\"%d: %d recving right %f of %d\\n\", n_steps, rank_id, phi_r_in[5], nbr_r);\n", " MPI_Barrier(MPI_COMM_WORLD);\n", "\n", " // Pass data left.\n", " if (nbr_l >= 0) MPI_Isend(phi_l_out, size, MPI_DOUBLE, nbr_l, 0, MPI_COMM_WORLD, &request);\n", " if (nbr_r >= 0) MPI_Irecv(phi_r_in, size, MPI_DOUBLE, nbr_r, MPI_ANY_TAG, MPI_COMM_WORLD, &request);\n", " MPI_Barrier(MPI_COMM_WORLD);\n", " if (nbr_l >= 0 && n_steps == 1000) printf(\"%d: %d sending left %f to %d\\n\", n_steps, rank_id, phi_l_out[5], nbr_l);\n", " if (nbr_r >= 0 && n_steps == 1000) printf(\"%d: %d recving right %f of %d\\n\", n_steps, rank_id, phi_r_in[5], nbr_r);\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### `MPI_Gather`\n", "\n", "Previously, we examined the function `MPI_Reduce`, which took data from every process, transformed them by some operation like addition or multiplication, and placed the result in a variable on `root`. In the case above, we needed to gather data from every process _without_ transforming the data or collapsing it from an array to a single value.\n", "\n", "`MPI_Gather` takes arrays from every process and combines them (by rank) into a new array on `root`. The new array, as we saw above, is ordered by rank, meaning that if a different topology is in use you have to reorder the data afterwards (which we did).\n", "\n", "![](https://raw.githubusercontent.com/maxim-belkin/hpc-sp16/gh-pages/lessons/mpi/img/MPI-Gather-vec.png)\n", "\n", " int MPI_Gather(\n", " void* sendbuf, // Starting address of send buffer\n", " int sendcnt, // Number of elements in send buffer\n", " MPI_Datatype sendtype, // Type of data (e.g. MPI_INT) \n", " void* recvbuf, // Starting address of receive buffer\n", " int recvcnt, // Number of elements for any single receive\n", " MPI_Datatype recvtype, // Type of data (e.g. MPI_INT) \n", " int root, // Process that receives the answer \n", " MPI_Comm comm // Use MPI_COMM_WORLD \n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### `MPI_Bcast`\n", "\n", "What if we need to do the opposite of a reduction or gather? That is, instead of collecting all variables on one process, maybe we need to share a variable stored on one process with all the other processes. We could send the variable to each other process individually, but that requires either repeated calls to `MPI_Send` or a for loop on the sender, plus a matching receive on every non-source process. Instead, we can do this in one call with a broadcast, specifically the function `MPI_Bcast`.\n", "\n", " int MPI_Bcast(\n", " void* buffer, // Starting address of buffer\n", " int count, // Number of elements to broadcast\n", " MPI_Datatype datatype, // Type of data (e.g. MPI_INT) \n", " int root, // The process doing the broadcasting \n", " MPI_Comm comm // Use MPI_COMM_WORLD \n", " )" ] }
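, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sketch of the two collectives working together (the names `local`, `all`, `n_local`, and `dt` are placeholders, not variables from the earlier programs): every rank contributes `n_local` values to rank 0, and rank 0 then shares a single parameter with everyone.\n", "\n", "    double local[n_local], all[n_local*num_ranks], dt;\n", "    MPI_Gather(local, n_local, MPI_DOUBLE, all, n_local, MPI_DOUBLE, 0, MPI_COMM_WORLD);\n", "    if (rank_id == 0) dt = 0.001;                      // only root knows the value here\n", "    MPI_Bcast(&dt, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);  // now every rank holds dt\n", "\n", "Note that every rank in the communicator must call the collective (`MPI_Gather` here, then `MPI_Bcast`), even though only `root` ends up with the gathered array.\n" ] }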
, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## Scaling\n", "\n", "_Scaling_ refers to how efficiently your code retains performance as both the problem size $N$ and the number of processes $P$ increase. It is typically divided into _strong_ and _weak_ scaling.\n", "\n", "**Strong Scaling**: How does the performance behave as the number of processors $P$ increases for a fixed total problem size $N$?\n", "\n", "This is what most people think of as scaling, and it describes the overall performance of a code as a problem is subdivided into smaller and smaller pieces among processes. To test this, select a problem size suitable for scaling to a large number of processes (for instance, one divisible by 64), and simply submit larger and larger jobs while outputting timing information.\n", "\n", "**Weak Scaling**: How does the performance behave as the number of processors $P$ varies with each processor handling a fixed amount of work, $N_P = N/P = \\text{const}$?\n", "\n", "In this case (which is most appropriate for $O(N)$ algorithms but can be used for any system ([ref](https://web.archive.org/web/20140307224104/http://www.stfc.ac.uk/cse/25052.aspx))), we learn more about the relative overhead incurred by including additional machines in the problem solution. Weak scaling can be tested in a straightforward manner by growing the problem in proportion to the number of processes, so that each process is always responsible for the same-sized piece of the puzzle. With strongly decomposable systems, like molecular dynamics or PDE solution on meshes, weak scaling can be revealing. However, in my experience there are many types of problems for which solving for $N$ and solving for $N+1$ are radically different problems, such as in density functional theory, where the addition of an electron means the solution of a fundamentally different physical system." ] }
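, { "cell_type": "markdown", "metadata": {}, "source": [ "To make these notions quantitative (the standard definitions, not specific to this lesson): if $T_P$ is the wall-clock time on $P$ processes, the strong-scaling speedup and parallel efficiency are\n", "\n", "$$S(P) = \\frac{T_1}{T_P}, \\qquad E(P) = \\frac{S(P)}{P},$$\n", "\n", "while for weak scaling ($N_P = N/P$ held constant) one usually reports $E(P) = T_1/T_P$ directly, so a perfectly weak-scaling code keeps $T_P \\approx T_1$. A minimal way to measure $T_P$ is to bracket the region of interest with `MPI_Wtime` (a sketch, reusing `rank_id` from the earlier examples):\n", "\n", "    double t0 = MPI_Wtime();\n", "    // ... the work being timed ...\n", "    double t1 = MPI_Wtime();\n", "    if (rank_id == 0) printf(\"elapsed: %f s\\n\", t1 - t0);\n" ] }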
, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## Memory and C99 Variable-Length Arrays\n", "\n", "Are the following two segments of code equivalent in C99?\n", "\n", "**Snippet 1**\n", "\n", " int m = 5, n = 10, p = 7;\n", " double array[m][n][p];\n", " \n", "**Snippet 2**\n", "\n", " int m = 5, n = 10, p = 7;\n", " double ***array;\n", " array = (double***) malloc(m*sizeof(double**));\n", " for (int i = 0; i < m; i++) {\n", " array[i] = (double**) malloc(n*sizeof(double*));\n", " for (int j = 0; j < n; j++) {\n", " array[i][j] = (double*) malloc(p*sizeof(double));\n", " }\n", " }\n", "\n", "They _aren't_—but it's very subtle _why_. The two snippets are _not_ equivalent from a memory standpoint because of where they allocate `array`: the first allocates `array` on the stack, the second on the heap. (The first is also a single contiguous block of doubles, while the second stitches together many separate allocations with pointers.)\n", "\n", "Application memory is conventionally divided into two regions: the _stack_ and the _heap_. For many applications it doesn't really matter which you use, but when you start defining very large arrays of data or using multiple threads in parallel then it can become critical to manage your memory well. ([ref1](https://stackoverflow.com/questions/79923/what-and-where-are-the-stack-and-heap)) ([ref2](https://stackoverflow.com/questions/22555639/mpi-communicate-large-two-dimensional-arrays))\n", "\n", "- **Stack** memory is specific to a thread, and is generally fixed in size. It is often faster to allocate in, and the allocation code is easier to read.\n", "\n", "- **Heap** memory is shared among all threads, and grows as demand requires.\n", "\n", "C99 variable-length arrays (VLAs) allocate on the stack. Since you can overflow the stack, VLAs are probably not a good idea for serious numeric code. They do make code so much more readable that I opted for them in this lesson.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "- Implement the parallel prefix sum algorithm depicted graphically above. Feel free to copy code snippets from earlier to structure your code and then make it work." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%file parprefix.c\n", "// your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## Passing Vectors\n", "\n", "MPI is designed to work straightforwardly with C-style arrays. Oftentimes, however, we are working with more complex data types, such as objects or C++ STL `vector`s. We can pass these between processes as well.\n", "\n", "Recollect the format of the `MPI_Send` command:\n", "\n", " int MPI_Send(\n", " void* message, // Pointer to data to send\n", " int count, // Number of data values to send\n", " MPI_Datatype datatype, // Type of data (e.g. MPI_INT)\n", " int destination_rank, // Rank of process to receive message\n", " int tag, // Identifies message type\n", " MPI_Comm comm // Use MPI_COMM_WORLD\n", " )\n", "\n", "A regular C-style array is sent thus:\n", "\n", " double array[5];\n", " MPI_Send(array, 5, MPI_DOUBLE, nbr, 0, MPI_COMM_WORLD);\n", "\n", "You may thus hope that the following would work:\n", "\n", " std::vector<double> vector(5, 1.0);\n", " MPI_Send(vector, vector.size(), MPI_DOUBLE, nbr, 0, MPI_COMM_WORLD);\n", "\n", "but it turns out that this doesn't even compile: a `std::vector` is not a pointer to its data, and the address of the `vector` object is not (in general) the start of the array of data it contains. In fact, the `vector` itself (i.e. the header information) is often stored on the stack (as is the case here), while the array of data lives on the heap. There are a couple of options to tackle this problem. One is to take the address of the first element in the vector, `&vector[0]`; this works because the data in a `std::vector` are guaranteed to be contiguous.\n", "\n", " MPI_Send(&vector[0], vector.size(), MPI_DOUBLE, nbr, 0, MPI_COMM_WORLD);\n", "\n", "Another option is to use the `data()` member function of the vector class, `vector.data()`, which returns a pointer to the first entry in the `vector`.\n", "\n", " MPI_Send(vector.data(), vector.size(), MPI_DOUBLE, nbr, 0, MPI_COMM_WORLD);" ] }
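, { "cell_type": "markdown", "metadata": {}, "source": [ "On the receiving side you must size the vector before the data land in it. A sketch (assuming the same `std::vector<double> vector` and neighbor rank `nbr` as above): use `MPI_Probe` and `MPI_Get_count` to discover the incoming length, resize, and then receive directly into `vector.data()`.\n", "\n", "    MPI_Status status;\n", "    MPI_Probe(nbr, 0, MPI_COMM_WORLD, &status);\n", "    int count;\n", "    MPI_Get_count(&status, MPI_DOUBLE, &count);\n", "    vector.resize(count);\n", "    MPI_Recv(vector.data(), count, MPI_DOUBLE, nbr, 0, MPI_COMM_WORLD, &status);\n" ] }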
, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## Resources\n", "\n", "- [DeinoMPI documentation](http://mpi.deino.net/mpi_functions/). The most complete and thorough documentation (with examples) of MPI-2 functions.\n", "\n", "- [UMinn tutorial](http://static.msi.umn.edu/tutorial/scicomp/general/MPI/content1.html)\n", "\n", "\n", "### Where to go next\n", "\n", "Where should you go next? Look into the [Training Roadmap at hpcuniversity](http://hpcuniversity.org/roadmap/) for an overview of HPC software development and usage—it covers a lot of gaps we inevitably left in our whirlwind tour of Campus Cluster. NICS at the University of Tennessee also archives their [HPC seminar series](https://www.nics.tennessee.edu/hpc-seminar-series/). The [Parallel Computing Institute](http://parallel.illinois.edu/education) located here is a premier center for parallel computing research. Finally, there are [a number of courses here](https://wiki.cites.illinois.edu/wiki/display/parcomp/Existing+Courses+in+Parallel+Computing) you can take as well to learn the theory of parallel computing:\n", "\n", "- ECE 408/CS 483 *Applied Parallel Programming*\n", "\n", "- ECE 492/CS 420/CSE 402 *Introduction to Parallel Programming for\n", " Scientists and Engineers*\n", "\n", "- ECE 428/CS 425/CSE 424 *Distributed Systems*\n", "\n", "- CS 524 *Concurrent Programming Languages*\n", "\n", "- CS 525 *Advanced Topics in Distributed Systems*\n", "\n", "- CS 533 *Parallel Computer Architectures*\n", "\n", "- CS 554/CSE 512 *Parallel Numerical Algorithms*\n", "\n", "- ECE 598HK/CS 598HK *Computational Thinking for Many-Core Computing*\n", "\n", "- Coursera—[High Performance Scientific Computing](https://www.coursera.org/course/scicomp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## Credits\n", "\n", "Neal Davis, Maxim Belkin, and Darin Peetz developed these materials for [Computational Science and Engineering](http://cse.illinois.edu/) at the University of Illinois at Urbana–Champaign.\n", "\n", "\n", "This content is available under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).\n", "\n", "[![](https://bytebucket.org/davis68/resources/raw/f7c98d2b95e961fae257707e22a58fa1a2c36bec/logos/baseline_cse_wdmk.png?token=be4cc41d4b2afe594f5b1570a3c5aad96a65f0d6)](http://cse.illinois.edu/)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## List of MPI Functions\n", "**Initialization & Cleanup**\n", "\n", "`MPI_Init`\n", "\n", "`MPI_Finalize`\n", "\n", "`MPI_Abort`\n", "\n", "`MPI_Comm_rank`\n", "\n", "`MPI_Comm_size`\n", "\n", "`MPI_COMM_WORLD`\n", "\n", "`MPI_Wtime`\n", "\n", "**Message Passing**\n", "\n", "`MPI_Status`\n", "\n", "`MPI_Probe`\n", "\n", "**Point-to-Point**\n", "\n", "These are the \"other\" sending modes mentioned earlier: explicit buffering (`B`), ready sending, for which a matching receive must already have been posted (`R`), and synchronous sending, which completes only once the matching receive has started (`S`). 
([ref](http://www.mcs.anl.gov/research/projects/mpi/sendmode.html))\n", "\n", "`MPI_Send`\n", "\n", "`MPI_Bsend`\n", "\n", "`MPI_Ssend`\n", "\n", "`MPI_Rsend`\n", "\n", "`MPI_Isend`\n", "\n", "`MPI_Ibsend`\n", "\n", "`MPI_Irsend`\n", "\n", "\n", "\n", "`MPI_Recv`\n", "\n", "`MPI_Irecv`\n", "\n", "\n", "\n", "`MPI_Sendrecv`\n", "\n", "**Collective**\n", "\n", "`MPI_Bcast`\n", "\n", "`MPI_Gather`\n", "\n", "`MPI_Scatter`\n", "\n", "`MPI_Allgather` / `MPI_Allgatherv`\n", "\n", "`MPI_Allreduce`\n", "\n", "`MPI_Alltoall` / `MPI_Alltoallv` / `MPI_Alltoallw`\n", "\n", "`MPI_Reduce_scatter`\n", "\n", "`MPI_Scan`\n", "\n", "\n", "\n", "`MPI_Barrier`\n", "\n", "**Advanced**\n", "\n", "*Derived Datatypes and Operations*\n", "\n", "`MPI_Pack`\n", "\n", "`MPI_Type_vector`\n", "\n", "`MPI_Type_contiguous`\n", "\n", "`MPI_Type_commit`\n", "\n", "`MPI_Type_free`\n", "\n", "\n", "\n", "`MPI_Op_create`\n", "\n", "`MPI_User_function`\n", "\n", "`MPI_Op_free`\n", "\n", "Robey’s [Kahan sum](http://www.sciencedirect.com/science/article/pii/S0167819111000238)\n", "\n", "\n", "\n", "`MPI_Comm_create`\n", "\n", "`MPI_Scan`\n", "\n", "`MPI_Exscan`\n", "\n", "`MPI_Comm_free`\n", "\n", "**I/O**\n", "\n", "`MPI_File_*`\n", "\n", "**One-Sided Communication**\n", "\n", "`MPI_Put` writes to remote memory\n", "\n", "`MPI_Get` reads from remote memory\n", "\n", "`MPI_Accumulate` performs a reduction on the same memory across multiple tasks" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## Legacy Versions\n", "- MPI basics\n", "\n", " - *MPI-1 supports the classical message-passing programming model:\n", " basic point-to-point communication, collectives, datatypes, etc.*\n", "\n", "- MPI-2\n", "\n", " - C/C++/F90, MPI+threads, MPI-I/O, remote memory access\n", "\n", "- MPI-3\n", "\n", " - Nonblocking collectives, neighborhood collectives, better\n", " one-sided communication, tools, F2008 bindings\n", "\n", " - Deprecated C++ bindings\n", "\n", "- Versions\n", "\n", "|**Major Edition** | **Year** | **Languages**\n", "| ----------------- |:--------:| ------------------\n", "|MPI-1 | 1992 |C (ANSI), F77\n", "|MPI-2 | 1997 |C (ISO), C++, F90\n", "|MPI-3 | 2012 |C (ISO), F90, F08\n", "\n", "- You may wonder why we are worrying so much about legacy versions of\n", " MPI. A major factor is that scientific code persists much longer\n", " than much other code: it is not unreasonable to suggest that F77 or\n", " even F66 code could be required to compile and run today on a Los\n", " Alamos supercomputer, for instance.\n", "\n", "- Just worry about MPI-1.3, MPI-2.2, and MPI-3.0." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }