----------------------------------------------------------------------
                              KNOWN ISSUES
----------------------------------------------------------------------

### CH4

* Build issues
    - Embedded ucx or libfabric builds will not succeed with Solaris
      compilers.

* Dynamic process support (see the spawn sketch at the end of this
  section)
    - Requires the fi_tagged capability in the OFI netmod.
    - Not supported in the UCX netmod at this time.

* Test failures
    - CH4 does not currently pass 100% of the test suite.
    - Which tests fail differs based on the experimental setup and the
      network support built in.

### OS/X

* C++ bindings
    - Exception handling in the C++ bindings is broken because of a
      bug in libc++ (see
      https://github.com/android-ndk/ndk/issues/289).  You can work
      around this issue by explicitly linking your application to
      libstdc++ instead (e.g., mpicxx -o foo foo.cxx -lstdc++).

* GCC builds
    - When using Homebrew or MacPorts gcc (not the default gcc, which
      is just a symlink to clang), a bug in gcc
      (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40838) causes
      stack alignment issues in some cases, leading to segmentation
      faults.  You can work around this issue by building mpich with
      clang, or by passing a lower optimization level than the default
      (-O2).  Currently, this bug only seems to show up when the
      --disable-error-checking flag is passed to configure.

### Fine-grained thread safety

* ch3:sock does not (and will not) support fine-grained threading.

* MPI-IO APIs are not currently thread-safe when using fine-grained
  threading (--enable-thread-cs=per-object; see the initialization
  sketch at the end of this section).

* ch3:nemesis:tcp fine-grained threading is still experimental and may
  have correctness or performance issues.  Known correctness issues
  include dynamic process support and generalized request support.

### Lacking channel-specific features

* ch3 does not presently support communication across heterogeneous
  platforms (e.g., a big-endian machine communicating with a
  little-endian machine).

* Support for the "external32" data representation is incomplete.
  This affects the MPI_Pack_external and MPI_Unpack_external routines,
  as well as the external data representation capabilities of ROMIO.
  In particular, noncontiguous user buffers can consume egregious
  amounts of memory in the MPI library, and any type whose width
  differs between the native representation and the external32
  representation will likely cause data corruption.  The following
  ticket contains some additional information (see also the packing
  sketch at the end of this section):
  http://trac.mpich.org/projects/mpich/ticket/1754

* ch3 has known problems in some cases when threading and dynamic
  processes are used together on communicators of size greater than
  one.

### Performance issues

* MPI_Irecv operations that are not explicitly completed before
  MPI_Finalize is called may fail to complete before MPI_Finalize
  returns, and thus never complete.  Furthermore, any matching send
  operations may erroneously fail.  By "explicitly completed" we mean
  that the request associated with the operation is completed by one
  of the MPI_Test or MPI_Wait routines (see the completion sketch
  below).
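The spawn sketch referenced under CH4 above: a minimal, hypothetical
example of the dynamic process usage those limitations apply to.  The
"./child" executable name is a placeholder; on a ch4:ofi build this
call requires an OFI provider with the fi_tagged capability, and on
ch4:ucx it is not supported at this time.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;

    MPI_Init(&argc, &argv);

    /* Spawn two copies of a (hypothetical) child executable; the
     * resulting intercommunicator connects parents and children. */
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}
```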
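The initialization sketch referenced under fine-grained thread safety:
a minimal example of requesting MPI_THREAD_MULTIPLE, the usage mode the
per-object caveats above apply to.  It assumes a build configured with
--enable-threads=multiple in addition to --enable-thread-cs=per-object.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Request full multithreaded support and check what was granted. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... application threads may now call MPI concurrently, subject
     * to the MPI-IO and ch3:nemesis:tcp caveats listed above ... */

    MPI_Finalize();
    return 0;
}
```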
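The packing sketch referenced under the external32 item: a minimal
example of the MPI_Pack_external call sequence.  A contiguous buffer of
a fixed-width type, as here, is the least problematic case; the memory
and corruption problems described above arise with noncontiguous
buffers and width-varying types.

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int values[4] = {1, 2, 3, 4};
    MPI_Aint size, position = 0;
    void *packbuf;

    MPI_Init(&argc, &argv);

    /* Ask how much space the data needs in external32 format. */
    MPI_Pack_external_size("external32", 4, MPI_INT, &size);
    packbuf = malloc((size_t) size);

    /* Pack into the portable external32 representation. */
    MPI_Pack_external("external32", values, 4, MPI_INT,
                      packbuf, size, &position);

    free(packbuf);
    MPI_Finalize();
    return 0;
}
```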
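The completion sketch referenced under performance issues: a minimal
example (run with at least two processes) of explicitly completing an
MPI_Irecv with MPI_Wait before MPI_Finalize is called.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Irecv(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* Completing the request here is what keeps the receive (and
         * the matching send) from being left incomplete at finalize. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```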