Reference
1 Environment
mpi.finalize()-
Closes the MPI state.
This function must be called by all processes before the program exits.
mpi.finalized()-
Returns true if the MPI state has been closed; otherwise returns false.
mpi.get_version()-
Returns the major and minor numbers of the MPI version.
mpi.get_processor_name()-
Returns a string with the name of the processor on which the function was called. The string identifies a specific piece of hardware, for example the hostname of a node.
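As a quick illustration, the environment queries can be combined into a one-line status report. This is a minimal sketch; it assumes the binding is loaded with require("mpi") and that MPI is initialized when the module is loaded.

```lua
local mpi = require("mpi")

-- Report the MPI version and the host this process runs on.
local major, minor = mpi.get_version()
print(("MPI %d.%d on %s"):format(major, minor, mpi.get_processor_name()))
```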
2 Communicators
mpi.comm_world-
Predefined communicator that contains all processes.
comm:rank()-
Returns the rank of the calling process in the communicator.
comm:size()-
Returns the number of processes in the communicator.
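A minimal hello-world sketch using the predefined communicator (same assumptions about module loading as above):

```lua
local mpi = require("mpi")

local comm = mpi.comm_world
-- Each process prints its rank and the total number of processes.
print(("process %d of %d"):format(comm:rank(), comm:size()))

mpi.finalize()  -- must be called by all processes before the program exits
```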
3 Point-to-point communication
mpi.send(buf, count, datatype, dest, tag, comm)-
Performs a blocking send. A block of count elements with datatype datatype is read from buf and sent to the process with rank dest; unless dest is nil, in which case no send is performed. tag is a non-negative integer that identifies the message.
mpi.recv(buf, count, datatype, source, tag, comm)-
Performs a blocking receive. A block of count elements with datatype datatype is received from the process with rank source and stored in buf; unless source is nil, in which case no receive is performed. tag is a non-negative integer that identifies the message.
mpi.sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm)-
Performs a blocking send and receive. A block of sendcount elements with datatype sendtype is read from sendbuf and sent to the process with rank dest; unless dest is nil, in which case no send is performed. sendtag is a non-negative integer that identifies the sent message. A block of recvcount elements with datatype recvtype is received from the process with rank source and stored in recvbuf; unless source is nil, in which case no receive is performed. recvtag is a non-negative integer that identifies the received message.
mpi.sendrecv_replace(buf, count, datatype, dest, sendtag, source, recvtag, comm)-
Performs a blocking in-place send and receive. A block of count elements with datatype datatype is read from buf and sent to the process with rank dest; unless dest is nil, in which case no send is performed. sendtag is a non-negative integer that identifies the sent message. A block of count elements with datatype datatype is received from the process with rank source and stored in buf; unless source is nil, in which case no receive is performed. recvtag is a non-negative integer that identifies the received message.
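The reference does not prescribe a buffer type here; the following sketch assumes LuaJIT FFI arrays are accepted as buffers and that at least two processes are running. It sends ten doubles from rank 0 to rank 1.

```lua
local ffi = require("ffi")
local mpi = require("mpi")

local comm = mpi.comm_world
local rank = comm:rank()
local buf = ffi.new("double[10]")

if rank == 0 then
  for i = 0, 9 do buf[i] = i end
  -- Blocking send of ten doubles to rank 1 with message tag 0.
  mpi.send(buf, 10, mpi.double, 1, 0, comm)
elseif rank == 1 then
  -- Matching blocking receive from rank 0 with message tag 0.
  mpi.recv(buf, 10, mpi.double, 0, 0, comm)
end
```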
4 Collective communication
mpi.barrier(comm)-
Blocks the calling process until all processes have called the function.
mpi.bcast(buf, count, datatype, root, comm)-
Performs a broadcast operation. For each rank i in the communicator comm, the process with rank root sends a block of count elements with datatype datatype in buf to the process with rank i, which stores the values in buf.
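A sketch of a broadcast, under the same FFI-buffer assumption, in which rank 0 distributes a small parameter block:

```lua
local ffi = require("ffi")
local mpi = require("mpi")

local comm = mpi.comm_world
local params = ffi.new("double[3]")
if comm:rank() == 0 then
  params[0], params[1], params[2] = 1.0, 0.5, 100
end
-- After the call every process holds the values set by rank 0.
mpi.bcast(params, 3, mpi.double, 0, comm)
```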
mpi.gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)-
Performs a gather operation. For each rank i in the communicator comm, the process with rank i sends a block of sendcount elements with datatype sendtype in sendbuf to the process with rank root, which stores the values in the i-th block of recvcount elements with datatype recvtype in recvbuf. mpi.in_place may be passed as sendbuf at the process with rank root to perform the operation in place. In this case sendcount and sendtype are ignored, and the process does not gather elements from itself.
mpi.gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm)-
Performs a gather operation with varying block sizes. For each rank i in the communicator comm, the process with rank i sends a block of sendcount elements with datatype sendtype in sendbuf to the process with rank root, which stores the values in a block of recvcounts[i] elements at offset displs[i] with datatype recvtype in recvbuf. mpi.in_place may be passed as sendbuf at the process with rank root to perform the operation in place. In this case sendcount and sendtype are ignored, and the process does not gather elements from itself.
mpi.scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)-
Performs a scatter operation. For each rank i in the communicator comm, the process with rank root sends the i-th block of sendcount elements of datatype sendtype in sendbuf to the process with rank i, which stores the values in a block of recvcount elements with datatype recvtype in recvbuf. mpi.in_place may be passed as recvbuf at the process with rank root to perform the operation in place. In this case recvcount and recvtype are ignored, and the process does not scatter elements to itself.
mpi.scatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm)-
Performs a scatter operation with varying block sizes. For each rank i in the communicator comm, the process with rank root sends a block of sendcounts[i] elements at offset displs[i] with datatype sendtype in sendbuf to the process with rank i, which stores the values in a block of recvcount elements with datatype recvtype in recvbuf. mpi.in_place may be passed as recvbuf at the process with rank root to perform the operation in place. In this case recvcount and recvtype are ignored, and the process does not scatter elements to itself.
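A sketch of a gather, again assuming FFI buffers, in which every process contributes one integer and rank 0 collects them in rank order:

```lua
local ffi = require("ffi")
local mpi = require("mpi")

local comm = mpi.comm_world
local rank, size = comm:rank(), comm:size()

local sendbuf = ffi.new("int[1]", rank * rank)  -- one value per process
local recvbuf = ffi.new("int[?]", size)         -- only significant at the root
-- Rank 0 stores the value from rank i in the i-th slot of recvbuf.
mpi.gather(sendbuf, 1, mpi.int, recvbuf, 1, mpi.int, 0, comm)
```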
mpi.allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)-
Performs a gather-to-all operation. For each pair of ranks i and j in the communicator comm, the process with rank i sends a block of sendcount elements with datatype sendtype in sendbuf to the process with rank j, which stores the values in the i-th block of recvcount elements with datatype recvtype in recvbuf. mpi.in_place may be passed as sendbuf at all processes to perform the operation in place. In this case sendcount and sendtype are ignored, and the process with rank i sends the i-th block of recvcount elements with datatype recvtype in recvbuf to the process with rank j for each pair of different ranks i and j.
mpi.allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)-
Performs a gather-to-all operation with varying block sizes. For each pair of ranks i and j in the communicator comm, the process with rank i sends a block of sendcount elements with datatype sendtype in sendbuf to the process with rank j, which stores the values in a block of recvcounts[i] elements at offset displs[i] with datatype recvtype in recvbuf. mpi.in_place may be passed as sendbuf at all processes to perform the operation in place. In this case sendcount and sendtype are ignored, and the process with rank i sends the i-th block of recvcounts[i] elements at offset displs[i] with datatype recvtype in recvbuf to the process with rank j for each pair of different ranks i and j.
mpi.alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)-
Performs an all-to-all scatter/gather operation. For each pair of ranks i and j in the communicator comm, the process with rank i sends the j-th block of sendcount elements with datatype sendtype in sendbuf to the process with rank j, which stores the values in the i-th block of recvcount elements with datatype recvtype in recvbuf. mpi.in_place may be passed as sendbuf at all processes to perform the operation in place. In this case sendcount and sendtype are ignored, and the process with rank i sends the j-th block of recvcount elements with datatype recvtype in recvbuf to the process with rank j for each pair of different ranks i and j. The in-place variant of this function is available with MPI-2.2.
mpi.alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)-
Performs an all-to-all scatter/gather operation with varying block sizes. For each pair of ranks i and j in the communicator comm, the process with rank i sends a block of sendcounts[j] elements at offset sdispls[j] with datatype sendtype in sendbuf to the process with rank j, which stores the values in a block of recvcounts[i] elements at offset rdispls[i] with datatype recvtype in recvbuf. mpi.in_place may be passed as sendbuf at all processes to perform the operation in place. In this case sendcounts, sdispls and sendtype are ignored, and the process with rank i sends a block of recvcounts[j] elements at offset rdispls[j] with datatype recvtype in recvbuf to the process with rank j for each pair of different ranks i and j. The in-place variant of this function is available with MPI-2.2.
mpi.alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)-
Performs an all-to-all scatter/gather operation with varying block sizes and datatypes. For each pair of ranks i and j in the communicator comm, the process with rank i sends a block of sendcounts[j] elements at byte offset sdispls[j] with datatype sendtypes[j] in sendbuf to the process with rank j, which stores the values in a block of recvcounts[i] elements at byte offset rdispls[i] with datatype recvtypes[i] in recvbuf. mpi.in_place may be passed as sendbuf at all processes to perform the operation in place. In this case sendcounts, sdispls and sendtypes are ignored, and the process with rank i sends a block of recvcounts[j] elements at byte offset rdispls[j] with datatype recvtypes[j] in recvbuf to the process with rank j for each pair of different ranks i and j. The in-place variant of this function is available with MPI-2.2.
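A sketch of mpi.alltoall, assuming FFI buffers, in which each process sends one integer to every process:

```lua
local ffi = require("ffi")
local mpi = require("mpi")

local comm = mpi.comm_world
local rank, size = comm:rank(), comm:size()

-- Block j of sendbuf goes to rank j; block i of recvbuf comes from rank i.
local sendbuf = ffi.new("int[?]", size)
local recvbuf = ffi.new("int[?]", size)
for j = 0, size - 1 do sendbuf[j] = rank * size + j end
mpi.alltoall(sendbuf, 1, mpi.int, recvbuf, 1, mpi.int, comm)
```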
mpi.reduce(sendbuf, recvbuf, count, datatype, op, root, comm)-
Performs a reduce operation. Blocks of count elements with datatype datatype in sendbuf of all processes in the communicator comm are combined element-wise using the reduction operator op, and the result is stored in a block of count elements with datatype datatype in recvbuf of the process with rank root. mpi.in_place may be passed as sendbuf at the process with rank root to perform the operation in place. In this case the elements of the process are read from recvbuf.
mpi.allreduce(sendbuf, recvbuf, count, datatype, op, comm)-
Performs a reduce-to-all operation. Blocks of count elements with datatype datatype in sendbuf of all processes in the communicator comm are combined element-wise using the reduction operator op, and the result is stored in a block of count elements with datatype datatype in recvbuf of all processes. mpi.in_place may be passed as sendbuf at all processes to perform the operation in place. In this case the elements of all processes are read from recvbuf.
mpi.scan(sendbuf, recvbuf, count, datatype, op, comm)-
Performs an inclusive scan operation. For each rank i in the communicator comm, blocks of count elements with datatype datatype in sendbuf of the processes with ranks 0 to i are combined element-wise using the reduction operator op, and the result is stored in a block of count elements with datatype datatype in recvbuf of the process with rank i. mpi.in_place may be passed as sendbuf at all processes to perform the operation in place. In this case the elements of all processes are read from recvbuf.
mpi.exscan(sendbuf, recvbuf, count, datatype, op, comm)-
Performs an exclusive scan operation. For each rank i > 0 in the communicator comm, blocks of count elements with datatype datatype in sendbuf of the processes with ranks 0 to i - 1 are combined element-wise using the reduction operator op, and the result is stored in a block of count elements with datatype datatype in recvbuf of the process with rank i. mpi.in_place may be passed as sendbuf at all processes to perform the operation in place. In this case the elements of all processes are read from recvbuf.
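A sketch of a global sum with mpi.allreduce, assuming FFI buffers:

```lua
local ffi = require("ffi")
local mpi = require("mpi")

local comm = mpi.comm_world
local rank = comm:rank()

local sendbuf = ffi.new("double[1]", rank + 1)  -- each process contributes rank + 1
local recvbuf = ffi.new("double[1]")
-- Every process receives the sum over all processes.
mpi.allreduce(sendbuf, recvbuf, 1, mpi.double, mpi.sum, comm)
print(("rank %d: sum = %g"):format(rank, recvbuf[0]))
```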
4.1 Predefined reduction operators
| Operator | Description |
|---|---|
| mpi.max | Maximum |
| mpi.maxloc | Maximum and location |
| mpi.min | Minimum |
| mpi.minloc | Minimum and location |
| mpi.sum | Sum |
| mpi.prod | Product |
| mpi.land | Logical and |
| mpi.band | Bit-wise and |
| mpi.lor | Logical or |
| mpi.bor | Bit-wise or |
| mpi.lxor | Logical exclusive or |
| mpi.bxor | Bit-wise exclusive or |
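mpi.maxloc and mpi.minloc reduce value/location pairs and are used with the pair datatypes listed in the next section (for example mpi.double_int). The following sketch is only an illustration; it assumes that mpi.double_int corresponds to an FFI struct laid out like the C pair struct { double; int; }.

```lua
local ffi = require("ffi")
local mpi = require("mpi")
ffi.cdef("typedef struct { double value; int rank; } double_int;")  -- assumed layout

local comm = mpi.comm_world
local sendbuf = ffi.new("double_int[1]")
local recvbuf = ffi.new("double_int[1]")
sendbuf[0].value = math.random()  -- the local value to compare
sendbuf[0].rank = comm:rank()     -- the location carried along with it
-- recvbuf ends up holding the global minimum and the rank that owns it.
mpi.allreduce(sendbuf, recvbuf, 1, mpi.double_int, mpi.minloc, comm)
```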
5 Datatypes
mpi.type_contiguous(count, datatype)-
Returns a new datatype constructed from a sequence of count elements of the given datatype.
mpi.type_vector(count, blocklength, stride, datatype)-
Returns a new datatype constructed from a sequence of count blocks of elements of the given datatype. blocklength specifies the number of elements in each block. stride specifies the number of elements between the first elements of consecutive blocks.
mpi.type_create_indexed_block(count, blocklength, displacements, datatype)-
Returns a new datatype constructed from a sequence of count blocks of elements of the given datatype. blocklength specifies the number of elements in each block. displacements specifies for each block the number of elements between the first elements of that block and of the first block.
mpi.type_indexed(count, blocklengths, displacements, datatype)-
Returns a new datatype constructed from a sequence of count blocks of elements of the given datatype. blocklengths specifies for each block the number of elements in that block. displacements specifies for each block the number of elements between the first elements of that block and of the first block.
mpi.type_create_struct(count, blocklengths, displacements, datatypes)-
Returns a new datatype constructed from a sequence of count blocks of elements. blocklengths specifies for each block the number of elements in that block. displacements specifies for each block the offset in bytes of that block relative to the start of the datatype. datatypes specifies for each block the datatype of the elements in that block.
mpi.type_create_resized(datatype, lb, extent)-
Returns a new datatype constructed from the given datatype with lower bound lb and size extent in bytes. This function may be used to adjust for padding at the beginning or end of a datatype.
datatype:commit()-
Commits the datatype, after which it may be used for communications.
datatype:get_extent()-
Returns the lower bound and size of the datatype in bytes.
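As an example of how these constructors are used, the sketch below (assuming FFI buffers and at least two processes) describes one column of a row-major matrix with mpi.type_vector, commits it, and transfers it with the point-to-point functions:

```lua
local ffi = require("ffi")
local mpi = require("mpi")

local comm = mpi.comm_world
local rank = comm:rank()

local rows, cols = 4, 5
local matrix = ffi.new("double[?]", rows * cols)  -- row-major rows x cols matrix

-- One element per row, spaced cols elements apart: a single matrix column.
local column = mpi.type_vector(rows, 1, cols, mpi.double)
column:commit()

if rank == 0 then
  mpi.send(matrix + 2, 1, column, 1, 0, comm)  -- send column 2 to rank 1
elseif rank == 1 then
  mpi.recv(matrix + 2, 1, column, 0, 0, comm)  -- receive it into column 2
end
```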
5.1 Predefined datatypes
| Datatype | Description |
|---|---|
| mpi.char | C type char |
| mpi.wchar | C type wchar_t |
| mpi.schar | C type signed char |
| mpi.uchar | C type unsigned char |
| mpi.short | C type short |
| mpi.ushort | C type unsigned short |
| mpi.int | C type int |
| mpi.uint | C type unsigned int |
| mpi.long | C type long |
| mpi.ulong | C type unsigned long |
| mpi.llong | C type long long |
| mpi.ullong | C type unsigned long long |
| mpi.float | C type float |
| mpi.double | C type double |
| mpi.short_int | C types short and int |
| mpi.float_int | C types float and int |
| mpi.int_int | C types int and int |
| mpi.double_int | C types double and int |
| mpi.long_int | C types long and int |
| mpi.byte | Binary data |
| mpi.packed | Packed data |
The following datatypes are available with MPI-2.2 or later.
| Datatype | Description |
|---|---|
| mpi.bool | C type bool |
| mpi.int8 | C type int8_t |
| mpi.uint8 | C type uint8_t |
| mpi.int16 | C type int16_t |
| mpi.uint16 | C type uint16_t |
| mpi.int32 | C type int32_t |
| mpi.uint32 | C type uint32_t |
| mpi.int64 | C type int64_t |
| mpi.uint64 | C type uint64_t |
| mpi.fcomplex | C type float complex |
| mpi.dcomplex | C type double complex |
| mpi.aint | C type MPI_Aint |
| mpi.offset | C type MPI_Offset |
6 Process topologies
mpi.cart_create(comm, dims, periods, reorder)-
Returns a new communicator that maps the processes in comm to a Cartesian grid. dims is a sequence that specifies the number of processes for each dimension of the grid. periods is a sequence that specifies for each dimension whether the grid is periodic (true) or non-periodic (false) along that dimension. If reorder is true, the ranks of the processes in the new communicator may be reordered to optimally map the Cartesian topology onto the machine topology; if reorder is false, the ranks of the processes remain the same as in comm.
If the size of the Cartesian grid is smaller than the number of processes in comm, the function returns nil on some processes.
mpi.cart_get(comm)-
Returns three sequences that specify for each dimension of the Cartesian grid the number of processes, whether the grid is periodic (true) or non-periodic (false) along that dimension, and the coordinate of the calling process along that dimension.
mpi.cart_coords(comm, rank)-
Returns a sequence with the Cartesian coordinates of the process with the given rank.
mpi.cart_rank(comm, coords)-
Returns the rank of the process with the given Cartesian coordinates.
mpi.cart_shift(comm, direction, disp)-
Returns the ranks of the source and destination processes in a Cartesian shift, which involves a send to the destination process in the given direction and a receive from the source process in the opposite direction.
direction specifies the coordinate dimension along which the shift is performed. disp specifies the displacement of the destination process relative to the calling process, either a positive integer for an upwards shift or a negative integer for a downwards shift. If the shift to the calling process would cross a non-periodic boundary, the function returns nil in place of a source rank; if the shift from the calling process would cross a non-periodic boundary, the function returns nil in place of a destination rank.
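Putting these functions together, the sketch below (assuming FFI buffers, zero-based dimension numbering, and a grid whose size matches the number of processes) builds a periodic one-dimensional ring and shifts a value to each process's neighbour:

```lua
local ffi = require("ffi")
local mpi = require("mpi")

-- A periodic 1-D grid over all processes; ranks may be reordered.
local size = mpi.comm_world:size()
local cart = mpi.cart_create(mpi.comm_world, {size}, {true}, true)

-- Source and destination ranks for a shift of +1 along dimension 0.
local source, dest = mpi.cart_shift(cart, 0, 1)

local buf = ffi.new("int[1]", cart:rank())
-- Each process sends its rank to dest and receives its neighbour's rank from source.
mpi.sendrecv_replace(buf, 1, mpi.int, dest, 0, source, 0, cart)
```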