MPI topology

The MPITopology type defines the MPI Cartesian topology of the decomposition. In other words, it contains information about the number of decomposed dimensions, and the number of processes in each of these dimensions.

This type should only be used if more control is needed regarding the MPI decomposition. In particular, dealing with MPITopology is not required when using the high-level interface to construct domain decomposition configurations.
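For reference, here is a minimal sketch of that high-level route, assuming a three-dimensional dataset and the convenience Pencil constructor that takes only the global dimensions and the communicator (the decomposed dimensions and the process grid are then chosen automatically):

using MPI
using PencilArrays

MPI.Init()
comm = MPI.COMM_WORLD

# Global dimensions of the data to be decomposed (hypothetical sizes).
dims_global = (16, 32, 64)

# The Pencil constructor builds the MPI decomposition internally,
# so no MPITopology needs to be created by hand.
pencil = Pencil(dims_global, comm)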

Construction

The main MPITopology constructor takes an MPI communicator and a tuple specifying the number of processes in each dimension. For instance, to distribute 12 MPI processes on a 3 × 4 grid:

comm = MPI.COMM_WORLD  # we assume MPI.Comm_size(comm) == 12
pdims = (3, 4)
topology = MPITopology(comm, pdims)

A convenience constructor is provided that automatically chooses a default pdims from the number of processes and from the dimension N of the decomposition grid. For instance, for a two-dimensional decomposition:

topology = MPITopology(comm, Val(2))

Under the hood, this works by letting MPI_Dims_create choose the number of divisions along each dimension.
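The process grid that was actually selected can then be inspected with size (documented below); a small sketch, assuming 12 MPI processes:

# Inspect the automatically chosen process grid. With 12 processes this is
# typically (4, 3) or (3, 4), depending on the MPI implementation.
pdims_auto = size(topology)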

At a lower level, MPITopology uses MPI_Cart_create to define a Cartesian MPI communicator. For more control, one can also create the Cartesian communicator manually using MPI.Cart_create and pass it to MPITopology:

dims = (3, 4)
comm_cart = MPI.Cart_create(comm, dims)
topology = MPITopology{2}(comm_cart)  # note the "{2}"!!

Note that in this case, one needs to indicate the number of dimensions M of the decomposition (here M = 2).
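As a quick sanity check, the resulting topology can be queried using the methods documented below (the values shown assume the 12-process, 3 × 4 example above):

@assert ndims(topology) == 2       # number of decomposed dimensions
@assert size(topology) == (3, 4)   # process grid dimensions
@assert length(topology) == 12     # total number of MPI processes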

Types

PencilArrays.Pencils.MPITopologies.MPITopology — Type
MPITopology{N}

Describes an N-dimensional Cartesian MPI decomposition topology.


MPITopology(comm::MPI.Comm, pdims::Dims{N})

Create N-dimensional MPI topology information.

The pdims tuple specifies the number of MPI processes to put in every dimension of the topology. The product of its values must be equal to the number of processes in communicator comm.

Example

Divide 2D topology into 4×2 blocks:

comm = MPI.COMM_WORLD
@assert MPI.Comm_size(comm) == 8
topology = MPITopology(comm, (4, 2))

MPITopology(comm::MPI.Comm, Val(N))

Convenient MPITopology constructor defining an N-dimensional decomposition of data among all MPI processes in the communicator.

The number of divisions along each of the N dimensions is automatically determined by a call to MPI.Dims_create.

Example

Create 2D decomposition grid:

comm = MPI.COMM_WORLD
topology = MPITopology(comm, Val(2))

MPITopology{N}(comm_cart::MPI.Comm)

Create topology information from an MPI communicator with Cartesian topology (typically constructed using MPI.Cart_create). The topology must have dimension N.

Example

Divide 2D topology into 4×2 blocks:

comm = MPI.COMM_WORLD
@assert MPI.Comm_size(comm) == 8
pdims = (4, 2)
comm_cart = MPI.Cart_create(comm, pdims)
topology = MPITopology{2}(comm_cart)

Methods

Base.length — Method
length(t::MPITopology)

Get total size of Cartesian topology (i.e. total number of MPI processes).

Base.ndims — Method
ndims(t::MPITopology)

Get dimensionality of Cartesian topology.

ndims(p::Pencil)

Number of spatial dimensions associated to pencil data.

This corresponds to the total number of dimensions of the space, which includes the decomposed and non-decomposed dimensions.

Base.size — Method
size(t::MPITopology)

Get dimensions of Cartesian topology.

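For illustration, a short sketch combining these methods, assuming the 4 × 2 topology from the examples above (8 MPI processes):

topology = MPITopology(comm, (4, 2))
ndims(topology)   # 2  (dimensionality of the process grid)
size(topology)    # (4, 2)
length(topology)  # 8  (total number of MPI processes)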
