httomo.data.mpiutil.alltoall_ring


httomo.data.mpiutil.alltoall_ring(arrays: List[numpy.ndarray], comm: mpi4py.MPI.Comm, concat_axis: int = 0) → numpy.ndarray

Distributes a list of contiguous numpy arrays from each rank to every other rank using a ring communication pattern for reduced memory usage.

This implementation uses point-to-point communication in a ring pattern instead of the collective Alltoallv, trading some performance for significantly lower memory usage: only one send/receive pair is kept in memory at a time.

The received arrays are written directly into a pre-allocated concatenated output array, eliminating the need for a separate concatenation step.

Parameters:
  • arrays (List[np.ndarray]) – List of 3D numpy arrays to be distributed. Its length must equal the size of the given communicator; arrays[i] will be sent to rank i.

  • comm (MPI.Comm) – MPI communicator

  • concat_axis (int) – The axis along which received arrays should be concatenated (default: 0)

Returns:

A single concatenated array containing all received data along the specified axis. The block received from rank i is placed at the offset along concat_axis given by the cumulative sizes of the blocks received from ranks 0..i-1.

Return type:

np.ndarray
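The offset bookkeeping described above can be illustrated with a minimal serial sketch. This is pure NumPy with no MPI: the blocks that would arrive over the ring are simply passed in as a list, and `ring_fill`, its arguments, and the single-rank framing are hypothetical illustration names, not part of the httomo API. The key point it demonstrates is that peers are visited in ring order, but each received block is written at the offset of its *source* rank, so the result matches a plain concatenation in rank order.

```python
import numpy as np

def ring_fill(my_rank, size, recv_blocks, concat_axis=0):
    """Serial sketch of one rank's side of the ring exchange.

    recv_blocks[src] is the block that rank `src` sends to us; with MPI
    these would arrive one per step (e.g. via Sendrecv), but here they
    are handed in directly so the offset logic can be shown on its own.
    """
    # Per-source extents along the concatenation axis, and the running
    # offsets where each source's block belongs in the output.
    sizes = [b.shape[concat_axis] for b in recv_blocks]
    offsets = np.concatenate(([0], np.cumsum(sizes[:-1]))).astype(int)

    # Pre-allocate the concatenated output array.
    shape = list(recv_blocks[0].shape)
    shape[concat_axis] = int(sum(sizes))
    out = np.empty(shape, dtype=recv_blocks[0].dtype)

    # Ring order: at step s we receive from peer (my_rank - s) % size,
    # but the block is written at its source rank's offset, not at an
    # offset derived from the step index.
    for step in range(size):
        src = (my_rank - step) % size
        sl = [slice(None)] * out.ndim
        sl[concat_axis] = slice(offsets[src], offsets[src] + sizes[src])
        out[tuple(sl)] = recv_blocks[src]
    return out
```

Because every block lands at its source rank's offset, the result is identical to `np.concatenate(recv_blocks, axis=concat_axis)` regardless of the order in which the ring steps deliver the blocks.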