Presentation
Desynchronization and Wave Pattern Formation in MPI-Parallel and Hybrid Memory-Bound Programs
Session: Research Paper Session
Event Type: Research Paper
Pre-Recorded
Time: Tuesday, June 23rd, 1:25pm - 1:50pm
Location: Digital
Description: Analytic, first-principles performance modeling of
distributed-memory parallel codes is notoriously imprecise. Even for
applications with extremely regular and homogeneous
compute-communicate phases, simply adding communication time to
computation time often does not yield a satisfactory prediction of
parallel runtime due to deviations from the expected simple lockstep
pattern caused by system noise, variations in communication time,
and inherent load imbalance. In this paper, we highlight the
specific cases of provoked and spontaneous desynchronization of
memory-bound, bulk-synchronous pure MPI and hybrid MPI+OpenMP
programs. Using simple microbenchmarks we observe that although
desynchronization can introduce increased waiting time per process,
it does not necessarily cause lower resource utilization but can lead
to an increase in available bandwidth per core. In the case of
significant communication overhead, even natural noise can shove the
system into a state of automatic overlap of communication and
computation, improving the overall time to solution. The saturation
point, i.e., the number of processes per memory domain required to
achieve full memory bandwidth, is pivotal in the dynamics of this
process and the emerging stable wave pattern. We also demonstrate
how hybrid MPI+OpenMP programming can prevent desirable
desynchronization by eliminating the bandwidth bottleneck among
processes. A Chebyshev filter diagonalization application is used
to demonstrate some of the observed effects in a realistic setting.
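The core mechanism the abstract describes can be illustrated with a toy model (this is not the paper's microbenchmark, and it omits the bandwidth-saturation effect entirely): in a bulk-synchronous nearest-neighbor code, a rank can start iteration k only after it and its halo-exchange neighbors have finished iteration k-1, so random per-iteration noise propagates between ranks and the system drifts away from lockstep. All parameters below (8 ranks, 200 iterations, uniform noise) are illustrative assumptions.

```python
# Toy simulation of noise-driven desynchronization in a bulk-synchronous
# nearest-neighbor (halo exchange) code. Each rank finishes an iteration
# only after it and both neighbors have finished the previous one, then
# "computes" for t_comp plus a random noise term.
import random

def simulate(n_procs=8, n_iters=200, t_comp=1.0, noise=0.05, seed=0):
    rng = random.Random(seed)
    finish = [0.0] * n_procs          # finish time of the previous iteration
    for _ in range(n_iters):
        new_finish = []
        for p in range(n_procs):
            # halo exchange: wait for left/right neighbors (open chain)
            deps = [finish[p]]
            if p > 0:
                deps.append(finish[p - 1])
            if p < n_procs - 1:
                deps.append(finish[p + 1])
            start = max(deps)         # blocking on the slowest dependency
            new_finish.append(start + t_comp + rng.uniform(0.0, noise))
        finish = new_finish
    return finish

finish = simulate()
skew = max(finish) - min(finish)      # desynchronization across ranks
ideal = 200 * 1.0                     # lockstep, noise-free runtime
print(f"runtime {max(finish):.2f} vs ideal {ideal:.2f}, rank skew {skew:.3f}")
```

Running the sketch shows the runtime exceeding the lockstep prediction and a nonzero skew between ranks; in the paper's setting, whether such skew is harmful or beneficial depends on the saturation point and on whether it enables automatic communication-computation overlap.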