Dark side of Amdahl's law. Part III
Posted December 9, 2011 at 12:04 by SergeAS
...using a blocking MPI_Sendrecv() call instead of a Send() + Recv() pair slightly improves the performance of the code, but does not improve its scalability.
What significantly improves the scalability of the solver is using the non-blocking MPI calls Isend() and Irecv() for the halo exchanges. This allows data transfer between subdomains to proceed truly in parallel with computation.
The following figure shows the so-called "parallelization efficiency", calculated as the ratio of the real to the ideal speedup, as a function of the number of cores involved.
Analyzing the above figures, we see that despite the increase in the solver's net performance, the "parallelization efficiency" decreases.
to be continued...