September 15, 2016, 06:37 | #1

Get the most out of parallel simulations [mpi flags]
Senior Member
Pablo Higuera
Join Date: Jan 2011
Location: Auckland
Posts: 627
Rep Power: 19
Dear all,
lately I have been working with a new computer, and the parallel speed-up has been much poorer than I was expecting (subjective impression). The machine is a dual Intel Xeon E5-2680 v3 (2.5 GHz, 12 physical cores / 24 threads each) with 64 GB of RAM (4x16 GB, 2133 MHz DDR4).

To confirm my suspicion I decided to run a parallel performance test, as shown here: https://www.pugetsystems.com/labs/hp...d-Opteron-587/

Basically, I prepared a case based on the cavity tutorial, with a 1024x1024 mesh and a reduced viscosity, that iterates 100 times (no output to disk). Then I set up a batch of runs with different mpirun flags, each executed independently at night while the computer was otherwise completely idle. Sketches of how such a benchmark can be scripted are at the end of this post.

See the graph attached. The X axis is the number of processes and the Y axis is the speed-up. Legend:

perfect -> the land of utopia
normal -> mpirun -np X simpleFoam -parallel
bc -> --bind-to core:overload-allowed
bcmc -> --bind-to core:overload-allowed --map-by core
bh -> --bind-to hwthread
bb -> --bind-to board

My findings confirm my fears: up to 12 processes the scaling looks excellent, but after that it plateaus. Since the case is not huge I could accept the scaling getting worse, but not as bad as the graph shows. Furthermore, the author of that benchmark gets almost linear scaling all the way up to 40 cores (quad socket...), so I was expecting something similar up to 24. What is interesting is that adding the flag --bind-to core:overload-allowed increases the performance radically for large numbers of processes.

Does anyone have a clue about what could be happening, or any thoughts on how to push the parallel performance beyond 12 and up to 24 processes?

Thanks!

Pablo
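For reference, here is a minimal sketch of how a benchmark case along these lines can be set up. It assumes the icoFoam cavity tutorial layout of OpenFOAM 4.x (blockMeshDict under system/; in older versions it lives in constant/polyMesh/), and the sed patterns and viscosity value are only illustrative:

Code:
# copy the cavity tutorial (the path may be .../cavity/cavity in newer versions)
cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity cavityBench
cd cavityBench

# refine the mesh from the default 20x20 to 1024x1024 cells
sed -i 's/(20 20 1)/(1024 1024 1)/' system/blockMeshDict
blockMesh > log.blockMesh

# reduce the kinematic viscosity (illustrative value; the entry format
# varies between OpenFOAM versions)
sed -i 's/^nu.*/nu [0 2 -1 0 0 0 0] 1e-04;/' constant/transportProperties

# the stock controlDict already runs 100 steps (endTime 0.5, deltaT 0.005);
# just make sure nothing gets written to disk
sed -i 's/^writeInterval.*/writeInterval 1000;/' system/controlDict

# decomposition dictionary; scotch needs no per-direction coefficients
cat > system/decomposeParDict <<'EOF'
FoamFile
{
    version 2.0;
    format  ascii;
    class   dictionary;
    object  decomposeParDict;
}

numberOfSubdomains 24;
method             scotch;
EOF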
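And a sketch of the run matrix itself. The binding options are Open MPI 1.8+ syntax (check mpirun --help for your MPI version), and icoFoam stands in here for whatever solver the case is set up for (in my runs, simpleFoam):

Code:
# run every binding variant for each process count, timing each run;
# assumes bash and Open MPI >= 1.8 for the --bind-to/--map-by syntax
for NP in 2 4 8 12 16 20 24 32 48
do
    sed -i "s/^numberOfSubdomains.*/numberOfSubdomains $NP;/" system/decomposeParDict
    decomposePar -force > log.decomposePar.$NP

    (time mpirun -np $NP icoFoam -parallel > log.normal.$NP) 2> time.normal.$NP
    (time mpirun -np $NP --bind-to core:overload-allowed icoFoam -parallel > log.bc.$NP) 2> time.bc.$NP
    (time mpirun -np $NP --bind-to core:overload-allowed --map-by core icoFoam -parallel > log.bcmc.$NP) 2> time.bcmc.$NP
    (time mpirun -np $NP --bind-to hwthread icoFoam -parallel > log.bh.$NP) 2> time.bh.$NP
    (time mpirun -np $NP --bind-to board icoFoam -parallel > log.bb.$NP) 2> time.bb.$NP
done

The speed-up for each variant is then the single-process wall time divided by the parallel wall time at each process count.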