SU2 code scaling poorly on multiple nodes |
|
July 18, 2018, 06:22 | #1

SU2 code scaling poorly on multiple nodes

New Member
Samir Shaikh
Join Date: Jul 2018
Posts: 6
Rep Power: 8
Hi All,

I have successfully compiled the parallel version of SU2 on our HPC cluster, which has Intel Broadwell nodes. I modified parallel_computation.py to build the mpirun command for running SU2_CFD in parallel. On a single node I see linear scaling with the number of MPI processes, but when I run the same script in batch mode through SLURM on multiple nodes, performance degrades. I tested with the turbulent ONERA M6 test case.

Thanks in advance for your suggestions and help. The SLURM script I use to submit the job is attached.
August 25, 2018, 20:15 | #2

Senior Member
Heather Kline
Join Date: Jun 2013
Posts: 309
Rep Power: 14
You may want to refer to SU2_PY/SU2/run/interface.py to see how the MPI command is called from the Python scripts, to make sure that it works with your cluster. You can also set SU2_MPI_COMMAND in your config file to customize the command without needing to modify the Python scripts.

Multiple nodes, each with several processors, sometimes scale worse than multiple processors within a single node, because information now has to travel between nodes rather than just within one. What sometimes surprises people is that even the length of the cable connecting the nodes matters. On most modern clusters, however, the difference shouldn't be so extreme that you can't benefit from multiple nodes. If the difference is extreme, try running other parallel programs that require communication between processes, or contact your system administrators about what they expect the inter-node vs. intra-node communication cost to be, and ask for tips on compiling in a way that is optimized for the specific cluster architecture.
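As a back-of-the-envelope illustration of why crossing the node boundary can hurt, here is a toy strong-scaling model. All numbers (compute cost, ranks per node, latencies) are made up for illustration and are not measured from SU2; the point is only the shape: near-linear efficiency while ranks fit on one node, then a drop once communication goes over the slower inter-node link.

```python
# Toy strong-scaling model (illustrative, not SU2's actual behavior):
# per-iteration time is compute/p plus a communication term whose
# latency jumps once the p MPI ranks span more than one node.
def step_time(p, compute=100.0, ranks_per_node=16,
              intra_latency=1e-3, inter_latency=5e-2):
    """Modeled wall time per iteration for p MPI ranks."""
    latency = intra_latency if p <= ranks_per_node else inter_latency
    return compute / p + latency * p  # comm cost grows with rank count

def efficiency(p, **kw):
    """Parallel efficiency relative to a single rank."""
    return step_time(1, **kw) / (p * step_time(p, **kw))

if __name__ == "__main__":
    for p in (1, 8, 16, 32, 64):
        print(f"{p:3d} ranks: efficiency {efficiency(p):.3f}")
```

With these invented parameters, efficiency stays above 0.99 up to 16 ranks (one node) and falls sharply at 32 and 64 ranks, which is the qualitative pattern described above.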
Tags |
intel broadwell, intel compiler, su2 aerodynamic noise, su2 examples |