OpenFOAM running extremely slowly using multiple nodes on HPC
|
July 20, 2022, 23:39
OpenFOAM running extremely slowly using multiple nodes on HPC
#1
Member
Dongxu Wang
Join Date: Sep 2018
Location: China
Posts: 33
Rep Power: 8
Hi Foamers,
I recently compiled OpenMPI and OpenFOAM v2006 under my own account on our HPC. The installation was successful and the solver runs in parallel. However, the simulation is extremely slow when a case is run across multiple nodes; the problem does not appear when the case is run locally or on a single node.

For example, when I run the dam-break case on 4 processors, using a single node (#SBATCH --nodes=1, #SBATCH --ntasks-per-node=4) takes 47.34 s, while using two nodes (#SBATCH --nodes=2, #SBATCH --ntasks-per-node=2) takes 381.22 s, roughly 8 times longer. This points to inter-node communication as the bottleneck. The administrator suggested switching to Intel MPI, but I haven't tried it yet because the HPC is temporarily down. I am also not sure whether another MPI would actually address this. Could someone give me some advice?

The node information is as follows:
Code:
u01
    state = free
    np = 56
    ntype = cluster
    status = rectime=1657244447,state=free,slurmstate=idle,size=0kb:0kb,ncpus=56,boards=1,sockets=2,cores=28,threads=1,availmem=380000mb,opsys=linux 3.10.0-862.el7.x86_64
The run command is:
Code:
mpirun --mca btl_tcp_if_include "ip address" --mca btl '^openib' -np 4 interIsoFoam -parallel > log.interIsoFoam 2>&1

Thanks!
wdx
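P.S. For reference, the submission script for the single-node timing looks roughly like the sketch below; the two-node timing uses the same script with --nodes=2 and --ntasks-per-node=2. The job name, time limit, partition and source path are placeholders, not the actual values on our cluster; only the node/task lines and the mpirun line correspond to what I described above.
Code:
#!/bin/bash
# Single-node run: 4 MPI ranks on one node (47.34 s for the dam-break case)
#SBATCH --job-name=damBreak
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00
##SBATCH --partition=compute        # placeholder partition name

# Placeholder path to the OpenFOAM v2006 compiled under my own account
source $HOME/OpenFOAM/OpenFOAM-v2006/etc/bashrc

mpirun --mca btl_tcp_if_include "ip address" --mca btl '^openib' \
       -np 4 interIsoFoam -parallel > log.interIsoFoam 2>&1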
July 22, 2022, 06:09
#2
Member
Dongxu Wang
Join Date: Sep 2018
Location: China
Posts: 33
Rep Power: 8
OK, I solved the problem.
It was indeed caused by OpenMPI. I installed Intel MPI and the run time with two nodes dropped to 73 s. That is still longer than on a single node, but it is a significant improvement.
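In case anyone needs the details later, the switch was roughly the sketch below. The module name and installation path are placeholders for whatever your cluster provides; the key step is setting WM_MPLIB=INTELMPI before sourcing the OpenFOAM environment and then rebuilding the MPI-dependent libraries.
Code:
# Make Intel MPI available first (module name is a placeholder)
module load intel-mpi

# Build OpenFOAM's MPI layer against Intel MPI instead of OpenMPI
export WM_MPLIB=INTELMPI

# Placeholder path to the user-compiled OpenFOAM v2006
source $HOME/OpenFOAM/OpenFOAM-v2006/etc/bashrc

# Rebuild at least the MPI-dependent Pstream library, then rerun the case
cd $WM_PROJECT_DIR/src/Pstream && ./Allwmake

mpirun -np 4 interIsoFoam -parallel > log.interIsoFoam 2>&1
If other libraries in your build were linked against OpenMPI, rerunning the top-level ./Allwmake is the safer option.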
September 17, 2023, 12:34
#3
New Member
Join Date: Mar 2022
Posts: 8
Rep Power: 4
I've got the same problem. Did you manage to solve it?
September 19, 2023, 06:17
#4
Senior Member
Please try a larger cell count (i.e., a finer mesh) so that inter-processor communication (the data transfer between processors during the linear solve) does not dominate the computation on each processor (the local linear solves).
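A quick way to sanity-check this (a rough sketch; the roughly 50k-cells-per-rank figure is a common rule of thumb, not a hard limit) is to look at the per-processor cell counts that decomposePar reports:
Code:
# Decompose the case and keep the log
decomposePar -force > log.decomposePar 2>&1

# decomposePar prints the cell count assigned to each processor;
# if these are well below ~50k, adding nodes mainly adds communication time
grep "Number of cells" log.decomposePar
If each rank only gets a few thousand cells, going from one node to two mostly adds communication, which is consistent with the timings in post #1.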