
Worsening parallel efficiency with OpenFOAM on an HPC cluster


March 27, 2022, 00:27
#1
New Member
 
Jin Zhang
Join Date: May 2018
Location: Germany
Posts: 15
Hi all,

I am now testing the benchmark case (cavity) on our own HPC cluster. The total number of cells is 15,000,000. As the cluster is newly installed, I found that it does not achieve good parallel efficiency with an increasing number of nodes. We use the Intel 2020 compilers and IntelMPI.
Each node of the cluster has 64 CPUs, and we have 5 nodes in total.
Attached are the jobfile for intel-2020 and the speed-up plot. I don't know whether this is due to wrong MPI settings. Do you have any suggestions?
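For context, here is a minimal sketch of the kind of job script meant above; it is not the exact jobfile from the attachment, and it assumes a Slurm scheduler, while module names, paths and the solver are placeholders:

Code:
#!/bin/bash
#SBATCH --job-name=cavity_bench
#SBATCH --nodes=4                    # varied between 1 and 5
#SBATCH --ntasks-per-node=64
#SBATCH --time=12:00:00

# placeholder module names -- the actual site modules differ
module load intel/2020 intelmpi/2020
source /path/to/OpenFOAM/etc/bashrc  # installation path is an assumption

decomposePar -force                  # numberOfSubdomains = nodes * 64
mpirun -np $SLURM_NTASKS icoFoam -parallel > log.icoFoam 2>&1
reconstructPar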

Thanks!
Jin
Attached Images
File Type: png intel2020.png (48.9 KB, 18 views)
File Type: png scale.png (19.3 KB, 23 views)

March 28, 2022, 05:52
#2
Senior Member
 
Gerhard Holzinger
Join Date: Feb 2012
Location: Austria
Posts: 342
This is normal behaviour. Parallel efficiency levels off at some point, which is why you see a speed-up when going from 1 to 2 to 3 nodes. The reason why 5 nodes perform worse than 4 nodes is the ever-increasing communication workload in contrast to the ever-decreasing per-process computational workload.

When using 3 nodes (64 CPUs each), each parallel process handles around 78,000 cells.

When using 5 nodes, every parallel process only deals with roughly 47,000 cells.
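These counts are simply the 15,000,000 cells divided by the number of MPI ranks, i.e. the number of nodes times 64:

\[
\frac{15{,}000{,}000}{3 \times 64} \approx 78{,}125,
\qquad
\frac{15{,}000{,}000}{5 \times 64} \approx 46{,}875
\]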

So, while using more and more parallel processes seems like a good idea, we need to bear in mind that the communication effort scales super-linearly with the number of parallel processes, while the computational effort per process only decreases linearly.

If you double the number of CPUs devoted to a parallel simulation, the per-process problem size is cut in half, while the effort for communication between all these parallel processes increases by a factor of between 2 and 4 (4 being the square of 2). The exact factor for the increase in communication depends on your decomposition.

If you decompose badly, each parallel process needs to communicate with every other process. If you decompose well, each parallel process only needs to communicate with a few processes.
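As an illustration (not necessarily what is used in this case), a decomposeParDict for 5 nodes of 64 cores each could look like the sketch below; the scotch method tries to minimise the size of the processor boundaries, which keeps the communication pattern sparse:

Code:
/*--------------------------------*- C++ -*----------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  320;     // 5 nodes x 64 cores

// scotch minimises the processor-boundary (communication) surface
method              scotch;

// alternative: a hierarchical decomposition that keeps ranks on the
// same node adjacent, reducing inter-node traffic
// method           hierarchical;
// coeffs
// {
//     n            (8 8 5);  // 8*8*5 = 320 subdomains
//     order        xyz;
// }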


Tags
hpc, intel-2020, intelmpi, parallel



