March 25, 2022, 09:53 |
Parallel efficiency for multiRegionFoam?
|
#1 |
New Member
Lei Zhou
Join Date: Nov 2021
Posts: 6
Rep Power: 5 |
Hello, everybody,
When I use multiRegionFoam, as I know, this solver solves the multi region on by on and couple regions by Picard iterate. So I am a little worried about whether the total calculating time would be seriously slowed by the fewest mesh elements' region when I adopt parallel calculation? It seems that the calculation efficiency of serial calculation is better than parallel calculation for mesh which has only 1000 or even less elements because of the cost of core commutation. For example, if I have three regions, the first region has 1 millions mesh elements, the second one has 50 thousands mesh elements, the third one only has 1 thousands mesh elements. If I used eight cores to adopt parallel calculation, did each of the regions have to be decomposed into eight processors. Apparently, the parallel efficiency of the third region seems to be worse than serial calculation. And would the total calculation time of all region be seriously slowed by the third region because of the core communication for less mesh element regions? Whether the above descriptions of parallel calculation for multRegion solver are right or not? If right, do we have some methods to only decompose the first region into the eight cores and the third region into the same core? Thanks in advance. |
|
March 25, 2022, 13:13 |
|
#2 |
Senior Member
|
Have you tried using scotch decomposition? This algorithm tries to minimize the number of processor boundaries, thereby reducing the required communication between the processors and thus speeding up the simulation.
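For reference, a minimal decomposeParDict using scotch might look like this (a sketch only; the subdomain count is illustrative, and scotch needs no method-specific coefficients):

```
numberOfSubdomains 8;

method scotch;
```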
|
|
March 27, 2022, 07:24 |
|
#3 |
Senior Member
Join Date: Sep 2013
Posts: 353
Rep Power: 21 |
That is correct. But you can use multi-level decomposition for that, so you can split the smallest region across 5 cores and the biggest across 100, for example. This is possible in the ESI OpenFOAM versions; I am not sure about the Foundation version.
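The region-wise split suggested here can be expressed in the ESI releases through a regions sub-dictionary in decomposeParDict. A sketch using the numbers from this post (the region names are illustrative, and my understanding is that the top-level numberOfSubdomains is the total number of MPI ranks for the run; check the release notes of your version):

```
numberOfSubdomains 100;

method scotch;

regions
{
    bigFluid
    {
        numberOfSubdomains 100;
        method scotch;
    }
    smallSolid
    {
        numberOfSubdomains 5;
        method scotch;
    }
}
```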
|
|
March 27, 2022, 11:14 |
|
#4 |
New Member
Lei Zhou
Join Date: Nov 2021
Posts: 6
Rep Power: 5 |
Thanks for your reply, Kumaresh. I have tried the scotch decomposition method for each region, but my point is which decomposition method to use with a multi-region solver. Let me explain further. Suppose I have two regions: a fluid region, which usually has a large number of mesh cells, and a solid region, which usually has far fewer. To run this conjugate heat transfer simulation in parallel, I have to use multiRegionFoam and put a decomposeParDict into each region. What I want to know is which decomposition method is the best or most suitable.
|
|
March 27, 2022, 11:31 |
|
#5 | |
New Member
Lei Zhou
Join Date: Nov 2021
Posts: 6
Rep Power: 5 |
Quote:
I found an introduction to multi-level decomposition, but I have some questions about this dictionary file.

Question 1: What is the relationship between the method defined at the top level (metis in this dictionary file) and the methods (hierarchical, scotch) defined in the regions sub-dictionary?

Question 2: Do I have to define a decomposition method for every region in the regions sub-dictionary? If so, should the sum of numberOfSubdomains over the regions equal the numberOfSubdomains defined at the top? If not, what is the default setting?

Question 3: Should I put this dictionary, with the same settings, into each region?

Question 4: If I write numberOfSubdomains 2048, do I have to use mpirun -np 2048 xxFoam -parallel?

Code:
numberOfSubdomains 2048;

method metis;

regions
{
    heater
    {
        numberOfSubdomains 2;
        method hierarchical;
        coeffs
        {
            n (2 1 1);
        }
    }
    "*.solid"
    {
        numberOfSubdomains 16;
        method scotch;
    }
}
March 27, 2022, 13:39 |
|
#6 |
Senior Member
|
Tags |
decomposepar methods, multiregionfoam, parallel calculation |
|
|