|
decomposePar: how to distribute cells unevenly? |
|
February 9, 2017, 12:33 |
decomposePar: how to distribute cells unevenly?
|
#1 |
Senior Member
Kevin van As
Join Date: Sep 2014
Location: TU Delft, The Netherlands
Posts: 252
Rep Power: 21 |
Background:
With decomposePar it is possible to cut the domain in the x, y and z directions such that each processor receives (roughly) the same number of cells; that is the simple/hierarchical method. The scotch method does something similar, but automated. Either way, each processor receives (roughly) the same number of cells.

Question: Is there a way in OpenFOAM to distribute the cells unevenly over the processors? That is, how can I force some processors to receive fewer cells than others? (Preferably without resorting to the "manual" method.)

"Why would you do that?" In case you wonder why I would want this: you want to spread the load evenly over the processors, not the cells! Without any further information, your best guess at distributing the load is to give all processors the same number of cells, so the default behaviour makes sense. However, when you do know that a certain region will be computationally more expensive, it is desirable to assign fewer cells of that region to a processor, and more cells of the uninteresting ambient region. In my specific case this is because I use dynamicRefineFvMesh with interDyMFoam. I know roughly in advance which region will be refined, so I would like to assign fewer cells of that region to each processor, such that on the refined mesh I end up with roughly the same load per processor.
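For reference, a minimal sketch of the usual even split I am starting from (the subdomain count and the cuts are just placeholders):

Code:
// system/decomposeParDict -- the default, even split
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);    // number of cuts in x, y and z
    delta       0.001;
}

// method scotch;           // automated alternative; also aims at equal cell counts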
|
February 10, 2017, 06:56 |
|
#2 |
Member
Join Date: Jun 2016
Posts: 66
Rep Power: 11 |
I never needed to do what you are trying to do, so I may not be of much help. Anyway, there is an option in decomposeParDict to choose the method "manual", where you allocate each cell to a particular processor yourself. I have not seen an example of how to define this, so I am afraid you would have to look in the code to see what input it requires.
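For what it is worth, a sketch of how I believe the "manual" entry is meant to look (the dataFile name and its exact location are my assumption; please verify against the decomposition sources of your OpenFOAM version):

Code:
// system/decomposeParDict -- manual decomposition (sketch, not verified)
numberOfSubdomains 4;

method          manual;

manualCoeffs
{
    // File containing a labelList with one entry per cell:
    // the index of the processor that cell should be assigned to.
    dataFile    "cellDecomposition";
}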
A possible workaround that comes to mind: terminate the calculation before/after each refinement step, reconstruct the domain, and decompose it again. This would be practical if the refinement does not happen too often, so that the overhead of the reconstruct/decompose steps does not slow down your computation too much.
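A rough sketch of that stop/reconstruct/re-decompose cycle on the command line (standard utilities; the -latestTime and -force flags are what I would try, please check them against your version):

Code:
# after the run has stopped around a refinement step:
reconstructPar -latestTime            # merge processor* data back onto the full mesh
decomposePar -force                   # re-split the case, overwriting the old processor* directories
mpirun -np 4 interDyMFoam -parallel   # continue from the latest, re-decomposed time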
|
February 10, 2017, 11:16 |
|
#3 |
Senior Member
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 30 |
Hi Kevin, manual distribution is an option as Zbynek said; dynamic load balancing is another:
Run-Time Parallel Load Balancing
__________________
*On twitter @akidTwit *Spend as much time formulating your questions as you expect people to spend on their answer. |
|
February 24, 2017, 09:41 |
|
#4 | |
Senior Member
Sergei
Join Date: Dec 2009
Posts: 261
Rep Power: 21 |
Code:
// "system/decomposeParDict" numberOfSubdomains 4; method scotch; scotchCoeffs { processorWeights (10 1 1 1); } |
February 25, 2017, 05:15 |
|
#5 |
Member
Rodrigo
Join Date: Mar 2010
Posts: 98
Rep Power: 16 |
Hi Kevin,
I recently ported the runtime load-balancing utility to OF-4.1. Maybe it is useful for you: Run-Time Parallel Load Balancing

Last edited by guin; February 25, 2017 at 11:41.
|
|
|