Mesh too big for memory. How to perform decomposition in parallel? |
January 9, 2017, 19:08 | #1
Mesh too big for memory. How to perform decomposition in parallel?
Senior Member
Thomas Oliveira
Join Date: Apr 2015
Posts: 114
Rep Power: 12
Hi,
I need to decompose a case to run it in many nodes because the mesh is too large for the memory of a single node. It is so large that running decomposePar in a single node is not possible. How can I decompose the case in parallel, using distributed memory? Running decomposePar in parallel, if possible, would be a solution. Best wishes, Thomas P.S.: A question like this was asked nine years ago. I am asking again because things may have changed since then. |
January 10, 2017, 05:30 | #2
Senior Member
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 30
How did you generate the mesh? Can that really be less memory-consuming than the decomposition?
__________________
*On twitter @akidTwit *Spend as much time formulating your questions as you expect people to spend on their answer.
January 12, 2017, 08:16 | #3
Senior Member
Thomas Oliveira
Join Date: Apr 2015
Posts: 114
Rep Power: 12
Dear Anton,

Indeed, your question makes sense. I have also had problems generating the mesh with blockMesh. Since my geometry is simple enough, I wrote a program that directly writes the files points, faces, owner, neighbour and boundary, without using OpenFOAM classes. An alternative would be to write another program that writes those files into the processor* directories, but that would require investigating how decomposePar lays them out. In any case, this alternative would not help in the cases where I had access to a large-memory machine when creating the mesh, but that machine is no longer available for decomposing it.

Best wishes,
Thomas
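P.S.: For anyone attempting the same, here is a minimal sketch of what such a generator must produce for constant/polyMesh/points. The four points are only a hypothetical placeholder; faces, owner, neighbour and boundary follow the same FoamFile convention, with the classes faceList, labelList, labelList and polyBoundaryMesh, respectively.

Code:
# toy example: write a points file by hand; a real generator emits millions of points
cat > constant/polyMesh/points <<'EOF'
FoamFile
{
    version     2.0;
    format      ascii;
    class       vectorField;
    location    "constant/polyMesh";
    object      points;
}

4    // number of points
(
(0 0 0)
(1 0 0)
(1 1 0)
(0 1 0)
)
EOF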
January 12, 2017, 23:55 | #4
Senior Member
Arjun
Join Date: Mar 2009
Location: Nuremberg, Germany
Posts: 1,286
Rep Power: 34
I am surprised to learn that OpenFOAM does not do partitioning in parallel. In FVUS I do parallel partitioning using ParMETIS, but when I was porting the solver to Windows, ParMETIS was no longer reliably available, so I had to write a partitioner myself. Since I am no expert, that partitioner is serial only. I put it in very reluctantly, because it is the one step that handicaps the solver for large meshes, and I felt very bad about it. OpenFOAM has been out there for years, so its lack of parallel partitioning is surprising.
January 13, 2017, 04:16 | #5
Senior Member
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 30
Not that surprising. I guess 90% of all users either have relatively small meshes or use parallel meshing (snappyHexMesh). Did any of your FVUS customers complain about your partitioner being serial only?
__________________
*On twitter @akidTwit *Spend as much time formulating your questions as you expect people to spend on their answer.
January 13, 2017, 07:45 | #6
Senior Member
Arjun
Join Date: Mar 2009
Location: Nuremberg, Germany
Posts: 1,286
Rep Power: 34
No, so far no one has complained. But all the people doing serious work use the Linux version, which does parallel partitioning. Most people use it for VOF and multiphase flows, so very large mesh sizes are avoided as much as possible. (Someone is going to try cases with more than 50 million cells, but that will be in April.)

Even on Windows, the only serial step is the partitioning itself, so only the graph structure used for partitioning is serial (it is collected on the root process); the rest of the solver is parallel. That means FVUS loads the mesh into the partitions the user is running the solver on, then partitioning is performed (serial on Windows, parallel on Linux) and redistribution takes place; the solver then continues. Because only that one graph structure is serial, the mesh sizes a user can run are still much larger than with OpenFOAM.
January 13, 2017, 18:41 | #7
Senior Member
Cyprien
Join Date: Feb 2010
Location: Stanford University
Posts: 299
Rep Power: 18
One solution is to create a coarse background grid (with blockMesh), then decompose the domain onto X CPUs and refine the background grid in parallel with refineMesh. Then you can run snappyHexMesh in parallel; see the sketch below.
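A minimal sketch of that workflow, assuming system/decomposeParDict is set up for 64 subdomains (64 is only an example; adjust -np and the dictionaries to your case):

Code:
blockMesh        # serial: coarse background grid, small memory footprint
decomposePar     # serial: decomposing the coarse grid is cheap
mpirun -np 64 refineMesh -parallel -overwrite      # refine the background grid in parallel
mpirun -np 64 snappyHexMesh -parallel -overwrite   # snap to the geometry in parallel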
July 20, 2017, 07:42 | #8
Decomposing a separate file after the decomposition process
Member
Mohamed Elghorab
Join Date: May 2016
Location: Coventry, England
Posts: 41
Rep Power: 10
Since you are talking about the decomposition process, I have a problem here. I am preparing a new boundary condition, but decomposePar refuses to decompose it, so I have to redistribute it manually to all processors. Is there a command to distribute it directly, or do I have to paste it into each processor directory manually?

Thanks in advance
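P.S.: At the moment I redistribute it by hand, roughly like this (0/myField is just a hypothetical name for the file holding the new boundary condition):

Code:
# copy the field file into the 0/ directory of every processor
for p in processor*; do
    cp 0/myField "$p/0/"
done

I have also seen decomposePar -fields mentioned, which is supposed to decompose fields onto an existing decomposition; maybe that is the cleaner way?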
Tags: decomposepar, decomposition, mesh, parallel decomposition