September 16, 2008, 19:21 |
SnappyHexMesh in Parallel
|
#1 |
Senior Member
BastiL
Join Date: Mar 2009
Posts: 530
Rep Power: 20 |
Hi all,
I have started my first tries with snappyHexMesh in parallel instead of serial, and everything works quite well so far. However, I am wondering about the decomposition strategy. The intended workflow seems to be:
1. Create the underlying hex mesh (blockMesh)
2. Decompose the hex mesh into n parts (decomposePar)
3. Run snappyHexMesh in parallel with n processes
4. Get the final snapped mesh distributed into n parts
So I guess you should use the same number of partitions for meshing as you intend to use for the calculation; then there is no need for redecomposition. If you want a single mesh you can run reconstructParMesh, which worked for me. If you want to run the calculation on m processes with m < n, redistributeParMesh may do the job, but I have not tried it. More interestingly, I want to run the calculation on m cores with m > n. redistributeParMesh does not seem to be able to handle redistribution to a larger number of domains, or does it?
Another question: due to lack of memory I do not want to start all parallel processes at once, but e.g. 2 out of 4 and the other two after the first two finish. Of course, during meshing there is then no opportunity for dynamic load balancing, but it would save RAM. Is this possible? Regards |
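For reference, the four steps described above can be sketched as a shell session (a sketch only, assuming a standard case directory, 4 subdomains set in system/decomposeParDict, and an MPI launcher; exact invocations differ between OpenFOAM versions):

```shell
# 1. Build the underlying hex background mesh
blockMesh

# 2. Decompose it into n parts (n from system/decomposeParDict)
decomposePar

# 3. Run snappyHexMesh on n processes (here n = 4)
mpirun -np 4 snappyHexMesh -parallel

# 4. Optionally reassemble the snapped mesh into a single mesh
reconstructParMesh -latestTime
```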
|
September 17, 2008, 04:54 |
redistributeParMesh should be
|
#2 |
Senior Member
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
Rep Power: 26 |
redistributeParMesh should be able to handle m > n. Just start it with the larger of the two. I don't know whether it reads the mesh from time directories or always from constant, so you might have to move your polyMesh into constant beforehand.
snappyHexMesh needs parallel communication in all phases, not just in the load-balancing phase, so no, you cannot run the processes in sequence. |
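If redistributeParMesh does turn out to read only from constant, the move Mattijs suggests would look roughly like this (a sketch only; the time directory name "2" is a hypothetical example, and the old background mesh in constant is discarded):

```shell
# hypothetical: the snapped mesh ended up in time directory 2 on each
# processor; move it into constant so redistributeParMesh can find it
for proc in processor*; do
    rm -rf "$proc/constant/polyMesh"     # discard the old background mesh
    mv "$proc/2/polyMesh" "$proc/constant/"
done
```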
|
September 17, 2008, 05:57 |
Matthijs,
thanks for this a
|
#3 |
Senior Member
BastiL
Join Date: Mar 2009
Posts: 530
Rep Power: 20 |
Mattijs,
thanks for this answer. So far I have only got it working for m = n. I will run some more tests for m > n and m < n this afternoon and get back to you afterwards. |
|
September 17, 2008, 12:05 |
redistributeParMesh seems to r
|
#4 |
Senior Member
BastiL
Join Date: Mar 2009
Posts: 530
Rep Power: 20 |
redistributeParMesh seems to read from time directories. However, it does not work that way for me. I did get it to redistribute a mesh from 2 to 3 parts using redistributeParMesh. It is a little tricky and only worked with hierarchical or metis, not with parmetis. Using metis I get the warning:
You have selected decomposition method decompositionMethod which does not synchronise the decomposition across processor patches.
I do not understand the meaning and consequences of that. |
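For anyone reproducing the hierarchical case mentioned above, a minimal system/decomposeParDict for 3 subdomains might look like this (the coefficient values are illustrative examples, not taken from the poster's case):

```
numberOfSubdomains 3;

method          hierarchical;

hierarchicalCoeffs
{
    n               (3 1 1);    // subdomains in x, y, z
    delta           0.001;
    order           xyz;
}
```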
|
September 18, 2008, 09:29 |
I managed to run snappyHexMesh
|
#5 |
Senior Member
BastiL
Join Date: Mar 2009
Posts: 530
Rep Power: 20 |
I managed to run snappyHexMesh with the processes one after another manually. With some tricks this also seems to work. This leads to two questions for me:
- Why not replace decomposePar with redistributeParMesh? It also works for 1 -> many parts, is parallel, and helps to save memory.
- Why not implement an option to run snappyHexMesh in parallel with the individual parts sequentially, one after another, instead of all at once? This would also save memory.
Clusters are rarely used for meshing, because snappyHexMesh is the first tool I know of that is able to use them. Regards |
|
September 18, 2008, 10:30 |
You must be some kind of mirac
|
#6 |
Senior Member
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21 |
You must be some kind of miracle worker, Basti, because there is a lot of communication required between processors in snappyHexMesh. Running the separate processors sequentially will not produce anything remotely resembling what you would get if you ran them in parallel.
If you are going to solve a case on a cluster, why not mesh it there as well? Saves you a whole lot of trouble. |
|
September 18, 2008, 11:52 |
Eugene,
two reasons that cu
|
#7 |
Senior Member
BastiL
Join Date: Mar 2009
Posts: 530
Rep Power: 20 |
Eugene,
two reasons that currently might be a problem:
- Our current environment has not been designed for that, because current commercial meshers are not really parallel. Meshing with snappyHexMesh takes more RAM than solving, and I am running out of memory.
- On the other hand, I have meshing nodes with relatively large shared memory.
However, this is a problem of the current hardware status and may change in the future. What I did to get it working is quite simple:
1. Run blockMesh for the underlying hex mesh.
2. Distribute the hex mesh.
3. Run snappyHexMesh on each of the underlying parts individually instead of in one run. (Of course you can run each part in parallel once again...) This is quite similar to what the old "proAM" can do. You have to change the "processor" patches to type "patch" for this to work.
4. Assemble the mesh. I sometimes had trouble with stitchMesh for that.
Maybe our next-generation cluster will solve all this. Regards. |
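For anyone trying the trick described above: the patch-type change in step 3 means editing each part's constant/polyMesh/boundary so that the processor patches become ordinary patches. A sketch with illustrative names and numbers (the processor-specific keywords are dropped along with the type change):

```
// before: inter-processor patch written by decomposePar
procBoundary0to1
{
    type            processor;
    nFaces          1200;        // illustrative
    startFace       45000;       // illustrative
    myProcNo        0;
    neighbProcNo    1;
}

// after: plain external patch, treated as an ordinary boundary
procBoundary0to1
{
    type            patch;
    nFaces          1200;
    startFace       45000;
}
```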
|
September 18, 2008, 12:31 |
Well it sure is an interesting
|
#8 |
Senior Member
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21 |
Well, it sure is an interesting approach, but I repeat: the meshes you generate like this will be substantially different from (and probably poorer in quality than) ones generated with all components online. Specifically, you will get weird jumps where the processor domains match up.
If you cannot mesh the entire thing due to insufficient memory, then I guess this is the only way to go. |
|
February 12, 2009, 05:27 |
Hi,
thanks for you advices.
|
#9 |
Senior Member
Wolfgang Heydlauff
Join Date: Mar 2009
Location: Germany
Posts: 136
Rep Power: 21 |
Hi,
thanks for your advice. Let me sum up the procedure:
- run blockMesh as usual
- the decomposition method in decomposeParDict must be hierarchical
- run decomposePar
- run foamJob -p -s snappyHexMesh
- afterwards run reconstructParMesh -mergeTol 1e-06 -latestTime (or -time 1, -time 2, ...)
Works perfectly. ("Yes, it can!") ;-) |
|
June 25, 2009, 21:45 |
|
#10 |
Senior Member
|
Hi,
I am having trouble running snappyHexMesh in parallel. I use hierarchical decomposition and I can run snappyHexMesh in parallel as long as I don't run the snap phase, which I absolutely need! Running the snap phase causes an immediate error. My case runs fine on a single processor; however, when I run it in parallel (mpirun or foamJob -p) I get:
Smoothing patch points ...
Smoothing iteration 0
Found 0 non-manifold point(s).
[louis-dell:32518] *** An error occurred in MPI_Recv
[louis-dell:32518] *** on communicator MPI_COMM_WORLD
[louis-dell:32518] *** MPI_ERR_TRUNCATE: message truncated
[louis-dell:32518] *** MPI_ERRORS_ARE_FATAL (goodbye)
Changing MPI_BUFFER_SIZE does not solve the problem; it either changes the error message to a segmentation fault or to a "cannot satisfy memory request" error. Even on a 40K-cell mesh!! Thanks for any hints on solving this!
-Louis |
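In case it helps others hitting the same MPI_ERR_TRUNCATE: the buffer-size change Louis refers to is usually made through the MPI_BUFFER_SIZE environment variable, which OpenFOAM's Pstream layer reads at startup. A sketch (the value is only an example):

```shell
# enlarge the attached MPI buffer (in bytes) before the parallel run
export MPI_BUFFER_SIZE=200000000
# then launch as usual, e.g.:
#   mpirun -np 4 snappyHexMesh -parallel
```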
|
August 4, 2009, 01:47 |
|
#11 |
Member
Cem Albukrek
Join Date: Mar 2009
Posts: 52
Rep Power: 17 |
How do you assign fields to the decomposed meshes that you generate with snappyHexMesh in parallel? Can it be done directly on the decomposed mesh, or does the mesh need to be reconstructed and then decomposed again?
|
August 28, 2009, 14:40 |
|
#12 |
Senior Member
|
What do you mean by fields?
|
August 28, 2009, 14:52 |
|
#13 |
Member
Cem Albukrek
Join Date: Mar 2009
Posts: 52
Rep Power: 17 |
I was trying to identify a way to assign the specified U, p, k, epsilon, nut, etc. flow-variable (field) boundary conditions to the decomposed mesh directly. The incentive was to be able to process large cases on a 32-bit parallel machine, as I thought the reconstructed mesh would violate the 32-bit memory limit, which is around 2.5 GB.
It turns out the serial processes for the reconstruction, flow-field assignment and re-decomposition do not consume the memory for the whole mesh. So I do not need a solution to this issue at this point, although one would improve the overall process by avoiding the unnecessary mesh reconstruction and re-decomposition steps. Last edited by albcem; August 28, 2009 at 15:28. |
|
August 28, 2009, 17:18 |
|
#14 |
Senior Member
|
Well, maybe I don't understand properly, but as far as I know you can set the field conditions in the "0" folder. As for patch names, you have to define them prior to decomposition.
Best of luck, -Louis |
|
September 9, 2009, 07:18 |
|
#15 |
Member
Andrew King
Join Date: Mar 2009
Location: Perth, Western Australia, Australia
Posts: 82
Rep Power: 17 |
Hi Louis,
Unfortunately snappyHexMesh adds new patches, however decomposePar doesn't pass on any field BCs for patches that don't exist at decomposition time. ie. even if you've defined the BC's for the new patch in 0 before decomposition, it won't copy these to the processor directories. You can do it manually, but for large numbers of processors its not optimal. However, i think there may be a workaround. If you create an empty patch in constant/polyMesh/boundary with the same name as the patch(es) that snappy adds, the field will be decomposed, and all is fine. To create the empty patch open constant/polyMesh/boundary find the last patch (which should look something like) Code:
last_patch { type wall; nFaces 1000; startFace 22000; } Code:
new_empty_patch { type wall; nFaces 0; startFace 23000; } I'm about to test this, so I'll let you know if it works. Cheers, Andrew
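One detail worth adding to Andrew's recipe: constant/polyMesh/boundary begins with the number of patches, so that count has to be incremented when the empty patch is appended. With Andrew's example numbers, the end of the file would look like this (names and face counts are illustrative, FoamFile header omitted):

```
3                       // patch count, incremented from 2
(
    // ... earlier patches ...

    last_patch
    {
        type            wall;
        nFaces          1000;
        startFace       22000;
    }

    new_empty_patch
    {
        type            wall;
        nFaces          0;
        startFace       23000;   // = previous startFace + nFaces
    }
)
```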
__________________
Dr Andrew King Fluid Dynamics Research Group Curtin University |
|
September 23, 2009, 08:43 |
|
#16 |
Senior Member
|
Dear Andrew,
did your approach work? Best regards, -Louis |
|
September 23, 2009, 10:15 |
|
#17 |
Member
Andrew King
Join Date: Mar 2009
Location: Perth, Western Australia, Australia
Posts: 82
Rep Power: 17 |
Hi Louis,
It worked in some ways: the empty patch worked for the decomposition, but the mesh was not in a state to run anything (missing cellProcAddressing files). I had to use reconstructParMesh followed by decomposePar again. That approach seemed to work without running out of memory. Cheers, Andrew |
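The reconstruct-then-redecompose cycle Andrew describes can be sketched as follows (a sketch only; the merge tolerance is the value used elsewhere in this thread, and the time option depends on where snappyHexMesh wrote its final mesh):

```shell
# rebuild the complete mesh from the processor directories
reconstructParMesh -mergeTol 1e-06 -latestTime

# remove the old processor directories, then decompose again so the
# 0/ fields are distributed together with the new patches
rm -rf processor*
decomposePar
```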
|
October 16, 2009, 06:38 |
triSurface directories
|
#18 |
New Member
Simon Rees
Join Date: Mar 2009
Posts: 12
Rep Power: 17 |
I have got some way with running snappyHexMesh in parallel (a great utility) but have a question. It seems that decomposePar does what you would normally expect for running a solver, but snappyHexMesh will not run in parallel (I am using v1.6 and doing 'foamJob -p -s snappyHexMesh -overwrite') unless I manually copy the constant/triSurface directory into processor?/constant/. This is true where I have used an STL file, but also in the iglooWithFridges tutorial, where there is only an edge file in the triSurface directory. Is this the expected behaviour? Is there something I am missing that would avoid this?
Thanks, Simon |
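The manual copy Simon describes can be scripted. A small self-contained sketch (it builds a mock two-processor case layout in a scratch directory; the file name geometry.stl is a placeholder):

```shell
cd "$(mktemp -d)"   # demo in a scratch directory

# pretend blockMesh + decomposePar have already run for 2 processors
mkdir -p constant/triSurface processor0/constant processor1/constant
touch constant/triSurface/geometry.stl   # placeholder STL

# copy the geometry into every processor directory so each
# parallel snappyHexMesh process can find it
for proc in processor*/; do
    cp -r constant/triSurface "${proc}constant/"
done

ls processor0/constant/triSurface   # -> geometry.stl
```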
|
October 16, 2009, 11:00 |
|
#19 |
Senior Member
|
Hi Simon,
I don't know the answer to your question, but I'm happy to hear that snappy works in parallel with 1.6! Can't wait to try it. -Louis |
|
April 1, 2010, 16:07 |
parallel meshing on n processors to parallel solution on n processors?
|
#20 |
Member
|
I know the answer is probably here, but I think this needs to be explicitly discussed: has someone gotten parallel snappyHexMesh feeding into a parallel solver (in my case simpleFoam) to work? I understand that one must move the polyMesh folder from the latest snappyHexMesh iteration folder (2 or 3 or ?), but I get the following error messages after attempting to run:
host2:/data/offToCluster # mpirun --mca btl openib,sm,self -np 10 -machinefile ~/machinelist.txt simpleFoam -parallel > log.simpleFoam &
[1] 7652
[1] keyword OBJECT_patch0 is undefined in dictionary "/data/offToCluster/processor1/0/p::boundaryField"
[1] file: /data/offToCluster/processor1/0/p::boundaryField from line 26 to line 63.
[1] From function dictionary::subDict(const word& keyword) const
[1] in file db/dictionary/dictionary.C at line 449.
I thought the workflow was: basic input deck > blockMesh > decomposePar (n processes) > copy STL files to all processor folders > snappyHexMesh (in parallel on n processes) > simpleFoam (in parallel on n processes). My workaround of using "reconstructParMesh -mergeTol 1e-06 -time 2" does work, but that limits me to the RAM of only one node because reconstructParMesh doesn't run in parallel. Do I need to copy boundary conditions around in some way? Last edited by bjr; April 1, 2010 at 16:47. |
|