[snappyHexMesh] surfaceRedistributePar - problem when decomposing the surface
March 25, 2019, 05:06 | #1
surfaceRedistributePar - problem when decomposing the surface
New Member
Bartłomiej Gackiewicz
Join Date: Sep 2017
Location: Lublin, Poland
Posts: 3
Rep Power: 9
Hello all,
I'm trying to decompose my STL file into a few parts to run snappyHexMesh in parallel. I couldn't find any detailed documentation for the surfaceRedistributePar OpenFOAM utility, but maybe someone here can help me solve this problem. I'm running surfaceRedistributePar on a decomposed case with:
Code:
mpirun --allow-run-as-root -np 4 surfaceRedistributePar -parallel sand.stl follow
The geometry is described in snappyHexMeshDict:
Code:
geometry
{
    sand.stl
    {
        regions
        {
            ascii            // Named region in the STL file
            {
                name sand_boundary;    // User-defined patch name
            }
            // otherwise given sphere.stl_secondSolid
        }
        type distributedTriSurfaceMesh;
        distributionType follow;
    }
    main_cylinder
    {
        type searchableCylinder;
        point1 (2.10099737 2.10099737 -0.3);
        point2 (2.10099737 2.10099737 3.5);
        radius 2.05298263;
        name boundaryWalls;
    }
};
The run fails with:
Code:
*** An error occurred in MPI_Bsend
*** reported by process [140337071128577,140333761429507]
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_BUFFER: invalid buffer pointer
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
In OpenFOAM 2.4 the log line loading the surface reads:
Code:
Loading undecomposed surface "s3down10_diststl/constant/triSurface/sand.stl"
whereas in OpenFOAM 5.0 it shows an empty path:
Code:
Loading undecomposed surface ""
I checked the differences between OpenFOAM 2.4 and 5.0 and couldn't find significant ones (beyond the blockMeshDict file location).
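For completeness, surfaceRedistributePar expects the case to already be decomposed, so a system/decomposeParDict matching the -np 4 run has to exist. A minimal sketch follows; the simple method and the (2 2 1) split are illustrative assumptions, not values taken from the case above:
Code:
// system/decomposeParDict -- minimal sketch, values are illustrative only
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains 4;          // must match the -np argument of mpirun

method          simple;

simpleCoeffs
{
    n           (2 2 1);       // 2 x 2 x 1 = 4 subdomains
    delta       0.001;
}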
May 8, 2019, 11:57 | #2
New Member
Bartłomiej Gackiewicz
Join Date: Sep 2017
Location: Lublin, Poland
Posts: 3
Rep Power: 9
I will add that the error in OpenFOAM 5.0 was:
Quote:
However, the same error as in OF 2.4 is present, and it seems to be related to MPI. I use the SYSTEMOPENMPI implementation in OpenFOAM 2.4.0 and OPENMPI in OF 5.0.
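One thing that may be worth checking with MPI_Bsend failures: as far as I know, OpenFOAM's Pstream layer uses buffered sends and sizes the attached buffer from the MPI_BUFFER_SIZE environment variable (normally set by the OpenFOAM environment scripts). This is only a guess for this particular error, but enlarging the buffer before the run is cheap to try:
Code:
# Sketch only: enlarge the buffer OpenFOAM attaches for MPI_Bsend
# (the default comes from the OpenFOAM environment; 200000000 here is arbitrary)
export MPI_BUFFER_SIZE=200000000
mpirun --allow-run-as-root -np 4 surfaceRedistributePar -parallel sand.stl follow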
May 17, 2019, 22:49 | #3
Senior Member
Join Date: Jul 2013
Posts: 124
Rep Power: 13
I'm currently having this exact same problem. Please let me know if you have solved this issue.
May 19, 2019, 16:45 | #4
Senior Member
Join Date: Jul 2013
Posts: 124
Rep Power: 13
Can someone please take a look at this? There is very limited information available about how to actually use surfaceRedistributePar. I have a large geometry file (~3+ GB) that I am trying to use with snappyHexMesh. The default behavior of snappyHexMesh is to load the geometry file on every processor, which consumes a tremendous amount of memory and limits the number of processors I can use. Apparently surfaceRedistributePar can be used to decompose the geometry file onto the processors so that only the relevant part of the geometry is loaded. However, this does not appear to work as expected.
The normal case setup for snappyHexMesh is to store the .stl or .obj files in constant/triSurface. The workflow for using snappyHexMesh and surfaceRedistributePar together is apparently along these lines: blockMesh -> decomposePar -> surfaceRedistributePar -> snappyHexMesh. However, running decomposePar obviously does not decompose the .stl or .obj geometry, and surfaceRedistributePar immediately complains with a fatal error that it cannot find the triSurfaceMesh in the processor*/constant/triSurface directories. Does this mean that you first have to copy the (large) full geometry file into each of the processor directories? That seems incorrect. Even if you do copy the file into each processor directory, surfaceRedistributePar still fails with other errors. One of them concerns the fileModificationChecking setting and can be resolved by changing that setting to "timeStamp", but then surfaceRedistributePar complains about MPI errors, such as:
Code:
Selecting decompositionMethod simple
[Laptop:19880] *** An error occurred in MPI_Bsend
[Laptop:19880] *** reported by process [3787915265,0]
[Laptop:19880] *** on communicator MPI COMMUNICATOR 3 SPLIT FROM 0
[Laptop:19880] *** MPI_ERR_BUFFER: invalid buffer pointer
[Laptop:19880] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[Laptop:19880] *** and potentially your MPI job)
[Laptop:19878] 3 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[Laptop:19878] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Does anyone actually know the correct way to use this utility?
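In case it helps anyone following along, the workflow described above would look roughly like the script below. This is only a sketch of my reading of the thread, not a verified recipe; the copy of the surface into each processor directory is exactly the step being questioned, and sand.stl and 4 processors are just example values:
Code:
#!/bin/sh
# Rough, untested sketch of the snappyHexMesh + surfaceRedistributePar workflow
# discussed above -- step 3 in particular may not be the intended approach.

blockMesh                # 1. background mesh
decomposePar             # 2. decompose the case (uses system/decomposeParDict)

# 3. decomposePar does not touch constant/triSurface, so the surface has to be
#    visible to each processor somehow -- copying it is what was tried here
for proc in processor*
do
    mkdir -p "$proc/constant/triSurface"
    cp constant/triSurface/sand.stl "$proc/constant/triSurface/"
done

# 4. redistribute the surface so each processor keeps only its own part
mpirun -np 4 surfaceRedistributePar -parallel sand.stl follow

# 5. mesh in parallel
mpirun -np 4 snappyHexMesh -parallel -overwrite
The fileModificationChecking error mentioned above refers, as far as I can tell, to the OptimisationSwitches entry in the global etc/controlDict of the installation; switching it to timeStamp is what was described.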
May 31, 2019, 03:36 | #5
New Member
Bartłomiej Gackiewicz
Join Date: Sep 2017
Location: Lublin, Poland
Posts: 3
Rep Power: 9
Which version of OpenFOAM do you have? And which MPI implementation do you use? (You can check in the etc/bashrc file in your OpenFOAM installation folder.) I will report my progress when I come back to this problem.
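For checking which MPI OpenFOAM was built against, the WM_MPLIB variable in etc/bashrc (and in the active environment) is the thing to look at; a quick sketch, assuming the usual OpenFOAM environment variables are sourced:
Code:
# Which MPI implementation is OpenFOAM configured for?
echo $WM_MPLIB                                    # e.g. SYSTEMOPENMPI, OPENMPI, MPICH, CRAY-MPICH
grep -n "WM_MPLIB" $WM_PROJECT_DIR/etc/bashrc     # where the default is set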
May 31, 2019, 10:03 | #6
Senior Member
Join Date: Jul 2013
Posts: 124
Rep Power: 13
I think you are right that this is a bug introduced in the later versions of OpenFOAM. I am using version 6. I think I am using cray-mpich/7.6.3. I may give it a try with the other branch of OpenFOAM (v1802 I think?), but I'm not looking forward to going through compilation yet again.
If someone else sees this, please enlighten us!
June 9, 2020, 04:41 | #7
Senior Member
Franco
Join Date: Nov 2019
Location: Compiègne, France
Posts: 129
Rep Power: 6
Hello,
I have just discovered this possibility for use with large STLs. Has anyone succeeded with the workflow for this application? And has the bug been solved in the newer version 7? Thanks.
Tags |
snappyhexmesh, stl, stl mesh, surfaceredistributepar |