Problem with foam-extend 4.0 ggi parallel run

February 20, 2018, 08:41   #1
Metikurke (New Member; Join Date: May 2017; Posts: 21)
Dear OpenFOAM/foam-extend users,

I am trying to simulate a wind turbine in rotation using the ggi option in foam-extend 4.0. I have found very good suggestions in this forum for running and solving my case in both serial and parallel; first of all, thank you everyone for that. The following link helped me a great deal with decomposing my case for a parallel run; it describes how to use manual decomposition.

DecomposePar utility (You can find the steps related to my project at the bottom of this thread)

Although I can run the wind turbine simulation in parallel, it takes about the same amount of time as the serial run. Hence I would like to ask you to share your knowledge on how to run the parallel simulation effectively. I believe the reason is the following:

1. There are 7.4 million cells in total, of which 5.9 million are in the moving region.

2. The method in the above link confines all cells of the moving region to a single processor. That means 1.5 million cells are distributed over 11 processors while the remaining 5.9 million cells sit on one processor.

So far I have followed the steps below, which should help to understand my problem and may also be useful for future users of the ggi option.

1. Created a structured mesh for both the bounding box and the wind turbine blades using cfMesh.

2. Used the mergeMeshes utility to merge the different mesh regions and copied the resulting polyMesh folder into the constant folder; the boundary file there contains the ggi patch pair (see the sketch after this list).

3. Since I am running a RANS simulation, created the k, epsilon, p, U and nut files in accordance with the patch types.

4. Used a setBatchGgi file to create the face sets for the ggi interface (an example batch file is sketched after this list).

5. Used the manual decomposition method as suggested in the thread (directMapped + regionCoupling + parallel problems), running decomposePar -cellDist to create the required cellDecomposition file in the constant folder.

6. Modified the header (object entry) of this file and renamed it to decompDict (the resulting header is sketched after this list).

7. Reassigned cells between processors by editing the decompDict file, for instance replacing all '2' entries by '1' with the vim command :%s/2/1/g.

8. Changed the decomposeParDict file to use the manual method and specified the name of the file (decompDict in this case); the relevant entries are sketched after this list.

9. Ran setSet -batch setBatchGgi.

10. Ran setsToZones -noFlipMaps.

11. Ran decomposePar again, this time using the decompDict file in the constant folder.

12. Ran the solver in parallel: mpirun -np 12 pimpleDyMFoam -parallel.
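
For reference, the ggi interface is defined in constant/polyMesh/boundary as a pair of patches that point at each other and at their face zones. The sketch below only illustrates the structure: the patch and zone names (insideSlider, outsideSlider, insideZone, outsideZone) are placeholders borrowed from the foam-extend mixerGgi tutorial, not the names of my case, and nFaces/startFace must of course match your own mesh.

Code:
    insideSlider
    {
        type            ggi;
        nFaces          36;            // mesh-specific
        startFace       3606;          // mesh-specific
        shadowPatch     outsideSlider; // the partner ggi patch
        zone            insideZone;    // face zone built from this patch
        bridgeOverlap   false;
    }

    outsideSlider
    {
        type            ggi;
        nFaces          36;
        startFace       3642;
        shadowPatch     insideSlider;
        zone            outsideZone;
        bridgeOverlap   false;
    }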
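
A setBatchGgi batch file typically creates one face set per ggi patch from the faces of the corresponding patch; with the same placeholder names as above, a minimal example would be:

Code:
    faceSet insideZone new patchToFace insideSlider
    faceSet outsideZone new patchToFace outsideSlider
    quit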
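
Regarding step 6: after renaming constant/cellDecomposition to constant/decompDict, only the object entry of its FoamFile header needs to change, roughly like this:

Code:
    FoamFile
    {
        version     2.0;
        format      ascii;
        class       labelList;
        location    "constant";
        object      decompDict;   // was: cellDecomposition
    }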
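
Regarding step 8: in system/decomposeParDict the manual method then points to that file. For my 12-processor run the relevant entries look roughly like this:

Code:
    numberOfSubdomains  12;

    method              manual;

    manualCoeffs
    {
        dataFile        "decompDict";
    }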
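
To recap steps 9 to 12, the command sequence I used was:

Code:
    setSet -batch setBatchGgi
    setsToZones -noFlipMaps
    decomposePar
    mpirun -np 12 pimpleDyMFoam -parallel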

Only the important files are attached, since the full case is too big:

setBatchGgi.tar.gz

0.tar.gz

system.tar.gz

boundary.tar.gz

If I don't use the manual method, I get the following error:

[5]
[5]
[5] --> FOAM FATAL ERROR:
[5] face 0 area does not match neighbour by 0.136932% -- possible face ordering problem.
patch: procBoundary5to3 my area: 4.52233e-05 neighbour area: 4.52852e-05 matching tolerance: 1
Mesh face: 4295161 vertices: 4((-1.06997 2.03593 -0.0422083) (-1.06656 2.03203 -0.0429862) (-1.06671 2.03204 -0.051505) (-1.06992 2.03595 -0.0513499))
Rerun with processor debug flag set for more information.
[5]
[5] From function processorPolyPatch::calcGeometry()
[5] in file meshes/polyMesh/polyPatches/constraint/processor/processorPolyPatch.C at line 230.
[5]
FOAM parallel run exiting
[5]
[3]
[3]
[3] --> FOAM FATAL ERROR:
[3] face 0 area does not match neighbour by 0.136932% -- possible face ordering problem.
patch: procBoundary3to5 my area: 4.52852e-05 neighbour area: 4.52233e-05 matching tolerance: 1
Mesh face: 4303238 vertices: 4((-1.06997 2.03593 -0.0422083) (-1.06992 2.03595 -0.0513499) (-1.06668 2.03205 -0.051505) (-1.06653 2.03204 -0.0429862))
Rerun with processor debug flag set for more information.
[3]
[3] From function processorPolyPatch::calcGeometry()
[3] in file meshes/polyMesh/polyPatches/constraint/processor/processorPolyPatch.C at line 230.
[3]
FOAM parallel run exiting
[3]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[lsm230:08067] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[lsm230:08067] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

Awaiting your suggestions, thank you.


Regards,

Metikurke


December 6, 2018, 16:51   #2
EnricoDeFilippi (New Member; Brescia, Italy; Join Date: Jul 2018; Posts: 14)
Did you manage to solve the problem?
