Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Running, Solving & CFD

InterDyMFoam dynamic meshing in parallel fails under nonquiescent conditions


August 22, 2008, 13:12   #1
Adam Donaldson (adona058)
Member
Join Date: Mar 2009
Location: Ottawa, Ontario, Canada
Posts: 37
I have a number of simulations that require dynamic mesh refinement, which are entirely too large to complete on a single processor.

I have had no problem getting quiescent cases (no superimposed velocity field) to run using interDyMFoam. The only limitation is that the original mesh being decomposed must have all cells at refinement level 0 (i.e. no cell merging will occur across the processor boundary).
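For readers setting up a similar parallel run, the decomposition is driven by system/decomposeParDict; a minimal sketch along these lines (values illustrative, not the poster's actual settings) would split the domain before running interDyMFoam in parallel:

```
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);    // split 2x2x1 across the four processors
    delta       0.001;
}
```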

The problem that has occurred more recently is a bit more troublesome. As an example, I have been running a simple bubble-rise simulation, in which a low-density bubble rises through a high-density medium. The solution is stable until I impose the gravitational force. After the force is enabled, the simulation proceeds until the refined cells at the trailing edge of the interface merge back to the lowest level. At that point I get the following error:

[1]
[1]
[1] face:6(323 329 20840 395 8186 389) level:6(0 0 2 0 1 0) startFp:2 wantedLevel:1
#0 Foam::error::printStack(Foam::Ostream&) in "/usr/local/OpenFOAM/OpenFOAM-1.5/lib/linux64GccDPOpt/libOpenFOAM.so"
#1 Foam::error::abort() in "/usr/local/OpenFOAM/OpenFOAM-1.5/lib/linux64GccDPOpt/libOpenFOAM.so"
#2 Foam::hexRef8::findLevel(Foam::face const&, int, bool, int) const in "/usr/local/OpenFOAM/OpenFOAM-1.5/lib/linux64GccDPOpt/libdynamicMesh.so"
#3 Foam::hexRef8::createInternalFaces(Foam::List<Foam::List<int> > const&, Foam::List<Foam::List<int> > const&, Foam::List<int> const&, Foam::List<int> const&, Foam::List<int> const&, Foam::List<int> const&, int, Foam::polyTopoChange&) const in "/usr/local/OpenFOAM/OpenFOAM-1.5/lib/linux64GccDPOpt/libdynamicMesh.so"
#4 Foam::hexRef8::setRefinement(Foam::List<int> const&, Foam::polyTopoChange&) in "/usr/local/OpenFOAM/OpenFOAM-1.5/lib/linux64GccDPOpt/libdynamicMesh.so"
#5 Foam::dynamicRefineFvMesh::refine(Foam::List<int> const&) in "/usr/local/OpenFOAM/OpenFOAM-1.5/lib/linux64GccDPOpt/libdynamicFvMesh.so"
#6 Foam::dynamicRefineFvMesh::update() in "/usr/local/OpenFOAM/OpenFOAM-1.5/lib/linux64GccDPOpt/libdynamicFvMesh.so"
#7 main in "/usr/local/OpenFOAM/OpenFOAM-1.5/applications/bin/linux64GccDPOpt/interDyMFoam"
#8 __libc_start_main in "/lib64/libc.so.6"
#9 Foam::regIOobject::writeObject(Foam::IOstream::streamFormat, Foam::IOstream::versionNumber, Foam::IOstream::compressionType) const in "/usr/local/OpenFOAM/OpenFOAM-1.5/applications/bin/linux64GccDPOpt/interDyMFoam"
[1]
[1]
[1] From function hexRef8::findLevel
[1] in file polyTopoChange/polyTopoChange/hexRef8.C at line 710.
[1]
FOAM parallel run aborting
[1]
[x27:06511] MPI_ABORT invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 1
mpirun noticed that job rank 0 with PID 4440 on node x31.icpet.nrc.ca exited on signal 15 (Terminated).


This error has occurred consistently, at approximately the same time step in each repeated simulation.

I would appreciate any help you can provide in resolving this error. It matters not so much for interDyMFoam itself as for another diffuse-interface multiphase solver I am developing, which requires adaptive mesh refinement.

If you need any additional information on the settings, I can provide it. The dynamicMeshDict file is identical to the one used in the damBreak tutorial for 1.5, except that the maximum number of cells has been increased.
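For reference, the refinement controls in that dictionary follow the dynamicRefineFvMesh pattern; a sketch from memory of the damBreak-style settings (values illustrative, not the poster's actual ones) looks roughly like:

```
dynamicFvMesh   dynamicRefineFvMesh;

dynamicRefineFvMeshCoeffs
{
    refineInterval   1;         // check refinement criteria every step
    field            alpha1;    // refine on the VOF indicator field
    lowerRefineLevel 0.001;     // refine cells where alpha1 lies between
    upperRefineLevel 0.999;     //   these bounds (i.e. near the interface)
    unrefineLevel    10;
    nBufferLayers    1;
    maxRefinement    2;         // a cell may be split at most twice
    maxCells         200000;    // the limit the poster increased
    correctFluxes    ((phi U)); // fluxes to correct after topology changes
    dumpLevel        true;
}
```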

Thank you,

Adam

August 22, 2008, 13:22   #2
Adam Donaldson (adona058)
If this is a case of cells merging across the processor boundary, is there any way of incorporating a check into the code to inhibit such cell merges? That is, if a group of cells is flagged for unrefinement, check whether any of the internally connecting faces belong to a processor boundary, and if so, stop the unrefinement.


I know that I could create a surfaceScalarField, set the processor boundary values to 1 and all others to zero, and then force the mesh at those boundaries to refine to the highest level, thereby avoiding any possibility of subsequent unrefinement... but this seems like a waste of computational resources.

Thanks.

Adam

August 22, 2008, 14:07   #3
Mattijs Janssens (mattijs)
Senior Member
Join Date: Mar 2009
Posts: 1,419
Can you file a bug report (in OpenFOAM-bugs) and create a testcase?

August 22, 2008, 16:23   #4
Adam Donaldson (adona058)
As per your request, I have filed the bug report. It can be found at Bug Report.

I have included a test case. Due to the size limitations, I needed to put the file on my website and link to it.

Thanks.

Adam

September 18, 2009, 15:45   #5
Sean McIntyre (sean_mcintyre)
New Member
Join Date: Mar 2009
Location: State College, PA
Posts: 11
In OpenFOAM-1.6, I've been experiencing similar issues using AMR. As soon as the flow really starts moving, the solution behaves non-physically around the processor boundaries, eventually becoming unstable and crashing. It works quite well in serial, but slowly, due to the number of cells.

Thanks,
Sean McIntyre

August 19, 2010, 12:47   #6
Kyle Mooney (kmooney)
Senior Member
Join Date: Jul 2009
Location: San Francisco, CA USA
Posts: 323
I've observed nearly identical behavior in a VOF case I'm attempting to run. I've tried three versions (1.5.x, 1.6.x and 1.7.x) with the same result. There seems to be some issue with parallel refinement, and again, only with an imposed inlet velocity.
