Trouble with parallel runs

#1 - February 10, 2009, 07:48
Markus Weinmann (cfdmarkus), Member, Stuttgart, Germany
Hi all,

Running my case on a single core works fine (OF-1.5.x, simpleFoam, second order, default relaxation).

When I try to run the same case in parallel mode, the simulations keep blowing up after some iterations. Up to now, I haven't found a way to avoid divergence in parallel. Reducing under-relaxation and switching to first order does not help!

I am using my own turbulence model library, which is loaded in the controlDict file. When I decompose the case, the following warning occurs:

--> FOAM Warning :
From function dlLibraryTable::open(const fileName& functionLibName)
in file db/dlLibraryTable/dlLibraryTable.C at line 79
could not load /rhome/mw405/OpenFOAM/mw405-1.5.x/lib/linux64GccDPOpt/libmyincompressibleRASModels.so: undefined symbol: _ZN4Foam14incompressible8RASModel30dictionaryConstructorTablePtr_E

I don't know what this message is trying to tell me. How does it relate to the problem with running jobs in parallel mode?

Does anyone know what is going wrong here?

Regards,
Markus

#2 - February 10, 2009, 10:05
Bernhard Gschaider (gschaider), Assistant Moderator
Hi Markus!

That kind of symbol is usually a C++-symbol that was mangled to look like a C symbol (http://en.wikipedia.org/wiki/Name_mangling). Using the c++filt-command you can get the C++-name, which in your case would be Foam::incompressible::RASModel::dictionaryConstructorTablePtr_. This means that decomposePar does not know about turbulence-models (why should it?). Best thing you can do is comment out the libs entry in the controlDict that you used to load your turbulence-model. Decompose. Uncomment the entry. Run.
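For reference, a minimal sketch of what that entry looks like in the case's system/controlDict (the library name is taken from the warning quoted above):

// system/controlDict
libs ( "libmyincompressibleRASModels.so" );   // comment this line out before running decomposePar,
                                              // then restore it before starting the parallel run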

Bernhard

PS: of course the --remove-libs option of the PyFoam utilities does the commenting/uncommenting for you. But that is advertisement.

#3 - February 10, 2009, 10:49
Markus Weinmann (cfdmarkus), Member, Stuttgart, Germany
Thanks Bernhard,

commenting out the libs entry before decomposing the case eliminates the FOAM warning :-)

However, the main problem of divergence when running in parallel still persists.

So far, I couldn't find a way to stabilise the parallel runs sufficiently such that divergence does not occur (I tried: first order and reduced relaxation). Again, the same setup works without problems for serial runs (second order, default relaxation).
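For reference, "first order and reduced relaxation" corresponds to entries along the following lines in fvSchemes and fvSolution; the field names and values below are only examples, not the actual settings used in this case:

// system/fvSchemes (first-order convection, example entries only)
divSchemes
{
    default         none;
    div(phi,U)      Gauss upwind;
    div(phi,k)      Gauss upwind;
    div(phi,omega)  Gauss upwind;
}

// system/fvSolution (reduced under-relaxation, example values only)
relaxationFactors
{
    p               0.2;
    U               0.5;
    k               0.5;
    omega           0.5;
}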

Do you have any ideas on this issue?

Regards,
Markus

#4 - February 10, 2009, 12:16
Bernhard Gschaider (gschaider), Assistant Moderator
Hi!

Sorry. I thought the libs were the problem.

No idea what could be the case. Just one vague idea: if you're manipulating single cells of a field, a correctBoundaryConditions call might distribute those changes to the boundary patches on the other processors.
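For illustration, a minimal sketch of that pattern; the field name and the per-cell update are made up for the example:

// manipulate individual cells of the internal field ...
forAll(myField, cellI)
{
    myField[cellI] = someCellwiseValue(cellI);   // hypothetical per-cell update
}

// ... then re-evaluate the boundary conditions so that coupled
// (processor) patches on the neighbouring processors see the changes
myField.correctBoundaryConditions();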

Can you check whether the blow-up happens on the processor boundaries?

Bernhard

#5 - February 11, 2009, 03:30
Wolfgang Heydlauff (wolle1982), Senior Member, Germany
In the global controlDict, change "floatTransfer" from 1 to 0.
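For reference, that switch sits in the OptimisationSwitches block of the installation-wide controlDict (e.g. $WM_PROJECT_DIR/etc/controlDict); only the relevant entry is shown:

OptimisationSwitches
{
    // 1 compresses parallel transfers to single-precision floats, 0 keeps full precision
    floatTransfer   0;
}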

See the other thread on this.

Try running the case in serial for some time steps and use the result as the starting point for the parallel run.

#6 - February 11, 2009, 10:59
Markus Weinmann (cfdmarkus), Member, Stuttgart, Germany
Hi

From the beginning I was running parallel jobs with "floatTransfer" set to 0.
Also, restarting from converged results of a single processor simulation does not work.

I have a strong feeling that my turbulence model does not like running in parallel mode.
I just need to figure out why.

Thanks for now.
Markus

#7 - February 11, 2009, 11:31
Eugene de Villiers (eugene), Senior Member
If you are running anything that has been customised, switch it off and/or use an existing component.

If not, try looking at the partially diverged flow field to see where the problem originates.

Run the case in serial, then decompose. Run in parallel, dumping every time step (or at least often enough to see the onset of the instability). Reconstruct and look at the results.
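For reference, that procedure could look roughly like this (solver name, processor count and write frequency are just examples):

simpleFoam                            # run in serial until shortly before things go wrong
decomposePar                          # decompose the partially converged case
mpirun -np 4 simpleFoam -parallel     # run in parallel, with a small writeInterval in controlDict
reconstructPar                        # reassemble the fields and look for where divergence starts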

#8 - February 18, 2009, 12:00
Markus Weinmann (cfdmarkus), Member, Stuttgart, Germany
I now confirmed that my turbulence model does not run properly in parallel mode.

In order to determine the solution of a nonlinear equation, I need to loop over all elements of a volScalarField (see code below). I am sure that this looping causes my troubles in parallel mode. Unfortunately, I don't understand why it fails in parallel and works in serial mode.

As far as I know, looping over the elements of a volScalarField does not include boundary patches. Do I have to take extra care of the boundary/processor patches or am I missing something more fundamental?

Any ideas are appreciated.

Regards
Markus




//---------------------------------------
volScalarField P1 = (A3_*A3_/27.0 + (A1_*A4_/6.0 - 2.0/9.0*A2_*A2_)*I2S - 2.0/3.0*I2O)*A3_;
volScalarField P2 = P1*P1 - pow((A3_*A3_/9.0 + (A1_*A4_/3.0 + 2.0/9.0*A2_*A2_)*I2S + 2.0/3.0*I2O), 3.0);

// P2/P2 presumably just creates a field of the right size on the same mesh;
// every cell value is overwritten in the loop below
volScalarField Nsol = P2/P2;

// loop over the internal field only; boundary patches are not visited
forAll(Nsol, i)
{
    if (P2[i] >= 0.0) { Nsol[i] = ... }
    else              { Nsol[i] = ... }
}

// compute nut_ using Nsol
...
//---------------------------------------
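Picking up Bernhard's earlier hint, one way the patches could be handled is to apply the same selection on every boundary patch as well, so that processor patches carry consistent values, or simply to re-evaluate the boundary conditions after the loop. A rough sketch only; evaluatePositiveBranch and evaluateNegativeBranch are hypothetical stand-ins for the two "..." branches above:

//---------------------------------------
forAll(Nsol.boundaryField(), patchI)
{
    fvPatchScalarField& NsolPatch = Nsol.boundaryField()[patchI];
    const fvPatchScalarField& P2Patch = P2.boundaryField()[patchI];

    forAll(NsolPatch, faceI)
    {
        if (P2Patch[faceI] >= 0.0) { NsolPatch[faceI] = evaluatePositiveBranch(faceI); }
        else                       { NsolPatch[faceI] = evaluateNegativeBranch(faceI); }
    }
}

// alternatively, after the internal-field loop:
// Nsol.correctBoundaryConditions();
//---------------------------------------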

#9 - February 24, 2009, 09:53
Markus Weinmann (cfdmarkus), Member, Stuttgart, Germany
On a different thread I found that divergence may be caused by a linear solver bottoming out.

Does anyone know what happens in such a case?

I am asking because the residuals of the omega equation look a bit suspicious. When things start to go wrong, the residual for omega drops to 1e-26, whereas all other residuals are roughly 20 orders of magnitude higher.

Does such a behaviour make sense?

Markus

#10 - February 27, 2009, 04:59
Jens Wunderlich-Pfeiffer (jwp), New Member, Berlin
Hi everybody!

I have problems with parallel runs, too (OF 1.4.1-dev, simpleFoam).
My case consists of about one million cells (all hexahedra), but a rather complicated geometry.
After decomposing it into two pieces (with metis), there are at least 200 processor faces.

Does anybody know whether it is normal for the decomposed case to run more slowly than the serial one in such a case?

The other question: I would like to decompose the case by hand, using the "manual" method, in order to minimise the number of processor faces. But I think it's not so simple. Can anybody help?
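For reference, a manual decomposition is selected in system/decomposeParDict roughly as sketched below; the data file name is an example, and the file itself has to supply a processor index for every cell:

numberOfSubdomains  2;

method              manual;

manualCoeffs
{
    dataFile        "cellToProcessor";   // labelList with one processor number per cell
}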

Jens


PS: I had divergence problems, too, but I solved them by running the case in serial for a few time steps and then continuing in parallel.
