Parallelization of a new solver

November 10, 2008, 05:17   #1
jaswi, Senior Member
Dear Forum

Good Morning

I have some doubts related to parallelization of an OpenFOAM application or solver.

A modification to interFoam works fine on a single processor but crashes when run in parallel. A simple test such as:

mesh.nCells() or field.size() gives different results in serial and in parallel.

The changes to the solver also perform some explicit operations. For example:

Look for cells with a particular value of the volume fraction field gamma, take their neighbours and do some calculation. How does one make such operations parallel-aware? As far as I understand, one has to take care of it explicitly. In code this might look like:

forAll(gamma, cellI)
{
    if (gamma[cellI] > 0.01 && gamma[cellI] < 0.99)
    {
        // perform calcs ...
    }
}



I would ask users with experience in this area to please share their knowledge. It would be great if we could compile a list of guidelines one can follow to make sure that a solver is fully compatible with parallel runs.

I hope to find some contributors.

Kind Regards
Jaswi

November 10, 2008, 10:23   #2
jaswi, Senior Member
Some more searching shows that a reduce operation has to be performed wherever something processor-dependent is done. I tried the following:

label cellCount = mesh.nCells();
reduce(cellCount, sumOp<label>());

Info << "Mesh cell count : " << cellCount << endl;

This returns the same value in serial and in parallel, but when I do the same for faces:

label faceCount = mesh.nFaces();
reduce(faceCount, sumOp<label>());

Info << "Mesh face count : " << faceCount << endl;

the output is different, because faceCount now also includes the faces on the processor patches, so be aware of that. I do not yet know how to get the correct number of faces using a reduction.

It seems that the reduce operation can be used with various operators, such as:

reduce(foundLabel, orOp<bool>())

Still digging....
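As an illustration, a small hedged sketch of how such an orOp reduction could answer the original question of whether any processor holds an interface cell (the 0.01/0.99 gamma band is just the example from the first post):

bool found = false;

forAll(gamma, cellI)
{
    if (gamma[cellI] > 0.01 && gamma[cellI] < 0.99)
    {
        found = true;
        break;
    }
}

// combine the per-processor results; afterwards 'found' is true on every
// processor if at least one processor found such a cell
reduce(found, orOp<bool>());

if (found)
{
    Info<< "At least one interface cell exists somewhere in the mesh" << endl;
}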

Anybody with a clear understanding of this, please contribute.

Kind Regards
Jaswi

November 10, 2008, 10:35   #3
Sandeep Menon (deepsterblue), Senior Member
I have a suggestion, although I'm not too sure about the feasibility of implementing it. MPI inherently numbers processes between 0 and (n-1), where n is the number of sub-domains, and every processor patch can identify which MPI process / sub-domain it talks to. So, for every processor patch, check whether the neighbouring sub-domain is numbered higher than the current one. If yes, then add the number of faces of that processor patch to the current sub-domain's face count, and discard them otherwise. Finally, do a conventional reduce with sumOp to add up the face counts of all sub-domains.
__________________
Sandeep Menon
University of Massachusetts Amherst
https://github.com/smenon
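A minimal sketch of the counting scheme Sandeep describes (an illustration only; it assumes the standard polyBoundaryMesh / processorPolyPatch interface and a solver that includes processorPolyPatch.H):

// start from the local face count and drop the processor-patch faces that
// the lower-numbered neighbour will count instead, so that every shared
// face is counted exactly once before the global sum
label faceCount = mesh.nFaces();

forAll(mesh.boundaryMesh(), patchI)
{
    const polyPatch& pp = mesh.boundaryMesh()[patchI];

    if (isA<processorPolyPatch>(pp))
    {
        const processorPolyPatch& procPatch =
            refCast<const processorPolyPatch>(pp);

        if (procPatch.neighbProcNo() < procPatch.myProcNo())
        {
            faceCount -= pp.size();
        }
    }
}

reduce(faceCount, sumOp<label>());

Info<< "Global face count : " << faceCount << endl;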

November 30, 2008, 09:11   #4
Henrik Rusche (henrik), Senior Member
OpenFOAM uses domain decomposition. Therefore each processor only knows about the cells and faces residing on it.

Your problem is difficult because you want to identify cells and then do calculations with their neighbours. This stretches the "domain decomposition" paradigm because a neighbour might reside on a different processor.

My recipe for you:

1) Create an indicator field locally
2) interpolate the result to the faces (This should update the processor-processor boundaries)
3) Identify the neighbour cells: They have faces with non-zero values in the interpolated indicator field
4) Do the calc involving neighbours

There is probably a much more elegant solution, but I would have to dig a lot.

Henrik
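A rough sketch of this four-step recipe (hedged: it assumes the usual interFoam-style solver context with fvCFD.H, processorPolyPatch.H, mesh, runTime and gamma available, uses the 0.01-0.99 gamma band from the first post as the indicator criterion, and only spells out the cross-processor part of steps 3 and 4):

// 1) local indicator field: 1 in the cells of interest, 0 elsewhere
volScalarField indicator
(
    IOobject("indicator", runTime.timeName(), mesh),
    mesh,
    dimensionedScalar("zero", dimless, 0.0)
);

forAll(gamma, cellI)
{
    if (gamma[cellI] > 0.01 && gamma[cellI] < 0.99)
    {
        indicator[cellI] = 1.0;
    }
}
indicator.correctBoundaryConditions();   // exchange processor boundary values

// 2) interpolate to the faces; coupled (processor) faces now see both sides
surfaceScalarField indicatorf(fvc::interpolate(indicator));

// 3) + 4) neighbour cells that sit across a processor boundary: a processor
//         patch face carries a non-zero value if the cell on the other
//         processor is marked, even though the local cell is not
forAll(mesh.boundaryMesh(), patchI)
{
    if (isA<processorPolyPatch>(mesh.boundaryMesh()[patchI]))
    {
        const scalarField& indf = indicatorf.boundaryField()[patchI];
        const labelList faceCells(mesh.boundaryMesh()[patchI].faceCells());

        forAll(indf, facei)
        {
            if (indf[facei] > SMALL && indicator[faceCells[facei]] < SMALL)
            {
                // faceCells[facei] is a local neighbour of a marked cell
                // living on the other processor: do the calculation here
            }
        }
    }
}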

November 30, 2008, 10:58   #5
Ivan Flaminio Cozza (ivan_cozza), Senior Member
@Jaswinder
I'm interested in this kind of problem too, as I will be writing a solver that needs to know the neighbours of each cell, and I want it to work in parallel.
My question is: does any structure in Pstream store the local cell location?
That is, something that tells us whether a cell in domain x is on the boundary with domain y.
Thanks!

December 2, 2008, 17:29   #6
Nat Trask (ntrask), New Member
processorPolyPatch contains a list of faces on the patch between different processors and functions for getting the processor number of each patch.

I believe what you want to do is loop over the faces of your cell, check to see if the faces are on a processorPolyPatch, and then use the functions in processorPolyPatch to return the two processor IDs.

-Nat
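A minimal sketch of the check Nat describes (hedged; faceI stands for any face label of the cell being examined, e.g. taken from mesh.cells()[cellI]):

// is this face on a boundary patch, and if so, which one?
label patchID = mesh.boundaryMesh().whichPatch(faceI);   // -1 for internal faces

if (patchID != -1 && isA<processorPolyPatch>(mesh.boundaryMesh()[patchID]))
{
    const processorPolyPatch& procPatch =
        refCast<const processorPolyPatch>(mesh.boundaryMesh()[patchID]);

    // the two processor IDs sharing this face
    Pout<< "Face " << faceI << " is shared by processors "
        << procPatch.myProcNo() << " and "
        << procPatch.neighbProcNo() << endl;
}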

December 13, 2008, 13:09   #7
Heng Xiao (xiao), Member
Does anyone have a follow-up on this issue? Or are the suggestions (recipes) of Nat and Henrik already the best available?

Suggestions are appreciated!

Heng

December 22, 2008, 11:48   #8
Heng Xiao (xiao), Member
After some research, it seems that the best solution I can propose now is to create a global mesh on each processor by reading the mesh data in the global directory (this object only needs to be a "primitiveMesh" object, according to the document "Creation processes in OpenFOAM" written by Jaswi). Each processor can then analyse the global mesh in terms of neighbouring relations with the built-in mesh analysis tools such as pointCells(), cellCells(), etc. to find the neighbours in the global mesh. Then, with the processor-to-global mesh addressing / mapping (refer to the code in reconstructParMesh.C), you can find the neighbouring relations in terms of local cell IDs.

I admit this is an ugly solution, because the neighbouring relation is obtained by going back to the global mesh data, and the parallel communication based on this neighbouring relation still needs to be built up from scratch, but it is the best I can think of now, and I don't really need communication for now, just the neighbouring relation.

For my case, the global mesh is not very big at all (then people would ask: why bother going parallel if your mesh is small? The reason is that I am doing Euler-MD coupling, with OpenFOAM dealing with the Eulerian part, based on icoLagrangianFoam; I want the MD part to be parallel and able to handle collisions).

I tried the approach suggested by Nat, but it seems that "processorPolyPatch" only gives the faceCentres and faceAreas of the neighbours (across processors). What I need is the cell centres associated with these faces.

I hope this contribution will finally lead to an elegant solution of this problem.

December 22, 2008, 13:20   #9
Nat Trask (ntrask), New Member
Heng

processorPolyPatch can provide the processor ID of the neighbouring sub-domain. You can then use Pstream to send and receive messages between processors. You can assemble a list of cell centres, or whatever you'd like, and send it across the processor boundary.

i.e. loop over the patch faces, create a list of owner cell centres, send it to the neighbour with Pstream, and process it on the opposite side.

If anyone else can give their input as well, it would be much appreciated.

-Nat
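A hedged sketch of that exchange for the cell-centre case Heng asked about (Pstream constructor signatures differ between OpenFOAM versions; the blocking form below follows the style common in later releases, so treat it as an illustration rather than copy-paste code):

forAll(mesh.boundaryMesh(), patchI)
{
    const polyPatch& pp = mesh.boundaryMesh()[patchI];

    if (isA<processorPolyPatch>(pp))
    {
        const processorPolyPatch& procPatch =
            refCast<const processorPolyPatch>(pp);

        // owner-cell centres of the patch faces, in patch-face order
        const labelList fc(pp.faceCells());
        vectorField ownCentres(pp.size());
        forAll(fc, i)
        {
            ownCentres[i] = mesh.cellCentres()[fc[i]];
        }

        // send my owner-cell centres to the neighbouring processor
        {
            OPstream toNeighb(Pstream::blocking, procPatch.neighbProcNo());
            toNeighb << ownCentres;
        }

        // receive the neighbour's owner-cell centres; face i on this side
        // matches face i on the other side because the face lists are
        // stored in the same order (see post #16 below)
        vectorField neiCentres(pp.size());
        {
            IPstream fromNeighb(Pstream::blocking, procPatch.neighbProcNo());
            fromNeighb >> neiCentres;
        }

        // ... use ownCentres / neiCentres here
    }
}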

December 22, 2008, 16:41   #10
Heng Xiao (xiao), Member
Hi Nat,

(First, sorry for changing the name... this is for some very weird reason... I am still Heng.)

Thanks for the clarification! So, the procedure would be, on each processor:
-------------------
for each processor patch
{
    for each face on this patch
    {
        find the cell that owns this face via faceCells()
        add this cell to a buffer list
    }
    create a Pstream
    send to / receive from the neighbour
}
------------------------

Correct?
The procedure of creating a Pstream and exchanging (send/receive) with neighbours is pretty common throughout the OpenFOAM code, but I am not sure the part I am doing for each face on the processorPolyPatch is right.

BTW, I feel that to solve the problem raised in this thread, something could be learned from the class "moleculeCloud":

http://foam.sourceforge.net/doc/Doxygen/html/dir_fb64a3fdd283c156088e34430113301a.html

Particularly the following files (build***InteractionList.H/C):

http://foam.sourceforge.net/doc/Doxygen/html/moleculeCloudBuildDirectInteractionList_8H-source.html

http://foam.sourceforge.net/doc/Doxygen/html/moleculeCloudBuildReferredInteractionList_8H-source.html

But I haven't fully understood the details yet. Still working on it. I will share my findings shortly, if any.

Best,
Heng

December 23, 2008, 05:03   #11
Ivan Flaminio Cozza (ivan_cozza), Senior Member
Hi Nat,
could you post a piece of code in which you do this step?

December 23, 2008, 11:35   #12
Nat Trask (ntrask), New Member
This is a bit of my code where I did something similar. Replace the bit where it says blobID with whatever field you're sending across. Good luck deciphering it.

//Parallel
if (Pstream::parRun())
{
    //Loop over processor patches
    Info << "Sending stuff" << endl;

    forAll(mesh.boundaryMesh(), patchInd)
    {
        const polyPatch& patch = mesh.boundaryMesh()[patchInd];

        //isA<processorPolyPatch>(patch) is the more usual OpenFOAM idiom
        if (typeid(patch) == typeid(processorPolyPatch))
        {
            const processorPolyPatch& procpatch =
                refCast<const processorPolyPatch>(patch);

            //only for master chunk -> cut work in half
            if (procpatch.myProcNo() > procpatch.neighbProcNo())
            {
                //Make buffer of the field values in the cells next to the patch
                Field<scalar> mybuffer(patch.size());
                const labelList& internalcells = patch.faceCells();

                forAll(internalcells, ind)
                {
                    label curcell = internalcells[ind];
                    mybuffer[ind] = blobID[curcell];

                    //for debugging (patchboundary is a helper field defined
                    //elsewhere in the solver)
                    patchboundary[curcell] = procpatch.myProcNo();
                }

                //Send buffer to neighbor
                OPstream tNP(procpatch.neighbProcNo(), 0, false);
                tNP << mybuffer << endl;
            }
        }
    }

    Info << "Messages sent" << endl;

    //Receive messages
    forAll(mesh.boundaryMesh(), patchInd)
    {
        const polyPatch& patch = mesh.boundaryMesh()[patchInd];

        if (typeid(patch) == typeid(processorPolyPatch))
        {
            const processorPolyPatch& procpatch =
                refCast<const processorPolyPatch>(patch);

            if (procpatch.myProcNo() < procpatch.neighbProcNo()) //only for slave chunk
            {
                //Make buffer and receive the neighbour's values into it
                Field<scalar> yourbuffer(patch.size());

                IPstream fNP(procpatch.neighbProcNo());
                fNP >> yourbuffer;

                //process yourbuffer here (the original snippet ends at this point)
            }
        }
    }
}

December 23, 2008, 14:10   #13
Ivan Flaminio Cozza (ivan_cozza), Senior Member
Wow!
Thanks Nat, this could be a great help for beginners in code parallelization!

December 23, 2008, 17:30   #14
Heng Xiao (xiao), Member
Thanks a lot !!!
One line of code is as good as a thousand words!

Heng

December 23, 2008, 18:07   #15
Heng Xiao (xiao), Member
A question: is it true that on both sides of a processorPolyPatch the faces are numbered the same, or at least stored in the same order? This is necessary for the communication above to work.

Say proc 1 and proc 2 share a patch with 3 faces. Is face #3 on the patch of proc 1 then the same face as face #3 on the patch of proc 2?


Proc 1

3 ***** 4 **** 5 ******** (face # on proc 1)
---------|---------|-----------

3 **** 4 **** 5 ******** (face # on proc 2)

Proc 2

Or is it like this:

Proc 1

2 ***** 5 **** 6 ******** (face # on proc 1)
---------|--------|----------

3 **** 4 **** 5 ******** (face # on proc 2)

Proc 2

************************************
The former seems to be true according to this post:

http://openfoamwiki.net/index.php/Write_OpenFOAM_meshes

but I wonder how this can be achieved, since one processor can be a neighbour of several other processors.

Regards,
Heng

January 5, 2009, 10:07   #16
Nat Trask (ntrask), New Member
The faces are most likely not numbered the same, but the patches store the face list in the same order.

-Nat

January 5, 2009, 12:34   #17
Heng Xiao (xiao), Member
Hi Nat,

Thanks for the clarification!

best,
Heng

August 24, 2009, 05:12   #18
Dr. Alexander Vakhrushev (makaveli_lcf), Senior Member
Hi all!

I read this thread and I'm also worried about the parallelization of OpenFOAM solvers. Based on my experience, the parallel solution is not the same as the serial one. This concerns both standard solvers and new ones I have written myself.

(attachments: ser.png, par.png)

The first example is a transient solution of the flow simulated with turbFoam. You can see that the solutions of the serial run (first picture) and the parallel run (second picture) differ at the same time instant.

It is worse with my coupled solver for flow and energy calculations. The solver has a criterion to stop the internal loop when all initial residuals are less than a fixed value (say 1e-06), which means that the equations for liquid motion and heat transfer (which are connected through source terms) have converged together. For the serial solution I get smooth fields. For the parallel run there are strong gradients at the processor boundaries (see the attached picture).

(attachment: par_4cpus.jpg)

Please, does anyone have an idea what causes such behaviour between processors? To me it looks as if the solver evaluates the convergence criterion only within the local domain, without taking into account the global distribution of the modelled fields on the neighbouring processors. How can one "teach" the solver to apply a global convergence criterion in parallel runs?

Thank you, looking forward to your advice!
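A hedged sketch of one way to make a hand-rolled stopping criterion global (names are illustrative; it assumes the criterion is based on the change of a field T between outer iterations and that T.storePrevIter() is called at the top of the loop):

// change of T since the previous outer iteration (local to this processor)
volScalarField dT(mag(T - T.prevIter()));

// reduce to a single global number so that every processor sees the same
// value and takes the same decision; gMax already includes the reduction
scalar maxChange = gMax(dT.internalField());
// equivalent: scalar maxChange = max(dT.internalField());
//             reduce(maxChange, maxOp<scalar>());

if (maxChange < convergenceTol)   // convergenceTol: user-chosen tolerance
{
    break;   // leave the outer loop on all processors simultaneously
}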
__________________
Best regards,

Dr. Alexander VAKHRUSHEV

Christian Doppler Laboratory for "Metallurgical Applications of Magnetohydrodynamics"

Simulation and Modelling of Metallurgical Processes
Department of Metallurgy
University of Leoben

http://smmp.unileoben.ac.at

August 23, 2012, 12:20   #19
Erik Spence (tapper_of_spines), New Member
Quote (originally posted by makaveli_lcf):
It is worse with my coupled solver for flow and energy calculations. The solver has a criterion to stop the internal loop when all initial residuals are less than a fixed value (say 1e-06), which means that the equations for liquid motion and heat transfer (which are connected through source terms) have converged together. For the serial solution I get smooth fields. For the parallel run there are strong gradients at the processor boundaries (see the attached picture).

Please, does anyone have an idea what causes such behaviour between processors? To me it looks as if the solver evaluates the convergence criterion only within the local domain, without taking into account the global distribution of the modelled fields on the neighbouring processors. How can one "teach" the solver to apply a global convergence criterion in parallel runs?

Thank you, looking forward to your advice!

Dear Dr. Vakhrushev,

I realize that this thread is very old, but I've run into the same problem that you described here: large gradients forming at subdomain boundaries. Did you ever solve this problem?

TOS

August 27, 2012, 03:45   #20
Dr. Alexander Vakhrushev (makaveli_lcf), Senior Member
Hi Erik!

It seems it was solved by an OpenFOAM version update... I'm not sure; I would have to recheck that case. But currently I do not observe similar behaviour any more.

Cheers,
Alexander
__________________
Best regards,

Dr. Alexander VAKHRUSHEV

Christian Doppler Laboratory for "Metallurgical Applications of Magnetohydrodynamics"

Simulation and Modelling of Metallurgical Processes
Department of Metallurgy
University of Leoben

http://smmp.unileoben.ac.at
