the global index for cells and faces in parallel computation
October 22, 2015, 12:24 |
the global index for cells and faces in parallel computation
|
#1 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Hello,
When a parallel computation is run with OpenFOAM, the domain is decomposed into several parts according to the number of processors. So the following loop and the index "celli" will only cover the cells (or faces) of the local processor: Code:
const volVectorField& cellcentre = mesh.C();

forAll(cellcentre, celli)
{
    X[celli] = cellcentre[celli][0];
    Y[celli] = cellcentre[celli][1];
    Z[celli] = cellcentre[celli][2];
}
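To illustrate this, a minimal sketch (assuming the standard mesh and runTime objects of an fvCFD-based solver): each processor sees only its own sub-mesh, which can be checked by comparing the local cell count with the reduced global count: Code:
//local cell count, printed per processor
Pout<< "Cells on processor " << Pstream::myProcNo()
    << ": " << mesh.nCells() << endl;

//sum of the local counts over all processors
label nGlobalCells = mesh.nCells();
reduce(nGlobalCells, sumOp<label>());
Info<< "Total cells in the undecomposed mesh: " << nGlobalCells << endl;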
|
October 31, 2015, 11:25 |
|
#2 |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128 |
Hi OFFO,
This is one of those situations that really depends on what your final objective is. Because from your original description, the solution is pretty simple:
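A minimal sketch of this idea (the field name "cellID" and the form of a small pre-processing utility are assumptions): on the undecomposed case, write a volScalarField that stores each cell's own index, so that decomposePar splits it together with the mesh: Code:
//assumes the usual "runTime" and "mesh" objects of an OpenFOAM utility/solver
volScalarField cellID
(
    IOobject
    (
        "cellID",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::AUTO_WRITE
    ),
    mesh,
    dimensionedScalar("cellID", dimless, 0.0)
);

forAll(cellID, celli)
{
    //store the serial-mesh (global) cell index as a scalar value
    cellID[celli] = celli;
}

cellID.write();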
But like I wrote above, it might depend on what your actual objective is. Best regards, Bruno
__________________
|
|
October 31, 2015, 11:44 |
|
#3 | |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Thank you so much, Bruno.
My objective is: when I run a parallel computation, all the cells and faces are looped over with their local (per-processor) indices, not the global indices of the mesh before it was decomposed. I would like to obtain the global indices inside such a loop during the parallel computation. Is there any method to do that? Thank you. Quote:
|
||
October 31, 2015, 12:00 |
|
#4 | |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128 |
Quote:
When you run decomposePar, the field will be decomposed automatically. Then in your solver you need to load the field, as you do any other field such as "U" and "p". And then, instead of looking at "cellcentre[celli]", you look at: Code:
originalID = cellID[celli]; |
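A minimal sketch of loading that decomposed field in the solver and recovering the original index (the field name "cellID" is the same assumption as above): Code:
volScalarField cellID
(
    IOobject
    (
        "cellID",
        runTime.timeName(),
        mesh,
        IOobject::MUST_READ,
        IOobject::NO_WRITE
    ),
    mesh
);

forAll(cellID, celli)
{
    //the field stores the pre-decomposition (global) index as a scalar
    const label originalID = label(cellID[celli] + 0.5);
    // ... use originalID wherever the global index is needed
}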
||
October 31, 2015, 12:03 |
|
#5 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
You can read the cellProcAddressing arrays from the processor0 ... processorN folders;
see the example below Code:
List<List<label> > processCellToGlobalAddr_;
List<label> globalCellToProcessAddr_;

if (Pstream::parRun())
{
    processCellToGlobalAddr_.resize(Pstream::nProcs());

    //read local cell addressing
    labelIOList localCellProcAddr
    (
        IOobject
        (
            "cellProcAddressing",
            localMesh.facesInstance(),
            localMesh.meshSubDir,
            localMesh,
            IOobject::MUST_READ,
            IOobject::NO_WRITE
        )
    );

    processCellToGlobalAddr_[Pstream::myProcNo()] = localCellProcAddr;

    //send local cell addressing to master process
    if (Pstream::master())
    {
        for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
        {
            IPstream fromSlave(Pstream::scheduled, jSlave);
            label nSlaveCells = 0;
            fromSlave >> nSlaveCells;
            processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
            labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
            forAll(slaveCellProcAddr, iCell)
            {
                fromSlave >> slaveCellProcAddr[iCell];
            }
        }
    }
    else
    {
        OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
        toMaster << localCellProcAddr.size();
        forAll(localCellProcAddr, iCell)
        {
            toMaster << localCellProcAddr[iCell];
        }
    }

    //redistribute cell addressing to slave processes
    if (Pstream::master())
    {
        for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
        {
            OPstream toSlave (Pstream::scheduled, jSlave);
            forAll(processCellToGlobalAddr_, iProcess)
            {
                const labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
                const label nCells = thisProcessAddr.size();
                toSlave << nCells;
                forAll(thisProcessAddr, jCell)
                {
                    toSlave << thisProcessAddr[jCell];
                }
            }
        }
    }
    else
    {
        IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());
        forAll(processCellToGlobalAddr_, iProcess)
        {
            labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
            label nCells = 0;
            fromMaster >> nCells;
            thisProcessAddr.resize(nCells);
            forAll(thisProcessAddr, jCell)
            {
                fromMaster >> thisProcessAddr[jCell];
            }
        }
    }

    //NOTE: globalCellToProcessAddr_ must be sized to the total (global)
    //number of cells before it can be indexed in the loop below
    label nGlobalCells = 0;
    forAll(processCellToGlobalAddr_, jProc)
    {
        nGlobalCells += processCellToGlobalAddr_[jProc].size();
    }
    globalCellToProcessAddr_.resize(nGlobalCells);

    //build the reverse map: global cell index -> local index on the owning process
    forAll(processCellToGlobalAddr_, jProc)
    {
        const labelList& jProcessAddr = processCellToGlobalAddr_[jProc];
        forAll(jProcessAddr, iCell)
        {
            label iGlobalCell = jProcessAddr[iCell];
            globalCellToProcessAddr_[iGlobalCell] = iCell;
        }
    }
}
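A short usage sketch of the resulting map inside a cell loop (assuming a parallel run, i.e. Pstream::parRun() is true): Code:
//global index of every local cell on this processor
const labelList& myCellAddr = processCellToGlobalAddr_[Pstream::myProcNo()];

forAll(myCellAddr, celli)
{
    const label globalCelli = myCellAddr[celli];
    Pout<< "local cell " << celli
        << " -> global cell " << globalCelli << endl;
}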
__________________
MDPI Fluids (Q2) special issue for OSS software: https://www.mdpi.com/journal/fluids/..._modelling_OSS GitHub: https://github.com/unicfdlab Linkedin: https://linkedin.com/in/matvey-kraposhin-413869163 RG: https://www.researchgate.net/profile/Matvey_Kraposhin |
|
October 31, 2015, 17:18 |
|
#6 | |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Bruno, Thank you for your reply. I think your idea works. This is a very clever method!
Quote:
|
||
October 31, 2015, 17:19 |
|
#7 | |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Hello mkraposhin,
Thank you so much for your help. I will try your method for my case. This is also a very clever approach! Quote:
|
||
October 31, 2015, 18:23 |
|
#8 | |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Dear Mkraposhin,
In the following lines, if I need to build the relation between both local cells and faces and their global counterparts, how can I add the face-related part to the following: Code:
if (Pstream::master())
{
    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
    {
        IPstream fromSlave(Pstream::scheduled, jSlave);
        label nSlaveCells = 0;
        fromSlave >> nSlaveCells;
        processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
        labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
        forAll(slaveCellProcAddr, iCell)
        {
            fromSlave >> slaveCellProcAddr[iCell];
        }
    }
}
else
{
    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
    toMaster << localCellProcAddr.size();
    forAll(localCellProcAddr, iCell)
    {
        toMaster << localCellProcAddr[iCell];
    }
}
Quote:
|
||
November 1, 2015, 07:34 |
|
#9 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
Hi, I'm not sure that I understand your question correctly, but I will try to give more explanation of the code that I posted above.
For each MPI process (or processor) you can read addressing arrays that map the local (per-process) indices of primitives to their global indices. These arrays are located in the folder processorN/constant/polyMesh in the following files: Code:
boundaryProcAddressing
cellProcAddressing
faceProcAddressing
pointProcAddressing
You can read these arrays in each MPI process with code similar to the following: Code:
labelIOList localCellProcAddr
(
    IOobject
    (
        "cellProcAddressing",
        mesh.facesInstance(),
        mesh.meshSubDir,
        mesh,
        IOobject::MUST_READ,
        IOobject::NO_WRITE
    )
);
Code:
labelIOList localFaceProcAddr
(
    IOobject
    (
        "faceProcAddressing",
        mesh.facesInstance(),
        mesh.meshSubDir,
        mesh,
        IOobject::MUST_READ,
        IOobject::NO_WRITE
    )
);
Code:
processCellToGlobalAddr_[Pstream::myProcNo()] = localCellProcAddr;
At this point the array processCellToGlobalAddr_ contains information only about the addressing of the current process; the addressing of the other processes is invisible. That's why at the next step you need to redistribute this information across the processes. The idea is simple: 1) send the addressing information from all processes to the master process (with id #0), 2) scatter the gathered information from the master process back to the other processes Code:
//send local cell addressing to master process
if (Pstream::master())
{
    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
    {
        IPstream fromSlave(Pstream::scheduled, jSlave);
        label nSlaveCells = 0;
        fromSlave >> nSlaveCells;
        processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
        labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
        forAll(slaveCellProcAddr, iCell)
        {
            fromSlave >> slaveCellProcAddr[iCell];
        }
    }
}
else
{
    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
    toMaster << localCellProcAddr.size();
    forAll(localCellProcAddr, iCell)
    {
        toMaster << localCellProcAddr[iCell];
    }
}

//redistribute cell addressing to slave processes
if (Pstream::master())
{
    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
    {
        OPstream toSlave (Pstream::scheduled, jSlave);
        forAll(processCellToGlobalAddr_, iProcess)
        {
            const labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
            const label nCells = thisProcessAddr.size();
            toSlave << nCells;
            forAll(thisProcessAddr, jCell)
            {
                toSlave << thisProcessAddr[jCell];
            }
        }
    }
}
else
{
    IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());
    forAll(processCellToGlobalAddr_, iProcess)
    {
        labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
        label nCells = 0;
        fromMaster >> nCells;
        thisProcessAddr.resize(nCells);
        forAll(thisProcessAddr, jCell)
        {
            fromMaster >> thisProcessAddr[jCell];
        }
    }
}
Code:
//build the reverse map: global cell index -> local index on the owning process
//(globalCellToProcessAddr_ must first be sized to the total number of global cells)
forAll(processCellToGlobalAddr_, jProc)
{
    const labelList& jProcessAddr = processCellToGlobalAddr_[jProc];
    forAll(jProcessAddr, iCell)
    {
        label iGlobalCell = jProcessAddr[iCell];
        globalCellToProcessAddr_[iGlobalCell] = iCell;
    }
}
__________________
MDPI Fluids (Q2) special issue for OSS software: https://www.mdpi.com/journal/fluids/..._modelling_OSS GitHub: https://github.com/unicfdlab Linkedin: https://linkedin.com/in/matvey-kraposhin-413869163 RG: https://www.researchgate.net/profile/Matvey_Kraposhin Last edited by mkraposhin; November 1, 2015 at 07:39. Reason: grammar |
|
November 1, 2015, 11:07 |
|
#10 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Dear Matvey,
Thank you so much for your detailed explanation. This is very helpful. How would I collect and send the data for both cells and faces in the following code: Code:
//send local cell addressing to master process
if (Pstream::master())
{
    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
    {
        IPstream fromSlave(Pstream::scheduled, jSlave);
        label nSlaveCells = 0;
        fromSlave >> nSlaveCells;
        processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
        labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
        forAll(slaveCellProcAddr, iCell)
        {
            fromSlave >> slaveCellProcAddr[iCell];
        }
    }
}
else
{
    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
    toMaster << localCellProcAddr.size();
    forAll(localCellProcAddr, iCell)
    {
        toMaster << localCellProcAddr[iCell];
    }
}
Or in other words, when we have multiple packages of data, how do we do the sending and gathering with OpenFOAM's parallelization here? Cheers, OFFO
|
November 1, 2015, 14:10 |
|
#11 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
In OpenFOAM, synchronization between processes can be done in many ways:
1) With the standard MPI API - just include "mpi.h" and do what you want to do
2) With the IPstream and OPstream classes:
2a) with the overloaded stream operators "<<" and ">>"
2b) with the static functions IPstream::read and OPstream::write
3) With reduce and scatter operations - the Foam::reduce(...) and Foam::scatter(...) static functions
In the example posted above I used method 2a) - I don't need to care how the data is marked, I only need to remember which data comes first. For example, if I want to send the cellProcAddressing and faceProcAddressing arrays to the master process, I must create an OPstream object on each slave process. Then I use it to send the data to the master process with the "<<" operator: Code:
if ( ! Pstream::master() )
{
    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());

    //send size of cellProcAddr to master
    toMaster << cellProcAddr.size();

    //send size of faceProcAddr to master
    toMaster << faceProcAddr.size();

    //send arrays to master
    toMaster << cellProcAddr;
    toMaster << faceProcAddr;
}
Code:
if ( Pstream::master() )
{
    //read data from each slave
    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
    {
        IPstream fromSlave(Pstream::scheduled, jSlave);

        //read number of cells in slave process
        label nSlaveCells = 0;
        fromSlave >> nSlaveCells;

        //read number of faces in slave process
        label nSlaveFaces = 0;
        fromSlave >> nSlaveFaces;

        //create storage and read cellProcAddr from slave
        labelList slaveCellProcAddr (nSlaveCells);
        fromSlave >> slaveCellProcAddr;

        //create storage and read faceProcAddr from slave
        labelList slaveFaceProcAddr (nSlaveFaces);
        fromSlave >> slaveFaceProcAddr;
    }
}
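As a side note, the same exchange can be written more compactly with the Pstream list helpers (related to method 3 above); a sketch, assuming localCellProcAddr has already been read as shown earlier: Code:
List<labelList> processCellToGlobalAddr(Pstream::nProcs());

//every process fills its own slot with its locally read addressing
processCellToGlobalAddr[Pstream::myProcNo()] = localCellProcAddr;

//gather all slots onto the master, then broadcast the complete list back
Pstream::gatherList(processCellToGlobalAddr);
Pstream::scatterList(processCellToGlobalAddr);

//now every process knows the cell addressing of every other process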
__________________
MDPI Fluids (Q2) special issue for OSS software: https://www.mdpi.com/journal/fluids/..._modelling_OSS GitHub: https://github.com/unicfdlab Linkedin: https://linkedin.com/in/matvey-kraposhin-413869163 RG: https://www.researchgate.net/profile/Matvey_Kraposhin Last edited by mkraposhin; November 1, 2015 at 15:22. |
|
November 1, 2015, 14:19 |
|
#12 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Dear Matvey,
Thank you so much! This is clear to me now. Your explanations are very good! OFFO
|
November 1, 2015, 15:09 |
|
#13 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Dear Matvey,
Please allow me to ask one more question: if I do the following: Code:
1) With the usage of MPI standard API - just include "mpi.h" and do what you want to do |
|
November 1, 2015, 15:20 |
|
#14 | |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
Quote:
__________________
MDPI Fluids (Q2) special issue for OSS software: https://www.mdpi.com/journal/fluids/..._modelling_OSS GitHub: https://github.com/unicfdlab Linkedin: https://linkedin.com/in/matvey-kraposhin-413869163 RG: https://www.researchgate.net/profile/Matvey_Kraposhin |
||
November 2, 2015, 05:31 |
|
#15 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Dear Matvey,
I have modified your code to send and receive the information for both faces and cells. If you could take a look, I would really appreciate it. Thank you. Code:
List<List<label> > processCellToGlobalAddr_;
List<label> globalCellToProcessAddr_;
List<List<label> > processFaceToGlobalAddr_;
List<label> globalFaceToProcessAddr_;

if (Pstream::parRun())
{
    processCellToGlobalAddr_.resize(Pstream::nProcs());
    processFaceToGlobalAddr_.resize(Pstream::nProcs());

    //read local cell addressing
    labelIOList localCellProcAddr
    (
        IOobject
        (
            "cellProcAddressing",
            localMesh.facesInstance(),
            localMesh.meshSubDir,
            localMesh,
            IOobject::MUST_READ,
            IOobject::NO_WRITE
        )
    );

    //read local face addressing
    labelIOList localFaceProcAddr
    (
        IOobject
        (
            "faceProcAddressing",
            localMesh.facesInstance(),
            localMesh.meshSubDir,
            localMesh,
            IOobject::MUST_READ,
            IOobject::NO_WRITE
        )
    );

    processCellToGlobalAddr_[Pstream::myProcNo()] = localCellProcAddr;
    processFaceToGlobalAddr_[Pstream::myProcNo()] = localFaceProcAddr;

    //send local cell and face addressing to master process
    if (Pstream::master()) // if this is the rank=0 (master) processor
    {
        for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
        {
            IPstream fromSlave(Pstream::scheduled, jSlave);

            // the cell information
            label nSlaveCells = 0;
            fromSlave >> nSlaveCells;
            processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
            labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
            forAll(slaveCellProcAddr, iCell)
            {
                fromSlave >> slaveCellProcAddr[iCell];
            }

            // the face information
            label nSlaveFaces = 0;
            fromSlave >> nSlaveFaces;
            processFaceToGlobalAddr_[jSlave].resize(nSlaveFaces);
            labelList& slaveFaceProcAddr = processFaceToGlobalAddr_[jSlave];
            forAll(slaveFaceProcAddr, iFace)
            {
                fromSlave >> slaveFaceProcAddr[iFace];
            }
        }
    }
    else // if this is a slave (non-master) processor
    {
        OPstream toMaster (Pstream::scheduled, Pstream::masterNo());

        // cell information
        toMaster << localCellProcAddr.size();
        forAll(localCellProcAddr, iCell)
        {
            toMaster << localCellProcAddr[iCell];
        }

        // face information
        toMaster << localFaceProcAddr.size();
        forAll(localFaceProcAddr, iFace)
        {
            toMaster << localFaceProcAddr[iFace];
        }
    }

    //redistribute cell and face addressing to slave processes
    if (Pstream::master())
    {
        for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
        {
            OPstream toSlave (Pstream::scheduled, jSlave);

            // cell information
            forAll(processCellToGlobalAddr_, iProcess)
            {
                const labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
                const label nCells = thisProcessAddr.size();
                toSlave << nCells;
                forAll(thisProcessAddr, jCell)
                {
                    toSlave << thisProcessAddr[jCell];
                }
            }

            // face information
            forAll(processFaceToGlobalAddr_, iProcess)
            {
                const labelList& thisProcessAddr = processFaceToGlobalAddr_[iProcess];
                const label nFaces = thisProcessAddr.size();
                toSlave << nFaces;
                forAll(thisProcessAddr, jFace)
                {
                    toSlave << thisProcessAddr[jFace];
                }
            }
        }
    }
    else
    {
        IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());

        // cell information
        forAll(processCellToGlobalAddr_, iProcess)
        {
            labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
            label nCells = 0;
            fromMaster >> nCells;
            thisProcessAddr.resize(nCells);
            forAll(thisProcessAddr, jCell)
            {
                fromMaster >> thisProcessAddr[jCell];
            }
        }

        // face information
        forAll(processFaceToGlobalAddr_, iProcess)
        {
            labelList& thisProcessAddr = processFaceToGlobalAddr_[iProcess];
            label nFaces = 0;
            fromMaster >> nFaces;
            thisProcessAddr.resize(nFaces);
            forAll(thisProcessAddr, jFace)
            {
                fromMaster >> thisProcessAddr[jFace];
            }
        }
    }

    // NOTE: the reverse maps must be sized to the total (global) number of
    // cells/faces before they can be indexed below
    label nGlobalCells = 0;
    label nGlobalFaces = 0;
    forAll(processCellToGlobalAddr_, jProc)
    {
        nGlobalCells += processCellToGlobalAddr_[jProc].size();
        nGlobalFaces += processFaceToGlobalAddr_[jProc].size();
    }
    globalCellToProcessAddr_.resize(nGlobalCells);
    globalFaceToProcessAddr_.resize(nGlobalFaces);

    // Set the relation between local and global cell indices
    forAll(processCellToGlobalAddr_, jProc)
    {
        const labelList& jProcessAddr = processCellToGlobalAddr_[jProc];
        forAll(jProcessAddr, iCell)
        {
            label iGlobalCell = jProcessAddr[iCell];
            globalCellToProcessAddr_[iGlobalCell] = iCell;
        }
    }

    // Set the relation between local and global face indices
    // NOTE: unlike cellProcAddressing, faceProcAddressing is signed and offset
    // by one (the value is globalFaceI + 1, negated if the face is flipped),
    // so it has to be decoded before being used as a global index
    forAll(processFaceToGlobalAddr_, jProc)
    {
        const labelList& jProcessAddr = processFaceToGlobalAddr_[jProc];
        forAll(jProcessAddr, iFace)
        {
            label iGlobalFace = mag(jProcessAddr[iFace]) - 1;
            globalFaceToProcessAddr_[iGlobalFace] = iFace;
        }
    }
}
|
November 2, 2015, 05:39 |
|
#16 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Dear Matvey,
How did you define "localMesh" in the following: Code:
//read local cell addressing
labelIOList localCellProcAddr
(
    IOobject
    (
        "cellProcAddressing",
        localMesh.facesInstance(),
        localMesh.meshSubDir,
        localMesh,
        IOobject::MUST_READ,
        IOobject::NO_WRITE
    )
);
Code:
CFD_Cell_Local2Glob.H(27): error: identifier "localMesh" is undefined
          localMesh.facesInstance(),
|
November 2, 2015, 05:45 |
|
#17 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
localMesh is simply mesh in a standard OpenFOAM solver.
Also, according to the OpenFOAM coding style, you should remove the trailing underscores "_" from the variables
List<List<label> > processCellToGlobalAddr_;
List<label> globalCellToProcessAddr_;
List<List<label> > processFaceToGlobalAddr_;
List<label> globalFaceToProcessAddr_;
In my example these variables were protected members of a class, but in your case (if I understand it right) they are used as global variables of the main(...) procedure.
__________________
MDPI Fluids (Q2) special issue for OSS software: https://www.mdpi.com/journal/fluids/..._modelling_OSS GitHub: https://github.com/unicfdlab Linkedin: https://linkedin.com/in/matvey-kraposhin-413869163 RG: https://www.researchgate.net/profile/Matvey_Kraposhin |
|
November 2, 2015, 06:00 |
|
#18 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Hi Matvey,
The compilation is successful now. Thank you.
|
November 2, 2015, 06:11 |
|
#19 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
If you want to check that everything works, you can do it by dumping the arrays to files (with the OFstream class). If the files from all processors are the same, then your program operates as you expected.
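For example, a minimal sketch of such a dump (the file name and the use of the gathered processCellToGlobalAddr_ list are illustrative assumptions): Code:
#include "OFstream.H"   //at the top of the solver source

//each processor writes its own copy of the gathered addressing;
//the files from all processors should then be identical
fileName outName
(
    "processCellToGlobalAddr_" + Foam::name(Pstream::myProcNo())
);

OFstream os(runTime.path()/outName);

forAll(processCellToGlobalAddr_, procI)
{
    os << "processor " << procI << nl
       << processCellToGlobalAddr_[procI] << nl;
}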
__________________
MDPI Fluids (Q2) special issue for OSS software: https://www.mdpi.com/journal/fluids/..._modelling_OSS GitHub: https://github.com/unicfdlab Linkedin: https://linkedin.com/in/matvey-kraposhin-413869163 RG: https://www.researchgate.net/profile/Matvey_Kraposhin |
|
November 2, 2015, 06:17 |
|
#20 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Thank you so much. In OpenFOAM, how can I output an array to a file? I always print it to the screen, but that does not look good when the data set is very large. Thank you.
|
|
|
|