The global index for cells and faces in parallel computation

November 2, 2015, 06:58, #21
openfoammaofnepo (Senior Member)
In the list faceProcAddressing I found that there are some negative indices in my case. Are they for the boundary faces or for some other particular type of face?

Also, is the number of faces in faceProcAddressing equal to the total number of faces on the local processor (physical boundary faces, inter-processor faces and internal faces)? Thank you.

November 2, 2015, 10:31, #22
openfoammaofnepo (Senior Member)
The answer has been found:

Code:
For decomposed meshes there are additional files (labelIOLists) that refer back to the undecomposed mesh:
faceProcAddressing: for every face, the original face in the undecomposed mesh. It also encodes whether the face has been reversed. If procFaceI is the local face index and globalFaceI the index in the undecomposed mesh:
- faceProcAddressing[procFaceI] == 0 : not allowed
- faceProcAddressing[procFaceI] > 0 : globalFaceI = mag(faceProcAddressing[procFaceI])-1 and orientation same as undecomposed mesh
- faceProcAddressing[procFaceI] < 0 : globalFaceI = mag(faceProcAddressing[procFaceI])-1 and orientation opposite to undecomposed mesh.
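To make the sign convention concrete, here is a minimal sketch (an illustration, not code from the thread; it assumes localFaceProcAddr has been read as a labelIOList, as shown later in this thread, and that procFaceI is a valid local face index):

Code:
// decode one faceProcAddressing entry
const label entry = localFaceProcAddr[procFaceI];
const label globalFaceI = mag(entry) - 1;   // face index in the undecomposed mesh
const bool flipped = (entry < 0);           // true if the orientation is reversed

Info<< "local face " << procFaceI
    << " -> global face " << globalFaceI
    << (flipped ? " (flipped)" : "") << endl;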
So for face indices, the numbering in this file starts at 1, not 0. How about the cell indices, do they start from 0 or from 1? Thanks.

November 3, 2015, 04:30, #23
openfoammaofnepo (Senior Member)
The cell indices in cellProcAddressing start from 0, which is different from faceProcAddressing (which starts from 1). Is this correct?
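For illustration only, a minimal sketch of the difference (assuming localCellProcAddr has been read as a labelIOList, as shown later in this thread, and procCellI is a valid local cell index):

Code:
// cellProcAddressing entries are used directly: a plain 0-based global index,
// with no sign/offset encoding (unlike the face example above)
const label globalCellI = localCellProcAddr[procCellI];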

November 7, 2015, 07:16, #24
mkraposhin (Matvey Kraposhin, Senior Member, Moscow, Russian Federation)
Just for those who are still interested in how to write a given array from each MPI process to a separate file:

Code:
{
    // one file per process, e.g. cellProcAddressing0, cellProcAddressing1, ...
    const word processFileName = "cellProcAddressing" + Foam::name(Pstream::myProcNo());
    OFstream procFile(processFileName);
    procFile << cellProcAddressing << endl;
    procFile.flush(); // to be sure that the data is flushed to disk
}

November 17, 2015, 06:18, #25
openfoammaofnepo (Senior Member)
Dear Matvey,

Thank you so much. In the file "faceProcAddressing", the faces included are both internal and boundary faces, but I am not sure of their order in that file. I mean, are the internal faces listed first, followed by the indices for the boundary faces?

November 17, 2015, 07:21, #26
mkraposhin (Matvey Kraposhin, Senior Member, Moscow, Russian Federation)
Quote:
Originally Posted by openfoammaofnepo
Dear Matvey,
Are the internal faces listed first, followed by the indices for the boundary faces?

Hi, faceProcAddressing is a list of global face indices for all local internal faces, followed by the local boundary faces.

Note that some local boundary faces (on processor patches) can be internal faces of the global mesh.
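To make the ordering concrete, a minimal sketch (an illustration, not code from the thread; it assumes mesh is the local fvMesh and localFaceProcAddr is the faceProcAddressing list read as shown later in this thread):

Code:
// local faces 0 .. mesh.nInternalFaces()-1 are internal to this processor,
// the rest belong to patches (including processor patches)
label nLocalInternal = 0;
label nPatchFaces = 0;
forAll(localFaceProcAddr, procFaceI)
{
    if (procFaceI < mesh.nInternalFaces())
    {
        ++nLocalInternal;   // internal face of this processor's mesh
    }
    else
    {
        ++nPatchFaces;      // patch face; on a processor patch this may still
                            // map to an internal face of the undecomposed mesh
    }
}
Pout<< "local internal faces: " << nLocalInternal
    << ", patch faces: " << nPatchFaces << endl;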

November 17, 2015, 07:24, #27
openfoammaofnepo (Senior Member)
Thank you for your reply. It is clear to me now.

November 17, 2015, 09:57, #28
openfoammaofnepo (Senior Member)
Dear Matvey,

I have a special type of face. The mesh is generated by ICEM with multiple bodies, and between the bodies there are "interior" faces. I always export the ICEM mesh in Fluent v6 format and then convert it to OpenFOAM format with fluent3DMeshToFoam. For these interior faces, I am not sure where they belong in the global mesh and in the local processor meshes. Do you have any ideas about this? Thank you.

Quote:
Originally Posted by mkraposhin
Hi, faceProcAddressing is a list of global face indices for all local internal faces, followed by the local boundary faces.

Note that some local boundary faces (on processor patches) can be internal faces of the global mesh.

November 17, 2015, 10:24, #29
mkraposhin (Matvey Kraposhin, Senior Member, Moscow, Russian Federation)
Can you upload an example of your mesh?

November 17, 2015, 11:07, #30
openfoammaofnepo (Senior Member)
The mesh is too big to upload here.

November 17, 2015, 11:13, #31
mkraposhin (Matvey Kraposhin, Senior Member, Moscow, Russian Federation)
I mean, can you make a simple example that can be used as a tutorial, and then post it here?

November 17, 2015, 12:19, #32
openfoammaofnepo (Senior Member)
I have done some investigation. It seems that in the faceProcAddressing file the "interior" faces are counted among the boundary faces. This is special, because they are not listed in constant/polyMesh/boundary (which is reasonable, since in the global mesh they are indeed interior faces). I checked the original mesh file: unlike the physical boundaries, the interior faces have both owner and neighbour cells, whereas the physical boundary faces only have an owner cell and their neighbour cell index is set to zero. This is very confusing. I am still not sure how these faces are labelled in OpenFOAM, because they do not appear in the boundary list.

November 17, 2015, 12:38, #33
mkraposhin (Matvey Kraposhin, Senior Member, Moscow, Russian Federation)
I'm sorry, I didn't understand everything.

But as for owner/neighbour cells, the rule to distinguish them is simple:
- a cell is the owner of a face when the face normal points out of that cell
- a cell is the neighbour of a face when the face normal points into that cell

Each boundary face is connected to only one cell, and we know that the surface of the domain should have its normal pointing outward. So boundary faces have only owners.
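As a minimal sketch of how this looks in code (an illustration, not from the thread; mesh is assumed to be the local fvMesh):

Code:
// every face stores its owner cell; only internal faces also store a
// neighbour cell, and the face normal points from owner to neighbour
const labelList& own = mesh.faceOwner();
const labelList& nei = mesh.faceNeighbour();   // size = mesh.nInternalFaces()

forAll(nei, faceI)   // internal faces only
{
    const label ownerCellI = own[faceI];
    const label neighbourCellI = nei[faceI];
    // ... use ownerCellI / neighbourCellI ...
}

// boundary faces (faceI >= mesh.nInternalFaces()) have an owner but no
// neighbour: own[faceI] is valid, nei has no entry for them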

November 17, 2015, 12:47, #34
openfoammaofnepo (Senior Member)
Thanks, but in the parallel solver, does OpenFOAM treat the inter-body faces as boundary faces or interior faces? How can I extract the owner and neighbour cell indices for the inter-body faces?

November 17, 2015, 13:05, #35
mkraposhin (Matvey Kraposhin, Senior Member, Moscow, Russian Federation)
If by *body* you mean a processor, then my answer is: yes, inter-processor faces are treated as boundary faces.

November 17, 2015, 13:09, #36
openfoammaofnepo (Senior Member)
No, here "body" is an ICEM concept: it means a multi-block mesh, where different blocks (i.e. bodies) share a common surface.

November 17, 2015, 13:14, #37
mkraposhin (Matvey Kraposhin, Senior Member, Moscow, Russian Federation)
O.K.

Actually, I don't know which structure OpenFOAM uses to represent an ICEM body; that's why I asked you to upload a simple example.

November 10, 2017, 00:39, #38
vishwesh (Vishwesh Ravi Shrimali, Member)
Hi!

Though this thread is about 2 years old, it is pretty relevant to a problem that I am facing (neighboring cells in a tetrahedral mesh).

To explain my problem in detail (considering a parallel run):

1. I form a list of cell IDs with alpha > 0.3 for each processor and then merge these lists to form a main list.
2. I iterate over each element of the main list, find the neighboring cells of that element and check how many of these neighboring cells have alpha > 0. These cells then end up becoming part of a bubble.

The trouble that I have realized now is that, because I am running in parallel, I might not be getting the global cell IDs and am only stuck with the local IDs. Can anyone please help me out with this? What should I do?

Thank you in advance
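One possible way to get globally unique cell IDs at run time, shown only as a sketch (not from this thread; mesh and alpha are assumed names, and this numbering is consistent across processors but is not the undecomposed-mesh numbering given by cellProcAddressing):

Code:
#include "globalIndex.H"

// gathers the local cell counts from all processors
const globalIndex globalCells(mesh.nCells());

forAll(alpha, cellI)
{
    if (alpha[cellI] > 0.3)
    {
        const label globalCellI = globalCells.toGlobal(cellI);
        // collect globalCellI; it is unique across all processors
    }
}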

November 10, 2017, 05:08, #39
vishwesh (Vishwesh Ravi Shrimali, Member)
Quote:
Originally Posted by mkraposhin
Hi, I'm not sure that I understand your question correctly, but I will try to give more explanation of the code that I posted above.

For each MPI process (or processor) you can read addressing arrays which map the local indexing of mesh primitives to the global indexing. These arrays are located in the folder processorj/constant/polyMesh, in the following files:
Code:
  boundaryProcAddressing 
  cellProcAddressing
  faceProcAddressing
  pointProcAddressing
  • boundaryProcAddressing - each element contains the global index of a patch present on the current process; for "processor" boundaries this index is -1. The size of this array equals the number of patches in the global mesh plus the number of "processor" patches in the current processorj folder.
  • cellProcAddressing - each element contains the global index of the given local cell. The size of this array equals the number of cells on the current processor.
  • faceProcAddressing - each element contains the global index of the given local face. The size of this array equals the number of faces on the current processor.
  • pointProcAddressing - each element contains the global index of the given local point. The size of this array equals the number of points on the current processor.

You can read these arrays on each MPI process with code similar to the following:
Code:
	labelIOList localCellProcAddr
	(
	    IOobject
	    (
		"cellProcAddressing",
		mesh.facesInstance(),
		mesh.meshSubDir,
		mesh,
		IOobject::MUST_READ,
		IOobject::NO_WRITE
	    )
	);
or for faces

Code:
	labelIOList localFaceProcAddr
	(
	    IOobject
	    (
		"faceProcAddressing",
		mesh.facesInstance(),
		mesh.meshSubDir,
		mesh,
		IOobject::MUST_READ,
		IOobject::NO_WRITE
	    )
	);
Then you can store these addressing arrays in a per-process array:

Code:
processCellToGlobalAddr_[Pstream::myProcNo()] = localCellProcAddr;
The variable Pstream::myProcNo() contains the ID of the process: in the master process its value is 0, in process 1 its value is 1, and so on.
At this point the array processCellToGlobalAddr_ contains the addressing of the current process only; the addressing of the other processes is invisible. That is why, in the next step, you need to redistribute this information across the processes. The idea is simple:
1) send the addressing information from all processes to the master process (with ID 0)
2) send the gathered information from the master process back to the other processes
Code:
	//send local cell addressing to master process
	if (Pstream::master())
	{
	    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
	    {
		IPstream fromSlave(Pstream::scheduled, jSlave);
		label nSlaveCells = 0;
		fromSlave >> nSlaveCells;
		processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
		labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
		forAll(slaveCellProcAddr, iCell)
		{
		    fromSlave >> slaveCellProcAddr[iCell];
		}
	    }
	}
	else
	{
	    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
	    toMaster << localCellProcAddr.size();
	    
	    forAll(localCellProcAddr, iCell)
	    {
		toMaster << localCellProcAddr[iCell];
	    }
	}
	
	//redistribute cell addressing to slave processes
	if (Pstream::master())
	{
	    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
	    {
		OPstream toSlave (Pstream::scheduled, jSlave);
		forAll(processCellToGlobalAddr_, iProcess)
		{
		    const labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
		    const label nCells = thisProcessAddr.size();
		    toSlave << nCells;
		    forAll(thisProcessAddr, jCell)
		    {
			toSlave << thisProcessAddr[jCell];
		    }
		}
	    }
	}
	else
	{
	    IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());
	    forAll(processCellToGlobalAddr_, iProcess)
	    {
		labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
		label nCells = 0;
		fromMaster >> nCells;
		thisProcessAddr.resize(nCells);
		forAll(thisProcessAddr, jCell)
		{
		    fromMaster >> thisProcessAddr[jCell];
		}
	    }
	}
At the last step we may need to create reverse addressing - from global cell id to local process cell id (or face id, or point id):
Code:
	forAll(processCellToGlobalAddr_, jProc)
	{
	    const labelList& jProcessAddr = processCellToGlobalAddr_[jProc];
	    forAll(jProcessAddr, iCell)
	    {
		label iGlobalCell = jProcessAddr[iCell];
		globalCellToProcessAddr_[iGlobalCell] = iCell;
	    }
	}

Hi!
I was trying out the code for a problem that I am facing. I changed localMesh to mesh and removed the "_" from the variable names too. I was able to compile the solver properly, but somehow it is not working properly. I am attaching the code and the result here. I would be very grateful if you could help me out with it.

Code:
        Info<<"\n\nEntering addressing array transfer to Master processor\n\n"; 

        /*============TRANSFER CELL INDEX FROM SLAVE TO MASTER=============*\
         |The local cell index from each processor folder will be obtained |
         |from cellProcAddressing array. This data will contain the local  |
         |cell ID and global cell ID. This data will be transferred to the |
         |master processor for gathering the data.                         |
        \*================================================================*/

         // for transferring process cell ID to global cell ID
         List<List<label> > processCellToGlobalAddr;
         // for transferring global cell ID to process cell ID
         List<label> globalCellToProcessAddr;

         if (RUN_IN_PARALLEL)
         {
            processCellToGlobalAddr.resize
            (
                Pstream::nProcs()
            );
            // read local cell addressing
            labelIOList localCellProcAddr
            (
                IOobject
                (
                    "cellProcAddressing",
                    mesh.facesInstance(),
                    mesh.meshSubDir,
                    mesh,
                    IOobject::MUST_READ,
                    IOobject::NO_WRITE
                )
            );
            // store addressing arrays in different array
            processCellToGlobalAddr[Pstream::myProcNo()] = localCellProcAddr;
            // Redistribute this information across other processes as follows:
            // > send addressing information from all processors to Master Processor
            // > Gather information at master processor
            // > Send gathered information from master processor to other processors

            // send local cell addressing to master processor
            if (Pstream::master())
            {
                for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
                {
                    Pout<<"I worked first"<<endl;
                    IPstream fromSlave(Pstream::scheduled, jSlave);
                    label nSlaveCells = 0;
                    fromSlave >> nSlaveCells;
                    processCellToGlobalAddr[jSlave].resize(nSlaveCells);
                    labelList& slaveCellProcAddr = processCellToGlobalAddr[jSlave];
                    forAll(slaveCellProcAddr, iCell)
                    {
                        fromSlave >> slaveCellProcAddr[iCell];
                    }
//                    Pout<<"slave: "<<jSlave<<"\nslaveCellProcAddr:\n"<<slaveCellProcAddr<<"\nprocessCellToGlobalAddr:\n"<<processCellToGlobalAddr<<endl;
                }
            }
            else
            {
                Pout<<"I worked"<<endl;
                OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
                toMaster << localCellProcAddr.size();

                forAll(localCellProcAddr, iCell)
                {
                    toMaster << localCellProcAddr[iCell];
                }
//                Pout<<"localCellProcAddr:\n"<<localCellProcAddr<<endl;
            }

            Info<<"\nInformation transferred to master processor.\nBeginning cell addressing redistribution to slave processes.\n\n";

            // redistribute cell addressing to slave processes
            if (Pstream::master())
            {
                for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
                {
                    Pout<<"I worked too"<<endl;
                    OPstream toSlave (Pstream::scheduled, jSlave);
                    forAll(processCellToGlobalAddr, iProcess)
                    {
                        const labelList& thisProcessAddr = processCellToGlobalAddr[iProcess];
                        const label nCells = thisProcessAddr.size();
                        toSlave << nCells;
                        forAll(thisProcessAddr, jCell)
                        {
                            toSlave << thisProcessAddr[jCell];
                        }
                    }
                }
            }
            else
            {
                Pout<<"I worked second"<<endl;
                IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());
                forAll(processCellToGlobalAddr, iProcess)
                {
                    labelList& thisProcessAddr = processCellToGlobalAddr[iProcess];
                    label nCells = 0;
                    fromMaster >> nCells;
                    thisProcessAddr.resize(nCells);
                    forAll(thisProcessAddr, jCell)
                    {
                        fromMaster >> thisProcessAddr[jCell];
                    }
//                    Pout<<"thisProcessAddr:\n"<<thisProcessAddr<<endl;
                }
//                Pout<<"thisProcessAddr:\n"<<thisProcessAddr<<endl;
            }

            // reverse addressing:- from global cell ID to local process cell ID

            Info<<"\nInformation transferred to slave processes.\nBeginning conversion of global cell ID to local process cell ID.\n\n";

            forAll(processCellToGlobalAddr, jProc)
            {
                const labelList& jProcessAddr = processCellToGlobalAddr[jProc];
                forAll(jProcessAddr, iCell)
                {
                    label iGlobalCell = jProcessAddr[iCell];
                    globalCellToProcessAddr[iGlobalCell] = iCell;
                }
            }
//            Pout<<"globalCellToProcessAddr:\n"<<globalCellToProcessAddr<<endl;
            Info<<"\nReverse addressing complete.\n\n";
        }
Here is the result:

Code:
Entering addressing array transfer to Master processor

[12] I worked
[24] I worked
[29] I worked
[0] I worked first
[6] I worked
[10] I worked
[2] I worked
[18] I worked
[27] I worked
[21] I worked
[31] I worked
[1] I worked
[9] I worked
[3] I worked
[1] I worked second
[15] I worked
[0] I worked first
[2] I worked second
[26] I worked
[3] I worked second
[16] I worked
[0] I worked first
[0] I worked first
[4] I worked
[13] I worked
[4] I worked second
[28] I worked
[0] I worked first
[19] I worked
[22] I worked
[20] I worked
[25] I worked
[11] I worked
[30] I worked
[5] I worked
[5] I worked second
[6] I worked second
[17] I worked
[0] I worked first
[14] I worked
[0] I worked first
[23] I worked
[7] I worked
[7] I worked second
[0] I worked first
[8] I worked
[8] I worked second
[10] I worked second
[9] I worked second
[0] I worked first
[0] I worked first
[12] I worked second
[11] I worked second
[0] I worked first
[0] I worked first
[13] I worked second
[14] I worked second
[0] I worked first
[0] I worked first
[0] I worked first
[0] I worked first
[15] I worked second
[16] I worked second
[0] I worked first
[0] I worked first
[17] I worked second
[18] I worked second
[0] I worked first
[0] I worked first
[19] I worked second
[20] I worked second
[0] I worked first
[0] I worked first
[21] I worked second
[22] I worked second
[0] I worked first
[23] I worked second
[0] I worked first
[24] I worked second
[0] I worked first
[25] I worked second
[0] I worked first
[26] I worked second
[0] I worked first
[27] I worked second
[0] I worked first
[28] I worked second
[0] I worked first
[29] I worked second
[31] I worked second
[30] I worked second
[0] I worked first
[0] I worked first

Information transferred to master processor.
Beginning cell addressing redistribution to slave processes.

[0] I worked too
[0] I worked too
[1] #0  Foam::error::printStack(Foam::Ostream&)
[1] #1  Foam::sigSegv::sigHandler(int)
[1] #2  ? at sigaction.c:0
[1] #3  main
[1] #4  __libc_start_main
[1] #5  ?
[cgscd000ggb3t92:35078] *** Process received signal ***
[cgscd000ggb3t92:35078] Signal: Segmentation fault (11)
[cgscd000ggb3t92:35078] Signal code:  (-6)
[cgscd000ggb3t92:35078] Failing at address: 0x11e7d00008906
[cgscd000ggb3t92:35078] [ 0] /lib64/libc.so.6[0x3eb7c32510]
[cgscd000ggb3t92:35078] [ 1] /lib64/libc.so.6(gsignal+0x35)[0x3eb7c32495]
[cgscd000ggb3t92:35078] [ 2] /lib64/libc.so.6[0x3eb7c32510]
[cgscd000ggb3t92:35078] [ 3] myLPTVOF_RP_test(main+0x22e7)[0x448bf7]
[cgscd000ggb3t92:35078] [ 4] /lib64/libc.so.6(__libc_start_main+0xfd)[0x3eb7c1ed1d]
[cgscd000ggb3t92:35078] [ 5] myLPTVOF_RP_test[0x446439]
[cgscd000ggb3t92:35078] *** End of error message ***
(shortened: ranks 2 to 6 print the same stack trace, processes 35079, 35080, 35081 and 35083 report the same segmentation fault, and further "[0] I worked too" lines from the master are interleaved throughout the original log)

Thanks in advance
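A hedged observation on the crash above, offered as an assumption rather than a confirmed diagnosis: in the posted code, globalCellToProcessAddr is declared as an empty List<label> and is never resized, so the reverse-addressing loop writes globalCellToProcessAddr[iGlobalCell] past the end of a zero-length list, which would be consistent with the segmentation fault in main. A minimal sketch of the kind of sizing that would be needed before that loop (names taken from the posted code):

Code:
// sketch only: size the reverse map before filling it; -1 marks
// "global cell not present on this processor"
label nGlobalCells = 0;
forAll(processCellToGlobalAddr, jProc)
{
    nGlobalCells += processCellToGlobalAddr[jProc].size();
}
globalCellToProcessAddr.setSize(nGlobalCells, -1);
// ... the existing reverse-addressing loop can then run without
//     writing out of bounds ...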

April 24, 2020, 16:15, #40: localToGlobal cell mapping in a dynamic mesh refinement case
mehrabadi (New Member)
Quote:
Originally Posted by mkraposhin
You can read the cellProcAddressing arrays from the processor0 ... processorN folders.

See the example below:


Code:
List<List<label> > processCellToGlobalAddr_;
List<label> globalCellToProcessAddr_;

    if (Pstream::parRun())
    {
	processCellToGlobalAddr_.resize
	(
	    Pstream::nProcs()
	);
        
	//read local cell addressing
	labelIOList localCellProcAddr
	(
	    IOobject
	    (
		"cellProcAddressing",
		localMesh.facesInstance(),
		localMesh.meshSubDir,
		localMesh,
		IOobject::MUST_READ,
		IOobject::NO_WRITE
	    )
	);
	
	processCellToGlobalAddr_[Pstream::myProcNo()] = localCellProcAddr;
	
	//send local cell addressing to master process
	if (Pstream::master())
	{
	    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
	    {
		IPstream fromSlave(Pstream::scheduled, jSlave);
		label nSlaveCells = 0;
		fromSlave >> nSlaveCells;
		processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
		labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
		forAll(slaveCellProcAddr, iCell)
		{
		    fromSlave >> slaveCellProcAddr[iCell];
		}
	    }
	}
	else
	{
	    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
	    toMaster << localCellProcAddr.size();
	    
	    forAll(localCellProcAddr, iCell)
	    {
		toMaster << localCellProcAddr[iCell];
	    }
	}
	
	//redistribute cell addressing to slave processes
	if (Pstream::master())
	{
	    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
	    {
		OPstream toSlave (Pstream::scheduled, jSlave);
		forAll(processCellToGlobalAddr_, iProcess)
		{
		    const labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
		    const label nCells = thisProcessAddr.size();
		    toSlave << nCells;
		    forAll(thisProcessAddr, jCell)
		    {
			toSlave << thisProcessAddr[jCell];
		    }
		}
	    }
	}
	else
	{
	    IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());
	    forAll(processCellToGlobalAddr_, iProcess)
	    {
		labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
		label nCells = 0;
		fromMaster >> nCells;
		thisProcessAddr.resize(nCells);
		forAll(thisProcessAddr, jCell)
		{
		    fromMaster >> thisProcessAddr[jCell];
		}
	    }
	}

	forAll(processCellToGlobalAddr_, jProc)
	{
	    const labelList& jProcessAddr = processCellToGlobalAddr_[jProc];
	    forAll(jProcessAddr, iCell)
	    {
		label iGlobalCell = jProcessAddr[iCell];
		globalCellToProcessAddr_[iGlobalCell] = iCell;
	    }
	}
    }

This approach is very insightful, thank you for sharing. I understand that the initial local-to-global mapping is read from disk and then distributed among all processors. My question is what happens when there is mesh refinement at run time, where the cell connectivity and numbering change locally. With the number of cells and the connectivity potentially changing in an adaptive mesh refinement case, do you have any suggestions on how to update the "globalCellToProcessAddr_" array?
