|
The global index for cells and faces in parallel computation |
|
November 2, 2015, 06:58 |
|
#21 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
In the faceProcAddressing list I found that there are some negative indices in my case. Are they for the boundary faces or for some other particular face type?
Also, is the number of entries in faceProcAddressing equal to the total number of faces in the local processor mesh (physical boundary faces, inter-processor faces and internal faces)? Thank you. |
|
November 2, 2015, 10:31 |
|
#22 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
The answer has been found:
Code:
For decomposed meshes there are additional files (labelIOLists) that refer
back to the undecomposed mesh: faceProcAddressing gives, for every face,
the original face in the undecomposed mesh. It also encodes whether the
face has been reversed. If procFaceI is the local face index and
globalFaceI the index in the undecomposed mesh:
- faceProcAddressing[procFaceI] == 0 : not allowed
- faceProcAddressing[procFaceI] >  0 : globalFaceI = mag(faceProcAddressing[procFaceI])-1
  and orientation same as undecomposed mesh
- faceProcAddressing[procFaceI] <  0 : globalFaceI = mag(faceProcAddressing[procFaceI])-1
  and orientation opposite to undecomposed mesh. |
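For example, these rules translate into code like this (a minimal sketch; faceProcAddressing is assumed to be already read in as a labelList, and globalFace/flipMap are just illustrative names):
Code:
labelList globalFace(faceProcAddressing.size());
boolList  flipMap(faceProcAddressing.size());   // true = orientation reversed

forAll(faceProcAddressing, procFaceI)
{
    const label entry = faceProcAddressing[procFaceI];

    // entry == 0 is not allowed by construction
    globalFace[procFaceI] = mag(entry) - 1; // back to a 0-based global face index
    flipMap[procFaceI]    = (entry < 0);    // negative entry = reversed face
}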
|
November 3, 2015, 04:30 |
|
#23 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
The cell indices in cellProcAddressing start from 0, which is different from faceProcAddressing (whose entries start from 1, with the sign encoding the orientation). Is this correct?
|
|
November 7, 2015, 07:16 |
|
#24 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
Just for those who are still interested in how to write a given array on each MPI process to a separate file:
Code:
{
    const word processFileName =
        "cellProcAddressing" + Foam::name(Pstream::myProcNo());

    OFstream procFile(processFileName);

    procFile << cellProcAddressing << endl;
    procFile.flush(); // to be sure that data is flushed to disk
} |
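One note on the snippet above: because the file name is suffixed with Pstream::myProcNo(), every rank writes to its own file (cellProcAddressing0, cellProcAddressing1, ...), so there is no write contention between processes. The explicit flush() is defensive; the OFstream is also flushed when it goes out of scope at the closing brace.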
__________________
MDPI Fluids (Q2) special issue for OSS software: https://www.mdpi.com/journal/fluids/..._modelling_OSS GitHub: https://github.com/unicfdlab Linkedin: https://linkedin.com/in/matvey-kraposhin-413869163 RG: https://www.researchgate.net/profile/Matvey_Kraposhin |
|
November 17, 2015, 06:18 |
|
#25 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Dear Matvey,
Thank you so much. The file "faceProcAddressing" includes both internal and boundary faces, but I am not sure of their order in that file. I mean, are the internal faces listed first, followed by the indices for the boundary faces? |
|
November 17, 2015, 07:21 |
|
#26 | |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
Quote:
faceProcAddressing is a list of global face indices for all local processor faces: the internal faces come first, followed by the local processor boundary faces. Note that some local boundary faces (those on processor patches) correspond to internal faces of the global mesh.
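For example, the two ranges can be processed like this (a minimal sketch; mesh is the local fvMesh and faceProcAddressing is read as in the earlier posts):
Code:
// internal faces occupy indices [0, mesh.nInternalFaces())
for (label faceI = 0; faceI < mesh.nInternalFaces(); faceI++)
{
    const label globalFaceI = mag(faceProcAddressing[faceI]) - 1;
    // ... handle internal face ...
}

// boundary faces (including processor patches) follow
for (label faceI = mesh.nInternalFaces(); faceI < mesh.nFaces(); faceI++)
{
    const label globalFaceI = mag(faceProcAddressing[faceI]) - 1;
    // ... handle boundary face; on a processor patch this may map back
    //     to an internal face of the undecomposed mesh ...
}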
||
November 17, 2015, 07:24 |
|
#27 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Thank you for your reply. It is clear to me now.
|
|
November 17, 2015, 09:57 |
|
#28 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Dear Matvey,
I have a special type of face. The mesh is generated in ICEM with multiple bodies, and between bodies there are "interior" faces. I always export the ICEM mesh in Fluent V6 format and then convert it to OpenFOAM format with fluent3DMeshToFoam. For these interior faces, I am not sure where they belong in the global mesh face set and in the local processor face set. Do you have any ideas about this? Thank you. |
|
November 17, 2015, 10:24 |
|
#29 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
Can you upload an example of your mesh?
|
November 17, 2015, 11:07 |
|
#30 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
The mesh is too big to upload here...
|
|
November 17, 2015, 11:13 |
|
#31 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
I mean, can you make a simple example that can be used as a tutorial, and then upload it here?
|
November 17, 2015, 12:19 |
|
#32 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
I have made some investigations. It seems that in the faceProcAddressing file the "interior" faces are counted among the boundary faces. But this is special, since they are not listed in constant/polyMesh/boundary (which is reasonable, since for the global mesh they are indeed interior faces). Checking the original Fluent file, they are not like the other physical boundaries: the interior faces have both an owner and a neighbour cell, whereas the physical boundary faces have only an owner cell, with the neighbour cell index set to zero. This is very confusing. I am still not sure how these faces are labelled in OpenFOAM, because they do not appear in the boundary list.
|
|
November 17, 2015, 12:38 |
|
#33 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
I'm sorry, I didn't understand everything.
But as for owner/neighbour cells, the rule is simple:
- a cell is called the owner of a face when the face normal points out of that cell
- a cell is called the neighbour of a face when the face normal points into that cell
Each boundary face is connected to only one cell, and the surface of the domain must have its normals pointing outward. So all boundary faces have only owner cells.
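This can be checked directly with the standard mesh accessors, for example:
Code:
const labelList& own = mesh.faceOwner();
const labelList& nei = mesh.faceNeighbour();    // sized for internal faces only

forAll(own, faceI)
{
    const label ownerCell = own[faceI];         // always defined

    if (mesh.isInternalFace(faceI))
    {
        const label neighbourCell = nei[faceI]; // internal faces only
        // the face normal points from ownerCell towards neighbourCell
    }
    else
    {
        // boundary face: only an owner cell, normal points out of the domain
    }
}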
|
November 17, 2015, 12:47 |
|
#34 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
Thanks, but in the parallel solver, does OpenFOAM treat the inter-body faces as boundary or interior faces? And how can I extract the owner and neighbour cell indices for the inter-body faces?
|
|
November 17, 2015, 13:05 |
|
#35 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
If by *body* you mean a processor, then my answer is: yes, inter-processor faces are treated as boundary faces.
|
November 17, 2015, 13:09 |
|
#36 |
Senior Member
Join Date: Jan 2013
Posts: 372
Rep Power: 14 |
No, here "body" is an ICEM concept: it means a multi-block mesh in which different blocks (i.e. bodies) share a common surface.
|
|
November 17, 2015, 13:14 |
|
#37 |
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21 |
O.K.,
Actually, I don't know which structure OpenFOAM uses to represent an ICEM body; that's why I asked you to upload a simple example.
|
November 10, 2017, 00:39 |
|
#38 |
Member
Vishwesh Ravi Shrimali
Join Date: Aug 2017
Posts: 43
Rep Power: 9 |
Hi!
Though this thread is about 2 years old, it is pretty relevant to a problem that I am facing (Neighboring cells in tetrahedral mesh). To explain my problem in detail (considering a parallel run):
1. I form a list of cell IDs with alpha > 0.3 for each processor and then merge these lists to form a main list.
2. I iterate over each element of the main list, find the neighbouring cells of that element, and check how many of these neighbouring cells have alpha > 0. These cells then become part of a bubble.
The trouble, I have now realized, is that because I am running in parallel I might not be getting the global cell IDs and am stuck with only the local IDs. Can anyone please help me out with this? What should I do?
Thank you in advance |
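One idea I am considering is OpenFOAM's globalIndex class, which numbers cells uniquely across ranks without reading the addressing files (a minimal sketch; I am not sure this is the right approach, and myBubbleCells is just an illustrative name):
Code:
#include "globalIndex.H"

// offset-based numbering: rank 0 owns [0, n0), rank 1 owns [n0, n0+n1), ...
const globalIndex globalCells(mesh.nCells());

DynamicList<label> myBubbleCells;
forAll(alpha, cellI)
{
    if (alpha[cellI] > 0.3)
    {
        // convert the local cell ID into a globally unique one
        myBubbleCells.append(globalCells.toGlobal(cellI));
    }
}

// merge the per-rank lists into the main list on every processor
List<labelList> gathered(Pstream::nProcs());
gathered[Pstream::myProcNo()] = myBubbleCells;
Pstream::gatherList(gathered);
Pstream::scatterList(gathered);
// gathered[procI] now holds the global cell IDs found on rank procI
Note that this numbering is offset-based, so it differs from the undecomposed-mesh numbering in cellProcAddressing; it is only guaranteed to be unique and consistent across ranks.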
|
November 10, 2017, 05:08 |
|
#39 | |
Member
Vishwesh Ravi Shrimali
Join Date: Aug 2017
Posts: 43
Rep Power: 9 |
Quote:
Hi! I was trying out the code for a problem that I am facing. I changed localmesh to mesh and removed the "_" from the variable names too. I was able to compile the solver properly, but somehow it is not working. I am attaching the code and the result here. I would be very grateful if you could help me out with it. Code:
Info<<"\n\nEntering addressing array transfer to Master processor\n\n"; /*============TRANSFER CELL INDEX FROM SLAVE TO MASTER=============*\ |The local cell index from each processor folder will be obtained | |from cellProcAddressing array. This data will contain the local | |cell ID and global cell ID. This data will be transferred to the | |master processor for gathering the data. | \*================================================================*/ // for transferring process cell ID to global cell ID List<List<label> > processCellToGlobalAddr; // for transferring global cell ID to process cell ID List<label> globalCellToProcessAddr; if (RUN_IN_PARALLEL) { processCellToGlobalAddr.resize ( Pstream::nProcs() ); // read local cell addressing labelIOList localCellProcAddr ( IOobject ( "cellProcAddressing", mesh.facesInstance(), mesh.meshSubDir, mesh, IOobject::MUST_READ, IOobject::NO_WRITE ) ); // store addressing arrays in different array processCellToGlobalAddr[Pstream::myProcNo()] = localCellProcAddr; // Redistribute this information across other processes as follows: // > send addressing information from all processors to Master Processor // > Gather information at master processor // > Send gathered information from master processor to other processors // send local cell addressing to master processor if (Pstream::master()) { for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++) { Pout<<"I worked first"<<endl; IPstream fromSlave(Pstream::scheduled, jSlave); label nSlaveCells = 0; fromSlave >> nSlaveCells; processCellToGlobalAddr[jSlave].resize(nSlaveCells); labelList& slaveCellProcAddr = processCellToGlobalAddr[jSlave]; forAll(slaveCellProcAddr, iCell) { fromSlave >> slaveCellProcAddr[iCell]; } // Pout<<"slave: "<<jSlave<<"\nslaveCellProcAddr:\n"<<slaveCellProcAddr<<"\nprocessCellToGlobalAddr:\n"<<processCellToGlobalAddr<<endl; } } else { Pout<<"I worked"<<endl; OPstream toMaster (Pstream::scheduled, Pstream::masterNo()); toMaster << localCellProcAddr.size(); forAll(localCellProcAddr, iCell) { toMaster << localCellProcAddr[iCell]; } // Pout<<"localCellProcAddr:\n"<<localCellProcAddr<<endl; } Info<<"\nInformation transferred to master processor.\nBeginning cell addressing redistribution to slave processes.\n\n"; // redistribute cell addressing to slave processes if (Pstream::master()) { for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++) { Pout<<"I worked too"<<endl; OPstream toSlave (Pstream::scheduled, jSlave); forAll(processCellToGlobalAddr, iProcess) { const labelList& thisProcessAddr = processCellToGlobalAddr[iProcess]; const label nCells = thisProcessAddr.size(); toSlave << nCells; forAll(thisProcessAddr, jCell) { toSlave << thisProcessAddr[jCell]; } } } } else { Pout<<"I worked second"<<endl; IPstream fromMaster(Pstream::scheduled, Pstream::masterNo()); forAll(processCellToGlobalAddr, iProcess) { labelList& thisProcessAddr = processCellToGlobalAddr[iProcess]; label nCells = 0; fromMaster >> nCells; thisProcessAddr.resize(nCells); forAll(thisProcessAddr, jCell) { fromMaster >> thisProcessAddr[jCell]; } // Pout<<"thisProcessAddr:\n"<<thisProcessAddr<<endl; } // Pout<<"thisProcessAddr:\n"<<thisProcessAddr<<endl; } // reverse addressing:- from global cell ID to local process cell ID Info<<"\nInformation transferred to slave processes.\nBeginning conversion of global cell ID to local process cell ID.\n\n"; forAll(processCellToGlobalAddr, jProc) { const labelList& jProcessAddr = processCellToGlobalAddr[jProc]; 
forAll(jProcessAddr, iCell) { label iGlobalCell = jProcessAddr[iCell]; globalCellToProcessAddr[iGlobalCell] = iCell; } } // Pout<<"globalCellToProcessAddr:\n"<<globalCellToProcessAddr<<endl; Info<<"\nReverse addressing complete.\n\n"; } Code:
Entering addressing array transfer to Master processor
[12] I worked
[24] I worked
[29] I worked
[0] I worked first
[1] I worked second
... (the remaining slave ranks print "I worked" / "I worked second" and the
     master prints "I worked first" once per slave; 32 ranks in total,
     de-interleaved and truncated here for readability) ...
Information transferred to master processor.
Beginning cell addressing redistribution to slave processes.
[0] I worked too
[1] #0  Foam::error::printStack(Foam::Ostream&) in "/share/OpenFOAM/OpenFOAM-3.0.x/platforms/linux64GccDPInt64Opt/lib/libOpenFOAM.so"
[1] #1  Foam::sigSegv::sigHandler(int) in "/share/OpenFOAM/OpenFOAM-3.0.x/platforms/linux64GccDPInt64Opt/lib/libOpenFOAM.so"
[1] #2  ? at sigaction.c:0
[1] #3  main in "/home/vshrima/OpenFOAM/vshrima-3.0.x/platforms/linux64GccDPInt64Opt/bin/myLPTVOF_RP_test"
[1] #4  __libc_start_main in "/lib64/libc.so.6"
[1] #5  ? in "/home/vshrima/OpenFOAM/vshrima-3.0.x/platforms/linux64GccDPInt64Opt/bin/myLPTVOF_RP_test"
[cgscd000ggb3t92:35078] *** Process received signal ***
[cgscd000ggb3t92:35078] Signal: Segmentation fault (11)
[cgscd000ggb3t92:35078] Signal code:  (-6)
[cgscd000ggb3t92:35078] Failing at address: 0x11e7d00008906
[cgscd000ggb3t92:35078] [ 0] /lib64/libc.so.6[0x3eb7c32510]
[cgscd000ggb3t92:35078] [ 1] /lib64/libc.so.6(gsignal+0x35)[0x3eb7c32495]
[cgscd000ggb3t92:35078] [ 2] /lib64/libc.so.6[0x3eb7c32510]
[cgscd000ggb3t92:35078] [ 3] myLPTVOF_RP_test(main+0x22e7)[0x448bf7]
[cgscd000ggb3t92:35078] [ 4] /lib64/libc.so.6(__libc_start_main+0xfd)[0x3eb7c1ed1d]
[cgscd000ggb3t92:35078] [ 5] myLPTVOF_RP_test[0x446439]
[cgscd000ggb3t92:35078] *** End of error message ***
... (equivalent segmentation-fault traces from the other slave ranks omitted) ...
Thanks in advance |
||
April 24, 2020, 16:15 |
localToGlobal cell mapping in a dynamic mesh refinement case
|
#40 | |
New Member
Join Date: Jul 2019
Posts: 3
Rep Power: 7 |
Quote:
This approach is very insightful; thank you for sharing. I understand that the initial local-to-global mapping is read from disk and then distributed among all processors. My question is: what happens when there is mesh refinement at run time, where cell connectivity and numbering change locally? With the number of cells and the connectivity potentially changing in an adaptive mesh refinement case, do you have suggestions on how to update the "globalCellToProcessAddr_" array? |
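One thought, though this is just my assumption and not something confirmed in this thread: the processor*/constant/polyMesh addressing files describe only the initial decomposition, so after run-time refinement they no longer match the live mesh. A possible alternative is to rebuild a consistent numbering with globalIndex after every topology change, e.g. (a minimal sketch):
Code:
#include "globalIndex.H"

// rebuild after every topology change, e.g. right after mesh.update():
const globalIndex globalCells(mesh.nCells());

// local -> global (unique across all ranks):
//   label globalCellI = globalCells.toGlobal(localCellI);
// global -> owning rank and local index:
//   label procI      = globalCells.whichProcID(globalCellI);
//   label localCellI = globalCells.toLocal(procI, globalCellI);
This offset-based numbering is not the undecomposed-mesh numbering from cellProcAddressing, but it stays valid as cells appear and disappear during refinement.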
||
|
|