Keeping global LDU addressing using decomposePar? |
December 6, 2018, 17:35 | #1
Keeping global LDU addressing using decomposePar?
Senior Member
Klaus
Join Date: Mar 2009
Posts: 281
Rep Power: 22
Hi,
I work on a problem for which I need to maintain the global LDU addresses of the matrix coefficients in each decomposed part of the matrix. Usually OpenFOAM applies a local numbering on each processor core/MPI process when a case is decomposed with decomposePar. Is it possible to avoid the local numbering in the first place, or can the global addresses be "re-"constructed?

Klaus
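For illustration, a minimal sketch of the local addressing in question - assuming it sits inside any parallel-aware OpenFOAM utility where the usual createMesh.H/createNamedMesh.H has set up mesh, with names used only for demonstration:
Code:
// Run in parallel: each processor only sees its own piece of the mesh.
// (lduAddressing comes in via the standard fvCFD.H / fvMesh.H includes.)
const lduAddressing& addr = mesh.lduAddr();

Pout<< "local cells: " << addr.size()
    << ", local internal faces: " << addr.lowerAddr().size() << endl;

// lowerAddr()/upperAddr() hold the owner/neighbour cell of every internal
// face in local numbering (0..nCells-1 on this processor only); the
// fvMatrix coefficients diag()/upper()/lower() use these same local
// indices, so the original (undecomposed) cell numbers are not visible.
forAll(addr.lowerAddr(), facei)
{
    Pout<< "face " << facei << ": cells "
        << addr.lowerAddr()[facei] << " -> "
        << addr.upperAddr()[facei] << endl;
}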
December 30, 2018, 16:37 | #2
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Greetings Klaus,
Sorry for the late response; I still don't have a complete answer. However, what comes to mind is that you should try to get into the NUMAP-FOAM courses if the opportunity arises: https://foam-extend.fsb.hr/numap/ - or the CFD with open source courses at Chalmers: http://www.tfd.chalmers.se/~hani/kurser/OS_CFD/ Given that you seem to have been looking into this topic for some time now, perhaps you could have gotten into one of those courses already...

Moving on, here is a compilation of the threads in which you've posted related questions. I have not moved them into this one, since each was more or less on topic in its own thread; the threads are:
You probably already know this, but just in case, a few rules of thumb in this kind of development:
Now, the first problem is that working directly with the LDU matrix structure has a fairly big inconvenience: it is a decomposition in itself, which means that correlating the cells/faces back and forth with the matrix may not be easy to do. The fvMatrix class should pretty much contain the code for how it's done, but I've never studied it properly.

The second problem is preserving the order of the cells as they appear, or at least having a way to identify them. In principle, you can rely on the manual decomposition method or the simple decomposition method to forcefully decompose the mesh in the same order as the original cells, but that only works well for structured 1D and possibly 2D meshes; 3D will be a pain. Therefore, preserving the cell order across subdomains is not straightforward.

Now, what I might be able to provide are instructions for two of the three missing pieces that you are looking for:
So, the first piece: having a global cell number. This is simple enough, and I've roughly explained it at the global index for cells and faces in parallel computation - post #2 - namely, generate a field that holds the number of each original cell. It's pretty much a matter of having a utility or function object that writes a field with the cell IDs, and the following code should do the trick: Code:
#include "argList.H"
#include "timeSelector.H"
#include "Time.H"
#include "fvMesh.H"
#include "volFields.H"

using namespace Foam;

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

int main(int argc, char *argv[])
{
    timeSelector::addOptions();

    #include "addRegionOption.H"
    #include "setRootCase.H"
    #include "createTime.H"

    instantList timeDirs = timeSelector::select0(runTime, args);

    #include "createNamedMesh.H"

    forAll(timeDirs, timeI)
    {
        runTime.setTime(timeDirs[timeI], timeI);

        Info<< "Time = " << runTime.timeName() << endl;

        // Check for new mesh
        mesh.readUpdate();

        volScalarField cci
        (
            IOobject
            (
                "cellIDs",
                runTime.timeName(),
                mesh,
                IOobject::NO_READ,
                IOobject::AUTO_WRITE
            ),
            mesh.C().component(0)
        );

        forAll(cci, celli)
        {
            cci[celli] = scalar(celli);
        }

        cci.write();
    }

    Info<< "\nEnd\n" << endl;

    return 0;
}
Then you need to modify the solver application you are using so that it loads this field. You can then access said field at run time from anywhere in the code if you follow the examples given here: Accessing boundary patch dictionary within thermodynamic library - either the lookup one or the dict one... That said, the example given here: the global index for cells and faces in parallel computation - post #5 - already gives you both the IDs of the cells and how they are transferred back to the master process.

As for the second piece, transferring arrays to the master process: it's pretty much explained at How to do communication across processors - the problem is that all of the examples there are incomplete and only demonstrate the core transfer functionality (a rough sketch of this gather step is appended at the end of this post).

My apologies; in order to give a better answer I would need more time to construct the code myself and test it, and I've already spent... I'm not sure, maybe nearly an hour gathering information and figuring things out. With some luck, I'll be able to look into this sometime in the near future. In the meantime, if you are able to provide an example application with which we can do trial and error together, it would make it easier to help you here, instead of having me or anyone else write all of the example code and test it. This is mostly because there is no clear-cut way to do what you want to do, and it requires trial and error...

Best regards,
Bruno
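As a minimal sketch of the gather step mentioned above - assuming it runs inside a parallel solver or utility where mesh is available, with purely illustrative variable names - Foam::globalIndex can offset each processor's local cell range into one global numbering, and Pstream::gatherList can collect one list per processor onto the master:
Code:
#include "globalIndex.H"
#include "Pstream.H"

// Offsets each processor's local cell range into a single global numbering
globalIndex globalCells(mesh.nCells());

// Global ID of every local cell on this processor
labelList myGlobalIDs(mesh.nCells());
forAll(myGlobalIDs, celli)
{
    myGlobalIDs[celli] = globalCells.toGlobal(celli);
}

// Gather one list per processor onto the master process
List<labelList> gathered(Pstream::nProcs());
gathered[Pstream::myProcNo()] = myGlobalIDs;
Pstream::gatherList(gathered);

if (Pstream::master())
{
    forAll(gathered, proci)
    {
        Info<< "processor " << proci << " holds global cells "
            << gathered[proci] << endl;
    }
}
This is only a starting point; mapping the gathered cell IDs back onto the corresponding LDU coefficients still requires the addressing work described above.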