Parallel computation and simultaneous post processing
December 25, 2010, 16:26
Parallel computation and simultaneous post processing
#1
New Member
Armin Gh.
Join Date: Sep 2010
Location: Aachen, Germany
Posts: 29
Rep Power: 16
Hi FOAMers,

I am running interFoam in parallel on many processors. At every time step, the position of the phase interface is sampled at a predefined location and written to a file (via a small header file that gets included in interFoam.C). The problem is that after the computation I end up with as many files as there are processors, and only one of them actually contains the data I need; all the others are empty. Is there a way to write the data to just one file, created by the processor that actually contains the sampling location? This matters because I will be running the case on a thousand processors.

Thanks in advance for your posts,
Armin
December 25, 2010, 19:38
#2
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Greetings Armin,

If I understood you correctly, you only need the master process (the one responsible for processor0) to write data to a file, is that it? If so, check the parallelTest utility available in the folder "$FOAM_APP/test/parallel". That utility shows how to tell a solver/application which process should perform a given task, namely the master or the slave processes.

Best regards and good luck!
Bruno
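Edit: the core idiom is to guard the writing code so that only the master executes it. A minimal sketch (the file name and header written here are just examples, not from parallelTest): Code:

// OFstream comes from "OFstream.H"; Pstream::master() is a shorthand
// for (Pstream::myProcNo() == Pstream::masterNo())
if (Pstream::parRun() && Pstream::master())
{
    // only the master process (processor0) ever enters this block,
    // so exactly one file is created for the whole parallel run
    OFstream os("interfaceHeight.dat");   // example file name
    os << "time\theight" << nl;
}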
__________________
December 26, 2010, 06:08
#3
New Member
Armin Gh.
Join Date: Sep 2010
Location: Aachen, Germany
Posts: 29
Rep Power: 16
Hi Bruno,

Thanks for your reply. I am not sure, however, what you mean; the following is the parallelTest, am I right? I don't see anything in it about assigning certain tasks to certain processors, or maybe this is just way over my head programming-wise. Can you make it a little clearer? Code:

Perr<< "\nStarting transfers\n" << endl;

vector data(0, 1, 2);

if (Pstream::parRun())
{
    if (Pstream::myProcNo() != Pstream::masterNo())
    {
        {
            Perr<< "slave sending to master "
                << Pstream::masterNo() << endl;
            OPstream toMaster(Pstream::scheduled, Pstream::masterNo());
            toMaster << data;
        }

        Perr<< "slave receiving from master "
            << Pstream::masterNo() << endl;
        IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());
        fromMaster >> data;
        Perr<< data << endl;
    }
    else
    {
        for
        (
            int slave=Pstream::firstSlave();
            slave<=Pstream::lastSlave();
            slave++
        )
        {
            Perr<< "master receiving from slave " << slave << endl;
            IPstream fromSlave(Pstream::scheduled, slave);
            fromSlave >> data;
            Perr<< data << endl;
        }

        for
        (
            int slave=Pstream::firstSlave();
            slave<=Pstream::lastSlave();
            slave++
        )
        {
            Perr<< "master sending to slave " << slave << endl;
            OPstream toSlave(Pstream::scheduled, slave);
            toSlave << data;
        }
    }
}

Info<< "End\n" << endl;

return 0;
December 26, 2010, 08:39
#4
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Hi Armin,

It's easy! It's right there near the start of the code you pasted! Code:

if (Pstream::parRun())   // first make sure you are running in parallel
{
    if (Pstream::myProcNo() == Pstream::masterNo())
    {
        // insert the master-process code inside this block
    }
}

Bruno
__________________
December 26, 2010, 09:20
#5
New Member
Armin Gh.
Join Date: Sep 2010
Location: Aachen, Germany
Posts: 29
Rep Power: 16
Hi Bruno,

Thanks for your prompt answer. There are a couple of things, though, that I do not get. First of all, how do I know which processor is actually the master and which ones are the slaves? (I am running on a cluster with a hundred cores.) And even if I did know, the program still cannot know where the sampling location is. So I probably didn't express myself correctly in my former post. Say I have a channel with a two-phase flow in it, which gets partitioned horizontally and assigned to, say, 10 processors. I evaluate the position of the interface at a predefined location, which lies on exactly one of those processors. At the moment every processor opens a file, but only the processor containing that location actually writes the interface position; the other ones just create new, empty files. I want to avoid these empty files. Or is the master processor the one that actually writes the data? If so, how does OpenFOAM know that?

Thanks for bearing with me,
Armin
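Edit: in case it is relevant, I figured each processor can at least test for itself whether the predefined location lies inside its piece of the decomposed mesh, something like this (the coordinates here are made up): Code:

// findCell() returns -1 on every processor whose piece of the
// decomposed mesh does not contain the sampling point
const point samplePt(0.1, 0.05, 0.0);    // hypothetical location
const label cellI = mesh.findCell(samplePt);

if (cellI != -1)
{
    // only the processor that owns the location gets here
}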
December 26, 2010, 10:23
#6
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Hi Armin,

Ah, now I get it... that's why you weren't looking for the master process. I think the master process is usually allocated on the first host/core/CPU given to mpirun. OK, so how exactly is the data saved? Did you base the code on an already existing solver or example, or did you create the modification yourself? The other possibility is to do the reverse: when all processes are done saving to the time snapshot and file, each of them checks whether its file is empty or not; if it is empty, it erases the file. It's not very efficient, but if you used some internal OpenFOAM function that does things on its own, then this would be the quickest solution.

Best regards,
Bruno
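Edit: something along these lines should do that final check, assuming each process wrote a file named after its own rank (exists(), fileSize() and rm() come from OSspecific.H): Code:

// after writing: each processor checks its own output file and
// removes it again if nothing was ever written into it
fileName myFile("interface" + Foam::name(Pstream::myProcNo()));

if (exists(myFile) && fileSize(myFile) == 0)
{
    rm(myFile);   // drop the empty per-processor file
}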
__________________
December 26, 2010, 12:47
#7
New Member
Armin Gh.
Join Date: Sep 2010
Location: Aachen, Germany
Posts: 29
Rep Power: 16
Hi again,

and thanks again for your support. The code is actually the same as the interFoam solver, except for an additional header file, which does the calculations, finds the interface and then does the following (after some lines that read the necessary data from the dictionary files): Code:

std::stringstream ostr;
int proNumb = Pstream::myProcNo();
ostr << proNumb;
std::string s = "interface" + ostr.str();
const char* DataName = s.c_str();
ofstream myfileHF(DataName, ios_base::out | ios_base::app);

Then come some calculations that assign the interface position to a variable named height, and finally: Code:

myfileHF << height.value() << "\t";

And well... your fast solution would not work, because the files produced by the other processors are not actually empty: they all share a common header, which I write in order to differentiate between the data sets later. So, in terms of the example from my former post, I would end up with 9 files containing just the header and 1 containing the header plus the data.
December 26, 2010, 16:10
#8
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Hi Armin,
Then why don't you send the procNo and the height from the slaves to the master process, and have the master save all of the collected data into a single file? The parallelTest.C code shows you exactly how you can do that transfer, since it sends the contents of the data variable between master and slaves! Then use the other block of code I posted, to have only the master open and save the file with the data! It's so easy! The way I see it, you can have all slaves send their value to the master with:
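(a sketch following the parallelTest.C pattern; height is the variable from your own snippet) Code:

if (Pstream::parRun() && Pstream::myProcNo() != Pstream::masterNo())
{
    // every slave sends its height value to the master
    OPstream toMaster(Pstream::scheduled, Pstream::masterNo());
    toMaster << height.value();
}

and then have the master collect the values with the matching receiving loop from parallelTest.C, writing them into the single file as they arrive.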
Best regards and good luck! Bruno
__________________
December 27, 2010, 15:26
#9
New Member
Armin Gh.
Join Date: Sep 2010
Location: Aachen, Germany
Posts: 29
Rep Power: 16
Hi Bruno,

Sorry for the late reply. I tried out your instructions and got a runtime error: Code:

MULES: Solving for alpha1
Liquid phase volume fraction = 0.195192  Min(alpha1) = 0  Max(alpha1) = 1
MULES: Solving for alpha1
Liquid phase volume fraction = 0.195192  Min(alpha1) = 0  Max(alpha1) = 1
MULES: Solving for alpha1
Liquid phase volume fraction = 0.195192  Min(alpha1) = 0  Max(alpha1) = 1
MULES: Solving for alpha1
Liquid phase volume fraction = 0.195192  Min(alpha1) = 0  Max(alpha1) = 1
DICPCG: Solving for p, Initial residual = 1, Final residual = 0.0434658, No Iterations 2
DICPCG: Solving for p, Initial residual = 0.0252118, Final residual = 0.00105264, No Iterations 22
DICPCG: Solving for p, Initial residual = 0.00483827, Final residual = 9.51204e-08, No Iterations 163
time step continuity errors : sum local = 1.43681e-11, global = 9.70906e-13, cumulative = 9.70906e-13
Write Sample
[2] slave sending to master 0
ExecutionTime = 2.57 s  ClockTime = 3 s
[fix:9186] *** An error occurred in MPI_Recv
[fix:9186] *** on communicator MPI_COMM_WORLD
[fix:9186] *** MPI_ERR_TRUNCATE: message truncated
[fix:9186] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

As you can see, something is wrong with my sending process; I'm guessing the buffer size is too small for the data transfer, since it's in a loop. BTW, I used the following to do the transfer: Code:

if (Pstream::parRun() && Pstream::myProcNo() != Pstream::masterNo())
{
    Perr<< "slave sending to master " << Pstream::masterNo() << endl;
    OPstream toMaster(Pstream::scheduled, Pstream::masterNo());
    toMaster << Heights.value();
}

if (Pstream::myProcNo() == Pstream::masterNo())
{
    ofstream myfileHF(DateiName, ios_base::out | ios_base::app);
    myfileHF << runTime.timeName() << "\t";
    Info<< "Heights" << Heights << endl;
    myfileHF << Heights.value() << "\t";
}

Am I not understanding something here?

Thanks in advance,
Armin
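Edit: looking at parallelTest.C again, could it be that the master also needs a matching receive for every slave message, something like this (untested, adapted from the code above)? Code:

// inside the master-only block: one receive per slave, so that no
// stray message is left for OpenFOAM's own communication to pick up
for
(
    int slave=Pstream::firstSlave();
    slave<=Pstream::lastSlave();
    slave++
)
{
    scalar h;
    IPstream fromSlave(Pstream::scheduled, slave);
    fromSlave >> h;
    myfileHF << h << "\t";
}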
December 27, 2010, 20:18
#10
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Hi Armin,
OK, let's try to do this the other way around:
I have not made the actual code modifications because:
Bruno
__________________