Parallel computation and simultaneous post processing

December 25, 2010, 16:26  #1
Armin Gh. (New Member, Aachen, Germany)
Hi FOAMers,

I am using interFoam and computing in parallel on many processors. At every time step, the location of the phase interface is read at a defined position and written to a file (by a simple header file that I include in interFoam.C).

The problem is that after the computation I end up with many files (one per processor), only one of which actually contains the data I need; the others are just empty files.

Is there a way to write only one file, from the processor that actually contains the location?

This is important since I will be running the case on a thousand processors.

Thanks in advance for your posts,

Armin

December 25, 2010, 19:38  #2
Bruno Santos (Retired Super Moderator, Lisbon, Portugal)
Greetings Armin,

If I understood you correctly, you only need the master process (responsible for processor0) to write data to a file, is that it?
If so, check the parallelTest utility available in the folder "$FOAM_APP/test/parallel". In that utility you can see how to tell the solver/application which process should do certain tasks, namely the master or the slave processes.

Best regards and good luck!
Bruno

December 26, 2010, 06:08  #3
Armin Gh. (New Member, Aachen, Germany)
Hi Bruno,

Thanks for your reply,

I am not sure, however, what you mean. The following is the parallelTest code, am I right? I'm not seeing anything relevant to assigning certain tasks to certain processors, or this could be way over my head programming-wise. Can you make it a little clearer?

Code:
    Perr<< "\nStarting transfers\n" << endl;

    vector data(0, 1, 2);

    if (Pstream::parRun())
    {
        if (Pstream::myProcNo() != Pstream::masterNo())
        {
            {
                Perr<< "slave sending to master "
                    << Pstream::masterNo() << endl;
                OPstream toMaster(Pstream::scheduled, Pstream::masterNo());
                toMaster << data;
            }

            Perr<< "slave receiving from master "
                << Pstream::masterNo() << endl;
            IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());
            fromMaster >> data;

            Perr<< data << endl;
        }
        else
        {
            for
            (
                int slave=Pstream::firstSlave();
                slave<=Pstream::lastSlave();
                slave++
            )
            {
                Perr<< "master receiving from slave " << slave << endl;
                IPstream fromSlave(Pstream::scheduled, slave);
                fromSlave >> data;

                Perr<< data << endl;
            }

            for
            (
                int slave=Pstream::firstSlave();
                slave<=Pstream::lastSlave();
                slave++
            )
            {
                Perr<< "master sending to slave " << slave << endl;
                OPstream toSlave(Pstream::scheduled, slave);
                toSlave << data;
            }
        }
    }

    Info<< "End\n" << endl;

    return 0;
}

December 26, 2010, 08:39  #4
Bruno Santos (Retired Super Moderator, Lisbon, Portugal)
Hi Armin,

It's easy! It's right there near the start of the code you pasted!!
Code:
if (Pstream::parRun()) // first make sure you're running in parallel mode
{
    if (Pstream::myProcNo() == Pstream::masterNo())
    {
        // insert master-process code inside this block
    }
}
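By the way, if I remember correctly, Pstream::master() is a shorthand for that comparison, so the same guard can also be written like this (check the Pstream class in your OpenFOAM version to be sure):
Code:
if (Pstream::parRun() && Pstream::master())
{
    // master-only code, e.g. opening and writing the output file
}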
Good luck!
Bruno

December 26, 2010, 09:20  #5
Armin Gh. (New Member, Aachen, Germany)
Hi Bruno,

Thanks for your prompt answer. There are, though, a couple of things I do not get.

First of all, how do I know which processor is actually the master and which are the slaves? (I am running on a cluster with a hundred cores.)

And let's say I somehow do know; it is still not really possible for the program to know in advance where the location is.

So I am assuming that in my former post I didn't express myself correctly. Let's say I have a channel with a two-phase flow in it, which is partitioned horizontally and assigned to, say, 10 processors. Now I am evaluating the position of the interface at a predefined location, which lies in exactly one of those processors' subdomains.

So every processor opens a file, but only the processor containing the actual location writes any data; the others just create new, empty files. I want to avoid these empty files.

Or is the master processor the one which actually writes the data? If so, how does OpenFOAM figure that out?

Thanks for bearing with me,
Armin

December 26, 2010, 10:23  #6
Bruno Santos (Retired Super Moderator, Lisbon, Portugal)
Hi Armin,

Ah, now I get it... that's why you weren't looking for the master process. I think that usually the master process is allocated on the first host/core/CPU given to mpirun.

OK, so how exactly is the data saved? Did you base the code on an already existing solver or example? Or did you create the modification yourself?

The other possibility is to do the reverse: when all processes are done writing their files for the current time snapshot, each of them checks whether its own file is empty and, if so, erases it.
It's not very efficient, but if you used some internal OpenFOAM function that does things on its own, then this would be the quickest solution.

Best regards,
Bruno

December 26, 2010, 12:47  #7
Armin Gh. (New Member, Aachen, Germany)
Hi again,

and thanks again for your support,

The code is actually the same as the interFoam solver, except for an additional header file, which does the calculations, finds the interface, and then does the following:

(... some lines to read the necessary data from the dictionary files), then:
Code:
std::stringstream ostr;

int proNumb = Pstream::myProcNo();

ostr << proNumb;
std::string s = "interface" + ostr.str();

const char* DataName = s.c_str();

ofstream myfileHF(DataName, ios_base::out | ios_base::app);

then some calculations to assign the interface position to a variable named height, and then:

myfileHF << height.value() << "\t";

And well... your quick solution would not work, because the files produced by the other processors are not actually empty: they have a common header, which I write in order to differentiate between the data files later. So, on the basis of the example from my former post, I would have 9 files with just the header and 1 with the header and the data.

December 26, 2010, 16:10  #8
Bruno Santos (Retired Super Moderator, Lisbon, Portugal)
Hi Armin,

Then why don't you send the ProcNo and height from the slave to the master process and have the master save all of the collected data into a single file?
The parallelTest.C code shows you exactly how you can do that transfer, since it sends the contents of the data variable between the master and the slaves! Then use the other block of code I posted to have only the master open and write the file with the data! It's so easy!

The way I see it, you can have all slaves send a vector to the master with:
  • a flag indicating if it has the height;
  • the number of the process that has the height;
  • the height itself.
Of course, don't forget that the master itself can be the process that has the desired height. This way, you can even track a "probe particle" flowing through the model.
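For example, here is a rough and untested sketch of that idea, packing the flag, the process number and the height into a single vector; foundHeight and height are just placeholders for whatever your interface-detection code produces (plain scalar values assumed):
Code:
// rough, untested sketch: every process fills a 3-component vector
vector msg(0, 0, 0);
msg.x() = foundHeight ? 1.0 : 0.0;     // flag: do I have the interface height?
msg.y() = Pstream::myProcNo();         // which process I am
msg.z() = foundHeight ? height : 0.0;  // the height itself

if (Pstream::parRun() && !Pstream::master())
{
    // slave: send my vector to the master
    OPstream toMaster(Pstream::scheduled, Pstream::masterNo());
    toMaster << msg;
}
else if (Pstream::parRun())
{
    // master: collect the vectors from all slaves
    for (int slave=Pstream::firstSlave(); slave<=Pstream::lastSlave(); slave++)
    {
        vector slaveMsg;
        IPstream fromSlave(Pstream::scheduled, slave);
        fromSlave >> slaveMsg;

        if (slaveMsg.x() > 0.5)
        {
            // this slave found the interface: write slaveMsg.z() to the single file
        }
    }
    // and don't forget to check the master's own msg as well
}
Just keep in mind that every send from a slave needs a matching receive on the master, otherwise MPI will complain.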

Best regards and good luck!
Bruno

December 27, 2010, 15:26  #9
Armin Gh. (New Member, Aachen, Germany)
Hi Bruno,

Sorry for the late reply. I tried out your instructions and I got a runtime error:

MULES: Solving for alpha1
Liquid phase volume fraction = 0.195192 Min(alpha1) = 0 Max(alpha1) = 1
MULES: Solving for alpha1
Liquid phase volume fraction = 0.195192 Min(alpha1) = 0 Max(alpha1) = 1
MULES: Solving for alpha1
Liquid phase volume fraction = 0.195192 Min(alpha1) = 0 Max(alpha1) = 1
MULES: Solving for alpha1
Liquid phase volume fraction = 0.195192 Min(alpha1) = 0 Max(alpha1) = 1
DICPCG: Solving for p, Initial residual = 1, Final residual = 0.0434658, No Iterations 2
DICPCG: Solving for p, Initial residual = 0.0252118, Final residual = 0.00105264, No Iterations 22
DICPCG: Solving for p, Initial residual = 0.00483827, Final residual = 9.51204e-08, No Iterations 163
time step continuity errors : sum local = 1.43681e-11, global = 9.70906e-13, cumulative = 9.70906e-13
Write Sample
[2] slave sending to master 0
ExecutionTime = 2.57 s ClockTime = 3 s

[fix:9186] *** An error occurred in MPI_Recv
[fix:9186] *** on communicator MPI_COMM_WORLD
[fix:9186] *** MPI_ERR_TRUNCATE: message truncated
[fix:9186] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

As you can see, there is something wrong with my sending process. I'm guessing the buffer size is too small for the data transfer, since it's in a loop.

BTW, I used the following to do the transfer:

Code:
if (Pstream::parRun() && Pstream::myProcNo() != Pstream::masterNo())
{
    Perr<< "slave sending to master " << Pstream::masterNo() << endl;
    OPstream toMaster(Pstream::scheduled, Pstream::masterNo());
    toMaster << Heights.value();
}
if (Pstream::myProcNo() == Pstream::masterNo())
{
    ofstream myfileHF(DateiName, ios_base::out | ios_base::app);
    myfileHF << runTime.timeName() << "\t";
    Info << "Heights" << Heights << endl;
    myfileHF << Heights.value() << "\t";
}

Am I not understanding something here?

Thanks in advance,
Armin

December 27, 2010, 20:18  #10
Bruno Santos (Retired Super Moderator, Lisbon, Portugal)
Hi Armin,

OK, let's try to do this the other way around:
  1. Build the parallelTest utility first.
  2. Now run it in parallel mode, while inside your existing decomposed case:
    Code:
    foamJob -p -s parallelTest
    Or something like this, if you only want two parallel processes:
    Code:
    mpirun -n 2 parallelTest -parallel
  3. If it executes successfully, you should see something like this:
    Code:
    Create time
    
    [1]
    Starting transfers
    [1]
    [1] slave sending to master 0
    [1] slave receiving from master 0
    [0]
    Starting transfers
    [0]
    [0] master receiving from slave 1
    [0] (0 1 2)
    [0] master sending to slave 1
    End
    
    [1] (0 1 2)
    Finalising parallel run
  4. As you can see from the previous output, the master sends data to the slaves and the slaves send data to the master.
  5. Test again running in parallel, but this time with more than 2 parallel processes.
Now, let's make a copy and modify the code:
  1. Copy the source folder for the parallelTest utility to another folder.
  2. In the copied version, modify the file "Make/files", changing the name in the last line from "parallelTest" to "slaves2master" or something like that.
  3. You can rename the file "parallelTest.C" to "slaves2master.C" if you want to, but be sure to rename both the actual (copied) file and the corresponding entry in "Make/files".
  4. Now, open the source code C file "parallelTest.C" or "slaves2master.C" and examine it thoroughly! Compare the output you got from running parallelTest with the source code.
  5. You should be able to see and differentiate the different blocks of code where it sends data from the slaves to the master, as well as from the master to the slaves:
    Code:
    if running in parallel mode:
      if I'm a slave:
         send data to master
         receive data from master
      else //if I'm the master:
         receive data from all slaves
         send data to all slaves
  6. Now, remove the unwanted operations and leave only the desired ones:
    Code:
    if running in parallel mode:
      if I'm a slave:
         send data to master
      else //if I'm the master:
         receive data from all slaves
  7. Save and compile/build the modified utility.
  8. Test and run.
  9. Modify it again, but this time make it send values of the same type as the ones you need to send from your modified interFoam. Also, modify it to open the desired file for saving the heights.
  10. Test and run. Keep in mind that this is a dummy test of your final code.
  11. When you're satisfied with the result, copy-paste this block of code into the proper places in your modified interFoam application.
And there you have it, this is the step-by-step of what you should do.

I have not made the actual code modifications because:
  1. I don't have any more time to spend on this;
  2. You're the one who needs to overcome this obstacle. Otherwise, demand more file space on the cluster.
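Still, just to point you in the right direction, here is a very rough and untested sketch of step 6 applied to your case; localHeight and the file name are placeholders, and it assumes <fstream> is included as in your own snippet:
Code:
// very rough, untested sketch of step 6: slaves only send, master only receives and writes
if (Pstream::parRun())
{
    if (!Pstream::master())
    {
        // slave: send my local value to the master
        OPstream toMaster(Pstream::scheduled, Pstream::masterNo());
        toMaster << localHeight;
    }
    else
    {
        // master: receive one value per slave and append everything to a single file
        std::ofstream heightsFile("interfaceHeights", std::ios_base::app);
        heightsFile << runTime.timeName();

        for (int slave=Pstream::firstSlave(); slave<=Pstream::lastSlave(); slave++)
        {
            scalar slaveHeight;
            IPstream fromSlave(Pstream::scheduled, slave);
            fromSlave >> slaveHeight;
            heightsFile << "\t" << slaveHeight;
        }
        heightsFile << "\n";
    }
}
Notice that this writes the value received from every slave; if you only want the one from the process that actually found the interface, combine it with the flag idea from post #8.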
Best regards and good luck!!
Bruno
