|
Writing own files with calculated values during the run in parallel
|
December 22, 2006, 07:31 |
|
#1 |
Member
anne dejoan
Join Date: Mar 2009
Location: madrid, spain
Posts: 66
Rep Power: 17 |
Hello,
I would like to extract extra data from a parallel run and have the file written in each processor directory. The data I want to write is the time evolution of a value calculated from the field during the computation. Is there any existing file in OpenFOAM that could serve as inspiration for coding this? When I include the writing in the solver, only the master processor opens the file, in the local directory. Is there any parallel writing corresponding to a calculated value? Thank you, Anne
|
December 22, 2006, 13:57 |
|
#2 |
Senior Member
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
Rep Power: 26 |
If you just want to write a file to the local case directory:
{
    OFstream str(runTime.path()/"myFile");
    str << "extraData:" << extraData << endl;
}
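In a decomposed run each processor's runTime.path() should already point into its own processorN directory, so the snippet above gives one file per processor. A hedged variant (purely illustrative names; 'extraData' stands for whatever quantity is being monitored) also tags the file name with the processor number, which can make the per-processor outputs easier to tell apart:

    // Sketch: one output file per processor, with the rank in the file name.
    // Pstream::myProcNo() is the local processor number; Foam::name() turns
    // it into a word.
    fileName outName(word("myFile_") + Foam::name(Pstream::myProcNo()));
    OFstream str(runTime.path()/outName);
    str << "extraData: " << extraData << endl;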
|
January 8, 2007, 10:52 |
|
#3 |
Member
anne dejoan
Join Date: Mar 2009
Location: madrid, spain
Posts: 66
Rep Power: 17 |
Hello Mattijs and Happy New Year,
I am only looking at my request again now because I was away. Actually, that is what I have done; exactly how I implemented it follows at the end of this message. This works well in serial BUT not in parallel. My problem is to output my data correctly when I run in parallel. What I don't know is how to specify that the output will be written by each process. Besides this, I noticed that the "probe" utility works well in serial but not when running in parallel. Could you help me with this? Thanks, Anne
---------------------------------
const vectorField& centres = mesh.C();
const scalarField& volumes = mesh.V();
const int nroom = 7;

OFstream CMEAN("Cmean.dat");

scalarField cmhb1(nroom, 0.0);
scalarField totvol(nroom, 0.0);

forAll(centres, celli)
{
    const vector& cCentre = centres[celli];
    scalar x = cCentre[0];
    scalar y = cCentre[1];
    scalar z = cCentre[2];

    if ((0.5 < z) && (z < 1.5))
    {
        // Hall B1
        if ((0 < y) && (y < 30))
        {
            if ((0 < x) && (x < 60))
            {
                totvol[0] += volumes[celli];
                cmhb1[0] += T[celli]*volumes[celli];
            }
        }
        Info<< "\nHall B1" << endl;
        Info<< "\nTotVol Room Step1" << 0 << " = " << totvol[0] << endl;

        // Hall B2
        if ((30 < y) && (y < 60))
        {
            if ((0 < x) && (x < 60))
            {
                totvol[1] += volumes[celli];
                cmhb1[1] += T[celli]*volumes[celli];
            }
        }

        // Exit 1 B1
        if ((-7 < y) && (y < 0))
        {
            if ((26 < x) && (x < 34))
            {
                totvol[2] += volumes[celli];
                cmhb1[2] += T[celli]*volumes[celli];
            }
        }

        // Exit 2 B1
        if ((12 < y) && (y < 18))
        {
            if ((-7 < x) && (x < 0))
            {
                totvol[3] += volumes[celli];
                cmhb1[3] += T[celli]*volumes[celli];
            }
        }

        // Exit B2
        if ((42 < y) && (y < 48))
        {
            if ((-7 < x) && (x < 0))
            {
                totvol[4] += volumes[celli];
                cmhb1[4] += T[celli]*volumes[celli];
            }
        }

        // Corridor part south
        if ((-21.7 < y) && (y < -7))
        {
            if ((-21.7 < x) && (x < 60))
            {
                totvol[5] += volumes[celli];
                cmhb1[5] += T[celli]*volumes[celli];
            }
        }

        // Corridor part north
        if ((-7 < y) && (y < 60))
        {
            if ((-21.7 < x) && (x < -7))
            {
                totvol[6] += volumes[celli];
                cmhb1[6] += T[celli]*volumes[celli];
            }
        }
    } // end of test on z plane
} // end of all cells

for (int j = 0; j < nroom; j++)
{
    // cmhb1[j] /= totvol[j];
    // Info<< "\nTotVol Room " << j << " = " << totvol[j] << endl;
    // Info<< "\nTotVol Room J = " << j << endl;
    CMEAN << runTime.value() << tab << cmhb1[j] << tab;
}
CMEAN << endl;
-----------------------------------------------------------
|
January 9, 2007, 17:37 |
|
#4 |
Senior Member
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
Rep Power: 26 |
If you want each processor to write to a local file (i.e. in its processor directory), the easiest way is to use a 'registered' IO object. These get written into (a subdirectory of) the case. Examples are labelIOList or scalarIOField.
Have a scan through the $FOAM_UTILITIES sources for e.g. labelIOList. P.S. 'Info' only writes on the master; it does not do anything on the slave processors.
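A minimal sketch of that suggestion, assuming 'extraData' is a scalarField already computed in the solver (the object name and write options here are only illustrative): wrapping the field in a scalarIOField registered on the mesh makes each processor write its own copy into its processorN time directory.

    #include "scalarIOField.H"

    // A 'registered' IO object: each processor writes its own copy into
    // <case>/processorN/<time>/ when write() is called.
    scalarIOField extraDataIO
    (
        IOobject
        (
            "extraData",          // object/file name (illustrative)
            runTime.timeName(),   // instance: the current time directory
            mesh,                 // registry: the mesh
            IOobject::NO_READ,
            IOobject::AUTO_WRITE
        ),
        extraData
    );

    extraDataIO.write();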
|
January 10, 2007, 04:19 |
|
#5 |
Member
anne dejoan
Join Date: Mar 2009
Location: madrid, spain
Posts: 66
Rep Power: 17 |
Hello Mattijs,
Actually, what I am not sure about is whether, with the code written as I wrote it (see my previous message), the master will write the global information, i.e. whether the master has the global mesh data needed to perform the "if" test and write my data. I thought that each processor would do the "if" test and then write its own part. What led me to this is that the probe utility works well in serial but does not write the data when used in parallel. Please could you let me know about my concern regarding the master process? I thank you a lot for your help, Anne
|
January 10, 2007, 06:24 |
|
#6 |
Senior Member
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
Rep Power: 26 |
In parallel no node has the whole mesh. Each processor has a bit of the mesh. You'll have to make sure that only the information from the processor that holds the information gets dumped.
Usually one sets the information on all the processors that don't have the data to some special value (e.g. -GREAT) and does a 'reduce' on it that lets through only the correct information (e.g. reduce(..., maxOp<scalar>())). (Probes in the next version will work in parallel.)
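A rough sketch of that pattern (the probe point is made up for illustration, and T stands in for whatever field is being sampled): every processor whose mesh does not contain the point contributes -GREAT, and the maxOp reduction lets the one valid value through everywhere.

    // Hypothetical probe location -- only one processor's mesh contains it.
    point probePt(0.1, 0.2, 0.3);
    label celli = mesh.findCell(probePt);   // returns -1 if not on this processor

    scalar probedT = -GREAT;                // special value on the "wrong" processors
    if (celli >= 0)
    {
        probedT = T[celli];
    }

    reduce(probedT, maxOp<scalar>());       // now identical on all processors

    Info<< "T at probe point = " << probedT << endl;   // Info prints on the master only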
|
January 10, 2007, 07:04 |
|
#7 |
Member
anne dejoan
Join Date: Mar 2009
Location: madrid, spain
Posts: 66
Rep Power: 17 |
Hi Mattijs,
Could you be a little more explicit about the way to use reduce in C++? I have used it before, but in an in-house f77 code, and I don't know how to implement it in C++. I thank you for your help. Anne
|
January 11, 2007, 05:34 |
|
#8 |
Senior Member
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
Rep Power: 26 |
Do a grep for e.g. Pstream::reduce in the utilities sources (in $FOAM_UTILITIES). Also search on the Wiki. E.g.
http://openfoamwiki.net/index.php/Sn...ting_mass_flow |
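In the spirit of that wiki snippet, a sketch of summing a flux over a patch in parallel (the patch name "outlet" and the use of the flux field phi are assumptions for illustration): each processor sums its local part of the patch, then the reduce turns it into the global value.

    label patchI = mesh.boundaryMesh().findPatchID("outlet");  // assumed patch name
    scalar massFlow = 0.0;
    if (patchI >= 0)
    {
        massFlow = sum(phi.boundaryField()[patchI]);   // local patch faces only
    }
    reduce(massFlow, sumOp<scalar>());                 // global sum over all processors
    Info<< "Mass flow through outlet = " << massFlow << endl;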
|
January 11, 2007, 12:55 |
|
#9 |
Member
anne dejoan
Join Date: Mar 2009
Location: madrid, spain
Posts: 66
Rep Power: 17 |
Dear Mattijs,
Thanks for the information on reduce. Regarding the way I coded the sum I want to calculate (see above), I am not sure that the use of "celli" (as I use it) works well in parallel. What I want is only the sum of my scalar over a given region, so each process has to check the location. Could you have a look at it and let me know if it seems correct, in particular the use of "centres" and "celli"? Thank you, Anne
|
May 23, 2008, 22:30 |
|
#10 |
New Member
xiuying
Join Date: Mar 2009
Posts: 24
Rep Power: 17 |
Hello,
I would like to extract extra data from a parallel run, including some values averaged over the whole domain, and have the file with the time evolution written in the root case directory during the computation. My code is the following.

OFstream str(runTime.path()/"n.dat");

dimensionedScalar epsilonmean =
    fvc::domainIntegrate(turbulence->epsilon())/sum(mesh.V());
dimensionedScalar uxmean =
    mag((flowDirection & U)().weightedAverage(mesh.V()));
dimensionedScalar n = sqrt(epsilonmean)/uxmean.value()/sqrt(uxmean);

str << runTime.timeOutputValue() << " " << n.value() << nl << endl;

The result showed that the code wrote the file in every processor directory and that the value of 'n' at the same time is different on every processor. But I just want to get one averaged 'n' for the whole domain. Could you tell me how to deal with this? Thank you, Kang
|
May 25, 2008, 14:33 |
|
#11 |
Assistant Moderator
Bernhard Gschaider
Join Date: Mar 2009
Posts: 4,225
Rep Power: 51 |
Hi Kang!
_Reducing_ the values of one variable to a single value that is the same on all processors is achieved by the reduce statement. Summing up one variable, for instance, is achieved by

scalar val = 0;
// val is calculated here and is different on each processor
reduce(val, sumOp<scalar>());
// val is now the same on all processors

Adapt this example to your application (be sure to use the total (== all processors) volume, where applicable). Bernhard
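Applied to the per-room sums in Anne's snippet earlier in the thread (reusing her names totvol, cmhb1, nroom and CMEAN; just an untested sketch of the same pattern), each entry would be reduced before writing, and only the master would write the file:

    // After the forAll loop: each processor holds only its local contribution.
    for (int j = 0; j < nroom; j++)
    {
        reduce(totvol[j], sumOp<scalar>());
        reduce(cmhb1[j], sumOp<scalar>());
    }

    // Write on the master only, so a single global file is produced
    // (CMEAN is the OFstream from Anne's snippet).
    if (Pstream::master())
    {
        for (int j = 0; j < nroom; j++)
        {
            CMEAN << runTime.value() << tab << cmhb1[j] << tab;
        }
        CMEAN << endl;
    }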
__________________
Note: I don't use "Friend"-feature on this forum out of principle. Ah. And by the way: I'm not on Facebook either. So don't be offended if I don't accept your invitation/friend request |
|
May 27, 2008, 16:13 |
|
#12 |
New Member
xiuying
Join Date: Mar 2009
Posts: 24
Rep Power: 17 |
Hi, Bernhard,
Thank you so much. Now I have a question. I used a single processor to run the same case. I found that 'uxmean' was the same as the parallel result, but 'epsilonmean' was different. Could you tell me the reason? Thanks, Kang
|
May 27, 2008, 16:53 |
|
#13 |
Assistant Moderator
Bernhard Gschaider
Join Date: Mar 2009
Posts: 4,225
Rep Power: 51 |
Hi Kang!
To get the correct value for epsilonmean in your notation, you'll have to keep a separate summation variable for the volume and divide the integral of epsilon by it AFTER both of them have been reduced. Bernhard
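A sketch of what this means in code, spelled out with an explicit cell loop and explicit reductions so it does not rely on any global summation hidden inside domainIntegrate or sum (variable names follow Kang's snippet above):

    const volScalarField epsilon(turbulence->epsilon());

    scalar epsSum = 0.0;
    scalar volSum = 0.0;

    forAll(mesh.V(), celli)
    {
        epsSum += epsilon[celli]*mesh.V()[celli];
        volSum += mesh.V()[celli];
    }

    reduce(epsSum, sumOp<scalar>());
    reduce(volSum, sumOp<scalar>());

    // Divide only AFTER both sums have been reduced.
    scalar epsilonmean = epsSum/volSum;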
|
|
|