
MPI Reduce operator in the parallel calculation of a nested loop


August 3, 2021, 09:33
  #1
New Member
 
peyman havaej
Join Date: Jan 2016
Posts: 16
Hello Foamers,

I have written a new boundary condition in which I have to solve a semi-infinite integral to obtain the value at each face centre, e.g. if there are 1000 faces on a patch, the integral has to be evaluated 1000 times. Hence it is a really time-consuming calculation, and I do require a parallel computation for it.
The code works perfectly in serial mode, but the parallel computation does not give the same result as the serial calculation. I have found that this is because of a nested loop used in the code, i.e. a loop within another loop. For such a loop, the "reduce" operator should be used to collect data from all processors onto a single processor.
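Written out, the discrete sum that the code below evaluates for each face i is (symbols as in the code, x the face-centre coordinate, q the heat flux):

dT_i = sum over j of (|S_j|/thickness) / sqrt(x_i - x_j) * q_j / sqrt(pi * kappaS * rhoS * CpS * Us), where terms with x_i - x_j <= 0 are set to zero.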

I assumed that I had to use the reduce operator after each loop, but in that case the solver stalls (no further output) and the calculation does not continue past the first loop. If, however, the reduce operator is applied after the second loop only, the calculated values are different from the single-processor computation.
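For reference, the usual accumulate-then-reduce pattern looks roughly like this (reduce() is a collective call, so every processor has to call it the same number of times; the names localField, localSum and facei are only illustrative, not part of the boundary condition):

Code:
// Illustrative only: accumulate locally, then make one collective reduce call.
const scalarField& localField = patch().magSf();   // any per-face quantity
scalar localSum = 0.0;                             // partial sum on this rank

forAll(localField, facei)       // loops over this processor's faces only
{
    localSum += localField[facei];
}

reduce(localSum, sumOp<scalar>());   // single collective call on all ranks

Info<< "Sum over the whole (global) patch = " << localSum << endl;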

I would like to know how to use the MPI reduce operator correctly in the parallel calculation of a nested loop.
Any comments or suggestions will be greatly appreciated.


Best regards,
Peyman


Code:
void Foam::CarslawFvPatchScalarField::SijCoeffientsCalculationMatrix
(
    scalarField& dTNew,
    scalarField& SIJ_
)
{
    // Face centres and face areas of this processor's part of the patch
    vectorField BMeshV = patch().Cf();
    const scalarField& magS = patch().magSf();

    scalar dij;
    scalar cij;

    forAll(SIJ_, i)                     // outer loop: target face i
    {
        SIJ_ = 0;                       // reset the coefficient field

        forAll(SIJ_, j)                 // inner loop: source face j
        {
            cij = magS[j]/thickness_;   // mag(deltaPatch[j].x());

            dij = BMeshV[i].x() - BMeshV[j].x();

            // only faces upstream of face i (dij > 0) contribute
            SIJ_[j] = (dij > 0) ? cij/mag(sqrt(dij)) : 0;

            SIJ_[j] *= 1.0/sqrt(mathematicalConstant::pi*kappaS_*rhoS_*CpS_*Us_);
        }
        // reduce(SIJ_, sumOp<scalarField>()); //  *** First reduce operator ***

        dTNew[i] = sum(SIJ_*heatFlux_);
    }

    reduce(dTNew, sumOp<scalarField>());    //  *** Second reduce operator ***

    Info<< "Maximum dTNew is \n" << max(dTNew) << endl;
    Info<< "Minimum dTNew is \n" << min(dTNew) << endl;
}
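For completeness, a rough, untested sketch of gathering the whole-patch geometry on every processor (so that the inner loop can see all faces of the patch, not only the local ones) might look like the following; procCf, procMagSf, globalCf and globalMagSf are made-up names:

Code:
// Untested sketch (needs Pstream.H and ListListOps.H): gather face centres
// and areas of the complete patch onto every processor. Any other per-face
// field used inside the inner loop (e.g. heatFlux_) would need the same
// treatment.

List<vectorField> procCf(Pstream::nProcs());
List<scalarField> procMagSf(Pstream::nProcs());

procCf[Pstream::myProcNo()] = patch().Cf();
procMagSf[Pstream::myProcNo()] = patch().magSf();

Pstream::gatherList(procCf);
Pstream::scatterList(procCf);
Pstream::gatherList(procMagSf);
Pstream::scatterList(procMagSf);

// Flatten the per-processor lists into single fields covering the whole patch
vectorField globalCf
(
    ListListOps::combine<vectorField>(procCf, accessOp<vectorField>())
);
scalarField globalMagSf
(
    ListListOps::combine<scalarField>(procMagSf, accessOp<scalarField>())
);

// The inner forAll would then loop over globalCf/globalMagSf instead of the
// local patch fields, and no reduce would be needed afterwards, because each
// dTNew[i] is already complete on its own processor.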


Tags
boundary condition, mpi, openfoam, parallel computation, programming




