OF-2.2.x: Can't access cellZones in parallel run |
February 13, 2014, 14:52 | #1
Member
Join Date: Jul 2011
Posts: 54
Rep Power: 15
Hi,
I was writing a small functionObject to monitor p and U values in selected cellSets. The sets are created with topoSet and then converted to cellZones with setsToZones. Everything runs fine in a serial simulation, but not in parallel with mpirun: the cellZones then appear to be empty. Previously, when handling cellZones, I executed topoSet and setsToZones explicitly in every processor directory, which seemed to solve the problem, but this time it did not. My guess is that only one processor is looped over in my functionObject rather than all of them, and that the processors don't communicate inside this cellZone loop as they would in a loop over a volume field. Code:
const fvMesh& mesh(refCast<const fvMesh>(obr_));
const volScalarField& p(mesh.lookupObject<volScalarField>("p"));
const volVectorField& U(mesh.lookupObject<volVectorField>("U"));
const cellZoneMesh& cellZones(mesh.cellZones());

forAll(cellZones, zoneID)
{
    const word& zoneName(cellZones.names()[zoneID]);
    const labelUList& cellZone(cellZones[zoneID]);

    scalar pSumme(0);
    scalar UXSumme(0);
    scalar UYSumme(0);
    scalar UZSumme(0);
    scalar count(0);

    forAll(cellZone, i)
    {
        pSumme += p[cellZone[i]];
        UXSumme += U[cellZone[i]].x();
        UYSumme += U[cellZone[i]].y();
        UZSumme += U[cellZone[i]].z();
        count++;
    }

    scalar pAvg = pSumme/(count + VSMALL);
    scalar UXAvg = UXSumme/(count + VSMALL);
    scalar UYAvg = UYSumme/(count + VSMALL);
    scalar UZAvg = UZSumme/(count + VSMALL);

    Info<< nl << zoneName << ": "
        << pAvg << tab << UXAvg << tab << UYAvg << tab << UZAvg
        << nl << endl;
}

The method I chose isn't the most elegant one, but I don't know how to average in an easier way, and I don't know whether that even matters, because the cellZones can't be found. Anyone have an idea?
February 14, 2014, 05:24 | #2
Problem solved
Member
Join Date: Jul 2011
Posts: 54
Rep Power: 15
Found a solution in an old thread in the forums:
Use the reduce() function to ensure communication between the processors. In my case this has to be done on the per-processor sums before averaging. An example is shown below: Code:
reduce(pSumme, sumOp<scalar>());
reduce(count, sumOp<scalar>());

scalar pAvg = pSumme/(count + VSMALL);

Last edited by A_Pete; February 17, 2014 at 02:37. Reason: Mistake in example
January 4, 2017, 04:05 | #3
New Member
Yu Han
Join Date: Nov 2014
Posts: 3
Rep Power: 12
Hi Pete,
Your solution really helped me a lot. Thank you very much!