Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Running, Solving & CFD

Foam-Extend bad scalability


Old   August 3, 2020, 12:09
Default Foam-Extend bad scalability
  #1
Member
 
Join Date: Dec 2015
Posts: 74
Rep Power: 10
WhiteW is on a distinguished road
Hi,
I'm trying to run a parallel analysis with an MRF zone and a mixingPlane interface in foam-extend 4.1.
However, I'm not seeing any speed-up from parallelizing the analysis.
I read that for cyclicGGI zones and mixingPlane interfaces I need to use the patchConstrained decomposition method.
My decomposeParDict is (for the case of 4 processors):


Code:
numberOfSubdomains 4;

method    patchConstrained;

globalFaceZones
(
    RUCYCLIC1Zone
    RUINLETZone
    RUCYCLIC2Zone
    RUOUTLETZone
    GVOUTLETZone
    DTINLETZone
    DTCYCLIC1Zone
    DTCYCLIC2Zone
);

patchConstrainedCoeffs
{
    method            metis;
    numberOfSubdomains    4;
    patchConstraints
    (
        (RUINLET 0)
        (GVOUTLET 0)
        (RUOUTLET 1)
        (DTINLET 1)
    );
}

metisCoeffs
{
    processorWeights
    (
        1
        1
        1
        1
    );
}


distributed     no;


// ************************************************************************* //
The scalability of the analysis looks very bad: the fastest run is the one on a single core.
I launch the multi-processor runs with the command:

Code:
mpirun -np 8 MRFSimpleFoam > log.MRFSimpleFoam_np08
The attached scalability plot refers to the MRFSimpleFoam axialTurbine tutorial; however, the same scaling behaviour can be observed with all the other axialTurbine tutorials of foam-extend.

Do you have any advice on speeding up the analysis through domain decomposition? Am I doing something wrong in the decomposeParDict settings?

Thanks,
WhiteW
Attached Images
File Type: png scalability.PNG (17.8 KB, 14 views)
WhiteW is offline   Reply With Quote

Old   August 3, 2020, 15:13
Default
  #2
Senior Member
 
Santiago Lopez Castano
Join Date: Nov 2012
Posts: 354
Rep Power: 15
Santiago is on a distinguished road
Quote:
Originally Posted by WhiteW View Post
You forgot the -parallel flag when running your code! Basically, you are running 8 independent serial instances of the MRFSimpleFoam solver.
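For reference, a minimal corrected run sequence would be the following (a sketch: it assumes the case is already set up and that numberOfSubdomains in decomposeParDict matches the -np count):

```shell
# decompose the mesh according to system/decomposeParDict
decomposePar

# run the solver in true parallel mode; without -parallel, mpirun just
# launches N independent serial copies of the solver on the whole domain
mpirun -np 4 MRFSimpleFoam -parallel > log.MRFSimpleFoam_np04
```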
Santiago is offline   Reply With Quote

Old   August 3, 2020, 17:13
Default
  #3
Member
 
Join Date: Dec 2015
Posts: 74
Rep Power: 10
WhiteW is on a distinguished road
Hi Santiago,
thanks for your reply.
I did not use the -parallel option because it did not work well in my installation.

using:
Code:
mpirun -np 2 MRFSimpleFoam -parallel >log.MRFSimpleFoam
the run stalls (without any error message) after these prints:

Code:
Time = 1

BiCGStab:  Solving for Ux, Initial residual = 1, Final residual = 0.00286063, No Iterations 1
BiCGStab:  Solving for Uy, Initial residual = 1, Final residual = 0.00320029, No Iterations 1
BiCGStab:  Solving for Uz, Initial residual = 1, Final residual = 0.00224373, No Iterations 1
BiCGStab:  Solving for p, Initial residual = 1, Final residual = 0.0484594, No Iterations 25
BiCGStab:  Solving for p, Initial residual = 0.377316, Final residual = 0.0180105, No Iterations 47
time step continuity errors : sum local = 0.2369, global = -0.0946146, cumulative = -0.0946146
Initializing the mixingPlane interpolator between master/shadow patches: GVOUTLET/RUINLET
The two processors stay active, but the simulation does not proceed.

using:
Code:
mpirun -np 3 MRFSimpleFoam -parallel >log.MRFSimpleFoam
I get the following error:

Code:
Create time

Create mesh for time = 0

Initializing the GGI interpolator between master/shadow patches: RUCYCLIC1/RUCYCLIC2
Initializing the GGI interpolator between master/shadow patches: DTCYCLIC1/DTCYCLIC2
Initializing the mixingPlane interpolator between master/shadow patches: RUOUTLET/DTINLET
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
The only decomposition that works is the one with 4 processors:
Code:
mpirun -np 4 MRFSimpleFoam -parallel >log.MRFSimpleFoam
and it is actually the fastest one.
What could be the cause of the other errors?
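For completeness, the decomposeParDict above was written for 4 subdomains, with patchConstraints pinning patches to processors 0 and 1. If the -np count has to match numberOfSubdomains and the decomposition on disk (an assumption on my part, not a confirmed diagnosis), then a 2-processor run would need a re-decomposed case from a dictionary along these lines:

```
numberOfSubdomains 2;

method    patchConstrained;

patchConstrainedCoeffs
{
    method                metis;
    numberOfSubdomains    2;
    patchConstraints
    (
        (RUINLET 0)
        (GVOUTLET 0)
        (RUOUTLET 1)
        (DTINLET 1)
    );
}
```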
Thanks,
WhiteW
WhiteW is offline   Reply With Quote
