September 2, 2015, 10:59
unsatisfactory cluster speed
#1
New Member
Join Date: Jun 2015
Posts: 11
Rep Power: 11
Hi,
I have set up a small cluster of 5 machines (cores: 8, 4, 2, 2, 2) over the local network. It works quite well, but the solution speed doesn't improve; in fact, I have the slight feeling that performance gets worse the more cores I use. I know the network is a potential bottleneck, but that can't be all there is to it! Maybe someone has a hint for me on how to get more performance? Here are my files: fvSolution Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.4.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      fvSolution;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

solvers
{
    p
    {
        solver          GAMG;
        tolerance       1e-05;
        relTol          0.01;
        smoother        GaussSeidel;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 100;//10;
        agglomerator    faceAreaPair;
        mergeLevels     1;
    }

    pFinal
    {
        solver          GAMG;
        tolerance       1e-05;
        relTol          0;
        smoother        GaussSeidel;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 4;
        agglomerator    faceAreaPair;
        mergeLevels     1;
    }

    "(U|k|epsilon)"
    {
        solver          smoothSolver;
        smoother        symGaussSeidel;
        tolerance       1e-05;
        relTol          0.1;
    }

    "(U|k|epsilon)Final"
    {
        $U;
        tolerance       1e-05;
        relTol          0;
    }
}

PIMPLE
{
    nOuterCorrectors 2;//1//1
    nCorrectors     2;//2
    nNonOrthogonalCorrectors 1;//2//1
    pRefCell        0;
    pRefValue       0;
}

residualControl
{
    U
    {
        tolerance   1e-04;
        relTol      0;
        absTol      0;
    }
    p
    {
        relTol      0;
        tolerance   1e-04;
        absTol      0;
    }

//    relaxationFactors
//    {
//        fields
//        {
//        }
//        equations
//        {
//            "U.*"       1;
//            "k.*"       1;
//            "epsilon.*" 1;
//        }
}
// ************************************************************************* //
fvSchemes Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.4.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      fvSchemes;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

ddtSchemes
{
    default         Euler;
}

gradSchemes
{
    default         Gauss linear;
    grad(p)         Gauss linear;
    grad(U)         Gauss linear;
}

divSchemes
{
    default         none;
    div(phi,U)      Gauss limitedLinearV 1;//vanLeerV 1;//limitedLinearV 1;
    div(phi,k)      Gauss limitedLinear 1;//limitedLinear 1;
    div(phi,epsilon) Gauss limitedLinear 1;//limitedLinear 1;
    div(phi,R)      Gauss limitedLinear 1;//limitedLinear 1;
    div(R)          Gauss linear 1;//linear;
    div(phi,nuTilda) Gauss limitedLinear 1;//limitedLinear 1;
    div((nuEff*dev(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default         Gauss linear corrected;
}

interpolationSchemes
{
    default         linear;
    //interpolate(U) linear;//#######
}

snGradSchemes
{
    default         corrected;
}

fluxRequired
{
    default         no;
    p               ;
}
// ************************************************************************* //
decomposeParDict Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains 16;//12

//method          simple;//hierarchical;
//method          simple;//for MESHING
method          scotch;//for RUNNING

//scotchCoeffs
//{
//    processorWeights
//    (
//
//    );
//}

simpleCoeffs
{
    n               (4 3 1);
    delta           0.001;
}

hierarchicalCoeffs
{
    n               (4 3 1);
    delta           0.001;
    order           xyz;
}

manualCoeffs
{
    dataFile        "cellDecomposition";
}

distributed     no;

roots           ( );
// ************************************************************************* //
machines (MPI hostfile) Code:
master    slots=8 max-slots=8
cluster0  slots=4 max-slots=4
cluster1  slots=2 max-slots=2
cluster2  slots=2 max-slots=2
cluster3  slots=2 max-slots=2
run commands Code:
decomposePar
mpirun -np 16 renumberMesh -overwrite -parallel
pyFoamPlotRunner.py mpirun -hostfile /home/user/OpenFOAM/bps-2.4.0/run/machines -np 16 pimpleFoam -parallel
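Since the nodes have different core counts (and possibly different per-core speeds), an even split into 16 equal subdomains can leave the fast 8-core master idling while it waits for the two-core boxes at every time step. The scotch method accepts a processorWeights list to skew how many cells each rank gets. The sketch below is purely illustrative: the weights assume, hypothetically, that the master's cores are about twice as fast as the others, and they would have to be tuned against measured per-node timings.

```cpp
// system/decomposeParDict -- hypothetical weighting sketch, NOT measured values
numberOfSubdomains 16;

method          scotch;

scotchCoeffs
{
    // One weight per subdomain; with by-slot mapping and -np 16, ranks fill
    // the hostfile in order (8 on master, 4 on cluster0, 2 on cluster1,
    // 2 on cluster2; cluster3's slots would then go unused).
    // A larger weight puts more cells on that rank.
    processorWeights
    (
        2 2 2 2 2 2 2 2    // master: assumed ~2x faster cores (assumption)
        1 1 1 1            // cluster0
        1 1                // cluster1
        1 1                // cluster2
    );
}
```

After changing the weights, the per-processor cell counts printed by decomposePar show whether the split came out as intended.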
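The "more cores, slower run" symptom is what a simple strong-scaling model predicts once per-rank communication over a slow network is accounted for. The toy Python model below uses invented constants (serial_frac and comm_cost are placeholders, not measurements of this cluster) just to show that, for a fixed-size case, speedup peaks at a modest rank count and then falls:

```python
# Toy strong-scaling model for a fixed-size case on a slow interconnect.
# serial_frac and comm_cost are invented illustrative constants,
# NOT measured properties of the cluster in this thread.

def speedup(n, serial_frac=0.05, comm_cost=0.02):
    """Amdahl's law plus a communication term that grows with rank count."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n + comm_cost * n)

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} ranks: speedup {speedup(n):.2f}")
```

With these placeholder numbers the curve peaks around 7 ranks and is clearly worse at 16 than at 8. The real curve depends on case size and network latency, which is why timing the same case at 1, 2, 4, 8, and 16 ranks is the first diagnostic step before tuning anything else.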
Tags: cluster, performance, pimple, slots

Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post |
AMI speed performance | danny123 | OpenFOAM | 21 | October 24, 2020 05:13 |
[OpenFOAM.org] OpenFOAM Cluster Setup for Beginners | Ruli | OpenFOAM Installation | 7 | July 22, 2016 05:14 |
Why not install cluster by connecting workstations together for CFD application? | Anna Tian | Hardware | 5 | July 18, 2014 15:32 |
Train Speed | yeo | FLUENT | 5 | February 14, 2012 09:38 |
FLUENT Speed Issues on Cluster | cfd23 | FLUENT | 2 | April 4, 2010 00:43 |