Divergence in parallel running with a large number of processors
|
April 4, 2023, 03:06
#1
New Member
Hung
Join Date: Dec 2021
Posts: 4
Rep Power: 4
Hi guys,
I have trouble running my case with a larger number of processors. My case first ran successfully with 36 processors, but when I increased the count to 72 it diverged, with the same time step, fvSchemes, fvSolution, etc. As you can see in the attachments, at the same simulation time (0.45316 s) the results are totally different. Does anyone know why this is the case? I used the scotch decomposition method. Thank you very much. The fvSchemes and fvSolution are set up as follows:

fvSchemes Code:

ddtSchemes
{
    default         backward; //Euler;
}

gradSchemes
{
    default         Gauss linear; //Gauss linear;
    grad(U)         Gauss linear;
}

divSchemes
{
    default         none;
    div(phi,U)      Gauss LUST grad(U); //linear;
    div((nuEff*dev2(T(grad(U))))) Gauss linear; //linear;
}

laplacianSchemes
{
    default         Gauss linear limited corrected 0.333; //Gauss linear corrected;
}

interpolationSchemes
{
    default         linear;
}

snGradSchemes
{
    default         limited corrected 0.333; //corrected;
}

wallDist
{
    method          meshWave;
}

fvSolution Code:

solvers
{
    p
    {
        solver          GAMG;
        tolerance       1e-5;
        relTol          0.01;
        smoother        GaussSeidel;
    }

    pFinal
    {
        $p;
        smoother        DICGaussSeidel;
        tolerance       1e-05;
        relTol          0;
    }

    "(U|k|epsilon|omega|R|nut|nuTilda)"
    {
        solver          smoothSolver;
        smoother        symGaussSeidel;
        tolerance       1e-05;
        relTol          0;
    }

    "(U|k|omega|nut|nuTilda)Final"
    {
        $U;
        tolerance       1e-05;
        relTol          0;
    }
}

PISO
{
    nCorrectors              5;   //1; pressure corrector
    nNonOrthogonalCorrectors 2;   //2;
    nOuterCorrectors         100; //momentum corrector

    innerCorrectorResidualControl
    {
        p
        {
            relTol          0; // if this initial tolerance is reached, leave
            tolerance       1e-5;
        }
        U
        {
            relTol          0; // if this initial tolerance is reached, leave
            tolerance       1e-5;
        }
    }

    residualControl
    {
        p               1e-5;
        U               1e-5;
        "(nut|k|epsilon|omega|f|v2)" 1e-5;
    }
}

relaxationFactors
{
    p               0.5;
    U               0.5;
}
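For reference, a minimal system/decomposeParDict for the 72-processor scotch run might look like the sketch below. Only numberOfSubdomains needs to change between the 36- and 72-core runs; the exact file contents are an assumption, not taken from the original post.

```
// system/decomposeParDict -- minimal sketch (assumed, not from the thread)
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  72;      // was 36 in the run that stayed stable
method              scotch;  // decomposition method the poster reports using
```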
April 10, 2023, 02:48

#3
New Member
Hung
Join Date: Dec 2021
Posts: 4
Rep Power: 4
Quote:
Unfortunately, it still didn't work after following your suggestions. The Courant number is still very high (see the attached file). Do you have any other ideas? Thank you.
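As a quick sanity check on a high Courant number: for an explicit estimate, the largest stable time step scales as dt = Co * dx / U. A small shell sketch with assumed values (the velocity and cell size below are illustrative placeholders, not numbers from this case):

```shell
# Estimate the time step needed to keep Co <= 0.5.
# U and dx are placeholders -- substitute your peak velocity and smallest cell size.
U=10           # peak velocity magnitude [m/s] (assumption)
dx=0.001       # smallest cell size [m] (assumption)
Co_target=0.5  # desired maximum Courant number
dt=$(awk -v U="$U" -v dx="$dx" -v Co="$Co_target" 'BEGIN { printf "%.2e", Co * dx / U }')
echo "max dt for Co <= ${Co_target}: ${dt} s"
```

If the dt this gives is much smaller than the deltaT in controlDict, divergence on a finer decomposition (which can change the linear-solver behaviour slightly) is not surprising.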
August 28, 2023, 11:27

#4
Senior Member
TWB
Join Date: Mar 2009
Posts: 414
Rep Power: 19
Hi, I have a similar problem: the run tends to diverge near the starting point. What I do is run with a small number of processors for a short interval, e.g. until t = 2e-5, and then restart from there with a larger number of processors. FYI, I am running an incompressible case with zero pressure gradient at all boundaries, so I have to set a reference location with p = 0.
Similarly, I have also seen the opposite: whether the problem diverges or not can depend on the number of processors.
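The restart workaround above can be sketched as a command sequence. This is a sketch under assumptions: the solver name (pimpleFoam) and the counts 36/72 are placeholders, and it assumes controlDict has startFrom set to latestTime for the second run.

```shell
# Run the startup transient on few processors, then continue on more.
# pimpleFoam and the counts 36/72 are assumptions for illustration.
foamDictionary system/decomposeParDict -entry numberOfSubdomains -set 36
decomposePar
mpirun -np 36 pimpleFoam -parallel      # run until e.g. t = 2e-5

reconstructPar -latestTime              # gather fields back into the serial case
foamDictionary system/decomposeParDict -entry numberOfSubdomains -set 72
decomposePar -force -latestTime         # re-decompose from the latest time only
mpirun -np 72 pimpleFoam -parallel      # restart on the larger processor count
```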
September 8, 2023, 15:02

#5
New Member
Antonio Lau
Join Date: Dec 2022
Posts: 3
Rep Power: 3
Maybe using a smaller time step would help.
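Rather than hand-tuning deltaT, OpenFOAM's PIMPLE-family solvers can adapt the time step to a Courant limit. A system/controlDict fragment, with illustrative limit values (assumptions, not from this thread):

```
// system/controlDict fragment -- illustrative values, not from this thread
adjustTimeStep  yes;     // let the solver shrink/grow deltaT automatically
maxCo           0.5;     // cap on the Courant number (assumption)
maxDeltaT       1e-4;    // upper bound on the time step (assumption)
```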