Radically Different GAMG Pressure Solve Iterations with Varying Processor Count |
August 30, 2012, 13:32   #1
Radically Different GAMG Pressure Solve Iterations with Varying Processor Count
Member
Matthew J. Churchfield
Join Date: Nov 2009
Location: Boulder, Colorado, USA
Posts: 49
Rep Power: 19
I am performing a scaling study of OpenFOAM, using channel flow DNS as the test case. I am finding that PCG scales well down to roughly 10K-20K cells/core. GAMG also scales reasonably well, though not down to as few cells per core as PCG, but it is much faster than PCG.
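For reference, a plain PCG pressure setup used in this kind of PCG-vs-GAMG comparison would look roughly like the following. This is an assumed, illustrative fvSolution entry with a DIC preconditioner and the same tolerances as the GAMG setup below, not necessarily the exact configuration used in the study:

    p
    {
        solver          PCG;    // conjugate gradient solve of the pressure equation
        preconditioner  DIC;    // diagonal incomplete Cholesky preconditioner
        tolerance       1e-5;
        relTol          0.05;
    }

    pFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-6;   // tighter absolute tolerance on the final corrector
        relTol          0.0;
    }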
However, there is some anomalous behavior with GAMG that I am trying to understand. The best example is as follows: I ran a case with 315M cells for roughly 2000 time steps, on 1024, 2048, and 4096 cores, using the default scotch decomposition. Everything is exactly the same in all three cases except for the number of cores used. The 2048-core case requires roughly twice as many final pressure solve iterations to reach the same tolerance as the 1024- and 4096-core cases. In general, I have seen that for a fixed problem size the number of final pressure solve iterations increases slightly as the number of cores is increased, but this 2048-core case is an outlier. Does anyone have any idea why this may have occurred? My pressure solver settings are as follows (I used OF-2.1.0):

    p
    {
        solver                  GAMG;
        tolerance               1e-5;
        relTol                  0.05;
        smoother                DIC;
        nPreSweeps              0;
        nPostSweeps             2;
        nFinestSweeps           2;
        cacheAgglomeration      true;
        nCellsInCoarsestLevel   100;
        agglomerator            faceAreaPair;
        mergeLevels             1;
    }

    pFinal
    {
        solver                  GAMG;
        tolerance               1e-6;
        relTol                  0.0;
        smoother                DIC;
        nPreSweeps              0;
        nPostSweeps             2;
        nFinestSweeps           2;
        cacheAgglomeration      true;
        nCellsInCoarsestLevel   100;
        agglomerator            faceAreaPair;
        mergeLevels             1;
    }

Thank you,
Matt Churchfield
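As a concrete sketch of the decomposition side: with the default scotch method mentioned above, the only entry that would change between the 1024-, 2048-, and 4096-core runs is numberOfSubdomains in system/decomposeParDict. The snippet below is illustrative; the actual dictionary used may contain additional entries:

    // system/decomposeParDict (illustrative)
    numberOfSubdomains  2048;   // 1024, 2048 or 4096, depending on the run

    method              scotch; // graph-based partitioning, no manual split directions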
August 31, 2012, 07:23   #2
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Hi Matt,
Looks like you've hit a corner case due to the number of divisions. I know I've seen some explanations on this subject... OK, two I've found:
Another possibility is the number of cells available per processor: 315M cells / 2048 processors ~= 154k cells, which is comfortably above 50k cells, so I guess there are enough cells to go around. Of course, you should confirm that scotch isn't unbalancing the distribution, e.g. by giving only ~40k cells to one processor and spreading that processor's remaining ~110k cells over the others.

By the way, another detail I found some time ago that might help: http://www.cfd-online.com/Forums/ope...tml#post367979 post #8 - it's possible to do multi-level decomposition!

Best regards,
Bruno
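On the two points above: the per-processor cell counts printed by decomposePar are the quickest way to check whether scotch has unbalanced the distribution. The multi-level decomposition mentioned in the linked post could be set up along the following lines; this is a rough sketch assuming the multiLevel method available in OpenFOAM 2.x, so check the decomposeParDict examples shipped with your installation for the exact keywords. The 64 x 32 split is an arbitrary example that multiplies out to 2048 subdomains:

    // system/decomposeParDict (rough sketch of a two-level decomposition)
    numberOfSubdomains  2048;

    method              multiLevel;

    multiLevelCoeffs
    {
        level0
        {
            numberOfSubdomains  64;     // e.g. one group per compute node
            method              scotch;
        }
        level1
        {
            numberOfSubdomains  32;     // cores within each group; 64 x 32 = 2048
            method              scotch;
        }
    }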
Tags: gamg, scaling, multigrid