|
September 30, 2013, 07:25 |
interFoam parallel
|
#1 |
Member
Join Date: Aug 2011
Posts: 89
Rep Power: 15 |
Hello everyone,
I am using interFoam and it gets really slow when I run it in parallel. I have an 8.1 million cell grid and I have tried 100, 2000 and 4000 processors. With 100 processors I need 10 hours for one write interval to be completed; the write interval is 10^(-4) seconds. With 2000 or 4000 processors I need about 1 min per iteration. Does anyone have experience with interFoam in parallel? Thanks a lot |
|
September 30, 2013, 08:57 |
|
#2 |
Senior Member
Nima Samkhaniani
Join Date: Sep 2009
Location: Tehran, Iran
Posts: 1,267
Blog Entries: 1
Rep Power: 25 |
It may be related to your matrix solvers. Would you please post your fvSolution here?
__________________
My Personal Website (http://nimasamkhaniani.ir/) Telegram channel (https://t.me/cfd_foam) |
|
September 30, 2013, 09:01 |
|
#3 |
Senior Member
Bernhard
Join Date: Sep 2009
Location: Delft
Posts: 790
Rep Power: 22 |
4000 processors seems a bit excessive for 8M cells. I would assume you did not gain anything by increasing from 2000 to 4000 processors.
How are you solving the pressure equation? |
|
September 30, 2013, 09:45 |
|
#4 |
Member
Join Date: Aug 2011
Posts: 89
Rep Power: 15 |
Hello,
thanks for your ideas. Here is my fvSolution file:

solvers
{
    pcorr
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-10;
        relTol          0;
    }

    p_rgh
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-07;
        relTol          0.05;
    }

    p_rghFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-07;
        relTol          0;
    }

    "(U|k|epsilon)"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-06;
        relTol          0;
    }

    "(U|k|epsilon)Final"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-08;
        relTol          0;
    }
}

PIMPLE
{
    momentumPredictor no;
    nCorrectors     3;
    nNonOrthogonalCorrectors 0;
    nAlphaCorr      1;
    nAlphaSubCycles 4;
    cAlpha
}

Thanks a lot |
|
October 4, 2013, 05:01 |
|
#5 |
Member
Michiel
Join Date: Oct 2010
Location: Delft, Netherlands
Posts: 97
Rep Power: 16 |
I think using GAMG for the pressure can help speed up the solution process quite a bit.
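Something along these lines for p_rgh would be a starting point (the smoother choice and the sweep/agglomeration numbers below are just typical defaults I would try first, not values tuned for your case; keep p_rghFinal the same except with relTol 0):

    p_rgh
    {
        solver          GAMG;
        smoother        DIC;        // symmetric smoother, fits the symmetric pressure matrix
        nPreSweeps      0;
        nPostSweeps     2;
        cacheAgglomeration on;
        agglomerator    faceAreaPair;
        nCellsInCoarsestLevel 100;  // small coarsest level keeps the coarse solve cheap
        mergeLevels     1;
        tolerance       1e-07;
        relTol          0.05;
    }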
And I agree with Bernhard that the number of processors you use is really excessive. For an 8M grid, using 2000 processors leaves only 4000 cells per processor, which might sound nice, but all of those processors have to talk to their neighbours, so chances are high that the communication between processors becomes the limiting factor; with that many processors it might even make you lose speed.
You can easily test the speed-up more systematically by running relatively short simulations (e.g. only a few hundred timesteps) with different numbers of processors and seeing how much speed you gain by adding more. A typical way of doing this is to double the number of processors each time and look at the speed-up, so start at e.g. 50, then 100, 200, 400, etc. |
|
October 4, 2013, 05:43 |
|
#6 |
Senior Member
Bernhard
Join Date: Sep 2009
Location: Delft
Posts: 790
Rep Power: 22 |
Be careful, GAMG does not necessarily outperform PCG in large parallel cases; see: https://www.hpc.ntnu.no/display/hpc/...lywithpisoFoam
|
|
October 11, 2013, 14:32 |
|
#7 |
Senior Member
Santiago Marquez Damian
Join Date: Aug 2009
Location: Santa Fe, Santa Fe, Argentina
Posts: 452
Rep Power: 24 |
Hi, it was suggested on this forum to use ~50k cells per processor, which gives you ~160 processors. I think beyond this value the speed-up will start to decrease due to communication times.
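For an 8.1M cell grid that is roughly 8.1e6 / 50e3 ≈ 162 subdomains, so something like this in decomposeParDict would get you there (scotch is just the method I would pick here because it needs no manual direction splitting; simple or hierarchical also work):

    numberOfSubdomains 160;
    method             scotch;   // graph-based decomposition, balances cells automatically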
Regards.
__________________
Santiago MÁRQUEZ DAMIÁN, Ph.D. Research Scientist Research Center for Computational Methods (CIMEC) - CONICET/UNL Tel: 54-342-4511594 Int. 7032 Colectora Ruta Nac. 168 / Paraje El Pozo (3000) Santa Fe - Argentina. http://www.cimec.org.ar |
|
October 23, 2013, 03:35 |
|
#8 |
Member
Join Date: Aug 2011
Posts: 89
Rep Power: 15 |
Hello,
I tried a lot, but in my case I need 320 processors to get a "fast" simulation. With 160 processors I need more than twice the time I need with 320 processors. But I still need 5-6 s per complete iteration step (I count the time between the appearance of one "Courant Number mean..." line and the next in my output file). Is it normal that interFoam needs so much time? Thanks a lot |
|
|
|