|
Different convergence behavior on different computers with single config file |
|
February 4, 2021, 15:43 |
Different convergence behavior on different computers with single config file
|
#1 |
Senior Member
|
Hi there,
I am running a few standard validation and verification test cases. Here I am using the ONERA M6 case, which is a popular case for shocks and shock/boundary-layer interaction. I have generated a coarse mesh that has no problems, and a config file using the JST scheme and multigrid, without CFL adaptation. Here is the issue I hope someone can explain: the same mesh and the same config file on two different computers lead to different convergence behavior.

1. Computer 1: mpirun --use-hwthread-cpus -np 16 SU2_CFD config_file
Converges to 1e-10 in about 5k iterations.
2. Computer 2: mpirun -n 8 SU2_CFD config_file
Converges to 1e-10 in almost 3k iterations.

Without multigrid both show the same convergence pattern and reach 1e-10 in almost 15k iterations. With MG (see the two attached plots) both converge nicely and reach the same CL and CD, but the number of iterations is not the same. Since the second computer is faster per core, I would expect a lower run time (CPU time per iteration) but the same overall number of iterations. I don't understand why, with the same config file and the same mesh, the number of iterations needed to reach 1e-10 changes from one computer to the other when running SU2 in parallel with MG switched on. I do understand that MG converges in fewer iterations by cycling the solution between fine and coarse grids so that high- and low-frequency errors are damped faster, but here the MG settings are identical for both runs. How does MPI affect the multigrid in SU2?
Best, Pay
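For context, a minimal sketch of the numerics and multigrid options such a config file would contain (these are standard SU2 config keys, but the values shown are illustrative assumptions, not Pay's actual settings):

% Numerics: central JST scheme with a fixed CFL (no adaptation)
CONV_NUM_METHOD_FLOW= JST
JST_SENSOR_COEFF= ( 0.5, 0.02 )
CFL_NUMBER= 4.0
CFL_ADAPT= NO
% Geometric multigrid: three coarse levels, V-cycle
MGLEVEL= 3
MGCYCLE= V_CYCLE
MG_PRE_SMOOTH= ( 1, 2, 3, 3 )
MG_POST_SMOOTH= ( 0, 0, 0, 0 )
MG_DAMP_RESTRICTION= 0.75
MG_DAMP_PROLONGATION= 0.75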
|
February 5, 2021, 06:51 |
|
#2 |
Senior Member
Pedro Gomes
Join Date: Dec 2017
Posts: 466
Rep Power: 14 |
SU2 uses geometric multigrid; the agglomeration algorithm operates on the subdomains/partitions created by ParMETIS.
To my knowledge the ParMETIS algorithm does not apply any criterion for the quality of the subdomains it creates, which can lead to awkwardly shaped partitions that cannot be coarsened very well. You will notice that as you increase the number of cores, the agglomeration ratio decreases (the coarse grids retain more control volumes) and the number of coarse grids that can be created drops as the agglomeration starts failing. I could not come up with a fix on the multigrid side, so instead I reduced the amount of partitioning by implementing hybrid parallelization (MPI+threads): https://su2foundation.org/wp-content...0/06/Gomes.pdf
Slide 3 shows the same behaviour you found, and slide 4 tells you how to compile and run the code.
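For reference, a rough sketch of the hybrid build and launch described on slide 4 (the meson option and the thread-count flag below are taken from the SU2 documentation as best I recall, so treat them as assumptions and defer to the slides for the exact commands):

# Build SU2 with OpenMP support enabled
./meson.py build -Dwith-omp=true
./ninja -C build install

# Run with few MPI ranks (hence few ParMETIS partitions) and several
# OpenMP threads per rank, e.g. 2 ranks x 8 threads instead of 16 ranks
mpirun -n 2 --bind-to socket SU2_CFD -t 8 config_file

With only two partitions the agglomeration works on much larger contiguous subdomains, so the coarse-grid hierarchy, and with it the multigrid convergence history, stays much closer from one machine or core count to the next.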
|
February 5, 2021, 07:41 |
|
#3
Senior Member
|
Best, Pay |