Linear solver diverges with 24 cores but it's fine with 16
January 22, 2021, 18:34 | #1
Senior Member
Hi there,
I am running a bunch of V&V test cases with my own generated meshes, and I am encountering an interesting issue.

Standard compressible RAE2822: the mesh is fine and has no issues. I had run it with another solver back in 2012 and got good results. When I run the test case with the config file provided in the test-case folder of the SU2 repo, everything is fine until I increase the number of cores from 16 to 24. FYI, I cannot test with 32 because my desktop only has 24 cores, so I don't know whether doubling the core count would also trigger it. The linear solver diverges after 100-something iterations with this error:

Code:
-------------------------------------------------------------------------
FGMRES orthogonalization failed, linear solver diverged.
------------------------------ Error Exit -------------------------------
Error in "void CSysSolve<ScalarType>::ModGramSchmidt(int, su2matrix<ScalarType>&, std::vector<CSysVector<ScalarType> >&) const [with ScalarType = double; su2matrix<ScalarType> = C2DContainer<long unsigned int, double, StorageType::RowMajor, 64, 0, 0>]":
-------------------------------------------------------------------------
FGMRES orthogonalization failed, linear solver diverged.
------------------------------ Error Exit -------------------------------

If I set the preconditioner to ILU with 5 linear solver iterations, the run exits by itself without any error. If I set the preconditioner to ILU with 2 linear solver iterations, it doesn't diverge, but it doesn't converge either: the continuity residual stalls at some level and doesn't go down any further.

I would appreciate help understanding the issue here.

Best,
Pay
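For reference, the linear-solver settings being varied above correspond to options in the SU2 configuration file. A minimal sketch, with option names as they appear in SU2's config_template.cfg; the error tolerance shown is illustrative, and the iteration count is the one discussed in the post:

Code:
% Linear solver for the implicit formulation
LINEAR_SOLVER= FGMRES
% Preconditioner of the Krylov linear solver
LINEAR_SOLVER_PREC= ILU
% ILU fill-in level
LINEAR_SOLVER_ILU_FILL_IN= 0
% Maximum number of linear solver iterations (2 stalls, 5 exits, per the post)
LINEAR_SOLVER_ITER= 5
% Minimum error tolerance of the linear solver (illustrative value)
LINEAR_SOLVER_ERROR= 1E-6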
January 23, 2021, 05:33 | #2
Senior Member
Pedro Gomes
Join Date: Dec 2017
Posts: 466
Rep Power: 14
The SU2 multigrid is known to act up with some core counts but not others.
You can try turning it off. You may also try the adaptive CFL, which will hopefully settle on a lower CFL in the more challenging parts of the grid / flow field.
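A sketch of what those two suggestions look like in an SU2 configuration file (option names as in config_template.cfg; the CFL values below are illustrative, and the CFL_ADAPT_PARAM ordering should be checked against the template for your SU2 version):

Code:
% Disable multigrid: run on a single grid level
MGLEVEL= 0
% Enable adaptive CFL number
CFL_ADAPT= YES
% Starting CFL number (illustrative)
CFL_NUMBER= 5.0
% Adaptive CFL parameters: ( factor down, factor up, CFL min, CFL max )
CFL_ADAPT_PARAM= ( 0.5, 1.5, 1.0, 100.0 )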
February 4, 2021, 15:13 | #3
Senior Member
I already made it work, but I wanted to see what triggers the issue here. I understand now that MG and MPI are not friends here. I am going to open another issue about MG and MPI that could be very interesting. Personally, I'm not a fan of CFL adaptation as a convergence accelerator because it doesn't always work, in particular with high-aspect-ratio boundary-layer cells and shock interaction on a very fine mesh.
January 18, 2023, 03:48 | #4
Member
Ashish Magar
Join Date: Jul 2016
Location: Mumbai, India
Posts: 81
Rep Power: 10
Can you mention how you solved this issue?