Applying AMG to explicit CFD solvers by solving in Ax=b form |
|
August 24, 2022, 20:11 |
Applying AMG to explicit CFD solvers by solving in Ax=b form
|
#1 |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Hello everyone,
I'm realizing that developing geometric multigrid (GMG) methods for explicit solvers on unstructured grids might be a lot of work, so I'm wondering if it's possible to use AMG methods instead. Converting the equations to a sparse Ax=b form and then applying AMG to solve it just might be possible. Since explicit equations are easy to parallelize, I'm wondering if the same can be done with the matrix A, i.e. domain-decompose it across all processors and apply AMG locally to the matrix "block" on each processor. The benefit of this method, if it works, would be that it could serve explicit solvers on all kinds of grids (tri, quad, hex, prism, tetra, in both 2D and 3D), and we would save development time. |
|
August 25, 2022, 00:37 |
|
#2 |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,747
Rep Power: 66 |
The AMG approach is used in just about every commercial CFD code, so unless I've missed something about your case, I don't see why it couldn't be done.
|
|
August 25, 2022, 01:42 |
|
#3 | |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 |
Quote:
As mentioned, AMG methods are pretty much the default in all commercial software. Writing a good parallel AMG can be time-consuming, so you can try libraries like hypre. |
||
August 25, 2022, 05:07 |
|
#4 |
Senior Member
|
If I understand you correctly, you mean something like an algebraic FAS.
This, or a similar question, already popped up here some time ago. I don't recall all the reasons against it now, but there is certainly one: the main difficulty in AMG (at least in its common FV implementation as additive correction) is building the matrix in parallel at the coarse levels (the serial version is pretty much trivial). Once you have that, using it explicitly makes no sense. Which reminds me of another fundamental reason: the power of AMG really lies in the implicit realm. In the explicit case, FAS is much less effective at accelerating the solution, whose advancement speed is mostly governed by the time step. |
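For reference, the additive-correction coarse operator mentioned above is built purely algebraically: fine cells are grouped into aggregates, the restriction R is piecewise constant over them, and the coarse matrix is the Galerkin product A_c = R A R^T, i.e. just sums of fine-level coefficients. A minimal serial sketch (variable names are illustrative):

```python
import numpy as np

def coarse_matrix(A, aggregates):
    """Additive-correction coarsening: aggregates[i] is the coarse cell
    that fine cell i belongs to.  With a piecewise-constant R the coarse
    matrix is simply the fine coefficients summed over aggregates."""
    aggregates = np.asarray(aggregates)
    nc = aggregates.max() + 1
    R = np.zeros((nc, len(aggregates)))
    R[aggregates, np.arange(len(aggregates))] = 1.0
    return R @ A @ R.T, R

# 1D Laplacian on 4 cells, coarsened two fine cells per coarse cell
A = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
Ac, R = coarse_matrix(A, [0, 0, 1, 1])
```

The parallel difficulty discussed in this thread is exactly this step: once aggregates straddle processor boundaries, assembling A_c requires communicating off-processor coefficients at every level.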
|
August 25, 2022, 07:48 |
|
#5 | |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Quote:
If my code is explicit, it will be far too slow for most people's use. Luckily, I now have enough brain cells to understand implicit codes and am slowly moving away from explicit ones. |
||
August 25, 2022, 07:59 |
|
#6 | |
Senior Member
|
Quote:
In this case, what Luckytran and Arjun wrote applies: using AMG for the linearized systems of equations arising from implicit discretizations is, indeed, the norm in most commercial solvers out there. As I already wrote, doing it in serial is trivial, at least for the common additive-correction multigrid used in most FV solvers. The difficult part is the parallel one: it has costs at each iteration, there is no definitive guide out there on how to implement it, nor is the matter such that some specific way would appear obvious from the context. There are some papers from Maximilian Emans, a former AVL developer, that give some context and clues, and some others from the Moukalled and Darwish group, but that's it. Most of the remaining material is focused on more complex AMG flavors, mostly aimed at math-oriented people (who like to complicate things a lot). Note, however, that the system solver is not, in general, a difficult part of the implicit approach. An LU-SGS is just a couple of loops over the cells in which you update the system unknowns. Most Krylov methods add some matrix-vector products, which again are the same kind of loop. AMG is the exception here, because you need to update the matrix at each level in parallel. For all the others, you just need a sparse-matrix data structure and you're done; the parallel exchange is no different from that of other variables. The difficult part is the book-keeping: which sign does this element have, into which matrix element does this term go, etc. Explicit has its use cases, but those are in the unsteady realm, no doubt. |
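The "couple of loops over the cells" can be made concrete. A symmetric Gauss-Seidel sweep (the unfactored relative of LU-SGS) is literally a forward and a backward loop updating the unknowns in place; this dense NumPy sketch is illustrative, not any solver's actual implementation:

```python
import numpy as np

def sgs_sweep(A, b, x):
    """One symmetric Gauss-Seidel sweep: a forward loop over the cells
    followed by a backward one, updating unknowns in place."""
    n = len(b)
    for i in range(n):                 # forward sweep
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    for i in range(n - 1, -1, -1):     # backward sweep
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[ 4., -1.,  0.],
              [-1.,  4., -1.],
              [ 0., -1.,  4.]])
b = np.array([1., 2., 3.])
x = np.zeros(3)
for _ in range(30):
    x = sgs_sweep(A, b, x)
```

In a real FV code the inner products over a row become loops over the faces of the cell, but the structure, and the book-keeping burden, is the same.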
||
August 25, 2022, 16:40 |
|
#7 |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 |
Why not just use BiCGStab or GMRES? They are not tough to write, and in a compressible solver the linear system is easier to solve than in the incompressible case (hyperbolic vs. elliptic system).
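To back up the "not tough to write" claim: the textbook (van der Vorst) BiCGStab is about twenty lines. This dense NumPy sketch is unpreconditioned and omits breakdown safeguards, so it's a starting point rather than production code:

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, maxiter=200):
    """Textbook BiCGStab: unpreconditioned, no breakdown safeguards."""
    x = np.zeros_like(b)
    r = b - A @ x
    r0 = r.copy()                      # shadow residual
    rho = alpha = omega = 1.0
    p = np.zeros_like(b)
    v = np.zeros_like(b)
    for _ in range(maxiter):
        rho_new = r0 @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r0 @ v)
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if np.linalg.norm(r) < tol:
            break
    return x

# small nonsymmetric system, roughly mimicking a non-self-adjoint operator
A = np.array([[ 3., -1.,  0.],
              [-2.,  4., -1.],
              [ 0., -2.,  5.]])
b = np.array([1., 0., 1.])
x = bicgstab(A, b)
```

Each iteration costs two matrix-vector products, which in a parallel FV code are the same halo-exchange loops as any explicit residual evaluation.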
|
|
August 25, 2022, 16:54 |
|
#8 |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Thanks Arjun! I will most likely use LU-SGS or BiCGStab or GMRES. Which one is the easiest?
|
|
August 25, 2022, 16:56 |
|
#9 | |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 |
Quote:
Try BiCGStab: even though LU-SGS will be faster, BiCGStab will be more robust. If you want parallel consistency, then use polynomial methods as the preconditioner for BiCGStab. |
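The appeal of a polynomial preconditioner for parallel consistency is that it consists only of a fixed number of matrix-vector products, so the result is independent of the domain partitioning (unlike, say, a partition-local ILU or Gauss-Seidel). A minimal sketch using a truncated Neumann/Jacobi polynomial, with illustrative names:

```python
import numpy as np

def poly_precond(A, r, k=3):
    """Approximate z ~ A^{-1} r with k Jacobi steps from z = 0, i.e. a
    degree-(k-1) polynomial in (I - D^{-1} A) applied to D^{-1} r.
    Only mat-vecs and diagonal scaling: partition-independent."""
    d = np.diag(A)
    z = np.zeros_like(r)
    for _ in range(k):
        z = z + (r - A @ z) / d
    return z

A = np.array([[ 4., -1.,  0.],
              [-1.,  4., -1.],
              [ 0., -1.,  4.]])
r = np.array([1., 1., 1.])
z1 = poly_precond(A, r, k=1)   # plain diagonal scaling
z4 = poly_precond(A, r, k=4)   # higher-degree polynomial
```

Inside BiCGStab this `poly_precond` would be applied to the residual-like vectors in place of M^{-1}; the higher the degree, the better it approximates A^{-1}, at the cost of extra mat-vecs per iteration.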
||
August 26, 2022, 17:56 |
|
#10 | |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Quote:
BiCGStab seems to be a combination of BiCG and GMRES. SU2 uses FGMRES by default. I think it might be easier for me to implement and use FGMRES first, then move on to BiCGStab. |
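For a first implementation, full (non-restarted, unpreconditioned) GMRES is mostly the Arnoldi loop plus a small least-squares solve; FGMRES then differs only in also storing the preconditioned vectors so the preconditioner may change between iterations. A minimal dense sketch (not SU2's implementation):

```python
import numpy as np

def gmres_full(A, b, m=None):
    """Full GMRES: build an Arnoldi basis of dimension m, then minimize
    ||beta*e1 - H y|| by least squares.  No restarts, no preconditioner."""
    n = len(b)
    m = n if m is None else m
    x0 = np.zeros(n)
    r = b - A @ x0
    beta = np.linalg.norm(r)
    Q = np.zeros((n, m + 1))           # orthonormal Krylov basis
    H = np.zeros((m + 1, m))           # upper Hessenberg matrix
    Q[:, 0] = r / beta
    for j in range(m):                 # Arnoldi process
        w = A @ Q[:, j]
        for i in range(j + 1):         # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return x0 + Q[:, :m] @ y

A = np.array([[ 3., -1.,  0.],
              [-2.,  4., -1.],
              [ 0., -2.,  5.]])
b = np.array([1., 0., 1.])
x = gmres_full(A, b)
```

A production version would add restarts (to bound the stored basis), Givens rotations instead of the dense least-squares solve, and a preconditioner application before each `A @` product.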
||
August 27, 2022, 01:58 |
|
#11 | |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 |
Quote:
Please also check out the PyAMG library. There you can learn how to implement BiCGStab or GMRES, or even FGMRES if I remember correctly. It's on GitHub too. |
||
August 27, 2022, 04:36 |
|
#12 | |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Quote:
|
||
August 27, 2022, 04:37 |
|
#13 | |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 |
Quote:
That's why I suggested PyAMG. You can see the Python code for these methods, which makes them easier to learn. |
||
October 20, 2023, 03:57 |
|
#14 |
New Member
Join Date: Oct 2023
Location: CN
Posts: 6
Rep Power: 3 |
I have a question: are the GMRES, LU-SGS, and BiCG methods used as smoothers in AMG? As I understand it, AMG has pre-smoothing, coarse-grid correction, and post-smoothing; are all the methods mentioned above used for the coarse-grid correction?
|
|
October 20, 2023, 09:40 |
|
#15 |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 |
Even though the post looks bot-generated: I think (I'd need to double check) only CFX has an AMG that uses BiCG as the smoother.
|
|
October 21, 2023, 03:30 |
|
#16 |
New Member
Join Date: Oct 2023
Location: CN
Posts: 6
Rep Power: 3 |
Yeah, but in Fluent there are G-S/ILU used as the smoother, so do the BiCGSTAB/RPM/GMRES options, which can be chosen under Stabilization Method, mean the iteration for the coarse grid?
|
|
October 21, 2023, 05:18 |
|
#17 |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 |
||
October 21, 2023, 05:37 |
|
#18 |
New Member
Join Date: Oct 2023
Location: CN
Posts: 6
Rep Power: 3 |
Sorry, maybe I didn't get you. What I'm wondering is: just as in Fluent, I can use G-S/ILU for pre/post-smoothing, but what is the iteration method for the coarse mesh, a direct solve or methods like GMRES/BiCGSTAB? I only started learning the AMG method recently, and I'm confused about whether GMRES/BiCGSTAB are used as the iteration to solve the coarse grid, or whether AMG is a preconditioner applied before we use GMRES.
Anyway, thank you for your response. |
|
October 21, 2023, 10:22 |
|
#19 | |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 |
Quote:
Yes, you can use G-S and ILU as smoothers; these are the most popular options. AMG can be used as a stand-alone linear system solver, and it can also be used as a preconditioner for GMRES, BiConjugate-gradient, etc. BiConjugate-gradient and GMRES are rarely used as smoothers because they are expensive, i.e. they take more time. |
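Putting the pieces of this thread together, a two-level cycle makes the roles explicit: G-S as pre/post-smoother, a coarse-grid correction through an additive-correction (Galerkin) coarse matrix, and a direct solve standing in for the coarsest-level treatment. A minimal sketch with illustrative names, not any particular code's implementation:

```python
import numpy as np

def gs(A, b, x, sweeps=2):
    """Gauss-Seidel smoother: a few forward sweeps, in place."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

def two_level_cycle(A, b, x, R):
    """Pre-smoothing, coarse-grid correction, post-smoothing."""
    x = gs(A, b, x)                      # pre-smoothing
    r = b - A @ x                        # fine-level residual
    Ac = R @ A @ R.T                     # additive-correction coarse matrix
    ec = np.linalg.solve(Ac, R @ r)      # coarse solve (direct here)
    x = x + R.T @ ec                     # prolongate and correct
    return gs(A, b, x)                   # post-smoothing

# 1D Poisson on 8 cells; two fine cells aggregated per coarse cell
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
agg = np.repeat(np.arange(n // 2), 2)
R = np.zeros((n // 2, n))
R[agg, np.arange(n)] = 1.0
x = np.zeros(n)
for _ in range(30):
    x = two_level_cycle(A, b, x, R)
```

A real AMG recurses instead of solving the coarse system directly, and the "stabilization" Krylov methods discussed above wrap this whole cycle as a preconditioner rather than living inside it.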
||