So what's exactly bad about triangular cell based grids? |
|
February 20, 2022, 23:46 |
So what's exactly bad about triangular cell based grids?
|
#1 |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Making a solver that uses triangular-cell-based grids would be the best way to ensure your solver works with every geometry.
So what exactly is bad about them? I can only see two drawbacks.

Performance-wise: we need to read additional connectivity information between neighbor cells to make the solver work, and neighbor cells can be far apart in memory, forcing us to jump around. Both can be mitigated by creating small patches of triangles and organizing them closely in memory, so that the index distance between any two neighboring cells stays below, say, 512 (see the sketch below).

Accuracy-wise: boundary layers need thin cells, which is difficult with conventional triangles. But to solve that, we can just split the thin boundary-layer quads in half to create two triangles each. Heck, we can even use anisotropic triangular grids for efficient shock capturing; that's how NASA's unstructured code does it.

So what's bad about triangular-cell-based grids? Everyone harps on them unfairly.
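A minimal sketch of the patch idea (all names are hypothetical, not taken from any particular code): renumber the cells patch by patch so each patch is contiguous, store the face neighbors in a compressed (CSR-style) list, and neighbor lookups then stay within a small index window.

Code:
/* Minimal sketch: CSR-style cell-to-neighbor connectivity for a
 * triangular grid, with cells renumbered patch by patch so that
 * neighbor indices stay close together in memory.
 * All names are hypothetical illustrations. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int  ncells;
    int *nbr_start;  /* size ncells+1: offsets into nbr_index      */
    int *nbr_index;  /* size nbr_start[ncells]: neighbor cell ids  */
} Connectivity;

/* Check that every neighbor lies within `window` indices of its cell,
 * i.e. the patch renumbering actually kept neighbors close. */
int check_locality(const Connectivity *c, int window)
{
    for (int cell = 0; cell < c->ncells; ++cell)
        for (int k = c->nbr_start[cell]; k < c->nbr_start[cell + 1]; ++k)
            if (abs(c->nbr_index[k] - cell) > window)
                return 0;
    return 1;
}

int main(void)
{
    /* Tiny example: a strip of 4 triangles, each bordering the next. */
    int starts[]  = {0, 1, 3, 5, 6};
    int indices[] = {1, 0, 2, 1, 3, 2};
    Connectivity c = {4, starts, indices};

    printf("neighbors within 512 of each cell? %s\n",
           check_locality(&c, 512) ? "yes" : "no");
    return 0;
}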
|
February 21, 2022, 05:58 |
|
#2 |
Senior Member
|
First and foremost, quad/hexa cells have an advantage in error cancellation between opposite faces of a cell. Strictly speaking this holds for uniform spacing, but in practice it also helps in more general cases (a worked 1D example is sketched below).

Second, non-hexa cells can't be aligned with certain flow features, which means they always come with additional errors that an equivalent hexa cell wouldn't have in the same scenario.

Third, as a bridge between the second reason above and the fourth below, hexa cells can be adapted to certain flow features without spending too many cells. Imagine a flat plate: you have two very different resolution requirements in the wall-parallel and wall-normal directions. Hexa gives you the chance to use very high aspect-ratio cells, which are optimal in this case, without impairing the mesh quality.

Fourth, mesh quality and size, which is already touched on above but deserves its own point. When you mesh with tri/tetra, it is generally detrimental to have long, elongated cells. Some would argue that a robust second-order solver can be made insensitive to this, but such robustness is not simply built into an FV code like a telescoping property. Besides the pure error (which grows), there are matrix conditioning issues, gradient computation issues, etc. Basically, it is just a bad habit to have bad tri/tetra in your mesh, so you inevitably end up using more cells than in the hexa case to resolve certain flow features or geometric arrangements. LES/DNS are a great benchmark for this because the numerics are pushed to the limit and end up governing part of the flow dynamics (one could debate whether that is actually OK, but that's another story, and it is how most LES are done today), whereas in RANS it is covered by the turbulence model.
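To make the first point concrete, here is the usual 1D finite-volume sketch (my own notation, uniform spacing \(\Delta x\) assumed): linear interpolation to a face carries an \(O(\Delta x^2)\) error, but on a uniform grid the leading errors at opposite faces nearly cancel when the fluxes are differenced:

\[
\phi_e = \tfrac{1}{2}\left(\phi_P + \phi_E\right) = \phi(x_e) + \frac{\Delta x^2}{8}\,\phi''(x_e) + O(\Delta x^4),
\]
\[
\frac{\phi_e - \phi_w}{\Delta x} = \frac{\phi(x_e) - \phi(x_w)}{\Delta x} + \frac{\Delta x}{8}\left[\phi''(x_e) - \phi''(x_w)\right] + \ldots = \phi'(x_P) + O(\Delta x^2).
\]

On a skewed or irregular tri/tetra stencil the two face errors carry different coefficients, the cancellation is lost, and the leading error is larger.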
|
February 21, 2022, 09:08 |
|
#3 |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
I would most likely write a generic solver that works with only quad + tri cells. No fancy poly/hexa cells.
I don't fully trust poly/hexa meshes right now. They say they speed up convergence because there are fewer cells in the whole domain. Okay, but that's a bit like saying our code is fast because it uses floats instead of doubles. Large poly/hexa cells (I think) behave just like coarse cells, and they probably won't capture detailed flow features if the cells are too large. To capture detailed flow features we would need to create small poly/hexa cells, which will then naturally slow down convergence because there are more cells to compute. Moreover, Star-CCM's adaptive refinement code subdivides the poly/hexa cells into triangles (correction: highly skewed quads that look like triangles), which (lol) will face the same issues that poly/hexa cells were trying to avoid. So CFD companies saying poly/hexa cells speed up convergence feels like a dirty... dirty... sales tactic. I don't fully trust it, but of course I'm still learning, so I could be wrong.

Of course I'm not saying they're entirely bad. Tri cells do have issues with fluid flows that poly/hexa cells don't seem to have, because there are so many faces on a poly/hexa cell that the flow can enter it from almost any direction. Even if the flow comes in at a weird angle, at least one or more faces will be able to compute the fluxes properly, so the error accumulating inside the cell will not be as large as in a tri cell or even a quad cell. But I don't trust it.

For implementing a flow solver for poly/hexa cells we would need to store 5, 6, or more indices or pointers to the neighbor elements of each cell, which means extra memory. One small company's commercial code I personally saw (but didn't understand at the time) allowed 8-15 neighbor cells, which felt like massive overkill. This skyrocketed the memory required and increased cache misses because most of the neighbor slots were just left empty (see the quick arithmetic sketch below). I'm not saying other commercial codes waste that much memory, but they're at least wasting a little bit; maybe one commercial code supports a maximum of 12 neighbors and another only 10.

I will most likely stick with quad + tri cells for 2D: thin quads in the boundary layer, square quads in the majority of the domain, and tri cells wherever quads can't be used and for connecting two separate grids together.
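A quick arithmetic sketch of that memory point (all numbers and names are made up, purely illustrative): compare a fixed-width neighbor table padded to some maximum count against a CSR-style variable-length list.

Code:
/* Hypothetical comparison of per-cell neighbor storage:
 * a fixed-width table padded to a maximum neighbor count
 * versus a CSR-style variable-length list. */
#include <stdio.h>

int main(void)
{
    const long   ncells        = 10 * 1000 * 1000; /* 10M cells         */
    const long   max_neighbors = 15;               /* fixed-width slots */
    const double avg_used      = 5.0;              /* e.g. tri/quad mix */
    const long   bytes_per_id  = 4;                /* 32-bit cell index */

    double fixed = (double)ncells * max_neighbors * bytes_per_id;
    double csr   = (double)ncells * (avg_used * bytes_per_id   /* neighbor ids       */
                                     + bytes_per_id);          /* one offset per cell */

    printf("fixed-width table: %.1f MB\n", fixed / 1e6);
    printf("CSR-style list:    %.1f MB\n", csr / 1e6);
    return 0;
}

With those made-up numbers the fixed table costs roughly 600 MB against roughly 240 MB for the CSR layout, and the empty slots still get dragged through the cache on every sweep.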
|
February 21, 2022, 11:27 |
|
#4 |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,750
Rep Power: 66 |
Polyhedral cells, the way they are generated today, are basically merged tets. Polyhedral grids have more in common with tetrahedral grids than with hexahedral grids. If you believe in tets, you should believe in polys. They're basically the same thing, except one looks like a pyramid and the other looks like a soccer ball. Anything good you can say about tets you can also say about polys.

Understanding what "speeding up convergence" means requires reading more than just the fine print, but the way it is usually used, it means the case converges in fewer iterations. Hex and poly meshes do have this kind of faster convergence, and you can get this at a glance just by looking at the matrix structure. You don't need a salesperson to tell you this; it's a property of the matrix. That's why it's such a great sales tactic.
|
February 21, 2022, 12:06 |
|
#5 | |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Quote:
What's the technical name of this property? Some matrices are symmetric, some are tridiagonal; what is this feature called?
February 21, 2022, 12:23 |
|
#6 |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,750
Rep Power: 66 |
The really, really technical name is spectral radius, but the at-a-glance part is just looking at how diagonally dominant the matrix is. FWIW, abusing the spectral radius is how convergence accelerators based on agglomeration (i.e. geometric multigrid and algebraic multigrid) work. None of this proves, of course, that the case converges faster per compute year, which is in the fine print. Or not; usually there is no fine print.
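For concreteness, in standard textbook notation (nothing solver-specific): split \(A = D - L - U\) into its diagonal and off-diagonal parts. A simple Jacobi-type iteration

\[
x^{(k+1)} = D^{-1}(L+U)\,x^{(k)} + D^{-1}b
\]

converges when the spectral radius of the iteration matrix is below one, and for a strictly diagonally dominant matrix

\[
\rho\!\left(D^{-1}(L+U)\right) \;\le\; \max_i \frac{\sum_{j\neq i} |a_{ij}|}{|a_{ii}|} \;<\; 1 .
\]

The error shrinks roughly like \(\rho^k\), so the more diagonally dominant the matrix, the fewer iterations you need.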
|
|
February 21, 2022, 12:51 |
|
#7 |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Hol' up. Even tri grids can be re-ordered to be diagonally dominant: https://blog.pointwise.com/2020/08/2...ulation-speed/ I suppose it's the same thing?
|
February 21, 2022, 13:10 |
|
#8 |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,750
Rep Power: 66 |
Re-ordering a grid doesn't change its eigenvalues or spectral radius. The point is not to re-order the matrix to make it diagonally dominant. This is a no-brainer and happens in every respectable solver anyway (especially when you have to partition the mesh for parallel). The point is, given two matrices, which one is MORE diagonally dominant than the other? And that's where you need to get technical and look at eigenvalues and spectral radius.
So no, it's not the same thing. But you can quickly glance at two matrices and tell that one is more diagonally dominant than the other.
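And the linear-algebra reason re-ordering can't help here (a textbook fact, not specific to any solver): a cell re-ordering is a symmetric permutation of the matrix, which is a similarity transform, so the spectrum is untouched:

\[
\tilde{A} = P A P^{\mathsf T},\; P \text{ a permutation matrix} \;\Rightarrow\; \det(\tilde{A} - \lambda I) = \det\!\left(P\,(A - \lambda I)\,P^{\mathsf T}\right) = \det(A - \lambda I) \;\Rightarrow\; \rho(\tilde{A}) = \rho(A).
\]

Re-ordering changes bandwidth and fill-in (which matters for cache behavior and for direct/ILU factorizations), not the eigenvalues.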
|
February 21, 2022, 13:44 |
|
#9 | |
Senior Member
|
Quote:
|
February 22, 2022, 05:57 |
|
#10 | |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Quote:
Or better, what if we combine cells from the Voronoi diagram of the tri grid? Also, since the Voronoi diagram is essentially the dual grid used in node-centered tri grid solvers, wouldn't they naturally benefit from the same properties as the poly grids? After all, the poly grids are created from the dual grid of some initial tri grid.
February 22, 2022, 11:57 |
|
#11 |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,750
Rep Power: 66 |
Tetrahedral grids are generated from Voronoi diagrams (or more specifically, a Delaunay triangulation). I don't follow how merging cells from a Voronoi diagram of a grid that was itself generated from a Voronoi diagram is different from simply merging cells of the same grid. It sounds like a very cumbersome and redundant way of saying that polyhedral grids are generated from merged tetrahedral cells.

I don't follow why we need to talk about what-ifs when that is exactly how polyhedral grids are generated today in Star-CCM, in Fluent, etc. Having a higher spectral radius isn't the solution to all your problems either. You know what happens if your convergence accelerator is doing too much accelerating and is too aggressive... it blows up. There's a tradeoff game you have to play for each and every case. And we haven't even gotten into whether you are using a segregated or coupled solver, because those equations naturally have their own characteristics, which make them better or worse suited to a larger or smaller spectral radius.
|
February 22, 2022, 14:27 |
|
#12 |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 |
Let me point out a few things from a flow-solver point of view:

1. The major impact of unstructured meshes is on the gradients.

2. The errors in the gradients affect:
a. convection terms
b. diffusion terms
c. turbulence production
d. turbulent viscosity
etc.

3. The major sources of error in the gradients are:
a. skew dependence
b. wrong limiting, or no limiting, of the gradients

4. Skew in the mesh also affects the flux dissipation, and as the skew increases the coupling of pressure and velocity weakens.

NOTE: The skew-related problems in the gradients are mostly under control in third-order solvers due to the larger stencil. Second-order gradients, once calculated from a larger stencil, would be much more accurate if one is after accuracy.
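To make point 3a concrete, here is a minimal 2D cell-centered Green-Gauss gradient sketch (names, array layout, and the two-cell example are hypothetical): the face value is taken as the plain average of the two adjacent cell values, so whenever the mesh is skewed that average misses the face centroid and the error goes straight into the gradient.

Code:
/* Minimal 2D cell-centered Green-Gauss gradient sketch.
 * grad(phi)_P ~= (1/V_P) * sum_f phi_f * S_f
 * with phi_f taken as the simple average of the two adjacent cell
 * values; on skewed cells this average misses the face centroid and
 * the gradient picks up a skew-dependent error.
 * All names and array layouts are hypothetical. */
#include <stdio.h>

#define NCELLS 2
#define NFACES 1   /* interior faces only, for brevity */

typedef struct { double x, y; } Vec2;

void green_gauss(const double *phi, const double *vol,
                 int face_cells[][2], const Vec2 *face_area, Vec2 *grad)
{
    for (int c = 0; c < NCELLS; ++c) { grad[c].x = 0.0; grad[c].y = 0.0; }

    for (int f = 0; f < NFACES; ++f) {
        int    P = face_cells[f][0], N = face_cells[f][1];
        double phif = 0.5 * (phi[P] + phi[N]);   /* the skew error lives here */

        grad[P].x += phif * face_area[f].x;  grad[P].y += phif * face_area[f].y;
        grad[N].x -= phif * face_area[f].x;  grad[N].y -= phif * face_area[f].y;
    }
    for (int c = 0; c < NCELLS; ++c) { grad[c].x /= vol[c]; grad[c].y /= vol[c]; }
}

int main(void)
{
    /* Two unit cells sharing one face whose area vector points in +x.
     * Boundary faces are omitted, so the printed value is only
     * illustrative of the loop structure, not a true gradient. */
    double phi[NCELLS] = {1.0, 2.0}, vol[NCELLS] = {1.0, 1.0};
    int    face_cells[NFACES][2] = {{0, 1}};
    Vec2   face_area[NFACES] = {{1.0, 0.0}}, grad[NCELLS];

    green_gauss(phi, vol, face_cells, face_area, grad);
    printf("grad phi in cell 0: (%g, %g)\n", grad[0].x, grad[0].y);
    return 0;
}

In practice solvers add skew corrections or switch to least-squares gradients; the sketch only shows where the skew sensitivity enters.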
|
February 22, 2022, 15:38 |
|
#13 | |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,855
Rep Power: 73 |
Quote:
Well, I want to spend just a few words, having worked a long time ago on triangular unstructured grids and the FV method. The best way to use triangles (or tetrahedra) is to look at the FE manner of introducing shape functions on a Lagrangian simplex. This way, the function is fully defined within the element and you can accurately represent the gradient. For example, I worked with 6-node quadratic elements. The convective and pressure terms require no gradient when the Gauss theorem is adopted; only the diffusive flux does (and, if present, an eddy-viscosity term).

As far as the spectral radius is concerned, one should look at the spectral radius of the resulting iteration matrix (which is different from the spectral radius of the original system matrix) for the convergence of an iterative method. Is that the question?
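For reference, these are the standard quadratic (P2) shape functions on a triangle, written in barycentric coordinates \(\lambda_1, \lambda_2, \lambda_3\) (textbook form, independent of any specific code):

\[
N_i = \lambda_i\,(2\lambda_i - 1) \quad \text{(vertex nodes)}, \qquad
N_{ij} = 4\,\lambda_i \lambda_j \quad \text{(mid-edge nodes)},
\]

so \(\phi = \sum_a N_a\,\phi_a\) is a complete quadratic within the element and its gradient is linear and defined everywhere inside it, which is what lets the gradient be represented accurately.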
|
|