
So what's exactly bad about triangular cell based grids?

#1 - February 20, 2022, 23:46 - aerosayan (Sayan Bhattacharjee)
Making a solver that uses triangular cell-based grids would be the best way to ensure your solver works with every geometry.

So what's exactly bad about them?


I can only see two bad things about them.

Performance-wise, one bad thing would be: we need to read some additional connectivity information between neighbor cells to make the solver work, and when neighbor cells are far apart in memory we need to jump around in memory.

Both can be mitigated by creating small patches of triangles and organizing them close together in memory, so that the index distance between any two neighboring cells stays below, let's say, 512.
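A minimal sketch of that patching idea, assuming a plain face-neighbor array (struct and function names here are hypothetical, not from any existing solver): a breadth-first renumbering keeps each triangle's neighbors at nearby indices.

Code:
// Sketch: renumber triangular cells into cache-friendly patches with a
// greedy breadth-first sweep. Mesh layout and names are hypothetical.
#include <cstdint>
#include <queue>
#include <vector>

struct TriMesh {
    // neighbors[3*c + k] = index of the k-th face-neighbor of cell c, or -1 on a boundary
    std::vector<int32_t> neighbors;
    int32_t numCells() const { return static_cast<int32_t>(neighbors.size() / 3); }
};

// Returns newId[oldId]: cells that touch each other get nearby new indices,
// so neighbor lookups mostly stay within a small window of memory.
std::vector<int32_t> buildPatchOrdering(const TriMesh& mesh)
{
    const int32_t n = mesh.numCells();
    std::vector<int32_t> newId(n, -1);
    int32_t next = 0;

    for (int32_t seed = 0; seed < n; ++seed) {
        if (newId[seed] != -1) continue;              // already placed in a patch
        std::queue<int32_t> frontier;
        frontier.push(seed);
        newId[seed] = next++;
        while (!frontier.empty()) {                   // breadth-first growth keeps neighbors close
            const int32_t c = frontier.front();
            frontier.pop();
            for (int k = 0; k < 3; ++k) {
                const int32_t nb = mesh.neighbors[3 * c + k];
                if (nb >= 0 && newId[nb] == -1) {
                    newId[nb] = next++;
                    frontier.push(nb);
                }
            }
        }
    }
    return newId;   // remap the cell data and the neighbor array with this permutation
}

This is essentially a Cuthill-McKee-style bandwidth reduction; it doesn't guarantee the 512 bound, but on typical triangulations it keeps most neighbor offsets small.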

Accuracy-wise, one bad thing would be: boundary-layer cells need to be thin, and that's difficult with conventional triangles. But to solve that, we can just divide the thin boundary-layer quads in half along a diagonal to create two triangles each.
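The bookkeeping for that split is trivial; a tiny sketch (node ids are hypothetical):

Code:
// Split a boundary-layer quad (nodes n0,n1,n2,n3 in counter-clockwise order)
// into two triangles along the n0-n2 diagonal; both stay counter-clockwise.
struct Tri { int n[3]; };

void splitQuad(int n0, int n1, int n2, int n3, Tri& a, Tri& b)
{
    a = Tri{ { n0, n1, n2 } };
    b = Tri{ { n0, n2, n3 } };
}

Picking the shorter diagonal (or alternating it along the layer) usually gives slightly better-shaped triangles in stretched layers.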

Heck, we can even use anisotropic triangular grids for efficient shock capturing. That's how NASA's unstructured code does it.

So what's bad about triangular cell based grids?

Everyone harps on them unfairly.

#2 - February 21, 2022, 05:58 - sbaffini (Paolo Lampitella)
First and foremost, quad/hexa cells have an advantage in error cancellation from opposite faces of a cell. This strictly applies to uniform spacing, but in practice it also helps in more general cases.
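A minimal 1D sketch of that cancellation, assuming uniform spacing \Delta x and linear face interpolation: with \varphi_{i\pm 1/2} = (\varphi_i + \varphi_{i\pm 1})/2, the cell's flux balance gives

\frac{\varphi_{i+1/2} - \varphi_{i-1/2}}{\Delta x} = \frac{\varphi_{i+1} - \varphi_{i-1}}{2\Delta x} = \varphi'_i + \frac{\Delta x^2}{6}\,\varphi'''_i + O(\Delta x^4),

i.e. the odd-order error contributions from the two opposite faces cancel and the estimate stays second order; on a stretched or triangular stencil this cancellation is only partial.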

Second, non-hexa cells can't be aligned with certain flow features, which means that they always come with additional errors that an equivalent hexa cell wouldn't have in the same scenario.

Third, which is a bridge between the second reason above and the fourth reason below, hexa cells can indeed be adapted to certain flow features without spending too many cells. Imagine a flat plate: you have two very different resolution requirements in the wall-parallel and wall-normal directions. Hexa gives you the chance to use very high aspect ratio cells, which are optimal in this case, without impairing the mesh quality.

Fourth, mesh quality and size, which is already mentioned in the points above but also deserves its own point. When you mesh with tri/tetra, it is generally detrimental to have long, elongated cells. Some would argue that a second-order solver can be made robust with respect to this, but that robustness is not something simply built into an FV code like a telescoping property. Besides the pure discretization error (which grows), there are matrix conditioning issues, gradient computation issues, etc. Basically, it is just a bad habit to have bad tri/tetra in your mesh, so you inevitably end up using more cells than in the hexa case to resolve certain flow features or geometric arrangements.

LES/DNS are a great benchmark for this, because the numerics is pushed to its limits and ends up governing part of the flow dynamics (one could debate whether this is actually OK, but that's another story, and it is how most LES are done today), as opposed to RANS, where this is covered by the turbulence model.

#3 - February 21, 2022, 09:08 - aerosayan
I would most likely write a generic solver that works with only quad + tri cells. No fancy poly/hexa cells.

I don't fully trust poly/hexa meshes right now. They say they speed up convergence because there are fewer cells in the whole domain. Okay, but that's just like saying our code is fast because it uses floats instead of doubles. Large poly/hexa cells (I think) behave just like coarse cells, and they probably won't capture detailed flow features if the cells are too large.

To capture detailed flow features we would need to create small poly/hexa cells, which will then naturally slow down convergence because there are more cells to compute.

Moreover, Star-CCM's adaptive refinement code subdivides the poly/hexa cells into triangles (correction: highly skewed quads that look like triangles), which (lol) will face the same issues that poly/hexa cells were trying to avoid.

So CFD companies saying poly/hexa cells speed up convergence feels like a dirty... dirty... sales tactic. I don't fully trust it, but of course I'm still learning, so I could be wrong.

Of course I'm not saying they're entirely bad. Tri cells do have issues with fluid flows which poly/hexa cells don't seem to have, because there are so many faces on a poly/hexa cell that the fluid can enter it from almost any direction. Even if the flow comes in at a weird angle, at least one or more faces will be able to compute the fluxes properly, so the error accumulating inside the cell will not be as large as in a tri cell or even a quad cell.

But I don't trust it. To implement a flow solver for poly/hexa cells we would need to store 5, 6, or more indices or pointers to the neighbor elements of each cell. This means we need extra memory. One small company's commercial code I personally saw (but didn't understand at the time) allowed 8-15 neighbor cells, which felt like massive overkill. This skyrocketed the memory required and increased cache misses, because most of the neighbor indices were just left empty.
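For what it's worth, the usual way to avoid a fixed "max neighbors" array is a CSR-style layout (an offset array plus one flat neighbor list); a rough sketch with hypothetical names:

Code:
// CSR-style cell-to-neighbor storage: no per-cell padding, so a tri cell costs
// 3 entries, a hex 6, and a 14-face polyhedron 14. Names are illustrative only.
#include <cstdint>
#include <vector>

struct CellConnectivity {
    std::vector<int32_t> offset;    // size = numCells + 1; offset[c]..offset[c+1] indexes 'neighbor'
    std::vector<int32_t> neighbor;  // flat list of all face-neighbors, stored cell by cell

    // Visit every neighbor of cell c without knowing its face count in advance.
    template <class Func>
    void forEachNeighbor(int32_t c, Func&& visit) const
    {
        for (int32_t i = offset[c]; i < offset[c + 1]; ++i)
            visit(neighbor[i]);
    }
};

No empty slots, and the data for consecutive cells stays contiguous, so the cache-miss argument mostly goes away.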

I'm not saying other commercial codes are also wasting memory, but they're at least wasting a little bit. Maybe one commercial code supports a maximum of 12 neighbors and another supports only 10.

I will most likely stick with quad + tri cells for 2D: thin quads in the boundary layer + thick square quads in the majority of the domain + tri cells wherever quads can't be used, and for connecting two separate grids together.

#4 - February 21, 2022, 11:27 - LuckyTran
Polyhedral cells, the way they are generated today, are basically merged tets. Polyhedral grids have more in common with tetrahedral grids than with hexahedral grids. If you believe in tets, you should believe in polys. They're basically the same thing, except one looks like a pyramid and the other looks like a soccer ball. Anything good you can say about tets you can also say about polys.

Understanding what "speeds up convergence" means requires reading more than just the fine print, but the way it is used, it means convergence in fewer iterations. Hex and poly meshes do have this kind of faster convergence, and you can see it at a glance just by looking at the matrix structure. You don't need a salesperson to tell you this, it's a property of the matrix. That's why it's such a great sales tactic.

#5 - February 21, 2022, 12:06 - aerosayan
Quote:
Originally Posted by LuckyTran
Hexes and poly meshes do have this kind of faster convergence and you can get this from a glance just by looking at the matrix structure. Doesn't need a salesperson to tell you this, it's a property of the matrix. That's why it's such a great sales tactic.

What's the technical name of this property?


Some matrices are symmetric, some are tri-diagonal, what's this feature called?

#6 - February 21, 2022, 12:23 - LuckyTran
The really, really technical name is spectral radius, but the at-a-glance part is just looking at how diagonally dominant the matrix is. FWIW, exploiting the spectral radius is how convergence accelerators based on agglomeration (i.e. geometric multigrid and algebraic multigrid) work. None of this proves, of course, that the case converges faster in actual compute time, which is in the fine print. Or not, usually there is no fine print.
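One hedged way to see this "property of the matrix" numerically is to estimate the spectral radius of the Jacobi iteration matrix G = I - D^{-1}A by power iteration. A toy dense sketch, illustrative only; it assumes the dominant eigenvalue is real and simple, which is typical of diffusion-dominated FV matrices:

Code:
// Toy estimate of the spectral radius of the Jacobi iteration matrix
// G = I - D^{-1} A by power iteration (dense, illustrative only).
#include <cmath>
#include <cstddef>
#include <vector>

double jacobiSpectralRadius(const std::vector<std::vector<double>>& A, int iters = 200)
{
    const std::size_t n = A.size();
    std::vector<double> x(n, 1.0), y(n);
    double rho = 0.0;

    for (int it = 0; it < iters; ++it) {
        // y = G x = x - D^{-1} A x
        for (std::size_t i = 0; i < n; ++i) {
            double Ax = 0.0;
            for (std::size_t j = 0; j < n; ++j) Ax += A[i][j] * x[j];
            y[i] = x[i] - Ax / A[i][i];
        }
        double norm = 0.0;
        for (double v : y) norm += v * v;
        norm = std::sqrt(norm);
        if (norm == 0.0) return 0.0;        // starting vector happened to lie in the null space of G
        rho = norm;                          // ||G x|| with ||x|| = 1 tends to |lambda_max|
        for (std::size_t i = 0; i < n; ++i) x[i] = y[i] / norm;
    }
    return rho;   // smaller rho (stronger diagonal dominance) -> fewer Jacobi/GS-type iterations
}

A strongly diagonally dominant A gives a small rho, which is the "fewer iterations" part; it says nothing about the cost per iteration or per cell.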

#7 - February 21, 2022, 12:51 - aerosayan
Quote:
Originally Posted by LuckyTran
how diagonally dominant the matrix is

Hol' up. Even tri grids can be re-ordered to be diagonally dominant: https://blog.pointwise.com/2020/08/2...ulation-speed/

I suppose it's the same thing?

#8 - February 21, 2022, 13:10 - LuckyTran
Re-ordering a grid doesn't change its eigenvalues or spectral radius. The point is not to re-order the matrix to make it diagonally dominant. This is a no-brainer and happens in every respectable solver anyway (especially when you have to partition the mesh for parallel). The point is, given two matrices, which one is MORE diagonally dominant than the other? And that's where you need to get technical and look at eigenvalues and spectral radius.

So no, it's not the same thing. But you can quickly glance at two matrices and tell that one is more diagonally dominant than the other.
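The "re-ordering doesn't change the eigenvalues" part can be written in one line: a cell renumbering is a symmetric permutation of the matrix, i.e. a similarity transform, so

A' = P A P^{T}, \qquad \det(A' - \lambda I) = \det\big(P (A - \lambda I) P^{T}\big) = \det(A - \lambda I) \ \Rightarrow\ \rho(A') = \rho(A).

What re-ordering does change is the bandwidth and the cache behavior, which is presumably what the linked Pointwise post is about.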

#9 - February 21, 2022, 13:44 - sbaffini
Quote:
Originally Posted by aerosayan
I would most likely write a generic solver that works with only quad + tri cells. No fancy poly/hexa cells. [...]
Just to be sure we understand each other: hexa is the 3D equivalent of a quad in 2D, a 6-face hexahedron, i.e. a box. I wouldn't really put that in any relation with polyhedral cells.

#10 - February 22, 2022, 05:57 - aerosayan
Quote:
Originally Posted by LuckyTran
Re-ordering a grid doesn't change its eigenvalues or spectral radius. The point is not to re-order the matrix to make it diagonally dominant. This is a no-brainer and happens in every respectable solver anyway (especially when you have to partition the mesh for parallel). The point is, given two matrices, which one is MORE diagonally dominant than the other? And that's where you need to get technical and look at eigenvalues and spectral radius.

So no, it's not the same thing. But you can quickly glance at two matrices and tell that one is more diagonal than another.
So can we combine multiple tri cells together to form a jagged poly cell, and benefit from the same performance optimizations?

Or better, what if we combine cells from the Voronoi diagram of the tri grid?

Also, since the Voronoi diagram is essentially the dual grid used in node-centered tri-grid solvers, wouldn't they naturally benefit from the same properties as the poly grids?

After all, the poly grids are created from the dual grid of some initial tri grid.
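If anyone wants to play with the combining idea, a crude greedy agglomeration of tri cells into "jagged poly" cells (the same trick AMG-style coarsening uses) could look like the sketch below; this is hypothetical, not how Star-CCM or Fluent actually build their polyhedral grids:

Code:
// Crude greedy agglomeration: each unassigned cell grabs its unassigned
// face-neighbors to form one coarse "poly" cell. Hypothetical sketch only.
#include <cstdint>
#include <vector>

// neighbors[3*c + k]: k-th face-neighbor of tri cell c, or -1 on a boundary.
std::vector<int32_t> agglomerate(const std::vector<int32_t>& neighbors)
{
    const int32_t n = static_cast<int32_t>(neighbors.size() / 3);
    std::vector<int32_t> coarseId(n, -1);
    int32_t nCoarse = 0;

    for (int32_t c = 0; c < n; ++c) {
        if (coarseId[c] != -1) continue;          // already absorbed by a neighbor
        coarseId[c] = nCoarse;
        for (int k = 0; k < 3; ++k) {             // absorb any free face-neighbors
            const int32_t nb = neighbors[3 * c + k];
            if (nb >= 0 && coarseId[nb] == -1)
                coarseId[nb] = nCoarse;
        }
        ++nCoarse;
    }
    return coarseId;   // coarseId[c] = index of the merged cell containing tri c
}

Whether the merged cells then behave like proper polyhedra depends on how you rebuild faces, centroids, and gradients for them, which is the hard part.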

#11 - February 22, 2022, 11:57 - LuckyTran
Tetrahedral grids are generated from Voronoi diagrams (or, more specifically, a Delaunay triangulation). I don't follow how merging cells from a Voronoi diagram of a grid that was itself generated from a Voronoi diagram is different from simply merging cells of the same grid. It sounds like a very cumbersome and redundant way of saying that polyhedral grids are generated from merged tetrahedral cells.

I don't follow why we need to talk about what-ifs when that is exactly how polyhedral grids are generated today in Star-CCM, in Fluent, etc.

Having a higher spectral radius isn't the solution to all your problems either. You know what happens if your convergence accelerator is doing too much accelerating and is too aggressive... it blows up. There's a tradeoff game you have to play for each and every case. And we haven't even gotten into whether you are using a segregated or a coupled solver, because those equations naturally have their own characteristics which make them better or worse suited to a larger or smaller spectral radius.

#12 - February 22, 2022, 14:27 - arjun
Let me point out a few things from the flow solver's point of view:

1. The major impact of unstructured meshes is on the gradients.

2. Errors in the gradients affect:

a. convection terms
b. diffusion terms
c. turbulence production
d. turbulent viscosity
etc., etc.

3. The major sources of error in the gradients are:

a. skew dependence
b. wrong limiting, or no limiting, of gradients

4. The skew in the mesh also affects the flux dissipation, and as the skew increases the coupling of pressure and velocity weakens.


NOTE: The skew-related problems in the gradients are mostly under control in third-order solvers thanks to the larger stencil. Second-order gradients, once calculated from a larger stencil, would also be much more accurate if one is after accuracy.
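To connect point 3a to something concrete: a plain Green-Gauss gradient interpolates the face value at the midpoint of the line between the two cell centroids, and on skewed cells that point is not the face centroid, which is where the error comes from. A hedged 2D sketch with the usual first-order skewness correction (all names are illustrative, not from any particular code):

Code:
// 2D Green-Gauss cell gradients with an optional skewness correction.
// The correction shifts the interpolated face value from the midpoint of the
// P-N segment to the actual face centroid using the previous gradient estimate,
// which is why it is usually applied iteratively. Illustrative sketch only.
#include <array>
#include <vector>

using Vec2 = std::array<double, 2>;

struct Face {
    int P, N;          // owner / neighbor cell ids (interior faces only here)
    Vec2 Sf;           // face area (edge-length) vector, pointing from P to N
    Vec2 centroid;     // face centroid
};

struct GGMesh {
    std::vector<Face> faces;
    std::vector<double> volume;        // cell areas in 2D
    std::vector<Vec2> cellCentroid;
};

std::vector<Vec2> greenGaussGradient(const GGMesh& m, const std::vector<double>& phi,
                                     const std::vector<Vec2>* prevGrad = nullptr)
{
    std::vector<Vec2> grad(phi.size(), Vec2{0.0, 0.0});
    for (const Face& f : m.faces) {
        // Plain linear interpolation at the midpoint of the P-N segment.
        double phiF = 0.5 * (phi[f.P] + phi[f.N]);
        if (prevGrad) {
            // Skewness correction: move the value to the face centroid using
            // the face-averaged gradient from the previous pass.
            Vec2 gF  = { 0.5 * ((*prevGrad)[f.P][0] + (*prevGrad)[f.N][0]),
                         0.5 * ((*prevGrad)[f.P][1] + (*prevGrad)[f.N][1]) };
            Vec2 mid = { 0.5 * (m.cellCentroid[f.P][0] + m.cellCentroid[f.N][0]),
                         0.5 * (m.cellCentroid[f.P][1] + m.cellCentroid[f.N][1]) };
            phiF += gF[0] * (f.centroid[0] - mid[0]) + gF[1] * (f.centroid[1] - mid[1]);
        }
        for (int d = 0; d < 2; ++d) {
            grad[f.P][d] += phiF * f.Sf[d] / m.volume[f.P];
            grad[f.N][d] -= phiF * f.Sf[d] / m.volume[f.N];
        }
    }
    return grad;   // call once with prevGrad = nullptr, then re-call with the result
}

In practice the corrected version is applied once or twice iteratively, feeding the previous gradients back in; least-squares gradients avoid the skewness term but have their own well-known issues on highly stretched meshes.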

#13 - February 22, 2022, 15:38 - FMDenaro (Filippo Maria Denaro)
Quote:
Originally Posted by arjun
Let me point out a few things from the flow solver's point of view: [...]

Well, I want to spend just a few words, having worked a long time ago on triangular unstructured grids and the FV method.
The best way to use triangles (or tetrahedra) is to look at the FE way of introducing the shape functions on a Lagrangian simplex. This way, the function is fully defined within the element and you can accurately represent the gradient. For example, I worked with quadratic 6-node elements.
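For reference, the quadratic 6-node triangle has, in barycentric (area) coordinates \lambda_1, \lambda_2, \lambda_3,

N_i = \lambda_i (2\lambda_i - 1) \ \text{(vertex nodes)}, \qquad N_{ij} = 4\,\lambda_i \lambda_j \ \text{(mid-edge nodes)},

so the interpolated field \varphi_h = \sum_a \varphi_a N_a has a gradient that varies linearly inside each element and is exact for quadratic fields.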

The convective and pressure terms need no gradient when the Gauss theorem is adopted; only the diffusive flux does (and, if present, an eddy-viscosity term).


As far as the spectral radius is concerned, one should look at the spectral radius of the resulting iteration matrix (different from the spectral radius of the original matrix of the system) for the convergence of an iterative method. Is that the question?
