Survey: Mesh Dependency

Post #1 by fluid23 (Matt), Senior Member, February 23, 2017, 16:51
Greetings All,
I was hoping to get some feedback from the community on how they approach mesh dependency studies. What mesh metrics do you target when refining and to what threshold do you subject your figures of merit?

For example: I typically try to target a 5% increase in cell count and look for less than 3% change in my figures of merit.

Would love to hear back from as many people as possible on this.

Post #2 by lcarasik (Lane Carasik), Senior Member, February 24, 2017, 11:28
Quote (originally posted by MBdonCFD):
Greetings All,
I was hoping to get some feedback from the community on how they approach mesh dependency studies. What mesh metrics do you target when refining and to what threshold do you subject your figures of merit?

For example: I typically try to target a 5% increase in cell count and look for less than 3% change in my figures of merit.

Would love to hear back from as many people as possible on this.
Are you defining mesh dependency as mesh convergence (i.e., that the mesh is sufficient for further use)? Or as determining the numerical uncertainty associated with the simulation values?

Post #3 by fluid23 (Matt), Senior Member, February 24, 2017, 11:58
Independence of the figures of merit with respect to changes in mesh refinement level. I suppose you could call it mesh convergence; I have always heard it referred to as mesh dependency. It is a common practice.

Post #4 by lcarasik (Lane Carasik), Senior Member, February 24, 2017, 13:22
Quote (originally posted by MBdonCFD):
Independence of the figures of merit with respect to changes in mesh refinement level. I suppose you could call it mesh convergence; I have always heard it referred to as mesh dependency. It is a common practice.
There are things that are considered "common practice" in one sub-field of CFD, but the terminology is not necessarily consistent across the entirety of CFD. That has been my experience in both industrial and research settings.

It is important to provide a definition of the terminology used for a discussion such as this.

Post #5 by LuckyTran (Lucky), Senior Member, Orlando, FL USA, February 25, 2017, 02:52
I find Patrick Roache's several papers and book on the grid convergence index (GCI) extremely useful. It is more systematic and gives a common way to interpret the results of a mesh study. It has a nice theoretical basis and is strongly linked with uncertainty analysis. You can easily find it via Google; a few NASA sites also contain most of the information you need. Essentially, the GCI is an uncertainty estimate with a factor of safety. I highly recommend GCI or a better method. GCI is intended to be applied to your sought-after parameter, which is usually a bulk or volumetric quantity. But often you want to compare detailed local distributions; in those cases, I use a localized GCI, a GCI at every cell face, and compute uncertainty fields.
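
To make that concrete, here is a rough Python sketch of the textbook three-grid GCI calculation (observed order, Richardson extrapolation, and the fine-grid GCI with the usual 1.25 factor of safety). The function name, the fixed refinement ratio, and the example numbers are made up for illustration only:

Code:
import math

def gci_three_grids(f_fine, f_med, f_coarse, r=2.0, Fs=1.25):
    """Grid Convergence Index from three solutions on systematically
    refined grids (constant refinement ratio r), following Roache.
    Returns the observed order p, the Richardson-extrapolated value,
    and the fine-grid GCI as a fraction of the fine-grid solution."""
    # Observed order of convergence (assumes monotonic convergence)
    p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
    # Richardson extrapolation toward h -> 0
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)
    # Fine-grid GCI: relative error estimate with a factor of safety
    eps = abs((f_med - f_fine) / f_fine)
    gci_fine = Fs * eps / (r**p - 1.0)
    return p, f_exact, gci_fine

# Made-up example: a drag coefficient from fine/medium/coarse grids
p, f_ext, gci = gci_three_grids(0.3412, 0.3424, 0.3472)
print(f"observed order p = {p:.2f}, extrapolated value = {f_ext:.4f}, GCI = {100 * gci:.2f}%")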

I treat the results of a grid convergence study the same way I treat my experiments. Essentially it is my error bar that I apply to all my results and figures.

One of the things people don't check anymore is the order of convergence of their problem, which you can really only check by going to much denser grids. Much denser in this case means at least halving the cell widths in every dimension, which usually results in 8x more cells for 3D problems (2*2*2=8). Even in industry, we recognize the importance of doing these checks and do them whenever we can. When we can't run 8x more cells, we're not afraid to go 8x coarser! If we can't afford to run 8x more cells (because the mesh is already 100 million cells) then we'll settle for 2x more cells. But I would never consider a change of less than 2x.
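
As a quick sanity check on what a change in cell count actually buys you in resolution, here is a tiny sketch (assuming a roughly uniform 3D mesh, so the cell width scales as N^(-1/3)):

Code:
# Effective per-direction refinement ratio implied by a change in total cell count,
# assuming a roughly uniform 3D mesh (cell width h ~ N**(-1/3)).
def refinement_ratio_3d(n_coarse, n_fine):
    return (n_fine / n_coarse) ** (1.0 / 3.0)

print(refinement_ratio_3d(1.0, 8.0))  # 2.0   -> cell widths halved
print(refinement_ratio_3d(1.0, 2.0))  # ~1.26 -> only ~26% finer in each direction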

We do these studies for two things: (1) to get the uncertainty, which pops out via the GCI, and (2) to determine what mesh resolution will work for that type of problem and then apply that resolution to other problems. For example: I need # cells across this feature in my domain. Once you understand the mesh resolution required for your problem, you can quickly do many other problems without worrying about whether your grid is fine enough. Of course, your new simulations will always have some sensitivity to the mesh, which is the same error that you would always get from the GCI.

Quote (originally posted by MBdonCFD):
For example: I typically try to target a 5% increase in cell count and look for less than 3% change in my figures of merit.
People these days are really good at running meshes with all the same resolution and showing the result didn't change. Of course the result didn't change; your grid didn't change. A 5% increase in cell count isn't even statistically significant. A 5% increase in cell count is like changing my cell sizes from 1 mm to about 0.98 mm. Furthermore, the 3% change is only meaningful if you interpret it correctly: it's a 3% change relative to some percentage change in your grid. If it changed 3% after a 5% change, then after an 800% change it will change by 480%! Anything less than doubling the cell count is, to me, not a mesh study but just remeshing. Your mesh resolution needs to change significantly for the study to make any sense.
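
For what it's worth, the same cell-count-to-cell-size scaling makes the 5% point explicit (again assuming a roughly uniform 3D mesh):

Code:
# Relative cell size after a 5% increase in cell count on a uniform 3D mesh
h_ratio = (1.0 / 1.05) ** (1.0 / 3.0)
print(f"h_new / h_old = {h_ratio:.3f}")  # ~0.984, i.e. cells shrink by only ~1.6%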

Quote (originally posted by lcarasik):
Are you defining mesh dependency as mesh convergence (i.e., that the mesh is sufficient for further use)? Or as determining the numerical uncertainty associated with the simulation values?
With regards to the mesh, those mean the same thing. Numerical simulations, just like experiments, always have uncertainties in the measured variables that are caused by the mesh. Maybe mesh sensitivity is a better way of saying it? It is a pedantic argument. We call it many things, but everyone understands that eventually you need to change your mesh, rerun with all other settings the same, and see whether you still get the same result.

Post #6 by lcarasik (Lane Carasik), Senior Member, February 25, 2017, 03:21
Quote (originally posted by LuckyTran):

With regards to the mesh, those mean the same thing. Numerical simulations, just like experiments, always have uncertainties in the measured variables that are caused by the mesh. Maybe mesh sensitivity is a better way of saying it? It is a pedantic argument. We call it many things, but everyone understands that eventually you need to change your mesh, rerun with all other settings the same, and see whether you still get the same result.
I view mesh convergence and mesh sensitivity as two different things. Mesh convergence would be defined as reaching a mesh density that is sufficiently refined that no discernible change is found with additional levels of refinement. Mesh sensitivity would be defined as the numerical uncertainty associated with your meshing.

Frankly, I find GCI and its variants to be somewhat restrictive in formulation. The calculation of the observed order of accuracy causes issues with making GCI work properly. In general, I feel the community should try to determine a better means of developing uncertainty with respect to meshing.

I would also argue there is very little consensus on how to appropriately do mesh convergence studies and mesh sensitivity studies. In my opinion, GCI is not a means to determine that your mesh is converged, just a means of trying to develop uncertainty bands.

Post #7 by LuckyTran (Lucky), Senior Member, Orlando, FL USA, February 25, 2017, 14:19
Quote (originally posted by lcarasik):
I view mesh convergence and mesh sensitivity as two different things. Mesh convergence would be defined as reaching a mesh density that is sufficiently refined that no discernible change is found with additional levels of refinement. Mesh sensitivity would be defined as the numerical uncertainty associated with your meshing.
Okay I follow you better now. So let's say I run a bunch of grids and plot the result versus some grid metric (only the result, not the error bars). This plot would give me the mesh convergence. I would look at it and see if the solution is asymptotic, etc. Then I plot the error bars and that's the mesh sensitivity.
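
Roughly what I mean, as a sketch (made-up numbers; matplotlib is just one way to plot the figure of merit against a grid metric with GCI-style error bars):

Code:
import matplotlib.pyplot as plt

# Made-up figures of merit from four grids, with GCI-style uncertainties
n_cells = [0.5e6, 1e6, 4e6, 16e6]         # total cells per grid
fom     = [0.352, 0.347, 0.343, 0.342]    # figure of merit (e.g. a drag coefficient)
gci     = [0.010, 0.006, 0.002, 0.001]    # absolute uncertainty per grid

h = [n ** (-1.0 / 3.0) for n in n_cells]  # representative cell size ~ N^(-1/3)
plt.errorbar(h, fom, yerr=gci, fmt="o-", capsize=3)
plt.xlabel("representative cell size h ~ N^(-1/3)")
plt.ylabel("figure of merit")
plt.title("Mesh convergence (trend) and mesh sensitivity (error bars)")
plt.show()

The trend of the points is the mesh convergence; the error bars are the mesh sensitivity.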

Imagine if you ran only one grid, and then you somehow magically knew: ah-hah, my result is converged within this much and the error is this much. Ideally you would be able to get your mesh sensitivity without needing to run different grids. That would be convenient. However, for now we are limited to an experimentalist approach: make a new grid, re-run, compare. So I'll just call it a mesh study. I generally don't look for grid convergence because of the following point...

The problem of calculating the order of convergence is more complicated than Roache's write-up suggests. If you follow the GCI method exactly as written by Roache, you will have some issues. By the way, if anyone is unfamiliar with this issue, just try calculating the GCI when you have any N1, N2, N3 and the results are 1, 1, 1. The method fails, even though the result is numerically exact. Then try 1.00, 0.99, 1.01: because the convergence is not monotonic, GCI has issues. Now compare that to a method that gives you the sequence 1.01, 1.0011, 1.03. GCI looks really nice for this case, because it converges nicely, but you as the user aren't aware that it converged to 1.01 instead of 1.0. Still, these are not bad problems to have.
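
To see those failure modes concretely, here is a small sketch of the textbook three-grid observed-order estimate (illustrative only; solutions are listed fine, medium, coarse):

Code:
import math

def observed_order(f_fine, f_med, f_coarse, r=2.0):
    """Observed order of convergence from three solutions on grids
    refined by a constant ratio r (standard Richardson-type estimate)."""
    ratio = (f_coarse - f_med) / (f_med - f_fine)  # ZeroDivisionError if f_med == f_fine
    return math.log(ratio) / math.log(r)           # ValueError (math domain) if ratio <= 0

# Grid-independent result: identical answers on all three grids -> division by zero
# observed_order(1.00, 1.00, 1.00)
# Oscillatory (non-monotonic) convergence -> log of a negative ratio
# observed_order(1.00, 0.99, 1.01)

Any practical implementation has to special-case these situations before it can report a meaningful uncertainty.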

The bigger problem is that most people do not run enough grids, with large enough variations, to see any apparent order of convergence. I would love to see a more robust method. But I look around and so many CFD papers do not even mention any type of mesh convergence or mesh sensitivity study being performed.

Before we are able to agree on a 95% confidence interval, we have to get people to do any interval studies at all first. =( We also have to accept that there are always people without the proper tools to do everything perfectly. E.g., in engineering it is common to assume all errors follow a Gaussian distribution. To me, the GCI is similar to this approach: it is simple enough that you don't need to really understand what it is to be able to use it, but it gives you a reasonable uncertainty that other people will be able to use to judge your work. That is, to me, the GCI is great as a minimum standard. You must do at least the GCI, or something better! The people that can do better generally know better, and you don't need to worry about them.

Last edited by LuckyTran; February 25, 2017 at 17:22.

Post #8 by FMDenaro (Filippo Maria Denaro), Senior Member, February 25, 2017, 15:42
I find the discussion interesting.
However, my view is that the study of the order of convergence belongs to the development of one's own code, where one uses analytical test cases to verify that the code has no bugs. The key is to verify that the local truncation error vanishes asymptotically as expected from the order of the discretization.
In engineering/industrial applications, one assumes that the code will do what one expects, and the goal is to determine the sensitivity of the solution (for a complex problem) to the mesh sizes. Unfortunately, such analysis is often done using, for example, RANS modelling, whose error is much more relevant in magnitude than the local truncation error. So you would get an erroneous interpretation of the mesh sensitivity.
Another critical case is solutions with shocks.

I am not sure we can establish in a unique way how to address such issues...

Post #9 by LuckyTran (Lucky), Senior Member, Orlando, FL USA, February 25, 2017, 17:45
Quote (originally posted by FMDenaro):
In engineering/industrial applications, one assumes that the code will do what one expects, and the goal is to determine the sensitivity of the solution (for a complex problem) to the mesh sizes. Unfortunately, such analysis is often done using, for example, RANS modelling, whose error is much more relevant in magnitude than the local truncation error. So you would get an erroneous interpretation of the mesh sensitivity.
In the case of engineering problems, one is not interested in figuring out or trying to prove that the only error is the truncation error. One assumes all mesh errors are present, whatever they are. One then changes the mesh and watches the errors change. The interpretation is erroneous only if you think they are truncation errors. But many other errors may be present that are much more relevant, as you say. They are still errors caused by meshes. They are discretization errors, even if they are not truncation errors.

That's why it's still important to look for an order of convergence of your solution. You would expect that it converges at least linearly or better. So when you do your mesh study, you should vary the discretization (the mesh) enough to be able to find an order of convergence.

If you make small enough steps, everything will look linear (because the truncation terms are small). The idea is to make very large steps so that truncation errors become large so that you can easily tell, ah-hah, there is a difference (or not)!

Post #10 by FMDenaro (Filippo Maria Denaro), Senior Member, February 25, 2017, 18:09
Well, I am not sure about a general rule for all cases... Imagine LES and RANS: the former will tend to DNS as the grid is progressively refined, while the latter will no longer depend on the grid size once the magnitude of the local truncation error becomes smaller than that of the modelling error...

Post #11 by LuckyTran (Lucky), Senior Member, Orlando, FL USA, February 25, 2017, 18:19
Quote (originally posted by FMDenaro):
Well, I am not sure about a general rule for all cases... Imagine LES and RANS: the former will tend to DNS as the grid is progressively refined, while the latter will no longer depend on the grid size once the magnitude of the local truncation error becomes smaller than that of the modelling error...
Well, that's because you change your filter width. If you change the grid and keep the same filter width, then the LES does not tend toward DNS and will behave like the RANS case: you keep decreasing your truncation error, but the modelling error (set by the filter width) dominates. Of course, no one really wants to do LES this way. The difference is that in LES you change your model (the filter width) as you update the grid, whereas in RANS there is no such knob.

But for whatever it is you are trying to measure with LES (the mean flow field, for example), as you refine the grid and model less and less, you want the same or a similar result even if the model is changing. Of course the small-scale information changes, but the small-scale content shouldn't have a significant influence on the large scales. We haven't talked about time-stepping, but a similar argument holds there. Do you get a different answer if you make your time step smaller? If so, then you are not converged with respect to the time step. If not, then you are.

Post #12 by FMDenaro (Filippo Maria Denaro), Senior Member, February 25, 2017, 18:43
Well, if you use an explicit filter in LES and fix the filter width, then you are right: you can refine the grid and look for a solution tending to the filtered solution, similarly to what happens in RANS. But, differently from LES, the model in RANS also strongly affects the large scales.
Concerning the time step, of course it has no relevance in RANS, while LES and DNS are performed using small values, on the order of the Kolmogorov time scale.