February 23, 2017, 16:51
Survey: Mesh Dependency

#1
Senior Member
Matt
Join Date: Aug 2014
Posts: 947
Rep Power: 18
Greetings All,
I was hoping to get some feedback from the community on how you approach mesh dependency studies. Which mesh metrics do you target when refining, and what change threshold do you apply to your figures of merit? For example, I typically target about a 5% increase in cell count per refinement step and look for less than a 3% change in my figures of merit. I would love to hear back from as many people as possible on this.
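For concreteness, the acceptance test described above (less than a 3% change in a figure of merit between refinements) can be sketched in a few lines; the drag-coefficient values are hypothetical, just to illustrate the check:

```python
def mesh_dependent(fom_coarse, fom_fine, tol=0.03):
    """Return True if the figure of merit changes by more than `tol`
    (3% here, per the criterion above) between two refinement levels."""
    change = abs(fom_fine - fom_coarse) / abs(fom_coarse)
    return change > tol

# Hypothetical drag coefficients on successive meshes
print(mesh_dependent(0.500, 0.512))  # 2.4% change -> False, acceptably mesh-independent
print(mesh_dependent(0.500, 0.525))  # 5.0% change -> True, refine further
```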
February 24, 2017, 11:28

#2
Senior Member
Lane Carasik
Join Date: Aug 2014
Posts: 692
Rep Power: 15
Quote:
February 24, 2017, 11:58 |
#3
Senior Member
Matt
Join Date: Aug 2014
Posts: 947
Rep Power: 18
Independence of the figures of merit with respect to changes in mesh refinement level. I suppose you could call it mesh convergence; I have always heard it referred to as mesh dependency. It is common practice.
February 24, 2017, 13:22

#4
Senior Member
Lane Carasik
Join Date: Aug 2014
Posts: 692
Rep Power: 15
Quote:
It is important to provide a definition of the terminology used for a discussion such as this.
February 25, 2017, 02:52

#5
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,763
Rep Power: 66
I find Patrick Roache's several papers and book on the grid convergence index (GCI) to be extremely useful. It's more systematic and gives a common way to interpret the results of a mesh study. It has a nice theoretical basis and is strongly linked with uncertainty analysis. You can easily find it via Google; a few NASA sites also contain most of the information that you need. Essentially, the GCI is an uncertainty estimate with a factor of safety. I highly recommend GCI or a better method. GCI is intended to be applied to your sought-after parameter, which is usually a bulk or volumetric quantity. But often you want to compare detailed local distributions. In those cases, I use a localized GCI, computed at every cell face, to build uncertainty fields.
I treat the results of a grid convergence study the same way I treat my experiments. Essentially, it is my error bar, and I apply it to all my results and figures. One of the things people don't check anymore is the order of convergence of their problem, which you can really only check by going to much denser grids. Much denser here means at least halving the cell widths in every dimension, which gives 8x more cells for 3D problems (2*2*2 = 8). Even in industry, we recognize the importance of doing these checks and do them whenever we can. When we can't run 8x more cells, we're not afraid to go 8x coarser! If we can't afford to run 8x more cells (because the mesh is already 100 million cells), then we'll settle for 2x more cells. But I would never consider a change of less than 2x. We do these studies for two things: (1) to get the uncertainty, which pops out via the GCI, and (2) to determine what mesh resolution works for that type of problem and then apply that resolution to other problems. For example: I need a certain number of cells across this feature in my domain. Once you understand the mesh resolution required for your problem, you can quickly do many other problems without worrying about whether your grid is fine enough. Of course, your new simulations will always have some sensitivity to the mesh, which is the same error that you would always get from the GCI.
Quote:
With regards to the mesh, those terms mean the same thing. Numerical simulations, just like experiments, always have uncertainties in the measured variables that are caused by the mesh. Maybe mesh sensitivity is a better way of saying it? It is a pedantic argument. We call it many things, but everyone understands that eventually you need to change your mesh, rerun with all other settings the same, and see if you still get the same result.
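A minimal sketch of the GCI computation described above, following Roache's standard three-grid formulas; the figure-of-merit values and refinement ratio are hypothetical, and it assumes monotonic convergence:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of convergence from solutions on a fine (f1),
    medium (f2), and coarse (f3) grid with constant refinement ratio r."""
    return math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

def gci_fine(f1, f2, r, p, fs=1.25):
    """Grid Convergence Index on the fine grid: the relative change
    between grids scaled by a factor of safety fs."""
    e21 = abs((f2 - f1) / f1)
    return fs * e21 / (r**p - 1.0)

# Hypothetical figure-of-merit values on three grids, cell widths halved each time (r = 2)
f1, f2, f3 = 0.500, 0.512, 0.560
p = observed_order(f1, f2, f3, r=2.0)  # observed order for this made-up data
u = gci_fine(f1, f2, r=2.0, p=p)       # fractional error bar to put on f1
print(p, u)
```

This is the bulk-quantity form; the "localized GCI" mentioned above would apply the same formulas cell by cell to build an uncertainty field.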
February 25, 2017, 03:21

#6
Senior Member
Lane Carasik
Join Date: Aug 2014
Posts: 692
Rep Power: 15
Quote:
Frankly, I find GCI and its variants to be somewhat restrictive in formulation. The calculation of the observed order of accuracy causes issues with making GCI work properly. In general, I feel the community should try to determine a better means of developing uncertainty estimates with respect to meshing. I would also argue there is very little consensus on how to appropriately do mesh convergence studies and mesh sensitivity studies. In my opinion, GCI is not a means to determine that your mesh is converged, just a means of trying to develop uncertainty bands.
February 25, 2017, 14:19

#7
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,763
Rep Power: 66
Quote:
Imagine if you ran only 1 grid, and then you somehow magically knew: ah-hah, my result is converged to within this much and the error is this much. Ideally you would be able to get your mesh sensitivity without needing to run different grids. That would be convenient. However, we are, so far, limited to an experimentalist approach. That is: make a new grid, re-run, compare. Okay, so I'll just call it a mesh study. I generally don't look for grid convergence because of the following point...

The problem of calculating the order of convergence is more complicated than Roache's writeup. If you follow the GCI method exactly as written by Roache, you will have some issues. By the way, if anyone is unfamiliar with this issue: just try calculating the GCI when you have any N1, N2, N3 and the results are 1, 1, 1. The method fails, even though the result is numerically exact. And then try 1.00, 0.99, 1.01. Because the convergence is not monotonic, GCI has issues. Now compare that to a case that gives you the sequence 1.01, 1.0011, 1.03. GCI looks really nice here, because it converges nicely. But you as the user aren't aware that it converged to 1.01 instead of 1.0. Still, these are not bad problems to have. The bigger problem is that most people do not run enough grids, with large enough variations, to see any apparent order of convergence.

I would love to see a more robust method. But I look around in so many CFD papers where people do not even mention any type of mesh convergence or mesh sensitivity study being performed. Before we can agree on a 95% confidence interval, we have to get people to do any interval studies at all. =( We also have to accept that there are always people without the proper tools to do everything perfectly. E.g., in engineering it is common to assume all errors follow a Gaussian distribution. To me, the GCI is similar to this approach: it is simple enough that you don't need to really understand what it is to be able to use it. But it gives you a reasonable uncertainty that other people will be able to use to judge your work. That is, to me, the GCI is great as a minimum standard. You must do at least the GCI, or something better! The people that can do better generally know better, and you don't need to worry about them.

Last edited by LuckyTran; February 25, 2017 at 17:22.
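The two failure modes mentioned above can be reproduced in a few lines; a sketch, assuming the standard three-grid observed-order formula, with guard clauses for the degenerate cases instead of returning garbage:

```python
import math

def observed_order(f1, f2, f3, r=2.0):
    """Observed order from fine/medium/coarse solutions; raises on the
    degenerate cases discussed above."""
    e32, e21 = f3 - f2, f2 - f1
    if e21 == 0.0 or e32 == 0.0:
        # Results 1, 1, 1: grids agree exactly -> log(0) / division by zero
        raise ZeroDivisionError("identical solutions; no order can be extracted")
    if e32 * e21 < 0.0:
        # Results 1.00, 0.99, 1.01: oscillatory (non-monotonic) convergence
        raise ValueError("oscillatory convergence; observed order is unreliable")
    return math.log(abs(e32) / abs(e21)) / math.log(r)

for seq in [(1.00, 1.00, 1.00), (1.00, 0.99, 1.01)]:
    try:
        observed_order(*seq)
    except (ZeroDivisionError, ValueError) as err:
        print(seq, "->", err)
```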
February 25, 2017, 15:42

#8
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73
I find this discussion interesting.
However, my view is that the study of the order of convergence belongs to the development of one's own code, where one uses analytical test cases to verify that the code has no bugs. The key is to verify that the local truncation error vanishes asymptotically as one expects from the order of the discretization. In engineering/industrial applications, one assumes that the code will do what one expects, and the goal is to determine the sensitivity of the solution (for a complex problem) to the mesh sizes. Unfortunately, such analysis is often done using, for example, RANS modelling, whose error is much larger in magnitude than the local truncation error. So you would get an erroneous interpretation of the mesh sensitivity. Another critical case is a solution with shocks. I am not sure we can establish a unique way to address these issues...
February 25, 2017, 17:45

#9
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,763
Rep Power: 66
Quote:
That's why it's still important to look for an order of convergence of your solution. You would expect it to converge at least linearly, or better. So when you do your mesh study, you should vary the discretization (the mesh) enough to be able to find an order of convergence. If you make small enough steps, everything will look linear (because the truncation terms are small). The idea is to make very large steps, so that the truncation errors become large, so that you can easily tell: ah-hah, there is a difference (or not)!
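The point about step size can be made numerically. Under a simple hypothetical error model f(h) = f_exact + C*h^p (a second-order scheme with made-up constants, not any particular solver), a ~5% refinement barely moves the answer, while halving the cell width produces a change large enough to expose the order:

```python
# Hypothetical error model for a second-order scheme: f(h) = f_exact + C*h**2
f_exact, C = 1.0, 0.1

def f(h):
    return f_exact + C * h**2

h = 1.0
small = abs(f(h / 1.05) - f(h)) / f(h)  # ~5% finer mesh: tiny change, easily lost in noise
big = abs(f(h / 2.0) - f(h)) / f(h)     # halved cell width: clearly visible change
print(small, big)
```

With these made-up constants the halved-width step changes the result roughly 8x more than the 5% step, which is why the large step makes the order of convergence visible.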
February 25, 2017, 18:09

#10
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73
Well, I am not sure about a general rule for all cases... Imagine LES and RANS: the former will tend to DNS as the grid is progressively refined, while the latter will no longer depend on the grid size once the magnitude of the local truncation error becomes smaller than that of the modelling...
February 25, 2017, 18:19

#11
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,763
Rep Power: 66
Quote:
But for whatever it is you are trying to measure with LES (the mean flow field, for example), as you refine the grid and model less and less, you want the same or a similar result even as the model contribution changes. Of course the small-scale information changes, but the small-scale content shouldn't have a significant influence on the large-scale result. We haven't talked about time-stepping, but a similar argument holds there. Do you get a different answer if you make your time-step smaller? If so, then you are not converged with respect to the time-step. If not, then you are converged.
February 25, 2017, 18:43

#12
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73
Well, if you use an explicit filter in LES and fix the filter width, then you are right: you can refine the grid and look for a solution tending to the filtered one, similarly to what happens in RANS. But, differently from LES, the model in RANS strongly affects the large scales too.
Concerning the time step: of course it has no relevance for (steady) RANS, while LES and DNS are performed using small values, of the order of the Kolmogorov time scale.
Tags: mesh dependency, survey