precision issues in multi-scale simulations |
|
February 15, 2022, 07:43 |
precision issues in multi-scale simulations
|
#1 |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Taking a rocket's combustion chamber as an example: the scales of the different fluid phenomena happening inside the chamber are orders of magnitude apart. The whole combustion chamber + nozzle can be 1 m long, while the combustion and viscous length scales are extremely small.
My question is: how do we represent this whole domain with a standard 64-bit floating point number? IIRC, some astrophysics codes simply use 128-bit floating point numbers because the range of scales is too large. But can't we do anything better?

If we use a quadtree grid to represent the whole domain, we can subdivide as much as we want, but the cells become so small that physical properties like position, cell area, cell volume, cell surface areas, etc. fall below machine epsilon for a 64-bit floating point number. Can't we logically scale the smaller cells up (make them 100, 1000, 10000 times larger), run the simulation, and then "somehow" combine the data back into the real values?

I think we can, since electrons are extremely small and quantum chemistry codes scale them up so the simulation can use 64-bit floating point numbers. I'm not 100% sure how they do it, but it involves some form of non-dimensionalization and different scaling factors by which they multiply their various parameters. Can we do something like this in CFD too? I don't know if someone has already done this. |
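For concreteness, a rough back-of-the-envelope sketch in Python; the 1 m chamber length is from the post, but the small scales are illustrative guesses, not measured values:
Code:
import sys

chamber_len = 1.0    # m, whole combustion chamber + nozzle (from the post)
flame_thick = 1e-5   # m, hypothetical reaction-zone thickness
kolmogorov  = 1e-6   # m, hypothetical smallest viscous scale

eps = sys.float_info.epsilon     # ~2.22e-16 for 64-bit floats
print(chamber_len / kolmogorov)  # 1e6: the scale ratio to cover
print(eps)                       # the smallest scale, at 1e-6 of the
                                 # chamber, is still ~10 orders above eps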
|
February 15, 2022, 09:44 |
|
#2 |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,747
Rep Power: 66 |
Notwithstanding that codes don't really have units (1 m in any code is really just a 1 with a flag that specifies the m), you can freely reinterpret 1 m as 1 banana, 1 monkey, or 1 elephant.
Floating point numbers have the form s*b^e. You can shift the radix point with the exponent e, and in extreme cases you can even change the base. Binary is base 2 and hexadecimal is base 16, but you could use just about any base. We use floating point numbers precisely for this reason; numbers whose point cannot move are called fixed-point numbers. |
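A small Python illustration of that decomposition; math.frexp splits a double into its mantissa and base-2 exponent:
Code:
import math

# x == m * 2**e with 0.5 <= |m| < 1: the exponent e moves the
# radix point, the mantissa m carries the significant digits
for x in (1.0, 1e-20, 6.5e6):
    m, e = math.frexp(x)
    print(f"{x:10.3e} = {m:.17f} * 2**{e}")
    assert x == math.ldexp(m, e)   # ldexp reassembles it exactly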
|
February 15, 2022, 10:44 |
|
#3 |
Senior Member
andy
Join Date: May 2009
Posts: 301
Rep Power: 18 |
Non-dimensionalisation of some form was normal for CFD codes in the first few decades, when most codes used 32-bit rather than 64-bit floating point numbers due to strong memory constraints and, to a lesser extent, low computational speeds. As memory constraints relaxed and memory buses grew wider, 64-bit floating point numbers became the default and non-dimensionalisation tended to get less attention, except in some special cases.
Getting a simulation code to perform well with low-precision floating point numbers is typically a time-consuming but fairly straightforward exercise. As well as non-dimensionalisation, it can involve rewriting terms and possibly changing solution variables, depending on what is producing a difference of large numbers. If your code is not misbehaving for your current cases it may not be a worthwhile exercise now, but getting some idea of the available approaches possibly is, for the future. |
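A toy example of the "changing solution variables" point, with invented numbers: carrying an absolute pressure spends the mantissa on a large reference value, while a gauge variable keeps the digits where they matter:
Code:
p_ref = 101325.0                  # Pa, ambient reference pressure

# two absolute pressures whose difference is far below p_ref's
# representable spacing (~1.5e-11 Pa at this magnitude):
p1_abs = p_ref + 1.2345678901e-6
p2_abs = p_ref + 1.2345678902e-6
print(p1_abs - p2_abs)            # 0.0: the difference is lost

# the same states stored as gauge pressures p - p_ref:
p1_g, p2_g = 1.2345678901e-6, 1.2345678902e-6
print(p1_g - p2_g)                # ~ -1e-16: the difference survives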
|
February 15, 2022, 10:53 |
|
#4 |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,747
Rep Power: 66 |
Non-dimensionalization doesn't affect the range of scales you need to simulate. Your biggest scale is still your biggest, your smallest is still your smallest, and the ratio between them is the same.
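A two-line sanity check (lengths invented): rescaling multiplies everything by the same factor, so the troublesome ratio is untouched:
Code:
L_big, L_small = 1.0, 1e-9                  # m, made-up extremes
scale = 1e4                                 # arbitrary scaling factor
print(L_big / L_small)                      # 1e9
print((L_big * scale) / (L_small * scale))  # still 1e9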
|
|
February 15, 2022, 13:15 |
|
#5 | |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Quote:
The problem occurs when the simulation contains cells that are both very large and very small. How do we deal with it then? If, say, a few thousand cells have volumes smaller than machine epsilon, I don't know whether we can use those values in mathematical equations. For example, if y is smaller than machine epsilon relative to z, then x = y + z is probably going to saturate to x = z (or x = z + machine_epsilon). This might be a problem, right? For example, if x, y, z represent point coordinates and the equation saturates, then two points essentially start coinciding with each other. That's a problem. I don't have the answer to this; I have to write some code and test it out. |
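A minimal sketch of the saturation described above; note that what matters is the size of y relative to z, not y on its own:
Code:
z = 1.0
y = 1e-17              # below z's relative epsilon (~2.2e-16)
print(z + y == z)      # True: y is absorbed, x collapses to z

print(1e-17 + 1e-18)   # ~1.1e-17: the same y is fine at its own
                       # scale, because the epsilon is relative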
||
February 15, 2022, 13:25 |
|
#6 | |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,849
Rep Power: 73 |
Quote:
But are you talking about a case where the normalized max cell size is O(1) and the smallest size is less than 10^-16? |
||
February 15, 2022, 13:41 |
|
#7 | |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Quote:
1/(2^64) is 5.42101086243e-20. That's achievable if a quadtree is subdivided 64 times and the root cell is 1 unit in length. I hope my maths is correct. Obviously the simulation won't work, because the cell sizes are so different.

I gave a wrong example before and caused confusion: the issue will show up in the position coordinates rather than in the cell volumes. If a point x1 is less than machine_epsilon distance from a point x2, then the two will most likely coincide, because we can't resolve the difference or distance between them. Such a situation might arise at the highest level of refinement; I'm guessing at level 51 or 52 of the quadtree, where the root cell is 1 unit in length. I'm not sure this arises in normal CFD cases, but astrophysics and computational metallurgy simulations seem to span such large scales. |
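A quick check of that level estimate (math.ulp needs Python 3.9+), assuming a unit root cell halved at every level:
Code:
import math

level = 0
while 0.5 ** level > math.ulp(1.0):   # ulp(1.0) = 2**-52 ~ 2.22e-16
    level += 1
print(level)   # 52: the cell size reaches the spacing of doubles near
               # x ~ 1, so neighboring coordinates start to collide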
||
February 15, 2022, 13:47 |
|
#8 | |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,849
Rep Power: 73 |
Quote:
|
||
February 15, 2022, 13:49 |
|
#9 | |
Senior Member
andy
Join Date: May 2009
Posts: 301
Rep Power: 18 |
Quote:
For your grid example, if you stored cell dimensions rather than grid coordinates, you would avoid the issue (assuming I have understood the example). |
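A sketch of that suggestion, assuming per-cell sizes are stored and an absolute position is only accumulated when actually needed:
Code:
import itertools

# cell sizes from 60 levels of halving: each one is stored exactly,
# since doubles keep full relative precision down to ~1e-308
sizes = [0.5 ** k for k in range(60)]
print(sizes[-1])                 # ~1.73e-18: no underflow in the sizes

# absolute face coordinates, built only on demand: near x ~ 2 the
# tiniest sizes are absorbed, the very problem described above
faces = list(itertools.accumulate(sizes))
print(faces[-1] == faces[-2])    # True: the last face has collided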
||
February 15, 2022, 13:54 |
|
#10 | |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Quote:
It probably won't be necessary for conventional CFD, but I might want to figure out whether it's possible, should the need arise. |
||
February 15, 2022, 14:06 |
|
#11 | |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,849
Rep Power: 73 |
Quote:
Quadruple precision can be adopted, but I think that is more of a topic for the specific field that studies round-off error propagation, for example in linear algebra computations. Something like this: https://archive.siam.org/meetings/la...s/hhasegaw.pdf |
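For illustration, a sketch of precision-dependent round-off using Python's decimal module as a stand-in for quadruple precision (native binary128 support is compiler- and platform-dependent):
Code:
from decimal import Decimal, getcontext

getcontext().prec = 34     # ~ binary128's 34 significant decimal digits

n = 10 ** 5
s64 = sum(0.1 for _ in range(n))               # binary64 accumulation
s34 = sum(Decimal("0.1") for _ in range(n))    # 34-digit accumulation

print(s64 - 10 ** 4)   # ~2e-8: accumulated binary64 round-off, since
                       # 0.1 is not representable in base 2
print(s34 - 10 ** 4)   # 0.0: exact here, as 0.1 is exact in base 10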
||
February 15, 2022, 14:50 |
|
#12 | |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,747
Rep Power: 66 |
Quote:
You could apply a gauge transformation: store and operate only on distance vectors between cells (the same way we do for gauge pressures) and keep track of a reference location somewhere so that you can recover absolute x, y, z coordinates. This reduces the range of values you need to work with and lets you stay precise.

Now, the reason we don't do this in CFD is that no CAD software works this way. CAD writes raw x, y, z coordinates into rather imprecise binary and ASCII files, which is actually the first place you'll encounter this problem: not in CFD, but in CAD. Good luck CAD'ing it in the first place. |
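A minimal sketch of that idea, with invented numbers: a coarse per-block reference origin plus fine local offsets, recombined only on export:
Code:
origin = 6.5e6                    # m, block reference location
local  = [1e-10, 2e-10, 3e-10]    # m, offsets, fully resolved near 0

print(local[1] - local[0])        # 1e-10: offsets stay distinct

# naive absolute coordinates collapse, because the spacing of
# doubles near 6.5e6 is ~9.3e-10 m, larger than the offsets:
absolute = [origin + d for d in local]
print(absolute[1] - absolute[0])  # 0.0: all three points coincide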
||
February 15, 2022, 17:44 |
|
#13 |
Senior Member
|
I think you know that floating point numbers are such that, given a representable number x, the closest larger representable number x+dx is such that dx/x is around the precision of the given floating point format (except close to the boundaries of the representable range), so around 1e-15 for double precision.
So, the question is: can you give a practical example where, on earth, this range is not enough? We can put the earth in a box of side 1.3e7 meters. If centered at 0, it covers from -6.5e6 to 6.5e6 meters in each direction. At a coordinate of 6.5e6, the smallest representable dx in double precision is approximately dx = 6.5e-9 m, 6.5 nanometers!!!

Also, if you have x = 1e-9 and y = 1e-15, then z = x*y = 1e-24 without any problem at all, because the exponent range is separate from the precision of the mantissa.

So, in the end, as mentioned by LuckyTran, the problem is indeed with the CAD, as most CAD packages use single precision. In my experience, the CAD problem is not something you have to watch with paranoia, because 6 orders of magnitude are still a lot, and alarm bells will certainly ring when you are dealing with something potentially troublesome.

EDIT: Of course, there will always be someone trying to set the origin of his system on the sun while pretending to work with millimeter-sized objects on earth (spoiler alert: you can't with double precision), but we have Darwin Awards for them and they don't deserve to use CFD. |
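Those spacings can be checked directly with math.ulp (Python 3.9+); the 1.5e11 m below is the sun-earth distance from the EDIT's scenario:
Code:
import math

print(math.ulp(6.5e6))    # ~9.3e-10 m: nanometer spacing at the edge
                          # of the earth-sized box, as claimed above
print(math.ulp(1.5e11))   # ~3.1e-5 m: only ~30 doubles per millimeter
                          # at sun-earth distance, hopeless for meshing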
|